This blog post is part of a series on SAP Data Warehouse Cloud and SAP HANA Cloud CI/CD. Before reviewing the details in this blog post, I recommend checking “SAP Data Warehouse Cloud, SAP HANA Cloud HDI CI/CD Automation Approach” for an overview of the use case scenario, toolset and concepts.

Introduction

This blog post looks into the implementation details of the automation pipelines for SAP Data Warehouse Cloud and SAP HANA Cloud HDI introduced in the earlier blog post “SAP Data Warehouse Cloud, SAP HANA Cloud HDI CI/CD Automation Approach.” I’ll do a code walkthrough of each pipeline and show the end-to-end automation of the scenario discussed in the overview blog post. Review the additional setup prerequisite topics towards the end of this blog post before trying this out on your system.

Recap

Before going into the details of each pipeline, let’s recap the pipeline flow and the transport landscape setup. Figure (a) depicts the transport landscape. The key point is that both the DEV and QA HDI containers live under the same subaccount (linked Cloud Foundry org) and space. This can be extended to a 3-system landscape with DEV, QA and PRD, with production on a separate SAP HANA Cloud tenant or a similar approach.


Figure (a) Transport Landscape setup

Figure (b) outlines the automation flow; two pipelines are linked to two separate Git repos for the HDI container and the SAP Data Warehouse Cloud artifacts. The flow can start from either the HDI container pipeline or the SAP Data Warehouse Cloud pipeline. If it starts with committing HDI container artifacts via VS Code or Business Application Studio, a webhook triggers the HDI pipeline to build, deploy, validate, and upload the MTA archives to SAP Cloud Transport Management, which then moves the archives through the landscape. If all the earlier steps are successful, it triggers the SAP Data Warehouse Cloud pipeline, which builds, deploys and validates the SAP Data Warehouse Cloud artifacts, deploying them into the QA space.

Figure (b) Automation flow

As mentioned at the bottom of Figure (a), all the artifacts are deployed on the same SAP HANA Cloud database tenant. A separate HDI container service and open SQL schema access are linked to each of the DEV and QA SAP Data Warehouse Cloud spaces. The SAP Data Warehouse Cloud artifacts in the open SQL schema can be accessed via the linked user-provided service in the HDI container, enabling SQL data warehousing. Even though our primary use case did not need to access or enhance SAP Data Warehouse Cloud artifacts in the HDI container, bi-directional access is possible, as outlined in many of the earlier SQL data warehousing blog posts, for example, “Access the SAP HANA Cloud database underneath SAP Data Warehouse Cloud.”

Pipeline 1 – SAP HANA Cloud HDI Container

For pipeline 1, I am using an existing SAP HANA Cloud Git repo leveraging the Cloud Application Programming Model (CAP). Using project “Piper,” we provide instructions for the build server within the CAP project. This is done using two artifacts, the Jenkinsfile and .pipeline/config.yml. You can create these two files manually in your project or generate them using the @sap/cds-dk command line interface (CLI) as below:

cds add pipeline

This is how the Jenkinsfile and config.yml look for the HDI container:

Jenkinsfile:

@Library('piper-lib-os') _
node() {
    stage('prepare') {
        deleteDir()
        checkout scm
        setupCommonPipelineEnvironment script: this,
        verbose: true
    }
    stage('build') {
        mtaBuild script: this,
        mtaBuildTool: 'cloudMbt',
        verbose: true               
    }
    stage('deploy') {
        cloudFoundryDeploy script: this,
        deployTool:'mtaDeployPlugin',
        verbose: true
    }
    stage('Validation') {
        npmExecuteScripts script: this,
        verbose: true
    }
    stage('tmsUpload') {
       tmsUpload script: this
    }
        
    stage('Trigger_DWC_Pipeline') {
      build 'HDACSM/dwc_cli_ctms/master'
    } 
}

.pipeline/config.yml

steps:
  ###  Stage Build 
  mtaBuild:
    buildTarget: 'CF'
  ###  Stage Deploy CF deploy
  cloudFoundryDeploy:
    deployTool: 'mtaDeployPlugin'
    deployType: 'standard'
    cloudFoundry:
      org: 'CF-ORG-ABC-E2E-DEMOS'
      space: 'ABC-E2E-DEMOS-DWC-SPACE'
      credentialsId: 'CF-CREDENTIALSID'
      database_id: XeeabcdX-abcd-481d-abcd-00b0417Xabcd
  ###  Stage Validation, 
  #Execute npm script 'test' to validate db artifacts.
  npmExecuteScripts:
    buildDescriptorExcludeList:
      - db/package.json
    runScripts: 
      - "test"
  ###  Stage tmsUpload, Trigger cTMS to move through landscape
  tmsUpload:
    credentialsId: 'BTP-TMS'
    nodeName: 'IMPORT_DEV'
    verbose: 'true'

As shown in the files above, the full coordination and sequencing of the pipeline is done using the Piper library. Details of what happens in the prepare, build, deploy and tmsUpload stages are explained in the blog post “SAP Business Technology Platform – integration aspects in the CI/CD approach“; please refer to the “Automation pipeline and end point” section. The only additions here are the validation stage and the triggering of pipeline 2. For the validation stage, I used unit test code similar to what Thomas Jung explained in “SAP HANA Basics For Developers: Part 12 Unit Tests”. I have adapted the code from the Git repo https://github.com/SAP-samples/hana-opensap-cloud-2020 to use gulp 4.0, since the earlier 3.9 version no longer works with recent Node.js versions. In the validation stage, we run a “SELECT * FROM” query on each of the views and tables in the HDI container to ensure nothing is broken after changes to the repo.
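The check is wired in through the “test” script of the package.json that npmExecuteScripts picks up (db/package.json is excluded above). As a minimal sketch of what such a check can look like, here is a plain mocha test using @sap/hana-client instead of the gulp setup from the referenced repo; the connection environment variables are illustrative and would come from the HDI container service key, and for brevity only views are queried (tables can be handled the same way).

test/smoke.test.js (illustrative):

const hana = require('@sap/hana-client');

// Illustrative connection settings; in the pipeline these come from the bound
// HDI container service key, not from hard-coded values.
const connOptions = {
    serverNode: process.env.HANA_HOST,   // host:443 of the SAP HANA Cloud tenant (assumption)
    uid: process.env.HANA_USER,
    pwd: process.env.HANA_PASSWORD,
    encrypt: 'true'
};
const SCHEMA = process.env.HANA_SCHEMA;  // e.g. SP_PROJECTDEV_DWC_HDI

describe('HDI container smoke test', function () {
    this.timeout(60000);
    const conn = hana.createConnection();

    before(done => conn.connect(connOptions, done));
    after(() => conn.disconnect());

    it('all views in the container schema are selectable', function (done) {
        conn.exec('SELECT VIEW_NAME FROM VIEWS WHERE SCHEMA_NAME = ?', [SCHEMA], (err, views) => {
            if (err) return done(err);
            // SELECT from every view; a broken dependency surfaces here as an SQL error
            Promise.all(views.map(v => new Promise((resolve, reject) =>
                conn.exec(`SELECT TOP 1 * FROM "${SCHEMA}"."${v.VIEW_NAME}"`, [],
                    e => e ? reject(e) : resolve())
            ))).then(() => done(), done);
        });
    });
});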

Now let’s look at the specific elements in the mta.yaml and mtaext files that help realize the landscape shown in figure (a). The tmsUpload step uploads the solution package to the import buffer of the SAP Cloud Transport Management service (cTMS) node. The MTA extension descriptor (mtaext) is uploaded to the nodes in the transport landscape of SAP Cloud Transport Management, which is how the QA configuration is applied.

  • Add the app-name under the deployer module, and the service name and schema name under the com.sap.xs.hdi-container resource parameters. Override the app-name, service name and schema name in the mtaext file:
    • app-name: HDIDEV -> HDIQA
    • service-name: SP_PROJECTDEV_DWC_S1 -> SP_PROJECTQA_DWC_S1
    • schema: SP_PROJECTDEV_DWC_HDI -> SP_PROJECT_QA_DWC_HDI
      • If you omit the schema, you end up with GUIDs as schema names. Functionally that is fine; however, explicit schema names make maintenance easier.
  • Under hdbgrants and the synonym configuration, use the key added under the SERVICE_REPLACEMENTS group rather than the actual UPS service name, so that the UPS service name can be overridden in the mtaext file:
    • service-name: UPS_SQL_SP_PROJECTDEV -> UPS_SQL_SP_PROJECTQA

mta.yaml

ID: MVP_SP
version: 0.0.1
modules:
# --------------------Deployer (side car)--------------
- name: db # deployer
# -----------------------------------------------------
  type: hdb
  path: db
  parameters:
    app-name: HDIDEV
  requires:
  - name: HDI_SPDEV # Depends on the HDI Container
    properties:
      TARGET_CONTAINER: ~{hdi-container-name}
  - name: DWC.UPS_SQL_SP_PROJECTDEV  
    group: SERVICE_REPLACEMENTS
    properties:
      key: ups_schema_access
      service: '~{ups_sql_sp}'
resources:
# --------------------HDI Container--------------
- name: HDI_SPDEV
# -----------------------------------------------
  type: com.sap.xs.hdi-container
  parameters:
    config:
      database_id: Xee99a70-abcd-481d-abcd-00b0417Xabcd # to deploy against DWC HC
      schema: SP_PROJECTDEV_DWC_HDI
    service-name: SP_PROJECTDEV_DWC_S1
  properties:
    hdi-container-name: ${service-name}
# --------------------UPS user provided service --------------
# The UPS must be created first in the CF space in which this HDI shared service gets created.
# Credentials come from DWC Space Management. You use this service to access DWC views.
- name: DWC.UPS_SQL_SP_PROJECTDEV 
# # --------------------------------------------------------------
  type: org.cloudfoundry.existing-service
  parameters:
    service-name: UPS_SQL_SP_PROJECTDEV
  properties:
    ups_sql_sp: ${service-name}

MTA Extension File .mtaext

_schema-version: "3.1"
ID: MVP_SP.config.first
extends: MVP_SP

modules:
- name: db
  type: hdb
  path: db
  parameters:
    app-name: HDIQA
resources:
  - name: HDI_SPDEV
    type: com.sap.xs.hdi-container
    parameters:
      service: hana
      service-plan: hdi-shared
      config:
        database_id: Xee99a70-abcd-481d-abcd-00b0417Xabcd
        schema: SP_PROJECT_QA_DWC_HDI
      service-name: SP_PROJECTQA_DWC_S1
    properties:
      hdi-container-name: ${service-name}
  - name: DWC.UPS_SQL_SP_PROJECTDEV 
    type: org.cloudfoundry.existing-service
    parameters:
        service-name: UPS_SQL_SP_PROJECTQA

Pipeline 2 – SAP Data Warehouse Cloud Pipeline

The SAP Data Warehouse Cloud pipeline Jenkinsfile and config.yml are shown below. The prepare step is used to check out code from source control and initialize the Piper commonPipelineEnvironment. The build and deploy steps call the Build.js and Deploy.js Node.js files, respectively. The parameters for the build and deploy steps come from config.yml, except for the SAP Data Warehouse Cloud login credential, which is stored as a secret in Jenkins and passed using the withCredentials module. This masks the credentials even in the build server logs. As shown in the Dockerfile code below, a custom Docker image is used to ensure all the dependencies are met, and Build.js is called inside the Docker container. Please refer to the comments inside the Dockerfile on how to build the Docker image.

Jenkinsfile:

@Library('piper-lib-os') _

node() {

    stage('prepare') {
        deleteDir()
        checkout scm
        setupCommonPipelineEnvironment script: this,
        verbose: true
    }

    stage('build') {
        withCredentials([
        usernamePassword(credentialsId: "DWC_CredentialsID",
        usernameVariable: 'DWC_USER',
        passwordVariable: 'DWC_PASS')
        ])
        {
            dockerExecute(
                script: this, 
                dockerImage: 'vishwagi/puppeteer-dwc-node-docker:latest',
                dockerEnvVars: ['DWC_PASS':'$DWC_PASS','DWC_USER':'$DWC_USER',])
                {
                sh 'node Build.js';
                }
                verbose: true    
        }            
    }

    stage('deploy') {

        withCredentials([
        usernamePassword(credentialsId: "DWC_CredentialsID",
        usernameVariable: 'DWC_USER',
        passwordVariable: 'DWC_PASS')
        ])
        {
            dockerExecute(
                script: this, 
                dockerImage: 'vishwagi/puppeteer-dwc-node-docker:latest',
                dockerEnvVars: ['DWC_PASS':'$DWC_PASS','DWC_USER':'$DWC_USER',])
                {
                sh 'node Deploy.js';
                }
                verbose: true    
        }
    }
    stage('Validation') {
        npmExecuteScripts script: this,
        verbose: true
    }

}

.pipeline/config.yml

steps:
  ###  Stage Build and Deploy set env variables
  dockerExecute:
    dockerEnvVars:
      DWC_URL: 'https://dwc-ab-abcd.eu10.hcs.cloud.sap/'
      DWC_PASSCODE_URL: 'https://dwc-ab-abcd.authentication.eu10.hana.ondemand.com/passcode'
      HDIDEV: 'SP_PROJECTDEV_DWC_HDI'
      HDIQA: 'SP_PROJECT_QA_DWC_HDI'
      SPACE: 'SP_PROJECTDEV'
      SPACEQA: 'SP_PROJECTQA'
      LABELQA: 'DWC_QA'
      ENTITIES: ''
      SPACE_DEFINITION_FILE: 'SP_PROJECTDEV.json'
      NEW_SPACE_DEFINITION_FILE:  'SP_PROJECTQA.json'

  ###  Stage Validation, Execute npm script 'test' to validate db artifacts.
  npmExecuteScripts:
    buildDescriptorList:
      - srv/package.json
    runScripts: 
      - "test"

Dockerfile:

FROM geekykaran/headless-chrome-node-docker:latest

LABEL version="1.0"
LABEL author = "Vishwa Gopalkrishna"

RUN apt update && \
    apt upgrade -y

RUN npm cache clean -f && \
    npm install n -g && \
    n stable
ADD package.json package-lock.json /

# The steps below are to enhance the Docker image;
# otherwise the image from Docker Hub can be used as is.
# Open a terminal in the same folder as the Dockerfile and run the commands below.
# Command #1 creates the package.json file:
# npm init --yes 

# Command #2 installs the dependencies; these get written to the package.json file:
# npm install @sap/dwc-cli fs-extra puppeteer path
# Now if you check package.json and package-lock.json you should see the dependency list.

RUN npm install

# #3 Build command
# docker build -t vishwagi/puppeteer-dwc-node-docker:latest .


# Version 1.0 of the image has the packages below
# ***IMPORTANT: other @sap/dwc-cli versions may require changes to Build.js
#        "@sap/dwc-cli": "^2022.14.0",
#        "fs-extra": "^10.1.0",
#        "path": "^0.12.7",
#        "puppeteer": "^15.3.0"

Build.js and Deploy.js are Node.js files wrapped around @sap/dwc-cli commands. Both modules use a headless Chromium browser (puppeteer) for automated passcode retrieval. Please refer to Jascha Kanngiesser’s dwc-cli blog post explaining the passcode retrieval details. With SAP Data Warehouse Cloud’s latest version, there is support for OAuth authentication, which should simplify Build.js even further. I’ll write a follow-on blog post updating the Build and Deploy JS files with OAuth authentication, so keep an eye out for updates here.

Functionality-wise, Build.js downloads the DEV space entities to a file and parses it to translate them into QA space entities, changing the relevant parameters such as the label, the mapped HDI container name, the DB user, etc. Deploy.js then updates/creates the QA space with the appropriate entity changes. The parameters from config.yml and the secrets are retrieved as environment variables.
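To troubleshoot the build step outside Jenkins, the same container image can be run locally. This is a sketch only, assuming the public image above and passing the same variables that config.yml and withCredentials set in the pipeline (replace the placeholders with real values):

docker run --rm -v "$PWD":/workspace -w /workspace \
  -e DWC_USER='<user>' -e DWC_PASS='<password>' \
  -e DWC_URL='https://dwc-ab-abcd.eu10.hcs.cloud.sap/' \
  -e DWC_PASSCODE_URL='https://dwc-ab-abcd.authentication.eu10.hana.ondemand.com/passcode' \
  -e SPACE='SP_PROJECTDEV' -e SPACEQA='SP_PROJECTQA' -e LABELQA='DWC_QA' \
  -e HDIDEV='SP_PROJECTDEV_DWC_HDI' -e HDIQA='SP_PROJECT_QA_DWC_HDI' \
  -e ENTITIES='' -e SPACE_DEFINITION_FILE='SP_PROJECTDEV.json' \
  -e NEW_SPACE_DEFINITION_FILE='SP_PROJECTQA.json' \
  vishwagi/puppeteer-dwc-node-docker:latest node Build.js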

Build.js

const puppeteer = require("puppeteer");
const exec = require("child_process").exec;
const fs = require('fs-extra');

const SPACE_DEFINITION_FILE = process.env.SPACE_DEFINITION_FILE;
const NEW_SPACE_DEFINITION_FILE = process.env.NEW_SPACE_DEFINITION_FILE;
const SPACE = process.env.SPACE;
const SPACEQA = process.env.SPACEQA;
const LABELQA = process.env.LABELQA;
const ENTITIES = process.env.ENTITIES;
const HDIDEV = process.env.HDIDEV;
const HDIQA = process.env.HDIQA;
const DWC_URL = process.env.DWC_URL;
const DWC_PASSCODE_URL = process.env.DWC_PASSCODE_URL; 
const USERNAME = process.env.DWC_USER;
const PASSWORD = process.env.DWC_PASS;

let page;

const getPasscode = async () => {
    console.log('Inside get passcode module');
    await page.waitForSelector('div.island > h1 + h2', {visible: true, timeout: 5000}); 
    await page.reload();  
    return await page.$eval('h2', el => el.textContent);
}

const execCommand = async (command) => new Promise(async (res, rej) => {
    const passcode = await getPasscode();
    console.log('Passcode OK');
    const cmd = `${command} -H ${DWC_URL} -p ${passcode}`;
    console.log('command for space download', cmd);
    
    exec(cmd, (error, stdout, stderr) => {
        if (error) {
            console.error(`error: ${error.message}`);
            if (error.code === 1) {
                res({ error, stdout, stderr });
            }else {
            rej({ error, stdout, stderr });
            }
        }
        else{
            res({ error, stdout, stderr });
        }
      
        console.log(`stdout:\n${stdout}`);
        console.log(`error:\n${error}`);
        console.log(`stderr:\n${stderr}`);

    });
});

(async () => {
    const browser = await puppeteer.launch({args: ['--no-sandbox', '--disable-setuid-sandbox']}); 
    page = await browser.newPage();
    await page.goto(DWC_PASSCODE_URL);

    await page.waitForSelector('#logOnForm', {visible: true, timeout: 5000}); 
    if (await page.$('#logOnForm') !== null) {
        await page.type('#j_username', USERNAME);
        await page.type('#j_password', PASSWORD);
        await page.click('#logOnFormSubmit');
    }
 
//--------- READ DEV SPACE ------------------//

   // console.log(process.env); // debug only: avoid dumping env vars that include credentials
   await execCommand(`dwc cache-init`);
   await execCommand(`dwc spaces read -s ${SPACE} -o ${SPACE_DEFINITION_FILE} -d ${ENTITIES}`);
  
//--------- CREATE/UPDATE QA SPACE ------------------//

    const spaceContent = await fs.readFile(SPACE_DEFINITION_FILE, 'utf-8')
    console.log('Read file');
    const replacer = new RegExp(HDIDEV, 'gi')
    const spaceContentQA = spaceContent.replace(replacer, HDIQA);

//  parse the downloaded space definition file
    const spaceDefinition = JSON.parse(spaceContentQA);
//  We need to update the space ID as well as the dbuser, as it is specific to the space.
// First, get the current space name and label to derive the dbuser name.
    const dbuser_name = SPACE +'#'+ spaceDefinition[SPACE].spaceDefinition.label;
//  copy the dbuser details into a placeholder for now; we will attach the same config to the new dbuser.
    const dbuser_details = spaceDefinition[SPACE].spaceDefinition.dbusers[dbuser_name];
    
    console.log(dbuser_details);
    console.log(spaceDefinition[SPACE].spaceDefinition.dbusers)
    
    // update to new dbusername
    const dbuser_name_new = SPACEQA+'#'+LABELQA;

    // const dbuserjson = JSON.stringify([dbuser_name_new]: dbuser_details)
// parse the created JSON, otherwise double escapes would be added later
    const dbuser_json = JSON.parse(JSON.stringify({ [dbuser_name_new] : dbuser_details}));
    // Update the label and dbuser details with the new ones
    spaceDefinition[SPACE].spaceDefinition.label = LABELQA;
    spaceDefinition[SPACE].spaceDefinition.dbusers = dbuser_json;
   
// Change root node to new QA space
    var json = JSON.stringify({ [SPACEQA] : spaceDefinition[SPACE]});
    // console.log(json);

//    Write the space details to the file to be consumed by deploy later.
    await fs.writeFile(NEW_SPACE_DEFINITION_FILE, json, 'utf-8');

    console.log('MAIN after executing commands');

    await browser.close();
})();
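To illustrate what the transformation above does to the space definition, here is a heavily abbreviated before/after sketch (the DEV label shown is a placeholder and the inner dbuser settings are omitted; the real file contains many more properties):

SP_PROJECTDEV.json, downloaded from the DEV space:
{ "SP_PROJECTDEV": { "spaceDefinition": { "label": "DWC_DEV", "dbusers": { "SP_PROJECTDEV#DWC_DEV": { ... } }, ... } } }

SP_PROJECTQA.json, written by Build.js:
{ "SP_PROJECTQA": { "spaceDefinition": { "label": "DWC_QA", "dbusers": { "SP_PROJECTQA#DWC_QA": { ... } }, ... } } }

In addition, every occurrence of the DEV HDI schema name (SP_PROJECTDEV_DWC_HDI) has been replaced with the QA one (SP_PROJECT_QA_DWC_HDI) by the regular expression at the start of the transformation.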

Deploy.js

const puppeteer = require("puppeteer");
const path = require('path');
const exec = require("child_process").exec;

const NEW_SPACE_DEFINITION_FILE = process.env.NEW_SPACE_DEFINITION_FILE;
const DWC_URL = process.env.DWC_URL;
const DWC_PASSCODE_URL = process.env.DWC_PASSCODE_URL; 
const USERNAME = process.env.DWC_USER;
const PASSWORD = process.env.DWC_PASS;

let page;

const getPasscode = async () => {
    console.log('Inside get passcode module');
    await page.waitForSelector('div.island > h1 + h2', {visible: true, timeout: 20000}); 
    await page.reload();  
    return await page.$eval('h2', el => el.textContent);
}

const execCommand = async (command) => new Promise(async (res, rej) => {
    const passcode = await getPasscode();
    console.log('Passcode OK');
 
    const cmd = `${command} -H ${DWC_URL} -p ${passcode}`;
    console.log('command for space download', cmd);
    
    exec(cmd, (error, stdout, stderr) => {
        if (error) {
            console.error(`error: ${error.message}`);
            if (error.code === 1) {
                res({ error, stdout, stderr });
            }else {
            rej({ error, stdout, stderr });
            }
        }
        else{
            res({ error, stdout, stderr });
        }
      
        console.log(`stdout:\n${stdout}`);
        console.log(`error:\n${error}`);
        console.log(`stderr:\n${stderr}`);

    });
});

(async () => {
    const browser = await puppeteer.launch({args: ['--no-sandbox', '--disable-setuid-sandbox']}); 
    page = await browser.newPage();
    await page.goto(DWC_PASSCODE_URL);

    await page.waitForSelector('#logOnForm', {visible: true, timeout: 10000}); 
    if (await page.$('#logOnForm') !== null) {
        await page.type('#j_username', USERNAME);
        await page.type('#j_password', PASSWORD);
        await page.click('#logOnFormSubmit');
    }

//    console.log(process.env);
   await execCommand(`dwc cache-init`);
 
//--------- CREATE SPACE ------------------//
//  The command below creates/updates the DWC space from the supplied .json file (-f)

   await execCommand(`dwc spaces create -f ${NEW_SPACE_DEFINITION_FILE}`);
   console.log('MAIN after executing commands');

   await browser.close();
})();

 

I’ll add a video here of the code walkthrough and end-to-end demo soon; watch this space.

 

Additional Setup Pre-requisite Topics

  1. Service Broker Mapping to enable SQL data warehousing
  2. Bi-directional access SAP Data Warehouse Cloud artifacts  <-> SAP HANA Cloud HDI Container – Tutorial
  3. Project “Piper” Jenkins setup
    • Project “Piper” is one of SAP’s solutions for continuous integration and delivery, as detailed in the solutions overview. Piper is used in this scenario because of the added flexibility it offers when setting up CI/CD automation. Start the CX Server as the build server for your CI/CD pipeline. The CX Server is part of project “Piper.” It’s a lifecycle-management tool to bootstrap a preconfigured Jenkins instance; thanks to its lifecycle-management scripts, it uses Docker images and can be used out of the box (see the bootstrap sketch below the list).
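For reference, this is roughly how the CX Server is bootstrapped according to the project “Piper” infrastructure documentation at the time of writing; please check the current documentation for the exact commands and image name before using it:

docker run -it --rm -u $(id -u):$(id -g) -v "${PWD}":/cx-server/mount/ ppiper/cx-server-companion:latest init-cx-server
chmod +x ./cx-server
./cx-server start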

Conclusion

Along with the earlier overview blog post, this blog post details the use case scenario, automation flow, challenges and the approach for CI/CD automation with SAP Data Warehouse Cloud and SAP HANA Cloud. With SAP Cloud Transport Management, project “Piper” and @sap/dwc-cli, CI/CD automation can be realized. The SAP Data Warehouse Cloud pipeline is automated using a wrapper around dwc-cli, and a Docker image is used to ensure the dependencies are met. Also, both DEV and QA HDI containers linked to SAP Data Warehouse Cloud can be deployed automatically under the same subaccount (linked Cloud Foundry org) and space.

Let me know your thoughts about the approach, and feel free to share this blog. If some section is unclear, let me know, and I’ll add more details. All your feedback and comments are welcome. If you have any questions, please do not hesitate to ask in the Q&A area as well.
