This is the second blog post in the “SAP Data and Analytics Showcase” series. We recommend reading our overview blog post first to gain a better understanding of the end-to-end scenario and use case, which combines multiple capabilities of SAP Data and Analytics solutions. In this blog post, we will:
- Configure the database module to enable machine learning capabilities within your CAP project
- Implement a Node.js script to run machine learning models and generate prediction results
- Deploy the dedicated services and the CAP project to SAP BTP
Prerequisite: Create a CAP Project in Business Application Studio
First, let’s create a Full Stack Cloud Application in Business Application Studio. You can refer to this blog post for details on this step. The CAP Node.js application below, called “ml_hana_demo”, is created with empty database (db) and Node.js backend (srv) modules as a prerequisite.
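If you prefer working in a terminal inside Business Application Studio, a project with the same layout can be scaffolded with the cds CLI; the project name and facets below are just our example:
cds init ml_hana_demo --add hana,mta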
Detailed Implementation Steps for Sub Scenario 1
First, let’s review our sub scenario 1 again: the training data and the CAP application are located in the same HDI container of the same HANA Cloud instance. Afterwards, you can follow the steps below to configure the newly created CAP project in Business Application Studio.
Step 1: Create a database user and roles, grant privileges in HANA Database Explorer
Before your first deployment, create a database user called PAL_ACCESS_GRANTOR and two new roles in HANA Database Explorer, and grant the related PAL privileges to these two roles. You can use the following SQL statements for this step.
CREATE USER PAL_ACCESS_GRANTOR PASSWORD <password> NO FORCE_FIRST_PASSWORD_CHANGE;
-- create roles
CREATE ROLE "data::external_access_g";
CREATE ROLE "data::external_access";
GRANT "data::external_access_g", "data::external_access" TO PAL_ACCESS_GRANTOR WITH ADMIN OPTION;
-- grant the PAL execution privileges: the "_g" role carries the grant option needed by the object owner
GRANT AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION TO "data::external_access_g";
GRANT AFL__SYS_AFL_AFLPAL_EXECUTE TO "data::external_access";
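To double-check the setup before moving on, you can query the HANA system view GRANTED_ROLES, for example:
-- verify that both roles were granted to the new user
SELECT ROLE_NAME, GRANTOR FROM SYS.GRANTED_ROLES WHERE GRANTEE = 'PAL_ACCESS_GRANTOR';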
Step 2: Create a user provided service in your space on Business Technology Platform
Under your space on BTP, you can create a user provided service (called “ml_hana_bas_ups” in our case) with the username and password from Step 1. This user provided service provides the necessary authorisation for the runtime HDI user to run machine learning models.
{
    "password": "Your password",
    "tags": [
        "hana"
    ],
    "user": "PAL_ACCESS_GRANTOR"
}
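If you prefer the command line, the same instance can be created with the Cloud Foundry CLI (the -t flag for tags requires cf CLI v7 or later); the values below mirror our example:
cf create-user-provided-service ml_hana_bas_ups -p '{"user":"PAL_ACCESS_GRANTOR","password":"<your password>"}' -t "hana"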
Step 3: Adapt mta.yaml and package.json and bind the user provided service to database module
After creating the user provided service on BTP, we need to bind this service to the database module of our CAP project. To do so, you can refer to the following definitions in mta.yaml:
  - name: ml_hana_demo-db-deployer
    type: hdb
    path: db # gen/db
    requires:
      - name: ml_hana_demo-db
        properties:
          TARGET_CONTAINER: ~{hdi-service-name}
      - name: cross-container-service-1
        group: SERVICE_REPLACEMENTS
        properties:
          key: ServiceName_1
          service: ~{the-service-name}
    parameters:
      buildpack: nodejs_buildpack

resources:
  - name: cross-container-service-1
    type: org.cloudfoundry.existing-service
    parameters:
      service-name: ml_hana_bas_ups
    properties:
      the-service-name: ${service-name}
Besides the mta.yaml file, we need to adapt the package.json in the root of the project. These two adjustments make the generated project fully compatible with HANA native artefacts.
First, we update the @sap/cds version to ^6 in the dependencies section.
"dependencies": {
"@sap/cds": "^6",
"express": "^4",
"hdb": "^0.18.3",
"sap-hdb-promisfied": "^2.202205.1"
}
Afterwards, we replace the cds section in the same package.json with the following code:
"cds": {
"build": {
"tasks": [
{
"for": "hana",
"dest": "../db"
},
{
"for": "node-cf"
}
]
},
"requires": {
"db": {
"kind": "hana-cloud"
}
}
}
Step 4: Import generated design-time artefacts into CAP project in Business Application Studio
As described in our first blog post, our data scientist generated the following design-time artefacts via the hana-ml library in Jupyter. We need to import these files into the db module of our CAP project.
- hdbprocedure: the file “base_additivemodelforecast1_fit” is used to train the machine learning model, and the file “base_additivemodelforecast1_predict” is used to run predictions based on the trained model. Additionally, we’ve created a procedure (not generated via the hana-ml library) called “base_additivemodelforecast1_modelwrapper”, which calls the training procedure and writes the resulting ML models into hdbtables. You can look at our model wrapper procedure below as an example.
PROCEDURE base_additivemodelforecast1_modelwrapper()
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN
    -- training data: all fuel prices before the cut-off date
    lt_data = SELECT "STATION_UUID", "DATE", "E5" FROM "CPM_ML_FUEL_PRICES_RNK_TEST" WHERE "DATE" < '2022-06-14 00:00:00.000';
    -- holiday information consumed by the additive model analysis
    lt_holiday = SELECT * FROM CPM_ML_PAL_ADDITIVE_MODEL_ANALYSIS_HOLIDAY;
    -- call the generated training procedure, which persists the trained model
    CALL BASE_ADDITIVEMODELFORECAST1_FIT(
        :lt_data,
        :lt_holiday
    );
END
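After deployment, you can also trigger the wrapper manually from the SQL console of the HDI container and inspect the persisted model. A small sketch, assuming the deployed table name follows our cpm.ml namespace:
CALL "BASE_ADDITIVEMODELFORECAST1_MODELWRAPPER"();
-- the trained model should now appear in the model table
SELECT * FROM "CPM_ML_PAL_ADDITIVE_MODEL_ANALYSIS_MODEL";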
- hdbgrants: the file “ml_hana_bas_ups” grants the roles created in Step 1, which carry the “AFL__SYS_AFL_AFLPAL_EXECUTE” and “AFL__SYS_AFL_AFLPAL_EXECUTE_WITH_GRANT_OPTION” privileges, to the runtime database users so that the PAL procedures can be executed successfully.
{
    "ml_hana_bas_ups": {                 --> your user provided service name
        "object_owner": {
            "roles": [
                "data::external_access_g"    --> role created in Step 1 to call the PAL procedures
            ]
        },
        "application_user": {
            "roles": [
                "data::external_access"
            ]
        }
    }
}
- hdbsynonym: the file “synonym” enables the runtime user to access the artefacts under the system schema “_SYS_AFL”.
{
    "SYSAFL::PALMASSIVEADDITIVEMODELANALYSIS": {
        "target": {
            "object": "PAL_MASSIVE_ADDITIVE_MODEL_ANALYSIS",
            "schema": "_SYS_AFL"
        }
    },
    "SYSAFL::PALMASSIVEADDITIVEMODELPREDICT": {
        "target": {
            "object": "PAL_MASSIVE_ADDITIVE_MODEL_PREDICT",
            "schema": "_SYS_AFL"
        }
    }
}
Step 5: Create database tables in the CAP CDS file in Business Application Studio
To train and use the ML models successfully, table structures, e.g., for training data, test data, ML models, and prediction results, need to be defined via CAP CDS (in the file “data-model.cds”) and deployed as hdbtables. This step needs to be done before deploying the CAP project. For instance, we created the following entity in the CDS file to store the ML models; its structure matches the model output of the procedure “base_additivemodelforecast1_fit” shown above.
entity PAL_ADDITIVE_MODEL_ANALYSIS_MODEL {
    GROUP_ID      : String(100);
    ROW_INDEX     : Integer;
    MODEL_CONTENT : LargeString;
}
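The prediction-result entity is defined in the same way. The following is only a hypothetical sketch; the actual column names must match the output signature of the generated predict procedure:
entity PAL_ADDITIVE_MODEL_PREDICT_FORECAST_RESULT_TBL_1 {
    // hypothetical columns, to be aligned with the hana-ml forecast output
    GROUP_ID   : String(100);
    DATE       : Timestamp;
    YHAT       : Double;
    YHAT_LOWER : Double;
    YHAT_UPPER : Double;
}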
All the tables required by the training and prediction procedures in our use case are defined in this way.
We’ve prepared three CSV files (training and testing data) and imported them into the data folder under the db module. During deployment of the CAP project, the entities “Stations”, “Prices” and “FUEL_PRICES_RNK_TEST” are filled with these datasets automatically; see the naming sketch below.
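CAP picks up these files by the naming convention <namespace>-<entity>.csv. A minimal sketch for the “Stations” entity, with hypothetical columns:
db/data/cpm.ml-Stations.csv:
STATION_UUID;NAME;CITY
<uuid>;Example Station;Walldorf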
Step 6: Develop JavaScript code in the Node.js module to call the database procedures
First, we add the “sap-hdb-promisfied” dependency to the package.json file, which is required to call database procedures from JavaScript. You can refer to the following statements in package.json:
"dependencies": {
"@sap/cds": "^5",
"express": "^4",
"hdb": "^0.18.3",
"sap-hdb-promisfied": "^2.202205.1"
}
Second, we define service functions in the OData service of the Node.js module of the CAP project (*/srv/cat-service.cds) to enable users to call the procedures through these functions.
using cpm.ml as my from '../db/data-model';

service CatalogService {
    entity Stations as projection on my.Stations;
    entity Prices as projection on my.Prices;
    entity Prediction_Results as projection on my.PAL_ADDITIVE_MODEL_PREDICT_FORECAST_RESULT_TBL_1;
    function Prices_Predict() returns Boolean;
    function Model_Train_Wrapper() returns Boolean;
}
Finally, we implement the handler logic in JavaScript to train the machine learning models and generate the prediction results. You can use the code below as an example.
const cds = require('@sap/cds')
const dbClass = require("sap-hdb-promisfied")

module.exports = cds.service.impl(function () {
    // generate prediction results
    this.on('Prices_Predict', async () => {
        try {
            // open a connection using the credentials of the bound services
            let db = new dbClass(await dbClass.createConnectionFromEnv())
            let dbProcQuery = "CALL BASE_ADDITIVEMODELFORECAST1_PREDICT()"
            console.log("------Before running db procedure--------")
            let result = await db.execSQL(dbProcQuery)
            console.log("------After running db procedure--------")
            console.table(result)
            return true
        } catch (error) {
            console.error(error)
            return false
        }
    })

    // train machine learning models and store them in HANA tables
    this.on('Model_Train_Wrapper', async () => {
        try {
            let db = new dbClass(await dbClass.createConnectionFromEnv())
            let dbProcQuery = "CALL BASE_ADDITIVEMODELFORECAST1_MODELWRAPPER()"
            console.log("------Before running db procedure--------")
            let result = await db.execSQL(dbProcQuery)
            console.log("------After running db procedure--------")
            console.table(result)
            return true
        } catch (error) {
            console.error(error)
            return false
        }
    })
})
Step 7: Call service functions to train machine learning models and generate prediction results
First, the function “Model_Train_Wrapper()” from the OData service needs to be called to train the ML models. Afterwards, the function “Prices_Predict()” is called to run the predictions. Both functions can be found in the metadata of the OData service.
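Since both are unbound OData functions, they can be triggered with plain GET requests. A sketch, assuming the default CAP service path derived from “CatalogService”:
GET https://<your-app-url>/catalog/Model_Train_Wrapper()
GET https://<your-app-url>/catalog/Prices_Predict()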
As our final step, let’s check the prediction results via the URL of the OData service. You could also implement a simple UI for this using SAP Fiori; we skip the UI part in this blog post.
Conclusion
Congratulations! You’ve now finished all the implementation steps required to develop a machine learning application using the SAP Cloud Application Programming Model, SAP Business Application Studio, and capabilities of SAP HANA Cloud such as the hana-ml PAL/APL libraries. We hope you are able to develop your own machine learning application after this post!
Thank you for your time, and please stay tuned and curious about our upcoming blog posts!