In part 1, I gave you an overview of a solution to extract supplier data from a Sourcing event in SAP Ariba Sourcing, and save it in a mailing list in SAP Qualtrics XM for Suppliers.
In this post, I will detail two ways in which we can create SAP HANA Cloud database tables to store the information extracted from SAP Ariba Sourcing.
Process
There are two methods to create SAP HANA Cloud artifacts:
HANA Database project
In this method, we create database artifacts in a classic database schema using declarative SQL.
You can follow this great tutorial by Thomas Jung to learn how to create this kind of project.
For this exercise, I created the project in the SAP Business Application Studio as explained in the tutorial. I used these files to create the tables:
rfx.hdbtable
COLUMN TABLE rfx (
    id          NVARCHAR(15) NOT NULL COMMENT 'Internal ID',
    title       NVARCHAR(1024) COMMENT 'Title',
    created_at  TIMESTAMP COMMENT 'Created at',
    updated_at  TIMESTAMP COMMENT 'Updated at',
    event_type  NVARCHAR(30) COMMENT 'Event Type',
    event_state NVARCHAR(30) COMMENT 'Event State',
    status      NVARCHAR(30) COMMENT 'Event Status',
    PRIMARY KEY(id)
) COMMENT 'RFx Information'
rfx_invited_users.hdbtable
COLUMN TABLE rfx_invited_users (
    id          NVARCHAR(15) NOT NULL COMMENT 'Internal ID',
    unique_name NVARCHAR(1024) NOT NULL COMMENT 'Contact Unique Name',
    full_name   NVARCHAR(1024) COMMENT 'Full Name',
    first_name  NVARCHAR(250) COMMENT 'First Name',
    last_name   NVARCHAR(250) COMMENT 'Last Name',
    email       NVARCHAR(250) COMMENT 'Email',
    phone       NVARCHAR(30) COMMENT 'Phone',
    fax         NVARCHAR(30) COMMENT 'Fax',
    awarded     BOOLEAN COMMENT 'Awarded',
    PRIMARY KEY(id, unique_name)
) COMMENT 'Users invited to RFx'
rfx_organizations.hdbtable
COLUMN TABLE rfx_organizations (
    id          NVARCHAR(15) NOT NULL COMMENT 'Internal ID',
    item        INTEGER COMMENT 'Item No',
    org_id      NVARCHAR(15) COMMENT 'Organization ID',
    name        NVARCHAR(1024) COMMENT 'Name',
    address     NVARCHAR(1024) COMMENT 'Address',
    city        NVARCHAR(1024) COMMENT 'City',
    state       NVARCHAR(1024) COMMENT 'State',
    postal_code NVARCHAR(10) COMMENT 'Postal Code',
    country     NVARCHAR(100) COMMENT 'Country',
    contact_id  NVARCHAR(1024) NOT NULL COMMENT 'Contact Unique Name',
    PRIMARY KEY(id, item)
) COMMENT 'RFx Organizations'
After creating all files, the project should look like this:
After deployment, you should see all 3 tables in the SAP HANA Cloud database explorer:
Now you can use JDBC (for example) to access the tables in the SAP HANA Cloud database.
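Whichever client you choose (a JDBC driver from Java, or the @sap/hana-client package from Node.js), the statements you run are plain parameterized SQL. As a minimal sketch in JavaScript, assuming a hypothetical event object whose keys match the rfx columns:

```javascript
// Sketch: build a parameterized INSERT for the rfx table defined above.
// The event object shape is a hypothetical example, not the Ariba payload.
const RFX_COLUMNS = [
  "id", "title", "created_at", "updated_at",
  "event_type", "event_state", "status",
];

function buildRfxInsert(event) {
  const placeholders = RFX_COLUMNS.map(() => "?").join(", ");
  return {
    sql: `INSERT INTO rfx (${RFX_COLUMNS.join(", ")}) VALUES (${placeholders})`,
    // Missing fields become NULL, matching the nullable columns.
    params: RFX_COLUMNS.map((col) => event[col] ?? null),
  };
}

const { sql, params } = buildRfxInsert({
  id: "WS1234567", // hypothetical internal ID
  title: "Laptops RFP",
  event_type: "RFP",
  status: "Open",
});
console.log(sql);
console.log(params.length); // 7
```

Binding values through `?` placeholders instead of concatenating them into the statement is exactly what a JDBC PreparedStatement does as well, and it protects the tables from SQL injection.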
Multi-target application project, CAP and CDS
In this method we create a multi-target application and use CAP and CDS to generate the SAP HANA database tables, as well as the OData services to access the database.
You can follow this very complete tutorial by Thomas Jung to learn how to create this kind of project.
For this exercise, I used Visual Studio Code to create the project. You can follow this awesome tutorial by René Jeglinsky to learn how to set up Visual Studio Code for CAP development, and how to deploy the project to SAP BTP.
I used these files for the CDS artifacts:
schema.cds
namespace com.aribaxm.service;
type InternalId : String(15);
type SDate : DateTime;
type XLText : String(2050);
type LText : String(1024);
type SText : String(30);
type MText : String(250);
entity Rfx {
    key id     : InternalId;
    title      : LText;
    createdAt  : SDate;
    updatedAt  : SDate;
    eventType  : SText;
    eventState : SText;
    status     : SText;
}

entity RfxInvitedUsers {
    key id         : InternalId;
    key uniqueName : LText;
    fullName       : LText;
    firstName      : MText;
    lastName       : MText;
    email          : MText;
    phone          : SText;
    fax            : SText;
    awarded        : Boolean;
}

entity RfxOrganizations {
    key id     : InternalId;
    key item   : Integer;
    orgId      : SText;
    name       : LText;
    address    : LText;
    city       : LText;
    state      : LText;
    postalCode : SText;
    country    : MText;
    contactId  : LText;
}
service.cds
using com.aribaxm.service as aribaxm from '../db/schema';
service CatalogService @(path:'/api/v1') {
    entity Rfx              as projection on aribaxm.Rfx;
    entity RfxInvitedUsers  as projection on aribaxm.RfxInvitedUsers;
    entity RfxOrganizations as projection on aribaxm.RfxOrganizations;
}
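Once deployed, CAP exposes one OData endpoint per projected entity under the /api/v1 path declared above. As a small sketch of how the query URLs are composed (the host name is a placeholder for your own deployment):

```javascript
// Sketch: compose OData V4 query URLs for the generated CatalogService.
// The base URL is a placeholder; the /api/v1 path comes from service.cds.
const base = "https://<your-app>.cfapps.<region>.hana.ondemand.com/api/v1";

function odataUrl(entity, options = {}) {
  // Each option becomes an OData system query option such as $top or $filter.
  const query = Object.entries(options)
    .map(([key, value]) => `$${key}=${encodeURIComponent(value)}`)
    .join("&");
  return query ? `${base}/${entity}?${query}` : `${base}/${entity}`;
}

// Ten most recently created RFx events:
console.log(odataUrl("Rfx", { orderby: "createdAt desc", top: 10 }));
// Invited users that were awarded:
console.log(odataUrl("RfxInvitedUsers", { filter: "awarded eq true" }));
```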
server.js
"use strict";
const cds = require("@sap/cds");
const cors = require("cors");
//const proxy = require("@sap/cds-odata-v2-adapter-proxy");
cds.on("bootstrap", app => app.use(cors()));
module.exports = cds.server;
The mta.yaml should look something like this:
---
_schema-version: '3.1'
ID: AribaXM
version: 1.0.0
parameters:
  enable-parallel-deployments: true
build-parameters:
  before-all:
    - builder: custom
      commands:
        - npm install --production
        - npx -p @sap/cds-dk cds build --production
modules:
  - name: aribaxm-srv
    type: nodejs
    path: gen/srv
    parameters:
      buildpack: nodejs_buildpack
    build-parameters:
      builder: npm-ci
    provides:
      - name: srv-api # required by consumers of CAP services (e.g. approuter)
        properties:
          srv-url: ${default-url}
    requires:
      - name: aribaxm-db
  - name: aribaxm-db-deployer
    type: hdb
    path: gen/db
    parameters:
      buildpack: nodejs_buildpack
    requires:
      - name: aribaxm-db
resources:
  - name: aribaxm-db
    type: com.sap.xs.hdi-container
    parameters:
      service: hana # or 'hanatrial' on trial landscapes
      service-plan: hdi-shared
    properties:
      hdi-service-name: ${service-name}
And the package.json should look something like this:
{
  "name": "aribaxm",
  "version": "1.0.0",
  "description": "A simple CAP project.",
  "repository": "<Add your repository here>",
  "license": "UNLICENSED",
  "private": true,
  "dependencies": {
    "@sap/cds": "^5",
    "express": "^4",
    "@sap/hana-client": "^2",
    "cors": "^2"
  },
  "devDependencies": {
    "sqlite3": "^5",
    "@sap/hdi-deploy": "^4"
  },
  "engines": {
    "node": "^16"
  },
  "scripts": {
    "start": "cds run"
  },
  "eslintConfig": {
    "extends": "eslint:recommended",
    "env": {
      "es2020": true,
      "node": true,
      "jest": true,
      "mocha": true
    },
    "globals": {
      "SELECT": true,
      "INSERT": true,
      "UPDATE": true,
      "DELETE": true,
      "CREATE": true,
      "DROP": true,
      "CDL": true,
      "CQL": true,
      "CXL": true,
      "cds": true
    },
    "rules": {
      "no-console": "off",
      "require-atomic-updates": "off"
    }
  },
  "cds": {
    "requires": {
      "db": {
        "kind": "hana"
      }
    },
    "hana": {
      "deploy-format": "hdbtable"
    }
  }
}
After creating all files, the project should look like this:
After deployment, you should see all 3 tables in the SAP HANA Cloud Database Explorer:
And these two applications in the BTP space:
Now you can use OData to access the database. In this exercise I didn't add access control, so you can use your BTP user to execute the OData services from Postman and check the access.
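For reference, Postman's Basic Auth tab simply sends an Authorization header derived from the user name and password. A sketch of the same header built in Node.js, with hypothetical credentials:

```javascript
// Sketch: build an HTTP Basic Authorization header (hypothetical credentials).
const user = "my.user@example.com"; // your BTP user
const password = "<password>";      // placeholder
const authHeader =
  "Basic " + Buffer.from(`${user}:${password}`).toString("base64");

// On Node 18+, the built-in fetch could then call the service, e.g.:
// await fetch(serviceUrl + "/Rfx", { headers: { Authorization: authHeader } });
console.log(authHeader.startsWith("Basic ")); // true
```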
Final Thoughts
Now you have two methods of creating SAP HANA Cloud database artifacts, from either SAP Business Application Studio or Visual Studio Code, so you can access the database via JDBC or OData services.
Go ahead to part 3, where I will detail the integration package in SAP Integration Suite that reads and inserts data in the database using both methods (JDBC and OData).
I encourage you to follow my profile to be notified of the next blog post; and please let me know your thoughts or questions in the comments section below.