The SAP Cloud Application Programming Model (CAP) has become a preferred development approach when working with SAP BTP.

Developers who previously worked as ABAPers may remember that consuming database tables across applications and objects was straightforward in ABAP.

In the SAP Cloud Application Programming Model, however, each database object belongs to an HDI container and the schema beneath it.

HDI containers are not directly accessible unless access is granted explicitly.

Hence, it is important to understand how to include the required HDI containers in our project and use the database artifacts (e.g., tables, views, procedures) that are available in those containers.

Understanding HDI Containers and Schema

HDI Containers – The SAP HANA Deployment Infrastructure (HDI) provides a service that enables you to deploy database development artifacts to so-called containers. This service includes a family of consistent design-time artifacts for all key HANA platform database features, which describe the target (run-time) state of SAP HANA database artifacts, for example tables, views, or procedures. All these database artifacts are grouped into a schema, then staged, built, and deployed to the SAP HANA Cloud database in the form of a database container.

(For a deeper understanding of HDI containers, refer to HDI Containers | SAP Help Portal.)

Schema – A schema is a container that holds the different elements or objects of a relational database. The Catalog node of the SAP HANA Information Modeler contains all the elements of the system.

Within the Catalog node, the relational SAP HANA database is divided into sub-databases known as schemas. In a nutshell, a schema is a logical container that ties together all the relevant database artifacts, e.g., synonyms, tables, views, procedures, and functions.

Schema Naming – Schemas can be defined uniquely within the HANA system using either static or dynamic schema names.

With static schema names, the schema name is uniform across all system landscapes, e.g., Development, Quality, and Production. For this purpose, the schema name is specified in the mta.yaml file while defining the HDI container resource, as follows –

resources:
- name: hdi_BLG_DB_CENTRAL
  type: com.sap.xs.hdi-container
  parameters:
    config:
      schema: BLG_DB_CENTRAL
  properties:
    hdi-container-name: ${service-name}


HDI Container With Static Schema Name

With dynamic schema names, the schema name is generated automatically when the HDI container is first created in the system.

resources:
- name: hdi_BLG_DB_MASTER_WITH_DYNAMIC_SCHEMA
  type: com.sap.xs.hdi-container
  properties:
    hdi-container-name: ${service-name}


HDI Container With Dynamically Generated Schema name

Steps to Consume HDI Containers in an SAP Cloud Application Programming Model (CAP) Backend

To consume another HDI container in an SAP Cloud Application Programming Model (CAP) service, the first step is to specify those containers as external resources in the resources section of the consuming application's mta.yaml file.

resources:
  - name: hdi_BLG_DB_CONSUMINGSERVICE
    type: com.sap.xs.hdi-container
    parameters:
      service: hana
      service-plan: hdi-shared
    properties:
      hdi-container-name: ${service-name}
  - name: cross-container-central-db-wus
    type: org.cloudfoundry.existing-service
    parameters:
      service-name: hdi_BLG_DB_CENTRAL
    properties:
      the-service-name: ${service-name}
  - name: cross-container-master-db-wds
    type: org.cloudfoundry.existing-service
    parameters:
      service-name: hdi_BLG_DB_MASTER_WITH_DYNAMIC_SCHEMA
    properties:
      the-service-name: ${service-name}    

Next, specify the external containers in the requires section of the hdb module, using the group SERVICE_REPLACEMENTS, in the mta.yaml file of the consuming application.

Take note of the key specified with each container entry below – it can be any meaningful, unique value. This key is used later, in the .hdbgrants and .hdbsynonymconfig files, when assigning roles and resolving the schema for individual container-level objects.

modules:
  - name: blg_cap_consumingservice-db-deployer
    type: hdb
    path: gen/db
    parameters:
      buildpack: nodejs_buildpack
    requires:
      - name: hdi_BLG_DB_CONSUMINGSERVICE
        properties:
          TARGET_CONTAINER: ~{hdi-container-name}
      - name: cross-container-central-db-wus
        group: SERVICE_REPLACEMENTS
        properties:
          key: central-db-uniform-schema
          service: ~{the-service-name}
      - name: cross-container-master-db-wds
        group: SERVICE_REPLACEMENTS
        properties:
          key: central-db-dynamic-schema
          service: ~{the-service-name}


Once you have specified the dependencies as described above, identify the scenario that suits your use case –

a. Consuming Database Artifacts from an HDI Container with a Static/Uniform Schema Name –

If the existing container you are consuming has a static schema specified, you need to create two files (under the db/src/ folder) to create a reference to the original object (called a synonym) –

<filename>.hdbgrants (under db/src/ – this file specifies the role of the existing container <write/read>, which defines the type of access the consuming application receives)

{
    "central-db-uniform-schema": {
        "object_owner": {
            "container_roles" : [ "BLG_CENTRAL::Write#" ]
        },
        "application_user": {
            "container_roles" : [ "BLG_CENTRAL::Write" ]
        }
    }
}
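Note that the roles referenced here (BLG_CENTRAL::Write and BLG_CENTRAL::Write#) are not created by the consuming application – they must already exist as .hdbrole design-time artifacts in the providing container (hdi_BLG_DB_CENTRAL). By convention, the role with the # suffix carries the same privileges with grant option and is assigned to the object_owner, so that the consuming container can build its own objects on top of the granted ones. A minimal, purely illustrative sketch of such a role is shown below – the object names and privileges depend on the provider's data model, and a real write role would typically also list INSERT, UPDATE, and DELETE on writable objects. The # variant looks the same, except that it uses "privileges_with_grant_option" instead of "privileges".

<filename>.hdbrole (in the db/src/ folder of the providing container – illustrative sketch only)

{
    "role": {
        "name": "BLG_CENTRAL::Write",
        "object_privileges": [
            {
                "name": "BLG_VIEWS.V_V1_INSPECTIONS",
                "type": "VIEW",
                "privileges": [ "SELECT" ]
            }
        ]
    }
}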

<filename>.hdbsynonym (under db/src/ – this file specifies the list of artifacts the consuming application wants to refer to; in this example, we are using a view from the remote container)

{
    "BLG_EXP_EXTERNAL_INSPECTION_VIEW": {
        "target": {
            "object": "BLG_VIEWS.V_V1_INSPECTIONS",
            "schema": "BLG_DB_CENTRAL"
        }
    }
}

b. Consuming Database Artifacts from an HDI Container with a Dynamically Generated/Unspecified Schema Name –

If the existing container you are consuming has a dynamically generated schema, you need to create three files (two under the db/src/ folder and one under the db/cfg/ folder) to create a reference to the original object (called a synonym).

<filename>.hdbgrants (under db/src/ – this file specifies the role of the existing container <write/read>, which defines the type of access the consuming application receives)

{
    "central-db-dynamic-schema": {
        "object_owner": {
            "container_roles" : [ "BLG_MASTER::ReadOnly#" ]
        },
        "application_user": {
            "container_roles" : [ "BLG_MASTER::ReadOnly" ]
        }
    }
}
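As in the static case, the BLG_MASTER::ReadOnly and BLG_MASTER::ReadOnly# roles must already be defined as .hdbrole artifacts in the providing container (hdi_BLG_DB_MASTER_WITH_DYNAMIC_SCHEMA). A read-only role might look roughly like the following sketch – again purely illustrative, with the object name taken from the synonym target configured below:

<filename>.hdbrole (in the db/src/ folder of the providing container – illustrative sketch only)

{
    "role": {
        "name": "BLG_MASTER::ReadOnly",
        "object_privileges": [
            {
                "name": "BLG_MASTER_TABLES.MATERIALGROUP",
                "type": "TABLE",
                "privileges": [ "SELECT" ]
            }
        ]
    }
}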

<filename>.hdbsynonym (under db/src/ – this file specifies the list of artifacts the consuming application wants to refer to; in this example, we are using a table from the remote container)

{
    "BLG_EXP_EXTERNAL_MATERIALGROUP_MASTER": {}
}

<filename>.hdbsynonymconfig (under db/cfg/ – this file specifies the connection to the dynamic schema for each individual remote object; the schema.configure value refers to the SERVICE_REPLACEMENTS key defined earlier, followed by /schema)

{
    "BLG_EXP_EXTERNAL_MATERIALGROUP_MASTER": {
        "target": {
            "object": "BLG_MASTER_TABLES.MATERIALGROUP",
            "schema.configure": "central-db-dynamic-schema/schema"
        }
    }
}

Once we have specified the remote resources, these artifacts can be used and consumed as required.

For example, if I want to use these remote resources as part of my data model, I can declare them using the annotation @cds.persistence.exists, as follows. This annotation tells the CDS compiler not to generate these tables during deployment, but to expect existing database objects with the corresponding names – here, the synonyms created above (CAP derives the database name from the namespace and entity name, e.g., blg.exp.EXTERNAL_INSPECTION_VIEW becomes BLG_EXP_EXTERNAL_INSPECTION_VIEW).

namespace blg.exp;

// cuid and managed aspects are provided by the standard CDS common model
using { cuid, managed } from '@sap/cds/common';

@cds.persistence.exists
entity EXTERNAL_INSPECTION_VIEW : cuid, managed {
    NAME        : String(36);
    SDATE       : Date;
    EDATE       : Date;

    MATERIALGROUP_ID : String(36);

    MaterialGroup : Association to one EXTERNAL_MATERIALGROUP_MASTER on MaterialGroup.ID = MATERIALGROUP_ID;
}

@cds.persistence.exists
entity EXTERNAL_MATERIALGROUP_MASTER : cuid, managed {
    NAME        : String(36);

    Inspections : Association to many EXTERNAL_INSPECTION_VIEW on Inspections.MATERIALGROUP_ID = ID;
}


This data model can be exposed as part of a service like any other local entities –

using blg.exp as be from '../db/data-model';

service CAPService {
    entity Inspections as select from be.EXTERNAL_INSPECTION_VIEW;

    @readonly entity MaterialGroups as select from be.EXTERNAL_MATERIALGROUP_MASTER;
}


After deploying the consuming service, we should be able to find these artifacts under the Synonyms node of the consuming application's HDI container schema –


HDI Container With Synonyms node (after final deployment)

Summary

In this blog post, we have covered the basics of HDI containers and schemas, and how to consume an existing schema in an SAP Cloud Application Programming Model (CAP) service.

For further reading, the following references helped me gain knowledge on this feature and motivated me to write this blog post:

I would encourage you to read through other blog posts on such topics at: SAP Cloud Application Programming Model | SAP | SAP Blogs

You can post and answer questions on related topics at: All Questions in SAP Cloud Application Programming Model | SAP Community

Please provide your feedback and ask questions in the comments section.
