In July 2019, we saw an evolution of the SAP BTP strategy, with a strong focus on building differentiating business service capabilities and a clear intention to partner with hyperscale cloud providers such as Amazon, Microsoft, AliCloud, and GCP for commodity technical services like open-source databases and data stores, where these hyperscalers are already market leaders.

In February 2020, considering customer/partner feedback about challenges with the BYOA (Bring Your Own Account) approach, we announced an update to the backing service strategy, with plans to deliver a ‘fully managed’ backing service offering:

  • PostgreSQL, hyperscaler option

  • Redis, hyperscaler option

For more details, please read: Managed hyperscaler backing services on SAP Cloud Platform: Best of both worlds!

PostgreSQL, Hyperscaler Option

PostgreSQL, Hyperscaler Option is offered via SAP Cloud Platform Enterprise Agreement (CPEA) contracts. Since only the Subscription model is supported in China, it took some time and effort to obtain exception approvals and go through the commercialization process. Now, PostgreSQL, Hyperscaler Option is available for customers on BTP@AliCloud in China!

Here are the SKUs for the PostgreSQL, Hyperscaler Option on BTP@AliCloud in China:

Service Plan                                       | SKU     | Description
PostgreSQL, hyperscaler option, standard compute   | 8015632 | PostgreSQL service for small-scale productive usage
PostgreSQL, hyperscaler option, premium compute    | 8015661 | PostgreSQL service for large-scale productive usage
PostgreSQL, hyperscaler option, storage            | 8015643 | Storage for PostgreSQL service
PostgreSQL, hyperscaler option, HA storage         | 8015642 | HA storage for PostgreSQL service

Compute

  • Standard

    • Defines compute resources in the ratio of 1 CPU Core: 2GB RAM.

    • Provides baseline performance for application workloads.

    • Suitable for development and small scale production use cases.

    • Available in Blocks of 2GB Memory (RAM).

    • Supported block sizes – 1 or 2.

  • Premium

    • Defines compute resources in the ratio of 1 CPU Core: 4GB RAM.

    • Provides high performance for application workloads.

    • Suitable for medium to large scale production use cases.

    • Available in Blocks of 4GB Memory (RAM).

    • Supported block sizes – 1, 4, 8 or 16.

Storage

  • Storage

    • Defines general-purpose disk storage for PostgreSQL.

    • Used for Single-AZ/non-HA storage use cases.

    • Available in Blocks of 5GB Storage (Disk).

  • HA Storage

    • Defines general-purpose disk storage for PostgreSQL with bandwidth considerations for High Availability (Multi-AZ).

    • Used for Multi-AZ/HA storage use cases.

    • Available in Blocks of 5GB Storage (Disk).

Usage

Sizing Suggestion

Constructing a PostgreSQL, hyperscaler option instance requires a combination of Compute and Storage materials.

You cannot create an instance with only compute or storage materials.

Two types of instance configurations are supported.

  • Single node PostgreSQL instance created within an Availability Zone (AZ):

    Price of service = x + y

    • x blocks of compute will be charged – either Standard or Premium

    • y blocks of storage will be charged

  • 2 node PostgreSQL instance with Primary and Secondary nodes distributed between two Availability Zones (AZs):

    Price of service = 2x + z

    • 2x blocks of compute will be charged – either Standard or Premium (x for primary node + x for secondary node)

    • z blocks of HA storage will be charged

Examples:

  • For development and testing: 2 standard (4GB RAM) + 4 storage (20GB)

  • For medium scale production: 4 premium (2 premium * 2 AZ, 16GB RAM) + 45 HA-storage (225GB)

For more details: Sizing

Create PostgreSQL, Hyperscaler Option instance with Parameters

After making sure that you have sufficient quota assigned for your instance, you can create the instance with the CLI or the Cockpit.

cf marketplace -e postgresql-db
cf create-service SERVICE PLAN SERVICE_INSTANCE [-c PARAMETERS_AS_JSON]

For example:

cf create-service postgresql-db standard devtoberfest-database -c '{"memory": 2, "storage": 20, "engine_version": "13", "multi_az": false}'

This configuration consumes 1 standard compute block (2GB RAM) and 4 storage blocks (20GB).
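
As a further illustration, a multi-AZ instance could be requested along the lines of the sketch below. The plan name premium, the instance name prod-database, and the parameter values are assumptions for illustration only; check cf marketplace -e postgresql-db for the exact plan names in your landscape, and note that a multi-AZ instance is charged according to the 2x + z formula above.

cf create-service postgresql-db premium prod-database -c '{"memory": 8, "storage": 225, "engine_version": "13", "multi_az": true}'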

Retrieve the Instance Configuration Once the Service Instance Is Created

cf service <service-instance-name>   # note the service instance id
cf curl /v2/service_instances/<service-instance-id>/parameters

Sample output:

{
  "memory": 2,
  "storage": 20,
  "engine_version": "13",
  "multi_az": true,
  "locale": "en_US",
  "postgresql_extensions": [
    "ltree",
    "citext",
    "pg_stat_statements",
    "pgcrypto",
    "fuzzystrmatch",
    "hstore",
    "btree_gist",
    "btree_gin",
    "pg_trgm",
    "uuid-ossp"
  ]
}

Use the ‘PostgreSQL, hyperscaler option’ Extension APIs

  • /postgresql-db/instances/:id/monitoring-admin, PUT

  • /postgresql-db/org/:orgId/space/:spaceId/deleted-instances, GET

  • /postgresql-db/instances/:id/extensions, PUT/DELETE

For more details: Use the ‘PostgreSQL, hyperscaler option’ Extension APIs
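
For illustration, calling the deleted-instances endpoint from the command line could look like the sketch below. The API endpoint host, the bearer token, and the way the token is obtained are assumptions here; the linked documentation describes the actual authentication flow and request payloads.

curl -X GET "https://<api-endpoint>/postgresql-db/org/<org-id>/space/<space-id>/deleted-instances" \
  -H "Authorization: Bearer <access-token>"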

Export Data from PostgreSQL Service Instance

cf enable-ssh YOUR-HOST-APP                        # enable SSH on an app in the same space to use as a jump host (restart the app if SSH was disabled)
cf create-service-key MY-DB EXTERNAL-ACCESS-KEY    # create a service key for external access
cf service-key MY-DB EXTERNAL-ACCESS-KEY           # read hostname, port, dbname, username and password from the key
cf ssh -L 63306:<hostname>:<port> YOUR-HOST-APP    # open an SSH tunnel, forwarding local port 63306 to the database
psql -d <dbname> -U <username> -p 63306 -h localhost                                  # connect through the tunnel
pg_dump -p 63306 -U <username> -h localhost <dbname> > /c/dataexport/mydata.sql       # dump the whole database
psql -p 63306 -U <username> -h localhost -d <dbname> -c "COPY <tablename> TO stdout DELIMITER ',' CSV HEADER;" > /c/dataexport/<tablename>.csv   # export one table as CSV

For more details: Export Data from PostgreSQL Service Instance

Backup and Restore

For PostgreSQL database instances:

  • Full snapshot/backup of data is taken daily for standard and premium service plan instances.

  • DB transaction logs (WAL logs) are archived to Object Storage continuously to support Point-In-Time Recovery (PITR).

  • Backup retention period is 14 days.

Restore to a specified time:

cf create-service postgresql-db <service_plan> <service_instance_name> -c '{"source_instance_id": "<source_instance_id>", "restore_time": "<restore_time>"}'
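
The source_instance_id is the GUID of the instance you want to restore from; one way to look it up is with the cf CLI (the instance name below is a placeholder):

cf service <source-instance-name> --guid   # returns the GUID to use as source_instance_id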

For more details: Restore for PostgreSQL, Hyperscaler Option

Run and Deploy CAP with PostgreSQL

CAP supports PostgreSQL; for more details, please read CAP – Database Support. With CAP, the deployment can be done without any extra work (such as creating a User Defined Service or manually managing the database schema), simply by leveraging CAP, the additional PostgreSQL-related modules, and the MTA tools.

If you’d like to get a sample project on how to consume PostgreSQL in CAP using Java and Node.js, please stay tuned for the next blog.

Connect to PostgreSQL in Non-CAP Application

Without CAP, you have to connect to PostgreSQL and manage the database schema yourself, with additional coding effort. You can still do this in the same way as with the BYOA (Bring Your Own Account) approach: consume the PostgreSQL instance via an app binding and read the credentials from the app's runtime environment variables.
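
As a minimal sketch of that flow with the cf CLI (the app and instance names below are placeholders), bind the instance to your app and the credentials will then appear in its environment:

cf bind-service YOUR-APP <service-instance-name>   # bind the PostgreSQL instance to your app
cf restage YOUR-APP                                # restage so the binding is visible to the running app
cf env YOUR-APP                                    # credentials appear under VCAP_SERVICES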
