This is the 5th blog post in the series Centralised Transport Naming Standards.

See also blog posts:

Centralised Transport Naming Standards

Centralised Transport Naming Standards – SCP ABAP Environment Migration

Centralised Transport Naming Standards – Branch By Abstraction

Centralised Transport Naming Standards – Service Now Integration

Context

I mentioned at the end of my last blog that I was thinking of redeveloping the current solution using AWS functionality. This has now happened (we recently had a technical go-live), and we will now start the process of migrating our systems onto the new solution.

Goal

The vision was to use serverless techniques, as there was no actual requirement for a server to be involved in this development scenario at all. This would also have the sustainability benefit of helping us to shut down the server the solution was currently hosted on, as it is one of two solutions that require that server to be up 24/7. The other solution is the Central ATC, and the current plan is to migrate this onto a BTP ABAP instance, subject to this being possible (starting hopefully in early 2023).

For the AWS Serverless Application Model (SAM) approach, the following primary AWS services were employed:

  1. DynamoDB
  2. Lambda Functions
  3. API Gateway

Other supplementary services such as CloudWatch, CloudFormation and SNS were also employed.
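
To give a feel for how these fit together, a minimal SAM template wiring up the three primary services might look like the sketch below. This is illustrative only: the resource names, code path and runtime version are invented for the example, not taken from our actual template.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  # Lambda function fronted by an (implicit) API Gateway REST API
  TnsCheckFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.9
      CodeUri: tns_check/
      Events:
        TnsApi:
          Type: Api
          Properties:
            Path: /tns
            Method: get

  # DynamoDB table holding the prefix configuration
  TnsPrefixTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: system_id
        Type: String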

Also, as we wanted to create a solution that was easy to update ourselves, but also with a view to handing it over, we built a simple pipeline in Azure DevOps.

The development language chosen was Python, simply because it is a popular language in widespread use. Node.js was also a consideration, as it is used in CAP, but since I’d never developed in Python before, I thought it would be a good learning experience for both myself and my colleague T. KASI SURYANARAYANA MURTHY, who initially assisted me and later took over as the main developer.

The solution architecture is shown below.


Desktop Development Setup

Prerequisites

  1. WSL2 – Windows Subsystem for Linux
  2. Docker Desktop (license may be required)
  3. Visual Studio Code
  4. AWS SAM CLI

Local development was achieved using an AWS Serverless Application Model (SAM) setup: VS Code as the IDE of choice, Azure DevOps as the Git repository, and Docker to house both a local version of DynamoDB and the Lambda function runtime when debugging.
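
To make the moving parts concrete, here is a minimal sketch of the kind of handler being debugged locally. It is illustrative only: the table name, environment variables and prefix-check logic are invented for the example, not lifted from our production code. The endpoint_url override is what lets the function talk to the DynamoDB Local container instead of the real AWS endpoint.

import json
import os

import boto3

# When invoked locally through SAM, DYNAMODB_ENDPOINT points at the
# DynamoDB Local container on the shared Docker network; when unset,
# boto3 falls back to the real AWS endpoint.
_kwargs = {}
if os.environ.get("DYNAMODB_ENDPOINT"):  # e.g. http://dynamodb:8000
    _kwargs["endpoint_url"] = os.environ["DYNAMODB_ENDPOINT"]
_table = boto3.resource("dynamodb", **_kwargs).Table(
    os.environ.get("PREFIX_TABLE", "tns_prefixes"))


def lambda_handler(event, context):
    # API Gateway passes the transport details as query string parameters
    params = event.get("queryStringParameters") or {}
    sysid = params.get("sysid", "")
    text = params.get("text", "")

    log = []
    item = _table.get_item(Key={"system_id": sysid}).get("Item")
    if item and not text.startswith(item["prefix"]):
        log.append(f"Transport text must start with prefix {item['prefix']}")

    return {"statusCode": 200, "body": json.dumps({"log": log})}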


This setup took a while to complete, especially when I wanted to debug and ensure that the container generated by VS Code for the Lambda function was able to communicate with the DynamoDB container. In short, this came down to ensuring the same Docker network was used via the “sam” entries in the launch.json file (a sketch of the relevant entries follows the procedure below).

Procedure

  1. Create the Docker network before you specify it in the file.
  2. Add a launch configuration that specifies the Docker network to use, as well as setting containerBuild : true (sketched below). This ensures the Lambda function is built inside a Docker container, which is then attached to the specified network.
  3. Include the event payload, i.e. tns.json, that contains the queryStringParameters to simulate the payload from API Gateway.
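
The relevant launch.json entries look roughly like the sketch below; the network name “sam-local”, the logical ID and the file paths are examples rather than our exact values. The network itself is created up front with “docker network create sam-local”, and DynamoDB Local can then be started on it with “docker run -d --network sam-local --name dynamodb -p 8000:8000 amazon/dynamodb-local”. The tns.json payload is just an API Gateway proxy event, e.g. { "queryStringParameters": { "sysid": "DEV", "text": "PRJX-123 My change" } }.

{
  "configurations": [
    {
      "name": "TNS Lambda (local)",
      "type": "aws-sam",
      "request": "direct-invoke",
      "invokeTarget": {
        "target": "template",
        "templatePath": "${workspaceFolder}/template.yaml",
        "logicalId": "TnsCheckFunction"
      },
      "lambda": {
        "payload": {
          "path": "${workspaceFolder}/events/tns.json"
        },
        "environmentVariables": {
          "DYNAMODB_ENDPOINT": "http://dynamodb:8000"
        }
      },
      "sam": {
        "containerBuild": true,
        "dockerNetwork": "sam-local"
      }
    }
  ]
}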


Config Table Maintenance

Since DynamoDB was now to be the DB of choice, we needed a way for end users (mainly team leads) to update it with new prefixes and to remove old ones. DynamoDB is not the best in terms of UX, and therefore, rather than being based in AWS, this part of the solution is actually housed within an internally developed web solution. This had recently been redeveloped in Python (by coincidence rather than by design) using the Django web framework, which made it easier to control and to provide access for those tasked with updating the prefixes for their allocated systems.

For the initial load of the master data we used the admin functionality of Django. (Note the extra ‘s’ that Django appends to model names by default; we need to rename it!)

Admin option for master data upload
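
For the curious, the pattern here is standard Django: one model per config table, registered with the admin site. A hedged sketch with invented names follows; it also shows the fix for the unwanted ‘s’, via verbose_name_plural.

# models.py
from django.db import models

class TransportPrefix(models.Model):
    system_id = models.CharField(max_length=10)
    prefix = models.CharField(max_length=20)

    class Meta:
        # Django pluralises by appending 's' ("transport prefixs" by default)
        verbose_name_plural = "transport prefixes"

    def __str__(self):
        return f"{self.system_id}: {self.prefix}"

# admin.py
from django.contrib import admin
from .models import TransportPrefix

admin.site.register(TransportPrefix)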

For the ongoing maintenance i.e. the CRUD operations, specific screens were created for each table accessed via an overview menu.

Table Maintenance Menu

Prefix table Create operation

This is all very well, but the website uses a local MySQL database and not the AWS DynamoDB one that we require. This posed the challenge of how to replicate the CRUD operations from the website solution’s MySQL database to our AWS-based DynamoDB database and keep the two in sync. This was solved relatively easily, however, via a third API that was constructed and called from the three tailored screens mentioned above. This just uses the standard DynamoDB CRUD operations from boto3; a sketch is shown below.
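
A minimal sketch of those boto3 operations, with a hypothetical table and key name:

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("tns_prefixes")  # hypothetical table name

def upsert_prefix(item):
    # put_item overwrites any existing item with the same key,
    # so one call covers both Create and Update from the website
    table.put_item(Item=item)

def read_prefix(system_id):
    return table.get_item(Key={"system_id": system_id}).get("Item")

def delete_prefix(system_id):
    table.delete_item(Key={"system_id": system_id})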


Service Now Integration

Naturally, we did not want to lose any functionality by moving to AWS and wanted it to be entirely seamless from the end users’ (transport releasers’) perspective. Therefore, it was key to enable the Service Now integration. Using the learnings from the ABAP OAuth implementation documented in the previous blog, this was relatively straightforward, and we of course offloaded this to a separate API/Lambda call.

Service Now Integration
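
For illustration, the shape of that Lambda is roughly as follows. The instance URL, credential environment variables and table query are placeholders, not our actual integration; it simply shows the OAuth client-credentials flow against ServiceNow followed by a Table API call.

import os
import requests

SNOW = "https://yourinstance.service-now.com"  # placeholder instance

def get_token():
    # OAuth 2.0 client-credentials flow against the ServiceNow token endpoint
    resp = requests.post(
        f"{SNOW}/oauth_token.do",
        data={
            "grant_type": "client_credentials",
            "client_id": os.environ["SNOW_CLIENT_ID"],
            "client_secret": os.environ["SNOW_CLIENT_SECRET"],
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def lookup_change(change_number):
    # Query the change_request table for the given change document
    resp = requests.get(
        f"{SNOW}/api/now/table/change_request",
        params={"number": change_number, "sysparm_limit": 1},
        headers={"Authorization": f"Bearer {get_token()}"},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json()["result"]
    return results[0] if results else None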


Azure DevOps Pipeline

The pipeline uses an internally developed AWS Token Broker task to obtain credentials, and Checkmarx for static code analysis. Once the code checks have passed and credentials have been obtained, an S3 bucket is created to deploy the code to, and then sam build and sam deploy are called, which in turn invoke AWS CloudFormation to create/update the main API and Lambda function.

Main API, Lambda Function and CloudFormation

The other two APIs and the DynamoDB database are static, and hence they are not required to be part of the pipeline.
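
Stripped of the internal Token Broker and Checkmarx tasks, the build-and-deploy stage of such a pipeline boils down to a couple of SAM CLI calls. A sketch, with the stack and bucket names invented for illustration:

steps:
  # internal AWS Token Broker and Checkmarx tasks run before this step
  - script: |
      aws s3 mb s3://tns-deploy-artifacts || true
      sam build --use-container
      sam deploy --stack-name tns-serverless \
                 --s3-bucket tns-deploy-artifacts \
                 --capabilities CAPABILITY_IAM \
                 --no-confirm-changeset --no-fail-on-empty-changeset
    displayName: SAM build and deploy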


ChaRM BAdI Call

To enable a safe switchover between the ABAP solution and the AWS solution, we use a configuration table with a list of systems that are currently validated for TNS via ChaRM. The code checks for the existence of the system in the config table and, if present, calls the ABAP solution; if it is missing, the AWS solution is called. In this manner, we can steadily roll out the AWS solution by ‘draining’ the table until it is empty. Once complete, we will remove the ABAP call and the config table entirely.

   " Is this system still routed to the central ABAP (XXX) solution?
   select single system_id into ls_sysid
     from ZCAU_XXX_TNS
     where system_id = sysid.
   if sy-subrc = 0.
     " System found in the config table: call the ABAP solution
     ls_called_app = 'XXX'.
     call function 'Z_CAU_TRANSPORT_CHECKS' destination 'XXXCLNT100'
       exporting
         sysid                 = sysid
         request               = request
         type                  = type
         owner                 = owner
         text                  = text
         attributes            = lt_attributes
       importing
         log                   = log
       exceptions
         system_failure        = 1
         communication_failure = 2
         others                = 3.
     if sy-subrc <> 0.
       append 'XXX System not available. Central transport naming checks not possible' to log.
     endif.
   else.
     " System already migrated: call the AWS solution
     ls_called_app = 'AWS'.
     ls_text = text+0(60).  "truncate text to 60 characters for the AWS call
     call function 'Z_CAU_TNS_AWS_CHECKS'
       exporting
         sysid                 = sysid
         request               = request
         type1                 = type
         owner                 = owner
         text                  = ls_text
       importing
         log                   = log
       exceptions
         system_failure        = 1
         communication_failure = 2
         others                = 3.
     if sy-subrc <> 0.
       if sy-subrc = 1 or sy-subrc = 2.  "Fall back on XXX if AWS is not reachable
         ls_called_app = 'XXX'.
         call function 'Z_CAU_TRANSPORT_CHECKS' destination 'XXXCLNT100'
           exporting
             sysid                 = sysid
             request               = request
             type                  = type
             owner                 = owner
             text                  = text
             attributes            = lt_attributes
           importing
             log                   = log
           exceptions
             system_failure        = 1
             communication_failure = 2
             others                = 3.
         if sy-subrc <> 0.
           append 'XXX System not available. Central transport naming checks not possible' to log.
         endif.
       else.
         append 'TNS Checks in AWS Lambda Failed. Central TNS checks not possible' to log.
       endif.
     endif.
   endif.


Errors and Monitoring

In order to track issues and be proactively alerted to problems, we added CloudWatch to the three APIs and set several alarms to send SNS notifications to us if, for example, a DB update errored.

Monitoring
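
In CloudFormation terms, such an alarm looks roughly like the sketch below, which would slot into the Resources section of the template. The function and topic names are invented; it notifies an SNS topic whenever the Lambda’s Errors metric is non-zero over a five-minute window.

  DbUpdateErrorAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmDescription: Notify the team when the sync Lambda reports errors
      Namespace: AWS/Lambda
      MetricName: Errors
      Dimensions:
        - Name: FunctionName
          Value: !Ref TnsSyncFunction
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 1
      ComparisonOperator: GreaterThanOrEqualToThreshold
      AlarmActions:
        - !Ref TnsAlertTopic   # SNS topic subscribed by the team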

Next Steps

We’ll continue to improve the solution in AWS in future, perhaps by removing API Gateway, as Lambda functions can now be called directly (via function URLs). Either that, or we’ll use Docker containers to house the Lambda functions.

But for a start, we’ll roll out the new solution, stabilise it and then see where the mood takes us. It’s been a great exercise in development over the last couple of years, enabling us to explore various previously ‘untapped’ areas, and a great excuse to learn new technologies to keep us challenged and to keep our skills and certifications up to date 🙂
