This blog is part of a series; you can find the first post here (https://blogs.sap.com/2023/02/02/sap-cpi-ci-cd-from-from-zero-to-hero/). This is the agenda we’re following:

Release management

If you work in software development, most likely you were involved in release management at some point. Release management is a broad, complex area that usually involves many teams. It ranges from high-level activities such as release scoping, planning and testing, to more technical topics such as tracking technical dependencies, transporting changes, managing urgent bug fixes and the technical go-live. It’s no accident that there is a dedicated role for this job.

In the past, as a consultant, I had the chance to set up and demo SAP ChaRM for two customers, but I never really worked with it as a user on a daily basis, so despite having a very good impression of the tool, I’m not the best person to give “real usage” feedback about it. At first I thought the tool was very closely coupled to the ERP world, but later, when configuring CTS+ so that we could move Web Dynpro Java developments together with ABAP, I changed my mind a bit. I mean, it’s still highly connected to the ERP world, but at least you have a bridge to the outside world that you can leverage for most developments done outside the ERP realm.

Nowadays there’s also CALM (good reads about it here https://blogs.sap.com/2022/06/12/cloud-application-lifecycle-managementcalm-pre-requisites-integration/ and here https://blogs.sap.com/2021/12/03/application-lifecycle-management-with-cloud-alm-pre-requisites-integration-project-management/), which is evolving at a very fast pace, but we’re only using it for health monitoring and for integration and exception monitoring (more on the operations side). Nevertheless, I know that on the implementation side there’s the “Releases and Timelines” option, which looks promising, but I don’t know the details (please comment below if you have any experience with it). For now, our company decided to go with Solution Manager, and I don’t see compelling arguments against it, but if you’re starting a big implementation project now, I would recommend looking at both CALM and Solman to make an informed decision.


Releases and timelines tile supporting implementation

We’re working on a big, company-wide IT project that is split into waves. Different waves have different sizes, but the last one I had to handle covered almost 100 interfaces in one go. IMHO, from a technical perspective, such large releases are not a good idea, since they make everything exponentially more complex; nevertheless, I trust this decision was taken with the full context in mind and considering the larger scheme of things. I also fully understand that from a management perspective it is easier to keep track of larger buckets with fewer milestones.

Having said that, many of these interfaces had been handed over between many different people, some of them no longer in the company, which added to the challenge of moving them from DEV->TEST->PREPROD->PROD and knowing exactly what to configure in each environment.

Release Scope

My first problem was to clearly identify exactly which interfaces all the parties had agreed to move to PROD. We had a high-level PowerPoint definition of the scope of the wave, but no direct match between Solman requirements and the respective cloud integration packages. Fortunately this was identified early, and for most of the interfaces I was involved in, I tried to use the Solman requirement ID in the cloud integration package description. That helped rule some out and others in, but it didn’t give a final, holistic view of exactly what to move.


Package description referencing a solman requirement

Next step: I talked with our Scrum Master, who manages our sprints in a very organized manner, so she had a very nice list of user stories per sprint, assigned to JIRA releases that represented the high-level defined waves (kudos to Marta Silva, Paulo Santos and Eric Gravil for having that so well defined). Still, there was no match between a JIRA user story and a cloud integration package, but I felt we were going in the right direction. I created a custom tag on the package for the JIRA user story reference, allowing many user stories to be supplied for the same package. This way we would know which user stories touched which packages and, most importantly, why.


JIRA reference inside a package on cloud integration

I’m not gonna lie, that was our first really major release and it was a bit bumpy. We had to talk to many of our architects and functional analysts to do a final cross-check on the scope of integration packages to move, and to fill out these custom tags for all of them so that they reflected the truth and were consistent with what we had in JIRA. The release process was bumpy, but fortunately the go-live itself was smooth, with minor issues and none of them release-related.

After the hypercare, we addressed this topic and, as a measure, we identified a new release manager role in the team; we’re now following JIRA releases as our single source of truth. A new integration was developed that creates a JIRA component for each CPI package. By using the JIRA reference tag above, we were able to associate user stories with CPI packages. Now, when we go to the JIRA releases view, we can see the list of all CPI packages in scope of that release.

JIRA Release with components
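
We built that integration in-house; purely for illustration, here is a minimal sketch of what the JIRA side of it could look like, using the JIRA REST API v2 component endpoint. The host, credentials and project key are invented placeholders, and error handling is reduced to a warning:

// Hypothetical sketch: create one JIRA component per CPI package so that
// user stories can reference it. Uses JIRA's REST API v2 component
// endpoint; host, credentials and project key are placeholders.
const JIRA = "https://your-domain.atlassian.net";
const JIRA_AUTH =
  "Basic " + Buffer.from("user@example.com:API_TOKEN").toString("base64");

async function ensureComponent(cpiPackageId: string): Promise<void> {
  const res = await fetch(`${JIRA}/rest/api/2/component`, {
    method: "POST",
    headers: { Authorization: JIRA_AUTH, "Content-Type": "application/json" },
    body: JSON.stringify({
      name: cpiPackageId, // the component is named after the CPI package
      project: "YOURPRJ", // placeholder JIRA project key
      description: "CPI integration package",
    }),
  });
  // JIRA rejects duplicate component names per project; a real
  // implementation would look the component up first and skip it.
  if (!res.ok) console.warn(`JIRA returned ${res.status} for ${cpiPackageId}`);
}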

Configurations

Coming back to that major release nightmare story: now that we had a list of around 100 packages to move, we “just” needed to go through all of them individually, navigate to the respective iflows and configure them with their respective values. Easy, right? Not quite; it was a very time-consuming activity. I found myself starting a big Excel file, collecting a few external parameters and their respective values by hand, until I gave up and decided to automate it. First, I created a CAP service that reads all the packages, all the iflows inside those packages and, finally, all external parameter keys and values for each iflow. In the end I got a file with 4000 lines, so I was glad I went with automation instead of doing it manually, only for it to become outdated fast. Later on, I migrated this CAP service to our on-premise CI/CD server, since there was no big benefit in having it running on BTP. I also added this file to our git, per environment, extracting it daily:

"Package";"Iflow";"ParameterKey";"ParameterValue";"DataType"
"FER_Common";"FER_Common_Notification";"VMEnvironment";"EnvSetting,Name,EnvSetting,EnvValue,Environment";"xsd:string"
"FER_Common";"FER_Common_Notification";"SMTP_Address";"yoursmtphost:yoursmtpport";"xsd:string"
"FER_Common";"FER_Common_Notification";"SMTP_OAuth2CredentialName";"CREDENTIAL_DUMMYVAL";"xsd:string"
"FER_Common";"FER_Common_Notification";"SMTP_ProxyType";"none";"xsd:string"
"FER_Common";"FER_Common_Notification";"SMTP_Authentication";"oauth";"xsd:string"
"FER_Common";"FER_Common_Notification";"SMTP_Protection";"starttls_mandatory";"xsd:string"
"FER_Common";"FER_Common_Notification";"SMTP_Timeout";"30000";"xsd:integer"
"FER_Common";"FER_Common_Notification";"SAP_ProductProfileId";"iflmap";"xsd:string"
"FER_Common";"FER_Common_Retry_SFTP";"Target_HandlingForExistingFiles";"Override";"xsd:string"
"FER_Common";"FER_Common_Retry_SFTP";"Target_PreventDirectoryTransversal";"1";"xsd:boolean"
"FER_Common";"FER_Common_Retry_SFTP";"Target_UseTemporaryFile";"0";"xsd:boolean"
"FER_Common";"FER_Common_Retry_SFTP";"Target_FlattenFileNames";"0";"xsd:boolean"
"FER_Common";"FER_Common_Retry_SFTP";"Target_ReconnectDelay";"1000";"xsd:integer"
"FER_Common";"FER_Common_Retry_SFTP";"Target_ChangeDirectoriesStepwise";"1";"xsd:boolean"
"FER_Common";"FER_Common_Retry_SFTP";"DataStoreName";"FER_TransactionsDS";"xsd:string"
"FER_Common";"FER_Common_Retry_SFTP";"Target_AutomaticallyDisconnect";"1";"xsd:boolean"
"FER_Common";"FER_Common_Retry_SFTP";"Target_CreateDirectories";"0";"xsd:boolean"
"FER_Common";"FER_Common_Retry_SFTP";"Target_Timeout";"10000";"xsd:integer"
"FER_Common";"FER_Common_Retry_SFTP";"Target_MaximumReconnectAttempts";"0";"xsd:integer"
"FER_Common";"FER_Common_Retry_SFTP";"SAP_ProductProfileId";"iflmap";"xsd:string"
"FER_Common_ErrorHandling";"FER_Common_ErrorHandler";"ExpirationPeriod";"180";"xsd:string"
"FER_Common_ErrorHandling";"FER_Common_ErrorHandler";"RetentionThresholdAlerting";"90";"xsd:string"
"FER_Common_ErrorHandling";"FER_Common_ErrorHandler";"OverwriteExistingMessage";"true";"xsd:boolean"

Having this list, I was able to review all interfaces with the respective architects/analysts in one go, using this single file to decide on the external parameter values to use for each environment. Now the next question: how to apply them en masse?

I enhanced the service that gets the list of parameters so that it also accepts a POST in the same format, starting with the token TODO followed by input like the above. The service configures the iflows with the supplied values and then returns TODO followed by the list of properties it was unable to update; then, on a new line, the token DONE followed by the list of properties that were updated. By checking the result, I could figure out whether all parameters were applied successfully.

TODO
"FER_DummyPackage1";"FER_DummyIflow1";"Directory";"/Debug";"xsd:string"
"FER_DummyPackage1";"FER_DummyIflow2";"Directory";"/Debug";"xsd:string"
"FER_DummyPackage2";"FER_DummyIflow3";"Directory";"/Debug";"xsd:string"

Later I also added TODO_DEPLOY as a possible starting token variation: on top of configuring all the iflows, it instructs the service to also deploy the changed iflows at the end.
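
Again for illustration only, here is a sketch of the “apply” side, reusing the TENANT and AUTH placeholders from the extraction sketch. The endpoint shapes follow the public Cloud Integration OData API; note that write calls need an X-CSRF-Token (and, in a real client, the session cookie returned together with it):

// Sketch: set one external parameter and, for TODO_DEPLOY, deploy the
// iflow afterwards. The caller sorts each input line into DONE or TODO.
async function csrfToken(): Promise<string> {
  const res = await fetch(`${TENANT}/api/v1/`, {
    headers: { Authorization: AUTH, "X-CSRF-Token": "Fetch" },
  });
  return res.headers.get("x-csrf-token") ?? "";
}

async function setParameter(
  iflowId: string, key: string, value: string, dataType: string,
): Promise<boolean> {
  const res = await fetch(
    `${TENANT}/api/v1/IntegrationDesigntimeArtifacts(Id='${iflowId}',Version='active')` +
      `/$links/Configurations('${encodeURIComponent(key)}')`,
    {
      method: "PUT",
      headers: {
        Authorization: AUTH,
        "X-CSRF-Token": await csrfToken(),
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ ParameterValue: value, DataType: dataType }),
    },
  );
  return res.ok; // false -> this property goes back into the TODO list
}

async function deployIflow(iflowId: string): Promise<void> {
  // Only invoked when the input started with TODO_DEPLOY.
  await fetch(
    `${TENANT}/api/v1/DeployIntegrationDesigntimeArtifact?Id='${iflowId}'&Version='active'`,
    { method: "POST", headers: { Authorization: AUTH, "X-CSRF-Token": await csrfToken() } },
  );
}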

This CSV file brings real added value. Now we have:

  • A user-friendly UI containing the list of interfaces we have, how they connect to each other and the scheduling being used. This was already shared with our analysts (kudos to Fred Hautecoeur for such a great job with this UI tool)
  • Mass application of external parameter changes (as described above)
  • Auditable parameter changes (we know, on a daily basis, what was changed, and we keep a history in git of those parameter changes for all our tenants)
  • An option to think about rollback transports (I’ll talk about it later)

Transports

We’ve talked about scope and configuration; now the transports themselves. The first question was whether to use CTS+ (if you decide to proceed with CTS+, this awesome blog is a must: https://blogs.sap.com/2018/04/10/content-transport-using-cts-cloud-integration-part-1/), leveraging ChaRM and the whole release lifecycle on the ERP side, or to go with a more relaxed tool (CTMS), managed by our team with no connection to the releases on the ERP side.

I discussed it with my former manager and we decided to go with CTMS. The cost was low and it seemed a good fit for our basic needs. If I remember right, I followed these blogs (https://blogs.sap.com/2021/08/09/sap-cloud-transport-management-available-in-sap-btp-free-tier/ and https://blogs.sap.com/2021/10/15/setup-sap-btp-cloud-transport-management-servicecloud-foundry-for-cloud-platform-integrationcpi/). If you have ever configured STMS, CTS+ or NWDI runtime systems in the past, this is a very straightforward setup, reusing many of the concepts you already know.


CTMS UI

It’s worth noting that CTMS has a retention time and a quota, so it can happen that when you try to forward your request to the next system, the mtar is no longer there because it was deleted in the meantime (this happened to us). You can still create a new transport request and add your mtar file to it, but for that you need to have kept the binary file. Therefore I strongly advise you to also back up the mtar somewhere else, as we do.


Retention time

Now, we wanted to transport multiple packages at once and also validate the transport against the associated ServiceNow change request, so we built a pipeline for transporting.


Transport flow

The idea is to uncomment the packages you want to transport by removing the “#” (example below).
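
As a hypothetical illustration (using package IDs from the examples above), the input file could look like this, with only the second package selected for transport:

#FER_DummyPackage1
FER_Common_ErrorHandling
#FER_DummyPackage2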


Transporting multiple packages validating with SNOW change request

We’ve communicated to the team that all transports should follow this process. When triggered, the pipeline:

  • Executes CPI Lint to make sure the code follows our guidelines
  • Checks whether the last Jenkins execution for the CPI package(s) you want to transport was successful, so that we know your unit tests were also executed successfully
  • Creates an mtar file for each package you want to move (step 1 & 2 above)
  • Commits that mtar to git (CTMS has a retention period of 30 days for mtar files, as mentioned above) (step 2 above)
  • Creates a git tag on our binaries git repo containing all the packages associated with that tag
  • Automatically creates a transport request on CTMS following the naming convention <packageid> – <description supplied on the pipeline> – <timestamp> (step 3 above)
  • For the packages you want to transport, backs up the binaries of the packages from the target system (step 4 above; sketched below)
  • For the packages you want to transport, backs up the external parameters from the target system (step 5 above)
  • Transports the changes from the DEV to the TEST system (step 6 above)

Step 7 is done outside of the transport pipeline (it is used only in the rollback pipeline that will be introduced later). We’re thinking about adding config files inside the iflows and applying them automatically, but for now this is done manually.
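
For steps 4 and 5, here is a minimal sketch of what the backup could look like, reusing TENANT, AUTH and the odata() helper from the extraction sketch above, but pointed at the target tenant. The package-zip download endpoint is from the public OData API; the file layout is just our own convention:

// Sketch of the backup steps: before overwriting a package on the
// target tenant, save its current zip and its external parameters.
import { writeFile } from "node:fs/promises";

async function backupPackage(pkgId: string, outDir: string): Promise<void> {
  // Binary backup: the OData API can serve a whole package as a zip file.
  const res = await fetch(
    `${TENANT}/api/v1/IntegrationPackages('${pkgId}')/$value`,
    { headers: { Authorization: AUTH } },
  );
  await writeFile(`${outDir}/${pkgId}.zip`, Buffer.from(await res.arrayBuffer()));

  // Parameter backup: the same Configurations walk as in the extraction sketch.
  const lines = [`"Package";"Iflow";"ParameterKey";"ParameterValue";"DataType"`];
  const iflows = await odata(
    `IntegrationPackages('${pkgId}')/IntegrationDesigntimeArtifacts`);
  for (const iflow of iflows) {
    const params = await odata(
      `IntegrationDesigntimeArtifacts(Id='${iflow.Id}',Version='${iflow.Version}')/Configurations`);
    for (const p of params)
      lines.push(
        `"${pkgId}";"${iflow.Id}";"${p.ParameterKey}";"${p.ParameterValue}";"${p.DataType}"`);
  }
  await writeFile(`${outDir}/${pkgId}-parameters.csv`, lines.join("\n"));
}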


Example of a transport request

To move from TEST to PREPROD and then to PROD, we also created a custom pipeline that asserts the SNOW change request is in the right status, blocking any transports if it isn’t.


Forward transports

When executed, this pipeline checks the requests for the packages you want to move, orders them by date, and makes sure to forward only the latest one for each selected package. It also backs up the binaries and the external parameters of the target environment for each transport package you want to move.

Finally, we make use of the backups created before transport to allow the developer to roll back a previously moved release (a group of packages moved together). Since we have backups of the previous external parameter values as well as of the binaries, we can generate a new mtar and re-transport the backed-up changes. If that succeeds, we apply the external parameters CSV file using the mass configuration apply mechanism described above.


Rollback transport by reimporting a backup

Next steps

  • Automate the release by looking into the JIRA release, taking all the packages identified there and moving them together with their dependencies
  • Maybe try CTS+ with Integration Suite if time permits

Summary

In this topic, we introduced the pain of managing big releases and how we tried to minimize some of the complex challenges we faced when moving our interfaces across environments. We discussed how you can extract your iflow parameters and how we run transports.

I invite you to share feedback or thoughts in the comments section. I’m sure there are still improvements or new ideas that would benefit the whole community. You can always get more information about cloud integration on the topic page for the product.
