In this blog I want to explain how we designed Archiving in C4C (available since 2011, see this Product Information) and how it works (not immediately after you activate it in the Business Configuration 🙂).

Also with an update for release 2208:
As we experienced heavy load in some customer systems, we optimized for performance.

Let’s start with the activation:

As already stated in the Product Information, you first need to choose in the Business Configuration which of the Business Objects you want to switch Archiving on for, together with some retention periods.
We will come later to these retention periods.

Now this has the following consequences:

    • Early each morning, two jobs are triggered for Archiving.
    • Each job works on all activated Business Objects consecutively.
    • The first job, Move2Archive, looks for potential candidates, lets them be verified by the respective application, moves the verified ones to the archive, and deletes them from the local database.
    • The second job, DeleteFromArchive, finally deletes the data from the archive.
    • These steps are applied to each Business Object individually: two jobs per enabled Business Object.

This means if you activate Archiving in the afternoon you have to wait until the next morning to see a potential result.
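To make the two-job cycle concrete, here is a minimal Python sketch. Everything in it (function names, record fields, the status check) is an assumption made for illustration; the real jobs are internal C4C framework jobs.

```python
from datetime import date, timedelta

def move_to_archive(records, archive, today, retention_before_archiving):
    """First daily job: collect candidates, verify, archive, delete locally.
    (Illustrative sketch only, not actual C4C code.)"""
    cutoff = today - timedelta(days=retention_before_archiving)
    candidates = [r for r in records if r["last_changed"] <= cutoff]
    # Stand-in for the per-Business-Object business checks:
    verified = [r for r in candidates if r["status"] in ("Completed", "Closed")]
    for r in verified:
        r["archived_on"] = today
        archive.append(r)          # move to the archive
        records.remove(r)          # delete from the local database
    return verified

def delete_from_archive(archive, today, retention_before_deletion):
    """Second daily job: final deletion from the archive."""
    cutoff = today - timedelta(days=retention_before_deletion)
    expired = [r for r in archive if r["archived_on"] <= cutoff]
    for r in expired:
        archive.remove(r)          # after this, the data is gone for good
    return expired
```

Note that the two retention periods drive the two different jobs: the first decides when a record becomes a candidate, the second how long it survives in the archive.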

But what are the results?

Now we need to discuss the Retention Periods (they are pretty well explained in the blog mentioned above, but let me repeat):

    • Retention Period Before Archiving: This defines the “age” of the record, i.e. how long it has not been changed (for the techies: the LastChangeDate 🙂), giving a good estimate that the record is no longer in use.
      Nevertheless, this is no guarantee that the record can be archived. Think of a long-running warranty: it will not be changed for years, but will still be in effect.

    • Retention Period Before Deletion: Normally defined by legal requirements to keep the data for verification purposes.
      This period starts when the data is deleted locally.

I’ve been asked about the maximum and minimum values for the retention periods.
The minimum is easy: 0. This means every record is a candidate for Archiving.
The maximum is harder: technically it is MAXINT (more than 2 billion days), but the calendar system would fault on such a big number.
Therefore we limit it to 50,000 days, which corresponds to roughly 137 years.
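These limits are easy to sanity-check with plain calendar arithmetic (a small sketch using the values quoted above; this is not C4C code):

```python
from datetime import date, timedelta

MAXINT = 2**31 - 1            # "more than 2 billion days"
MAX_RETENTION_DAYS = 50_000   # the enforced upper bound

# 50,000 days is roughly 137 years:
print(MAX_RETENTION_DAYS / 365.25)

# A MAXINT-day offset breaks any real calendar. Python's date type,
# for example, cannot represent such an offset either:
try:
    date(2022, 1, 1) + timedelta(days=MAXINT)
except OverflowError:
    print("calendar faults on MAXINT days")
```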

So, the Move2Archive job collects all “old” records and archives them. These candidates are handed over to business checks, individually defined and implemented by each Business Object. In most cases they check at least for status values like Completed, Closed, etc. Of course they can be even more sophisticated.
Update: for some Business Objects these business checks can be disabled.
They are explained in more detail in this separate blog.

The business check not only returns the verified instances, but also adds potential archiving-relevant dependent objects (RDOs), like pricing documents for quotes. These RDOs cannot exist on their own; they need the verified instance as a kind of anchor. Therefore we collect not only the data of the anchor object but also of the RDOs and place them together in the archive file. They are also deleted together, from the local tenant as well as from the archive.
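A sketch of this contract might look as follows, using quotes and their pricing documents as in the example above (all function names and fields are invented for illustration):

```python
# Hypothetical business check for quotes: return the instances that pass
# the checks, plus their archiving-relevant dependent objects (RDOs).
def business_check_quotes(candidates, pricing_documents):
    verified, rdos = [], []
    for quote in candidates:
        if quote["status"] not in ("Completed", "Closed"):
            continue                      # fails the business check
        verified.append(quote)
        # RDOs cannot exist on their own; they are anchored to the quote
        # and travel with it into the archive file.
        rdos.extend(doc for doc in pricing_documents
                    if doc["quote_id"] == quote["id"])
    return verified, rdos
```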

This first part of the job, “Collect and Verify”, runs for at most one hour. After that time (or earlier, if all candidates are verified) the verified instances are handed over to a second part, “Archive and Delete”. Experience shows that we can verify between 80,000 and 120,000 instances during this hour.
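The time-boxed first phase can be pictured like this (the one-hour budget comes from the text; the loop structure itself is an assumed simplification):

```python
import time

def collect_and_verify(candidates, verify, budget_seconds=3600):
    """Verify candidates until all are done or the time budget is used up."""
    deadline = time.monotonic() + budget_seconds
    verified = []
    for record in candidates:
        if time.monotonic() >= deadline:
            break                # hand over what was verified so far
        if verify(record):
            verified.append(record)
    return verified              # input for the "Archive and Delete" phase
```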

Now all data of a given verified instance, together with its RDOs, is collected, transformed into JSON, placed in a ZIP file, and sent to the archive. This also includes attachments.
BTW: the “archive” is currently an S3 bucket in the respective Data Center. This bucket is not directly accessible.
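The packaging step can be sketched with the standard library. The file layout inside the ZIP is an assumption for illustration; the real archive format is internal to C4C:

```python
import io
import json
import zipfile

def package_for_archive(instance, rdos, attachments):
    """Serialize an instance plus its RDOs to JSON inside an in-memory ZIP."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("instance.json", json.dumps(instance))
        zf.writestr("rdos.json", json.dumps(rdos))
        for name, data in attachments.items():   # attachments go in as-is
            zf.writestr(f"attachments/{name}", data)
    return buf.getvalue()        # bytes ready to be sent to the archive
```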

Next we update the Archiving status of the objects to Archived, so you know that the Archiving process has finished.
Finally the instances are deleted from the local tenant, as they now exist in the archive.
This speeds up the standard search and cleans up the result list.

This second part of the job runs until all handed-over instances are archived and deleted. Normally this is achieved before the next day. But if this second part is still running when the first part is scheduled again the next day, the new job stops immediately to let the job from the day before finish.
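The overlap rule boils down to a simple guard. The flag below is an assumption standing in for the internal job framework's state:

```python
def start_move_to_archive(job_state):
    """Refuse to start if yesterday's 'Archive and Delete' phase still runs."""
    if job_state.get("archive_and_delete_running"):
        return "stopped"         # let the previous day's job finish first
    job_state["archive_and_delete_running"] = True
    return "started"
```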

This means from now on you will find the archived data only in the Search Archive (Work Center Administrator –> View Archiving).
Some details:

    • Keep in mind that you need a search criterion (even an asterisk) to trigger the search.
    • You may also filter by object type to narrow down your result list.
    • By clicking on the link, the Quick View will open.
      From here you can navigate to the Thing Inspector.

This is also the place from which you can restore the data. The data is transformed back to the local database, the Archiving status is reset, and the data in the archive is deleted.
Furthermore, from here you can also download the archived data in JSON format to your local hard disk.

As the Restore action is executed on the values of the archived data, there are concerns that restored data might not fit the current data, due to realignment runs executed between the time the archiving collected the data and now.
Therefore this action is currently disabled on some UIs.
If you want this action to be enabled for you, please create an incident on this topic. You will need to confirm that you are aware of the above-mentioned risk.

The DeleteFromArchive job deletes the last remains of the record. After this job, no restore is possible. The data is gone.

Details

Based on the comments below and other questions, let’s have a look at the impact for you as customers:

    • Visibility of the Archiving status on the UI
      There are several possibilities, which may differ for each Business Object:

        • The field is added to the UI and visible:
          You’re fine.
        • The field is added to the UI but marked as “pers. hidden”:
          Here you need to switch to the Adaptation Mode to make it visible (maybe limited to a dedicated Business Role).
        • The field is not available:
          Please request this field from the respective application.

 

    • What is the (visible) effect of the retention periods / executed jobs?
        • Move2Archive
            • If the data fulfills the business checks it is deleted. So you can only find it via the Search Archive.
        • DeleteFromArchive
            • The data cannot be found anymore

 

    • Is there any information about the result of the jobs?
        • You can (re)use the following UI to detect which object types are deleted:
            • Work Center “Administrator”
            • Work Center View “General Settings”
            • Area “Data Management, View deleted data”

 

In the respective OWL you can restrict the search to the objects you have activated for Archiving, as well as by the date “Deleted On”.
Important: The column “Deleted By” must be empty, as Archiving uses a technical user which is not shown.

 

        • Any explanation of why an object has or has not been archived would depend on the business checks of each application. There is nothing available for that.
        • The “Changed On” date in the Search Archive result list is the only information which tells you how long an object has been in the archive. From this date and the Retention Period Before Deletion you can determine when the object will be deleted from the archive.
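That determination is straightforward date arithmetic, for example:

```python
from datetime import date, timedelta

def deletion_date(changed_on, retention_before_deletion_days):
    """Earliest date the object is removed from the archive:
    the "Changed On" date plus the Retention Period Before Deletion."""
    return changed_on + timedelta(days=retention_before_deletion_days)

# An object with "Changed On" 2022-01-01 and a 30-day Retention Period
# Before Deletion is purged on 2022-01-31:
print(deletion_date(date(2022, 1, 1), 30))
```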

If there are requirements from your side for Archiving (I am sure there are some 😉), please report them in the Idea Management, so we can collect and prioritize them.

Update
Some more information on the performance optimization (sorry, very technical):

    • We noticed that more Business Objects need to be verified as to whether they are archivable. Therefore we doubled the runtime of the Move2Archive job and tripled the number of processes used to send the data to the archive.

 

    • As all SAP owned validations are disabled for Archiving, we did the same for the PDI validations.

 

    • You may want to check inside your PDI solutions whether the value of the element “ArchivingStatusCode” at the respective Root nodes equals 2 (“ArchivingInProcess”) or 3 (“Archived”).
      In such a case you may want to:

        • Delete some own PDI objects
        • Inform other systems about the archiving (= deletion)
        • Stop further execution to prevent unnecessary updates
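As a sketch of that reaction: real PDI logic is written in ABSL inside SAP Cloud Applications Studio, so this Python version only mirrors the described decision, with all helper names invented for illustration. The code values 2 ("ArchivingInProcess") and 3 ("Archived") are from the text above.

```python
ARCHIVING_IN_PROCESS = "2"
ARCHIVED = "3"

def react_to_archiving(root_node, cleanup, notify):
    """Skip further processing once a Root node is (being) archived."""
    if root_node.get("ArchivingStatusCode") in (ARCHIVING_IN_PROCESS, ARCHIVED):
        cleanup(root_node)       # e.g. delete some own PDI objects
        notify(root_node)        # inform other systems about the deletion
        return False             # stop further execution, no more updates
    return True
```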
Sara Sampaio

Author Since: March 10, 2022