“Every company will become a technology company, and every company will become a data company” – Steve Brown, ‘The Innovation Ultimatum’
Please read Part 1 of this series first.
In the SAP world, almost every company intends to use S/4HANA as its digital core.
But to run the business end to end, many companies also need various other SAP and non-SAP systems/SaaS solutions on top of S/4HANA, like:
– PIM to manage all categories of products,
– WebShop to handle direct customer orders,
– EWM (advanced) to manage a warehouse,
– CRM to manage customer relations,
– PSP (payment service provider) for digital payments,
– SAP Ariba for spend management,
– SAP SuccessFactors for human capital management,
…and so on. These are just a few names; there could be many more possibilities.
So, whenever you want to build a custom or extension solution for SAP S/4HANA or the surrounding systems, it is better to build and deploy it on a centralized platform that is fully controlled by the company’s own IT, and which, of course, keeps your S/4HANA core clean.
This is where SAP BTP comes into the picture!
As many of you might already know, SAP BTP offers INTEGRATION, EXTENSION, and DATA-to-
VALUE solutions.
So, DATA is very important for any organization.
This is where DATA ENGINEERING comes into the picture!
Which technologies come under data engineering?
- Database management
- ETL/ELT processing
- Data Modelling
Basically, we collect, maintain, and prepare data for application development, data analysis, data science, and machine learning/deep learning solutions.
Let’s deep dive into the “how” part now!
SAP BTP offers a robust database solution: SAP HANA Cloud as a DBaaS (database as a service).
To know more about HANA, you can check out the high-level architecture below.
I’m not going to explain the architecture in this blog; maybe in one of the next blogs.
You can already see that there are lots of components inside the HANA platform.
HANA also has multi-model capability (relational, spatial, graph, JSON document store, and more). Check the pic below, and the small sketch that follows it:
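To make the multi-model idea concrete, here is a minimal sketch combining relational and spatial data in one plain SQL query. All table and column names are invented for illustration.

```sql
-- Relational + spatial in one table: no separate engine or product needed
CREATE COLUMN TABLE stores (
    store_id   INTEGER PRIMARY KEY,
    store_name NVARCHAR(100),
    location   ST_GEOMETRY(4326)   -- spatial column, WGS84 coordinates
);

INSERT INTO stores VALUES (1, 'Downtown', NEW ST_Point('POINT (8.64 49.29)', 4326));

-- Find stores within ~5 km of a given point
SELECT store_name
  FROM stores
 WHERE location.ST_Distance(NEW ST_Point('POINT (8.60 49.30)', 4326), 'meter') < 5000;
```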
Okay, after understanding the high-level components and features of the HANA engine, let’s focus again on the practical approach toward data engineering.
How do we get the various kinds of data from these different systems into HANA Cloud?
There is a very efficient solution called SDI (Smart Data Integration), which comes included with HANA rather than as separately licensed software. Lots of customers use it to get data efficiently into HANA Cloud and either persist or virtualize the data for various purposes.
Smart Data Integration is a very powerful tool, with capabilities like (see the SQL sketch after this list):
- Real-time as well as batch data movement from almost any kind of source into SAP HANA Cloud as the target
- A large set of SAP and non-SAP adapters
- Efficient data transformation before the data is consumed in your cloud-native applications
- Data virtualization at lower cost, which also keeps the data resident in the core system itself
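Here is a minimal, hedged sketch of the typical SDI pattern in plain HANA SQL. All names (remote source, agent, schemas, tables) are placeholders, and the adapter configuration is elided; this illustrates the flow, not a drop-in script.

```sql
-- 1) Register a remote source through a Data Provisioning adapter
--    (the adapter runs on a DP Agent installed near the source system)
CREATE REMOTE SOURCE "ERP_SOURCE" ADAPTER "ABAPAdapter"
  AT LOCATION AGENT "DP_AGENT_01"
  CONFIGURATION '...'                          -- connection details, elided
  WITH CREDENTIAL TYPE 'PASSWORD' USING '...'; -- credentials, elided

-- 2) Virtualize: data stays in the source system; HANA reads it on demand
CREATE VIRTUAL TABLE "STAGING"."V_MATERIALS"
  AT "ERP_SOURCE"."<NULL>"."SAPSRC"."MARA";

-- 3) Or persist a local snapshot when transformations need materialized data
CREATE COLUMN TABLE "STAGING"."MATERIALS_SNAPSHOT"
  AS (SELECT * FROM "STAGING"."V_MATERIALS");
```

For real-time replication on top of this, SDI adds remote subscriptions and flowgraphs, which I will leave out of scope here.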
By using SDI, customers can keep the core systems clean and create the required data models in the cloud itself.
Now, after getting the data efficiently into HANA Cloud and transforming it as per business requirements, it’s time to manage the data properly so that it fits into future designs, architectures, and processes.
How do we manage the data?
That varies with the business requirements. Most commonly, we manage data using proper container modeling (HDI containers), which also ensures data security and integrity. Technically, cross-container access control plays an important role here; a simplified sketch follows below.
Pictorial illustration as below (let me know if anyone needs more info on it):
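For the cross-container part, here is what the access control roughly boils down to at SQL level. Container and table names are made up, and in a real HDI project these grants are handled through design-time artifacts (.hdbgrants, .hdbsynonym) rather than raw SQL.

```sql
-- Each HDI container is a schema with a dedicated object owner (<schema>#OO).
-- For container B to build views on container A's data, A must grant access
-- to B's object owner, usually WITH GRANT OPTION so B can re-expose the data.
GRANT SELECT ON SCHEMA "CONTAINER_A" TO "CONTAINER_B#OO" WITH GRANT OPTION;

-- Inside container B, a synonym then makes A's table addressable locally
CREATE SYNONYM "CONTAINER_B"."MATERIALS" FOR "CONTAINER_A"."MATERIALS";
```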
Okay, so far we have seen:
- HANA architecture (it helps to manage the HANA DB efficiently)
- The ETL process with SDI
- Data management inside HANA Cloud using the containerisation technique
Now, data engineers should prepare the data for consumption by business applications, analytical dashboards, or data science and AI/ML solutions.
Here, as part of data engineering, we create CVs (calculation views), HANA DB procedures, and CAP CDS entities/views for different scenarios.
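As one small example of this preparation step, here is a minimal SQLScript procedure that shapes raw rows into an analytics-friendly result; schema, table, and column names are invented for illustration.

```sql
CREATE OR REPLACE PROCEDURE "STAGING"."P_SALES_SUMMARY" (
    IN  iv_year   INTEGER,
    OUT et_result TABLE (product_id NVARCHAR(10), total_amount DECIMAL(15,2))
)
LANGUAGE SQLSCRIPT
READS SQL DATA
AS
BEGIN
    -- Aggregate raw sales rows into a consumable shape; a calculation view
    -- or CAP CDS view could expose the same logic to applications
    et_result = SELECT product_id,
                       SUM(amount) AS total_amount
                  FROM "STAGING"."SALES"
                 WHERE YEAR(order_date) = :iv_year
                 GROUP BY product_id;
END;
```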
Don’t forget about the Data Lake.
We have the HANA data lake (HANA DL) mainly for two purposes (see the sketch after this list):
- Data Consolidation: various ELT (Extract, Load, Transform) scenarios, basically to collect all sorts of possible raw data from various data sources
- Data Pyramid: bring data down from the hot in-memory storage of HANA Cloud into the cold storage of the HANA DL, mainly for archiving purposes, which ensures a lower TCO
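To illustrate the data pyramid idea, here is a hedged sketch against the managed HANA Cloud data lake (relational engine), where SYSRDL#CG is the predefined data lake schema/remote source; the table names and the two-year cutoff are assumptions for this example.

```sql
-- 1) Create a cold-storage table inside the data lake
CALL "SYSRDL#CG"."REMOTE_EXECUTE"('
    CREATE TABLE SALES_ARCHIVE (
        order_id   INTEGER,
        order_date DATE,
        amount     DECIMAL(15,2)
    )
');

-- 2) Move aged rows from hot in-memory storage into the lake
--    (a virtual table for SALES_ARCHIVE appears in the SYSRDL#CG schema)
INSERT INTO "SYSRDL#CG"."SALES_ARCHIVE"
    SELECT order_id, order_date, amount
      FROM "STAGING"."SALES"
     WHERE order_date < ADD_YEARS(CURRENT_DATE, -2);

DELETE FROM "STAGING"."SALES"
 WHERE order_date < ADD_YEARS(CURRENT_DATE, -2);
```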
Let me stop here today; as promised, I will continue posting further parts of the keep-the-core-clean series.
I haven’t discussed other possibilities for data pipelining here; for example, you can also use SAP’s paid Data Services or Data Intelligence tools. Please let me know if anyone is interested in those topics as well.
Please let me know whether this post is informative, and whether I should dive deeper into the details.
And in case you want to connect with me directly, please check my bio.
Cheers!