Today I am going to discuss what you can do in terms of optimization in SAP HANA.

A. Performance tuning at database design level:

1. Check all tables with a high number of records and determine their growth rate. You can use a SELECT on the statistics table GLOBAL_TABLE_PERSISTENCE_STATISTICS to check this; only tables where DISK_SIZE > 10,000,000 bytes are listed in it.

Note: For reference, see SAP Note 1969700 (HANA_Tables_TopGrowingTables_Records_History).
If the record count approaches the 2-billion-row limit per table (or per partition), plan for partitioning.

SELECT SCHEMA_NAME, TABLE_NAME, DISK_SIZE
FROM GLOBAL_TABLE_PERSISTENCE_STATISTICS
WHERE DISK_SIZE > 10000000
ORDER BY DISK_SIZE DESC;
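
A complementary check, sketched below, looks at record counts rather than disk size via the M_CS_TABLES monitoring view; the 1.5-billion threshold is only an illustrative early-warning value, not a fixed rule:

*------------------------------------------------------------------

-- Column tables (per partition) approaching the 2-billion-row limit
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT
FROM M_CS_TABLES
WHERE RECORD_COUNT > 1500000000
ORDER BY RECORD_COUNT DESC;

*------------------------------------------------------------------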

 

2. A ROUNDROBIN-RANGE multilevel partitioning is preferable, with 3-4 partitions per node. A partition should not exceed a size of 20 to 50 GB. On a scale-out appliance, you can multiply this by the number of active nodes over which you distribute the tables.

*------------------------------------------------------------------
ROUNDROBIN PARTITIONS is preferable
*------------------------------------------------------------------

Creating a table with round-robin-range partitioning using SQL:

CREATE COLUMN TABLE MY_TABLE (a INT, b INT, c INT)
	PARTITION BY
		ROUNDROBIN PARTITIONS 4,
		RANGE (c)
			(PARTITION 1 <= VALUES < 5,
			 PARTITION 5 <= VALUES < 20);

*------------------------------------------------------------------

 

3. Design the primary key very carefully: if a composite key is not useful, do not use one, and avoid using the HANA database to enforce referential integrity between tables.

4. Keep the number of columns per table under control; we keep our tables to 40-50 columns, and with wider tables we saw up to a 25% degradation in throughput.

5. To keep control of the delta merge, we keep AUTOMERGE enabled on our tables, but at data loading time we disable auto merge and enable smart merge for large-volume tables.

Auto Merge: HANA keeps a compressed “main” store of column tables and an uncompressed “delta” store for new items. Periodically, a process called mergedog combines the two in a DELTA MERGE process. Sometimes, however, your tables may not be configured to automerge (you can issue ALTER TABLE <table_name> ENABLE AUTOMERGE), and occasionally the merge fails and you have to restart HANA. This can cause the delta store to become very large, and because it is uncompressed, this is a big issue if you don’t have much RAM.

**---------------------------------------------------------------

-- Switch off auto merge before the bulk load
ALTER TABLE SCHEMA.TABLE DISABLE AUTOMERGE;

-- Bulk load via a control file
IMPORT FROM '/sapmnt/log/test.ctl' WITH THREADS 120 BATCH 200000;

-- Merge the delta store manually once the load is done
MERGE DELTA OF SCHEMA.TABLE;

-- Switch auto merge back on
ALTER TABLE SCHEMA.TABLE ENABLE AUTOMERGE;

**---------------------------------------------------------------

 

Smart Merge : 

For example, if an application starts loading relatively large data volumes, a delta merge during the load may have a negative impact both on the load performance and on other system users. Therefore, the application can disable the auto merge for those tables being loaded and send a “hint” to the database to do a merge once the load has completed.

Smart merge is active if the smart_merge_enabled parameter in the mergedog section of the indexserver.ini file is set to yes.
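
The hint itself is a MERGE DELTA statement with the SMART_MERGE parameter; the schema and table name below are placeholders:

*-----------------------------------------------------------------

-- Ask HANA to evaluate its smart merge criteria for this table now;
-- the merge only runs if the criteria are actually met.
MERGE DELTA OF MY_SCHEMA.MY_TABLE WITH PARAMETERS ('SMART_MERGE' = 'ON');

*-----------------------------------------------------------------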

Create a procedure and schedule it; for large tables, execute the procedure before the initial data load.

 

*-----------------------------------------------------------------

-- SQLScript cannot parameterize DDL directly, so the statements are
-- built dynamically with EXEC. The smart merge request itself is sent
-- via the MERGE DELTA ... SMART_MERGE hint shown above.
CREATE PROCEDURE Disable_AutoMerge (IN table_name NVARCHAR(256))

LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN

EXEC 'ALTER TABLE ' || :table_name || ' DISABLE AUTOMERGE';

END;

After initial load :

CREATE PROCEDURE Enable_AutoMerge (IN table_name NVARCHAR(256))

LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN

-- Send the smart merge hint for the freshly loaded table ...
EXEC 'MERGE DELTA OF ' || :table_name || ' WITH PARAMETERS (''SMART_MERGE'' = ''ON'')';
-- ... then hand periodic merging back to mergedog
EXEC 'ALTER TABLE ' || :table_name || ' ENABLE AUTOMERGE';

END;

*-----------------------------------------------------------------

 

 

6. Create a procedure to track the delta merge history by selecting from the monitoring view M_DELTA_MERGE_STATISTICS.

CREATE PROCEDURE CHECK_DELTAMERGE (IN table_name NVARCHAR(256))

LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
READS SQL DATA AS
BEGIN
-- Merges triggered by the smart merge decision function
SELECT * FROM M_DELTA_MERGE_STATISTICS WHERE table_name = :table_name AND motivation = 'SMART';
-- Merge hints sent by applications
SELECT * FROM M_DELTA_MERGE_STATISTICS WHERE type = 'HINT' AND table_name = :table_name;
-- Historized merge statistics collected by the statistics service
SELECT * FROM _SYS_STATISTICS.HOST_DELTA_MERGE_STATISTICS WHERE table_name = :table_name;

END;

7. If you can, avoid executing HANA database operations that involve referential integrity; they are expensive.
8. Avoid using VARCHARs if you don’t need them; use CHAR instead, and NVARCHAR and INTEGER fields are even better. VARCHARs do not compress well.
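
As a sketch (the table and column names below are hypothetical), this is the kind of type choice the point is driving at:

*------------------------------------------------------------------

-- Hypothetical example: prefer narrow, fixed-width or integer types
CREATE COLUMN TABLE ORDERS_DEMO (
    ORDER_ID INTEGER,      -- integer key: compact and fast to compare
    COUNTRY  NVARCHAR(2),  -- short code instead of a wide VARCHAR
    STATUS   TINYINT       -- status encoded as a small integer
);

*------------------------------------------------------------------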

B. Design-time performance tuning at the SQL level:

1. Design the SQL query to enable proper pruning, i.e. put the WHERE condition on the field used in the range partitioning so that partition pruning can take place; see the example below.
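
For example, assuming the range-partitioned MY_TABLE from section A (partitioned by RANGE on column c), a filter on c lets HANA skip partitions whose value ranges cannot match:

*------------------------------------------------------------------

-- Only the partition covering 5 <= c < 20 needs to be scanned
SELECT a, b FROM MY_TABLE WHERE c >= 5 AND c < 20;

*------------------------------------------------------------------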
2. Use explicit type casting. It is important to use explicit type casting as much as possible to avoid data materialization. Additional materialization leads to longer execution time as well as extra memory and CPU consumption.

Use it like: WHERE ENDTIME >= CAST('2018-04-27 05:00:00' AS TIMESTAMP);

3. To prevent a statement from being combined (inlined) with other statements, you can use the NO_INLINE hint. Inlining is usually beneficial, but statements that are combined together can sometimes prevent parallel processing; see the sketch below.
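
A minimal SQLScript sketch (again reusing the hypothetical MY_TABLE from section A) that keeps the first query as its own execution step:

*------------------------------------------------------------------

DO BEGIN
    -- The hint stops this query from being inlined into the one below
    tab = SELECT a, c FROM MY_TABLE WITH HINT (NO_INLINE);
    SELECT c, COUNT(*) AS cnt FROM :tab GROUP BY c;
END;

*------------------------------------------------------------------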

C. Performance tuning in calculation view / graphical view modeling:

1. Check the number of rows generated in each node (the “Performance Analysis” tab of the calculation view), find the dominant operator, and look for a possible key reducer.

A. Follow the hints and resolve them one by one.
B. Once the dominant operator is identified, replace it by pushing the join through the aggregation (the JOIN_THRU_AGGR enumerator): join with the other table at the bottom node (a possible key-reducing operator) so that the number of rows returned upward is reduced and the data is subset early. If applying the JOIN_THRU_AGGR enumerator does not work, subset the data another way.
2. Set the cardinality on each and every join node of the calculation view.
3. Use input parameters instead of variables and push them down through the lower levels.
4. Set “Optimize Join Columns” = true where possible.
5. When possible, use a union instead of a join.
6. Use the static cache by enabling caching, and set a sufficiently long retention period for it.
7. I prefer to create one dimension calculation view for each master data table and enable the cache on it. For reporting, create a cube calculation view using a star join, keep the transactional table/view in the center, and place all the dimension calculation views around it.
8. Health check of the calculation view –

Execute the out-of-the-box procedures to check the health of a calculation view:

CALL CHECK_ANALYTICAL_MODEL('', 'SCHEMA1', 'CV_DEMO_TEST', ?);
CALL CHECK_CALCULATION_VIEW();

9. Use union pruning: based on the union pruning settings, some data sources in the union node can be omitted (pruned) early.
10. Use a dynamic join whenever possible to speed up join execution by reducing the granularity at which the join is executed.

 

I hope this helps at a summary level. I will describe performance tuning in more detail in further blogs.

Sara Sampaio
