Where "test early, test often and test enough" is not related to COVID-19.

Thus spake the master: “Any program, no matter how small, contains bugs”. The novice did not believe the master’s word.

Geoffrey James, The Zen of Programming

Code Quality

Every piece of software is expected to have bugs, known and unknown. While all bugs are not equal in impact, the defect density (number of defects per KLOC, i.e. 1,000 lines of code) can be used for comparison. A value of 0.5 or lower is considered good quality.

The First Computer Bug (wikipedia image)
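
As a hypothetical example, a custom development of 20,000 lines of code with 8 known defects has a defect density of 8 / 20 = 0.4 defects per KLOC, which would fall within that good-quality threshold.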

SAP meticulously documents programming errors in OSS notes. This advanced bug tracking increases confidence in the development process.

At customer sites, bugs are not tracked over the product lifetime, although other tools are used. I am used to a requirement analysis process run by problem domain experts, followed by a specification and a Big Design Up Front. GxP regulated industries will have an elaborate QA process (e.g. the V-model), but most need few deliverables. Acceptance tests derived from the specification are common. Testing at a smaller scope is the sole burden of the ABAP developer.

I once wrote about working with ABAP Unit with a test-last emphasis. I am now convinced a test-first approach should be preferred, but only if some rarely discussed prerequisites are fulfilled. To be effective, some skills must be honed and success must be redefined. This blog will settle on a test early and test often message and raise awareness of the workflow changes needed.


How to Test?

 

The Bug Hunting Game

Let me try to capture the process of ABAP debugging with a fictional CSI:ABAP series where I am a great detective tasked with diagnosing and fixing an error-laden ABAP report output:

  • Episode 1 – create a test case and reveal the bug. I set breakpoints and step through the code, using the debugger to observe the program state, until the fault can be consistently reproduced.
  • Episode 2 – localize the bug. I step through the code while observing the program state with the debugger until the part of the logic that triggers the wrong behavior is known.
  • Episode 3 – fix the bug. I identify the error and implement a correction. I then set breakpoints and step through the code while checking the system state with the debugger to verify the code performs according to expectations.

Common to all these episodes is troubleshooting with the debugger (SAP GUI or Eclipse-based) to validate or refute expectations about the code behavior. In the end, I affirm with confidence that a certain part of the code behaves according to expectations.

Given a good code structure with modules that are not tightly coupled, the experienced developer can usually estimate how long this process will take.

However, debugging is a time-consuming manual process that does not scale well. There is a limit to the code size for which we can affirm our confidence in the code behavior through debugging alone.

Test Cases

We can test at different scopes, but let us imagine an Acceptance Test case for a custom transaction ZMYREPORT. The formal test is created from a simple template that defines test steps. Each step has an observable result. The tester must fill in test parameters and the status of the step, i.e. passed or failed.

Step | Description | Parameters / Prerequisites | Expected Result | Actual Result | Pass/Fail
1 | Call transaction code ZMYREPORT | Test user has authorization | Selection screen of the transaction appears | As expected | Pass
2 | Enter Document Number DocNo; choose layout "Output as Grid"; execute with F8 | Document DocNo must exist with Material MatNr; DocNo = 12345; MatNr = 42 | Screen "My Report" appears with details on the Document Number DocNo | As expected | Pass
3 | Double-click on the Material Number in the list | – | Material Document MatNr is displayed | As expected | Pass

From our understanding of the specification, we develop a suite of test cases that define the expected behavior. A good test suite removes ambiguity from the specification. It is hard but necessary work, whose value is plain to see in the maintenance of critical projects, where my first question is always: how can I test this?

Acceptance tests become a living specification. From those, the developer can derive Given-When-Then test cases that

  • reveal bugs in the code,
  • demonstrate conformance to the specification or
  • validate our understanding of the code.
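
As a minimal sketch of such a Given-When-Then case in ABAP Unit (the local class lcl_report_model and its methods are hypothetical and not taken from a real ZMYREPORT implementation):

METHOD material_listed_for_document.
  " Given: a document 12345 that contains material 42 (hypothetical test data)
  DATA(lo_model) = NEW lcl_report_model( ).
  lo_model->add_document( iv_docno = '12345' iv_matnr = '42' ).

  " When: the report logic reads the materials for that document
  DATA(lt_materials) = lo_model->get_materials( iv_docno = '12345' ).

  " Then: exactly the expected material is returned
  cl_abap_unit_assert=>assert_equals(
    act = lt_materials
    exp = VALUE string_table( ( `42` ) )
    msg = 'Materials for document 12345' ).
ENDMETHOD.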

Test Coverage

While testing, we can classify the system behavior as fault (non-compliance with the specification), omission (not implemented), surprise (implemented but not specified) or correct (behaves as specified).

Those categories can be visualized without much suspension of disbelief as areas:


Faults, omissions and surprises as defined in TOOS

The biggest boost to our code quality comes from test automation.

Every passing unit test increases our confidence that the code conforms to the specification. When enough different code units are tested and all tests pass, we expect most of the correct code area to have been harnessed and found conforming. Our confidence is high, so we promote the Code Under Test (CUT).

ABAP Unit Recap

The proposition of automated testing is to write test code that harnesses productive code and reduces reliance on debugging. If you understand the ABAP Unit core techniques described in this diagram, then skip this part and we are good to go:

If not, check e.g.

ABAP Unit is a test framework for the smallest testable parts of an application. ABAP Unit tests are encapsulated into classes.

CLASS ltc_test_A DEFINITION FOR TESTING RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    METHODS: setup,                  " Test fixture: runs before each test method
             teardown.               " Cleanup: runs after each test method
    METHODS: my_test_method_A1 FOR TESTING.
ENDCLASS.

ABAP Unit tests are based on assertions, the self-checking part of the tests. A test method should either pass (green) or fail (red).

 METHOD collect_contract_keys.
    DATA lv_ebeln TYPE ebeln.
    DATA lt_item_key TYPE lcl_quantity_tracking=>tt_item_key.
    DATA lt_item_key_exp TYPE lcl_quantity_tracking=>tt_item_key.

    APPEND VALUE #( ebeln = c_konnr_test
                    ebelp = c_ktpnr_test ) TO lt_item_key_exp.
    mo_tracking->collect_contract_keys( CHANGING cv_ebeln = lv_ebeln
                                                 ct_item_key = lt_item_key ).
    cl_abap_unit_assert=>assert_equals(
       act = lt_item_key
       exp = lt_item_key_exp
       msg = 'COLLECT_CONTRACT_KEYS Item Keys' ).
  ENDMETHOD.

 

Test Automation

A typical unit test runs in 1/100th of a second, so it is commonplace for a large test suite with 100 tests to complete in a few seconds. We execute the suite again and again, all the time, especially after each change. It becomes a part of the routine, like ABAP code activation. This is the new normal.

Working with automated tests changes the workflow!

The mindset in automated software testing is not the bug hunting game. Instead of trying to find unknown bugs, the task is to trigger the alarm when a known bug/error/fault is present in the code. We can still use manual debugging to reveal a bug. That makes it a known bug.


ADT ABAP Unit

In this context, regression testing is really free (as in beer). Test failures are then attributed to the most recently changed code. Those last changes are reviewed and either the test code or the productive code is corrected. With this workflow, a smaller number of changes occurs between test runs.

How do you decide that your code is well tested?


Design, Cover, Promote (cf. TOOS)

Having many unit tests will not boost our confidence in the code if all those tests only harness the same CUT. We are motivated to add new tests that are sensitive to more bugs, ideally to all our known bugs. This is what we call good test coverage of the production code.

The Feedback Loops

I have learned to appreciate how Test Driven Development (TDD) can be done in ABAP (cf. the openSAP course: while working on legacy code we should gradually extract functionality to an island of happiness where the new code is completely covered by tests).


TDD Feedback Loops (cf. GOOS)

I also recognize it requires a different mindset: in TDD, the test color becomes an obsession: you must see a unit test fail before the production code makes it pass. We want to see the red color first, then the green. It is important to see the test fail first. And it is also important to only change the code to make the test pass or to refactor.

This leads to a feedback loop in which the tests first validate the specification and then inform it. This guides the software development process, with every change focused on having a specified impact on the system behavior.

How To Write Tests?

 

Limits of Testing

Tests are no substitute for a well-designed application. Adding tests to legacy code could be impossible to do safely. The advice is to create tests first, or failing that, to test early and often.

A failing test can reveal a fault. A passing test gives us confidence, but it does not prove correctness; it does not guarantee that defects are absent. A passing test tells us the behavior of the code matches the specification. It does not tell us whether the specification is correct.

The TDD feedback loop always enforces good procedure coverage, but discipline is needed. In a scenario like CSI:ABAP above, an experienced developer not used to TDD will feel a loss of productivity while changing their routine. It is more common to just change the production code to fix a bug before thinking about tests. The developer's behavior must change, but I will acknowledge with Sandi Metz (cf. POOD) that there will be a cost in productivity and TDD will not pay for itself unless the following skills are available:

Test Design

  • How do I write unit tests that uncover errors in my code that I do not know about yet?

There are so many possible execution paths in non-trivial software that not all cases can be tested. To be effective, invest time in a fault model that describes how the code could possibly be wrong. Create test cases for those code paths, so that the test suite is fault revealing.

Disclaimer: I am not affiliated with any QA tools vendor referenced.

In combinatorial testing, test parameters are selected according to combinatorial models (like a decision table). Suppose you look at this code:

IF price GE 100 AND price LT 1000.
  NumberOfApprover = 0.  " No approval needed
ENDIF.

Domain Analysis Testing will focus on the on points for price (100 and 1000), one or more in points between 100 and 1000, and one or more out points that do not lie between 100 and 1000. A domain here is the set of all inputs to the Object Under Test.
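
A minimal sketch of such boundary test methods follows; the static method zcl_approval=>number_of_approvers( ) is a hypothetical wrapper around the IF block above that returns the number of required approvers:

METHOD no_approver_for_in_point.
  " In point: 500 lies inside [100, 1000), no approval needed
  cl_abap_unit_assert=>assert_equals(
    act = zcl_approval=>number_of_approvers( 500 )
    exp = 0 ).
ENDMETHOD.

METHOD no_approver_for_lower_on_point.
  " On point: 100 is the first value inside the domain
  cl_abap_unit_assert=>assert_equals(
    act = zcl_approval=>number_of_approvers( 100 )
    exp = 0 ).
ENDMETHOD.

METHOD approver_for_upper_on_point.
  " On point: 1000 lies just outside the domain, approval is required
  cl_abap_unit_assert=>assert_differs(
    act = zcl_approval=>number_of_approvers( 1000 )
    exp = 0 ).
ENDMETHOD.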

TOOS compiles test design patterns, including state-chart-based models (e.g. State Transition Testing), UML-based test models (e.g. a use-case scenario-based approach) and sequence-diagram-based testing.

Some suggestions from POOD:

An Object Under Test (Code or Class Under Test) should have clearly defined boundaries. The practical way to test the abstraction is to create unit tests for the messages (method calls) in the sequence diagram.

Some suggestions from OORP:

When, why, how and what to test

  • Test the interface, not the implementation.

e.g. a test for class ZCL_NOTIFIERCONTROLLER in the sequence diagram below will send a message (call the method) build_and_send_notifiers( ) to an object of this class and create an assertion on the response. The test supports abstraction by willfully ignoring the internals and focusing on the public interface, which should be stable.


Test the interface!
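
A minimal sketch of such an interface test; the returning parameter and its type are assumptions made for the illustration, the real signature of build_and_send_notifiers( ) may differ:

METHOD notifiers_built_and_sent.
  DATA(lo_controller) = NEW zcl_notifiercontroller( ).

  " Send the message under test to the public interface
  DATA(lt_notifiers) = lo_controller->build_and_send_notifiers( ).

  " Assert on the observable response only, never on the internals
  cl_abap_unit_assert=>assert_not_initial(
    act = lt_notifiers
    msg = 'At least one notifier expected' ).
ENDMETHOD.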

  • Record Business Rules as Test
  • Write Tests to Understand:
    • Test Fuzzy Features
    • Test Old Bugs
    • Retest Persistent Problems

Whenever the test code is hard to write, the productive code is hard to use. We have to improve the existing code first to expose those public interfaces that can be tested.

Testing database access and CDS artifacts is an ABAP-specific challenge where ABAP Unit alone can only help with integration tests. The dedicated Open SQL Test Double Framework is discussed elsewhere.
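
Still, a minimal sketch of how such a test double is typically set up (the table ZMY_DOCUMENTS and the class zcl_document_reader are hypothetical; the Open SQL Test Double Framework is available as of ABAP 7.52):

CLASS ltc_document_reader DEFINITION FOR TESTING RISK LEVEL HARMLESS DURATION SHORT.
  PRIVATE SECTION.
    CLASS-DATA go_osql_env TYPE REF TO if_osql_test_environment.
    CLASS-METHODS: class_setup, class_teardown.
    METHODS read_material FOR TESTING.
ENDCLASS.

CLASS ltc_document_reader IMPLEMENTATION.
  METHOD class_setup.
    " Redirect Open SQL access to ZMY_DOCUMENTS to a test double
    go_osql_env = cl_osql_test_environment=>create(
                    i_dependency_list = VALUE #( ( 'ZMY_DOCUMENTS' ) ) ).
  ENDMETHOD.

  METHOD class_teardown.
    go_osql_env->destroy( ).
  ENDMETHOD.

  METHOD read_material.
    " Given: one record in the doubled table
    DATA lt_docs TYPE STANDARD TABLE OF zmy_documents WITH EMPTY KEY.
    lt_docs = VALUE #( ( docno = '12345' matnr = '42' ) ).
    go_osql_env->insert_test_data( lt_docs ).

    " When: the production code reads the material via Open SQL
    DATA(lv_matnr) = zcl_document_reader=>get_material( iv_docno = '12345' ).

    " Then
    cl_abap_unit_assert=>assert_equals( act = lv_matnr exp = '42' ).
  ENDMETHOD.
ENDCLASS.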

Writing unit tests for legacy code means decoupling some tangled operations for testing. We must introduce interfaces that let us ask about the parameters being passed from one part to another.

Refactoring

It is common for the first version of the code to be easy to understand as a solution to a given problem, and then to have that architectural clarity disappear after some patches by maintainers who fail to make the code reflect the new requirements. Fowler (RIDEC) calls this decay; the code contains smells (like Duplicated Code, Feature Envy, Large Class). Gungor Ozcelebi calls this ABAP crime.

After a while, the maintainers lose the ability to run a complete regression test suite. There is no confidence in the code behavior after a change. Feathers (WELC) calls this Edit and Pray.

A refactoring is a series of small changes to improve our production code and keep it easy to test without changing its external behavior, avoiding decay.

Test code can smell as well; Meszaros' xUnit Patterns describes many of those smells, like fragile tests, erratic tests or test logic in production.

Characterization Testing

  • How could we add unit tests to ABAP code without a proper specification?

Even when no formal specification exists and no test models are available yet, we can characterize the system behavior with an automated test suite. This approach is called characterization testing.

The focus is to understand the code and make the tests sensitive to behavior changes. The test methods call units of the production code, get the result, and compare it to the value that is currently determined via debugging. The test oracle just replicates the CUT behavior captured at a given moment in time.

  • Characterization Testing does not infer correctness of the results.

We want to make sure the code behaves exactly as at the time of the test definition (the characterization) without using the debugger.

Characterization is not the best approach, but it is an approach where we can determine from the start what success looks like: it is economical, it is conceptually simple, and we can get started on day one, even if that means using TEST-SEAMS.
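
As an illustration, a characterization test just pins the behavior observed once in the debugger (the legacy class and the expected value are hypothetical):

METHOD characterize_batch_number.
  DATA(lo_legacy) = NEW zcl_legacy_pricing( ).

  DATA(lv_batch) = lo_legacy->derive_batch_number( iv_matnr = '42' ).

  " The expected value was observed once in the debugger and is pinned here;
  " the test fails as soon as the captured behavior changes
  cl_abap_unit_assert=>assert_equals(
    act = lv_batch
    exp = 'AB-17'
    msg = 'Behavior changed since characterization' ).
ENDMETHOD.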

Code Coverage Metrics

I have been working on ABAP Scheme, an interpreter for the Scheme language, based on a nearly perfect example of a specification from a testing perspective: R7RS small builds on a language that is many decades old and seven revisions strong. So the code has over 600 passing ABAP unit tests that execute in around 5 seconds, depending on the environment (ABAP Development Tools for Eclipse or the SAP GUI transaction SE80).

Install ABAP Scheme from GitHub using abapGit. If you are using the SAP GUI, open report ZZ_LISP_IDE and execute the unit tests.


Execute with Coverage


Coverage Result SE80

If you are in ADT for Eclipse, select the report ZZ_LISP_IDE and run it as an ABAP unit test.


ADT Enable Coverage


ADT Coverage

Execute unit tests with coverage and look at the branch coverage metrics in your editor.

Coverage Metrics SE80

From this feedback it should be straightforward to increase the coverage, but I do not obsess over achieving 100% procedure coverage. As I have added a numerical tower, my short-term aim is to have good enough coverage of the logic that handles complex numbers.

References

  • TOOS: Robert Binder, Testing Object-Oriented Systems: Models, Patterns, and Tools, 1191 pages
  • GOOS: Steve Freeman & Nat Pryce, Growing Object-Oriented Software, Guided by Tests, 358 pages
  • WELC: Michael Feathers, Working Effectively with Legacy Code
  • xTP: Gerard Meszaros, xUnit Test Patterns: Refactoring Test Code, 883 pages
  • POOD: Sandi Metz, Practical Object-Oriented Design in Ruby: An Agile Primer, 247 pages
  • RIDEC: Martin Fowler, Refactoring: Improving the Design of Existing Code, 1st Edition 431 pages, 2nd Edition 419 pages
  • OORP: Serge Demeyer, Stéphane Ducasse, Oscar Nierstrasz, Object-Oriented Reengineering Patterns, 338 pages, Legacy Edition

 

First version posted at www.informatik-dv.com
