To realize its concept of enterprise service-oriented architecture, SAP publishes new enterprise services almost every week on the SAP Developer Network site. Nevertheless, there are differences among these services. For example, products like the SAP Business ByDesign solution are based completely on enterprise SOA. Enterprise services built on the SAP NetWeaver technology platform, whether developed by third parties or by SAP itself, can be programmed in Java. The functionality of the SAP ERP application that is encapsulated in enterprise services, by contrast, is based on ABAP. Because these enterprise services rest on different technology foundations, different tools are used to measure performance in each type of enterprise SOA.
The standard Web services have one thing in common: they contain business application software. Because they are based on standard protocols, they can be combined almost at will into entirely new applications. These include business-to-business applications and, much more frequently, cross-application end-user scenarios, which SAP calls SAP xApps composite applications.
In terms of performance, which is measured as response time or as throughput per unit of time, it's worth noting that end-user scenarios can run entirely within the SAP ERP application, but they can also span several applications. Examples include the interplay of SAP ERP and SAP Supply Chain Management (SAP SCM) or third-party software. In many cases, the services even communicate beyond the boundaries of computer centers. For example, a company might use a central system for order processing but enter the orders in a decentralized manner.
The most complex aspect of performance, however, is when enterprise services affect different companies at different locations. That’s the case when an ATM at one bank checks the validity of a card issued by a bank on the other side of the globe.
Knowing who’s involved
In the case of enterprise services from SAP's proprietary ABAP stack, a service call arrives at the SAP NetWeaver Application Server component as an HTTP request in SOAP format. The HTTP communication layer redirects the request. This layer consists of two parts, the Internet Communication Manager (ICM) and the Internet Communication Framework (ICF), whose task is to assign the request to the correct service provider.
The task of the downstream Web-service enabling layer is more complicated. It also consists of several components, including the Web-service runtime, which reads the SOAP header and extracts the XML Web-service data. The proxy framework then maps the data onto the existing ABAP data structures. The Web-service enabling layer also calls the Web-service proxy, which in turn invokes the actual SAP application. The SAP application consists of familiar BAPIs and function modules that can be called efficiently with remote function calls (RFC), a proprietary SAP protocol.
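Purely for illustration, this flow can be sketched as a chain of plain Python functions. The handler path, field names, and return values below are invented and merely stand in for the real ICM/ICF dispatching, the Web-service runtime, the proxy framework, and the final BAPI call:

```python
# Illustrative sketch (not SAP code): the layers an inbound SOAP request
# passes through on the ABAP stack, modeled as plain functions.
# All paths, field names, and values here are hypothetical.

def icf_dispatch(http_request):
    """ICM/ICF layer: route the HTTP request to the registered handler."""
    handlers = {"/sap/bc/srt/order_create": ws_runtime}
    return handlers[http_request["path"]](http_request["body"])

def ws_runtime(soap_envelope):
    """Web-service runtime: read the SOAP envelope, extract the payload."""
    payload = soap_envelope["Body"]       # XML payload in a real system
    return proxy_framework(payload)

def proxy_framework(xml_payload):
    """Proxy framework: map XML fields onto ABAP-like data structures."""
    abap_struct = {"MATNR": xml_payload["material"],
                   "MENGE": xml_payload["quantity"]}
    return call_application(abap_struct)

def call_application(abap_struct):
    """Application layer: invoke the BAPI/function module (RFC in SAP)."""
    return {"RETURN": "OK", "ORDER": "4711", **abap_struct}

request = {"path": "/sap/bc/srt/order_create",
           "body": {"Body": {"material": "M-100", "quantity": 5}}}
print(icf_dispatch(request)["RETURN"])  # → OK
```

Each function boundary in the sketch corresponds to one of the layers whose dwell time contributes to the overall response time of the service call.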
These elements are crucial for the performance of enterprise SOA.
Analyzing response time
If a survey asked users which key figure they would see as most important in measuring performance, 99 percent would probably answer “response time.” This answer also applies to applications or services based on enterprise SOA.
Unfortunately, response time is difficult to measure. It does not depend directly on system throughput, and it is the sum of the time spent in each of the various communication layers.
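This additive character is easy to express with assumed per-layer figures (the numbers below are invented, not measurements):

```python
# Invented per-layer dwell times (ms); response time is simply their sum.
layer_times_ms = {
    "browser rendering": 300,
    "network round trips": 400,
    "application server": 250,
    "database": 50,
}
response_time_ms = sum(layer_times_ms.values())
print(response_time_ms)  # → 1000
```

The sections that follow walk through these layers one by one.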
First, the browser is an important factor in performance. To improve user friendliness, many software suppliers increasingly place functionality in the browser. Browser rendering time depends heavily on the performance of the front-end PC, because personalized pages and visualizations run on the front end: what used to be thin clients have become rich clients.
Roundtrips and bandwidth
Second, the path to the application server determines the response time in a network. The more communication steps and round trips that occur between browser and server in a wide area network, and the more data that is transmitted, the wider the bandwidth that must be provided, and the longer it can take to call a new page. In addition, networks do not always remain stable. Asynchronous data transmission that renders only parts of new pages is of limited help here.
Third, the application server and its application logic also affect the overall response time. Services are often nested and have to call other services or applications. That is the case, for example, when a sales order in SAP ERP synchronously calls an available-to-promise check in SAP SCM over RFC and has to wait for an answer that indicates the availability of the desired product. Along with pure processing time, network and communication times clearly play a role here. The last steps, from the network to the database, cause fewer performance issues.
In terms of performance, calling services from the Internet requires attention to how much data is transmitted to the front end, how many round trips between the browser and the application server occur for each user action, whether hidden communication occurs between the application servers, and how many synchronous messages (and of what scope) are transmitted.
In general, SAP software uses two communications protocols: the standard Web HTTP (SOAP) protocol and proprietary RFCs that exist with various properties for synchronous and asynchronous communication.
The transmission and processing time of a message has a fixed portion, regardless of the size of the message. This portion depends on a message’s metadata, the time the CPU needs to process it, the physical distance of the transmission (over a satellite, for example), the capacity of the hardware components (like routers), and the bandwidth and quality of the network.
The variable portion of a message's transmission time largely depends upon its size. It covers the pure runtime in the network as well as copying, encryption, and, depending on the complexity, serialization and deserialization.
Therefore, the shorter the pure network runtime, the larger the share of the fixed overhead in the overall runtime. For users of enterprise SOA, that means the following: yes, enterprise services offer an almost endless supply of design options, but the more communication occurs over the Internet, the longer response times become and the higher the communication costs climb. It's important to consider how much a system should rely on Internet communication.
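A toy model makes this concrete. Assuming a fixed overhead of 50 milliseconds per message and a sustained throughput of 500 KB per second (both invented figures), the fixed share dominates small messages and shrinks as the message grows:

```python
def transmission_time_ms(size_kb, fixed_ms=50.0, bandwidth_kb_per_s=500.0):
    """Fixed portion (metadata handling, routing, distance) plus a
    variable portion proportional to message size. All figures are
    assumptions for illustration, not measured values."""
    variable_ms = size_kb / bandwidth_kb_per_s * 1000.0
    return fixed_ms + variable_ms

for size in (1, 10, 100):
    total = transmission_time_ms(size)
    fixed_share = 50.0 / total * 100.0
    print(f"{size:>3} KB: {total:4.0f} ms total, fixed share {fixed_share:4.1f}%")
```

With these assumptions, a 1 KB message spends over 95 percent of its time in fixed overhead, while a 100 KB message spends only 20 percent there, which is exactly why many small synchronous calls over the Internet are so expensive.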
The new tools are the old tools
A variety of tools measure the most important key figures for performance. Examples of such tools include browser measurements, individual statistics, and runtime analysis.
No complicated tools are required to measure performance for the browser and the network to the application server. Any commercial HTTP monitoring tool can determine the rendering time and the number of round trips. Regardless of which tool is used, good preparations should be made for measuring performance in the browser.
The browser cache must be emptied and data compression enabled in HTTP 1.1. It's also important to ensure that the cache is always emptied when closing the application; browsers usually provide that option in their Internet settings. The scenario undergoing testing should be executed at least three times without measurements so that the cache refills and the measurement results are reproducible. Afterward, the measurement tool can capture the runtime of the round trip, the number of synchronous round trips, and the scope of the transferred data. As a general rule of thumb, enterprise services have one round trip. The quantity of transferred data depends upon the complexity of the user interface and should normally lie between 5 and 15 KB.
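Any HTTP client can serve as a rough stand-in for such a monitoring tool. The sketch below (hypothetical, not a commercial product) performs the unmeasured warm-up runs described above before timing a request and recording the payload size; note that it sees only a single request, not the browser's full rendering and round-trip behavior:

```python
import time

def measure(fetch, warmup_runs=3):
    """Warm up caches with unmeasured runs, then time one request and
    return (elapsed_ms, payload_kb). `fetch` is any callable returning
    the response body as bytes."""
    for _ in range(warmup_runs):          # fill caches, discard results
        fetch()
    start = time.perf_counter()
    body = fetch()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    return elapsed_ms, len(body) / 1024.0

# Example with a hypothetical service URL:
# import urllib.request
# ms, kb = measure(lambda: urllib.request.urlopen("http://host/service").read())
```

Comparing the reported payload size against the 5 to 15 KB rule of thumb gives a quick first plausibility check for a new service.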
The most important key figures at the level of the application server include CPU usage by user, the CPU time of the enterprise services, and possibly the communications costs between servers. Two familiar tools are helpful to measure these key figures: individual statistics (STAD) and runtime analysis (SE30 or its successor, SAT).
Use of individual STAD statistics requires working through the scenario to be tested several times – at least three times and preferably six times – without taking any measurements. That fills the buffers so that the measurement results are reproducible. When working with STAD, the fields for response time, time in work process, wait time, CPU time, database request time, and maximum extended memory in step must be selected. The enterprise services to be examined are identified in column T (for task type) by an H (for HTTP) or a T (for HTTPS).
The customer fact sheet is a typical use case. It displays eight enterprise services on one page. The STAD display clearly shows how much CPU time the services have consumed and how much of that time occurred on the application server and on the database server. That's how to determine the average time that an enterprise service spends on the application side and on the database side.
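With hypothetical STAD readings for the eight services of such a fact sheet (the figures below are invented), the averages fall out of simple arithmetic:

```python
# Hypothetical STAD readings for the eight services on one fact sheet (ms).
app_cpu_ms = [85, 92, 110, 78, 64, 101, 88, 95]   # CPU time, application server
db_req_ms = [12, 18, 25, 10, 8, 20, 15, 14]       # request time, database server

avg_app = sum(app_cpu_ms) / len(app_cpu_ms)
avg_db = sum(db_req_ms) / len(db_req_ms)
print(f"average per service: {avg_app:.1f} ms application, {avg_db:.2f} ms database")
```

Averages like these form the per-service baseline against which later measurements of the same page can be compared.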
Display of usage
Runtime analysis (transaction SE30) shows the composition of the individual CPU times. Specific settings must be configured so that runtime analysis can supply the results of HTTP measurements: the number of measurements should match the number of enterprise services (if that number is unknown, it's best to work with a large one), and the expiration time must lie in the future.
The call hierarchy in transaction SE30 shows how much time was spent in individual software components, for example for the encapsulation of data in XML and SOAP structures.
A series of measurements of simple and complex services might reveal that the times for HTTP were negligible, that the times for SOAP processing were between 10 and 15 milliseconds, that the proxy framework took 5 to 20 milliseconds, and that 60 to 90 milliseconds were spent at the application layer. In this example, the ratio of Web-service packing to application runtime would lie between 25 percent and 40 percent. An analysis of these results shows that the shorter the actual application runtime, the higher the percentage spent on Web-service enablement.
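The stated range can be reproduced from the figures in the text. Pairing the minimum packing time (SOAP plus proxy framework) with the minimum application time, and the maxima with each other, yields roughly the 25 to 40 percent span:

```python
# Figures taken from the example in the text (milliseconds).
soap_ms = (10, 15)    # SOAP processing, min/max
proxy_ms = (5, 20)    # proxy framework, min/max
app_ms = (60, 90)     # application layer, min/max; HTTP time negligible

packing_min = soap_ms[0] + proxy_ms[0]        # 15 ms
packing_max = soap_ms[1] + proxy_ms[1]        # 35 ms
ratio_min = packing_min / app_ms[0] * 100     # 15 / 60  = 25.0 %
ratio_max = packing_max / app_ms[1] * 100     # 35 / 90 ≈ 38.9 %
print(f"{ratio_min:.0f}% to {ratio_max:.0f}%")  # → 25% to 39%
```

Shrinking `app_ms` in this sketch while leaving the packing times unchanged immediately drives the ratio up, which is the effect the measurements illustrate.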
Simple tips for good performance
The significant technical flexibility of enterprise SOA applications leaves a great deal of room for the design of context- and industry-specific applications. That's why it's important to keep an eye on performance considerations. Good results, however, do not require home-grown programming.
The possible effects of communication costs on the enterprise SOA application should be considered as early as the evaluation phase. That applies especially to applications in a wide area network. The procedure is quite simple: first, determine the response-time requirement for the new application. If that figure is two seconds, for example, that value is the starting point for all downstream considerations.
There should not be more than two round trips for each step in user interaction. The latency times in the wide area network are between 0.3 and 0.5 seconds for each round trip. That means that 1.0 to 1.4 seconds are available for pure processing time on all components. This time is spread across the various software layers: 30 percent for the browser rendering time, 10 percent for the Web server, 50 percent for the application, and 10 percent for the database. That makes it easy to define a simple performance baseline for the new enterprise services, a baseline that you can confirm with metrics in the test phase. Finally, the implementation should try to limit the amount of data being transferred.
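Under these assumptions, the budget works out as follows. The latency figures and the percentage split come from the text; the function itself is just a sketch of the arithmetic:

```python
def processing_budget_ms(target_ms=2000, roundtrips=2, latency_ms=(300, 500)):
    """Subtract WAN round-trip latency (min/max per trip, from the text)
    from the overall response-time target; returns (worst, best) budget."""
    return (target_ms - roundtrips * latency_ms[1],
            target_ms - roundtrips * latency_ms[0])

low, high = processing_budget_ms()            # 1000 ms to 1400 ms remain
shares = {"browser": 0.30, "web server": 0.10,
          "application": 0.50, "database": 0.10}
for layer, share in shares.items():
    print(f"{layer:12s}: {low * share:4.0f}-{high * share:4.0f} ms")
```

The printed per-layer ranges are exactly the kind of baseline figures that can then be confirmed with STAD and SE30 measurements during the test phase.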