Secure Operation for Users and Developers

Feature Article | February 6, 2006

The SAP subsidiary SAP Hosting uses MaxDB widely for its customers. The service provider values not only the suitability of the database, which was originally developed by SAP, for the company’s own ERP solutions, but also its high availability for enterprise-critical applications. After all, hosting is only an economic success if the hosted environments run reliably. The service provider must maintain regular system operation and cope with downtimes of all kinds. MaxDB supports these tasks with efficient file organization and capacity reserves for peak loads on the one hand, and with classical data backups and high-availability solutions such as hot standby on the other.

Maintenance during day-to-day operation

The main purpose of hosting is to ensure the fault-free operation of an IT environment. One of MaxDB’s strengths in this respect lies in its low-maintenance file organization. It does not need special reorganization runs to ensure optimum access structures and minimum storage space requirement, and the administrator does not need to interrupt day-to-day operation to carry out maintenance tasks. This is one key difference between MaxDB and its rivals.
Data in MaxDB does not degenerate and contains no unused gaps between the data records. By clustering the data in key sequence and using the “update in place,” “delete in place,” and “insert in place” methods, MaxDB avoids the gaps and record chains that are so costly to maintain. These three methods insert, extend, shorten, or remove data records at exactly the point in the gap-free structure where they belong. When data records are deleted or shortened, the remaining records are immediately pushed together, making the space available for new records. This procedure ensures that data is stored efficiently even after a long runtime.
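To illustrate the principle, the following toy Python sketch models a gap-free, key-clustered page: records stay sorted by key with no holes between them, and a deletion immediately closes the gap. This is a schematic illustration only, not MaxDB’s actual page layout.

```python
import bisect

class GapFreePage:
    """Toy model of a gap-free, key-clustered data page.
    Records are kept sorted by key with no holes between them, so no
    reorganization run is ever needed. (Illustration only.)"""

    def __init__(self):
        self._keys = []    # sorted record keys
        self._values = []  # payload at the same index as its key

    def insert(self, key, value):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            self._values[i] = value          # update in place
        else:
            self._keys.insert(i, key)        # insert in place: neighbors shift
            self._values.insert(i, value)

    def delete(self, key):
        i = bisect.bisect_left(self._keys, key)
        if i < len(self._keys) and self._keys[i] == key:
            del self._keys[i]                # delete in place: the following
            del self._values[i]              # records close the gap at once

page = GapFreePage()
for k, v in [(3, "c"), (1, "a"), (2, "b")]:
    page.insert(k, v)
page.delete(2)
print(page._keys)  # [1, 3] -- no hole where key 2 used to be
```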
Operation is not affected even when MaxDB approaches the storage space limits of its data or log files. New files, known in MaxDB as data volumes or log volumes, can be added at any time without interrupting operation. If a data file approaches its (freely definable) threshold, the integrated eventing function informs the administrator by email or SMS. External monitoring solutions are therefore not necessary, although integration into a higher-level monitoring system is easily possible.
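As a sketch of how such an online extension might be scripted, the following Python snippet calls the dbmcli administration tool. The database name, DBM user, volume path, and size are placeholders, and the exact db_addvolume argument order may differ between MaxDB versions.

```python
import subprocess

DB, USER = "MAXDB1", "dbm,secret"  # placeholder connection data

# db_addvolume adds a data volume while the database stays online;
# "F" requests a file-type volume, the last argument is the size in
# pages. Path and size are examples; check the dbmcli help of the
# installed version for the exact argument order.
cmd = ["dbmcli", "-d", DB, "-u", USER, "db_addvolume",
       "DATA", "/sapdb/MAXDB1/data/DISKD0002", "F", "131072"]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```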

Safe from nasty surprises

For smooth operation, database performance must be watched and, where possible, maximized. To process commands quickly, MaxDB uses a cost-based optimizer: for each database query, it determines the lowest-cost execution variant at that particular point in time. A rule-based optimizer, which always selects the same execution variant, does not offer this flexibility. MaxDB calculates the costs on the basis of real-time spot checks and stored statistics on the data and the expected read operations.
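The idea can be sketched in a few lines of Python: each candidate access path gets a cost estimated from current statistics, and the cheapest one wins, so the chosen plan changes as the data changes. The formulas below are schematic and are not MaxDB’s actual cost model.

```python
def estimate_cost(plan, stats):
    """Schematic cost formulas -- real optimizers use far richer models."""
    if plan == "full_table_scan":
        return stats["pages"]  # read every page of the table
    if plan == "index_range_scan":
        # pages touched shrink with the selectivity of the predicate
        return stats["index_height"] + stats["pages"] * stats["selectivity"]

def choose_plan(stats):
    plans = ["full_table_scan", "index_range_scan"]
    return min(plans, key=lambda p: estimate_cost(p, stats))

# With a selective predicate the index wins ...
print(choose_plan({"pages": 10_000, "index_height": 3, "selectivity": 0.001}))
# ... but if the predicate matches most rows, the same query is executed
# as a scan -- a rule-based optimizer could not make this switch.
print(choose_plan({"pages": 10_000, "index_height": 3, "selectivity": 0.9}))
```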

The database also contains tools for fine tuning. Often, a few non-optimized queries considerably reduce the overall performance of the system. These slow SQL queries can be logged with the help of the Diagnosis Monitor and Resource Monitor.
Long-term monitoring of the database and the relevant operating system parameters is carried out by the Database Analyzer. The tool takes measurements across all architecture components and enables the database administrator to ascertain whether there is a bottleneck in lock management (the synchronization of transactions), in the CPU, or in the I/O system. For clarity, the Database Analyzer assigns each output value a weighting. In SAP R/3, a graphical view of this log shows at first glance when the various bottlenecks occurred.
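A long-term measurement of this kind might be started as follows; this is a sketch using the dbanalyzer command line tool with placeholder connection data, and the option set may vary between MaxDB versions.

```python
import subprocess

# Placeholder connection data; check "dbanalyzer -h" for the exact
# options of the installed MaxDB version.
cmd = ["dbanalyzer",
       "-d", "MAXDB1",           # database name
       "-u", "dbm,secret",       # DBM operator
       "-t", "900,96",           # one measurement every 900 s, 96 times (24 h)
       "-o", "/tmp/dbanalyzer"]  # target directory for the weighted log
subprocess.run(cmd, check=True)
```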

Security for production systems . . .

The Database Analyzer determines bottlenecks

Availability is not measured only by how long a system runs without intervention. When problems do occur, it is just as important to keep downtimes as short as possible and data losses to a minimum, and ideally to avoid them completely. This is ensured by the backup and recovery mechanisms for production and development systems.
The transaction-consistent online backup integrated in MaxDB backs up both the data and the log. Data can be backed up in full or incrementally, where an incremental data backup contains only the changes since the last complete backup. The log backup can also be performed automatically, if required. After the log entries have been backed up, MaxDB makes the storage space they occupied available again, which largely prevents a log overflow.
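A minimal backup setup along these lines could be scripted as follows, assuming the DBM commands medium_put, backup_start, and autolog_on of the dbmcli tool; database name, user, medium names, and paths are placeholders.

```python
import subprocess

DB, USER = "MAXDB1", "dbm,secret"  # placeholder connection data

def dbmcli(*args: str) -> None:
    result = subprocess.run(["dbmcli", "-d", DB, "-u", USER, *args],
                            capture_output=True, text=True)
    print(result.stdout)

# Define backup media once; FILE media are used here, tape or pipe
# media work the same way. Names and paths are examples.
dbmcli("medium_put", "BackFull", "/backup/MAXDB1/full", "FILE", "DATA")
dbmcli("medium_put", "BackIncr", "/backup/MAXDB1/incr", "FILE", "PAGES")

dbmcli("backup_start", "BackFull", "DATA")   # complete data backup
dbmcli("backup_start", "BackIncr", "PAGES")  # incremental: changed pages only
dbmcli("autolog_on")                         # automatic log backup; frees the
                                             # backed-up log area again
```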
In serious cases, a combination of incremental and complete data backups coupled with log backups offers a multitude of recovery strategies. If an incremental backup medium is damaged and cannot be used for recovery, the data from this period can be imported using log backups instead. While this takes longer than recovering from an incremental data backup, it is a reliable way to protect against data loss.
MaxDB also provides point-in-time recovery based on the log backups. This enables database content to be restored to the state it was in directly before, for example, an accidental deletion. Self-explanatory wizards in the Database Manager GUI support the user in all backup and restore tasks; all of these tasks can also be executed on the command line and, if necessary, controlled by scripts.
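Scripted on the command line, a point-in-time recovery might look like the following sketch; the medium names and the UNTIL syntax are assumptions that should be checked against the documentation of the MaxDB version in use.

```python
import subprocess

DB, USER = "MAXDB1", "dbm,secret"  # placeholder connection data

def dbmcli(*args: str) -> None:
    subprocess.run(["dbmcli", "-d", DB, "-u", USER, *args], check=True)

dbmcli("db_admin")                           # recovery runs in admin mode
dbmcli("recover_start", "BackFull", "DATA")  # restore the last complete backup
# Re-apply the log, stopping just before the accidental deletion;
# the UNTIL clause syntax may differ between MaxDB versions.
dbmcli("recover_start", "BackLog", "LOG", "UNTIL", "2006-02-06", "11:59:00")
dbmcli("db_online")                          # resume normal operation
```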

Recovery Wizard

Parallel processing accelerates the backup and restoration of data. Special tasks perform all the related actions without adversely affecting normal database operation. These tasks have read and write access to any number of backup media.
Users with special requirements can also integrate the MaxDB backup and recovery solutions in applications from other vendors such as IBM/Tivoli or Legato. Additional backup tools can also be integrated via the Backint for Oracle and Backint for MaxDB interfaces.
When it comes to updating application software, SAP Hosting has a trick up its sleeve: before a new version is imported, a database snapshot is taken. MaxDB’s shadow paging and converter technology makes it possible to freeze the database state so that it is protected against being overwritten. The frozen information can still be read, but not changed; write operations store their data in new, previously unused areas. This procedure is useful if problems occur during the software update and the database administrator needs to reconstruct the old state, which can be done quickly because all the data still exists in the database.
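The mechanism can be illustrated with a toy copy-on-write store in Python: a converter maps logical page numbers to physical blocks, a snapshot freezes the current mapping, and later writes allocate fresh blocks instead of overwriting old ones. This illustrates the principle only, not MaxDB’s converter implementation.

```python
class ShadowStore:
    """Toy copy-on-write store: a 'converter' maps logical page numbers
    to physical blocks. A snapshot freezes the current mapping; writes
    after the snapshot go to new blocks, so the old state stays readable
    and can be restored instantly. (Illustration, not MaxDB internals.)"""

    def __init__(self):
        self.blocks = {}      # physical block id -> content
        self.converter = {}   # logical page number -> physical block id
        self.snapshot = None  # frozen copy of the converter
        self._next = 0

    def write(self, page, content):
        self.blocks[self._next] = content  # always a fresh block:
        self.converter[page] = self._next  # never overwrite in place
        self._next += 1

    def read(self, page):
        return self.blocks[self.converter[page]]

    def take_snapshot(self):
        self.snapshot = dict(self.converter)

    def revert_to_snapshot(self):
        self.converter = dict(self.snapshot)  # old blocks still exist

s = ShadowStore()
s.write(1, "v1")      # state before the software update
s.take_snapshot()
s.write(1, "v2")      # the update writes to a new block
s.revert_to_snapshot()
print(s.read(1))      # 'v1' -- the old status is back at once
```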

. . . and development environments

Developers and administrators make different demands on backup and recovery. Because developers update the complete database software much more often, for example, they need particularly rapid recovery. When importing new software, they carry out much the same process as administrators, carefully creating a snapshot beforehand, but at the storage level (a file system snapshot) rather than at the database level. Such snapshots are provided, for example, by a Logical Volume Manager (LVM) and cover entire volumes. The advantage of working at the storage level is that the original state can be restored far more quickly.
Before a file system snapshot is created, it is advisable to create a database snapshot so that a consistent dataset is captured. The alternative is automatic recovery using log files and savepoints, but this takes longer than re-importing a database snapshot.
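One way to obtain a consistent file system snapshot is to briefly quiesce the database before the LVM snapshot is taken. The sketch below uses MaxDB’s suspend/resume of the log writer, shown here in place of the database snapshot step; the suspend/resume commands, volume group, and logical volume names are assumptions to be adapted to the actual MaxDB version and storage layout.

```python
import subprocess

DB, USER = "MAXDB1", "dbm,secret"  # placeholder connection data

def dbmcli(*args: str) -> None:
    subprocess.run(["dbmcli", "-d", DB, "-u", USER, *args], check=True)

# 1. Quiesce writes so the file system snapshot is consistent (the
#    suspend/resume commands may be named differently in other MaxDB
#    versions -- check the documentation).
dbmcli("util_execute", "SUSPEND", "LOGWRITER")
try:
    # 2. File system snapshot of the volume holding the data files;
    #    volume group and logical volume names are placeholders.
    subprocess.run(["lvcreate", "--snapshot", "--size", "2G",
                    "--name", "maxdb_snap", "/dev/vg_sapdb/lv_data"],
                   check=True)
finally:
    # 3. Allow writes again -- the snapshot preserves the old state.
    dbmcli("util_execute", "RESUME", "LOGWRITER")
```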

High availability with hot standby

No system can be protected 100 percent against downtime. A company therefore needs mechanisms that avoid an operational standstill if a system does fail. As a rule, environments are designed so that a replacement system can take over when the main system fails. This is also the idea behind the standby procedures for high availability in MaxDB.
Simpler standby systems are supplied with log backups from the production system at regular intervals. The standby dataset is older than the production dataset by the interval between a log backup on the production system and its import on the reserve system.
This slightly older dataset can be an advantage if, for example, an application error occurs in the meantime and causes data to be deleted or modified by mistake. In this case, the administrator can fall back on the old data on the standby system: the reimport of log backups, also known as log shipping, is interrupted, and the application systems are redirected to the reserve system.
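A log-shipping loop on the standby host could look roughly like the following Python sketch; the directory names, medium name, and the recover_replace call are assumptions to be adapted to the concrete standby setup.

```python
import shutil
import subprocess
import time
from pathlib import Path

# All names are hypothetical: the standby host receives finished log
# backups from the production system and replays them with a delay.
LOG_INBOX = Path("/standby/log_inbox")   # copied log backups arrive here
APPLIED = Path("/standby/log_applied")   # replayed backups are moved here
DB, USER = "MAXDB1", "dbm,secret"

def apply_log_backup(path: Path) -> None:
    # Replay one log backup on the standby instance; the exact recover
    # command depends on the MaxDB version and the standby configuration.
    subprocess.run(["dbmcli", "-d", DB, "-u", USER,
                    "recover_replace", "BackLog", str(path)], check=True)

APPLIED.mkdir(parents=True, exist_ok=True)
while True:
    for log in sorted(LOG_INBOX.glob("*.log")):
        apply_log_backup(log)
        shutil.move(str(log), str(APPLIED / log.name))
    time.sleep(300)  # the standby deliberately lags by this interval
```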

Hot standby with MaxDB

In addition, MaxDB offers hot standby operation, in which the production database and the reserve system share the log area, which, however, is written only by the production system. The standby system has its own, independent data area but only read access to the shared log area, until circumstances dictate that the reserve system become the production system.
During setup, the standby system is supplied with an initial copy of the production system’s dataset, generated using the storage system (flash copy or BCV, business continuance volume). After this, it constantly reads and executes all new log entries, so the production and standby systems always contain the same data. If required, execution of the log entries can be staggered, for example to protect against logical errors.
If the production system fails, the cluster solution switches from the defective master to the standby system, which then becomes the new master. The cluster solution uses an IP switch to route the application clients to the standby system. At the same time, the standby system applies the last log entries and switches to online mode to process the requests from the application clients. High availability is thus ensured.
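The takeover sequence can be outlined as follows; in practice the cluster software issues these steps, and the dbmcli call, virtual IP address, and interface name are placeholders only.

```python
import subprocess

DB, USER = "MAXDB1", "dbm,secret"  # placeholder connection data

def promote_standby() -> None:
    """Outline of a hot-standby takeover as described in the article;
    the concrete commands are normally issued by the cluster software."""
    # 1. The former standby applies the remaining entries from the shared
    #    log area and opens for write access (indicated here by db_online).
    subprocess.run(["dbmcli", "-d", DB, "-u", USER, "db_online"], check=True)
    # 2. Redirect the application clients by moving a virtual IP address;
    #    address and network interface are placeholders.
    subprocess.run(["ip", "addr", "add", "10.0.0.100/24", "dev", "eth0"],
                   check=True)

if __name__ == "__main__":
    promote_standby()
```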

Ulf Wendel
