The Transformation of Data Centers

February 18, 2009 by Markus Winter, SAP Managed Services

In the past, companies could only extend their business applications by bringing in new physical server units, which meant that growth inevitably translated into more servers in the data center. Over time, this approach led to complex, sprawling IT landscapes. At the same time, technological progress was making hardware ever more powerful, leaving many of the machines installed in today’s data centers vastly underutilized. It is not unusual for both midsize and large companies to operate several thousand servers with an average utilization rate of well below 20 percent.

Resuscitation on the x86 platform

Virtualization
Virtualization transcends physical server operation and can be broadly described as the abstraction of computer resources, whereby “resources” can refer to hardware, software, or a logical component (such as an IP or other access address). Aside from the virtualization of computers into virtual machines – virtual computing – this technology has many other applications at all levels of an IT infrastructure: Virtual networks (VLANs) are a good example. Virtualization therefore does not stop with servers – it will affect the entire data center.

In the mainframe environment, processing power is a comparatively precious commodity, which means that achieving significantly higher utilization rates is a priority. It is little wonder, then, that a technology established since the 1970s is making a comeback – albeit in a modified form – in data centers all over the world. This technology is known as virtualization – or, more accurately, server virtualization – which, as early as 30 years ago, made it possible to partition mainframes into several smaller units known as virtual machines (VMs).

Virtualization describes the introduction of a software layer that manages and distributes resources in order to logically separate the presentation of resources to users from the actual, physical resources available. But how does that actually help in operating a data center?

The “renaissance artists” of today’s data centers include VMware and Xen – just two of the many vendors and products that have succeeded in revitalizing the classic concept of virtual machine technology for use on the x86 platform. VMs make it possible to run several operating system environments in parallel on a single physical computer system. Although the products currently available on the market vary in terms of technical realization and functional scope, they all aim – directly or indirectly – to make better use of the physical resources available in a computer system by consolidating operating system environments.

Improved resource utilization

Thanks to a special piece of software known as the hypervisor, multiple VMs and operating systems can run in parallel on a single physical computer system, and end users are generally not even aware that this is happening. The main benefit is that IT operators can significantly pare down the number of physical server units required in the data center. In practice, it is not unusual for 20 or more smaller VMs to run on one large physical server. This obviously means cost savings for IT operators: less physical equipment, less floor space, less power, less cooling, and so on. Instead of acquiring a physical server – with a rigid, defined processing capacity – for each new application in the data center, they can deploy VMs to provide virtual resources for specific applications as and when required.
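
To make the consolidation arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. All figures are illustrative assumptions chosen to mirror the ratios mentioned above (utilization well below 20 percent, roughly 20 VMs per host) – they are not SAP measurements:

```python
# Back-of-the-envelope consolidation estimate. All numbers are
# illustrative assumptions, not SAP figures.
import math

physical_servers = 1000      # existing underutilized servers
avg_utilization = 0.15       # average utilization per old server
target_utilization = 0.60    # safe target on consolidated hosts
host_capacity_factor = 4.0   # one new host is ~4x as powerful

# Total work currently being done, in "old-server" units.
total_load = physical_servers * avg_utilization

# Capacity one consolidation host can safely absorb, same units.
usable_per_host = host_capacity_factor * target_utilization

hosts_needed = math.ceil(total_load / usable_per_host)
print(f"{physical_servers} servers -> {hosts_needed} hosts")
# With these assumptions: 1000 servers -> 63 hosts (~16:1)
```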

Flexibility of a VM

Adjusting the size and processing power of a VM takes just a few seconds; in some cases, changes can even be made while the VM is operational. A VM presents a virtual hardware environment that can be quickly and flexibly configured to fit the requirements of the application. In terms of the application life cycle, virtualization therefore brings additional cost benefits. When a company buys a new software application, it does not need to settle on a specific hardware size at the time of installation to cover the maximum possible utilization level in the future. Instead, the virtual machine simply adapts over time and frees up unused resources as and when required. For data center operators and customers alike, virtualization leads to a significant improvement in flexibility and a higher availability of services while reducing costs.
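
As one illustration of such on-the-fly changes, the following sketch uses the libvirt Python bindings – one of several hypervisor management APIs – to grow a running KVM guest. The guest name and the new sizes are hypothetical, and a live increase only works if the domain’s configured maximums allow it:

```python
# Minimal sketch: resize a running VM via the libvirt API.
# Guest name and target sizes are hypothetical; live changes
# require the domain's maximum vCPU/memory limits to allow them.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("erp-app-01")   # hypothetical guest name

# Add vCPUs to the live guest.
dom.setVcpusFlags(4, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# Raise the memory balloon target to 8 GiB (value is in KiB).
dom.setMemoryFlags(8 * 1024 * 1024, libvirt.VIR_DOMAIN_AFFECT_LIVE)

state, max_mem, mem, ncpu, _ = dom.info()
print(f"{dom.name()}: {ncpu} vCPUs, {mem // 1024} MiB memory")
conn.close()
```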

Risks of virtualization

For more information and configuration details relating to the operation of SAP systems in virtual environments, please see the SAP Community Network or refer to the relevant SAP Notes on SAP Service Marketplace (for example, 1122388, 895807, and 674851).

As is generally the case with consolidation scenarios, operations that use server consolidation and virtual machines must be planned and monitored with great care. Consolidation necessarily reduces the tolerance for downtime: in a conventionally operated infrastructure, if one system fails or slows down due to performance bottlenecks, only that one system is affected. With server virtualization, if one physical server breaks down, all the virtual servers running on that unit usually break down with it. It is therefore vital to weigh up the risks in advance and minimize them by making careful contingency plans and actively monitoring selected technical redundancies.
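
One simple way to keep this risk visible is to track how many VMs each physical host is carrying – the “blast radius” of a host failure. A minimal monitoring sketch, again using the libvirt bindings; the host URIs and the threshold are hypothetical, and a real setup would feed such alerts into the regular monitoring system:

```python
# Minimal sketch: report how many VMs would go down with each host.
# Host URIs and the threshold are hypothetical examples.
import libvirt

HOSTS = ["qemu+ssh://host01/system", "qemu+ssh://host02/system"]
MAX_VMS_PER_HOST = 20   # illustrative threshold

for uri in HOSTS:
    try:
        conn = libvirt.openReadOnly(uri)
    except libvirt.libvirtError as err:
        print(f"ALERT: {uri} unreachable ({err}) - its VMs may be down")
        continue
    running = conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE)
    note = " <- review contingency plan" if len(running) > MAX_VMS_PER_HOST else ""
    print(f"{uri}: {len(running)} running VMs{note}")
    conn.close()
```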

The same care applies to the sizing of application landscapes, as is customary in the SAP environment. By decoupling infrastructure and operation as described, virtualization can support, but cannot replace, the sizing procedure. If virtual environments are to run smoothly, it is essential to know the details of the specific usage scenario (number of users, expected data throughput, system classification, and so on).
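
To show where such scenario details end up, here is a toy user-based sizing calculation. The SAPS-per-user figures and activity classes below are placeholders, not official SAP sizing values – a real sizing would come from the SAP Quick Sizer and the concrete usage scenario:

```python
# Toy user-based sizing sketch. The SAPS-per-user figures are
# placeholders, not official SAP values; use the SAP Quick Sizer
# for real sizing projects.
SAPS_PER_USER = {"low": 2, "medium": 6, "high": 12}  # activity classes
HEADROOM = 1.3   # reserve for peaks and growth

users = {"low": 400, "medium": 150, "high": 30}

base = sum(SAPS_PER_USER[cls] * n for cls, n in users.items())
target = base * HEADROOM
print(f"Base load: {base} SAPS, initial VM target: {target:.0f} SAPS")
# The VM can still grow later - but this initial estimate must come
# from the usage scenario, not from virtualization itself.
```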

VLBA Lab Magdeburg
Markus Winter, the author of this article, is head of the virtualization task force at SAP Managed Services and coordinates global projects in virtual and cloud computing. As an external PhD student at the VLBA Lab, he is conducting research into the impact of new computing technologies on the architecture, operation, and service offerings in the data centers of the future.

The VLBA Lab Magdeburg was established in November 2006 as part of the business information technology working group at the Otto von Guericke University in Magdeburg, Germany. This research group is concerned with the design, development, and operation of very large business applications (VLBAs). SAP and T-Systems are involved with the work of the VLBA Lab through the SAP University Competence Center (SAP UCC), thus ensuring that current issues in the worlds of industry and business are addressed in long-term research projects.
http://www.vlba-lab.de
