Compared to read/write operations on mechanical hard drives, storing and accessing all of your data directly in server memory offers a dramatic increase in speed. It also becomes possible to query any area of memory directly. The new in-memory databases organize their information in columns rather than the usual rows; for more, check out the article “Unveiling SAP TechEd 2010”.
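The row-versus-column distinction is easy to see in miniature. The sketch below (illustrative only; the table and field names are invented, and this is not SAP code) stores the same small table both ways: in a column-oriented layout, all values of one attribute sit together, so an aggregate over a single column reads one contiguous array instead of walking every field of every record.

```python
# The same table stored row-wise and column-wise.
rows = [
    {"customer": "Alpha", "region": "EMEA", "revenue": 1200},
    {"customer": "Beta",  "region": "APJ",  "revenue":  800},
    {"customer": "Gamma", "region": "EMEA", "revenue": 1500},
]

# Column-wise: one list per attribute.
columns = {
    "customer": ["Alpha", "Beta", "Gamma"],
    "region":   ["EMEA", "APJ", "EMEA"],
    "revenue":  [1200, 800, 1500],
}

# Summing revenue row-wise touches every record in full ...
total_row_wise = sum(r["revenue"] for r in rows)

# ... while column-wise the engine scans a single contiguous list.
total_column_wise = sum(columns["revenue"])

assert total_row_wise == total_column_wise == 3500
```

Both sums are, of course, identical; the difference is how much memory the scan has to touch along the way, which is where the columnar layout earns its speed.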
While SAP cofounder Hasso Plattner has been touting the advantages of storing data like “uncooked spaghetti” since SAPPHIRE 2009 in Orlando, the company has been working with the likes of Cisco, Fujitsu, Hewlett-Packard, IBM, and Intel on SAP High-Performance Analytic Appliance (SAP HANA) – software that focuses primarily on the rapid analysis of business information. Given the variety of hardware manufacturers involved in SAP HANA’s development, it should come as no surprise that the software is set for release on a series of different platforms in 2011.
Read on: Why real time?
In the video below, Vishal Sikka, SAP Board Member and CTO, comments on the official HANA launch:
Business in real time – what’s in it for you?
Nowadays, virtually every company – regardless of industry – is trying to keep its head above an unrelenting flood of information. Organizations are bringing their operational and analytical tools to bear on ever-growing volumes of data. Generating reports from ERP and CRM data is often a time-consuming endeavor, and since the information is at least two hours old by the time it arrives, it would be quite a stretch to describe the process as occurring in real time. All in all, the various interfaces and software applications involved add up to a substantial total cost of ownership.
In-memory computing enables you to combine historical key figures and trend indicators with current data sources to conduct predictive analysis. As a result, sudden changes are immediately visible in a way that conventional technology simply cannot illustrate. The latest business intelligence applications utilize online analytical processing (OLAP) cubes, which offer a multidimensional perspective of stored data.
Essentially, the idea is to implant the functions of relational databases and OLAP cubes directly into server RAM. This makes it possible for sales personnel, for example, to access data in real time and refine their queries based on products, revenue, or other criteria.
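That kind of interactive refinement can be sketched in a few lines. The example below is a deliberately simplified, hypothetical model of an OLAP-style query – the table, dimensions, and figures are invented for illustration, not drawn from SAP HANA itself: the data sits in memory, and the user rolls an aggregate up along one dimension, then slices and refines it along another.

```python
from collections import defaultdict

# An in-memory fact table (invented sample data).
sales = [
    {"product": "A", "region": "EMEA", "revenue": 300},
    {"product": "A", "region": "APJ",  "revenue": 150},
    {"product": "B", "region": "EMEA", "revenue": 500},
    {"product": "B", "region": "APJ",  "revenue": 250},
]

def roll_up(records, dimension):
    """Aggregate revenue along one dimension of the cube."""
    totals = defaultdict(int)
    for r in records:
        totals[r[dimension]] += r["revenue"]
    return dict(totals)

# Roll up by product, then by region.
by_product = roll_up(sales, "product")   # {'A': 450, 'B': 750}
by_region = roll_up(sales, "region")     # {'EMEA': 800, 'APJ': 400}

# "Slicing": restrict to one region, then refine by product.
emea = [r for r in sales if r["region"] == "EMEA"]
emea_by_product = roll_up(emea, "product")   # {'A': 300, 'B': 500}
```

Because everything stays in RAM, each refinement is just another pass over the same structures – no round trip to a disk-based warehouse in between.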
Read on: Squeeze 2.7 TB onto 700 GB
Two billion transactions by 2020
According to the projections of the market researchers at IDC, we will be able to move up to 50 billion gigabytes of data and conduct two billion financial transactions by the year 2020. Four billion of us will live connected lifestyles, supported by some 31 billion online devices and 25 million software applications.
Fujitsu, HP, and IBM offering blade servers with 2TB of RAM
To enable customers to take advantage of the fully in-memory database technology offered by SAP HANA, Fujitsu, Hewlett-Packard, and IBM have introduced blade servers equipped with Intel processors and chipsets.
These new blade servers sport motherboards based on the Intel Westmere-EX platform, which supports DIMM memory modules of up to 32GB each. The ability to install up to 64 of these modules puts the maximum memory of one server at 2TB. For business purposes, that is ample headroom: most of the databases today’s companies maintain are no larger than 500GB. In addition to this generous RAM capacity, the blades can contain as many as 64 CPU cores (with Intel’s Hyper-Threading technology) – enough for most computing operations.
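The 2TB figure follows directly from the slot count and module size, as a quick back-of-the-envelope check confirms:

```python
# 64 DIMM slots, each holding a 32 GB module.
dimm_capacity_gb = 32
dimm_slots = 64

total_gb = dimm_capacity_gb * dimm_slots   # 2048 GB, i.e. 2 TB

# Comfortable headroom over a typical 500 GB business database.
headroom = total_gb / 500   # roughly a factor of 4
```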
NewDB squeezes 2.7TB onto 700GB
Despite predictions of even more massive data volumes to come, in-memory databases will still be able to offer significant gains in speed. For example, a set of CRM data that takes up 2.5TB on a conventional hard drive occupies only 600GB when arranged in SAP HANA’s column-wise structure. Under ideal conditions, the compression rate can reach a factor of 10.
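One reason columnar layouts compress so well is that a single column often contains only a handful of distinct values, which makes techniques such as dictionary encoding very effective. The sketch below is a simplified, generic illustration of that idea (the data and figures are invented, not SAP benchmarks): each value in a repetitive column is replaced by a small integer code, and the original column can be reconstructed by a simple lookup.

```python
# A repetitive column, as is typical of CRM-style data.
column = ["EMEA", "EMEA", "APJ", "EMEA", "Americas", "APJ"] * 1000

# Build the dictionary of distinct values ...
dictionary = sorted(set(column))
code_of = {value: code for code, value in enumerate(dictionary)}

# ... and store the column as compact integer codes.
encoded = [code_of[v] for v in column]

# Decoding is a plain lookup, so queries can often run on the
# codes directly without decompressing the column at all.
decoded = [dictionary[c] for c in encoded]
assert decoded == column
```

With only three distinct strings in 6,000 entries, the encoded column holds small integers plus one tiny dictionary – a rough intuition for how terabytes of disk-resident data can shrink to a few hundred gigabytes in memory.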