Extracting value from big and enterprise data is being held back by complexity.
In the “Rime of the Ancient Mariner,” written in the late 18th century by Samuel Taylor Coleridge, an old and becalmed sailor complains that there is “water, water every where, nor any drop to drink.” Many companies feel the same level of frustration with their efforts to extract value from the oceans of data they are now collecting and storing. There is more data — and there are more ways to store and use it — than ever before.
While this data holds business opportunity, corporate data landscapes are growing increasingly complex, and it is getting harder and costlier for organizations to understand the data they have and to capture the maximum value from it in order to innovate and drive their digital agenda forward.
Big Data is important to enterprises because it enables them to gain new insights and take new actions within their business based on a wide variety of internal and external data, much of which was not available before. A Big Data-centric organization is significantly better able to outpace the competition, exceed customer expectations, and accelerate its business growth by leveraging new business models that were not possible before.
But while companies recognize that harnessing the power of Big Data, including social media and sensor-derived IoT data, can yield valuable insights, they are often stymied by the complexities of their data landscapes and the problems they encounter when they try to combine Big Data — particularly unstructured data — with more traditional enterprise data.
Recognizing this, many companies have already invested heavily in Big Data systems in an effort to capture, store, and process just a fraction of the data that is now being generated on a daily basis. Consider this: More data will be created in 2017 than in the previous 5,000 years combined.
Often, organizations begin to implement a Big Data strategy by cobbling together best-of-breed Big Data platform technologies from multiple vendors and various open source community sources. This results in added complexities required to integrate, manage, and secure the environment.
Enterprise data landscapes themselves have also grown increasingly complex, with proliferating data sources and destinations such as data lakes, enterprise data warehouses (EDWs), and data marts. Mergers and acquisitions and the rapid expansion of new data sources, such as sensor data or mobile data, have also compounded the data complexity problem.
At the same time, rivalries — even within IT departments — between ‘owners’ of data often stymie data-based projects. Enterprise data owners argue that Big Data is not ‘enterprise ready’ and often lacks governance controls, while Big Data advocates argue that their enterprise data counterparts are slow and too cautious.
In a new global study published this week by SAP, three-quarters of IT decision-makers say their data landscape is so complex that it limits agility, less than one-in-five say they have a full view of their data and can access it immediately, and 88 percent admit that there is much more they could do with their data (see the infographic).
Among their other concerns were data quality and accessibility; only half of the survey respondents said their data was accessible to a wide variety of business stakeholders. This lack of access to data in part reflects the way corporate data systems have evolved.
Data kept in silos (Hadoop implementations, cloud object storage, data warehouses, data marts, etc.) across the enterprise prevents businesses from being agile and capitalizing on the full value of their data. Users can’t access and work with the data they need across the silos where it is stored. In particular, it is complex, time consuming, and costly to connect Big Data with enterprise data and the business processes required to gain insight and value from it.
Unfortunately, businesses generally cannot solve the complexity of their landscape simply by storing all their data in a Hadoop data lake. Hadoop systems, while powerful, often lack the extent of governance and security measures that enterprises require, and they typically need highly trained and scarce data specialists to operate them.
As enterprise landscapes grow more complex, providing effective governance becomes more difficult. Without governance across data sources, organizations find it difficult to ensure proper data security or to trust and rely on the data’s accuracy, creating risk for anyone using analytics or operational applications that depend on that data. Tracing data lineage, forecasting the impact of changes, and managing security and privacy requirements at the hub level are all critical aspects of a trusted enterprise landscape.
Without effective ways to manage, govern, and process across the massive amounts of data available to organizations today, there is a real danger that this data will remain underutilized and in some cases sit wasted in data silos that are costly to maintain or do not add value to the company.
SAP’s solution to these data management problems is SAP Data Hub, launched this week. SAP Data Hub is designed to sit at the heart of a modern data framework and is responsible for governance, data pipelining, and data sharing capabilities across the connected data landscape.
“Companies are struggling to work across multiple types and volumes of information that are kept in a variety of locations across the enterprise,” said Bernd Leukert, member of the Executive Board of SAP SE, Products and Innovation. “SAP Data Hub simplifies the data landscape, making it manageable and scalable while providing the governance that companies need. It also makes it easier to build applications that extract value from data across the organization, no matter if it lies in the cloud or on premise, in the data lake or the enterprise data warehouse, or in an SAP or non-SAP system.”
No longer, it seems, will there be “data, data everywhere, but no way to help us think.”