All Paths Lead To A Federation Data Lake

Is your organization constrained by 2nd platform data warehouse technologies, with limited or no budget to move toward 3rd platform agile technologies such as a Data Lake? As an EMC customer, you have the advantage of leveraging existing EMC investments to develop a Federation Data Lake at minimal cost. Additionally, the Federation Data Lake can generate healthy returns, as it is packaged with the expertise needed to immediately execute on data lake use cases such as data warehouse ETL offloading and archiving.

Data Lake

With the release of William Schmarzo’s Five Tactics to Modernize Your Existing Data Warehouse, I wanted to explore whether the Dean of Big Data views these data warehouse modernization tactics as paths that ultimately lead to a Federation Data Lake.

1.  What is a Data Lake and who should care?

The data lake is a modern approach to data analytics that takes advantage of the processing power and cost advantages of Hadoop. It allows you to store all of the data that you think might be important in a central repository, as is. Leaving the data in its raw form is key, since you don’t impose a pre-determined schema, or ‘schema on load’. Schema on load is the data warehousing practice of structuring data up front to optimize queries, but it also strips out information that could be useful for later analysis. This flexibility allows the data lake to feed all downstream applications such as a data warehouse, analytic sandboxes, and other analytic environments.
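To make the ‘store it as is, apply structure later’ idea concrete, here is a minimal schema-on-read sketch in PySpark. The paths, the clickstream feed, and the event_time field are all hypothetical; the point is simply that raw data is landed untouched and a schema is projected only at query time.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-sketch").getOrCreate()

# 1. Land the feed in the lake exactly as it arrived -- no schema, no transformation.
#    (Hypothetical paths; in practice these would be HDFS or object-store locations.)
raw = spark.read.text("/landing/clickstream/2014-09-01/")
raw.write.mode("append").text("/data_lake/raw/clickstream/")

# 2. Apply a schema only when a consumer actually queries the data ("schema on read").
events = spark.read.json("/data_lake/raw/clickstream/")
events.createOrReplaceTempView("events")

# 3. Different consumers can project different views from the same untouched raw data.
spark.sql("""
    SELECT to_date(event_time) AS day, COUNT(*) AS clicks
    FROM events
    GROUP BY to_date(event_time)
""").show()
```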

Everybody should care, but especially the data warehousing and data science teams. It provides a line of demarcation between the data warehouse team, which is production/SLA driven, and the data science team, which is ad-hoc/exploratory driven. There is a natural point of friction between these teams, since the workloads generated by data science tools such as SAS negatively affect data warehouse SLAs. With a data lake, the data science team can freely access the data they need without affecting data warehouse SLAs.

The other benefit a data lake provides for a data warehouse team is ETL offload. The data lake can perform large-scale, complex ETL processing, freeing up resources in the expensive data warehouse. I’m working with a large hospital right now on this ETL offload use case; their data warehouse costs keep rising because they must add more resources to prevent ETL processing from negatively affecting reporting windows.
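As a rough sketch of this ETL offload pattern (hypothetical datasets, column names, and paths, with PySpark assumed as the processing engine on the lake), the expensive join and aggregation run in Hadoop, and only the small, finished result set is handed back for loading into the warehouse:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-offload-sketch").getOrCreate()

# The heavy lifting happens in the data lake, not in the warehouse.
# (Hypothetical datasets and column names.)
admissions = spark.read.parquet("/data_lake/raw/admissions/")
charges = spark.read.parquet("/data_lake/raw/charges/")

# A large-scale join and aggregation that would otherwise burn warehouse resources.
summary = (
    admissions.join(charges, "encounter_id")
    .groupBy("department", F.to_date("admit_time").alias("admit_date"))
    .agg(
        F.count("*").alias("encounters"),
        F.sum("charge_amount").alias("total_charges"),
    )
)

# Only the small, query-ready result is handed off for loading into the warehouse.
summary.write.mode("overwrite").parquet("/data_lake/curated/dept_daily_summary/")
```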

2.  What is the Federation Data Lake solution?

Built through the testing of different storage and processing technologies, the Federation Data Lake provides a technology reference architecture, with supporting services, that spans the Federation – EMC II, Pivotal, and VMware.

It is a package that really helps customers accelerate the modernization of their data warehouse environment into a data lake – not only through a proven architecture but also with global services to assist with the migration.

3.  Who are the ideal candidates for the Federation Data Lake and why?

The ideal candidate is any large data warehouse organization that is having trouble meeting ETL windows or is maxing out on resources. The perception is that a Data Lake is a data science tool, but it is also a great tool for data warehouse teams doing ETL processing. Moving ETL processing from an expensive data warehouse to a low-cost Data Lake can yield a 20-50X savings.
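The 20-50X figure is easiest to see as a cost-per-terabyte comparison. The numbers below are purely illustrative, not EMC or vendor pricing, but they show the shape of the calculation:

```python
# Purely illustrative cost-per-terabyte figures -- not vendor pricing.
warehouse_cost_per_tb = 30000.0   # fully loaded EDW cost ($ per usable TB)
hadoop_cost_per_tb = 1000.0       # commodity Hadoop cluster cost ($ per usable TB)

etl_staging_tb = 100              # hypothetical volume of ETL/staging data to offload

warehouse_spend = etl_staging_tb * warehouse_cost_per_tb
hadoop_spend = etl_staging_tb * hadoop_cost_per_tb

print(f"Warehouse spend: ${warehouse_spend:,.0f}")
print(f"Hadoop spend:    ${hadoop_spend:,.0f}")
print(f"Savings factor:  {warehouse_cost_per_tb / hadoop_cost_per_tb:.0f}x")
```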

4.  One of the biggest barriers to getting value from Big Data or a Data Lake is the skills shortage. How does the Federation Data Lake address this issue?

The Federation Data Lake addresses this issue in three ways. First, by putting together a technology reference architecture, we accelerate the development of a Data Lake. Second, by packaging up expertise through EMC Global Services, we help customers get started quickly, identifying the use cases with the most business impact and creating project plans for execution. Finally, the EMC Big Data curriculum is aligned with the Federation Data Lake to train executives, business leaders, and data scientists to successfully identify use cases and execute on them. For example, we train users on new technologies such as Hadoop as a more modern, powerful, and agile approach to ETL processing.

5.  Gartner says beware of the data lake fallacy, citing ‘Data lakes focus on storing disparate data and ignore how or why data is used, governed, defined and secured’. How does the Federation Data Lake address this issue?

My issue with Gartner’s comment is that they take the concept of a Data Lake and pick it apart, whereas EMC approaches the Data Lake as a means to solve technical and business problems. For example, we absolutely believe you need data governance, and it should not be ignored in a data lake environment. EMC Global Services helps organizations with their data governance strategy by identifying the business processes the Data Lake will support. A single business process may draw on POS data, which will be highly governed; social media data, which may be lightly governed; and market intelligence data, which may need no governance at all.
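One way to picture this tiered approach is as a governance policy expressed per data source. The sketch below uses made-up source names and control values purely to illustrate the idea; it is not an EMC tool or API:

```python
# Hypothetical governance tiers per data source -- illustrative only.
GOVERNANCE_POLICY = {
    "pos_transactions":    {"tier": "high",  "pii_masking": True,  "retention_days": 2555},
    "social_media_feed":   {"tier": "light", "pii_masking": True,  "retention_days": 365},
    "market_intelligence": {"tier": "none",  "pii_masking": False, "retention_days": 90},
}

def required_controls(source: str) -> dict:
    """Return the controls a lake ingest job should apply for a source.

    Unknown sources default to the strictest tier.
    """
    return GOVERNANCE_POLICY.get(
        source, {"tier": "high", "pii_masking": True, "retention_days": 2555}
    )

print(required_controls("social_media_feed"))
```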

Big Data Pains & Gains From A Real Life CIO

What does it take to make CIO Magazine’s Top 100 List? A Big Data victory is one of the ingredients.
Michael Cucchi, Sr Director of Product Marketing at Pivotal, had the privilege of speaking with one of the winners – EMC CIO Vic Bhagat – about the pains and gains of EMC’s Big Data initiative, and I have put together a summary of that interview below. EMC IT’s approach to Big Data is exactly what the EVP Federation enables organizations to do: first collect any and all data in a Data Lake, then deploy the right analytic tools that your people already know how to use to analyze the data, and finally adopt agile development so you can turn those insights into applications rapidly.

1. Why is Big Data important to your business?

Continue reading

Hadoop-as-a-Service: An On-Premise Promise?

Hadoop-as-a-Service (HaaS) generally refers to Hadoop in the cloud, a handy alternative to on-premise Hadoop deployments for organizations whose overwhelmed data center administrators need to incorporate Hadoop but don’t have the resources to do so. What if there were also a promising option for building and maintaining Hadoop clusters on-premise that could equally be called HaaS? The EMC Hybrid Cloud (EHC) enables just this – Hadoop in the hybrid cloud.

EHC, announced at EMC World 2014, is a new end-to-end reference architecture based on a Software-Defined Data Center design comprising technologies from across the EMC Federation of companies: EMC II storage and data protection, Pivotal CF Platform-as-a-Service (PaaS) and the Pivotal Big Data Suite, VMware cloud management and virtualization solutions, and VMware vCloud Hybrid Service. EHC’s Hadoop-as-a-Service was demonstrated at last week’s VMworld 2014 in San Francisco as the underpinnings of a Virtual Data Lake.

EHC leverages these tight integrations across the Federation so that customers can extend their existing investments for automated provisioning and self-service, automated monitoring, secure multi-tenancy, chargeback, and elasticity to address the requirements of IT, developers, and lines of business. I spoke with Ian Breitner, Global Solutions Marketing Director for Big Data, to explain why EMC’s approach to HaaS should be considered over other Hadoop cloud offerings.

1.  In your opinion, what are the key characteristics of HaaS?

Continue reading

Pivotal Big Data Suite: Eliminating the Tax On A Growing Hadoop Cluster

The promise of Big Data is analyzing more data to gain unprecedented insight, but Hadoop pricing can place serious constraints on the amount of data that can actually be stored for analysis. Each time a node is added to a Hadoop cluster to increase storage capacity, you are charged for it. Because this pricing model runs counter to the philosophy of Big Data, Pivotal has removed the tax on storing data in Hadoop with its announcement of the Pivotal Big Data Suite.

Through a Pivotal Big Data Suite subscription, customers store as much data as they want in fully supported Pivotal HD, paying only for value-added services per core – Pivotal Greenplum Database, GemFire, SQLFire, GemFire XD, and HAWQ. The significance of this new consumption model is that customers can now store as much Big Data as they want, but are charged only for the value they extract from it.

[Figure: Pivotal Big Data Suite diagram]

*Calculate your savings with Pivotal Big Data Suite compared to traditional Enterprise Data Warehouse technologies.
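As a purely illustrative back-of-the-envelope version of that comparison (made-up prices and cluster sizes, not Pivotal’s actual rates), the sketch below shows why per-core pricing for value-added services decouples cost from raw storage growth:

```python
# Hypothetical figures only -- not Pivotal's actual rates.
per_node_license = 4000.0   # $/node/year under a "pay per Hadoop node" model
per_core_service = 1200.0   # $/core/year for value-added services (SQL, in-memory, etc.)

hadoop_nodes = 200          # the cluster keeps growing as more data is stored
service_cores = 64          # only the cores actually running value-added services

per_node_model_cost = hadoop_nodes * per_node_license
suite_style_cost = service_cores * per_core_service   # storing data itself carries no charge

print(f"Per-node licensing:      ${per_node_model_cost:,.0f}/yr (grows with data volume)")
print(f"Per-core value services: ${suite_style_cost:,.0f}/yr (grows with value extracted)")
```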

Additionally, Pivotal Big Data Suite removes the guesswork that comes with Big Data’s diverse data processing needs. With a flexible subscription covering your choice of real-time, interactive, and batch processing technologies, organizations are no longer locked into a specific technology by a contract. At any point in time, as Big Data applications grow and Data Warehouse applications shrink, you can spin licenses up or down across the value-added services without incurring additional costs. This pooled approach eliminates the procurement cycles for new technologies that otherwise lead to delayed projects, additional costs, and more data silos.

I spoke with Michael Cucchi, Senior Director of Product Marketing at Pivotal, to explain how Pivotal Big Data Suite radically redefines the economics of Big Data so organizations can achieve the Data Lake dream.

1. What Big Data challenges does Big Data Suite address and why?

Continue reading