EMC Hadoop Starter Kit ViPR Edition: Creating a Smarter Data Lake

Pivotal HD offers a wide variety of data processing technologies for Hadoop – real-time, interactive, and batch. Add EMC Isilon scale-out NAS as integrated data storage to Pivotal HD and you have a shared data repository with multi-protocol support, including HDFS, to service a wide variety of data processing requests. This smells like a Data Lake to me – a general-purpose data storage and processing resource center where Big Data applications can develop and evolve. Add EMC ViPR software-defined storage to the mix and you have the smartest Data Lake in town, one that supports additional protocols/hardware and automatically adapts to changing workload demands to optimize application performance.

EMC Hadoop Starter Kit, ViPR Edition, now makes it easier to deploy this ‘smart’ Data Lake with Pivotal HD and other Hadoop distributions such as Cloudera and Hortonworks. Simply download this step-by-step guide and you can quickly deploy a Hadoop-based Big Data analytics environment, configuring Hadoop to utilize ViPR for HDFS, with Isilon hosting the Object/HDFS data service. Although in this guide Isilon is the storage array that ViPR deploys objects to, other storage platforms are also supported – EMC VNX, NetApp, OpenStack Swift, and Amazon S3.

I spoke with the creator of this starter kit, James F. Ruddy, Principal Architect for the EMC Office of the CTO, to explain why every organization should use this starter kit to optimize its IT infrastructure for Hadoop deployments.

1.  The original EMC Hadoop Starter Kit released last year was a huge success.  Why did you create ViPR Edition?

Organizations that deploy Hadoop as a dedicated environment are creating more data silos. This guide will enable customers to minimize data silos by deploying any of the three most popular Hadoop distributions (Pivotal, Cloudera, Hortonworks) on EMC ViPR software-defined storage, letting organizations leverage existing investments in storage platforms and infrastructure for Big Data analytics. Massive amounts of data already live on these storage platforms; ViPR makes those arrays analytics-ready without having to create a separate, dedicated Hadoop environment.

2.  What are the best use cases for HSK ViPR Edition?

First, you can instantly deploy a Big Data repository by utilizing existing enterprise storage capacity as a “Data Lake” on which to enable analytics.

Second, you can reduce the growth in dedicated Hadoop environments, since large volumes of unstructured data already living in EMC storage or third-party arrays such as NetApp can now be exploited through Hadoop programs.

Third, you can eliminate the need to keep multiple copies of the same data for different types of applications through ViPR’s support for multiple protocols and mixed workloads. ViPR enables dual-mode access to the data under its management, so object-based workloads and analytics applications can manipulate the same data: ViPR provides S3, Swift, and Atmos API support as well as HDFS API access.

3.  So what are the prerequisites for HSK ViPR Edition?

The guides are designed to enable the use of ViPR as a Hadoop-compatible file system that resides as object storage on top of an existing ViPR-supported file storage array. So to start, you need a file storage array that you can deploy ViPR data services in front of. For the compute side, you need either physical or virtual machines to run the Hadoop cluster; anywhere from one node to many can be used. The guides walk you through the automated deployment tools available in each distribution and show how to use the native management tools to integrate ViPR HDFS services.
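At a high level, pointing a Hadoop cluster at an alternate file system like ViPR comes down to registering a `FileSystem` implementation for a URI scheme in `core-site.xml`. The fragment below is only an illustrative sketch of that mechanism: the `viprfs` scheme, the class name, and the URI format shown here are assumptions, and the starter kit guide supplies the exact property names and values for your ViPR version.

```xml
<!-- Illustrative core-site.xml fragment. Hadoop's general mechanism is
     fs.<scheme>.impl; the specific names/values below are assumptions,
     not the kit's documented settings. -->
<configuration>
  <property>
    <name>fs.viprfs.impl</name>
    <!-- Assumed class name; consult the starter kit guide. -->
    <value>com.emc.hadoop.fs.vipr.ViPRFileSystem</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <!-- Assumed bucket.namespace.installation URI format. -->
    <value>viprfs://mybucket.mynamespace.myinstallation/</value>
  </property>
</configuration>
```

Once the client JAR for the file system is on the cluster’s classpath and these properties are distributed, standard `hadoop fs` commands and MapReduce jobs address the ViPR-backed storage like any other HDFS endpoint.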


Pivotal Big Data Suite: Eliminating the Tax On A Growing Hadoop Cluster

The promise of Big Data is about analyzing more data to gain unprecedented insight, but Hadoop pricing can place serious constraints on the amount of data that can actually be stored for analysis.  Each time a node is added to a Hadoop cluster to increase storage capacity, you are charged for it.  Because this pricing model is counterintuitive to the philosophy of Big Data, Pivotal has removed the tax to store data in Hadoop with its announcement of Pivotal Big Data Suite.

Through a Pivotal Big Data Suite subscription, customers store as much data as they want in fully supported Pivotal HD, paying only for value-added services per core – Pivotal Greenplum Database, GemFire, SQLFire, GemFire XD, and HAWQ. The significance of this new consumption model is that customers can now store as much Big Data as they want, but are charged only for the value they extract from it.
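The difference between the two consumption models is easy to see with back-of-the-envelope arithmetic. The sketch below is a hypothetical cost model: the per-node and per-core prices, node capacity, and core counts are all made-up figures chosen only to show the shape of the comparison, not actual Pivotal pricing.

```python
import math

# All figures below are hypothetical, for illustration only.
NODE_CAPACITY_TB = 24      # usable storage per Hadoop node (assumed)
PER_NODE_LICENSE = 4_000   # annual license per node (assumed)
PER_CORE_LICENSE = 1_000   # annual license per value-added-services core (assumed)

def per_node_cost(data_tb: float) -> int:
    """Traditional model: every node added for storage capacity is licensed."""
    nodes = math.ceil(data_tb / NODE_CAPACITY_TB)
    return nodes * PER_NODE_LICENSE

def per_core_cost(analytics_cores: int) -> int:
    """Suite model: storage is untaxed; only processing cores are licensed."""
    return analytics_cores * PER_CORE_LICENSE

# Grow the data 10x while the analytics workload stays fixed at 64 cores:
for data_tb in (100, 500, 1000):
    print(data_tb, per_node_cost(data_tb), per_core_cost(64))
```

Under the per-node model the license bill scales with stored terabytes; under the per-core model it scales only with the processing actually applied, which is the point of the announcement.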

[Figure: Pivotal Big Data Suite diagram]

Calculate your savings with Pivotal Big Data Suite compared to traditional Enterprise Data Warehouse technologies.

Additionally, Pivotal Big Data Suite removes the guesswork associated with the diverse data processing needs of Big Data. With a flexible subscription covering your choice of real-time, interactive, and batch processing technologies, organizations are no longer locked into a specific technology by a contract. At any point in time, as Big Data applications grow and Data Warehouse applications shrink, you can spin licenses up or down across the value-added services without incurring additional costs. This pooled approach eliminates the procurement cycles that delay projects, add costs, and create more data silos.

I spoke with Michael Cucchi, Senior Director of Product Marketing at Pivotal, to explain how Pivotal Big Data Suite radically redefines the economics of Big Data so organizations can achieve the Data Lake dream.

1. What Big Data challenges does Big Data Suite address and why?

Continue reading

RSA and Pivotal: Laying the Foundation for a Wider Big Data Strategy

Building from years of security expertise, RSA was able to exploit Big Data to better detect, investigate, and understand threats with its RSA Security Analytics platform launched last year. Similarly, Pivotal leveraged its world-class Data Science team in conjunction with its Big Data platform to deliver Pivotal Network Intelligence for enhanced threat detection using statistical and machine learning techniques on Big Data. Utilizing both RSA Security Analytics and Pivotal Network Intelligence together, customers were able to identify and isolate potential threats faster than competing solutions for better risk mitigation.

As a natural next step, RSA and Pivotal last week announced the availability of the Big Data for Security Analytics reference architecture, solidifying a partnership that brings together the leaders in Security Analytics and Big Data/Data science. RSA and Pivotal will not only enhance the overall Security Analytics strategy, but also provide a foundation for a broader ‘IT Data Lake’ strategy to help organizations gain better ROI from these IT investments.

RSA’s reference architecture utilizes Pivotal HD, enabling security teams to gain access to a scalable platform with rich analytic capabilities from Pivotal tools and the Hadoop ecosystem to experiment and gain further visibility around enterprise security and threat detection. Moreover, the combined Pivotal and RSA platform allows organizations to leverage the collected data for non-security use cases such as capacity planning, mean-time-to-repair analysis, downtime impact analysis, shadow IT detection, and more.

[Figure: RSA-Pivotal reference architecture – a distributed architecture allows for enterprise scalability and deployment]

I spoke with Jonathan Kingsepp, Director of Federation EVP Solutions from Pivotal to discuss how the RSA-Pivotal partnership allows customers to gain much wider benefits across their organization.

1.  What are the technology components of this new RSA-Pivotal reference architecture?

Continue reading

EMC and RainStor Optimize Interactive SQL on Hadoop

Pivotal HAWQ was one of the most groundbreaking technologies entering the Hadoop ecosystem last year through its ability to execute complete ANSI SQL on large-scale datasets managed in Pivotal HD. This was great news for SQL users – organizations heavily reliant on SQL applications and common BI tools such as Tableau and MicroStrategy can leverage these investments to access and analyze new data sets managed in Hadoop.

Similarly, RainStor, a leading enterprise database known for its efficient data compression and built-in security, also enables organizations to run ANSI SQL queries against data in Hadoop – highly compressed data. Due to the reduced footprint from extreme data compression (typically 90%+ savings), RainStor enables users to run analytics on Hadoop much more efficiently. In fact, queries often run significantly faster thanks to the reduced footprint plus filtering capabilities that determine what not to read. This allows customers to minimize infrastructure costs and maximize insight when analyzing larger data sets.
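The footprint arithmetic behind that claim is worth making concrete. In the sketch below, the 90% compression figure comes from the article, while the dataset size and the 3x HDFS replication factor typically used on DAS are assumptions added for illustration.

```python
# Back-of-the-envelope effect of RainStor-style compression on a
# Hadoop storage footprint. 90% savings is the article's figure;
# the 100 TB dataset and 3x HDFS replication are assumptions.

RAW_TB = 100                 # logical dataset size (assumed)
COMPRESSION_SAVINGS = 0.90   # "typically 90%+ less" per the article
HDFS_REPLICATION = 3         # default HDFS replication on DAS (assumed)

compressed_tb = RAW_TB * (1 - COMPRESSION_SAVINGS)         # ~10 TB on disk
replicated_raw_tb = RAW_TB * HDFS_REPLICATION              # 300 TB on DAS

# Reading 10x less data per query is the "I/O multiplier": the same
# disks and network effectively move 10x more logical data.
io_multiplier = RAW_TB / compressed_tb

print(compressed_tb, replicated_raw_tb, io_multiplier)
```

So under these assumptions a 100 TB logical dataset that would occupy 300 TB as replicated, uncompressed HDFS blocks fits in roughly 10 TB, and every query scans an order of magnitude less physical data.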

Serving some of the largest telecommunications and financial services organizations, RainStor enables customers to readily query and analyze petabytes of data instead of archiving data sets to tape and then having to reload them whenever they are needed for analysis. RainStor chose to partner with EMC Isilon scale-out NAS for its storage layer to manage these petabyte-scale data environments even more efficiently. Using Isilon, compute and storage for Hadoop workloads are decoupled, enabling organizations to balance CPU and storage capacity optimally as data volumes and the number of queries grow.

[Figure: RainStor on EMC Isilon]

Furthermore, organizations are not only able to run any Hadoop distribution of choice with RainStor-Isilon, they can also run multiple distributions of Hadoop against the same compressed data. For example, a single copy of the data managed in RainStor-Isilon can service Marketing’s Pivotal HD environment, Finance’s Cloudera environment, and HR’s Apache Hadoop environment.

To summarize, by running RainStor and Hadoop on EMC Isilon, you achieve:

  • Flexible Architecture Running Hadoop on NAS and DAS together: Companies leverage DAS local storage for hot data where performance is critical and use Isilon for mass data storage. With RainStor’s compression, you efficiently move more data across the network, essentially creating an I/O multiplier.
  • Built-in Security and Reliability: Data is securely stored with built-in encryption and data masking, in addition to user authentication and authorization. With very little overhead, EMC Isilon FlexProtect provides a reliable, highly available Big Data environment.
  • Improved Query Speed: Data is queried using a variety of tools, including standard SQL, BI tools, Hive, Pig, and MapReduce. With built-in filtering, queries speed up by a factor of 2-10X compared to Hive on HDFS/DAS.
  • Compliant WORM Solution: For absolute retention and protection of business critical data, including stringent SEC 17a-4 requirements, you leverage EMC Isilon’s SmartLock in addition to RainStor’s built-in immutable data retention capabilities.

I spoke to Jyothi Swaroop, Director of Product Marketing at RainStor, to explain the value of deploying EMC Isilon with RainStor and Hadoop.

1.  RainStor is known in the industry as an enterprise database architected for Big Data. Can you please explain how this technology evolved and what needs it addresses in the market?

Continue reading