EMC Hadoop Starter Kit ViPR Edition: Creating a Smarter Data Lake

Pivotal HD offers a wide variety of data processing technologies for Hadoop – real-time, interactive, and batch. Add integrated data storage with EMC Isilon scale-out NAS and you have a shared data repository with multi-protocol support, including HDFS, to service a wide variety of data processing requests. That sounds like a Data Lake to me – a general-purpose data storage and processing resource center where Big Data applications can develop and evolve. Add EMC ViPR software-defined storage to the mix and you have the smartest Data Lake in town, one that supports additional protocols and hardware and automatically adapts to changing workload demands to optimize application performance.

EMC Hadoop Starter Kit, ViPR Edition, now makes it easier to deploy this ‘smart’ Data Lake with Pivotal HD and other Hadoop distributions such as Cloudera and Hortonworks. Simply download this step-by-step guide and you can quickly deploy a Hadoop Big Data analytics environment, configuring Hadoop to utilize ViPR for HDFS, with Isilon as the object/HDFS data service. Although this guide uses Isilon as the storage platform behind ViPR, the same approach applies to other EMC storage platforms such as VNX, third-party storage platforms such as NetApp, and cloud stacks such as OpenStack Swift and Amazon S3.

I spoke with the creator of this starter kit, James F. Ruddy, Principal Architect for the EMC Office of the CTO, about why every organization should use this starter kit to optimize its IT infrastructure for Hadoop deployments.

1.  The original EMC Hadoop Starter Kit released last year was a huge success.  Why did you create ViPR Edition?

Organizations that deploy Hadoop as a dedicated environment create yet more data silos. This guide enables customers to minimize data silos by deploying any of the three most popular Hadoop distributions (Pivotal, Cloudera, Hortonworks) on EMC ViPR software-defined storage, letting organizations leverage existing investments in storage platforms and infrastructure for Big Data analytics. Massive amounts of data already live in these storage platforms, and ViPR can enable analytics on those storage arrays without requiring a separate, dedicated Hadoop environment.

2.  What are the best use cases for HSK ViPR Edition?

First, you can instantly deploy a Big Data repository by utilizing existing enterprise storage capacity as a “Data Lake” on top of which to enable analytics.

Second, you can reduce the growth in dedicated Hadoop environments, since large volumes of unstructured data already living in EMC storage or third-party arrays such as NetApp can now be exploited by Hadoop programs.

Third, you can eliminate the need to keep multiple copies of the same data for different types of applications, thanks to ViPR’s support for multiple protocols and mixed workloads. ViPR enables dual-mode access to the data under its management, so object-based workloads and analytics applications can manipulate the same data: ViPR provides S3, Swift, and Atmos API support as well as HDFS API access.
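To make the dual-mode idea concrete, here is a minimal sketch of what it could look like in practice: one process writes a record through an S3-compatible endpoint, and a Hadoop client reads the same object back as an HDFS file. The endpoint, credentials, bucket naming, and the fs.viprfs.impl class name below are illustrative placeholders, not values from the guide.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DualModeSketch {
    public static void main(String[] args) throws Exception {
        // Object side: write a record through the S3-compatible API.
        // Endpoint, credentials, and bucket name are placeholders.
        AmazonS3Client s3 = new AmazonS3Client(
                new BasicAWSCredentials("vipr-user", "vipr-secret-key"));
        s3.setEndpoint("http://vipr-dataservices.example.com:9020");
        s3.putObject("analytics", "events/2014-06-01.log",
                "user=42,action=click,ts=1401580800");

        // Analytics side: read the same object back as an HDFS file,
        // without making a second copy of the data.
        Configuration conf = new Configuration();
        conf.set("fs.viprfs.impl", "com.emc.hadoop.fs.vipr.ViPRFileSystem"); // illustrative class name
        conf.set("fs.defaultFS", "viprfs://analytics.tenant.site/");         // illustrative URI
        FileSystem fs = FileSystem.get(conf);
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path("/events/2014-06-01.log"))))) {
            System.out.println(reader.readLine());
        }
    }
}
```

The point of the sketch is that there is only one copy of the data; the two code paths differ only in the protocol they speak.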

3.  So what are the prerequisites for HSK ViPR Edition?

The guides are designed to enable the use of ViPR as a Hadoop-compatible file system, exposed as object storage layered on top of an existing ViPR-supported file storage array. So to start you need a file storage array that you can deploy ViPR data services in front of. For the compute side you need physical or virtual machines to run the Hadoop cluster; anywhere from one node to many can be used. The guides walk you through the automated deployment tools available for each distribution and show how to use the native management tools to integrate ViPR HDFS services, along the lines of the sketch below.
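Once the cluster is wired up, a quick way to verify the integration is a small read/write smoke test against the viprfs:// file system. This is a hypothetical sketch: in a real deployment the two properties below would live in core-site.xml and be pushed out by your distribution’s management tool, and the implementation class and bucket URI shown here are placeholders to check against the guide.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ViprHdfsSmokeTest {
    public static void main(String[] args) throws Exception {
        // Set programmatically here only to keep the sketch self-contained.
        Configuration conf = new Configuration();
        conf.set("fs.viprfs.impl", "com.emc.hadoop.fs.vipr.ViPRFileSystem"); // illustrative
        conf.set("fs.defaultFS", "viprfs://mybucket.mytenant.mysite/");      // illustrative

        FileSystem fs = FileSystem.get(conf);
        fs.mkdirs(new Path("/tmp/vipr-smoke-test"));              // write test
        for (FileStatus status : fs.listStatus(new Path("/"))) {  // read test
            System.out.println(status.getPath());
        }
    }
}
```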


EMC and RainStor Optimize Interactive SQL on Hadoop

Pivotal HAWQ was one of the most groundbreaking technologies to enter the Hadoop ecosystem last year, thanks to its ability to execute complete ANSI SQL on large-scale datasets managed in Pivotal HD. This was great news for SQL users – organizations heavily reliant on SQL applications and common BI tools such as Tableau and MicroStrategy can leverage these investments to access and analyze new data sets managed in Hadoop.

Similarly, RainStor, a leading enterprise database known for its efficient data compression and built-in security, also enables organizations to run ANSI SQL queries against data in Hadoop – highly compressed data. Due to the reduced footprint from extreme data compression (typically 90%+), RainStor enables users to run analytics on Hadoop much more efficiently. In fact, in many instances queries run significantly faster because of the reduced footprint plus filtering capabilities that figure out what not to read. This allows customers to minimize infrastructure costs and maximize insight when analyzing larger data sets.
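RainStor’s internals aren’t spelled out here, but “figuring out what not to read” is essentially data skipping: keep lightweight metadata (such as min/max values) for each compressed block and decompress only the blocks whose range could satisfy the query predicate. Below is a toy illustration of the principle, not RainStor’s actual implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class DataSkippingSketch {
    /** Per-block metadata kept alongside the compressed data. */
    static class BlockMeta {
        final long minTimestamp, maxTimestamp;
        final byte[] compressedPayload;
        BlockMeta(long min, long max, byte[] payload) {
            this.minTimestamp = min;
            this.maxTimestamp = max;
            this.compressedPayload = payload;
        }
    }

    /** Return only the blocks that could contain rows in [from, to]. */
    static List<BlockMeta> blocksToScan(List<BlockMeta> blocks, long from, long to) {
        List<BlockMeta> candidates = new ArrayList<>();
        for (BlockMeta b : blocks) {
            // Skip the block entirely if its value range cannot overlap the predicate.
            if (b.maxTimestamp >= from && b.minTimestamp <= to) {
                candidates.add(b);
            }
        }
        return candidates;
    }

    public static void main(String[] args) {
        List<BlockMeta> blocks = new ArrayList<>();
        blocks.add(new BlockMeta(100, 199, new byte[0]));
        blocks.add(new BlockMeta(200, 299, new byte[0]));
        blocks.add(new BlockMeta(300, 399, new byte[0]));
        // Predicate: WHERE ts BETWEEN 250 AND 260 – only the middle block survives.
        System.out.println(blocksToScan(blocks, 250, 260).size() + " of 3 blocks scanned");
    }
}
```

The fewer blocks that survive the metadata check, the less data is decompressed and read, which is where the combination of compression and filtering pays off.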

Serving some of the largest telecommunications and financial services organizations, RainStor enables customers to readily query and analyze petabytes of data instead of archiving data sets to tape and then having to reload them whenever they are needed for analysis. RainStor chose to partner with EMC Isilon scale-out NAS for its storage layer to manage these petabyte-scale data environments even more efficiently. Using Isilon, the compute and storage for Hadoop workloads are decoupled, enabling organizations to balance CPU and storage capacity optimally as data volumes and query loads grow.


Furthermore, not only can organizations run the Hadoop distribution of their choice with RainStor-Isilon, they can also run multiple distributions of Hadoop against the same compressed data. For example, a single copy of the data managed in RainStor-Isilon can service Marketing’s Pivotal HD environment, Finance’s Cloudera environment, and HR’s Apache Hadoop environment.

To summarize, by running RainStor and Hadoop on EMC Isilon, you achieve:

  • Flexible Architecture Running Hadoop on NAS and DAS Together: Companies leverage local DAS storage for hot data where performance is critical and use Isilon for mass data storage. With RainStor’s compression, you move more data across the network for the same I/O, essentially creating an I/O multiplier.
  • Built-in Security and Reliability: Data is securely stored with built-in encryption and data masking, in addition to user authentication and authorization. With very little overhead, you also benefit from EMC Isilon FlexProtect, which provides a reliable, highly available Big Data environment.
  • Improved Query Speed: Data is queried using a variety of tools, including standard SQL, BI tools, Hive, Pig, and MapReduce. With built-in filtering, queries speed up by a factor of 2-10x compared to Hive on HDFS/DAS.
  • Compliant WORM Solution: For absolute retention and protection of business-critical data, including stringent SEC 17a-4 requirements, you can leverage EMC Isilon’s SmartLock in addition to RainStor’s built-in immutable data retention capabilities.

I spoke with Jyothi Swaroop, Director of Product Marketing at RainStor, about the value of deploying EMC Isilon with RainStor and Hadoop.

1.  RainStor is known in the industry as an enterprise database architected for Big Data. Can you please explain how this technology evolved and what needs it addresses in the market?

Continue reading

VCE Vblock: Converging Big Data Investments To Drive More Value

As Big Data continues to demonstrate real business value, organizations are looking to leverage this high-value data across different applications and use cases. The uptake is also driving organizations to transition from siloed Big Data sandboxes to enterprise architectures, where they are mandated to address mission-critical availability and performance, security and privacy, provisioning of new services, and interoperability with the rest of the enterprise infrastructure.

Sandbox or experimental Hadoop on commodity hardware with direct-attached storage (DAS) makes it difficult to address such challenges for several reasons: data is hard to replicate across applications and data centers; IT lacks oversight of and visibility into the data; multi-tenancy and virtualization are missing; and upgrades and technology migrations are difficult to streamline. As a result, VCE, a leader in converged or integrated infrastructure, is receiving an increasing number of requests on how to evolve Hadoop implementations from DAS to VCE Vblock Systems – an enterprise-class infrastructure that combines servers, shared storage, network devices, virtualization, and management in a pre-integrated stack.

Formed by Cisco and EMC, with investments from VMware and Intel, VCE enables organizations to rapidly deploy business services on demand and at scale – all without triggering an explosion in capital and operating expenses. According to a recent IDC report, organizations around the world spent over $3.3 billion on converged systems in 2012, and IDC forecasts that spending will increase by 20% in 2013 and again in 2014. In fact, IDC calculated that Vblock Systems delivered a return on investment of 294% over three years and 435% over five years compared to traditional infrastructure, thanks to fast deployments, simplified operations, improved business-support agility, and cost savings, freeing staff to launch new applications, extend services, and improve user and customer satisfaction.

I spoke with Julianna DeLua from VCE Product Management to discuss how VCE’s Big Data solution enables organizations to extract more value from Big Data investments.


1.  Why are organizations interested in deploying Hadoop and Big Data applications on converged or integrated infrastructures such as Vblock?

Continue reading

You Asked, Rackspace Listened – New Big Data Hosting Options

Running Hadoop on bare metal may fit some use cases, but many organizations have data workloads that demand more storage than compute resources. To get the most efficient utilization for these types of Hadoop workloads, separating the compute and storage resources makes sense and is a configuration users are asking for – e.g., EMC Isilon storage for Hadoop. In response to diverse Big Data needs such as mixed Hadoop workloads, hybrid cloud models, and heterogeneous data layers, Rackspace recently delivered new Big Data hosting options that give users more choices: run Hadoop on Rackspace-managed dedicated servers, spin up Hadoop on the public cloud, or configure your own private cloud.


I spoke with Sean Anderson, Product Marketing Manager for Data Solutions at Rackspace, about one particular new offering called ‘Managed Big Data Platform’, with which customers can design the optimal configuration for their data and leave the management details to Rackspace.

1.  The Big Data lifecycle will go through various stages, with each stage imposing different requirements. From what you are seeing with your customers, can you explain this lifecycle and what value Rackspace brings to support this lifecycle?

Continue reading