Pivotal Big Data Suite: Eliminating the Tax On A Growing Hadoop Cluster

The promise of Big Data is about analyzing more data to gain unprecedented insight, but Hadoop pricing can place serious constraints on the amount of data that can actually be stored for analysis. Each time a node is added to a Hadoop cluster to increase storage capacity, you are charged for it. Because this pricing model runs counter to the philosophy of Big Data, Pivotal has removed the tax on storing data in Hadoop with its announcement of the Pivotal Big Data Suite.

Through a Pivotal Big Data Suite subscription, customers store as much data as they want in fully supported Pivotal HD, paying only for value-added services per core: Pivotal Greenplum Database, GemFire, SQLFire, GemFire XD, and HAWQ. The significance of this new consumption model is that customers can now store as much Big Data as they want, while being charged only for the value they extract from it.

[Figure: BigDataSuite_Diagram]

Calculate your savings with Pivotal Big Data Suite compared to traditional Enterprise Data Warehouse technologies.

Additionally, Pivotal Big Data Suite removes the guesswork associated with the diverse data processing needs of Big Data. With a flexible subscription spanning your choice of real-time, interactive, and batch processing technologies, organizations are no longer locked into a specific technology by a contract. At any point in time, as Big Data applications grow and Data Warehouse applications shrink, you can spin licenses up or down across the value-added services without incurring additional costs. This pooled approach eliminates the procurement cycles for new technologies that delay projects, add costs, and create more data silos.
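To make the consumption model concrete, here is a back-of-envelope sketch comparing per-node Hadoop pricing with a storage-free, per-core subscription. All prices, node counts, and core counts below are hypothetical placeholders for illustration, not actual Pivotal list prices.

```python
# Hypothetical cost comparison: per-node Hadoop pricing vs. a
# storage-free, per-core "value-added services" subscription.
# All figures below are illustrative placeholders, NOT real list prices.

PER_NODE_PRICE = 4_000   # hypothetical annual support cost per Hadoop node
PER_CORE_PRICE = 1_500   # hypothetical annual price per value-service core

def per_node_cost(num_nodes: int) -> int:
    """Traditional model: every node added for storage is charged."""
    return num_nodes * PER_NODE_PRICE

def suite_cost(value_service_cores: int) -> int:
    """Suite model: storage nodes are free; only cores running
    value-added services (e.g., HAWQ, GemFire) are charged."""
    return value_service_cores * PER_CORE_PRICE

# A cluster that grows from 20 to 100 nodes for storage, while the
# analytic workload stays constant at 64 value-service cores:
for nodes in (20, 50, 100):
    print(f"{nodes:>3} nodes | per-node: ${per_node_cost(nodes):>8,} "
          f"| suite: ${suite_cost(64):>8,}")
```

The point of the sketch: under the per-node model the bill grows with storage alone, while under the suite model it tracks only the analytic work being done.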

I spoke with Michael Cucchi, Senior Director of Product Marketing at Pivotal, to discuss how Pivotal Big Data Suite radically redefines the economics of Big Data so organizations can achieve the Data Lake dream.

1. What Big Data challenges does Big Data Suite address and why?

Continue reading

RSA and Pivotal: Laying the Foundation for a Wider Big Data Strategy

Building on years of security expertise, RSA exploited Big Data to better detect, investigate, and understand threats with its RSA Security Analytics platform, launched last year. Similarly, Pivotal leveraged its world-class Data Science team in conjunction with its Big Data platform to deliver Pivotal Network Intelligence, which enhances threat detection by applying statistical and machine learning techniques to Big Data. Using RSA Security Analytics and Pivotal Network Intelligence together, customers were able to identify and isolate potential threats faster than with competing solutions, for better risk mitigation.
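As a minimal illustration of the kind of statistical technique referenced above (and not the actual RSA or Pivotal algorithms), the sketch below flags traffic observations that deviate sharply from a learned baseline using a simple z-score test; all hosts and byte counts are made up.

```python
# Minimal statistical anomaly detection sketch -- NOT the RSA/Pivotal
# algorithm. Learns a baseline for one host's hourly outbound traffic,
# then flags new observations that deviate sharply from it.
from statistics import mean, stdev

# Hypothetical hourly bytes-out baseline (historical observations).
baseline = [1_200, 1_350, 1_100, 1_280, 1_220, 1_310, 1_190, 1_260]
mu, sigma = mean(baseline), stdev(baseline)

# New observations to score against the learned baseline.
new_samples = {"02:00": 1_240, "03:00": 1_180, "04:00": 9_800}

for hour, bytes_out in new_samples.items():
    z = (bytes_out - mu) / sigma
    if abs(z) > 3:  # more than three standard deviations from baseline
        print(f"ALERT at {hour}: bytes_out={bytes_out:,} (z-score {z:.1f})")
```

Real platforms apply far richer models across many features, but the baseline-and-deviation pattern shown here is the statistical core of the approach.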

As a natural next step, RSA and Pivotal last week announced the availability of the Big Data for Security Analytics reference architecture, solidifying a partnership that brings together leaders in Security Analytics and Big Data/Data Science. RSA and Pivotal will not only enhance the overall Security Analytics strategy, but also provide the foundation for a broader ‘IT Data Lake’ strategy that helps organizations gain better ROI from their IT investments.

RSA’s reference architecture utilizes Pivotal HD, giving security teams a scalable platform with rich analytic capabilities, drawn from both Pivotal tools and the Hadoop ecosystem, to experiment with and gain further visibility into enterprise security and threat detection. Moreover, the combined Pivotal and RSA platform allows organizations to leverage the collected data for non-security use cases such as capacity planning, mean-time-to-repair analysis, downtime impact analysis, shadow IT detection, and more.
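To make one of those non-security use cases concrete, here is a minimal mean-time-to-repair computation over incident records of the kind that might land in such a data lake. The record layout is assumed for illustration and is not an RSA or Pivotal HD schema.

```python
# Hypothetical mean-time-to-repair (MTTR) analysis over incident
# records from a shared IT data lake. The record layout is assumed
# for illustration only.
from collections import defaultdict
from datetime import datetime

incidents = [
    {"service": "web",  "opened": "2014-03-01 08:00", "repaired": "2014-03-01 09:30"},
    {"service": "web",  "opened": "2014-03-03 14:00", "repaired": "2014-03-03 14:45"},
    {"service": "auth", "opened": "2014-03-02 22:00", "repaired": "2014-03-03 01:00"},
]

FMT = "%Y-%m-%d %H:%M"
repair_hours = defaultdict(list)
for inc in incidents:
    delta = (datetime.strptime(inc["repaired"], FMT)
             - datetime.strptime(inc["opened"], FMT))
    repair_hours[inc["service"]].append(delta.total_seconds() / 3600)

for service, hours in repair_hours.items():
    print(f"{service}: MTTR = {sum(hours) / len(hours):.2f} hours "
          f"over {len(hours)} incidents")
```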

[Figure: RSA-Pivotal reference architecture. The distributed architecture allows for enterprise scalability and deployment.]

I spoke with Jonathan Kingsepp, Director of Federation EVP Solutions at Pivotal, to discuss how the RSA-Pivotal partnership allows customers to gain much wider benefits across their organization.

1. What are the technology components of this new RSA-Pivotal reference architecture?

Continue reading

Alpine Data Labs – Making Predictive Analytics Pervasive and Persuasive

Big Data has exposed the need for deeper data insights through predictive analytic techniques such as data mining, machine learning, and modeling. Interestingly, predictive analytics has been around for a long time, but it has been used by a select few in select organizations. Its value has always been recognized and applauded, yet its true potential was never fully realized due to a lack of widespread adoption, as well as issues around data accessibility, performance, statistical expertise, business sponsorship, cost, and more. In fact, according to a recent survey, nearly 90 percent of organizations that do employ predictive analytics software agree that it has given them a competitive advantage.

The advent of Big Data has driven the uptake of predictive analytics, thanks to the curiosity of very capable Data Scientists along with new tools and technologies from companies such as Alpine Data Labs. Alpine Data Labs provides next-generation predictive analytics that address legacy issues and meet the new demands of Big Data. More importantly, Alpine Data Labs is mainstream-oriented: business users, not just statisticians and Data Scientists, are empowered to mine data.
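For readers unfamiliar with what mining data looks like in code, the sketch below trains a toy churn classifier with scikit-learn on synthetic data. It is a generic illustration of predictive modeling, not Alpine Data Labs' product; their visual, zero-coding workflow abstracts exactly this kind of work away.

```python
# Generic predictive-modeling sketch (scikit-learn) -- NOT Alpine Data
# Labs' API. The churn data below is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Two features per customer: monthly spend, support tickets filed.
X = np.column_stack([rng.normal(50, 15, n), rng.poisson(2, n)])
# Synthetic label: low-spend, high-ticket customers tend to churn.
y = ((X[:, 1] > 3) & (X[:, 0] < 45)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Every step here (feature preparation, model choice, train/test evaluation) is a place a non-statistician can go wrong, which is the gap a guided, visual tool aims to close.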

[Figure: collaborate-alpine-data]

Backed by $16M in Series B funding, Alpine Data Labs is gaining serious momentum in the Big Data analytics startup space, offering zero-coding creation and deployment of complex predictive models on Hadoop. I spoke with Alpine Data Labs CEO Joe Otto about their game-changing approach to predictive analytics for Big Data.

1. Let's first talk about leading predictive analytics incumbents such as SAS, IBM SPSS, and other analytics vendors who got their start years ago with desktop and server software designed for data mining and advanced analytics. How has Alpine Data Labs overcome the issues around these incumbent technologies and addressed the new needs of Big Data?

Continue reading

Pivotal Data Dispatch Shrinks The Big Data Productivity Gap

The data warehousing and business intelligence space has undergone a huge transformation in the past several years, as business users move away from traditional, ‘IT bottleneck’ environments toward more agile ones driven by Big Data. For example, when business users lobbied for self-service access, they got Tableau. When they pressed for data discovery, they got Endeca. What’s next? An agile, yet controlled, environment that satisfies both the business and IT communities. Pivotal Data Dispatch (Pivotal DD) fulfills the needs of all enterprise data stakeholders by empowering business users with on-demand access to, and analysis of, Big Data, all under an established system of metadata and security defined by IT.
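As a conceptual sketch of that ‘agile, yet controlled’ idea, and assuming nothing about Pivotal DD’s actual API: IT defines metadata and entitlements once, and business users request datasets on demand against those rules. Every name below (the catalog, roles, and datasets) is invented for illustration.

```python
# Hypothetical illustration of IT-governed, self-service data access.
# The catalog, roles, and datasets are invented for this sketch; they
# are NOT Pivotal Data Dispatch's actual API.

# IT defines metadata and entitlements once, up front.
CATALOG = {
    "sales_2014":  {"owner": "finance", "allowed_roles": {"analyst", "admin"}},
    "hr_salaries": {"owner": "hr",      "allowed_roles": {"admin"}},
}

def request_dataset(user_role: str, dataset: str) -> str:
    """Business users pull data on demand; IT's rules decide access."""
    entry = CATALOG.get(dataset)
    if entry is None:
        return f"DENIED: {dataset!r} is not in the governed catalog"
    if user_role not in entry["allowed_roles"]:
        return f"DENIED: role {user_role!r} lacks entitlement to {dataset!r}"
    return f"GRANTED: provisioning {dataset!r} (owner: {entry['owner']})"

print(request_dataset("analyst", "sales_2014"))   # GRANTED
print(request_dataset("analyst", "hr_salaries"))  # DENIED
```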

[Figure: pdd_features_slide]

I spoke with Todd Paoletti, Vice President of Product Marketing at Pivotal, to discuss why Pivotal DD is the next Big Thing to hit the Big Data market.

1. Walk me through how Pivotal DD is used from the inception of a Big Data project. What issues does it overcome during the project lifecycle?

Continue reading