The promise of Big Data is analyzing more data to gain unprecedented insight, but Hadoop pricing can place serious constraints on how much data can actually be stored for analysis. Each time a node is added to a Hadoop cluster to increase storage capacity, you are charged for it. Because this pricing model runs counter to the philosophy of Big Data, Pivotal has removed the tax on storing data in Hadoop with its announcement of Pivotal Big Data Suite.
Through a Pivotal Big Data Suite subscription, customers can store as much data as they want in fully supported Pivotal HD, paying only for value-added services per core: Pivotal Greenplum Database, GemFire, SQLFire, GemFire XD, and HAWQ. The significance of this new consumption model is that customers can store as much Big Data as they want while being charged only for the value they extract from it.
*Calculate your savings with Pivotal Big Data Suite compared to traditional Enterprise Data Warehouse technologies.*
Additionally, Pivotal Big Data Suite removes the guesswork associated with the diverse data processing needs of Big Data. With a flexible subscription covering your choice of real-time, interactive, and batch processing technologies, organizations are no longer locked into a specific technology by a contract. At any point in time, as Big Data applications grow and Data Warehouse applications shrink, you can reallocate licenses across the value-added services without incurring additional costs. This pooled approach eliminates the need to procure new technologies, a process that often delays projects, adds costs, and creates more data silos.
I spoke with Michael Cucchi, Senior Director of Product Marketing at Pivotal, about how Pivotal Big Data Suite radically redefines the economics of Big Data so organizations can achieve the Data Lake dream.
1. What Big Data challenges does Big Data Suite address and why?