The data warehousing and business intelligence space has undergone a huge transformation in the past several years, with business users moving away from traditional, ‘IT bottleneck’ environments toward more agile ones driven by Big Data. For example, when business users lobbied for self-service access, they got Tableau. When they pressed for data discovery, they got Endeca. What’s next? An agile, yet controlled environment that satisfies both the business and IT communities. Pivotal Data Dispatch (Pivotal DD) fulfills the needs of all enterprise data stakeholders by empowering business users with on-demand access to, and analysis of, Big Data – all under an established system of metadata and security defined by IT.
I spoke with Todd Paoletti, Vice President of Product Marketing at Pivotal, to understand why Pivotal DD is the next Big Thing to hit the Big Data market.
1. Walk me through how Pivotal DD is used from the inception of a Big Data project, and the issues it overcomes during the project lifecycle.
Over 50% of the Big Data business opportunity comes from better understanding your customers. As a result, Sales and Marketing departments are finally aligning with one another around their high-value customers. EMC is a great example of Sales and Marketing ending the turf war: both departments are working together to create a centralized customer analytics database to better identify customer segments for upsell/cross-sell opportunities, target prospects who are likely to bring in more value, and optimize operations. This Big Data transformation now enables Sales and Marketing to work off of the same customer account data to harmoniously build the EMC brand and business.
I became very interested in documenting the success of Big Data here at EMC, so I captured one of the use cases: optimizing operations for EMC Maintenance and Renewals. Click here for this newly published Big Data success story, which details how EMC gained an incremental $113M above its revenue goal from a Big Data strategy built on the right people, processes, and technology.
Splunk has proven to deliver real value to organizations by collecting and indexing massive amounts of raw data generated from virtually any source and transforming that data into new, real-time insight. This type of detailed data, generated by machines and applications throughout an organization, was previously untapped, but Splunk makes it usable and accessible to solve real business problems such as preventing disastrous outages and service degradation.
[Figure: Raw data captured and indexed]
[Figure: Raw data visualized]
To provide Splunk customers with even more value, EMC and Splunk have teamed up to help customers manage the explosion of data across physical, virtual, and cloud environments with a tested reference architecture for non-disruptive scalability, optimized performance, and simplified management. Download the EMC reference architecture guide to deploy a shared infrastructure for Splunk using VMware vSphere dynamic computing and EMC Isilon scale-out storage, enabling higher levels of consolidation and utilization than traditional IT deployments.
I spoke with Hal Rottenberg, Data Center Practice Manager at Splunk, to discuss the value of the EMC Reference Architecture for Splunk and how this architecture addresses the rapid growth of data and users in Splunk environments.
1. Before we get into the EMC Reference Architecture, please describe Splunk and the value it provides.
Big Data: Understanding How Data Powers Big Business is yet another Big Data book to hit the market. What makes this book unique? It offers practical advice and hands-on exercises, so that by the end of the book you have a Big Data action plan unique to your business. I spoke to the author, EMC’s own preeminent Big Data expert, William Schmarzo, to learn about the goals of his book and why organizations grappling with Big Data should pick it up.
1. What makes you a Big Data expert when it comes to providing practical advice for developing Big Data strategies?