Keith has spent 25+ years building distributed computing and high performance computing systems for the Financial Services industry and in support of the US Government. He built his first machine learning system in 2009 and has been fascinated by data-driven technology ever since. Keith holds six issued patents, with several more pending, related to distributed analytics and high performance computing. Keith holds degrees from Virginia Tech and the University of Georgia.
Lately, I have spent large swaths of my time focused on Deep Learning and Neural Networks, either with customers or in our lab. One of the most common questions I get concerns model training that underperforms in terms of “wall clock time”. This usually stems from focusing on only one aspect of the architecture, say GPUs. As such, I will spend a little time writing about the three fundamental tenets of a successful Deep Learning architecture: compute, file access, and bandwidth. Hopefully this will resonate and provide some food for thought for customers on their journey.
Deep Learning (DL) is certainly all the rage. We define DL as a type of Machine Learning (ML) built on a deep hierarchy of layers, with each layer solving a different piece of a complex problem. These layers are interconnected into a “neural network”.
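To make the “deep hierarchy of layers” idea concrete, here is a minimal sketch of a forward pass through a small feed-forward neural network in NumPy. The layer sizes and activation are illustrative assumptions, not anything prescribed by a particular framework or use case:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0, x)

# Three interconnected layers; each layer transforms the previous layer's output.
layer_sizes = [(4, 8), (8, 8), (8, 2)]  # (inputs, outputs) per layer
weights = [rng.standard_normal(shape) for shape in layer_sizes]
biases = [np.zeros(shape[1]) for shape in layer_sizes]

def forward(x):
    # Pass the input through the hierarchy of layers, one at a time
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

sample = rng.standard_normal((1, 4))  # one input with 4 features
out = forward(sample)
print(out.shape)  # (1, 2)
```

Each layer here handles one stage of the transformation; in a real CNN or RNN the layers would have more structure (convolutions, recurrence), but the stacked, interconnected shape is the same.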
The use cases that I am presented with continue to grow exponentially, with very compelling financial returns on investment. Whether it is Convolutional Neural Networks (CNNs) for Computer Vision, Recurrent Neural Networks (RNNs) for Natural Language Processing (NLP), or Deep Belief Networks (DBNs) built from stacked Restricted Boltzmann Machines (RBMs), Deep Learning has many architectural structures and acronyms. There is some great Neural Network information out there. Pic 1 is a good representation of the structural layers of Deep Learning Neural Networks:
Orchestration tools like BlueData, Kubernetes, Mesosphere, or Spark Cluster Manager are the top of the layer cake of (more…)
Brett is the Technical Lead for Dell EMC’s Data Analytics Technology Alliances, focused on developing solutions that help customers solve their data challenges. You can find him on social media at @Broberts2261.
Operational Intelligence and machine generated data have been very hot topics lately as organizations begin to realize how valuable this data is for the business. For the last few years, Splunk has been the leader in this space with their all-encompassing platform that enables organizations to collect, search, and analyze machine generated data. (Not up to speed on this yet? Check out my other blog on getting started with machine generated data.)

Dell EMC and Splunk have had a tremendous partnership over the past couple of years, based on the premise that we offer market-leading infrastructure that is optimal for Splunk’s world-class analytics platform for machine generated data. A couple of weeks ago, we took this one step further… I’m excited to announce the release of the Solution Guide for Machine Analytics with Splunk Enterprise on VxRack Flex 1000! With this, Dell EMC now has a rack-scale, hyper-converged infrastructure solution for Splunk that has been jointly validated by Splunk and Dell EMC.
Why is this important?
Having a solution that has been jointly validated by both Splunk and Dell EMC to “meet or exceed Splunk’s performance benchmarks” gives users a higher degree of confidence in the environment. With this solution, the performance needed to run Splunk effectively and gain the insights to make critical IT and business decisions will be there. Our solutions engineering team, along with Splunk, put hundreds of engineering hours into designing specific configurations based on a variety of deployment scenarios and rigorously tested them to ensure performance. The solution guide gives you not only those configurations but also implementation guidelines and deployment practices. All of this equals lower risk, quicker time to value, and validated performance…can’t ask for anything better.
How is VxRack Optimal for Splunk?
VxRack provides flexible, rack-scale, hyper-converged infrastructure that allows you to use the hypervisor of your choice or bare metal, as well as the ability to start small and scale out to thousands of nodes. With VxRack you have the flexibility to optimize your tiering for Splunk by putting hot and warm buckets on SSD while using HDD or even Isilon scale-out NAS for your cold bucket needs (the solution guide shows how to use Isilon for cold tiering). You also get to enjoy the benefits of Software Defined Storage and the data services that are essential in today’s data center. The best part is that VxRack delivers a turnkey experience that is engineered and designed to be ready to run, giving you a quicker time to insight and value. Additionally, single support and life-cycle management for your infrastructure lowers complexity and reduces risk and costs. All of this equals great performance, an economical tiering structure, and easy-to-deploy, easy-to-manage infrastructure that is validated to run Splunk.
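As a rough illustration of the tiering described above, Splunk controls bucket placement per index in indexes.conf, where homePath holds hot and warm buckets and coldPath holds cold buckets. The index name and mount paths below are hypothetical assumptions, not taken from the solution guide:

```
# Hypothetical indexes.conf stanza: hot/warm on SSD, cold on Isilon
[web_logs]
homePath   = /ssd/splunk/web_logs/db             # hot and warm buckets on SSD
coldPath   = /mnt/isilon/splunk/web_logs/colddb  # cold buckets on Isilon scale-out NAS
thawedPath = /mnt/isilon/splunk/web_logs/thaweddb
```

The actual volume sizing, retention settings, and mount configuration for a validated deployment come from the solution guide itself.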
The opinions and interests expressed on Dell EMC employee blogs are the employees' own and do not necessarily represent Dell EMC's positions, strategies or views. Dell EMC makes no representation or warranties about employee blogs or the accuracy or reliability of such blogs. When you access employee blogs, even though they may contain the Dell EMC logo and content regarding Dell EMC products and services, employee blogs are independent of Dell EMC and Dell EMC does not control their content or operation. In addition, a link to a blog does not mean that EMC endorses that blog or has responsibility for its content or use.