BIG DATA Data Center Infrastructure — 01 August 2016

By Leo Reiter, Chief Technology Officer, Nimbix

Big Data is becoming much more than just the widespread distribution of cheap storage and cheap computation on commodity hardware. Big Data analytics may soon become the new “killer app” for High Performance Computing (HPC).

There is more to Big Data than large amounts of information. It also pertains to massive distributed activities such as complex queries and computations. In other words, deriving value through computation is just as “big” as the size of the data sets themselves. In fact, the analyst firm IDC has already coined a term for Big Data on HPC: “high performance data analysis.”

HPC is well positioned to enable Big Data use cases through all three phases of a typical workflow: data capture and filtering, analytics, and results visualization. Across all three phases, the speed of computation matters just as much as the scale. In order to unlock the full potential of Big Data, we have to pair it with “big compute,” or HPC.
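The three phases above can be sketched as a minimal pipeline. This is an illustrative toy, not any specific platform's API; the data set and function names are invented for the example.

```python
# Minimal sketch of the three workflow phases: capture/filtering,
# analytics, and results visualization. All names and data are
# illustrative assumptions, not from a real Big Data platform.

raw_readings = [0.5, 9.7, None, 3.2, 12.1, None, 7.4]

# Phase 1: capture and filtering -- drop incomplete records.
filtered = [r for r in raw_readings if r is not None]

# Phase 2: analytics -- compute a simple aggregate.
mean = sum(filtered) / len(filtered)

# Phase 3: results visualization -- here, a plain-text summary.
print(f"{len(filtered)} valid readings, mean = {mean:.2f}")
```

At production scale each phase would run distributed across many nodes, which is exactly where the interconnect and accelerator topics below come in.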

Here are three ways Big Data and HPC are converging and how businesses can take full advantage of the phenomenon right now to improve large-scale processing.

  1. Hadoop Meets InfiniBand

Many consider InfiniBand, the most commonly used interconnect technology in supercomputers, to be as basic a requirement for HPC as bare metal processing. If you can’t move information between nodes quickly, it limits the horizontal scalability you can achieve. Remote Direct Memory Access (RDMA) for Apache Hadoop provides an excellent high-speed, low-latency interconnect option for Big Data platforms, and you can provision a Hadoop cluster in the cloud that leverages RDMA in very little time. Consider that 56Gbps FDR InfiniBand can be over 100 times faster than even 10Gbps Ethernet once both its superior bandwidth and its latency advantage are taken into account. Short of using very expensive custom bus fabrics, this is the fastest way to distribute data and processing across computational nodes. You can then scale a Big Data platform to the size it deserves without worrying nearly as much about bottlenecks. Not only would you obtain results faster, but the setup time would be far lower than with commodity networking technology.
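A back-of-the-envelope model shows why both bandwidth and latency matter when shuffling data between nodes. The latency figures below are assumed, commonly cited orders of magnitude (tens of microseconds for Ethernet, about a microsecond for InfiniBand RDMA), not measurements.

```python
# Rough sketch: total transfer time = per-message latency overhead
# plus serialization time at the link's bandwidth. The latency values
# are illustrative assumptions, not benchmarks.

def transfer_time_s(bytes_moved, bandwidth_gbps, latency_us, messages):
    """Estimate seconds to move data split across many messages."""
    serialization = bytes_moved * 8 / (bandwidth_gbps * 1e9)
    return messages * latency_us * 1e-6 + serialization

gibibyte = 1024 ** 3

# 1 GiB shuffled as 10,000 messages over each fabric.
ethernet_10g = transfer_time_s(gibibyte, 10, latency_us=50, messages=10_000)
fdr_ib = transfer_time_s(gibibyte, 56, latency_us=1, messages=10_000)

print(f"10GbE:  {ethernet_10g:.3f} s")
print(f"FDR IB: {fdr_ib:.3f} s")
```

With many small messages, the per-message latency term dominates the Ethernet case, which is why a low-latency fabric helps far more than the raw bandwidth ratio alone would suggest.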

  2. Hadoop Meets Accelerators

Another key feature of HPC is the use of popular coprocessors and accelerators, such as passively cooled NVIDIA Tesla GPUs based on the Kepler architecture. Just as these technologies greatly assist technical computing solutions, they can also help Big Data and analytics, much as they already do for genomic sequencing and alignment.

Hadoop leveraging GPU technologies such as CUDA and OpenCL can boost Big Data performance by a significant factor. All other things being equal, high performance versions of Big Data platforms and programming models such as Hadoop, Spark, and MapReduce lead to faster results for complex analytics. In fact, the only way to keep up with the growing amount of data we are collecting is to increase computation speed at the same time. Big Data leveraging coprocessors and accelerators is an important way for HPC to make a big impact in this space.
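The reason accelerators fit so naturally here is that the MapReduce model is data-parallel: every map and reduce call is independent, so the work can be spread across GPU cores just as it is spread across cluster nodes. A minimal pure-Python sketch of the model (illustrative only, not the Hadoop API):

```python
# Pure-Python sketch of the MapReduce programming model. Each map and
# reduce invocation is independent -- the data parallelism that GPUs
# and other accelerators exploit. Names here are illustrative.
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    # Emit (key, 1) pairs -- the classic word count.
    for line in records:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Group pairs by key (the shuffle) and sum each group's counts.
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (key, sum(count for _, count in group))

data = ["big data meets hpc", "big compute for big data"]
counts = dict(reduce_phase(map_phase(data)))
print(counts)
```

On a real cluster the shuffle between the two phases is what stresses the interconnect, tying this section back to the InfiniBand discussion above.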

  3. Big Data and HPC Converge in the Cloud

As Big Data fuels public cloud growth faster than any other application, HPC on demand is an emerging force ready to meet this challenge. The more data we collect, the more computational capacity we need to analyze it. Simply stated, Big Data and HPC growth in the cloud go hand in hand. The only way to provide enough scale to keep up with demand is to deploy HPC-class assets that increase processing performance and density.

Thanks to the marriage of Big Data platforms with supercomputing technologies such as high-speed interconnects and coprocessors, organizations can deploy and utilize HPC on demand services designed to enable the next major wave of analytics innovation. The same computational power that accelerates sequencing and alignment today can vastly improve queries and comparisons in the future. With distributed file systems such as HDFS rather than expensive, traditional HPC parallel storage, the economics become more attractive. Finally, with the time to value and elastic scale only possible in the public cloud, companies can now focus exclusively on their work rather than wrestling with IT platforms.

Thanks to the convergence of Big Data and HPC on demand, companies will be able to leverage the scale and availability of computation in the public cloud.

————

Leo Reiter is the Chief Technology Officer at Nimbix, a leader in cloud-based High Performance Computing (HPC) infrastructure and applications.

 
