
NVIDIA announces availability of their Hyperscale Suite and Tesla GPUs

By John Law - on 10 Nov 2015, 10:00pm

Last week, NVIDIA announced during VMware's annual vForum in Singapore that it was bringing its virtual desktop computing services to the Southeast Asian region in the form of GRID 2.0, with Singapore serving as the hub for that service.

The Tesla M40, made for Deep Learning, machine learning, and HPC, and the Tesla M4, created for workload acceleration (below).

This week, the graphics company announced during a conference call that the GPU accelerators powering GRID 2.0 will now be available to companies and industries looking to accelerate their datacenters. We are, of course, talking about the new Tesla M40 and Tesla M4 GPU accelerators.

It's long been known that GPU accelerators far outperform CPU-only datacenters at these workloads.

Both cards are based on NVIDIA's current Maxwell GPU architecture, but Tesla GPUs are not your average graphics cards. They are accelerators designed to handle complex computation on an entirely different scale: processing the user-generated content on the web (which, to date, measures in the Exabytes) and handling and sorting all of that data in real time.

Field of Exabytes: a short list of the content generated on the internet. And this is all on a daily basis.

While the two cards differ in their uses and functions, both the Tesla M40 and Tesla M4 are what NVIDIA refers to as its new Hyperscale accelerators, and both can be used in tandem within the same datacenter to perform the same task a CPU-only platform would, in a fraction of the time and with a fraction of the computational power.

In this pairing, the Tesla M40 can be considered the workhorse of the two. NVIDIA hails it as the world's fastest accelerator, built for machine learning, Deep Learning and, more importantly, HPC (High Performance Computing).

The gist of it all: the Tesla M40 captures and sifts through the data found on the net and creates a workable model, while the Tesla M4 deploys it across all servers in the datacenter.

It works like this: within a datacenter, the Tesla M40 is effectively set to handle training. That means sifting through the previously mentioned field of Exabytes of data and distilling it into workable models that can easily be interpreted.
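
To make this concrete, here's a minimal sketch of that training stage. It uses PyTorch purely for illustration (the article doesn't name a framework, and the dataset here is just random stand-in data): a model is trained on the GPU, then saved so it can be pushed out to the inference servers.

```python
# Minimal sketch of the training stage the Tesla M40 is pitched at:
# fit a model on the GPU, then save it for deployment elsewhere.
# PyTorch and the random data are illustrative assumptions only.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # e.g. a Tesla M40

# Stand-in for the "Exabytes" of user-generated content: random features and labels.
features = torch.randn(10_000, 128, device=device)
labels = torch.randint(0, 10, (10_000,), device=device)

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Train for a few passes over the data.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()

# The trained model is the artefact that gets deployed to the inference servers.
torch.save(model.state_dict(), "model.pt")
```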

Once these models are created, they are deployed onto every other server in the datacenter, ready to be shared out to the billions of devices that exist around the world. That's where the Tesla M4 comes into play. Where the Tesla M40 handles the more complex machine learning and Deep Learning, the Tesla M4 is designed to handle what is known as "Throughput Hyperscale Workload Acceleration."

In a nutshell, the Tesla M4's task is to process the information that the Tesla M40 finds and compiles, then transform that compiled data into the relevant format (e.g. videos are transcoded into standard video formats, images are resized and scaled accordingly, and so on). This covers video processing and transcoding, image processing, and machine learning inference, to name a few.
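
And here's the other half of that hypothetical sketch, roughly what the serving side the Tesla M4 is aimed at looks like: each inference server loads the trained model and runs it against incoming requests. Again, PyTorch and the input shapes are illustrative assumptions, not anything NVIDIA has specified.

```python
# Minimal sketch of the inference/serving side the Tesla M4 is pitched at,
# continuing the hypothetical training example above (PyTorch for illustration only).
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # e.g. a Tesla M4

# Rebuild the same architecture and load the weights trained on the M40.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
model.load_state_dict(torch.load("model.pt", map_location=device))
model.eval()

# Stand-in for a batch of incoming user requests (e.g. feature vectors to classify).
requests_batch = torch.randn(64, 128, device=device)

with torch.no_grad():
    predictions = model(requests_batch).argmax(dim=1)

print(predictions.tolist())
```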

NVIDIA's new Hyperscale Suite is meant to help companies take full advantage of the new GPU accelerators.

Of course, for the two cards to do all this, they're also going to need some pretty powerful tools. To that end, NVIDIA also announced the accompanying Hyperscale Suite alongside the cards. The suite includes the following (one item is sketched in code right after the list):

  • Deep Learning Toolkit
  • GPU REST Engine
  • GPU Accelerated FFmpeg
  • Image Compute Engine
  • GPU support for Apache Mesos
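
To give a rough idea of what one of these pieces does, here's a short sketch of a GPU-accelerated transcode driven from Python. It assumes an FFmpeg build compiled with NVENC support, where the GPU-backed H.264 encoder is typically exposed as h264_nvenc; the file names and bitrate are placeholders, since the article doesn't detail how NVIDIA packages its GPU-accelerated FFmpeg.

```python
# Rough sketch of what "GPU Accelerated FFmpeg" enables: offloading video
# transcoding to the GPU's NVENC encoder instead of a CPU-based encoder.
# Assumes an FFmpeg build with NVENC support; file names are placeholders.
import subprocess

def transcode_on_gpu(src: str, dst: str) -> None:
    """Transcode a source video to H.264 using the GPU-backed NVENC encoder."""
    subprocess.run(
        [
            "ffmpeg",
            "-i", src,             # input file
            "-c:v", "h264_nvenc",  # GPU-backed H.264 encoder (NVENC builds)
            "-b:v", "5M",          # target video bitrate
            "-c:a", "copy",        # pass audio through untouched
            dst,
        ],
        check=True,
    )

transcode_on_gpu("input.mp4", "output_h264.mp4")
```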

NVIDIA's new Hyperscale Accelerators now include support for Apache Mesos, an open-source cluster manager.

That last point is an important highlight for NVIDIA, as the open-source Apache Mesos cluster management software is used by corporations like Twitter, Airbnb, and Apple to handle the large amounts of data that flow through their datacenters daily.

For more news from NVIDIA, follow us here.