NVIDIA GTCx Melbourne: Catching up on Deep Learning, HPC, and AI applications

By John Law - 4 Oct 2016

NVIDIA’s annual GPU Technology Conference (GTC) in San Jose, California has been such a big hit that the company decided to begin hosting the conference in different regions around the world. Last year, we had the privilege of going down to Singapore to cover the company’s first ever GTC in the Southeast Asian region. This year, we were even luckier to have been invited to their first ever GTCx in Melbourne, Australia.

David Kirk, NVIDIA Fellow, giving the opening keynote at GTCx Melbourne.

More than gaming: NVIDIA is no longer just known for their GeForce graphics cards.

In hindsight, hosting the GTCx in Melbourne actually makes a lot of sense, because some of the company’s major clients are among the country’s most prestigious institutions of higher learning. To bring you up to speed on a few of them: Monash University is one of the few Australian universities to use NVIDIA’s GPUs to power and drive their MASSIVE supercomputers (that’s the actual name), their CAVE2 visualisation facility, and even their synchrotron imaging facility, which is able to create a 3D map of virtually any object, biological or otherwise.

More recently, the country’s national science agency, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), purchased two of the graphics company’s DGX-1 supercomputers which, as you can read in our earlier coverage, are each powered by eight of NVIDIA’s powerful Tesla P100 GPUs and fed by four 1,600W PSUs.

NVIDIA announced their new Tesla P4 and Tesla P40 PCIe accelerators for HPC and Deep Learning inference.

NVIDIA's Tesla P100 and the DGX-1 were also being exhibited at the convention today.

Behold: the front of the DGX-1 supercomputer. This particular unit belongs to CSIRO, Australia's primary science body.

During the opening keynote, the company also announced their new Tesla P4, P40, and P100 PCIe accelerator cards. To be clear: the Tesla P100 used in the DGX-1 connects via NVLink, which delivers several times the bandwidth a PCIe slot can achieve (roughly 160GB/s of aggregate bidirectional bandwidth per GPU, versus about 32GB/s for PCIe 3.0 x16).

Deep Learning is still one of the key topics at NVIDIA's GTC events.

Beyond the announcements, GTCx was essentially about NVIDIA telling the public about their achievements in the fields of Deep Learning and High Performance Computing (HPC), and the application of their GPU technology to the advancement of science (such as Deep Speech), as well as some less-than-conventional applications, which we'll cover a little further down this article.

One of the highlights of the keynote: the Deep Learning AI's romantic description of ingredients for a burger (bottom right image).

The majority of the topics NVIDIA’s representatives brought up, such as self-driving cars and AI learning, were much the same as those we had already covered at GTC 2015 and earlier this year at GTC 2016.

Dr. Mark Sagar, Director of the Lab for Animate Technologies at the University of Auckland, New Zealand, giving a talk about his project, BabyX.

Dr. Sagar's speciality? The functions and emotions of the human face.

One of the most impressive (albeit slightly freaky) applications of NVIDIA’s GPUs, however, came during a short presentation by Dr. Mark Sagar, Director of the Lab for Animate Technologies at the University of Auckland, New Zealand. Dr. Sagar specializes in the creation of virtual human faces that can display a wide array of emotions comparable to those of a real human face. To give you an idea of what we’re talking about: this is much the same kind of technology used in some of our favourite films, like The Curious Case of Benjamin Button, and in the recreation of Paul Walker’s face for his last film, Furious 7, after the actor was killed in a car accident before filming was complete.

Impressive as the technology is, there is a much deeper aspect to its creation, a point Dr. Sagar demonstrated with his own award-winning (and still ongoing) program, BabyX. What is BabyX? The program is essentially an avatar of a virtual infant built on Deep Learning, making it capable of learning to identify certain images and objects, as well as recognizing the good doctor whenever he appeared in front of the camera. What impressed us immensely, though, was the baby’s ability to respond, both verbally and emotionally.

When Dr. Sagar showed the baby a picture of a horse and a puppy, it was able to identify the animals and utter their names when asked what they were. Emotionally, the baby also showed signs of distress whenever the doctor moved outside the viewing angle of his notebook’s camera, which in effect demonstrated the AI’s dependency on the good doctor’s presence!
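To give a rough sense of how this kind of image recognition works under the hood, here is a minimal sketch of classifying an image with a convolutional network pretrained on ImageNet, written with PyTorch and torchvision. To be clear, this is purely our own illustration; BabyX’s actual vision system is far more elaborate, and its internals weren’t disclosed in the talk.

# A minimal, illustrative image classifier using a pretrained CNN.
# This is NOT BabyX's architecture -- just a sketch of the general idea.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a convolutional network pretrained on the ImageNet dataset.
model = models.resnet18(pretrained=True)
model.eval()  # inference mode: disables training-time behaviour like dropout

# Standard ImageNet preprocessing: resize, crop, convert, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify(path):
    """Return the index of the most probable ImageNet class for an image."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():                   # no gradients needed at inference
        logits = model(batch)
    return logits.argmax(dim=1).item()

# Hypothetical usage: classify("horse.jpg") returns an ImageNet class
# index, which a label file then maps to a human-readable name.

What sets BabyX apart is that recognition of this sort is coupled to a simulated nervous system, so that what the baby sees feeds back into how it responds and feels.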

And that's a dissected view of the cognitive part of the baby's brain.

On a parting note, Dr. Sagar showed us detail that goes more than just skin deep: he has mapped out a detailed layout of the baby’s neural network, dopamine levels, and brain activity whenever it displays an emotion. The question, then, was a simple one: what exactly possessed him to create this program?

“I created BabyX for the simple reason that it wasn’t something that I would get bored of easily. Usually, with most projects, I’d lose interest in them over time. But with this, I could still be doing it 20 years down the road, and I still wouldn’t get bored of it, because there are just so many things about it that I could learn!”

We’ll be uploading some videos of the keynotes a little later, and we’ll update this page within the next few days.

For more from NVIDIA, follow us here.