Event Coverage
GTC 2016 Day 1: Of Pascal, VR, deep learning, and self-driving cars
By John Law - 6 Apr 2016, 3:17pm


Jen-Hsun Huang, CEO of NVIDIA, giving the opening keynote at GTC 2016.

Here at HardwareZone Malaysia, we've always looked forward to NVIDIA's annual GPU Technology Conference (GTC). Each year has a theme, and over the last couple of years we've seen NVIDIA speak on a broad spectrum of topics centered on the use of its GPUs, ranging from the goal of creating an autonomous vehicle that can drive itself, to a deep learning AI capable of making sense of information unsupervised.

This year, all of those previously discussed topics were brought together under the umbrella of one primary focus: VR (Virtual Reality).

NVIDIA SDK

NVIDIA's SDK resources for developers.

The first announcement revolved around the availability of the NVIDIA SDK. Created to enable and empower developers, the NVIDIA SDK comprises several developer toolkits: ComputeWorks, GameWorks, DesignWorks, DriveWorks, JetPack, and of course, the newly christened VRWorks.

As the "works" in their names suggests, these developer tools cover nearly all aspects in design, architecture and even programming tools for creating self-driving cars. VRWorks was designed, as you would imagine, for developers looking to take advantage of the realm of VR with the recently released Oculus Rift and soon-to-be released HTC Vive headsets.

VR, And The Worlds Created

During the VR demo of Mars 2030, we got a surprise appearance from Steve "Woz" Wozniak, who was also piloting the Mars Rover in the VR demo.

During the show, we saw two VR experiences on demonstration: Everest VR and Mars 2030. Everest VR was a pretty straightforward presentation that let us see what it's like to stand atop the majestic peak, but that was pretty much it. Interaction within the demo was limited, but it was visually refreshing nonetheless.

 

OMG it's Steve Wozniak, and he's piloting the Mars Rover in NVIDIA's Mars 2030!

During the Mars 2030 presentation, the keynote had a light-hearted moment when Jen-Hsun Huang, CEO of NVIDIA, started a video conference with Steve "Woz" Wozniak (if you don't know who this man is, then shame on you), who then proceeded to pilot the rover for the entire demo. Interestingly enough, Jen-Hsun also pointed out that the Everest VR experience was built from nearly 108 billion pixels (not polygons, pixels!), which bears testament to the power of the GPU.

IRAY VR

Essentially an additional block on top of NVIDIA's existing IRAY program and service, IRAY VR effectively allows developers to take advantage of existing IRAY features (e.g. ray tracing and photorealistic rendering) and easily bring them into a VR application.

 

NVIDIA's use of VR allowed them to design their new office, and even brought the construction process approximately seven weeks ahead of schedule.

It's not all smoke and mirrors either. With the help of IRAY VR, Huang said, his architects were able to bring the construction schedule of the new NVIDIA HQ forward by approximately seven weeks, something NVIDIA says is unprecedented. This was because IRAY VR allowed the architects and designers to maximize the amount of natural light entering the building throughout the year, while preventing that light from pushing temperatures to uncomfortable levels during the summer.

In addition, Huang also announced IRAY VR Lite, which is designed to let users create designs in less detailed programs, such as 3ds Max, and can even be used on the Android mobile operating system.

Both IRAY VR and IRAY VR Lite will be available in June this year.

Pascal, The Tesla P100, and DGX-1

Perhaps the most important announcement Huang made during the keynote (and the one we had long been waiting for) was that of NVIDIA's new GPU architecture, Pascal.

Well, we have some good news and some not-really-that-bad news.

The Tesla P100, NVIDIA's first GPU based on the new Pascal architecture.

The good news is that Pascal was indeed announced and, as promised by NVIDIA, the new architecture comes equipped with HBM2, the second generation of the HBM (High Bandwidth Memory) technology that AMD helped pioneer. Fabricated on a 16nm FinFET process, the GPU also supports NVIDIA's new NVLink interconnect and, last but not least, is built to accelerate new AI algorithms.

Now, the not-really-that-bad news is that Pascal wasn't announced in the form of a GeForce card today. Instead, it arrived as the Tesla P100, a GPU designed for hyperscale datacenter processing.

Despite the new process technology, the die still measures 600mm² in order to house the new GPU's 15.3 billion transistors. And that's not counting the memory dies on the package, which together with the GPU bring the total to 150 billion transistors!

There are a total of 56 Streaming Multiprocessors (SMs) enabled on the Tesla P100, although the GP100 GPU core supports up to 60 SMs, which means higher-tier products should come in due course. In total, the Tesla P100 has 3,584 FP32 (single-precision) CUDA cores and about half that number of FP64 (double-precision) CUDA cores.
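If you're wondering where those figures come from, here's a quick back-of-the-envelope sketch. It assumes the GP100 layout of 64 FP32 cores (and half as many FP64 cores) per SM; the per-SM counts are our assumption for illustration, not something stated in the keynote.

```python
# Back-of-the-envelope CUDA core math for the Tesla P100 (a sketch).
# Assumption: each Pascal GP100 SM packs 64 FP32 cores and 32 FP64 cores.
FP32_CORES_PER_SM = 64
FP64_CORES_PER_SM = 32

tesla_p100_sms = 56   # SMs enabled on the Tesla P100
full_gp100_sms = 60   # SMs on the full GP100 die

print("Tesla P100 FP32 cores:", tesla_p100_sms * FP32_CORES_PER_SM)  # 3,584
print("Tesla P100 FP64 cores:", tesla_p100_sms * FP64_CORES_PER_SM)  # 1,792
print("Full GP100 FP32 cores:", full_gp100_sms * FP32_CORES_PER_SM)  # 3,840
```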

That's eight (count them, eight) Tesla P100 GPUs inside the machine.

While the Tesla P100 won't be available until Q1 2017, Huang also announced that the card is already in volume production and is being used in the DGX-1, the world's first deep learning supercomputer, powered by not one, but eight Tesla P100 GPUs. The new DGX-1 is so powerful that it can process a total of 1.3 billion images in just under two hours. That's 12 times faster than what four Maxwell GPUs can manage!

On that note, if you wish to own a DGX-1 supercomputer, it's going to set you back a grand total of US$129,000 (approx. RM505,434).

Self-driving Cars and Roborace

 

Introducing NVIDIA's very own self-driving AI, DAVENET.

Last on Huang's list of announcements was a showcase of NVIDIA's self-driving AI technology, made possible via deep learning. This presentation was pretty short, with Huang simply showing us how far NVIDIA's self-driving AI has come. As you can see from the embedded video above, it seems to be going quite well.

In addition to that, Jen-Hsun also announced the world's first autonomous car race, Roborace, which is set to take place across this year and 2017. The race will comprise 10 teams, each fielding two identical cars, and all of the cars will be powered by the NVIDIA Drive PX 2. If you're wondering what the Drive PX 2 is, it's essentially a mini supercomputer designed for cars.

That's all the interesting stuff from today's keynote. Stay tuned though, as we will be bringing you more coverage over the next two days of the event.