Event Coverage

NVIDIA GTC Asia South 2015: The return of HPC and Deep Learning (Updated with video)

By John Law - 6 Sep 2015


Every March (or thereabouts), we attend NVIDIA's annual GPU Technology Conference (GTC) in San Jose, California. Despite the announcement of its current flagship TITAN X graphics card at this year's GTC 2015, NVIDIA's attention was quite clearly focused on another application: Deep Learning.

Just yesterday, we went down to Singapore where, somewhat surprisingly, NVIDIA was hosting the Asia South leg of GTC 2015. This conference wasn't for the company to announce a new graphics card, though. Far from it: the focus of this GTC was for NVIDIA to promote its GPU technology to the enterprise and science (specifically, applied and accelerated computing) sectors. As you'd expect, guest speakers from institutions and companies such as Monash University and WeChat were invited to the conference to give us the lowdown on how they were using NVIDIA's technology to drive their research efforts in the fields of High Performance Computing (HPC) and Deep Learning.

 

The Southeast Asian expansion

Marc Hamilton, VP, Solutions Architecture, NVIDIA.

The keynote kicked off with Marc Hamilton, VP, Solutions Architecture, NVIDIA, giving us the lowdown on the company's many feats, especially the use of GPU power in driving the advancement of scientific research in the fields of HPC, Deep Learning, PC Virtualization, Rendering, and the more familiar field of Cloud Gaming.

Of course, throughout his speech, Marc made several references to the more common applications of NVIDIA's products, namely object rendering and the GPU-intensive (and computationally expensive) technique of ray tracing, which simulates the paths of light rays through a scene so that the lighting within a 3D model render can be recomputed in real time as the light source moves.
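
To give a rough idea of why ray tracing is so demanding, here's a minimal, illustrative sketch (our own toy example, not NVIDIA's implementation): it casts one ray per pixel against a single sphere and shades the hit point from a movable light source. A real renderer repeats this for millions of pixels, thousands of objects, and multiple bounces per ray, which is exactly the workload GPUs are built for.

```python
# Toy ray tracer: one ray per pixel, one sphere, one movable light.
import numpy as np

WIDTH, HEIGHT = 160, 120
SPHERE_CENTER = np.array([0.0, 0.0, -3.0])
SPHERE_RADIUS = 1.0

def trace(light_pos):
    image = np.zeros((HEIGHT, WIDTH))
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Ray from the camera (at the origin) through this pixel.
            direction = np.array([(x - WIDTH / 2) / WIDTH,
                                  (y - HEIGHT / 2) / HEIGHT, -1.0])
            direction /= np.linalg.norm(direction)

            # Ray-sphere intersection: solve |t*d - c|^2 = r^2 for t.
            oc = -SPHERE_CENTER  # camera sits at the origin
            b = 2.0 * np.dot(oc, direction)
            c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
            disc = b * b - 4.0 * c
            if disc < 0:
                continue  # ray misses the sphere; pixel stays dark
            t = (-b - np.sqrt(disc)) / 2.0
            if t < 0:
                continue  # intersection is behind the camera

            # Lambertian shading: brightness follows the angle between the
            # surface normal and the direction towards the light.
            hit = t * direction
            normal = (hit - SPHERE_CENTER) / SPHERE_RADIUS
            to_light = light_pos - hit
            to_light /= np.linalg.norm(to_light)
            image[y, x] = max(np.dot(normal, to_light), 0.0)
    return image

# Moving the light means redoing the whole per-pixel computation, which is
# why real-time relighting of a scene needs so much GPU horsepower.
frame = trace(np.array([5.0, 5.0, 0.0]))
print(frame.shape, frame.max())
```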

But 3D rendering aside, Marc then revealed NVIDIA's real reason for hosting the GTC in Singapore: to expand its technology into the country's enterprise sector. Having seen the benefits of HPC and Deep Learning, and how they fit into Singapore's vision of becoming a Smart Nation, both the government and several companies have come to see that GPU technology can be applied to a far wider array of sciences, not just gaming.

NVIDIA will be opening its first technology center in Singapore, in partnership with the Singapore Economic Development Board (EDB Singapore).

To that end, Marc was also glad to announce that NVIDIA would be opening its first technology center in Singapore, in partnership with EDB Singapore. Through this partnership, both parties hope the new technology center will drive the adoption of Deep Learning across the Southeast Asian region.

 

WeChat: Deep Learning, social media, and mobile messaging

Professor Qiang Yang, Technical Advisor to WeChat, Tencent.

At this stage of the keynote, the subject of Deep Learning was still on the table. This time, though, its application had shifted from the realm of search engines and data accuracy to the realm of social media and mobile messaging.

To explain how Deep Learning was being used in the social media space, NVIDIA invited Professor Qiang Yang, Technical Advisor to WeChat at Tencent, to talk about how the Chinese company was implementing Deep Learning in its app, and how it was transforming the way users communicate with one another.

We've already covered the subject of Deep Learning and its applications at length during our coverage of GTC 2015, so we won't elaborate on the topic in detail here.

Professor Yang explained that WeChat was implementing Deep Learning across several channels related to the app: messaging, audio, image recognition, voice recognition, and crowd intelligence.

WeChat used Augmented Reality and 3D reconstruction to further improve its Deep Learning initiative.

One of the more interesting ways WeChat implemented image recognition was through the use of augmented reality and 3D imaging. The concept is simple: using the phone's camera, the user captures images of a subject. Once captured, the picture is then completely reconstructed into a 3D model using the vast amount of information the company has accumulated through its Deep Learning system.

Finally, once the picture has been completely reconstructed, the WeChat app saves the reconstructed image in its database. From there, the user can use that image as a form of identification, either for themselves or for someone on their friends list.
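
WeChat's actual pipeline wasn't detailed in the talk, but here's a purely hypothetical sketch of the kind of lookup such an identification feature might rest on: images are reduced to fixed-length feature vectors ("embeddings") by a model, stored per user, and matched by similarity. The embed() function below is only a placeholder (a real system would use a trained deep network), and the IdentityStore class and its threshold are our own inventions for illustration.

```python
# Hypothetical embedding-based identification lookup (illustrative only).
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder feature extractor: flatten and normalise the pixels.
    # A real system would run the image through a trained neural network.
    vec = image.astype(np.float64).ravel()
    return vec / (np.linalg.norm(vec) + 1e-9)

class IdentityStore:
    """Maps user names to stored embeddings of their reconstructed images."""

    def __init__(self):
        self.entries = {}  # name -> unit-length embedding

    def enroll(self, name: str, image: np.ndarray) -> None:
        self.entries[name] = embed(image)

    def identify(self, image: np.ndarray, threshold: float = 0.9):
        query = embed(image)
        best_name, best_score = None, -1.0
        # Cosine similarity against every enrolled user; since embeddings
        # are unit vectors, a plain dot product suffices.
        for name, stored in self.entries.items():
            score = float(np.dot(query, stored))
            if score > best_score:
                best_name, best_score = name, score
        return best_name if best_score >= threshold else None

# Usage with dummy image data.
store = IdentityStore()
store.enroll("alice", np.random.rand(32, 32))
probe = np.random.rand(32, 32)
print(store.identify(probe))  # None unless the probe is similar enough
```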

The "Hong Bao" Shake was WeChat's way of collecting data in a short amount of time.

Now, the question at hand with WeChat's Deep Learning initiative was this: how did the company manage to accumulate such a large amount of data in so little time since adopting the system? The answer, quite surprisingly, was a promotional event held during Chinese New Year called the "Hong Bao" (Mandarin for red packet) Shake. During the festival, WeChat users could send these Hong Baos to random people merely by shaking their phones as fast as possible, with each Hong Bao containing a different gift for the recipient.

Through that promotional event, and in the span of 14 days (the duration of the Chinese New Year festival), WeChat gathered enough data for its Deep Learning program, recording a peak of 810 million shakes per minute during the promotion.

 

Magnifying microscopic images with HPC

Paul Bonnington, Professor and Director of e-Research, Monash University, Australia.

Deep Learning aside, one of the more interesting talks at the event was on the use of GPU technology and HPC in science, specifically in microscopy. As with its application in neuroscience, HPC has clear benefits in microscopy, as illustrated by Paul Bonnington, Professor and Director of e-Research, Monash University, Australia.

The Synchrotron is an extremely powerful microscope that is driven by the MASSIVE supercomputer (below).

It's a little-known secret that Monash University is actually one of NVIDIA's biggest clients, and the reason is that all the hardware purchased by the University (through Paul) is used to drive two instruments. The first is the Synchrotron, an extremely powerful microscope that can render a 3D model of a subject in incredible detail. The second is the supercomputer that drives the Synchrotron, known by the acronym MASSIVE. Used in tandem with NVIDIA's GPUs, MASSIVE and the Synchrotron are capable of reconstructing data from any subject placed under the microscope in a time span of just two minutes.

If that speed of data reconstruction doesn't impress you, allow us to elaborate a little further: Paul told us that traditionally, long before GPU technology was even considered as a driver for data reconstruction, the same process could easily take anywhere from days to weeks.

These 3D renders of a pair of lungs were captured by the Synchrotron and reconstructed by MASSIVE in just two minutes.

It doesn't stop there, either. Paul also showed us how NVIDIA's GPUs allowed MASSIVE and the Synchrotron to reconstruct and visualize a pair of lungs, letting his team watch the organ actually move through its breathing pattern. This level of visualization is astounding, especially when you consider that sooner or later, scientists will be able to monitor different organs (not just lungs) and discover any anomalies within a subject, all within minutes of placing it under the Synchrotron.

The CAVE2 (or CAVE) facility, where Monash University's research faculty can gather to see the reconstructed data in full.

However, Monash University's greatest achievement thus far was the creation of the CAVE2 (or CAVE, as Paul likes to call it). As you can see from the image above, the CAVE is essentially a facility used together with the Synchrotron and MASSIVE so that Paul and his department can get a full visual render of their work. The facility itself looks and sounds mind-blowing, especially when you consider that it houses 40 NVIDIA Quadro K5200 workstation GPUs, as well as multiple displays to give researchers a clear view of 3D rendered images in real time.

Needless to say, the CAVE is very powerful.

With all that has been mentioned here, it's safe to say that NVIDIA's GTC Asia South 2015 is off to a good start. That's all from us for now, and hopefully next year, we'll hear more about NVIDIA's partnership with EDB Singapore and their Deep Learning endeavors.
