NVIDIA outs a US$99 AI computer, the Jetson Nano

By Vijay Anand - on 19 Mar 2019, 1:36pm

Targeted at the robotics community and industry, the new Jetson Nano dev kit is NVIDIA's lowest-cost AI computer to date at US$99, and its most power-efficient too, consuming as little as 5 watts.

The Jetson Nano Developer Kit (80x100mm), available now for US$99.

It is really small.

The goal of the Jetson Nano is to make AI processing accessible to everyone while supporting the same underlying CUDA architecture and deep learning SDKs, such as TensorRT and cuDNN (the NVIDIA CUDA Deep Neural Network library). It supports popular machine learning (ML) frameworks such as TensorFlow, PyTorch and Caffe, along with frameworks for computer vision and robotics development like OpenCV and ROS, making it easier than ever to deploy AI-based inference workloads onto Jetson. Nor is it limited to DNN inferencing: its CUDA architecture can also be leveraged for real-time computer vision and digital signal processing (DSP) to enable multi-sensor autonomous robots, IoT devices with intelligent edge analytics, and advanced AI systems.
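For readers new to the term, a DNN "inference workload" boils down to repeated matrix multiplies and activations run over input data. The toy NumPy sketch below (purely illustrative; the network, weights and `infer` helper are hypothetical, not part of any NVIDIA SDK) shows the shape of the computation that frameworks like TensorRT offload to the Nano's CUDA cores at far larger scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard rectified-linear activation.
    return np.maximum(x, 0.0)

def infer(image_vec, w1, b1, w2, b2):
    """One forward (inference) pass: matmul -> ReLU -> matmul."""
    hidden = relu(image_vec @ w1 + b1)
    return hidden @ w2 + b2

# Hypothetical 32x32 grayscale "image", flattened to a 1024-vector,
# fed through a tiny two-layer network with made-up random weights.
x = rng.standard_normal(1024)
w1, b1 = rng.standard_normal((1024, 64)), np.zeros(64)
w2, b2 = rng.standard_normal((64, 10)), np.zeros(10)

logits = infer(x, w1, b1, w2, b2)
print(logits.shape)  # one raw score per (hypothetical) class
```

On a CPU this is a handful of BLAS calls; on Jetson, the same linear algebra is dispatched to the GPU, which is why a 128-core part can keep up with real-time camera feeds.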

Packing a pared-down NVIDIA Tegra X1 processor with a 128-core Maxwell GPU (which means you also get a 64-bit quad-core ARM A57 CPU) and 4GB of 64-bit LPDDR4 memory, the Jetson Nano is perfectly capable of processing eight Full HD video streams in real time (or two 4K video streams), making it well suited as a low-power edge platform for intelligent video analytics in network video recorders, smart camera arrays, IoT gateways and more.

Reference NVR system architecture with Jetson Nano and 8x HD camera inputs. (Source: NVIDIA)
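As a quick back-of-the-envelope check (our arithmetic, not a figure from NVIDIA's spec sheet), eight 1080p streams at 30fps carry exactly the same raw pixel rate as two 4K streams at 30fps, which is why the two claims above are interchangeable:

```python
def pixel_rate(width, height, fps, streams):
    """Raw pixels per second across all streams combined."""
    return width * height * fps * streams

full_hd = pixel_rate(1920, 1080, 30, 8)  # 8x Full HD @ 30 fps
uhd_4k = pixel_rate(3840, 2160, 30, 2)   # 2x 4K (UHD) @ 30 fps

print(full_hd, uhd_4k)  # both come to 497,664,000 pixels/sec
assert full_hd == uhd_4k
```

A 4K frame holds exactly four 1080p frames' worth of pixels, so 2 x 4K = 8 x 1080p at matching frame rates.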

While it doesn't sound a whole lot less capable than the original Jetson TX1, the Jetson Nano costs far less and is more compact, and that's the point of this new entrant: to attract even more makers and help kick-start AI adoption at every level. That makes it great value, and perfect for makers hoping to step up from more basic platforms like the Raspberry Pi 3 and get on the AI bandwagon, which requires more processing heft. Speaking of which, you'll be surprised how well the Jetson Nano holds up against other entry-level options, as compiled by NVIDIA. Also keep in mind that the Jetson Nano is the only option in its class that runs all AI application frameworks and models available in the market today.

Jetson kits compared

| Feature | Jetson Nano | Jetson TX1 | Jetson TX2 |
| GPU | 128-core Maxwell (Tegra X1 variant) | 256-core Maxwell (Tegra X1) | 256-core Pascal (Tegra "Parker") |
| CPU | 64-bit quad-core ARM A57 | 64-bit quad-core ARM A57 | 2x Denver 2 + 4x ARM A57 |
| Memory | 4GB 64-bit LPDDR4 (25.6GB/s) | 4GB 64-bit LPDDR4 (25.6GB/s) | 8GB 128-bit LPDDR4 (58.4GB/s) |
| Storage | 16GB eMMC | 16GB eMMC | 32GB eMMC |
| Ethernet | Gigabit Ethernet | Gigabit Ethernet | Gigabit Ethernet |
| Wi-Fi | None | 802.11ac 2x2 | 802.11ac 2x2 |
| Video encode | 4K/30p | 4K/30p | 4K/60p or 2x 4K/30p |
| Camera | 12 MIPI CSI-2 D-PHY 1.1 lanes (1.5Gb/s), dual ISP (1.4Gpix/s) | 12 MIPI CSI-2 D-PHY 1.1 lanes (1.5Gb/s), dual ISP (1.4Gpix/s) | 12 MIPI CSI-2 D-PHY 1.2 lanes (2.5Gb/s), dual ISP (1.4Gpix/s) |
| Module size | 70 x 45mm | 50 x 87mm | 50 x 87mm |
| Price | Dev-kit: US$99 / Module: US$129 | Dev-kit: US$599 / Module: US$299 | Dev-kit: US$599 / Module: US$479 |

Beyond vision-based applications, the Jetson Nano is a great fit for IoT gateways in smart buildings, where a massive amount of sensory information can be aggregated and processed. Speech synthesis is another computationally intensive arena the Jetson Nano can target, though good use cases there are perhaps still in the making (beyond surveillance needs).

The 45 x 70mm Jetson Nano compute module with its 260-pin edge connector. It will be available for US$129, packing better hardware than the dev-kit edition.

The Jetson Nano dev kit is built around a 260-pin SODIMM-style System-on-Module (SoM) containing the processor, memory, and power-management circuitry. The Jetson Nano compute module itself (as pictured above) measures 45 x 70mm and will start shipping in June 2019 at US$129 for embedded designers to integrate into production systems. The production compute module also includes 16GB of eMMC onboard storage and enhanced I/O, hence the higher cost.

Let the tinkering begin!

Source: NVIDIA