
Nvidia Launches Autonomous Vehicle, Conversational AI Tech at GTC China 


Nvidia launched a raft of new autonomous driving and conversational AI products today at its GTC China conference, and announced a strategic relationship with Didi Chuxing (DiDi), a mobile transportation platform.

The company introduced Drive AGX Orin, which Nvidia said is an advanced, software-defined platform for autonomous vehicles (AVs) and robots. The platform is powered by a new SoC called Orin, which packs 17 billion transistors and is the result of four years of R&D, Nvidia said. The Orin SoC integrates Nvidia’s next-generation GPU architecture and Arm Hercules CPU cores, along with new deep learning and computer vision accelerators that deliver a total throughput of 200 trillion operations per second, which the company said is almost 7x the performance of its previous-generation Xavier SoC.
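
For scale, the “almost 7x” claim lines up with Xavier’s widely cited 30 TOPS rating, an assumption drawn from Nvidia’s published specs rather than from this announcement:

```python
# Sanity-checking the "almost 7x" claim. Xavier's 30 TOPS rating is an
# assumption from Nvidia's published specs, not stated in this article.
orin_tops = 200
xavier_tops = 30
print(f"{orin_tops / xavier_tops:.1f}x")  # 6.7x, i.e. "almost 7x"
```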

Nvidia said Orin is designed for the variety of applications and deep neural networks (DNNs) that run in autonomous vehicles and robots, while also supporting systematic safety standards, such as ISO 26262 ASIL-D.

“Nvidia’s long-term commitment to the transportation industry, along with its innovative end-to-end platform and tools, has resulted in a vast ecosystem — virtually every company working on AVs is utilizing NVIDIA in its compute stack,” said Sam Abuelsamid, principal research analyst at Navigant Research. “Orin looks to be a significant step forward that should help enable the next great chapter in this ever improving technology story.”

The Drive AGX Orin platform is designed to support architecturally compatible configurations that scale from Level 2 driver assistance to full self-driving Level 5 vehicles, “enabling OEMs to develop large-scale and complex families of software products,” Nvidia said.

Also on the AV front, Nvidia announced it will give transportation companies access to its Nvidia Drive DNNs for AV development through the Nvidia GPU Cloud (NGC) container registry. That means automakers, truck manufacturers, robotaxi companies and software companies will be able to download Nvidia’s pre-trained AI models and training code.
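
Models hosted on NGC are typically fetched with the NGC command-line client. As a rough sketch of what that might look like (the model identifier below is hypothetical, since the article does not name specific Drive DNN packages):

```python
import subprocess

# Hypothetical model identifier; actual Drive DNN names on NGC will differ.
MODEL = "nvidia/drive/traffic_light_detector:1"

# Assumes the NGC CLI ("ngc") is installed and configured with an API key;
# this downloads the named model version into the current directory.
subprocess.run(
    ["ngc", "registry", "model", "download-version", MODEL],
    check=True,
)
```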

“The AI autonomous vehicle is a software-defined vehicle required to operate around the world on a wide variety of datasets,” said Jensen Huang, founder/CEO of Nvidia. “By providing AV developers access to our DNNs and the advanced learning tools to optimize them for multiple datasets, we’re enabling shared learning across companies and countries, while maintaining data ownership and privacy. Ultimately, we are accelerating the reality of global autonomous vehicles.”

DNNs running on Nvidia Drive AGX process sensor data to handle AV tasks such as traffic-light and sign detection; detection of vehicles, pedestrians and bicycles; path perception and planning; and, inside the vehicle, gaze detection and gesture recognition.

Nvidia also announced developer tools for customizing the company’s DNNs using developers’ own datasets and target feature sets. The tools support training DNNs with active learning (automated data selection using AI rather than manual curation), federated learning (use of datasets across countries while maintaining data privacy and IP) and transfer learning (which speeds development of perception software by building on Nvidia’s existing AV R&D).
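
Nvidia has not published the APIs for these tools, but the active-learning idea, letting the model itself flag the frames it is least sure about, can be sketched in a few lines. Here `model.predict` is a stand-in for any classifier that returns per-class probabilities:

```python
import numpy as np

def select_uncertain(model, unlabeled_x, batch_size=32):
    """Margin-based uncertainty sampling: score unlabeled frames and
    return the indices of those the model is least confident about."""
    probs = model.predict(unlabeled_x)      # shape: (n_samples, n_classes)
    sorted_probs = np.sort(probs, axis=1)
    # Margin between the two most likely classes; a small margin means
    # the model is torn between labels, so the frame is worth annotating.
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margin)[:batch_size]
```

The selected frames would then go to human labelers and be folded back into the training set, replacing manual curation with model-guided curation.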

Rounding out the AV announcements, Nvidia and DiDi said that, as part of their strategic relationship, DiDi will use Nvidia GPUs and data center servers to train machine learning algorithms, and Nvidia Drive for inference on DiDi’s Level 4 AVs.

In conversational AI, Nvidia introduced new software that the company said cuts inference latency, enabling more natural human-to-AI “interactive engagement” in applications such as voice agents, chatbots and recommendation engines. The offering is based on TensorRT 7, the seventh generation of the company’s inference software development kit.

Nvidia cited Juniper Research findings estimating that 3.25 billion digital voice assistants are in use in devices globally, a number expected to reach 8 billion by 2023, more than the world’s total population.

TensorRT 7 includes a deep learning compiler designed to automatically optimize and accelerate “the increasingly complex recurrent and transformer-based neural networks needed for AI speech applications,” Nvidia said. The company said the compiler accelerates the components of conversational AI by more than 10x compared with CPU performance, reducing latency below 300 milliseconds, the threshold considered necessary for real-time interactions.
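
As a minimal sketch of a TensorRT 7-era build flow, assuming the Python bindings of that generation and a placeholder ONNX model file, feeding a trained network through the compiler might look like this:

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_engine(onnx_path="speech_model.onnx"):
    """Parse an ONNX model and compile an optimized TensorRT engine."""
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(str(parser.get_error(0)))
    config = builder.create_builder_config()
    config.max_workspace_size = 1 << 30    # 1 GiB scratch space for tactic search
    config.set_flag(trt.BuilderFlag.FP16)  # reduced precision trims latency
    return builder.build_engine(network, config)
```

Enabling FP16 is one of the levers the compiler can pull toward latency targets such as the 300-millisecond threshold cited above.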

Nvidia said conversational AI customers include Sogou, which provides search services to WeChat, one of the world’s most widely used mobile applications. “Sogou provides high-quality AI services, such as voice, image, translation, dialogue and Q&A, to hundreds of millions of users every day,” said Yang Hongtao, CTO of Sogou. “By using the Nvidia TensorRT inference platform, we enable online service responses in real time. These leading AI capabilities have significantly improved our user experience.”
