Developers using AI compute platforms who are looking to squeeze every last bit of performance from their systems when processing complex models may be interested in a new article published by NVIDIA this week showing how to get the best performance in MLPerf Inference 2.0. The Jetson AGX Orin is a system-on-chip platform capable of providing up to 275 TOPS of AI compute for multiple concurrent AI inference pipelines, together with high-speed interface support for multiple sensors.
MLPerf Inference 2.0
“Models like Megatron 530B are expanding the range of problems AI can address. However, as models continue to grow in complexity, they pose a twofold challenge for AI compute platforms: These models must be trained in a reasonable amount of time, and they must be able to do inference work in real time.
Jetson AGX Orin is an SoC that brings up to 275 TOPS of AI compute for multiple concurrent AI inference pipelines, plus high-speed interface support for multiple sensors. The NVIDIA Jetson AGX Orin Developer Kit enables you to create advanced robotics and edge AI applications for manufacturing, logistics, retail, service, agriculture, smart city, healthcare, and life sciences.
Beyond the hardware, it takes great software and optimization work to get the most out of these platforms. The results of MLPerf Inference 2.0 demonstrate how to get the kind of performance needed to tackle today’s increasingly large and complex AI models.”
Source: NVIDIA
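To give a flavor of the software and optimization work the quote alludes to, here is a minimal sketch of one common step when deploying an inference pipeline on a Jetson device: building a reduced-precision TensorRT engine from a trained model. It assumes the TensorRT Python bindings that ship with NVIDIA's JetPack SDK and an ONNX model file ("model.onnx" is a placeholder); it is an illustration, not code from NVIDIA's MLPerf submission.

```python
import tensorrt as trt  # NVIDIA TensorRT Python bindings (bundled with JetPack)

# Logger and builder are the entry points to the TensorRT build API
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)

# ONNX models require an explicit-batch network definition
flags = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
network = builder.create_network(flags)

# Parse the trained model; "model.onnx" is a placeholder path
parser = trt.OnnxParser(network, logger)
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("Failed to parse ONNX model")

# Enable FP16 so the engine can use reduced-precision Tensor Core math
config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)

# Build a serialized engine and save it for deployment on the device
engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```

The resulting model.plan file can then be loaded by the TensorRT runtime for low-latency inference; reduced-precision execution of this kind is where much of a platform's headline TOPS figure comes from.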