Intel has this week released a new version of its OpenVINO toolkit ahead of MWC Barcelona 2022, taking place from 28 February to 3 March 2022. First launched in 2018, the Intel Distribution of OpenVINO Toolkit now includes new features to help developers advance AI inferencing, offering a suite built for high-performance deep learning and targeted at faster, more accurate real-world results.
New features in the latest Intel OpenVINO 2022.1 release
“The latest release of OpenVINO 2022.1 builds on more than three years of learnings from hundreds of thousands of developers to simplify and automate optimizations. The latest upgrade adds hardware auto-discovery and automatic optimization, so software developers can achieve optimal performance on every platform. This software plus Intel silicon enables a significant AI ROI advantage and is deployed easily into the Intel-based solutions in your network,” said Adam Burns, vice president, OpenVINO Developer Tools in the Network and Edge Group.
Updated, cleaner API
- Fewer code changes when transitioning from frameworks: Precision formats are now preserved with less casting, and models no longer need layout conversion (a minimal usage sketch follows this list).
- An easier path to faster AI: Model Optimizer’s API parameters have been reduced to minimize complexity.
- Train with inferencing in mind: OpenVINO training extensions and neural network compression framework (NNCF) offer optional model training templates that provide additional performance enhancements with preserved accuracy for action recognition, image classification, speech recognition, question answering and translation.
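For illustration, here is a minimal sketch of what inference with the streamlined 2022.1 Python API (openvino.runtime) can look like. The model path and input shape are placeholders rather than anything from Intel's announcement.

```python
# A minimal sketch using the OpenVINO 2022.1 Python API (openvino.runtime).
# "model.xml" and the input shape are illustrative placeholders.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # read an IR (or ONNX) model
compiled = core.compile_model(model, "CPU")   # compile for a target device

# Inputs keep their framework precision; no manual layout conversion is needed here
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)

request = compiled.create_infer_request()
request.infer([input_data])
output = request.get_output_tensor(0).data
print(output.shape)
```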
Broader model support
- Broader support for natural language processing models and use cases like text-to-speech and voice recognition: Dynamic shapes support better enables the BERT family and Hugging Face transformers (see the sketch after this list).
- Optimization and support for advanced computer vision: Mask R-CNN family is now more optimized and double precision (FP64) model support has been introduced.
- Direct support for PaddlePaddle models: Model Optimizer can now import PaddlePaddle models directly without first converting to another framework.
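As a rough sketch of the two items above, the snippet below reads a PaddlePaddle model directly and marks one input dimension as dynamic so variable-length, BERT-style inputs can be handled. The file name and shape are assumptions for illustration, and a single-input model is assumed.

```python
# Sketch only: direct PaddlePaddle import plus a dynamic input dimension.
# "inference.pdmodel" and the [1, -1] shape are illustrative placeholders.
from openvino.runtime import Core

core = Core()

# PaddlePaddle models can now be read directly, without converting to another framework first
model = core.read_model("inference.pdmodel")

# Mark the second dimension (e.g. sequence length) as dynamic with -1, assuming a
# single-input model, so varying input lengths do not require a fixed-shape model
model.reshape([1, -1])

compiled = core.compile_model(model, "CPU")
```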
Portability and performance
- Smarter device usage without modifying code: AUTO device mode self-discovers available system inferencing capacity based on model requirements, so applications no longer need to know their compute environment in advance (see the sketch after this list).
- Expert optimization built into the toolkit: Auto-batching functionality boosts device performance by automatically tuning and customizing the appropriate throughput settings for a developer’s system configuration and deep learning model. The result is scalable parallelism and optimized memory usage.
- Built for 12th Gen Intel Core: Supports the hybrid architecture to deliver enhancements for high-performance inferencing on the CPU and integrated GPU.
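To illustrate, here is a minimal sketch of AUTO device selection combined with a throughput performance hint, the mechanism through which settings such as batching are typically tuned automatically; the model path is a placeholder.

```python
# Sketch: let the runtime pick the device and tune throughput settings itself.
# "model.xml" is a placeholder path.
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")

# "AUTO" discovers available devices (CPU, integrated GPU, ...) and selects one
# based on the model; the THROUGHPUT hint lets the runtime choose batching and
# stream settings for that device instead of hard-coding them in the application.
compiled = core.compile_model(model, "AUTO", {"PERFORMANCE_HINT": "THROUGHPUT"})
```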
Source: Intel