If you’re delving into the world of machine learning and artificial intelligence on mobile computing platforms, you will be pleased to know that AMD has something tailored just for you. Meet the Ryzen AI Software Platform, a robust environment designed to work hand-in-hand with AMD’s Ryzen 7040U and 7040HS series mobile processors. What’s the cherry on top? It’s fully compatible with Windows 11.
One of the most compelling aspects of this platform is its commitment to simplicity. Let’s break down how it makes the life of a developer easier:
- Minimal Learning Curve: The platform doesn’t demand any alterations to your existing machine learning models or training methodologies. If you have a pre-trained model in PyTorch or TensorFlow, you can get started with Ryzen AI without any additional fuss.
- Intuitive APIs: Whether you’re a C++ aficionado or a Python enthusiast, the ONNX Runtime accommodates both. This means you can deploy your AI models using languages you’re comfortable with.
- Optimized Performance: The Vitis AI Execution Provider, integrated into the ONNX Runtime, takes care of intelligently partitioning the work. It determines which parts of your model should run on the Ryzen IPU, ensuring that performance is maximized while power consumption remains low (a short Python sketch of this flow follows this list).
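To make that flow concrete, here is a minimal Python sketch of loading a quantized ONNX model through ONNX Runtime with the Vitis AI Execution Provider, falling back to the CPU for any operators the IPU cannot handle. The model file name, input shape, and the configuration file name are placeholders, and the exact provider options on your install may differ, so treat this as a rough sketch rather than the official recipe.

```python
# Minimal sketch: running a quantized ONNX model on Ryzen AI via ONNX Runtime.
# Assumes the Ryzen AI software (Vitis AI Execution Provider) is installed;
# the model path and config file name are placeholders for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model_int8.onnx",                        # quantized model (placeholder name)
    providers=[
        "VitisAIExecutionProvider",           # runs supported ops on the Ryzen IPU
        "CPUExecutionProvider",               # fallback for unsupported ops
    ],
    provider_options=[
        {"config_file": "vaip_config.json"},  # EP configuration (assumed file name)
        {},
    ],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # example image-sized input
outputs = session.run(None, {input_name: dummy})
print(outputs[0].shape)
```

Because the execution providers are listed in priority order, the same script runs unchanged on a machine without an IPU; ONNX Runtime simply keeps everything on the CPU provider.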
What is the AMD Ryzen AI?
If you’re wondering how this translates into real-world applications, think about the real-time AI capabilities you’ve always wanted right on your laptop. The Ryzen AI Engine, available in select models of the AMD Ryzen 7040 series, is a game-changer: it’s the first dedicated AI processing silicon on a Windows x86 processor. Built on a 4nm process node, it promises exceptional performance and long battery life in ultrathin laptops.
So, how do you set up your development environment? It’s straightforward:
- First, make sure that the IPU driver is properly installed. You can verify this in Device Manager, where it appears under System Devices as “AMD IPU Device.” (A complementary software-side check is sketched just after this list.)
- For simpler applications like video conferencing, an IPU binary configuration offering up to 2 TOPS (Tera Operations Per Second) should suffice. You can run up to four such AI streams in parallel without compromising performance.
- For more demanding applications, you can opt for a larger configuration that offers up to 10 TOPS. However, keep in mind that this configuration currently supports only a single application at a time.
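Alongside the Device Manager check, it can help to confirm that the software stack is wired up as well. The short Python check below simply asks ONNX Runtime which execution providers it can see; the provider name matches the Vitis AI Execution Provider described above, but treat this as a quick sanity check under those assumptions rather than an official diagnostic.

```python
# Quick sanity check: is the Vitis AI Execution Provider visible to ONNX Runtime?
# (Complements the Device Manager check for the "AMD IPU Device" driver.)
import onnxruntime as ort

providers = ort.get_available_providers()
print("Available providers:", providers)

if "VitisAIExecutionProvider" in providers:
    print("Vitis AI Execution Provider found - models can target the Ryzen IPU.")
else:
    print("Vitis AI Execution Provider not found - check the Ryzen AI software install.")
```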
Let’s talk about the deployment process. After selecting and training a model in either the PyTorch or TensorFlow frameworks, the next step is quantization. This is facilitated by the AMD Vitis AI quantizer, which converts the model into INT8 and saves it in ONNX format. If you’re a fan of Microsoft Olive, you’ll be happy to know it’s supported as well, with the Vitis AI quantizer serving as a plug-in. Finally, the ONNX Runtime Vitis AI Execution Provider takes over, compiling and executing your quantized model optimally on Ryzen AI.
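As a rough illustration of what that quantization step looks like in practice, the sketch below uses ONNX Runtime’s generic static quantizer with a dummy calibration reader. To be clear, the actual Ryzen AI flow uses the AMD Vitis AI quantizer (or Microsoft Olive with that quantizer as a plug-in); this stand-in only shows the general shape of the step: a float ONNX model plus calibration data in, an INT8 ONNX model out. File names, the input name, and the input shape are placeholders.

```python
# Illustrative INT8 static quantization of an ONNX model.
# The real Ryzen AI flow uses the AMD Vitis AI quantizer (or Olive with it as a
# plug-in); ONNX Runtime's generic quantizer is used here purely to sketch the step.
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantFormat,
                                      QuantType, quantize_static)

class DummyCalibrationReader(CalibrationDataReader):
    """Feeds a handful of random samples; real code would use representative data."""
    def __init__(self, input_name: str, num_samples: int = 8):
        self.samples = iter(
            [{input_name: np.random.rand(1, 3, 224, 224).astype(np.float32)}
             for _ in range(num_samples)]
        )

    def get_next(self):
        return next(self.samples, None)

quantize_static(
    model_input="model_fp32.onnx",      # trained model exported to ONNX (placeholder)
    model_output="model_int8.onnx",     # INT8 model consumed by the runtime
    calibration_data_reader=DummyCalibrationReader(input_name="input"),
    quant_format=QuantFormat.QDQ,       # inserts quantize/dequantize nodes in the graph
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
)
```

The resulting INT8 model is what the ONNX Runtime Vitis AI Execution Provider then compiles and executes on Ryzen AI, as in the inference sketch earlier in the article.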
The Ryzen AI Software Platform is aimed at developers who want to integrate machine learning models into applications for AMD Ryzen-powered laptops. It offers a seamless development flow, versatile deployment options, and intelligent workload optimization, all while being extraordinarily power-efficient.
If you’re in the market for a developer-friendly, power-efficient, and performance-optimized AI solution, this platform is definitely worth a closer look.