This week AMD officially unveiled its new Radeon Instinct family of deep learning accelerators, which includes the flagship Radeon Instinct MI25: a Vega 10 GPU with 4,096 stream processors, 16GB of HBM2 memory and a 300W TDP.
AMD has created three initial accelerators designed to address a wide range of machine intelligence applications. Specifications of the AMD Radeon Instinct MI25 Deep Learning Accelerator include:
• Vega 10 Architecture
• 4096 Stream Processors
• 24.6 TFLOPS Half Precision (FP16)
• 12.3 TFLOPS Single Precision (FP32)
• 768 GFLOPS Double Precision (FP64)
• 16GB HBM2 Memory
• 484GB/sec Memory Bandwidth
• 300W TDP
• PCIe Form Factor
• Full Height Dual Slot
• Passive Cooling
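The quoted FP16, FP32 and FP64 figures follow directly from the shader count and clock speed. As a rough sanity check, here is a minimal sketch that reproduces them; the ~1.5 GHz boost clock is inferred from the quoted numbers rather than taken from an official spec, and the 2× FP16 / 1/16 FP64 rate ratios are assumptions based on Vega's packed-math design.

```python
# Peak-throughput sanity check for the quoted MI25 figures.
# Assumes 2 FLOPs per stream processor per clock (one fused multiply-add),
# packed FP16 math at twice the FP32 rate, and FP64 at 1/16 the FP32 rate.
# The ~1.5 GHz boost clock is inferred from the quoted TFLOPS, not official.
STREAM_PROCESSORS = 4096
FLOPS_PER_CLOCK = 2            # one FMA counts as two floating-point ops
boost_clock_ghz = 1.5

fp32_tflops = STREAM_PROCESSORS * FLOPS_PER_CLOCK * boost_clock_ghz / 1000
fp16_tflops = fp32_tflops * 2           # packed math doubles the FP16 rate
fp64_gflops = fp32_tflops / 16 * 1000   # assumed 1/16-rate double precision

print(f"FP16: {fp16_tflops:.1f} TFLOPS")   # ~24.6
print(f"FP32: {fp32_tflops:.1f} TFLOPS")   # ~12.3
print(f"FP64: {fp64_gflops:.0f} GFLOPS")   # ~768
```

The three results line up with the 24.6 TFLOPS, 12.3 TFLOPS and 768 GFLOPS figures in the list above.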
AMD explains more about each accelerator currently in the range:
– The Radeon Instinct MI25 accelerator, based on the “Vega” GPU architecture with a 14nm FinFET process, will be the world’s ultimate training accelerator for large-scale machine intelligence and deep learning datacenter applications. The MI25 delivers superior FP16 and FP32 performance in a passively-cooled single GPU server card with 24.6 TFLOPS of FP16 or 12.3 TFLOPS of FP32 peak performance through its 64 compute units (4,096 stream processors). With 16GB of ultra-high bandwidth HBM2 ECC GPU memory and up to 484 GB/s of memory bandwidth, the Radeon Instinct MI25’s design is optimized for massively parallel applications with large datasets for Machine Intelligence and HPC-class system workloads.
– The Radeon Instinct MI8 accelerator, harnessing the high-performance, energy-efficiency of the “Fiji” GPU architecture, is a small form factor HPC and inference accelerator with 8.2 TFLOPS of peak FP16|FP32 performance at less than 175W board power and 4GB of High-Bandwidth Memory (HBM) on a 512-bit memory interface. The MI8 is well suited for machine learning inference and HPC applications.
– The Radeon Instinct MI6 accelerator, based on the acclaimed “Polaris” GPU architecture, is a passively cooled inference accelerator with 5.7 TFLOPS of peak FP16|FP32 performance at 150W peak board power and 16GB of ultra-fast GDDR5 GPU memory on a 256-bit memory interface. The MI6 is a versatile accelerator ideal for HPC and machine learning inference and edge-training deployments.
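The MI25's 484 GB/s figure can likewise be checked against its memory interface. A quick sketch, assuming Vega 10's 2048-bit HBM2 interface (two stacks); the per-pin data rate here is back-calculated from the quoted bandwidth rather than an official spec:

```python
# Memory-bandwidth check for the MI25's quoted 484 GB/s.
# Assumes a 2048-bit HBM2 interface (two stacks on Vega 10); the
# ~1.89 Gbps per-pin rate is inferred from the quoted bandwidth.
bus_width_bits = 2048
data_rate_gbps = 1.89          # inferred effective per-pin rate

bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/s")   # ~484
```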