This week NVIDIA added a new card to its PCI-Express range with the Tesla P100 HPC accelerator, officially unveiled at this year’s International Supercomputing Conference, held in Frankfurt, Germany.
Designed for PCIe multi-slot servers, the Tesla P100 is a dual-slot card of standard height measuring 30 cm long, and follows on from the NVLink-based Tesla P100 that NVIDIA introduced in April 2016.
NVIDIA explained a little more about the performance of the new PCI-Express variant of its Tesla P100 HPC accelerator:
The PCIe variant of the P100 offers slightly lower performance than the NVLink variant because of lower clock speeds, although the core configuration of the GP100 silicon remains unchanged. It offers FP64 (double-precision floating-point) performance of 4.70 TFLOP/s, FP32 (single-precision) performance of 9.30 TFLOP/s, and FP16 performance of 18.7 TFLOP/s, compared to the NVLink variant’s 5.3 TFLOP/s, 10.6 TFLOP/s, and 21 TFLOP/s, respectively.
The card comes in two sub-variants based on memory: a 16 GB variant with 720 GB/s of memory bandwidth and 4 MB of L2 cache, and a 12 GB variant with 540 GB/s and 3 MB of L2 cache. Both sub-variants feature 3,584 CUDA cores based on the “Pascal” architecture and a core clock speed of around 1,300 MHz.
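As a rough sanity check, the quoted throughput figures can be reproduced from the core count and clock speed above. This is only a sketch: the exact 1,303 MHz boost clock is an assumption based on the “around 1,300 MHz” figure, and the per-cycle FLOP rates (FP64 at half the FP32 rate, FP16 at double) reflect GP100’s published rate ratios.

```python
# Peak throughput = CUDA cores x clock (GHz) x FLOPs per core per cycle.
# A fused multiply-add counts as 2 FLOPs, so FP32 does 2 FLOPs/core/cycle;
# on GP100, FP64 runs at half the FP32 rate and FP16 at double it.

cuda_cores = 3584
boost_clock_ghz = 1.303  # assumed boost clock (~1,300 MHz per the article)

fp32_tflops = cuda_cores * boost_clock_ghz * 2 / 1000  # FMA = 2 FLOPs/cycle
fp64_tflops = fp32_tflops / 2                          # GP100: FP64 at 1/2 rate
fp16_tflops = fp32_tflops * 2                          # GP100: FP16 at 2x rate

print(f"FP32: {fp32_tflops:.2f} TFLOP/s")  # ~9.34
print(f"FP64: {fp64_tflops:.2f} TFLOP/s")  # ~4.67
print(f"FP16: {fp16_tflops:.2f} TFLOP/s")  # ~18.68
```

The results land within rounding distance of the 9.30, 4.70, and 18.7 TFLOP/s figures NVIDIA quotes for the PCIe card.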
For more information on the new Tesla P100 HPC accelerator, head over to the NVIDIA website by following the link below.