Carsten Spille · @CarstenSpille
96 followers · 41 posts · Server social.tchncs.de

Apparently, @AMD Instinct will deliver 2507 TFLOPS of FP8 with sparsity at 850 watts, based on pre-production silicon. Compare that with 306.4 delivered TFLOPS (FP16; no FP8 support) on MI250X at 560 watts (80% utilization). The footnote is a year old but wasn't live until a couple of days ago, and neither number appeared in David Wang's presentation at AMD FAD22.
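Not part of the original post, but the two quoted figures imply a rough efficiency comparison; a minimal sketch (note the formats differ, FP8-with-sparsity vs. FP16, so this is not apples to apples):

```python
# Back-of-the-envelope TFLOPS-per-watt from the figures quoted above.
# MI300A (pre-production): 2507 TFLOPS FP8 with sparsity at 850 W.
# MI250X: 306.4 delivered TFLOPS FP16 at 560 W (80% utilization).
mi300a_tflops, mi300a_watts = 2507.0, 850.0
mi250x_tflops, mi250x_watts = 306.4, 560.0

mi300a_eff = mi300a_tflops / mi300a_watts  # ~2.95 TFLOPS/W (FP8, sparse)
mi250x_eff = mi250x_tflops / mi250x_watts  # ~0.55 TFLOPS/W (FP16, dense)

print(f"MI300A: {mi300a_eff:.2f} TFLOPS/W")
print(f"MI250X: {mi250x_eff:.2f} TFLOPS/W")
print(f"ratio:  {mi300a_eff / mi250x_eff:.1f}x")  # ~5.4x, formats differ
```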

#mi300a

Last updated 1 year ago

AMD Has a GPU to Rival Nvidia's H100
The MI300X is a GPU-only version of AMD's previously announced MI300A supercomputing chip, which includes a CPU and GPU. The MI300A will be in El Capitan, a supercomputer coming next year to the Los Alamos National Laboratory. El Capitan is expected to surpass 2 exaflops of performance. The MI300X has 192GB of HBM3, which Su said was 2.4 times more memory density than Nvidia's H100. The SXM and PCIe versions of H100 have 80GB of HBM3.
hpcwire.com/2023/06/13/amd-has
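A quick check, not in the article itself, of where the "2.4 times" figure comes from, using the two HBM3 capacities quoted above:

```python
# Memory capacities quoted in the article.
h100_hbm_gb = 80     # H100 SXM/PCIe HBM3 capacity
mi300x_hbm_gb = 192  # MI300X HBM3 capacity

ratio = mi300x_hbm_gb / h100_hbm_gb
print(ratio)  # 2.4, matching the figure Su quoted
```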

#amd #gpu #nvidia #H100 #mi300x #mi300a #cpu #LosAlamos #nationallaboratory #hbm3

Last updated 1 year ago

AMD Instinct MI300 is THE Chance to Chip into NVIDIA AI Share
NVIDIA is facing very long lead times for its A100 and H100; if you want NVIDIA for AI and have not ordered yet, don't expect delivery before 2024. For traditional GPU use, MI300 is offered as a GPU-only part: all four center tiles are GPU. With 192GB HBM3, AMD can simply fit more onto a single GPU than NVIDIA. The MI300A has 24 Zen 4 cores, CDNA 3 GPU cores, and 128GB HBM3. This is the CPU-plus-GPU part deployed in the El Capitan 2+ exaflop supercomputer.
servethehome.com/amd-instinct-

#amd #nvidia #ai #H100 #a100 #gpu #hbm #mi300a #Zen4 #cdna3 #hbm3 #supercomputer

Last updated 1 year ago