Scalable Analyses · @scalable
6 followers · 93 posts · Server fosstodon.org
Don Kosak · @don_kosak
1151 followers · 241 posts · Server fosstodon.org

NVIDIA posted their benchmarks today. Take-away:

2.5X overall speed improvements on A100 (Ampere) due to software improvements since initial release.

6.7X overall speed improvement on new H100 (Hopper) architecture vs. original A100.

Details: blogs.nvidia.com/blog/2022/11/

#MLPerf #machinelearning

Last updated 2 years ago


Nvidia's flagship AI chip reportedly up to 4.5x faster than the previous champ

Upcoming "Hopper" GPU broke MLPerf records in its debut, according to Nvidia.

Nvidia announced yesterday that its upcoming "Hopper" H100 GPU set new performance records during its debut in the industry-standard MLPerf benchmarks, delivering results up to 4.5 times faster than the A100, which is currently Nvidia's fastest production AI chip.

The MLPerf benchmarks (technically called "MLPerf Inference 2.1") measure "inference" workloads, which demonstrate how well a chip can apply a previously trained machine learning model to new data. A group of industry firms known as MLCommons developed the MLPerf benchmarks in 2018 to deliver a standardized metric for conveying machine learning performance to potential customers.
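The idea behind an inference benchmark can be sketched in a few lines: freeze an already-trained model, feed it unseen data, and report samples processed per second. This is a toy illustration only, not MLPerf's actual harness; the single dense layer standing in for a real network (MLPerf uses full models like BERT and ResNet) and all names here are hypothetical.

```python
import time
import numpy as np

# Hypothetical stand-in "model": one frozen dense layer. MLPerf Inference
# runs full trained networks; this toy only shows the measurement idea.
rng = np.random.default_rng(0)
weights = rng.standard_normal((512, 128))  # "trained" parameters, never updated

def infer(batch: np.ndarray) -> np.ndarray:
    """Apply the frozen model to new inputs (inference: no weight updates)."""
    return np.maximum(batch @ weights, 0.0)  # linear layer + ReLU

# Offline-style throughput measurement: total samples / total wall-clock time.
n_batches, batch_size = 100, 32
data = rng.standard_normal((n_batches, batch_size, 512))

start = time.perf_counter()
for batch in data:
    infer(batch)
elapsed = time.perf_counter() - start

throughput = (n_batches * batch_size) / elapsed
print(f"{throughput:.0f} samples/sec")
```

Real MLPerf runs additionally distinguish scenarios (e.g. offline batch throughput vs. latency-bound server queries), which is why vendors quote several numbers per chip.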

In particular, the H100 did well in the BERT-Large benchmark, which measures natural-language-processing performance using the BERT model developed by Google. Nvidia credits this particular result to the Hopper architecture's Transformer Engine, which specifically accelerates the training of transformer models. This means that the H100 could accelerate future natural language models similar to OpenAI's GPT-3, which can compose written works in many different styles and hold conversational chats.

Nvidia positions the H100 as a high-end data center GPU chip designed for AI and supercomputer applications such as image recognition, large language models, image synthesis, and more. Analysts expect it to replace the A100 as Nvidia's flagship data center GPU, but it is still in development. US government restrictions imposed last week on exports of the chips to China brought fears that Nvidia might not be able to deliver the H100 by the end of 2022 since part of its development is taking place there.

Nvidia clarified in a second Securities and Exchange Commission filing last week that the US government will allow continued development of the H100 in China, so the project appears back on track for now. According to Nvidia, the H100 will be available "later this year." If the success of the previous generation's A100 chip is any indication, the H100 may power a large variety of groundbreaking AI applications in the years ahead.

arstechnica.com/information-te

Disclaimer: tastingtraffic.net (Decentralized SOCIAL Network) and/or its owners [tastingtraffic.com] are not affiliates of this provider or referenced image used. This is NOT an endorsement OR Sponsored (Paid) Promotion/Reshare.

#INTERNATIONAL_TECH_NEWS #Nvidia #Flagship #Chip #faster #Hopper #GPU #broke_records #MLPerf #H100 #Tensor_Core_GPU #MLPerfTM

Last updated 2 years ago

RISC-V · @risc_v
311 followers · 1662 posts · Server noc.social

RT from Andes Technology (@Andes_Tech)

Andes announced submitting MLPerf Tiny v0.7 benchmark results for Andes V5 processors. With AndesCore, customers could accelerate and facilitate AI applications such as TinyML, AIoT, NN, signal processing, data processing, and more! Read more: t.ly/Glmd

Original tweet : twitter.com/Andes_Tech/status/

#MLPerf #riscv #processors #AndesCore #ai #TinyML #aiot #nn #signal #data

Last updated 2 years ago

RISC-V · @risc_v
311 followers · 1662 posts · Server noc.social

RT from Andes Technology (@Andes_Tech)

Andes Scores!!!! lnkd.in/gCgum-Qt: "In the MLPerf section, ... Andes's 'AndesCore' chips make use of the open-source RISC-V computer instruction set, ... an instruction set that can be freely modified for any kind of computing device."

Original tweet : twitter.com/Andes_Tech/status/

#MLPerfTinyML #ai #MLPerf #TinyML #riscv

Last updated 2 years ago

Gapry · @gapry
38 followers · 186 posts · Server fosstodon.org

MLCommons Releases MLPerf Inference v1.0 Results with First Power Measurements


mlcommons.org/en/news/mlperf-i

#MLPerf #GapryBlogReadingList

Last updated 3 years ago