Tim Kellogg · @kellogh
943 followers · 3596 posts · Server hachyderm.io

Now that neural networks have had repeated big successes over the last 15 years, we are starting to look for better ways to implement them. Some approaches that are new to me:

Groq notes that NNs are bandwidth-bound moving weights from memory to the GPU. They built an LPU specifically designed around that bottleneck:
groq.com/
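The bandwidth-bound claim falls out of simple arithmetic. A rough roofline-style sketch, with assumed ballpark hardware numbers (an NVIDIA A100's public spec-sheet figures, used only for illustration):

```python
# Back-of-the-envelope check: why single-token LLM decoding tends to be
# memory-bandwidth-bound rather than compute-bound.
# Hardware figures are rough, assumed numbers for an NVIDIA A100.

PEAK_FLOPS = 312e12  # ~312 TFLOP/s (bf16 tensor cores, assumed)
PEAK_BW = 2.0e12     # ~2 TB/s HBM bandwidth (assumed)

def arithmetic_intensity_gemv(d: int, bytes_per_param: int = 2) -> float:
    """FLOPs per byte moved for a d x d matrix-vector product.

    Each decode step must stream every weight from memory once."""
    flops = 2 * d * d                      # one multiply + one add per weight
    bytes_moved = d * d * bytes_per_param  # each weight read once
    return flops / bytes_moved

ai = arithmetic_intensity_gemv(8192)   # 1.0 FLOP/byte at bf16
machine_balance = PEAK_FLOPS / PEAK_BW # ~156 FLOP/byte

print(f"kernel intensity: {ai:.1f} FLOP/byte")
print(f"machine balance:  {machine_balance:.0f} FLOP/byte")
# intensity << machine balance  ->  the chip idles waiting on memory
```

The kernel needs ~1 FLOP per byte, but the chip can do ~150 FLOPs in the time it takes to fetch one byte, so compute sits mostly idle. That gap is the bottleneck an inference-focused chip can target.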

A wild one: exchange the silicon for moving parts, good old Newtonian physics. It promises a dramatic drop in power consumption and maps to most NN architectures (h/t @FMarquardtGroup)

idw-online.de/de/news820323

#neuralnetworks #groq #LLMs
