Now that #NeuralNetworks have had repeated big successes over the last 15 years, we are starting to look for better ways to implement them. A few approaches that are new to me:
#Groq notes that NN inference is bandwidth-bound moving weights from memory to the GPU. They built an LPU (Language Processing Unit) designed specifically for #LLMs:
https://groq.com/
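To make the bandwidth-bound point concrete, here is a back-of-envelope roofline sketch. All numbers are illustrative assumptions, not Groq or vendor specs: at batch size 1, an LLM decode step is essentially a matrix-vector product, which does only about one floating-point operation per byte of weights it reads.

```python
# Back-of-envelope roofline check: why batch-1 LLM decode tends to be
# memory-bandwidth-bound. All hardware numbers below are illustrative.

def gemv_arithmetic_intensity(m: int, n: int, bytes_per_elem: int = 2) -> float:
    """FLOPs per byte moved for y = W @ x, with W of shape (m, n) in fp16."""
    flops = 2 * m * n                               # one multiply + one add per weight
    bytes_moved = bytes_per_elem * (m * n + n + m)  # read W and x, write y
    return flops / bytes_moved

# A 4096x4096 transformer projection layer (typical size, assumed):
ai = gemv_arithmetic_intensity(4096, 4096)

# Hypothetical GPU: 300 TFLOP/s peak compute, 1 TB/s memory bandwidth.
ridge_point = 300e12 / 1e12  # FLOPs/byte needed to saturate the compute units

print(f"arithmetic intensity ~ {ai:.2f} FLOP/byte")
print(f"ridge point         ~ {ridge_point:.0f} FLOP/byte")
print("bandwidth-bound" if ai < ridge_point else "compute-bound")
```

With roughly 1 FLOP/byte against a ridge point in the hundreds, the compute units sit idle waiting on memory, which is the gap a memory-architecture-first design like an LPU targets.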
A wild one: exchange the silicon for moving parts, good old Newtonian physics. Dramatic drop in power consumption, and it maps onto most NN architectures (h/t @FMarquardtGroup)