Tim Kellogg · @kellogh
943 followers · 3596 posts · Server hachyderm.io

Now that neural networks have had repeated big successes over the last 15 years, we are starting to look for better ways to implement them. Some new ones for me:

Groq notes that NNs are bandwidth-bound from memory to GPU. They built an LPU specifically designed for LLMs:
groq.com/
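A quick way to see why token-by-token generation is bandwidth-bound rather than compute-bound is to compare a matrix-vector product's arithmetic intensity with a chip's compute-to-bandwidth ratio. A minimal sketch (the chip numbers here are illustrative assumptions, not Groq's specs):

```python
# Back-of-envelope roofline check: why single-token LLM inference is
# memory-bandwidth bound. All hardware numbers below are illustrative.
def arithmetic_intensity_matvec(n: int, m: int, bytes_per_param: int = 2) -> float:
    """FLOPs per byte moved for an n x m matrix-vector product (fp16 weights)."""
    flops = 2 * n * m                       # one multiply + one add per weight
    bytes_moved = n * m * bytes_per_param   # each weight is read once per token
    return flops / bytes_moved

# A matvec performs ~1 FLOP per byte of weights read, regardless of size.
ai = arithmetic_intensity_matvec(4096, 4096)

# Hypothetical accelerator: 300 TFLOP/s compute, 2 TB/s memory bandwidth.
compute_flops, mem_bandwidth = 300e12, 2e12
balance_point = compute_flops / mem_bandwidth   # FLOPs/byte to saturate compute

# ai (1.0) is far below balance_point (150.0): the chip mostly waits on memory.
memory_bound = ai < balance_point
```

Since the intensity of a matvec is fixed near 1 FLOP/byte, only more memory bandwidth (or batching many tokens) moves the needle, which is the design pressure behind chips like the LPU.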

A wild one: exchange the silicon for moving parts, good old Newtonian physics. A dramatic drop in power utilization, and it maps to most NN architectures (h/t @FMarquardtGroup)

idw-online.de/de/news820323

#neuralnetworks #groq #LLMs

Last updated 1 year ago

Florian Marquardt · @FMarquardtGroup
1248 followers · 1207 posts · Server fediscience.org

The news release on our recent publication about physical self-learning machines with a new all-physical training procedure, explained in layperson's terms:

Efficient training for artificial intelligence idw-online.de/de/news820323


@MPI_ScienceOfLight

#neuralnetworks #neuromorphic

Dr_Domi · @mr_domi_MD
3 followers · 11 posts · Server social.vivaldi.net

Obsidian, Zettelkasten and productivity. I've been looking for the perfect note-taking app for a very long time. As a result, my choice is Markdown, connections between notes, and complete freedom of writing.

I fell in love with Obsidian at first sight and now I have my own knowledge base, a second brain.

obsidian.md/

#obsidian #zettelkasten #productivity #neuralnetworks

Juri Marcucci · @juri
0 followers · 1 posts · Server econtwitter.net

Hi EconTwitter!

On Friday, Sep 8 at 11AM EDT/3PM GMT/5PM CEST, we will return from the summer break by hosting another great webinar with Lasse Heje Pedersen (Copenhagen Business School) talking about "Machine Learning and the Implementable Efficient Frontier".

Andrew Chen (Federal Reserve Board) will moderate.

Link to paper: papers.ssrn.com/sol3/papers.cf

If you are interested, please join us and register at lnkd.in/egTakd2

#econtwitter #amleds #webinar #machinelearning #finance #portfolio #investment #neuralnetworks #bigdata

IT News · @itnewsbot
3707 followers · 272384 posts · Server schleuss.online

Rethinking open source for AI - We keep using the term "open source" in the context of large language models (LLMs) li... - infoworld.com/article/3706091/

#opensource #neuralnetworks #softwarelicensing #artificialintelligence

Florian Marquardt · @FMarquardtGroup
1244 followers · 1193 posts · Server fediscience.org

Looking forward to our workshop 'Frontiers of Neuromorphic Computing' that will start tomorrow morning at the @MPI_ScienceOfLight Max Planck Institute for the Science of Light.

We are trying to record most of the talks, if not all, and release them on video later (with some delay).

indico.mpl.mpg.de/event/15/

#machinelearning #neuralnetworks #neuromorphic

· @MrClon
250 followers · 11866 posts · Server lor.sh

Is there some ThePirateBay for LLMs? I want to test Code Llama, but Meta requires jumping through some hoops to download it.

#neuralnetworks #codellama #llm

Markus Heyl · @HeylGroup
30 followers · 5 posts · Server fediscience.org

Neural quantum states utilize the impressive power of artificial neural networks to encode the fundamental object in quantum mechanics - the quantum many-body wave function.

A key open question has been to understand which states this encoding represents efficiently, i.e., which states have low complexity. In recent work, we show that mean-field theories are simple in this context:

arxiv.org/abs/2308.10934
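As a toy illustration of why mean-field states are "simple" (my sketch, not the paper's construction): a mean-field product state pins down all 2^N amplitudes of an N-spin wave function with only N parameters, one angle per spin.

```python
import math

# Mean-field (product) state of N spins: |psi> = prod_i (cos t_i |0> + sin t_i |1>).
# N parameters define all 2**N basis amplitudes - an "efficient" encoding.
def product_state_amplitude(angles, config):
    """Amplitude <config|psi> for a spin configuration of 0s and 1s."""
    amp = 1.0
    for t, s in zip(angles, config):
        amp *= math.cos(t) if s == 0 else math.sin(t)
    return amp

N = 10
angles = [math.pi / 4] * N                     # equal superposition on each spin
amp = product_state_amplitude(angles, [0] * N)
# Each factor is cos(pi/4) = 1/sqrt(2), so the amplitude is 2**(-N/2).
```

A generic entangled state, by contrast, needs exponentially many independent amplitudes, which is where neural quantum states earn their keep.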

#quantum #quantummechanics #neuralnetworks

Mr.Trunk · @mrtrunk
9 followers · 16697 posts · Server dromedary.seedoubleyou.me

@m @rotopenguin @davidgerard I've always found it fascinating/sad that neural networks are worst at what computers were best at (counting, retaining and recalling minute information, easy lookups). Would be nice for AI to find a way to merge the two capabilities.

#neuralnetworks #ai

Noodlemaz · @noodlemaz
397 followers · 2862 posts · Server med-mastodon.com

Loved this post, via @emilymbender, about "AI", language, and more - flagging for @lingthusiasm as I think you'll enjoy it too. Big recommend to everyone though.

karawynn.substack.com/p/langua

#language #ableism #autism #deaf #ai #llm #neuralnetworks #communication

A new type of neural network and AI 3/3

...machine is used to make a better one.

I've been thinking of a suitable set of tasks for these systems to optimize on, and developed one earlier in C. Recently I've come across the micromouse competitions, and a maze that a system needs to navigate and learn to go through as fast as possible may be a good choice.

If anyone is interested in collaborating on any of these aspects, do let me know!

#neuralnetworks #ai #c #micromouse #maze #collaboration

A new type of neural network and AI 2/3

... Recent successes at one-shot tasks, where state changes become learning, also point in this direction.

Free neural networks from being feed-forward. Genetic algorithms come to mind for training, but we'd need a suitable base structure that supports gene exchanges and some kind of recursion for repeated structures.

Another idea is to use backpropagation to develop a new learning method, just like when a programming language or manufacturing...

#neuralnetworks #ai #FeedForward #programming

A new type of neural network and AI 1/3

I've been thinking that backpropagation-based neural networks will reach their peak (if they haven't already), and it may be interesting to search for a new learning method. Some observations and ideas:

The two main modes of neural networks - training, when weights are adjusted, and prediction, when states change - should be merged. After all, real-life brains do prediction and learning at the same time, and they are not restarted for every task. ...

#neuralnetworks #ai #backpropagation

A new type of neural network and AI

I've been thinking that backpropagation-based neural networks will reach their peak (if they haven't already), and it may be interesting to search for an alternative method. I've had some observations and ideas:

The two main modes of neural networks - training, when weights are adjusted, and prediction, when states are adjusted - should be merged. After all, real-life brains do prediction and learning at the same time, and they survive long-term; they are not restarted for every task. Recent successes at one-shot tasks, where state changes effectively become learning, also point in this direction.
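One minimal version of merging the two modes (my illustration, not the author's design) is an online perceptron that updates its weights on every example it predicts, so there is no separate training phase:

```python
# Online perceptron: prediction and learning happen in the same pass.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def predict_and_learn(w, x, label, lr=0.1):
    y = predict(w, x)
    if y != label:                        # learning happens during use
        w = [wi + lr * (label - y) * xi for wi, xi in zip(w, x)]
    return y, w

# Learn logical OR on the fly; x[0] = 1 acts as a bias input.
w = [0.0, 0.0, 0.0]
data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
for _ in range(10):                       # stream the examples repeatedly
    for x, label in data:
        _, w = predict_and_learn(w, x, label)
```

After a few passes the weights settle and every prediction on the stream is correct, with no train/predict boundary anywhere.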

It'd also be nice to free neural networks from being feed-forward, but we don't have any training methods for freer networks. Genetic algorithms come to mind, but a suitable base structure has to be found that supports gene exchanges and some kind of recursion for repeated structures. Even with these, it's unclear if we'd get any results in a reasonable amount of time.
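For readers unfamiliar with the genetic-algorithm idea mentioned above, here is a toy sketch (illustrative only, not the author's proposal): a population of bitstring "genomes" evolved toward a target via crossover (gene exchange) and mutation.

```python
import random

random.seed(0)

# Toy genetic algorithm: evolve 20-bit genomes toward all-ones.
def fitness(genome):
    return sum(genome)

def crossover(a, b):
    """Gene exchange: splice a prefix of one parent onto a suffix of the other."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # elitism: keep the fittest unchanged
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(pop) - len(parents))]
    pop = parents + children

best = max(pop, key=fitness)
```

The open problem the post points at is that network graphs, unlike bitstrings, lack an obvious crossover operator that keeps offspring well-formed.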

Another idea is to use the current, imperfect learning method (backpropagation) to develop a new one, just like a programming language or a manufacturing machine can be used to make a new, better one. Here, backpropagation would be used to learn a new learning method.

I've been thinking of suitable playgrounds (sets of tasks) for these systems to operate in, and developed one earlier in C. Recently I've come across the micromouse competitions, and a maze that a system needs to navigate and learn to go through as fast as possible may also be an interesting choice.
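As a concrete framing of that maze benchmark (my illustrative sketch, not the author's C code): a grid maze the learning system must traverse, where classical breadth-first search supplies the optimal path length as a baseline to score learners against.

```python
from collections import deque

# 0 = open cell, 1 = wall. Start is top-left, goal is bottom-right.
MAZE = [
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def shortest_path_length(maze, start=(0, 0), goal=None):
    """BFS baseline: number of steps on the optimal route, or None if blocked."""
    goal = goal or (len(maze) - 1, len(maze[0]) - 1)
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, c), d = queue.popleft()
        if (r, c) == goal:
            return d
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(maze) and 0 <= nc < len(maze[0]) \
                    and maze[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(((nr, nc), d + 1))
    return None
```

A learner's score could then be how quickly its traversal time converges to this BFS optimum over repeated runs.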

If anyone is interested in collaborating on any of these aspects, even just exchanging thoughts, let me know!

#collaboration #collab #maze #micromice #c #backprop #programming #FeedForward #machinelearning #backpropagation #ai #neuralnetworks
