A new type of neural network and learning method

I've been thinking that backpropagation-based neural networks will reach their peak (if they haven't already), and it may be interesting to search for an alternative method. I've had some observations and ideas:

The two main modes of neural network operation - training, when weights are adjusted, and prediction, when states are adjusted - should be merged. After all, real-life brains do prediction and learning at the same time, and they survive long-term; they are not restarted for every task. Recent successes at one-shot tasks, where state changes effectively become learning, also point in this direction.

It'd also be nice to free neural networks from being feed-forward, but we don't have any training methods for freer networks. Genetic algorithms come to mind, but a suitable base structure has to be found that supports gene exchange and some kind of recursion for repeated structures. Even with these, it's unclear if we'd get any results in a reasonable amount of time.

Another idea is to use the current, imperfect learning method (backpropagation) to develop a new one, just as a language or a manufacturing machine can be used to make a new, better one. Here, backpropagation would be used to learn a new learning method, as sketched below.
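For concreteness, a minimal sketch of this bootstrap idea in the spirit of learned optimizers, assuming PyTorch: a tiny meta-network proposes weight updates for a toy inner task and is itself trained by backpropagating a meta-loss through a short unrolled inner loop. The task, sizes, and names here are my own illustrative assumptions, not a worked-out proposal.

```python
import torch
import torch.nn as nn

# Meta-network: maps a parameter's gradient to a proposed update.
meta_net = nn.Sequential(nn.Linear(1, 8), nn.Tanh(), nn.Linear(8, 1))
meta_opt = torch.optim.Adam(meta_net.parameters(), lr=1e-3)

def inner_task():
    # Toy inner task: fit y = 3x with a single scalar weight w.
    w = torch.zeros(1, requires_grad=True)
    x = torch.randn(32, 1)
    y = 3.0 * x
    for _ in range(5):  # short unrolled inner loop
        loss = ((x * w - y) ** 2).mean()
        g, = torch.autograd.grad(loss, w, create_graph=True)
        w = w + meta_net(g.view(1, 1)).view(1)  # learned update rule
    return ((x * w - y) ** 2).mean()  # meta-loss after the learned steps

for step in range(500):
    meta_opt.zero_grad()
    inner_task().backward()  # backprop through the unrolled updates
    meta_opt.step()
```

Here backpropagation plays the role of the "manufacturing machine": it shapes meta_net, and meta_net becomes the new learning rule.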

I've been thinking about suitable playgrounds (sets of tasks) for these systems to operate in, and developed one earlier in C. Recently I've come across the micromouse competitions, and a maze that a system needs to navigate and learn to go through as fast as possible may also be an interesting choice.

If anyone is interested in collaborating on any of these aspects, even just exchanging thoughts, let me know!

#collaboration #collab #maze #micromice #c #backprop #programming #FeedForward #machinelearning #backpropagation #ai #neuralnetworks

Last updated 2 years ago

Jürgen · @Jigsaw_You
113 followers · 1954 posts · Server mastodon.nl

Hinton is best known for an algorithm called backpropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, today underpins nearly all machine-learning models.

technologyreview.com/2023/05/0

#ethicaltechnology #ethicalai #LLMs #llm #artificialintelligence #ai #machinelearning #backpropagation #algorithm #hinton

Last updated 2 years ago

Norobiik · @Norobiik
296 followers · 4601 posts · Server noc.social

Hinton is best known for an algorithm called backpropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial neural networks to learn, today underpins nearly all machine-learning models. In a nutshell, backpropagation is a way to adjust the connections between artificial neurons over and over until a neural network produces the desired output.

Deep learning pioneer Geoffrey Hinton quits Google | MIT Technology Review
technologyreview.com/2023/05/0

#ai #google #GeoffreyHinton #machinelearning #neuralnetworks #backpropagation

Last updated 2 years ago

amen zwa, esq. · @AmenZwa
67 followers · 709 posts · Server mathstodon.xyz

I was an electrical engineering student in college when Rumelhart published his seminal paper on backpropagation in 1986. It was a game changer for the community and a life changer for me.

Over the past four decades, I have lived through a few cycles of AI Seasons—both Winters and Springs. During that time, I have observed these disturbingly recurrent patterns: collectively, we tend to over-promise and under-deliver; the community tends to breed practitioners who are oblivious to our origins and our foundational theories; and these practitioners tend to use technologies they do not fully grasp, relying exclusively on massive quantities of raw input data and apparently satisfactory results, never asking "why" and "how".

In the past, we had tiny computers, scant data, weak learning algorithms, and few practitioners. Today, however, we have massive compute clouds, seemingly inexhaustible amounts of data, powerful learning algorithms, and all techies and their grandmama are AI practitioners. So, unlike in the past, if we misuse AI today, we will do immense harm to humanity. We must establish industry-wide ethical guidelines.

#ethical #misuse #neuralnetworks #backpropagation

Last updated 2 years ago

Aaron · @hosford42
452 followers · 3451 posts · Server techhub.social

Currently watching a deep learning experiment I'm running. I have two identical networks. One is running standard backpropagation. The other is being trained in two segments, with the second half using standard backprop and the first half being trained with a synthetic gradient. The synthetic gradient version is kicking standard backprop's ass, and it feels like a magic trick.
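For anyone curious what such a two-segment setup might look like, here is a minimal PyTorch sketch in the spirit of synthetic gradients (Jaderberg et al., 2016). The layer sizes, learning rates, and names are my own illustrative assumptions, not the poster's actual code.

```python
import torch
import torch.nn as nn

# Two decoupled segments: the second half trains with ordinary backprop;
# the first half is updated with gradients *predicted* by sg_module.
first_half = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
second_half = nn.Sequential(nn.Linear(256, 10))
sg_module = nn.Linear(256, 256)  # predicts dL/dh from the activation h

opt_first = torch.optim.SGD(first_half.parameters(), lr=0.1)
opt_second = torch.optim.SGD(second_half.parameters(), lr=0.1)
opt_sg = torch.optim.SGD(sg_module.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # Forward through the first half, then cut the graph: no gradient
    # flows back across the segment boundary.
    h = first_half(x)
    h_detached = h.detach().requires_grad_(True)

    # Update the first half immediately, using the predicted gradient.
    synth_grad = sg_module(h_detached.detach())
    opt_first.zero_grad()
    h.backward(synth_grad.detach())
    opt_first.step()

    # Second half: standard backprop. This also yields the *true*
    # gradient at the boundary in h_detached.grad.
    out = second_half(h_detached)
    loss = loss_fn(out, y)
    opt_second.zero_grad()
    loss.backward()
    opt_second.step()

    # Train the synthetic-gradient module toward the true gradient.
    sg_loss = nn.functional.mse_loss(synth_grad, h_detached.grad.detach())
    opt_sg.zero_grad()
    sg_loss.backward()
    opt_sg.step()
    return loss.item()
```

The point of the decoupling is that the first half's update no longer waits for the second half's backward pass, at the price of trusting a learned approximation of the gradient.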

#deeplearning #backpropagation #syntheticgradient

Last updated 2 years ago

Ulrich Junker · @UlrichJunker
350 followers · 1337 posts · Server fediscience.org

Is backpropagation a wall or a hurdle for intelligence? In other words, is backpropagation of errors via differentiable functions the only mechanism for reasoning? If another mechanism is needed, couldn't it simply be learned by deep learning?

If deep learning were able to learn a whole new mechanism, then this mechanism would work on its own as an independent system. But this contradicts the premise of a single mechanism.

noemamag.com/what-ai-can-tell-

#intelligence #backpropagation #deeplearning #reasoning #symbolic

Last updated 2 years ago

When our overlords enslave us in the name of holy GPT-4 (it had innumerably many Attention Heads - the more you cut off, the more grew back instead), at least I will be able to plead, "But I was on the API waitlist, I'm a good guy!" as I'm dragged towards the place where gradients go to vanish.

#api #gpt4 #backpropagation #ai

Last updated 2 years ago

Published papers at TMLR · @tmlrpub
498 followers · 228 posts · Server sigmoid.social

Stable and Interpretable Unrolled Dictionary Learning

Bahareh Tolooshams, Demba E. Ba

openreview.net/forum?id=e3S0Bl

#backpropagation #sparse #coding

Last updated 3 years ago

Published papers at TMLR · @tmlrpub
495 followers · 202 posts · Server sigmoid.social

Constrained Parameter Inference as a Principle for Learning

Nasir Ahmad, Ellen Schrader, Marcel van Gerven

openreview.net/forum?id=CUDdbT

#backpropagation #neuron #optimizers

Last updated 3 years ago

Published papers at TMLR · @tmlrpub
488 followers · 176 posts · Server sigmoid.social

Gradient-adjusted Incremental Target Propagation Provides Effective Credit Assignment in Deep Neural Networks

Sander Dalm, Nasir Ahmad, Luca Ambrogioni, Marcel van Gerven

openreview.net/forum?id=Lx19Ey

#backpropagation #synaptic #trained

Last updated 3 years ago

Published papers at TMLR · @tmlrpub
489 followers · 153 posts · Server sigmoid.social

Momentum Capsule Networks

Josef Gugglberger, Antonio Rodriguez-Sanchez, David Peer

openreview.net/forum?id=Su290s

#backpropagation #capsule #resnets

Last updated 3 years ago

Karthik Srinivasan · @skarthik
77 followers · 26 posts · Server neuromatch.social

If this paper was not shared earlier, here you go:

science.org/doi/full/10.1126/s

The paper puts forth the theory that dopamine signaling serves (retro)causal associations of the learned stimulus/object, rather than (the standard and accepted belief) acting as a reward prediction error (RPE) for the cue/stimulus. It reconciles and fits the data (from the times of Schultz, Graybiel, and Hikosaka, who started the field) better than the RPE hypothesis. So a win for eligibility traces?

If the experiment and the theory can be replicated, this is poised to overturn the conventional wisdom on dopamine signaling, and it sharply calls into question the temporal-difference reinforcement learning (TD-RL) theories of Sutton-Barto and the Bellman equations. Further, the work shows no evidence of back-propagation of dopamine signals.

I believe it is a landmark paper on the role of dopamine and, further, a double whammy to the AI/ML-inspired ideas of RL and backprop in the brain. Let's not forget that Sutton-Barto and the Bellman equations have their beginnings in computer science, and this "simple" experiment shows that such theories might not hold in biology.
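For context, here is the textbook RPE account the paper challenges, in minimal TD(0) form, where dopamine is modeled as the TD error; the chain of states, the reward placement, and the parameters are illustrative assumptions of mine, not anything from the paper.

```python
import numpy as np

# States 0..4 form a chain: cue at state 0, reward on reaching the end.
n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)  # learned value of each state

def td_update(s, r, s_next):
    """One TD(0) step; delta is the dopamine-like RPE in this account."""
    delta = r + gamma * V[s_next] - V[s]  # reward prediction error
    V[s] += alpha * delta                 # Bellman-style bootstrapping
    return delta

# Over repeated trials, delta migrates from the reward time back to the
# cue - the classic RPE signature whose interpretation is now in question.
for _ in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        td_update(s, r, s + 1)
```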

#neuroscience #dopamine #reinforcementlearning #backpropagation #biology

Last updated 3 years ago

Rahim Sonawalla · @rahim
2 followers · 1 posts · Server mastodon.online

I've been doing some self-study on neural networks, primarily using the fast.ai course from @jh and supplementing with Google searches.

I found surprisingly few examples of backpropagation worked out with real numbers, so I wrote something up in case it's helpful for others: hirahim.com/posts/backpropagat
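In that spirit, here is one forward and backward pass worked out with concrete numbers on the smallest possible network; the values and the architecture are my own toy choices, not taken from the linked write-up.

```python
import math

# Network: x -> [w1, sigmoid] -> h -> [w2] -> y_hat, squared-error loss.
x, target = 1.0, 0.5
w1, w2 = 0.6, -0.4

# Forward pass
z = w1 * x                          # 0.6
h = 1 / (1 + math.exp(-z))          # sigmoid(0.6) ~= 0.6457
y_hat = w2 * h                      # ~= -0.2583
loss = 0.5 * (y_hat - target) ** 2  # ~= 0.2875

# Backward pass: apply the chain rule one factor at a time.
dL_dy = y_hat - target              # ~= -0.7583
dL_dw2 = dL_dy * h                  # ~= -0.4896
dL_dh = dL_dy * w2                  # ~=  0.3033
dL_dz = dL_dh * h * (1 - h)         # sigmoid' = h(1-h), so ~= 0.0694
dL_dw1 = dL_dz * x                  # ~=  0.0694

# Gradient-descent update with learning rate 0.1
w1 -= 0.1 * dL_dw1                  # 0.6  -> ~0.5931
w2 -= 0.1 * dL_dw2                  # -0.4 -> ~-0.3510
```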

#deeplearning #machinelearning #neuralnetworks #backpropagation

Last updated 3 years ago

Antonio Lieto · @antoniolieto
146 followers · 40 posts · Server fediscience.org

In his new paper cs.toronto.edu/~hinton/FFA13.p, G. Hinton points out how current neural models based on backpropagation should be replaced by mechanisms that don't contradict biological evidence and that "may be superior to backpropagation as a model of learning in cortex and as a way of making use of very low-power analog hardware without resorting to reinforcement learning".
In "Cognitive Design for Artificial Minds", 2021 (amazon.com/dp/1138207950/), I argued the same! Cognitive science and AI converge again!

#cogsci #ai #ReinforcementLearning #backpropagation #neuralmodels

Last updated 3 years ago

Paolo Melchiorre (paulox) · @paulox
524 followers · 339 posts · Server fosstodon.org

I'm attending the talk "What if machines, like humans, evolve?" by Luca Di Vita (Aesys engineer) at DevFest Pescara 2022 🇮🇹

We're learning how neuroevolution is an alternative approach to classical backpropagation for training neural networks 🤖
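As a flavor of what that alternative looks like in practice (a toy illustration of mine, not the talk's material): instead of computing gradients, neuroevolution mutates and selects network weights by fitness.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)  # toy XOR-like task

def forward(w, X):
    h = np.tanh(X @ w[:6].reshape(2, 3))     # 2 -> 3 hidden layer
    return 1 / (1 + np.exp(-(h @ w[6:9])))   # 3 -> 1 sigmoid output

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # negative MSE

pop = rng.normal(size=(50, 9))  # population of 50 weight vectors
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-10:]]      # keep the 10 fittest
    parents = elite[rng.integers(0, 10, size=50)]
    pop = parents + 0.1 * rng.normal(size=parents.shape)  # mutate
    pop[:10] = elite                           # elitism
```

No derivatives anywhere: selection pressure alone drives the weights toward a solution.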

#machinelearning #devfestpescara2022 #neuroevolution #backpropagation #neuralnetworks

Last updated 3 years ago

Lukas · @lukstru
10 followers · 103 posts · Server toot.kif.rocks

I've been trying to write a simple multi-layer neural network for way too many hours. My learning implementation just makes everything approach 1. It should not do that.

Deadline for the assignment is tomorrow noon; let's see if I can fix it tomorrow morning...

#MachineLearning #neuralnetworks #backpropagation #numpy

Last updated 3 years ago

Tanguy Fardet · @tfardet
242 followers · 877 posts · Server fediscience.org


"Frozen algorithms: how the brain's wiring facilitates learning" by Raman and O'Leary (2021).

A nice mix between a review and an opinion piece regarding the specific issues of biological learning rules in neuronal circuits, compared to typical machine learning methods using backpropagation.

The authors propose two methods to alleviate the limitations due to noise and locality in vivo.

sciencedirect.com/science/arti

#paperoftheday #machinelearning #backpropagation #science #neuroscience #neurons #networks #brain #learning #ml

Last updated 4 years ago