A new type of #neuralnetworks and #AI
I've been thinking that #backpropagation based neural networks will reach their peak (if they haven't already), and it may be interesting to search for an alternative #machinelearning method. I've had some observations and ideas:
The two main modes of #neuralnetworks - training, when weights are adjusted, and prediction, when states are adjusted - should be merged. After all, real-life brains do prediction and learning at the same time, and they survive long-term; they are not restarted for every task. Recent successes at one-shot tasks, where state changes effectively become learning, also point in this direction.
It'd also be nice to free neural networks from being #feedforward, but we don't have any training methods for such freer networks. Genetic algorithms come to mind, but a suitable base structure has to be found, one that supports gene exchange and some kind of recursion for repeated structures. Even with these, it's unclear whether we'd get any results in a reasonable amount of time.
Another idea is to use the current, imperfect learning method (backpropagation) to develop a new one, just as a #programming language or a manufacturing machine can be used to make a new, better one. Here #backprop would be used to learn a new learning method.
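To make that last idea concrete, here's a minimal toy sketch (my own illustration, not a worked-out proposal): an outer gradient-descent loop tunes the learning rate of an inner learner, so the "learning method" itself is what gets learned. The quadratic loss and all names are assumptions for the demo.

```python
# Toy "learning to learn": tune an inner learning rate eta by gradient
# descent on the loss after one inner update. Loss: L(w) = 0.5*(w - t)**2.

def loss_grad(w, target):
    return w - target  # dL/dw for the quadratic loss

w, target = 0.0, 3.0
eta = 0.01          # inner learning rate: the "learning method" parameter
meta_eta = 0.05     # outer learning rate used to improve eta itself

for step in range(100):
    g = loss_grad(w, target)
    w_new = w - eta * g                 # inner update with the current eta
    # Outer gradient: dL(w_new)/d(eta) = loss_grad(w_new) * d(w_new)/d(eta)
    #                                  = (w_new - target) * (-g)
    meta_g = -loss_grad(w_new, target) * g
    eta -= meta_eta * meta_g            # backprop-style update of eta
    w = w_new

print(f"learned eta={eta:.3f}, final w={w:.3f} (target {target})")
```

On this toy problem eta converges toward the optimal step size (1.0) while w converges to the target, which is the flavor of the idea, if nothing like its full generality.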
I've been thinking about suitable playgrounds (sets of tasks) for these systems to operate in, and I developed one earlier in #C. Recently I've come across the #micromice competitions; a #maze that a system needs to navigate, learning to get through it as fast as possible, may also be an interesting choice.
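For illustration, a hypothetical minimal playground of this kind; my original was in #C, and this Python sketch is a stand-in, not that code:

```python
# Minimal grid-maze playground: '#' walls, 'S' start, 'G' goal.
import random

MAZE = [
    "#########",
    "#S..#...#",
    "#.#.#.#.#",
    "#.#...#G#",
    "#########",
]
START, GOAL = (1, 1), (3, 7)

def step(pos, move):
    """Apply a move (dr, dc); walls block movement. Returns (new_pos, done)."""
    r, c = pos[0] + move[0], pos[1] + move[1]
    if MAZE[r][c] == "#":
        r, c = pos  # bumped into a wall, stay put
    return (r, c), (r, c) == GOAL

# A random walker stands in for the learning system:
pos, steps = START, 0
while True:
    pos, done = step(pos, random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)]))
    steps += 1
    if done:
        break
print(f"reached the goal in {steps} random steps")
```

A learning system would replace the random walker and try to drive that step count down over repeated runs.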
If anyone is interested in #collab #collaboration on any of these aspects, even just exchanging thoughts, let me know!
#collaboration #collab #maze #micromice #c #backprop #programming #FeedForward #machinelearning #backpropagation #ai #neuralnetworks
Hinton is best known for an algorithm called #BackPropagation, which he first proposed with two colleagues in the 1980s. The technique, which allows artificial #NeuralNetworks to learn, today underpins nearly all #MachineLearning models. In a nutshell, backpropagation is a way to adjust the connections between artificial neurons over and over until a neural network produces the desired output.
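To make the nutshell concrete, here's a toy sketch of that adjust-over-and-over loop (my own illustration, not from the article): a tiny two-layer network learning XOR with plain numpy.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: the chain rule carries the output error back to each weight
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # the "over and over" adjustment of the connections
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(out.round(2).ravel())  # should drift toward [0, 1, 1, 0] (varies with init)
```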
Deep learning pioneer #GeoffreyHinton quits #Google | #AI
https://www.technologyreview.com/2023/05/01/1072478/deep-learning-pioneer-geoffrey-hinton-quits-google/
#ai #google #GeoffreyHinton #machinelearning #neuralnetworks #backpropagation
I was an electrical engineering student in college when Rumelhart published his seminal paper on #backpropagation in 1986. It was a game changer for the #NeuralNetworks community and a life changer for me.
Over the past four decades, I have lived through a few cycles of AI Seasons - both Winters and Springs. During that time, I have observed these disturbingly recurrent patterns: collectively, we tend to overpromise and underdeliver; the community tends to breed practitioners who are oblivious to our origins and our foundational theories; and these practitioners tend to use technologies they do not fully grasp, relying exclusively on massive quantities of raw input data and apparently satisfactory results, never asking "why" or "how".
In the past, we had tiny computers, scant data, weak learning algorithms, and few practitioners. Today, however, we have massive compute clouds, seemingly inexhaustible amounts of data, powerful learning algorithms, and all techies and their grandmama are AI practitioners. So, unlike in the past, if we #misuse AI today, we will do immense harm to humanity. We must establish industry-wide #ethical guidelines.
#ethical #misuse #neuralnetworks #backpropagation
I kinda went on a rant: https://open.substack.com/pub/paninid/p/organizational-information-gain?r=4hxgy&utm_medium=ios&utm_campaign=post
#informationgain #decisiontrees #values #priorities #systemsthinking #productmanagement #strategy #CX #UX #masspersonalization #GenerativeAI #collaboration #platform #networkeffects #SaaS #communication #socialmedia #customersuccess #processoptimization #employeeengagement #servicedesign #LLMs #cashflow #psychologicalsafety #emotionalintelligence #ambiguity #productmarketfit #backpropagation #machineaugmented #digitaltransformation
Currently watching a #DeepLearning experiment I'm running. I have two identical networks. One is running standard #backpropagation. The other is being trained in two segments, with the second half using standard backprop and the first half being trained with a #SyntheticGradient. The synthetic gradient version is kicking standard backprop's ass, and it feels like a magic trick.
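For anyone curious what that setup looks like, here's a rough toy sketch, loosely after Jaderberg et al.'s decoupled neural interfaces; the data, sizes, and details below are my own stand-ins, not the actual experiment:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 8))       # toy regression data
y = rng.normal(size=(64, 1))

W1 = rng.normal(size=(8, 16)) * 0.1   # first segment
W2 = rng.normal(size=(16, 1)) * 0.1   # second segment (standard backprop)
M = np.zeros((16, 16))                # linear synthetic-gradient model

lr = 0.01
for _ in range(500):
    h = np.tanh(X @ W1)                    # first segment forward
    out = h @ W2                           # second segment forward
    d_out = (out - y) / len(X)             # MSE gradient at the output
    true_dh = d_out @ W2.T                 # real gradient at the split point
    W2 -= lr * h.T @ d_out                 # second half: standard backprop

    syn_dh = h @ M                         # synthetic gradient for the split
    # the first half is trained with the *synthetic* gradient, not the true one
    W1 -= lr * X.T @ (syn_dh * (1 - h**2))
    # the synthetic model itself regresses onto the true gradient
    M -= lr * h.T @ (syn_dh - true_dh)

print("final loss:", float(((np.tanh(X @ W1) @ W2 - y) ** 2).mean()))
```

The point of the trick is that once M is trained, the first segment no longer has to wait for the full backward pass.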
#deeplearning #backpropagation #syntheticgradient
Is #symbolic #reasoning a wall or hurdle for #deeplearning? In other words, is #backpropagation of errors via differentiable functions the only mechanism for #intelligence? If another mechanism is needed couldn’t it simply be learned by deep learning?
If deep learning were able to learn a whole new mechanism, then that mechanism would have to work on its own as an independent system. But this contradicts the premise of a single mechanism.
https://www.noemamag.com/what-ai-can-tell-us-about-intelligence
#intelligence #backpropagation #deeplearning #reasoning #symbolic
When our #AI overlords enslave us in the name of holy #Backpropagation (they had innumerably many Attention Heads - the more you cut off, the more grew back instead), at least I will be able to plead, But I was on the #GPT4 #API waitlist, I'm a good guy! as I'm dragged towards the place where gradients go to vanish.
#api #gpt4 #backpropagation #ai
Stable and Interpretable Unrolled Dictionary Learning
Bahareh Tolooshams, Demba E. Ba
#backpropagation #sparse #coding
Constrained Parameter Inference as a Principle for Learning
Nasir Ahmad, Ellen Schrader, Marcel van Gerven
#backpropagation #neuron #optimizers
Gradient-adjusted Incremental Target Propagation Provides Effective Credit Assignment in Deep Neu...
Sander Dalm, Nasir Ahmad, Luca Ambrogioni, Marcel van Gerven
#backpropagation #synaptic #trained
Momentum Capsule Networks
Josef Gugglberger, Antonio Rodriguez-Sanchez, David Peer
#backpropagation #capsule #resnets
If this paper was not shared earlier, here you go:
https://www.science.org/doi/full/10.1126/science.abq6740
The paper puts forth the theory that dopamine signaling serves (retro)causal association with the learned stimulus/object, rather than (the standard, accepted belief) acting as a reward prediction error (RPE) for the cue/stimulus. It reconciles and fits the data (from the era of Schultz, Graybiel, and Hikosaka, who started the field) better than the RPE hypothesis does. So a win for the eligibility trace?
If the experiment and the theory can be replicated, it is poised to overturn the conventional wisdom on dopamine signaling, and it sharply calls into question the temporal difference reinforcement learning (TD-RL) theories of Sutton and Barto, and the Bellman equations. Further, the work shows no evidence of back-propagation of dopamine signals.
I believe it is a landmark paper on the role of dopamine and, further, a double whammy to the AI/ML-inspired ideas of RL and backprop in the brain. Let's not forget that Sutton-Barto and the Bellman equations have their beginnings in computer science, and this "simple" experiment shows that such theories might not hold in biology.
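For context, the reward prediction error at stake is the TD error, delta = r + gamma*V(s') - V(s). A minimal TD(0) toy (my own illustration, not from the paper):

```python
# Three-state chain with a reward at the end; toy values, not the paper's data.
gamma, alpha = 0.9, 0.1
V = {0: 0.0, 1: 0.0, 2: 0.0}                          # state values
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]  # (s, reward, s')

for _ in range(200):
    for s, r, s_next in episode:
        target = r + (gamma * V[s_next] if s_next is not None else 0.0)
        delta = target - V[s]          # the RPE dopamine is said to encode
        V[s] += alpha * delta
print(V)  # values propagate backward from the rewarded state
```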
#Neuroscience #Dopamine #ReinforcementLearning #backpropagation #biology
I've been doing some self-study on neural networks, primarily using the fast.ai course from @jh and supplementing with Google searches.
I found surprisingly few examples of backpropagation worked out with real numbers, so I wrote something up in case it's helpful for others: https://www.hirahim.com/posts/backpropagation-by-example/
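In the same spirit, here's a one-neuron forward/backward pass with the numbers worked out (a fresh toy example, not one taken from the post):

```python
import math

x, w, b, target = 1.0, 0.5, 0.0, 1.0
z = w * x + b                      # z = 0.5
a = 1 / (1 + math.exp(-z))         # sigmoid(0.5) ≈ 0.6225
loss = 0.5 * (a - target) ** 2     # ≈ 0.0713

# backward pass, chain rule term by term:
dL_da = a - target                 # ≈ -0.3775
da_dz = a * (1 - a)                # ≈ 0.2350
dL_dw = dL_da * da_dz * x          # ≈ -0.0887
w -= 0.1 * dL_dw                   # w: 0.5 -> ≈ 0.5089, loss shrinks
print(w)
```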
#backpropagation #neuralnetworks #machinelearning #deeplearning
In his new paper https://www.cs.toronto.edu/~hinton/FFA13.pdf, G. Hinton points out how current #neuralmodels based on #backpropagation should be replaced by mechanisms that don't contradict biological evidence and that "may be superior to backpropagation as a model of learning in cortex and as a way of making use of very low-power analog hardware without resorting to #reinforcementlearning".
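Roughly, the paper's Forward-Forward idea trains each layer locally: make a "goodness" score (the sum of squared activations) high on real "positive" data and low on "negative" data, with no backward pass. A hedged toy sketch; the stand-in data and hyperparameters below are mine, not Hinton's:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 16)) * 0.1    # one layer, trained in isolation

def layer(x):
    return np.maximum(x @ W, 0.0)      # ReLU layer

theta, lr = 2.0, 0.03
for _ in range(300):
    x_pos = rng.normal(loc=1.0, size=(32, 16))   # stand-in "positive" data
    x_neg = rng.normal(loc=0.0, size=(32, 16))   # stand-in "negative" data
    for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
        h = layer(x)
        good = (h ** 2).sum(axis=1)              # goodness per example
        # logistic loss on sign*(goodness - theta); purely local gradient
        p = 1 / (1 + np.exp(sign * (good - theta)))
        dW = x.T @ ((-sign * p)[:, None] * 2 * h * (h > 0))
        W -= lr * dW / len(x)

print("pos goodness:", (layer(x_pos) ** 2).sum(1).mean(),
      "neg goodness:", (layer(x_neg) ** 2).sum(1).mean())
```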
In "Cognitive Design for Artificial Minds", 2021 (https://www.amazon.com/dp/1138207950/), I argued the same! #AI & #CogSci converge again!
#cogsci #ai #ReinforcementLearning #backpropagation #neuralmodels
I'm attending the talk "What if machines, like humans, evolve?" by Luca Di Vita (Aesys #machinelearning engineer) at #devFestPescara2022 🇮🇹
We're learning how #neuroevolution is an alternative approach to classical #backpropagation for training #neuralnetworks 🤖
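In its simplest form, neuroevolution mutates network weights and keeps the fittest genome instead of backpropagating errors. A toy (1+1) hill-climber sketch (my own illustration of the theme, not the speaker's material):

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR targets

def fitness(params):
    W1, W2 = params
    h = np.tanh(X @ W1)
    out = np.tanh(h @ W2).ravel()
    return -((out - y) ** 2).mean()       # higher is better

best = (rng.normal(size=(2, 4)), rng.normal(size=(4, 1)))
best_fit = fitness(best)
for gen in range(2000):
    # mutation: perturb every weight of the parent genome
    child = tuple(p + 0.1 * rng.normal(size=p.shape) for p in best)
    f = fitness(child)
    if f > best_fit:                      # selection: keep the better genome
        best, best_fit = child, f
print("best fitness:", best_fit)
```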
#machinelearning #devfestpescara2022 #neuroevolution #backpropagation #neuralnetworks
I've been trying to write a simple multi-layer neural network for way too many hours. My learning implementation just makes everything approach 1. It should not do that.
Deadline for the assignment is tomorrow noon; let's see if I can fix it tomorrow morning...
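One classic culprit I'll check: a sign error that turns gradient descent into ascent and saturates sigmoid units. Toy demo of the failure (hypothetical, not my actual assignment code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(8, 3))
y = rng.uniform(0, 1, size=(8, 1))
W = rng.normal(size=(3, 1)) * 0.1

for _ in range(1000):
    out = 1 / (1 + np.exp(-(X @ W)))
    dW = X.T @ ((out - y) * out * (1 - out))  # correct MSE gradient
    W += 0.5 * dW   # BUG: '+=' climbs the loss; use '-=' to descend
print(out.ravel())  # with '+=', outputs pin near 0 or 1
```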
#MachineLearning #neuralnetworks #backpropagation #numpy
#PaperOfTheDay
"Frozen algorithms: how the brain's wiring facilitates learning" by Raman and O'Leary (2021).
A nice mix of review and opinion piece on the specific issues facing biological learning rules in neuronal circuits, compared to typical #machineLearning methods using #backpropagation.
The authors propose two methods to alleviate the limitations due to noise and locality in vivo.
https://www.sciencedirect.com/science/article/pii/S0959438821000027
#science #neuroscience #neurons #networks #brain #learning #ML