A new type of #neuralnetworks and #AI
I've been thinking that #backpropagation-based neural networks will reach their peak (if they haven't already), and it may be interesting to search for an alternative #machinelearning method. I've had some observations and ideas:
The two main modes of #neuralnetworks - training, when weights are adjusted, and prediction, when states are adjusted - should be merged. After all, real-life brains do prediction and learning at the same time, and they survive long-term; they are not restarted for every task. Recent successes at one-shot tasks, where state changes effectively become learning, also point in this direction.
It'd also be nice to free neural networks from being #feedforward, but we don't have any training methods for freer networks. Genetic algorithms come to mind, but a suitable base structure has to be found that supports gene exchange and some kind of recursion for repeated structures. Even with these, it's unclear whether we'd get any results in a reasonable amount of time.
Another idea is to use the current, imperfect learning method (backpropagation) to develop a new one, just as a #programming language or a manufacturing machine can be used to make a new, better one. Here, #backprop would be used to learn a new learning method.
I've been thinking about suitable playgrounds (sets of tasks) for these systems to operate in, and developed one earlier in #C. Recently I came across the #micromice competitions; a #maze that a system needs to learn to navigate as fast as possible may also be an interesting choice.
If anyone is interested in #collab #collaboration on any of these aspects, even just exchanging thoughts, let me know!
#collaboration #collab #maze #micromice #c #backprop #programming #FeedForward #machinelearning #backpropagation #ai #neuralnetworks