
Event queues as an efficient way to process sparsely populated input data sets such as text

Traditionally, neural network architectures are organized in a layered structure. Often these layers are fully connected, but depending on the application there might be other types of layers, such as convolutional, recurrent, or softmax layers. To simplify the math, these layers are usually represented as vectors or matrices. The advantage of this representation is that the layers can easily be fed into specialized hardware chips that greatly accelerate processing and allow the training of huge networks. There is just one problem with this approach: most concepts in the real world are exceedingly rare and simply do not…
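To see why dense matrix layers waste work on sparse input, here is a minimal sketch (illustrative sizes and values, not from the article): a fully connected layer is just a matrix-vector product, and when almost all inputs are zero, only a handful of weight columns actually contribute.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(1000, 1000))  # weights of one dense layer (made up)
x = np.zeros(1000)                 # sparse input: almost all zeros
x[[17, 42, 471]] = 1.0             # only three active "concepts"

# Dense evaluation touches every weight, even the all-zero parts.
y_dense = W @ x

# Exploiting sparsity: only the columns of the few active inputs matter.
active = np.flatnonzero(x)
y_sparse = W[:, active] @ x[active]

assert np.allclose(y_dense, y_sparse)  # same result, ~3/1000 of the work
```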


Most artificial neural networks ignore the spiking nature of biological neural networks to simplify the underlying model and enable learning techniques such as backpropagation. But by doing so, aren’t we possibly discarding one of the most central principles of biological neural networks?

Within most artificial neural network models, an activation is just a real-valued number associated with the neuron itself. But that’s not what happens within a biological neural network. There, an activation occurs when the threshold of a neuron is exceeded, and an exact point in time is associated with it. A prerequisite for this event to occur is that several other input activations have fired before the current one. For example, if we have such an event-driven neural network with a neuron representing the word ‘the’, the neurons representing the letters ‘t’, ‘h’ and ‘e’ need…
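A minimal sketch of this mechanism, with hypothetical names and a deliberately simplified neuron model (the article shows no code): activations are timed events in a priority queue, and the word neuron fires only once all of its required letter inputs have fired.

```python
import heapq

class Neuron:
    """A toy event-driven neuron that fires when all inputs have arrived."""
    def __init__(self, name, required_inputs):
        self.name = name
        self.required = set(required_inputs)
        self.received = set()

    def on_input(self, source):
        # Record an input spike; report whether the threshold is reached,
        # i.e. every required input activation has fired before this one.
        self.received.add(source)
        return self.received >= self.required

# The word neuron 'the' depends on the letter neurons 't', 'h' and 'e'.
word_the = Neuron('the', required_inputs=['t', 'h', 'e'])

events = []  # priority queue of (time, neuron_name) activation events
for t, letter in enumerate('the'):
    heapq.heappush(events, (t, letter))

while events:
    time, name = heapq.heappop(events)
    print(f"t={time}: neuron '{name}' fired")
    if word_the.on_input(name):
        print(f"t={time}: neuron '{word_the.name}' fired (all inputs present)")
```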


Looking for a more biologically plausible way to train a neural network.

Traditionally, artificial neural networks have been trained using the Delta rule and backpropagation. But this contradicts what neuroscience has found about the workings of the brain: there simply is no gradient error signal that is propagated backwards through biological neurons (see here and here). Besides, the human brain can find patterns in its audiovisual training data by itself, without the need for training labels. When a parent shows a cat to a child, the child doesn’t use this information to learn every detail of what…
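For reference, the Delta rule mentioned above fits in a few lines (a sketch with made-up numbers): its weight update is driven by an explicit error signal, exactly the quantity that has no known counterpart in biological neurons.

```python
def delta_rule_update(weights, inputs, target, lr=0.1):
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output  # the explicit error signal
    return [w + lr * error * x for w, x in zip(weights, inputs)]

w = [0.0, 0.0]
for _ in range(20):
    w = delta_rule_update(w, inputs=[1.0, 0.5], target=1.0)
print(w)  # the weights converge so that the output approaches the target
```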


When people see a new animal, meet a new person or visit a new place, they don’t need to repeat that experience thousands of times to remember it — so why should computers have to?

Human learning comes in two forms, a fast one and a slow one. The slow one requires a lot of repetition, which seems to be necessary to conquer a new cognitive field such as learning a new language. But once a field is mastered, learning new facts within it requires very few examples, possibly even only one. It appears that the brain regions involved in processing this field have been pre-wired to the regions they depend on. So once a new fact needs to be learned, this pre-wiring is used to speed up the training of the neurons…


An event-driven approach

In traditional neural networks using the sigmoid activation function, all neurons are always more or less activated; there is no clear case of an inactive neuron. That can be problematic if you want to evaluate extremely large networks, because in each round you would have to update all the neurons. Intuitively, it would be much more desirable to update only those neurons that have something to do with the current input data set. If, for example, you are trying to process the image of a cat, why should all the neurons associated with recognizing an astronaut be active as well? One…
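As a rough sketch of this idea, with made-up neurons and weights (not the author's implementation): keep a work queue of neurons whose input actually changed, so the ‘astronaut’ part of the network is never even visited while a cat image is processed.

```python
from collections import deque

# Hypothetical network: each neuron maps to its outgoing (target, weight) edges.
synapses = {
    'cat-ear': [('cat', 0.6)],
    'cat-fur': [('cat', 0.6)],
    'astronaut-helmet': [('astronaut', 1.0)],
}
THRESHOLD = 1.0

def propagate(initial_spikes):
    activation = {}
    fired = set(initial_spikes)
    queue = deque(initial_spikes)
    while queue:
        source = queue.popleft()
        for target, weight in synapses.get(source, []):
            activation[target] = activation.get(target, 0.0) + weight
            if activation[target] >= THRESHOLD and target not in fired:
                fired.add(target)   # neuron becomes active exactly once
                queue.append(target)
    return fired

# Only neurons reachable from the input are ever touched.
print(propagate(['cat-ear', 'cat-fur']))  # {'cat-ear', 'cat-fur', 'cat'}
```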


The missing link in deep neural networks

The special thing about adding negative recurrent synapses to a neural network is that they introduce internal states within the network.

Take, for example, this simple flip-flop circuit:
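The original figure is not reproduced here, but the behavior can be sketched in code. Assuming a simple threshold neuron with self-excitation and mutual inhibition (an assumption about the circuit, not the author's exact model), two neurons connected through negative recurrent synapses suppress each other, so whichever fired last stays active: the circuit stores one bit of state.

```python
class ThresholdNeuron:
    def __init__(self):
        self.active = False

    def update(self, excitation, inhibition):
        # Fires on net positive input; the negative recurrent synapse
        # from the rival neuron can suppress it.
        self.active = (excitation - inhibition) > 0.0

a, b = ThresholdNeuron(), ThresholdNeuron()

def step(set_a=0.0, set_b=0.0):
    # Each neuron keeps itself active via self-excitation and receives
    # inhibition from the other; external pulses set_a / set_b flip the state.
    a_in = set_a + (1.0 if a.active else 0.0)
    b_in = set_b + (1.0 if b.active else 0.0)
    a.update(a_in, inhibition=1.0 if b.active else 0.0)
    b.update(b_in, inhibition=1.0 if a.active else 0.0)

step(set_a=1.0)             # set pulse: A becomes active
step(); step()              # without any input the state persists
print(a.active, b.active)   # True False
step(set_b=2.0)             # reset pulse strong enough to override A
step()                      # inhibition from B now switches A off
print(a.active, b.active)   # False True
```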


What makes biological neural networks so superior to their technical counterparts? Is there anything we have overlooked so far?

Deep neural networks have been a tremendous success story over the last couple of years. Many advances in the field of AI, such as recognizing real-world objects, fluently translating natural language or playing Go at a world-class level, are based on deep neural networks. However, there have been only a few reports concerning the limitations of this approach. One such limitation is the inability to learn from a small number of examples. Deep neural networks usually require a huge number of training examples, whereas humans are able to learn from a single example. If you show a cat to a…

Lukas Molzberger

Neural network tinkerer with a background in symbolic AI. Employed @meinestadt.de GmbH in Cologne.
