Björn Kindler

Training with Evolutionary Algorithms

Charles Darwin (1809-1882)

"In the struggle for survival, the fittest win out at the expense of their rivals because they succeed in adapting themselves best to their environment."
Charles Darwin, "The Origin of Species"

The mixed-mode neural network chips developed in our group are especially well suited for demanding real-world applications that require large and complex yet fast neural networks. Training such a network is a challenging task, since a large number of synaptic weights has to be tuned. The numerous synaptic connections can exhibit strong interrelations that make it difficult to find the optimal set of weights. Even well-established neural network training algorithms like the common backpropagation algorithm tend to yield only suboptimal results when applied to highly complex networks.

Furthermore, traditional neural network training algorithms typically require detailed knowledge of individual neuron and synapse characteristics. Due to unavoidable device variations introduced during manufacturing, this information is not available for the synapses and neurons on our network chips. On the other hand, the operating speed of the chip makes it possible to implement many different networks and test their performance on a given task within a short time. This makes it feasible to apply a special kind of optimization algorithm, a so-called evolutionary algorithm.
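The setting described above can be illustrated with a minimal sketch of such a black-box evolutionary loop. The `fitness` function here stands in for downloading a weight set to the chip and measuring the resulting network's performance; all names, parameter values, and the toy fitness function are illustrative assumptions, not the actual training software used in the group.

```python
import random

def evolve(fitness, dim, pop_size=20, generations=100,
           sigma=0.1, elite=4, seed=0):
    """Minimal evolutionary loop: treats fitness as a black box,
    so no gradient or device-level knowledge is required."""
    rng = random.Random(seed)
    # Random initial population of candidate weight vectors.
    pop = [[rng.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Rank by fitness (higher is better); on real hardware this
        # step would run each candidate network on the chip.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]
        # Keep the best individuals (selection) and refill the
        # population with Gaussian-mutated copies of them.
        pop = parents + [
            [w + rng.gauss(0, sigma) for w in rng.choice(parents)]
            for _ in range(pop_size - elite)
        ]
    return max(pop, key=fitness)

# Toy fitness: negative squared distance to a hidden target vector,
# standing in for task performance measured on the chip.
target = [0.5, -0.3, 0.8]
fit = lambda w: -sum((a - b) ** 2 for a, b in zip(w, target))
best = evolve(fit, dim=3)
```

Because each generation only needs fitness evaluations, the fast chip can score entire populations quickly, which is exactly what makes this class of algorithm practical here.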

As with artificial neural networks, the concept of evolutionary algorithms is inspired by nature in that it mimics the fundamental principles of natural evolution. These principles were first described in 1859 in Charles Darwin's work "The Origin of Species" and, extended by modern genetics, still hold today. Please follow the links below to learn more about evolutionary algorithms in general, how they are used for the training of neural networks within our group, and how they can be implemented to keep up with the speed of our neural hardware: