The EvoOpt Mixed-Signal Neural Network Chip
Our approach is to build a mixed-signal CMOS implementation of a neural network. Much like the classic perceptron model, we use continuous weights (analog synapses) and binary neurons (digital neurons). A first prototype, fabricated in a 0.35 um CMOS process, implements a fully connected network with 64 input and 64 output neurons (4096 synapses) capable of simulating multi-layer feed-forward and recurrent networks. With a single-synapse area of only 100 um^2 and clock frequencies of 100 MHz, it is remarkably small and fast and therefore forms a good foundation for further scaling of the network and for real-time applications. The perceptron approach, however, trades simplicity for the difficulty of finding suitable learning strategies. Since backpropagation cannot be applied, a major effort of our project is to examine learning strategies suited to our hardware implementation. First tests with the prototype are currently performed using a simple genetic algorithm partly implemented in an FPGA. Because new generations of individuals (one individual being a complete weight assignment for the network) can be produced at a high rate, the approach shows promising results. For example, the parity problem - known to be difficult because of its pseudo-random nature and the hidden layer it requires - was successfully learned for 8 input bits.
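The training loop described above can be illustrated in software. The following is a minimal sketch, not the project's actual FPGA implementation: an individual is one flat weight vector for a network of binary threshold neurons with continuous weights, fitness is the fraction of correctly classified parity patterns, and evolution is simple elitism plus Gaussian mutation. The network sizes, population size, and mutation scale are illustrative assumptions (3-bit parity rather than the 8-bit case reported in the text).

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID = 3, 8            # 3-bit parity with one hidden layer (illustrative sizes)
POP, GENS, SIGMA = 60, 200, 0.3

# All 2^N_IN input patterns and their parity labels.
X = np.array([[(i >> b) & 1 for b in range(N_IN)]
              for i in range(2 ** N_IN)], dtype=float)
y = X.sum(axis=1) % 2

N_WEIGHTS = N_IN * N_HID + N_HID

def forward(ind, x):
    """Binary threshold neurons driven by continuous (analog) weights."""
    w1 = ind[: N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = ind[N_IN * N_HID :].reshape(N_HID, 1)
    hidden = (x @ w1 > 0).astype(float)        # binary hidden neurons
    return (hidden @ w2 > 0).astype(float).ravel()

def fitness(ind):
    """Fraction of parity patterns classified correctly."""
    return float((forward(ind, X) == y).mean())

# One individual = one complete weight assignment for the network.
pop = rng.normal(size=(POP, N_WEIGHTS))
best = max(pop, key=fitness)

for _ in range(GENS):
    # Elitism: mutate the current best, but always keep it unchanged in slot 0.
    pop = best + rng.normal(scale=SIGMA, size=(POP, N_WEIGHTS))
    pop[0] = best
    best = max(pop, key=fitness)
    if fitness(best) == 1.0:
        break

print("accuracy:", fitness(best))
```

On the chip, the costly inner step - evaluating every individual on all input patterns - is what the hardware's high clock rate accelerates; the software loop above only mirrors the structure of that process.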
Besides further investigation of the prototype, a next-generation network is in development that allows elementary neural network blocks (each containing two fully connected networks of 128 input and 64 output neurons) to be connected arbitrarily, much like logic blocks in a field-programmable gate array. This will allow both the parallel operation of many small networks and the formation of one large network. This network will be used to refine the learning strategies and to establish whether it is feasible to train networks on the order of 0.5 million synapses, an integration density that may be reachable with the same 0.35 um process.
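The FPGA-like composition of blocks can be sketched as follows. This is a hypothetical software model, not the chip's configuration interface: each Block stands in for one elementary 128-input / 64-output layer (the sizes given in the text), and routing is expressed by concatenating the outputs of two blocks to feed a third, forming a larger network from small ones.

```python
import numpy as np

rng = np.random.default_rng(1)

class Block:
    """One elementary network block: a fully connected 128-in / 64-out
    layer of binary neurons with continuous weights (sizes from the text)."""
    def __init__(self, n_in=128, n_out=64):
        self.w = rng.normal(size=(n_in, n_out))

    def __call__(self, x):
        return (x @ self.w > 0).astype(float)

# FPGA-style routing: the outputs of two blocks are bundled together
# and wired into the inputs of a third block.
b0, b1, b2 = Block(), Block(), Block()

x = rng.integers(0, 2, size=128).astype(float)   # one binary input pattern
h = np.concatenate([b0(x), b1(x)])               # 64 + 64 = 128 routed signals
out = b2(h)                                      # the composed, larger network

print(out.shape)
```

Running the blocks side by side on independent inputs instead of chaining them would model the parallel-operation mode mentioned above; only the routing table changes, not the blocks themselves.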