
Neuromorphic Computing Concepts | Encyclopedia

88 Citations • 2020
D. Ielmini
journal unavailable

Abstract

The origin of neuromorphic computing can be traced back to 1943, when McCulloch and Pitts proposed a mathematical model of the biological neuron. This is depicted in Figure 1a, where the neuron is conceived as a processing unit that operates (i) a summation of input signals (x1, x2, x3, ...), each multiplied by a suitable synaptic weight (w1, w2, w3, ...), and (ii) a non-linear transformation according to an activation function, e.g., a sigmoidal function. A second landmark came in 1957, when Rosenblatt developed the model of a fundamental neural network called the multilayer perceptron (MLP), which is schematically illustrated in Figure 1b. The MLP consists of an input layer, one or more intermediate layers called hidden layers, and an output layer, through which the input signal is forward propagated toward the output. The MLP model constitutes the backbone of the emerging concept of deep neural networks (DNNs). DNNs have recently shown excellent performance in tasks such as pattern classification and speech recognition, achieved via extensive supervised training techniques such as the backpropagation rule. DNNs are usually implemented in hardware on von Neumann platforms, such as the graphics processing unit (GPU) and the tensor processing unit (TPU), used to execute both training and inference. These hardware implementations, however, reveal all the typical limitations of the von Neumann architecture, chiefly their large energy consumption in contrast with the human brain.
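The neuron model and forward propagation described above can be sketched in a few lines of Python. This is a minimal illustration, not the article's implementation; the layer sizes and weight values are arbitrary placeholders chosen only to show the structure (weighted sum, sigmoid activation, layer-by-layer propagation).

```python
import math

def sigmoid(z):
    # Non-linear activation function (the sigmoidal function in the text)
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights):
    # McCulloch-Pitts-style unit: (i) weighted sum of inputs,
    # (ii) non-linear transformation via the activation function
    z = sum(x * w for x, w in zip(inputs, weights))
    return sigmoid(z)

def mlp_forward(x, layers):
    # Forward-propagate the input through each layer in turn.
    # Each layer is a list of weight vectors, one per neuron.
    for weight_matrix in layers:
        x = [neuron(x, w) for w in weight_matrix]
    return x

# Hypothetical example: 3 inputs -> hidden layer of 2 -> output layer of 1
hidden_layer = [[0.5, -0.3, 0.8], [0.1, 0.4, -0.2]]
output_layer = [[1.0, -1.0]]
y = mlp_forward([1.0, 0.0, 1.0], [hidden_layer, output_layer])
```

Training such a network (e.g., with the backpropagation rule mentioned above) would additionally require computing gradients of a loss with respect to the weights; only inference is shown here.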