Neural networks are composed of basic units somewhat analogous to neurons. These units are linked to each other by connections whose strength is modifiable as a result of a learning process or algorithm. Each of these units integrates independently (in parallel) the information provided by its synapses in order to evaluate its state of activation. The unit response is then a linear or nonlinear function of its activation. Linear algebra concepts are used, in general, to analyze linear units, with eigenvectors and eigenvalues being the core concepts involved. This analysis makes clear the strong similarity between linear neural networks and the general linear model developed by statisticians. The linear models presented here are the perceptron and the linear associator.

The behavior of nonlinear networks can be described within the framework of optimization and approximation techniques with dynamical systems (e.g., those used to model spin glasses). One of the main notions used with nonlinear unit networks is the notion of attractor. When the task of the network is to associate a response with some specific input patterns, the most popular nonlinear technique consists of using hidden layers of neurons trained with back-propagation of error. The nonlinear models presented are the Hopfield network, the Boltzmann machine, the back-propagation network, and the radial basis function network.

Even though research in neural modeling started circa 1940 (i.e., McCulloch & Pitts in 1943 [48]), there was little active development of the field prior to the late fifties and early sixties, when Rosenblatt introduced the perceptron in 1958 [69] (a close cousin of the perceptron was Widrow's adaline, introduced in 1960 [84]). These early models already possess most of the
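The basic unit and the perceptron learning rule described above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the historical implementation: the function names (`unit_response`, `perceptron_train`), the +1/−1 target coding, the learning rate, and the toy logical-OR data set are all assumptions introduced here for clarity.

```python
import numpy as np

def unit_response(weights, inputs, f=np.tanh):
    # A unit integrates its inputs through a weighted sum (its activation),
    # then responds with a linear or nonlinear function f of that activation.
    activation = np.dot(weights, inputs)
    return f(activation)

def perceptron_train(X, y, epochs=20, lr=0.1):
    # Rosenblatt-style perceptron rule (sketch): adjust the weights only
    # when the thresholded response disagrees with the target (+1 / -1).
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = 1.0 if np.dot(w, x) + b > 0 else -1.0
            if pred != target:
                w += lr * target * x
                b += lr * target
    return w, b

# Linearly separable toy problem: logical OR with +1/-1 coding (hypothetical data).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., 1.])
w, b = perceptron_train(X, y)
preds = [1.0 if np.dot(w, x) + b > 0 else -1.0 for x in X]
```

Because the toy data are linearly separable, the perceptron convergence theorem guarantees that such a rule finds a separating weight vector in a finite number of corrections; on non-separable data (e.g., XOR) it never converges, which motivates the hidden-layer networks discussed later.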