A multi-layer perceptron (i.e. a feedforward neural network with hidden layers) contains at least two layers of functional units. This means that at least one layer consists of hidden units, which do not communicate directly with the environment. If the number of hidden units is appropriately chosen, multi-layer perceptrons are universal approximators, i.e. they can solve, at least in theory, any association problem (nonlinearly separable classification, nonlinear regression, and prediction problems).
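The role of the hidden layer can be illustrated on the classic XOR problem, which is nonlinearly separable and therefore unsolvable by a perceptron without hidden units. The sketch below (an illustrative hand-built construction, not taken from the text) uses two hidden threshold units computing OR and AND, whose outputs become linearly separable for the output unit:

```python
import numpy as np

def step(z):
    """Heaviside step activation: 1 if z >= 0, else 0."""
    return (z >= 0).astype(int)

# The four XOR input patterns
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])

# Hidden layer: first unit fires for OR(x1, x2), second for AND(x1, x2)
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output layer: fires when OR is true but AND is false, i.e. XOR
W2 = np.array([[1.0], [-1.0]])
b2 = np.array([-0.5])

h = step(X @ W1 + b1)          # hidden representation of the inputs
out = step(h @ W2 + b2)        # network output
print(out.ravel())             # -> [0 1 1 0], i.e. XOR of each input pair
```

In practice the weights are learned (e.g. by backpropagation with differentiable activations) rather than set by hand, but the construction shows how hidden units re-map a nonlinearly separable problem into a linearly separable one.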