A method of combining learning algorithms is described that preserves attribute efficiency. It yields learning algorithms that require a number of examples polynomial in the number of relevant variables and logarithmic in the number of irrelevant ones. The algorithms are simple to implement and realizable on networks with a number of nodes linear in the total number of variables. They can be viewed as strict generalizations of Littlestone's Winnow algorithm, and are therefore appropriate for domains that have very large numbers of attributes but where nonlinear hypotheses are sought.
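To make the Winnow reference concrete, the following is a minimal sketch of Littlestone's Winnow algorithm (the multiplicative-update variant with doubling and halving) learning a monotone disjunction over Boolean attributes. The variable names, the choice of threshold equal to the number of attributes, and the demonstration target function are illustrative assumptions, not taken from the paper; the attribute efficiency referred to above shows up in the mistake bound, which grows with the number of relevant variables but only logarithmically with the total number of variables.

```python
import random

def winnow_predict(w, x, theta):
    """Predict 1 iff the weighted sum of active attributes reaches the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0

def winnow_update(w, x, y, y_hat):
    """Multiplicative update: double active weights on a false negative,
    halve them on a false positive; leave inactive weights unchanged."""
    if y_hat == y:
        return w
    factor = 2.0 if y == 1 else 0.5
    return [wi * factor if xi == 1 else wi for wi, xi in zip(w, x)]

# Illustrative setup (assumed, not from the paper): n attributes, of which
# only the first two are relevant; the target is their disjunction.
random.seed(0)
n = 100
theta = float(n)          # standard threshold choice theta = n
target = lambda x: 1 if (x[0] or x[1]) else 0

w = [1.0] * n             # all weights start at 1
mistakes = 0
for _ in range(2000):
    x = [random.randint(0, 1) for _ in range(n)]
    y = target(x)
    y_hat = winnow_predict(w, x, theta)
    if y_hat != y:
        mistakes += 1
    w = winnow_update(w, x, y, y_hat)

print("total mistakes:", mistakes)
```

The point of the demonstration is that the mistake count stays far below the number of attributes: a weight-accounting argument bounds the total number of mistakes by O(r log n) for an r-literal disjunction over n attributes, which is what "attribute efficient" means here.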