Random projections for scaling machine learning on FPGAs

6 Citations · 2016
Sean Fox, Stephen Tridgell, C. Jin
2016 International Conference on Field-Programmable Technology (FPT)

A Field-Programmable Gate Array implementation alongside a kernel adaptive filter that reduces computational resources by introducing a controlled error term, achieving higher modelling capacity for given hardware resources.

Abstract

Random projections have recently emerged as a powerful technique for large-scale dimensionality reduction in machine learning applications. Crucially, the projection can be obtained from sparse probability distributions, enabling hardware implementations with little overhead. In this paper, we describe a Field-Programmable Gate Array (FPGA) implementation alongside a kernel adaptive filter (KAF) that is capable of reducing computational resources by introducing a controlled error term, achieving higher modelling capacity for given hardware resources. Empirical results involving classification, regression and novelty detection show that a 40% net increase in available resources and improvements in prediction accuracy are achievable for projections which halve the input vector length, enabling us to scale up hardware implementations of KAF learning algorithms by at least a factor of 2. An implementation on an FPGA-based network card allows novelty detection of an 8 × 24-bit input vector with a latency of 404 ns, a 26-fold reduction compared to an Intel Core i5-2400 processor.
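The abstract notes that the projection matrix can be drawn from a sparse distribution, which is what makes a low-overhead hardware implementation possible. As a software-level illustration only (the paper does not specify its exact distribution here; this sketch assumes the well-known sparse {+1, 0, −1} construction of Achlioptas, and the function name and parameters are hypothetical), a projection that halves the input vector length might look like:

```python
import numpy as np

def sparse_random_projection(X, k, seed=None):
    """Project the rows of X from d dimensions down to k dimensions.

    Assumes the Achlioptas-style sparse distribution: each entry of the
    projection matrix is +1 or -1 with probability 1/6 each, and 0 with
    probability 2/3, scaled by sqrt(3/k) so that pairwise distances are
    preserved in expectation. This is an illustrative choice, not
    necessarily the distribution used in the paper.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Two-thirds of the entries are zero, so a hardware datapath can
    # skip most multiplications; the survivors are just sign flips.
    R = rng.choice([1.0, 0.0, -1.0], size=(d, k), p=[1 / 6, 2 / 3, 1 / 6])
    return X @ (np.sqrt(3.0 / k) * R)

# Halve an 8-dimensional input, mirroring the 2x reduction reported above.
X = np.random.default_rng(0).normal(size=(100, 8))
Y = sparse_random_projection(X, k=4, seed=1)
print(Y.shape)  # (100, 4)
```

Because two-thirds of the matrix entries are zero and the rest are ±1, the projection in hardware reduces to sparse sign-flipped additions rather than full multiplications, which is the source of the low overhead the abstract refers to.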