Under reasonable assumptions, we rigorously establish asymptotic speedups for Thermodynamic AI algorithms, relative to digital methods, that scale linearly in dimension.
Many Artificial Intelligence (AI) algorithms are inspired by physics and employ stochastic fluctuations, such as generative diffusion models, Bayesian neural networks, and Monte Carlo inference. These algorithms are currently run on digital hardware, ultimately limiting their scalability and overall potential. Here, we propose a novel computing device, called Thermodynamic AI hardware, that could accelerate such algorithms. Thermodynamic AI hardware can be viewed as a novel form of computing, since it uses novel fundamental building blocks, called stochastic units (s-units), which naturally evolve over time via stochastic trajectories. In addition to these s-units, Thermodynamic AI hardware employs a Maxwell's demon device that guides the system to produce non-trivial states. We provide a few simple physical architectures for building these devices, such as RC electrical circuits. Moreover, we show that this same hardware can be used to accelerate various linear algebra primitives. We present simple thermodynamic algorithms for (1) solving linear systems of equations, (2) computing matrix inverses, (3) computing matrix determinants, and (4) solving Lyapunov equations. Under reasonable assumptions, we rigorously establish asymptotic speedups for our algorithms, relative to digital methods, that scale linearly in dimension. Numerical simulations also suggest a speedup is achievable in practical scenarios.
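To give a flavor of how stochastic dynamics can perform a linear algebra primitive such as solving $Ax = b$, the following is a minimal digital sketch (not the paper's hardware implementation): it simulates overdamped Langevin dynamics $dx = -(Ax - b)\,dt + \sqrt{2T}\,dW$, whose stationary distribution has mean $A^{-1}b$, and estimates the solution by time-averaging. The problem instance, step size, temperature, and iteration counts are all illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative instance (an assumption, not from the paper): a small
# symmetric positive-definite system A x = b.
n = 4
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)
b = rng.standard_normal(n)

# Euler-Maruyama simulation of the overdamped Langevin / Ornstein-
# Uhlenbeck dynamics dx = -(A x - b) dt + sqrt(2 T) dW.  The stationary
# mean of this process is A^{-1} b, so a long time-average of x (after a
# burn-in period) estimates the solution of the linear system.
dt, temperature = 1e-3, 0.1
burn_in, steps = 20_000, 200_000
x = np.zeros(n)
acc = np.zeros(n)
for step in range(steps):
    noise = np.sqrt(2 * temperature * dt) * rng.standard_normal(n)
    x += -(A @ x - b) * dt + noise
    if step >= burn_in:
        acc += x
x_est = acc / (steps - burn_in)

print(np.allclose(x_est, np.linalg.solve(A, b), atol=0.05))
```

On hardware, the continuous physical evolution of the s-units would play the role of this simulated loop; the digital version above only illustrates why the stationary statistics encode the solution.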