We introduce several probabilistic quantum algorithms that overcome the usual unitarity restrictions in quantum machine learning by leveraging the Linear Combination of Unitaries (LCU) method. First, we give a quantum-native implementation of Residual Networks (ResNet), showing that residual connections between layers of a variational ansatz can prevent barren plateaus in models that would otherwise contain them. Second, we implement a quantum analogue of the average pooling layers of convolutional networks using single-qubit-controlled basic arithmetic operators, and show that the LCU success probability remains stable on the MNIST dataset. This method can be further generalised to convolutional filters while using exponentially fewer controlled unitaries than previous approaches. Finally, we propose a general framework for applying a linear combination of irreducible-subspace projections to quantum-encoded data. This allows a quantum state to remain within an exponentially large space while selectively amplifying specific subspaces relative to others, alleviating the simulability concerns that arise when fully projecting onto a polynomially sized subspace. We demonstrate improved classification performance for partially amplified permutation-invariant encodings of point cloud data compared with non-invariant or fully permutation-invariant encodings, and we also present a novel rotationally invariant encoding for point cloud data via Schur-Weyl duality. Since all of these frameworks are constructed with the LCU method, further novel quantum machine learning algorithms could plausibly be created using the same technique.
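The central object throughout is an operator of the form A = Σ_i α_i U_i, which is generally non-unitary and hence can only be applied probabilistically. The following minimal classical sketch (the single-qubit Paulis and coefficients are illustrative choices, not taken from this work) verifies the linear-algebra picture and the standard LCU success probability p = ‖A|ψ⟩‖² / (Σ_i α_i)².

```python
import numpy as np

# Illustrative sketch of the Linear Combination of Unitaries (LCU) idea:
# A = sum_i alpha_i U_i with alpha_i > 0 is generally non-unitary, so on a
# quantum computer it is applied probabilistically via ancilla registers.
# Here we check the underlying linear algebra with single-qubit Paulis.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

alphas = [0.6, 0.3, 0.1]      # positive LCU coefficients (hypothetical values)
unitaries = [I, X, Z]
A = sum(a * U for a, U in zip(alphas, unitaries))

# A is non-unitary in general: A^dagger A != I
print(np.allclose(A.conj().T @ A, np.eye(2)))   # False

# Standard LCU success probability for applying A to a state |psi>:
#   p = ||A |psi>||^2 / (sum_i alpha_i)^2
psi = np.array([1, 0], dtype=complex)           # |0>
p = np.linalg.norm(A @ psi) ** 2 / sum(alphas) ** 2
print(p)
```

In this toy instance the non-unitarity of A and a nonzero success probability are both visible directly; the algorithms summarised above amount to choosing the coefficients and unitaries so that A implements a residual connection, a pooling filter, or a subspace projection.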