
RL-MUL: Multiplier Design Optimization with Deep Reinforcement Learning

7 Citations · 2023
Dongsheng Zuo, Yikang Ouyang, Yuzhe Ma
2023 60th ACM/IEEE Design Automation Conference (DAC)

RL-MUL is a multiplier design optimization framework based on reinforcement learning. It uses matrix and tensor representations of a multiplier's compressor tree, allowing convolutional neural networks to be seamlessly incorporated as the agent network.

Abstract

Multiplication is a fundamental operation in many applications, and multipliers are widely adopted in various circuits. However, optimizing multipliers is challenging and non-trivial due to the huge design space. In this paper, we propose RL-MUL, a multiplier design optimization framework based on reinforcement learning. Specifically, we use matrix and tensor representations for the compressor tree of a multiplier, based on which convolutional neural networks can be seamlessly incorporated as the agent network. The agent learns to adjust the multiplier structure based on a Pareto-driven reward that is customized to accommodate the trade-off between area and delay. Experiments are conducted on multipliers of different bit widths. The results demonstrate that the multipliers produced by RL-MUL dominate all baseline designs in terms of both area and delay. The performance gain of RL-MUL is further validated by comparing the area and delay of processing element arrays built from RL-MUL multipliers against those built from baseline designs.
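To make the abstract's two key ideas concrete, here is a minimal sketch of (1) a matrix encoding of a compressor tree and (2) a Pareto-style area/delay reward. All function names, the two-row encoding, and the linear reward model are illustrative assumptions for exposition; the paper's actual representation and reward are not reproduced here.

```python
import numpy as np

def compressor_tree_matrix(bit_width, full_adders, half_adders):
    """Illustrative encoding (an assumption, not the paper's exact scheme):
    a 2 x C matrix over the C = 2*bit_width - 1 partial-product columns,
    where row 0 counts 3:2 compressors (full adders) and row 1 counts
    2:2 compressors (half adders) per column. An image-like matrix such
    as this is what lets a CNN serve directly as the agent network."""
    cols = 2 * bit_width - 1
    m = np.zeros((2, cols), dtype=np.int32)
    for col, n in full_adders.items():
        m[0, col] = n
    for col, n in half_adders.items():
        m[1, col] = n
    return m

def pareto_reward(prev_ppa, new_ppa, w_area=0.5, w_delay=0.5):
    """Hypothetical Pareto-driven reward: the weighted improvement in
    (area, delay) after the agent modifies the tree. The weights encode
    the area/delay trade-off the search should emphasize."""
    d_area = prev_ppa[0] - new_ppa[0]    # positive if area shrank
    d_delay = prev_ppa[1] - new_ppa[1]   # positive if delay shrank
    return w_area * d_area + w_delay * d_delay
```

In an RL loop, the agent would observe the matrix, apply an action that moves a compressor between columns, re-evaluate area and delay, and receive `pareto_reward` as feedback.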
