Tree Tensor Networks implemented on FPGAs as ultra-low latency binary classifiers
L. Borella*, A. Coppi, J. Pazzini, A. Stanco, A. Triossi and M. Zanetti*
* Corresponding author
Pre-published on: January 27, 2025
Abstract
Tensor Networks (TNs) are a computational framework traditionally used to model quantum many-body systems. Recent research has demonstrated that TNs can also be effectively applied to Machine Learning (ML) tasks, producing results comparable to conventional supervised learning methods. In this work, we investigate the use of Tree Tensor Networks (TTNs) for high-frequency real-time applications by harnessing the low-latency capabilities of Field-Programmable Gate Arrays (FPGAs). We present the implementation of TTN classifiers on FPGA hardware, optimized for performing inference on classical ML benchmarking datasets. Various degrees of parallelization are explored to evaluate the trade-offs between resource utilization and algorithm latency. By deploying these TTNs on a hardware accelerator and utilizing an FPGA integrated into a server, we fully offload the TTN inference process, demonstrating the system’s viability for real-time ML applications.
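As a rough illustration of the computation a TTN binary classifier performs at inference time, the sketch below contracts a binary tree of tensors over locally mapped input features in NumPy. It is not the FPGA implementation described in this work; the cosine/sine feature map, the bond dimension, the power-of-two feature count, and all class and function names are illustrative assumptions.

```python
import numpy as np

def feature_map(x):
    # Map each feature in [0, 1] to a 2-dimensional local vector; the
    # cos/sin embedding is a common (here assumed) choice in TN-based ML.
    return np.stack([np.cos(np.pi / 2 * x), np.sin(np.pi / 2 * x)], axis=-1)

class BinaryTTNClassifier:
    """Toy binary-tree tensor network contracted bottom-up, leaves to root."""

    def __init__(self, n_features, bond_dim=4, n_classes=2, seed=0):
        # Illustration only: assume a power-of-two number of input features.
        assert n_features > 1 and (n_features & (n_features - 1)) == 0
        rng = np.random.default_rng(seed)
        self.layers = []
        dim_in, width = 2, n_features
        while width > 1:
            # The root tensor carries the class index; inner nodes carry a bond index.
            dim_out = n_classes if width == 2 else bond_dim
            self.layers.append(
                0.1 * rng.standard_normal((width // 2, dim_in, dim_in, dim_out))
            )
            dim_in, width = dim_out, width // 2

    def forward(self, x):
        # x: raw features scaled to [0, 1], shape (n_features,)
        states = feature_map(x)                       # (n_features, 2)
        for tensors in self.layers:
            left, right = states[0::2], states[1::2]  # pair neighbouring vectors
            # Contract each pair with its node tensor: out_o = sum_ij l_i r_j T_ijo
            states = np.einsum('ni,nj,nijo->no', left, right, tensors)
        return states[0]                              # class scores at the root

# Usage: score 16 random features and take the class with the larger output.
clf = BinaryTTNClassifier(n_features=16)
scores = clf.forward(np.random.default_rng(1).random(16))
print(scores, int(scores.argmax()))
```

Because every node within a tree layer can be contracted independently, layers map naturally onto parallel multiply-accumulate units in hardware; how far this is unrolled is the resource-versus-latency trade-off referred to in the abstract.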
DOI: https://doi.org/10.22323/1.476.1004