The sign problem that arises in Hybrid Monte Carlo calculations can be mitigated by deforming
the integration manifold. While simple transformations are highly efficient for simulation, their efficacy systematically decreases with decreasing temperature and increasing interaction strength. Machine-learning models have been shown to push further, but they require additional computational effort and upfront training. While neural networks can learn physical symmetries through proper training, encoding these symmetries directly into the network's structure promises advantages, including enhanced accuracy, accelerated training, and improved stability. The objective of the present study is twofold. First, we investigate the benefits of group convolutional models compared to fully connected networks, with a specific focus on the effects on the sign problem and on computational cost. Second, we examine their capabilities for transfer learning, demonstrating that training costs can be reduced further. We perform our investigations on the Hubbard model for select low-dimensional systems.
