Abstract
Machine learning has emerged as a powerful solution to the modern challenges in accelerator physics. However, the limited availability of beam time, the computational cost of simulations, and the high dimensionality of optimization problems pose significant challenges in generating the required data for training state-of-the-art machine learning models. In this work, we introduce Cheetah, a PyTorch-based high-speed differentiable linear beam dynamics code. Cheetah enables the fast collection of large datasets by reducing computation times by multiple orders of magnitude and facilitates efficient gradient-based optimization for accelerator tuning and system identification. This positions Cheetah as a user-friendly, readily extensible tool that integrates seamlessly with widely adopted machine learning tools. We showcase the utility of Cheetah through five examples, including reinforcement learning training, gradient-based beamline tuning, gradient-based system identification, physics-informed Bayesian optimization priors, and modular neural network surrogate modeling of space charge effects. The use of such a high-speed differentiable simulation code will simplify the development of machine learning-based methods for particle accelerators and fast-track their integration into everyday operations of accelerator facilities.

Published by the American Physical Society, 2024
Jan Kaiser et al., Bridging the gap between machine learning and particle accelerator physics with high-speed, differentiable simulations, Phys. Rev. Accel. Beams
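To illustrate the idea of gradient-based beamline tuning through a differentiable simulation, the following is a minimal, self-contained sketch. It is not Cheetah's actual API: all names (`Dual`, `thin-lens quadrupole`, `beam_size_sq`, the lattice, and the beam parameters) are hypothetical, and forward-mode autodiff via dual numbers stands in for PyTorch's reverse-mode autograd. The sketch propagates a sigma matrix through a drift-quadrupole-drift lattice and descends the gradient of the beam size with respect to the quadrupole strength.

```python
# Sketch only: pedagogical stand-in for differentiable beam dynamics,
# not Cheetah's API. Forward-mode autodiff with dual numbers replaces
# PyTorch autograd so the example needs no external dependencies.

class Dual:
    """Dual number a + b*eps carrying a value and its derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.der - o.der)
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.der + self.der * o.val)
    __rmul__ = __mul__

def matmul2(A, B):
    """2x2 matrix product; entries may be floats or Dual numbers."""
    return [[A[i][0] * B[0][j] + A[i][1] * B[1][j] for j in range(2)]
            for i in range(2)]

def transpose2(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def beam_size_sq(k, sigma0):
    """sigma_x^2 after 1 m drift -> thin-lens quad (strength k) -> 1 m drift."""
    drift = [[1.0, 1.0], [0.0, 1.0]]
    quad = [[Dual(1.0), Dual(0.0)], [Dual(0.0) - k, Dual(1.0)]]
    M = matmul2(drift, matmul2(quad, drift))
    sigma = matmul2(M, matmul2(sigma0, transpose2(M)))  # M Sigma M^T
    return sigma[0][0]

# Hypothetical initial beam: sigma_x = 1 mm, sigma_x' = 0.1 mrad, uncorrelated.
sigma0 = [[1e-6, 0.0], [0.0, 1e-8]]

k = 0.1  # initial quadrupole strength (1/m)
for _ in range(500):
    s2 = beam_size_sq(Dual(k, 1.0), sigma0)  # seed dk/dk = 1
    k -= 2e5 * s2.der                        # gradient step on sigma_x^2

print(round(k, 3))  # converges near the analytic optimum k = 1.02/1.01 ~ 1.01
```

Because the simulation here is exactly quadratic in `k`, plain gradient descent converges cleanly; the point of a differentiable code like Cheetah is that the same pattern scales to long lattices and many parameters, where autograd supplies the gradients that finite differences would make prohibitively expensive.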