ACE3P is a comprehensive set of parallel finite-element codes for multi-physics modeling of particle accelerators. Running on massively parallel computing platforms for high-fidelity, high-accuracy simulation, ACE3P enables rapid virtual prototyping of accelerator rf components for design, optimization, and analysis. Recent advances in ACE3P have been achieved through the implementation of advanced numerical algorithms, the enhancement of multi-physics modeling capabilities, integration with beam dynamics (IMPACT) and particle-matter interaction (Geant4) codes toward start-to-end simulation, and improvements in code performance on state-of-the-art high-performance computing (HPC) platforms for large-scale computation.

ACE3P has been applied to the design and optimization of many DOE accelerator projects nationwide. In this paper, we focus on ACE3P applications to high-brightness injectors and high-gradient accelerators. Using ACE3P's enhanced multi-physics modeling capabilities, rf performance has been fully evaluated, covering cavity shape design, multipacting, and thermal and mechanical analysis, for the NC and SRF guns in LCLS-II and LCLS-II-HE, respectively. A3PI, an integrated ACE3P-IMPACT workflow, provides realistic start-to-end studies of beam performance for the LCLS-II Low Emittance Injector (LEI) accelerator system, including the effects of gun cavity imperfections. ACE3P has also been used to design and optimize the novel high-gradient distributed-coupling accelerator cavity for the proposed Higgs factory C3, based on Cool Copper C-band technology. The implementation of an integrated ACE3P-Geant4 workflow is underway, with the ultimate objective of determining dark current radiation effects in high-gradient accelerator systems.

Furthermore, SLAC has collaborated with Kitware and Simmetrix through the SBIR program to make ACE3P more accessible to the broader accelerator community by developing enabling technologies: a GUI, a simulation workflow for HPC systems, and advanced meshing techniques for automatic shape optimization. All the simulations presented in this paper were performed on computing resources at the National Energy Research Scientific Computing Center (NERSC).
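To illustrate the kind of coupled-code orchestration an integrated workflow such as A3PI performs, the following is a minimal Python sketch of a start-to-end driver: an electromagnetic field solve, a field-map conversion, and a beam-dynamics tracking run chained in sequence. The executable names, input files, and command-line arguments are hypothetical placeholders for illustration only; they do not reflect the actual A3PI interface.

```python
"""Minimal sketch of a coupled electromagnetic/beam-dynamics workflow.

All executable names, input files, and arguments below are hypothetical
placeholders; they illustrate the orchestration pattern only, not the
actual A3PI (ACE3P-IMPACT) interface.
"""
import subprocess
from pathlib import Path

WORKDIR = Path("lei_start_to_end")  # hypothetical working directory


def run_step(cmd, cwd):
    """Run one solver stage and fail loudly if it does not succeed."""
    print(f"[workflow] running: {' '.join(cmd)}")
    subprocess.run(cmd, cwd=cwd, check=True)


def main():
    WORKDIR.mkdir(exist_ok=True)

    # Stage 1: eigenmode solve of the gun cavity, possibly with a
    # perturbed geometry to model fabrication imperfections.
    run_step(["srun", "-n", "128", "omega3p", "gun_cavity.in"], WORKDIR)

    # Stage 2: convert the computed rf field map into the format the
    # beam-dynamics code reads (hypothetical converter name).
    run_step(["fieldmap_convert", "gun_cavity.fields", "rfdata.in"], WORKDIR)

    # Stage 3: track the beam through the injector using the realistic
    # 3D field map produced in stage 1.
    run_step(["srun", "-n", "256", "impact_t", "impact.in"], WORKDIR)


if __name__ == "__main__":
    main()
```

In practice such a driver would run as a batch job on an HPC system (e.g., via Slurm at NERSC), with each stage's output validated before the next stage is launched.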