TY - JOUR
T1 - A massive MPI parallel framework of smoothed particle hydrodynamics with optimized memory management for extreme mechanics problems
AU - Liu, Jiahao
AU - Yang, Xiufeng
AU - Zhang, Zhilang
AU - Liu, Moubin
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2024/2
Y1 - 2024/2
N2 - The dynamic failure of structures under extreme loadings is common in many fields of engineering and science. The smoothed particle hydrodynamics (SPH) method offers inherent advantages in handling complex interfaces and large material deformations in extreme mechanics problems. However, SPH simulations of 3D engineering applications are time-consuming. To address this issue, we introduce MPI (Message Passing Interface) parallelization into our SPH scheme to reduce computational time. Several optimizations are adopted to support massive SPH computations; in particular, an optimized memory management strategy is developed to control the memory footprint. With the present MPI-based massive parallelization of the SPH method, several validation examples are tested and analyzed. Comparison of the present numerical results with reference data shows that the dynamic failure of complex structures subjected to extreme loadings, such as explosive and impact loadings, is well captured. Up to 2.04 billion particles are used in the present simulations. Scaling tests show that the massively parallel SPH program achieves a maximum parallel efficiency of 97% on 10,020 CPU cores.
KW - Extreme mechanics problems
KW - Massive high performance computing
KW - Memory management
KW - Message passing interface
KW - Smoothed particle hydrodynamics
UR - http://www.scopus.com/inward/record.url?scp=85174614361&partnerID=8YFLogxK
DO - 10.1016/j.cpc.2023.108970
M3 - Article
AN - SCOPUS:85174614361
SN - 0010-4655
VL - 295
JO - Computer Physics Communications
JF - Computer Physics Communications
M1 - 108970
ER -