TY - JOUR
T1 - Improving Patient-ventilator Synchrony during Pressure Support Ventilation based on Reinforcement Learning Algorithm
AU - Hao, Liming
AU - Wang, Xiaohan
AU - Ren, Shuai
AU - Shi, Yan
AU - Cai, Maolin
AU - Wang, Tao
AU - Luo, Zujin
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2025
Y1 - 2025
N2 - Mechanical ventilation is an effective treatment for critically ill patients and those with pulmonary diseases. However, patient-ventilator asynchrony (PVA) remains a significant challenge, potentially leading to high mortality. Improving patient-ventilator synchrony poses a complex decision-making problem in clinical practice. Traditional methods rely heavily on clinicians' experience, often resulting in inefficiencies, delayed ventilator adjustments, and resource shortages. This paper proposes a novel approach using a deep reinforcement learning (RL) algorithm based on a deep Q-network (DQN) to enhance patient-ventilator synchrony during pressure support ventilation. The action space and reward function are established from clinical experience, and a pneumatic model of the mechanical ventilation system is constructed to simulate various patient conditions and types of PVAs. Clinical data are used to evaluate the RL algorithm qualitatively and quantitatively. The RL-optimized ventilation strategy reduces the proportion of breaths containing PVAs from 37.52% to 7.08%, demonstrating its effectiveness in assisting clinical decision-making, improving synchrony, and enabling intelligent ventilator control, bedside monitoring, and automatic weaning.
AB - Mechanical ventilation is an effective treatment for critically ill patients and those with pulmonary diseases. However, patient-ventilator asynchrony (PVA) remains a significant challenge, potentially leading to high mortality. Improving patient-ventilator synchrony poses a complex decision-making problem in clinical practice. Traditional methods rely heavily on clinicians' experience, often resulting in inefficiencies, delayed ventilator adjustments, and resource shortages. This paper proposes a novel approach using a deep reinforcement learning (RL) algorithm based on a deep Q-network (DQN) to enhance patient-ventilator synchrony during pressure support ventilation. The action space and reward function are established from clinical experience, and a pneumatic model of the mechanical ventilation system is constructed to simulate various patient conditions and types of PVAs. Clinical data are used to evaluate the RL algorithm qualitatively and quantitatively. The RL-optimized ventilation strategy reduces the proportion of breaths containing PVAs from 37.52% to 7.08%, demonstrating its effectiveness in assisting clinical decision-making, improving synchrony, and enabling intelligent ventilator control, bedside monitoring, and automatic weaning.
KW - Decision-Making Optimization
KW - Deep Reinforcement Learning
KW - Patient-ventilator Synchrony
KW - Pressure Support Ventilation
UR - http://www.scopus.com/inward/record.url?scp=105001300621&partnerID=8YFLogxK
U2 - 10.1109/JBHI.2025.3551670
DO - 10.1109/JBHI.2025.3551670
M3 - Article
AN - SCOPUS:105001300621
SN - 2168-2194
JO - IEEE Journal of Biomedical and Health Informatics
JF - IEEE Journal of Biomedical and Health Informatics
ER -