TY - JOUR
T1 - Medical image fusion via discrete stationary wavelet transform and an enhanced radial basis function neural network
AU - Chao, Zhen
AU - Duan, Xingguang
AU - Jia, Shuangfu
AU - Guo, Xuejun
AU - Liu, Hao
AU - Jia, Fucang
N1 - Publisher Copyright:
© 2022 Elsevier B.V.
PY - 2022/3
Y1 - 2022/3
N2 - Medical image fusion of images acquired with different modalities can expand the inherent information of the original images: the fused image displays details better than the original sub-images, which facilitates diagnosis and treatment selection. An inherent challenge in medical image fusion is to combine the most useful information and image details effectively, without information loss. Despite the many methods that have been proposed, effective retention and presentation of information remains challenging. Therefore, we propose and evaluate a novel image fusion method based on the discrete stationary wavelet transform (DSWT) and a radial basis function neural network (RBFNN). First, we analyze the detail and feature information of the two images to be fused by applying a two-level DSWT decomposition, which separates each image into seven parts comprising high-frequency and low-frequency sub-bands. Considering the gradient and energy attributes of the target, the parts at the same position in the two images are then fused by the proposed enhanced RBFNN, whose input, hidden, and output layers comprise 8, 40, and 1 neuron(s), respectively. The seven neural networks yield seven fused parts, and the final fused image is obtained through the inverse wavelet transform. The neural networks are trained with a hybrid of the adaptive gradient descent algorithm (AGDA) and the gravitational search algorithm (GSA). Experimental results show that the proposed method performs significantly better than current state-of-the-art methods.
AB - Medical image fusion of images acquired with different modalities can expand the inherent information of the original images: the fused image displays details better than the original sub-images, which facilitates diagnosis and treatment selection. An inherent challenge in medical image fusion is to combine the most useful information and image details effectively, without information loss. Despite the many methods that have been proposed, effective retention and presentation of information remains challenging. Therefore, we propose and evaluate a novel image fusion method based on the discrete stationary wavelet transform (DSWT) and a radial basis function neural network (RBFNN). First, we analyze the detail and feature information of the two images to be fused by applying a two-level DSWT decomposition, which separates each image into seven parts comprising high-frequency and low-frequency sub-bands. Considering the gradient and energy attributes of the target, the parts at the same position in the two images are then fused by the proposed enhanced RBFNN, whose input, hidden, and output layers comprise 8, 40, and 1 neuron(s), respectively. The seven neural networks yield seven fused parts, and the final fused image is obtained through the inverse wavelet transform. The neural networks are trained with a hybrid of the adaptive gradient descent algorithm (AGDA) and the gravitational search algorithm (GSA). Experimental results show that the proposed method performs significantly better than current state-of-the-art methods.
KW - Discrete stationary wavelet transform
KW - Enhanced radial basis function neural network
KW - Medical image fusion
UR - http://www.scopus.com/inward/record.url?scp=85124461269&partnerID=8YFLogxK
U2 - 10.1016/j.asoc.2022.108542
DO - 10.1016/j.asoc.2022.108542
M3 - Article
AN - SCOPUS:85124461269
SN - 1568-4946
VL - 118
JO - Applied Soft Computing
JF - Applied Soft Computing
M1 - 108542
ER -