Statistical correlative model in the multimodal fusion of brain images
Authors: Zhancheng Zhang, Jie Cui, Xiaoqing Luo, Qingjun You
Affiliation:
1. School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, Jiangsu, China
2. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, Jiangsu, China; Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Wuxi, Jiangsu, China
3. Department of Thoracic Surgery, Affiliated Hospital of Jiangnan University, Wuxi, Jiangsu, China
Abstract: Fusing multimodal medical images into a single integrated image provides richer detail and information, thereby facilitating medical diagnosis and therapy. Most existing multiscale fusion methods ignore the correlations between decomposition coefficients, which leads to incomplete fusion results. A novel contextual hidden Markov model (CHMM) is proposed to construct a statistical model of the contourlet coefficients. First, the paired brain images are decomposed into multiscale, multidirectional, and anisotropic subbands with a contourlet transform. The low-frequency components are then fused with the choose-max rule. For the high-frequency coefficients, the CHMM is learned with the EM algorithm and combined with a novel fuzzy-entropy-based context that builds fuzzy relationships among the coefficients. Finally, the fused brain image is obtained by applying the inverse contourlet transform. Fusion experiments on several multimodal brain image pairs show the superiority of the proposed method in terms of both visual quality and widely used objective measures.
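The fusion pipeline in the abstract (decompose, fuse low- and high-frequency subbands, invert) can be sketched in simplified form. This is a hypothetical illustration, not the paper's implementation: the contourlet transform is assumed to have already produced coefficient lists, and a plain magnitude-based choose-max rule stands in for the CHMM-based high-frequency rule, which the paper learns with EM.

```python
# Hypothetical sketch of the multiscale fusion step. The paper uses a
# contourlet decomposition and a CHMM-based rule for high-frequency
# subbands; here both fusion rules are the simple choose-max rule
# applied to pre-computed coefficient lists.

def choose_max(a, b):
    """Keep, position by position, the coefficient with larger magnitude."""
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

def fuse_subbands(coeffs_a, coeffs_b):
    """Fuse two decompositions given as (low_band, [high_subbands...]).

    Low-frequency bands use the choose-max rule as in the paper;
    the same rule stands in for the CHMM-driven high-frequency rule.
    """
    low_a, highs_a = coeffs_a
    low_b, highs_b = coeffs_b
    fused_low = choose_max(low_a, low_b)
    fused_highs = [choose_max(ha, hb) for ha, hb in zip(highs_a, highs_b)]
    return fused_low, fused_highs

# Example: fuse a toy decomposition with one high-frequency subband.
fused = fuse_subbands(([1, -5], [[2, 0]]), ([3, 4], [[-1, 7]]))
# fused == ([3, -5], [[2, 7]])
```

In the full method, the inverse contourlet transform of the fused subbands would then yield the fused brain image.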
Keywords: contourlet transform, hidden Markov model, medical image fusion, statistical model