Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients
Affiliation:1. School of Engineering, Faculty of Applied Science, University of British Columbia (Okanagan), 1137 Alumni Avenue Kelowna, BC V1V 1V7 Canada;2. Information Directorate, Air Force Research Laboratory, Rome, NY, 13441 USA;3. Toyota Technological Institute, Tenpaku-Ku, Nagoya, 468-8511 Japan
Abstract:Because the quality of the fused outcome is determined by the amount of information captured from the source images, a multi-modal medical image fusion method is developed in the shift-invariant shearlet transform (SIST) domain. The two-state hidden Markov tree (HMT) model is extended to the SIST domain to describe the dependencies of the SIST coefficients across scales and between subbands. Based on this model, we explain why the conventional Average–Maximum scheme is not the best rule for medical image fusion, and a new scheme is therefore developed in which the probability density function and the standard deviation of the SIST coefficients are used to compute the fused coefficients. Finally, the fused image is obtained by directly applying the inverse SIST. By integrating the SIST and the HMT model, more spatial feature information about singularities and more functional information content can be preserved and transferred into the fused result. Visual and statistical analyses demonstrate that fusion quality is significantly improved over five typical methods in terms of entropy, mutual information, edge information, standard deviation, peak signal-to-noise ratio, and structural similarity. In addition, color distortion is suppressed to a great extent, yielding a better visual impression.
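The abstract describes a fusion rule driven by a coefficient activity measure (the authors use the HMT-derived probability density function together with the standard deviation of the SIST coefficients). As a rough illustrative sketch only, and not the authors' exact rule, a standard-deviation-driven choose-max fusion of two corresponding subband coefficient arrays can be written in plain NumPy; the function names `local_std` and `fuse_coefficients`, the window size, and the reflect padding are all assumptions for this sketch:

```python
import numpy as np

def local_std(coeffs, win=3):
    # Local standard deviation as a simple activity measure for each
    # coefficient, computed over a win x win neighborhood with reflected
    # borders (an assumption of this sketch, not taken from the paper).
    pad = win // 2
    padded = np.pad(coeffs, pad, mode="reflect")
    out = np.empty(coeffs.shape, dtype=float)
    for i in range(coeffs.shape[0]):
        for j in range(coeffs.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].std()
    return out

def fuse_coefficients(c_a, c_b, win=3):
    # Choose-max rule on local activity: at each position keep the
    # coefficient from whichever source subband is locally more "active".
    act_a = local_std(c_a, win)
    act_b = local_std(c_b, win)
    return np.where(act_a >= act_b, c_a, c_b)

# Toy usage: a flat (inactive) subband versus a textured one.
c_flat = np.zeros((4, 4))
c_textured = (np.indices((4, 4)).sum(axis=0) % 2).astype(float)
fused = fuse_coefficients(c_flat, c_textured)
```

In a full pipeline this rule would be applied per subband between the forward and inverse SIST; the paper additionally weights the decision by the HMT model's probability density function, which this sketch omits.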
Keywords:
Indexed by ScienceDirect and other databases.