A self-adaptive mask-enhanced dual-dictionary learning method for MRI-CT image reconstruction


Multi-modality medical imaging has found increasing application in healthcare. Its main advantage is that the weaknesses of each modality are offset by the others. Simultaneous MRI and X-ray CT imaging is a possible multi-modality imaging mode in future clinical applications. Both MRI and CT are widely used imaging modalities; though both belong to structural imaging, they provide complementary information about patient anatomy. MRI typically offers good soft-tissue contrast, while CT depicts bone with better image quality. Hence, the idea of a hybrid CT-MRI system integrated in one machine was put forward, which is expected to provide high image quality in both soft tissue and bone. Moreover, because CT scanning is much faster than MRI, it is possible to greatly shorten the MRI scanning time by reconstructing the MRI image from highly under-sampled data together with the information in the CT image of the same tissues. This paper focuses on the multi-modality medical image reconstruction problem (MRI-CT) and proposes a self-adaptive mask-enhanced dual-dictionary learning (DDL) method for MRI image reconstruction from highly under-sampled measurement data and training data of well-registered MRI and CT images. Its main features are to establish a one-to-one mapping between the MRI and CT images of the same tissues and to obtain a good initial estimate of the MRI image from the CT data. Numerical simulations were performed to evaluate and validate the proposed algorithm.
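The core dual-dictionary idea can be illustrated with a toy sketch: two dictionaries with paired atoms (one for CT patches, one for MRI patches) share a single sparse code, so a CT patch can be sparse-coded over the CT dictionary and the same code synthesizes an MRI patch as an initial estimate. This is only a minimal NumPy illustration under synthetic assumptions, not the paper's self-adaptive mask-enhanced algorithm; the dictionaries, the linear CT-to-MRI relation `M`, and the greedy sparse coder are all toy stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(D, x, k):
    """Orthogonal Matching Pursuit: greedily sparse-code x over dictionary D
    (columns = atoms), selecting at most k atoms."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # most correlated atom
        if j not in support:
            support.append(j)
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol          # re-fit on current support
    coef[support] = sol
    return coef

# Toy paired dictionaries: columns are matched CT/MRI atoms sharing one sparse
# code. In the paper these would be learned jointly from registered training
# images; here they are synthetic (random CT atoms, a made-up linear map M).
n_dim, n_atoms = 16, 64
D_ct = rng.standard_normal((n_dim, n_atoms))
D_ct /= np.linalg.norm(D_ct, axis=0)                 # unit-norm CT atoms
M = np.eye(n_dim) + 0.1 * rng.standard_normal((n_dim, n_dim))  # toy CT->MRI relation
D_mri = M @ D_ct                                     # paired MRI atoms

# Given a CT patch, recover a sparse code over D_ct, then reuse the same code
# over D_mri to synthesize the MRI initial estimate.
ct_patch = rng.standard_normal(n_dim)
alpha = omp(D_ct, ct_patch, k=8)
mri_init = D_mri @ alpha
```

Because the two dictionaries are atom-wise paired, any code that reconstructs the CT patch well maps directly to a consistent MRI patch estimate; in the actual method this estimate would then seed the reconstruction from the under-sampled MRI measurements.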

