Dimond: Diffusion Model Optimization with Deep Learning

Academic Background

In brain science and clinical applications, diffusion magnetic resonance imaging (dMRI) is an essential tool for non-invasively mapping the microstructure and neural connectivity of brain tissue. However, accurately estimating the parameters of diffusion signal models is computationally expensive and susceptible to image noise. Existing supervised deep learning estimation methods have shown potential for improving efficiency and performance, but they typically require additional training data and suffer from generalization issues.

Paper Source

This research was conducted collaboratively by Zihan Li, Ziyu Li, Berkin Bilgic, Hong-Hsi Lee, Kui Ying, Susie Y. Huang, Hongen Liao, and Qiyuan Tian (corresponding author). The paper was published in the journal “Advanced Science” (DOI: 10.1002/advs.202307965).

Brief Overview of Research Workflow

This study proposed a new framework named “Dimond,” which uses physics-informed, self-supervised deep learning to optimize diffusion model fitting. The Dimond framework maps the input image data to model parameters through a neural network and optimizes the network by minimizing the discrepancy between the acquired input data and synthetic data generated by the diffusion model using the network-estimated parameters.

1. Mapping Process

Dimond employs a neural network (NN) to map the input diffusion data to the parameter maps of the diffusion model. Specifically, the input diffusion data is $\mathbf{i} = [i_1, \dots, i_n]^T$ (where $n$ is the number of image volumes), and the mapped parameter maps are $g(\mathbf{i}) = \mathbf{p} = [p_1, \dots, p_m]^T$ (where $m$ is the number of microstructure parameter maps).
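To make the mapping step concrete, the following is a minimal sketch of a voxel-wise network in PyTorch; the architecture (a small multilayer perceptron) and hidden size are illustrative assumptions, not the exact network used in the paper.

```python
# Minimal sketch of the mapping network g: i -> p (assumed PyTorch;
# architecture and hidden size are illustrative, not the paper's exact design).
import torch
import torch.nn as nn

class MappingNet(nn.Module):
    """Maps n diffusion signals per voxel to m diffusion-model parameters."""

    def __init__(self, n_volumes: int, n_params: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_volumes, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),   # p = g(i)
        )

    def forward(self, i: torch.Tensor) -> torch.Tensor:
        # i: (n_voxels, n_volumes) -> p: (n_voxels, n_params)
        return self.net(i)
```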

2. Modeling Process

The parameter maps are subsequently used to synthesize image volumes with the same diffusion-encoding directions and b-values as the input diffusion data through a forward model, resulting in $\hat{\mathbf{i}} = [\hat{i}_1, \dots, \hat{i}_n]^T$.
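As one example of such a forward model, the sketch below synthesizes signals from diffusion tensor imaging (DTI) parameters using the standard monoexponential relation $\hat{i}_j = S_0 \exp(-b_j \, \mathbf{g}_j^T \mathbf{D} \, \mathbf{g}_j)$; the parameterization (log $S_0$ plus six tensor elements) is an assumption for illustration.

```python
# Sketch of a DTI forward model: synthesize signals from predicted parameters
# for the same b-values and gradient directions as the acquisition.
# The parameterization [log_S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz] is illustrative.
import torch

def dti_forward(p: torch.Tensor, bvals: torch.Tensor, bvecs: torch.Tensor) -> torch.Tensor:
    """p: (n_voxels, 7); bvals: (n_volumes,); bvecs: (n_volumes, 3) unit directions.
    Returns synthetic signals of shape (n_voxels, n_volumes)."""
    s0 = torch.exp(p[:, :1])                                  # b=0 signal
    dxx, dyy, dzz, dxy, dxz, dyz = p[:, 1:7].unbind(dim=1)
    gx, gy, gz = bvecs[:, 0], bvecs[:, 1], bvecs[:, 2]
    # g^T D g for every voxel/direction pair -> shape (n_voxels, n_volumes)
    gDg = (dxx[:, None] * gx**2 + dyy[:, None] * gy**2 + dzz[:, None] * gz**2
           + 2 * dxy[:, None] * gx * gy
           + 2 * dxz[:, None] * gx * gz
           + 2 * dyz[:, None] * gy * gz)
    return s0 * torch.exp(-bvals[None, :] * gDg)              # i_hat = S0 exp(-b g^T D g)
```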

3. Optimization Process

Gradient descent is used to minimize the mean squared error between the acquired and synthetic image intensities ($\mathbf{i}$ and $\hat{\mathbf{i}}$) within a brain mask, thereby estimating the parameters of the diffusion model of interest. The loss function can incorporate prior knowledge about the diffusion model (such as the noise distribution, sparsity, or low-rankness) to further enhance performance.
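A minimal self-supervised training loop, building on the MappingNet and dti_forward sketches above, could look as follows; the optimizer, learning rate, and iteration count are illustrative assumptions rather than the paper's settings.

```python
# Sketch of the self-supervised optimization: minimize the MSE between
# acquired and synthesized signals within a brain mask (assumed PyTorch).
import torch

def fit_dimond(net, signals, bvals, bvecs, mask, n_iters=1000, lr=1e-3):
    """signals: (n_voxels, n_volumes); mask: (n_voxels,) boolean brain mask."""
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    x = signals[mask]                           # restrict the fit to masked voxels
    for _ in range(n_iters):
        opt.zero_grad()
        p = net(x)                              # mapping: i -> p
        x_hat = dti_forward(p, bvals, bvecs)    # modeling: p -> i_hat
        loss = torch.mean((x_hat - x) ** 2)     # data-consistency (MSE) loss
        loss.backward()                         # gradient descent on network weights
        opt.step()
    return net
```

Prior knowledge (e.g., a noise model or sparsity penalty) could be incorporated by adding extra terms to `loss` in this sketch.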

4. Experimental Methods and Data

a. Simulated Data

Synthetic diffusion signals were generated to evaluate the effectiveness of Dimond. Signals were simulated using one-, two-, or three-tensor models, with Rician noise added at different levels (signal-to-noise ratios of 10 and 20). The inputs comprised different numbers of b = 0 and b = 1000 s/mm² diffusion-weighted volumes.
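For reference, Rician noise at a target SNR can be simulated as the magnitude of a complex signal with Gaussian noise on both channels; the helper below is a generic sketch, not the authors' simulation code.

```python
# Sketch of adding Rician noise at a given SNR (defined here relative to the
# noiseless b=0 signal s0); names and defaults are illustrative.
import numpy as np

def add_rician_noise(signal, snr, s0=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    sigma = s0 / snr                                     # noise standard deviation
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real**2 + imag**2)                    # Rician-distributed magnitude
```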

b. Human Connectome Project (HCP) Diffusion Data

Preprocessed diffusion MRI data from 10 healthy HCP subjects were used, with a spatial resolution of 1.25 mm. The data include 18 b = 0 volumes and 90 diffusion-weighted volumes per shell at b-values of 1000, 2000, and 3000 s/mm². Subsampled datasets with different b-shells were generated to evaluate Dimond’s performance.

c. Massachusetts General Hospital (MGH) Connectome Diffusion Microstructure Dataset (CDMD)

The CDMD data were used to evaluate Dimond’s cross-dataset generalization ability. The data include 50 b = 0 volumes and 800 diffusion-weighted volumes acquired at multiple b-values.

5. Dimond Results on Simulated Data

Results on simulated data showed that Dimond produced more accurate DTI metrics than ordinary least-squares fitting (e.g., as implemented in FSL), especially at high noise levels. Using Monte Carlo (MC) dropout further improved Dimond’s performance.
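Monte Carlo dropout keeps dropout layers active at inference time and averages multiple stochastic forward passes; a generic sketch (assuming a PyTorch network that contains nn.Dropout layers) is shown below, with an illustrative number of passes.

```python
# Sketch of MC dropout inference: average several stochastic forward passes
# and use their spread as an uncertainty estimate.
import torch

@torch.no_grad()
def mc_dropout_predict(net, x, n_samples=50):
    net.eval()
    for m in net.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                               # keep only dropout stochastic
    samples = torch.stack([net(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)  # mean estimate, uncertainty
```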

6. Dimond Performance on HCP Data

Experiments demonstrated that the tensor components and DTI metrics Dimond generated from HCP data were cleaner and closer to the reference values than those from traditional methods. Dimond was particularly effective at denoising and generalized well across subjects and datasets. Additionally, fine-tuning a pre-trained network reduced training time and improved the consistency of the results.
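The fine-tuning idea amounts to initializing from a network already optimized on one dataset and running a short self-supervised fit on the new data; the sketch below reuses the hypothetical fit_dimond helper from above, with an illustrative iteration count and learning rate.

```python
# Sketch of transfer learning / fine-tuning on a new subject's data.
import copy

def finetune(pretrained_net, new_signals, bvals, bvecs, mask, n_iters=50, lr=1e-4):
    net = copy.deepcopy(pretrained_net)      # keep the pre-trained weights intact
    return fit_dimond(net, new_signals, bvals, bvecs, mask, n_iters=n_iters, lr=lr)
```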

7. Dimond Performance on Complex Models

Dimond also performed well when fitting more complex diffusion models such as DKI and NODDI, surpassing traditional methods and substantially reducing model-fitting time. For example, Dimond reduced the NODDI fitting time for the high-resolution HCP dataset from 12 hours (with traditional methods) to 24 minutes, and further to 32 seconds with transfer learning.

Research Significance and Value

The Dimond framework effectively addresses efficiency and accuracy issues in estimating diffusion model parameters by leveraging deep learning techniques, providing a more practical method for non-invasive mapping of brain tissue microstructure and connectivity. Its self-supervised and physics-informed characteristics substantially enhance practicality and acceptability in clinical and neuroscientific applications. As a versatile solver, Dimond simplifies the development, deployment, and distribution processes of diffusion models compared to existing methods, achieving higher computational efficiency and energy savings.

Research Highlights

  • Optimizing diffusion model fitting with deep learning significantly improves efficiency and accuracy.
  • Self-supervised learning removes the need for additional training data and mitigates generalization issues.
  • The optimization process can integrate various forms of prior knowledge, further enhancing estimation performance.
  • Experiments across datasets and subjects demonstrated good generalizability and support for transfer learning.
  • Dimond efficiently fits complex microstructure models (such as DKI and NODDI), greatly reducing fitting time.

Conclusion

Through an innovative deep learning optimization approach, the Dimond framework demonstrates a new way to estimate diffusion model parameters. The method proved superior not only on simulated and HCP data but also delivered high efficiency and accuracy for more complex diffusion models. Dimond provides a powerful tool for microstructural imaging and neural connectivity research, with broad scientific and application value.