Scientific-Computing-Lab/SDT

Dynamic Sparse-View Tomography Physics-Inspired Dataset and Radiance-Field Baseline

Overview

Reconstructing the 3D evolution of physical systems from only a handful of X-ray projections is a fundamental yet unsolved challenge in experimental science. Traditional tomography fails under sparse-view and dynamic conditions, while physics-driven reconstructions are slow and biased.

This repository provides:

  • Synthetic Dynamic Tomography (SDT) dataset – a physics-inspired synthetic dataset that lifts inexpensive 2D fluid simulations into 4D (3D + time) ground-truth volumes with physically realistic radiographs.
  • Radiance-field baseline model – an adaptation of Neural Radiance Fields (NeRF) for time-varying 3D reconstruction from sparse projections, without requiring volume supervision.
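For X-ray data, radiance-field rendering reduces to a Beer-Lambert line integral of the learned density along each camera ray. The sketch below illustrates that idea only; it is not the repository's actual renderer (which lives in graf-main), and `density_fn` stands in for the learned field:

```python
import numpy as np

def render_xray_ray(density_fn, origin, direction, near, far, n_samples=64):
    """Render one X-ray pixel by attenuating along a ray (Beer-Lambert)."""
    # Uniform sample depths along the ray between the near and far planes.
    t = np.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction   # (n_samples, 3) sample points
    sigma = density_fn(pts)                 # density queried at each point
    # Discretize the line integral: sum sigma_i * dt_i, then attenuate.
    attenuation = np.sum(sigma[:-1] * np.diff(t))
    return np.exp(-attenuation)             # transmitted intensity (I0 = 1)
```

Training then compares such rendered projections against the dataset's radiographs, which is why no direct volume supervision is needed.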

Dataset

  • 54 base dynamic sequences of 3D volumes, each derived from a unique combination of (g, 𝒜, a₀) and containing multiple time steps t.
  • 72 radiographs per 3D volume (one per evenly spaced angle around the object).
  • Two object families:
    1. Perfectly symmetric volumes (solids of revolution).
    2. Perturbed, symmetry-broken volumes (with embedded spheres or density noise).
    The latter provides a harder testbed for reconstruction algorithms, ensuring models don’t rely on trivial symmetry.
  • Rich metadata for each sample:
    • Initial simulation parameters (g, 𝒜, a₀)
    • Time index t
    • View angle θ for each projection
    • Noise level / perturbation type used
    • Link to the original 2D slice Sₜ (for reference only).
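Since the 72 radiographs per volume are evenly spaced over a full rotation, the angular step is 360°/72 = 5°. A small sketch of how per-projection view angles could be enumerated (the metadata field names here are illustrative assumptions, not the dataset's actual loader API):

```python
import numpy as np

N_VIEWS = 72  # radiographs per 3D volume, evenly spaced around the object

# View angle theta for projection index i: 0, 5, 10, ..., 355 degrees.
view_angles = np.arange(N_VIEWS) * (360.0 / N_VIEWS)

# Illustrative metadata record for one projection (values and keys are made up).
sample_meta = {"g": 9.8, "A": 0.1, "a0": 0.05, "t": 0, "theta": view_angles[1]}
```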

Repository Structure

├── dataset/ - Dataset files and loaders
├── figures/ - Figures and visualizations (for README/paper)
├── graf-main/ - External dependency or submodule (Generative Radiance Fields)
│   └── configs/ - Configuration files for training and experiments
├── models/ - Model architectures and implementations
├── renderings/ - Rendered outputs and experiment results
├── gan_training.py - GAN training script (entry point for training)
└── README.md - Project documentation

Train a model

From the graf-main folder, run the command below, replacing CONFIG.yaml with rt_g_amp_sym.yaml or rt_g_amp_sphere.yaml:

python train.py configs/CONFIG.yaml

Illustration of the latent space captured after training on the sphere-interference dataset: different objects, with the interference at different rotations, can be observed.

Reconstruction given an X-ray

After training a model, you can test its capacity to reconstruct 3D-aware CT projections given a single X-ray.

To run the reconstruction, execute:

python graf-main/render_xray_G.py --config_file graf-main/configs/rt_g_amp_sym.yaml \
                        --xray_img_path datasets/rt_dataset/reconstruct/dataset_g_amp_sym/01 \
                        --save_dir renderings/rt_sym_g_amp_01_res_128_100k \
                        --model models/rt_sym_g_amp_iter_100k/model_best.pt \
                        --save_every 25 \
                        --psnr_stop 50 \
                        --img_size 128

or

bash infer_rt_sym_g_amp.sh
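The --psnr_stop 50 flag suggests the optimization halts once the rendered projection reaches a target PSNR against the input X-ray. For reference, PSNR follows the standard definition below (this is an illustrative sketch, not the repository's code):

```python
import numpy as np

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer match."""
    mse = np.mean((np.asarray(pred) - np.asarray(target)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

With images normalized to [0, 1] (max_val=1.0), a PSNR of 50 dB corresponds to a mean squared error of 1e-5, i.e. a near-exact match.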

Acknowledgments

  • Our model is adapted from MedNeRF, which pioneered the use of Neural Radiance Fields for sparse-view medical imaging. We thank the authors for making their code publicly available.
