Model Card for VertebralBodiesCT-Neighbors

VertebralBodiesCT-Neighbors is a 3D semantic segmentation model designed to identify the center vertebra and its two neighbors (above and below) within cropped CT volumes that include a small amount of anatomical context. It consumes two channels: the CT crop and a binary "center vertebra marker" channel indicating the location of the index vertebra. The output is a mutually exclusive multi-class segmentation: the center vertebral body, the vertebral body above, the vertebral body below, and any other vertebral bodies.

This card documents how the training data is generated and the intended usage. The training labels and images are not published; instead, we provide the exact notebook used to create the dataset so results are reproducible.

Model Details

  • Task: 3D multi-class semantic segmentation of vertebral context crops
  • Framework: nnU-Net v2 (standard, unmodified) with ignore label support and region-based training
  • Input channels (2):
    • channel 0: CT crop (*_0000.nii.gz)
    • channel 1: center vertebra marker (small sphere at centroid, *_0001.nii.gz)
  • Output classes:
    • 0: background
    • 1: other vertebrae (any vertebra not center/above/below)
    • 2: center vertebra
    • 3: above vertebra
    • 4: below vertebra
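
As an illustration of the second input channel, here is a minimal sketch of how a center-vertebra marker could be generated from a vertebral body label map. This is not the notebook's exact code; the sphere radius and the center_label id are assumptions.

```python
import numpy as np
import nibabel as nib

def make_center_marker(label_img: nib.Nifti1Image, center_label: int,
                       radius_mm: float = 3.0) -> nib.Nifti1Image:
    """Sketch of the *_0001 channel: a small sphere at the centroid of the index vertebra."""
    labels = np.asarray(label_img.dataobj)
    spacing = np.asarray(label_img.header.get_zooms()[:3], dtype=np.float32)  # voxel size in mm
    centroid = np.argwhere(labels == center_label).mean(axis=0)

    # Physical distance (mm) of every voxel to the centroid, thresholded to a sphere.
    grid = np.indices(labels.shape).astype(np.float32)
    dist_mm = np.sqrt(sum(((grid[i] - centroid[i]) * spacing[i]) ** 2 for i in range(3)))
    marker = (dist_mm <= radius_mm).astype(np.uint8)

    return nib.Nifti1Image(marker, label_img.affine)
```

The marker only points at the index vertebra; the segmentation itself is learned from the CT channel.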

Data and Generation Process

Training data is derived from thoracic/lumbar vertebral body labels sourced from the refined public datasets released in fhofmann/VertebralBodiesCT-Labels. Crops are created around an index (center) vertebra and include its adjacent neighbors.

Key aspects implemented in the notebook [suppl/DatasetGeneration.ipynb]:

  • Fixed symmetric padding (in mm) around the index vertebra bounding box, data-driven from neighbor-distance analysis to include adjacent vertebrae in 95% of cases:
    • X (left-right): 8 mm
    • Y (anterior-posterior): 20 mm
    • Z (cranial-caudal): 70 mm
  • Dynamic expansion: if a projected neighbor mask touches a crop boundary within 1 voxel and the image permits, expand that axis by +10 mm per iteration (up to 6 iterations). All boxes are clamped to image bounds.
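
A minimal sketch of this cropping rule is shown below. Axis order (X, Y, Z), variable names, and the exact boundary check are assumptions; the notebook remains the authoritative implementation.

```python
import numpy as np

PAD_MM = np.array([8.0, 20.0, 70.0])   # fixed symmetric padding: X, Y, Z (mm)
EXPAND_MM = 10.0                       # per-iteration growth if a neighbor is cut off
MAX_ITER = 6

def _face(mask, ax, lower):
    """Slab of `mask` within 1 voxel of the lower/upper crop face along axis `ax`."""
    sl = [slice(None)] * 3
    sl[ax] = slice(0, 2) if lower else slice(-2, None)
    return mask[tuple(sl)]

def crop_bounds(index_mask, neighbor_mask, spacing_mm, shape):
    """Voxel bounds (lo, hi) of the crop around the index vertebra."""
    spacing = np.asarray(spacing_mm, dtype=float)
    shape = np.asarray(shape)
    pad = np.round(PAD_MM / spacing).astype(int)
    step = np.round(EXPAND_MM / spacing).astype(int)

    idx = np.argwhere(index_mask)
    lo = np.maximum(idx.min(axis=0) - pad, 0)            # clamp to image bounds
    hi = np.minimum(idx.max(axis=0) + pad + 1, shape)

    for _ in range(MAX_ITER):
        crop = neighbor_mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        grew = False
        for ax in range(3):
            # Expand an axis only if the neighbor touches that face and the image permits.
            if _face(crop, ax, lower=True).any() and lo[ax] > 0:
                lo[ax] = max(lo[ax] - step[ax], 0)
                grew = True
            if _face(crop, ax, lower=False).any() and hi[ax] < shape[ax]:
                hi[ax] = min(hi[ax] + step[ax], shape[ax])
                grew = True
        if not grew:
            break
    return lo, hi
```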

See dataset.json for the dataset spec, including channel names, label definitions, and metadata:

  • channel_names: {"0": "CT", "1": "vertebra_center_marker"}
  • labels: {background: 0, vertebra_any: [1,2,3,4], vertebra_center: 2, vertebra_above: 3, vertebra_below: 4, ignore: 5}
  • regions_class_order: [1,2,3,4]
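
For reference, a dataset.json with these entries can be written as follows; numTraining and file_ending are shown with placeholder values and must match your locally generated dataset.

```python
import json

dataset_json = {
    "channel_names": {"0": "CT", "1": "vertebra_center_marker"},
    "labels": {
        "background": 0,
        "vertebra_any": [1, 2, 3, 4],   # region covering all vertebral bodies
        "vertebra_center": 2,
        "vertebra_above": 3,
        "vertebra_below": 4,
        "ignore": 5,
    },
    "regions_class_order": [1, 2, 3, 4],
    "numTraining": 0,                    # placeholder: set to your number of training cases
    "file_ending": ".nii.gz",
}

with open("dataset.json", "w") as f:
    json.dump(dataset_json, f, indent=2)
```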

Important: The images and labels produced by the notebook are NOT published here. The notebook is provided to enable you to reproduce the dataset locally from the licensed data sources.

Intended Use

  • Predict the identity of the center vertebra and its two direct neighbors within context crops. This can support downstream pipelines that rely on vertebral level identification (e.g., body composition measurements standardized to L3).
  • The model is designed for thoracic and lumbar vertebral bodies; cervical spine is out of scope. Vertebral arches and spinous processes are excluded by design in the underlying labels.

How to Reproduce the Training and Testing Dataset

  1. Open suppl/DatasetGeneration.ipynb.
  2. Set the input/output directories at the top (e.g., dir_root, imagesTr, labelsTr locations). The defaults point to local storage; adjust to your environment.
  3. Run all cells. The notebook will:
    • enumerate CT volumes,
    • generate crops per vertebra with the specified padding/expansion,
    • write two-channel images and multi-class labels into VertebralBodiesCT-Neighbors-Labels/images{Tr,Ts} and VertebralBodiesCT-Neighbors-Labels/labels{Tr,Ts}.
  4. Ensure dataset.json accompanies the dataset for nnU-Net v2.
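
A quick sanity check of the generated layout and naming convention might look like this (a sketch; the folder names follow step 3 above):

```python
from pathlib import Path

images_tr = Path("VertebralBodiesCT-Neighbors-Labels/imagesTr")
labels_tr = Path("VertebralBodiesCT-Neighbors-Labels/labelsTr")

for ct in sorted(images_tr.glob("*_0000.nii.gz")):
    case = ct.name[: -len("_0000.nii.gz")]
    marker = images_tr / f"{case}_0001.nii.gz"   # center-vertebra marker channel
    label = labels_tr / f"{case}.nii.gz"         # multi-class label map (classes 0-5)
    assert marker.exists(), f"missing marker channel for {case}"
    assert label.exists(), f"missing label for {case}"
```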

Note on ignore label: class 5 is used to mask regions above T1 or otherwise out of scope, following nnU-Net’s ignore label mechanism. For details, see the nnU-Net documentation on ignore labels.
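
Conceptually, voxels labelled with the ignore class contribute neither to the loss nor to the evaluation metrics. A toy illustration of the idea (this is not nnU-Net's actual loss, which combines Dice and cross-entropy and handles the masking internally):

```python
import torch.nn.functional as F

def toy_masked_loss(logits, target, ignore_label: int = 5):
    """Voxels labelled `ignore_label` contribute neither to the loss nor to its gradients."""
    return F.cross_entropy(logits, target, ignore_index=ignore_label)
```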

Training and Inference Notes (nnU-Net v2)

  • Architecture/Plans: standard nnU-Net v2 configuration (3d_fullres) using region-based training with ResEnc presets.
  • Channels: the data uses two input channels. Ensure your nnU-Net dataset follows the *_0000.nii.gz and *_0001.nii.gz convention.
  • Splits: the notebook provides training/testing separation by mirroring the source dataset layout; adapt to your cross-validation strategy.
  • Inference requires the same two-channel crops as input: create the center marker and the crop around each index vertebra using the same rules before feeding data to the trained model (see the sketch below).
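
A hedged end-to-end sketch of the inference preparation, reusing the make_center_marker and crop_bounds helpers sketched above. It assumes you already have a per-vertebra mask from an upstream step (so that center_label and neighbor_labels can be chosen); the dataset id and CLI flags should be verified against your local nnU-Net v2 installation.

```python
import numpy as np
import nibabel as nib
from pathlib import Path

def prepare_case(ct_path, vertebra_mask_path, center_label, neighbor_labels, out_dir, case_id):
    """Write the two-channel crop (*_0000 CT, *_0001 marker) for one index vertebra."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    ct = nib.load(str(ct_path))
    seg = nib.load(str(vertebra_mask_path))        # assumed to share the CT grid
    seg_data = np.asarray(seg.dataobj)

    marker = make_center_marker(seg, center_label)                    # sketched above
    lo, hi = crop_bounds(seg_data == center_label,
                         np.isin(seg_data, neighbor_labels),          # adjacent vertebrae
                         seg.header.get_zooms()[:3], seg.shape)       # sketched above

    for img, suffix in ((ct, "_0000"), (marker, "_0001")):
        data = np.asarray(img.dataobj)[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        affine = img.affine.copy()                                    # shift origin to the crop corner
        affine[:3, 3] = (img.affine @ np.array([lo[0], lo[1], lo[2], 1.0]))[:3]
        nib.save(nib.Nifti1Image(data, affine), str(out_dir / f"{case_id}{suffix}.nii.gz"))

# The folder of crops can then be passed to the trained model, e.g. (verify flags locally):
#   nnUNetv2_predict -i <crop_dir> -o <pred_dir> -d <DATASET_ID> -c 3d_fullres
```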

Limitations and Risks

  • Data source bias may persist despite heterogeneity in the underlying public datasets.
  • Performance may degrade in severe deformities, transitional vertebrae, implants, or rare anatomies.
  • The model focuses on vertebral bodies only (arches and spinous processes excluded) and does not cover the cervical spine.
  • Research use only. Not intended for clinical decision making.

Source Datasets and Acknowledgements

The training data for VertebralBodiesCT-Neighbors is derived from vertebral body labels produced from:

  • TotalSegmentator (Wasserthal et al., 2023)
  • VerSe (Sekuboyina et al., 2021)

Curated and refined labels are described in fhofmann/VertebralBodiesCT-Labels.

Citation

If you use this work, please cite the following:

Wasserthal, J., Breit, H.-C., Meyer, M.T., Pradella, M., Hinck, D., Sauter, A.W., Heye, T., Boll, D., Cyriac, J., Yang, S., Bach, M., Segeroth, M. (2023). TotalSegmentator: Robust Segmentation of 104 Anatomic Structures in CT Images. Radiology: Artificial Intelligence. https://doi.org/10.1148/ryai.230024
Sekuboyina, A., Husseini, M.E., Bayat, A., Löffler, M., Liebl, H., Li, H., Tetteh, G., Kukačka, J., Payer, C., Štern, D., Urschler, M., Chen, M., Cheng, D., Lessmann, N., Hu, Y., Wang, T., Yang, D., Xu, D., Ambellan, F., Amiranashvili, T., Ehlke, M., Lamecker, H., Lehnert, S., Lirio, M., Pérez de Olaguer, N., Ramm, H., Sahu, M., Tack, A., Zachow, S., Jiang, T., Ma, X., Angerman, C., Wang, X., Brown, K., Kirszenberg, A., Puybareau, É., Chen, D., Bai, Y., Rapazzo, B.H., Yeah, T., Zhang, A., Xu, S., Hou, F., He, Z., Zeng, C., Xiangshang, Z., Liming, X., Netherton, T.J., Mumme, R.P., Court, L.E., Huang, Z., He, C., Wang, L.-W., Ling, S.H., Huỳnh, L.D., Boutry, N., Jakubicek, R., Chmelik, J., Mulay, S., Sivaprakasam, M., Paetzold, J.C., Shit, S., Ezhov, I., Wiestler, B., Glocker, B., Valentinitsch, A., Rempfler, M., Menze, B.H., Kirschke, J.S. (2021). VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images. Medical Image Analysis. https://doi.org/10.1016/j.media.2021.102166
Haubold, J., Baldini, G., Parmar, V., Schaarschmidt, B.M., Koitka, S., Kroll, L., van Landeghem, N., Umutlu, L., Forsting, M., Nensa, F., Hosch, R. (2023). BOA: A CT-Based Body and Organ Analysis for Radiologists at the Point of Care. Investigative Radiology. https://doi.org/10.1097/RLI.0000000000001040

Since the model is based on the nnU-Net framework and the residual encoder UNet presets, please cite the following papers:

Isensee, F., Jaeger, P.F., Kohl, S. A., Petersen, J., Maier-Hein, K.H. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nature methods, 18(2), 203-211. https://doi.org/10.1038/s41592-020-01008-z
Isensee, F., Wald, T., Ulrich, C., Baumgartner, M., Roy, S., Maier-Hein, K., Jaeger, P. (2024). nnU-Net Revisited: A Call for Rigorous Validation in 3D Medical Image Segmentation. arXiv preprint arXiv:2404.09556.

Please cite our dataset:

Hofmann F.O. et al. Thoracic & lumbar vertebral body labels corresponding to 1460 public CT scans. https://huggingface.co/datasets/fhofmann/VertebralBodiesCT-Labels/

License

  • CC BY-SA 4.0 for this repository’s documentation and configuration files.
  • Respect licenses and terms of the underlying data sources when reproducing the dataset locally.

Contact

Questions, feedback, or suggestions are welcome. Please open an issue or discussion in the repository, or reach out via the model’s discussion page.
