---
title: pyment-public
model_file: artifacts/sfcn-multi.onnx
license: cc-by-nc-4.0
pipeline_tag: image-classification
tags: []
---
This repository contains the pretrained multi-task model from PAPER LINK. There are two approaches to running the model locally: via Docker, or by installing the Python package. Running the model with Docker requires less setup, but is less flexible.
- (Recommended) Instructions for generating predictions with the pretrained model using Docker
- Instructions for generating predictions with the pretrained model in a local Python environment
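Independent of the approach you choose, the bundled ONNX export (artifacts/sfcn-multi.onnx, listed in the metadata above) can be inspected directly. A minimal sketch using onnxruntime, which is not a dependency of this repository and must be installed separately:

```python
import onnxruntime as ort

# Load the bundled multi-task model and list the tensors it expects and produces
session = ort.InferenceSession("artifacts/sfcn-multi.onnx")
for tensor in session.get_inputs():
    print("input: ", tensor.name, tensor.shape, tensor.type)
for tensor in session.get_outputs():
    print("output:", tensor.name, tensor.shape, tensor.type)
```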
All the approaches described below rely on having data available. The IXI dataset can be downloaded with a prepackaged script that requires only a minimal Python environment:

```
python tutorials/download_ixi.py
```
The instructions below are configured for the data and paths generated by this script, but everything can easily be adapted if you want to run on your own data instead.
There are several ways of running the model. Using Docker is the most straightforward, and the recommended approach.
(Recommended) Generating predictions with Docker
Preprocessing and predicting in a single step with Docker consists of running a container that first applies FastSurfer preprocessing to all raw input images and then runs the model on the resulting preprocessed images. The container produces both a folder with preprocessed images and a file predictions.csv containing all predictions.
Running the container relies on mounting three volumes:
- Inputs: A folder containing input data. All NIfTI files detected in this folder or one of its subfolders will be processed
- Outputs: A folder where the preprocessed images and predictions will be written. This must be created prior to running the container
- Licenses: A folder containing the FreeSurfer license. The file must be named freesurfer.txt
```
mkdir -p ~/data/ixi/outputs

docker pull estenhl/pyment-preprocess-and-predict:latest
docker run --rm -it \
    --user $(id -u):$(id -g) \
    --volume $HOME/data/ixi/images:/input \
    --volume $HOME/data/ixi/outputs:/output \
    --volume $HOME/licenses:/licenses \
    --gpus all \
    estenhl/pyment-preprocess-and-predict:latest
```
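Once the container finishes, the resulting predictions can be given a quick look before any downstream analysis. A minimal sketch using pandas (an assumed, separately installed dependency); the exact columns of predictions.csv are not documented here, so we only peek at the first rows:

```python
from pathlib import Path

import pandas as pd

# predictions.csv is written to the mounted output volume
predictions = pd.read_csv(Path.home() / "data/ixi/outputs/predictions.csv")
print(f"{len(predictions)} predictions")
print(predictions.head())
```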
Running preprocessing and predictions in two steps using Docker
Preprocessing and predicting in two steps via Docker requires running the two prebuilt Docker containers for the two steps independently.
Running the container for preprocessing requires mounting three volumes:
- Inputs: A folder containing input data. All NIfTI files detected in this folder or one of its subfolders will be processed
- Outputs: A folder where the preprocessed images will be written. This must be created prior to running the container
- Licenses: A folder containing the FreeSurfer license. The file must be named freesurfer.txt
```
mkdir -p ~/data/ixi/outputs

docker pull estenhl/pyment-preprocess:latest
docker run --rm \
    --user $(id -u):$(id -g) \
    --volume $HOME/data/ixi/images:/input \
    --volume $HOME/data/ixi/outputs:/output \
    --volume <path_to_licenses>:/licenses \
    --gpus all \
    estenhl/pyment-preprocess:latest
```
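Before moving on to the prediction container, it can be worth verifying that preprocessing produced an output for every input image. A sketch that counts the conformed crops, assuming the crop.mgz filename referenced by the sanity-check step at the end of this README:

```python
from pathlib import Path

# FastSurfer writes its outputs to one subfolder per input image
crops = sorted(Path.home().glob("data/ixi/outputs/fastsurfer/**/crop.mgz"))
print(f"{len(crops)} preprocessed images found")
```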
Running the container for predictions requires two volumes:
- FastSurfer: The folder containing the FastSurfer-processed images
- Outputs: The folder where the predictions are written
```
docker pull estenhl/pyment-predict:latest
docker run --rm -it \
    --user $(id -u):$(id -g) \
    --volume $HOME/data/ixi/outputs/fastsurfer:/fastsurfer \
    --volume $HOME/data/ixi/outputs:/output \
    --gpus all \
    estenhl/pyment-predict:latest
```
Installing the package locally allows for more flexibility in terms of modifying the model. Depending on the OS, this generally requires three steps:
- Configure the system (only necessary on Linux)
- Prepare a Python environment
- Install the pyment package
If we want to run either inference or finetuning locally, this typically also requires installing FastSurfer for preprocessing. Once the setup is finished, both can be run via prepackaged scripts.
The system configuration is only necessary on Linux machines:
Configure Ubuntu
First, we need to download and install CUDA 11.2:

```
wget https://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run
sudo sh cuda_11.2.2_460.32.03_linux.run --silent --toolkit --installpath=/usr/local/cuda-11.2
```
Next, cuDNN must be installed. Download a suitable deb-file from https://developer.nvidia.com/rdp/cudnn-archive, then install it:
```
sudo dpkg -i ~/Downloads/cudnn-local-repo-ubuntu2204-8.9.7.29_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-ubuntu2204-8.9.7.29/cudnn-local-*-keyring.gpg /usr/share/keyrings/
sudo apt update
sudo apt install libcudnn8 libcudnn8-dev
sudo cp /usr/include/cudnn*.h /usr/local/cuda-11.2/include/
sudo cp -P /usr/lib/x86_64-linux-gnu/libcudnn*.so* /usr/local/cuda-11.2/lib64/
sudo ldconfig
```
Finally, we must configure the system paths in ~/.bashrc:
```
echo 'export CUDA_HOME=/usr/local/cuda-11.2' >> ~/.bashrc
echo 'export PATH=$CUDA_HOME/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
```
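After reloading the shell with source ~/.bashrc, you can check that the CUDA/cuDNN stack is actually picked up. A minimal check, assuming TensorFlow (which the CUDA 11.2 + cuDNN 8 combination targets) is available in your Python environment:

```python
import tensorflow as tf

# An empty list means TensorFlow cannot see the GPU, i.e. the CUDA/cuDNN
# libraries configured above are not on the library path
print(tf.config.list_physical_devices("GPU"))
```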
To ensure the virtual environment is created with the correct Python version, we will rely on pyenv:
Install pyenv on Ubuntu
On Ubuntu, install pyenv via curl:
```
curl https://pyenv.run | bash
```
After installation, add pyenv to the ~/.bashrc file to enable terminal shortcuts:
```
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
echo -e 'if command -v pyenv 1>/dev/null 2>&1; then\n  eval "$(pyenv init -)"\nfi' >> ~/.bashrc
source ~/.bashrc
```
Install pyenv on macOS
On macOS, install `pyenv` via `brew`:

```
brew update
brew install pyenv
```

After installation, add pyenv to the ~/.zshrc file to enable terminal shortcuts:
```
echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.zshrc
echo '[[ -d $PYENV_ROOT/bin ]] && export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.zshrc
echo 'eval "$(pyenv init - zsh)"' >> ~/.zshrc
```
After installing pyenv, we can install the Python version expected by this library:

```
pyenv install 3.10.4
```
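A quick check that the interpreter was built and is picked up (run after activating it, e.g. with the pyenv shell command below):

```python
import sys

# Should report 3.10.4 when the pyenv-installed interpreter is active
print(sys.version)
```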
From here, the approach diverges depending on whether we want to install the package using pip (simplest) or clone and build it locally (most flexible):
(Simplest) Install via pip
First, we need to activate the correct Python version in the shell:

```
pyenv shell 3.10.4
```
Next, we can create a virtual environment with this same Python version using Python's venv module:

```
mkdir -p $HOME/venv
python -m venv $HOME/venv/pyment
```
Then, we can activate the environment:

```
source $HOME/venv/pyment/bin/activate
```
Finally, we can install the package using pip:

```
pip install git+https://github.com/estenhl/pyment-public
```
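A quick smoke test that the installation succeeded, assuming the top-level module is named pyment (as the CLI paths later in this README suggest):

```python
# Should print a path inside $HOME/venv/pyment if the install worked
import pyment

print(pyment.__file__)
```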
Build locally with poetry
First, we must install poetry:

```
curl -sSL https://install.python-poetry.org | python3 -
```
Follow the instructions printed at the end of the installation to ensure poetry is on your path. Next, we must clone the repository (in a suitable location):
```
git clone git@github.com:estenhl/pyment-public.git
cd pyment-public
```
Then we can configure the local Python version and install the package:

```
pyenv local 3.10.4
poetry env use 3.10.4
poetry install
```
This will result in a virtual environment managed by poetry, which can be activated with:

```
eval $(poetry env activate)
```
Preprocessing and predicting manually relies on the scripts provided in this repository, and happens in two steps.
Prior to both inference and finetuning, images must be preprocessed using FastSurfer. This can be achieved either via the prebuilt preprocessing container described in the Docker instructions above, or via a local installation.
Preprocessing images with FastSurfer locally
First, FastSurfer must be downloaded. If any of the subsequent steps fail, a comprehensive installation guide can be found in the FastSurfer GitHub repository. The following steps download and install FastSurfer into the folder ~/repos/fastsurfer. To begin, some system packages must be installed:

```
sudo apt-get update && sudo apt-get install -y --no-install-recommends wget git ca-certificates file
```
Next, we can clone FastSurfer and check out the correct version:

```
mkdir -p ~/repos
export FASTSURFER_HOME=~/repos/fastsurfer
git clone --branch stable https://github.com/Deep-MI/FastSurfer.git $FASTSURFER_HOME
(cd $FASTSURFER_HOME && git checkout v2.0.1)
```
Then we can create a Python environment for FastSurfer and install its dependencies. Note that the packages are installed using pip from the newly created virtual environment, not the system default:

```
mkdir -p $HOME/venv
export FASTSURFER_VENV=$HOME/venv/fastsurfer
python -m venv $FASTSURFER_VENV
# The SimpleITK version pinned in the requirements-file has been yanked, so we
# manually install a valid version prior to installing the remaining requirements
$FASTSURFER_VENV/bin/pip install simpleitk==2.1.1.2
# SimpleITK then has to be filtered out of requirements.txt before installing the rest
grep -v "simpleitk==2.1.1" $FASTSURFER_HOME/requirements.txt | $FASTSURFER_VENV/bin/pip install -r /dev/stdin
```
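To confirm that the manually pinned SimpleITK resolved correctly inside the FastSurfer environment, a small check run with the venv's own interpreter:

```python
# Run with $FASTSURFER_VENV/bin/python, not the system Python
import SimpleITK as sitk

# Should print 2.1.1.2, matching the version pinned above
print(sitk.Version_VersionString())
```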
Finally, we can run the preprocessing script, pointing it towards the Python interpreter from the virtual environment. Note that a valid FreeSurfer license must also be passed to this script, and that the $FASTSURFER_HOME variable must be set:

```
sh scripts/preprocess.sh \
    --license <path-to-license> \
    --python $HOME/venv/fastsurfer/bin/python \
    $HOME/data/ixi/images \
    $HOME/data/ixi/preprocessed
```
After preprocessing the images, we can run the scripts prepackaged in this repository to generate predictions. First, we must activate the virtual environment:
Activate the environment with venv (if you installed the package using pip)
```
source $HOME/venv/pyment/bin/activate
```
Activate the environment with poetry (if you installed using poetry)
```
eval $(poetry env activate)
```
Then, predictions can be generated either using the built-in CLI or native Python:
Generate predictions using CLI
```
pyment-predict $HOME/data/ixi/preprocessed -d $HOME/data/ixi/predictions.csv
```
Generate predictions using Python
```
python pyment/cli/predict_from_fastsurfer_folder.py $HOME/data/ixi/preprocessed -d $HOME/data/ixi/predictions.csv
```
Evaluate the IXI predictions with:

```
python scripts/evaluate_ixi_predictions.py
```
If everything is set up correctly, this should yield an MAE of 3.12. Note that the paths to both the labels and predictions can be given as keyword arguments to the script if they don't reside in the standard locations.
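If your labels or predictions do not reside in the standard locations, the same metric can also be computed by hand. A sketch assuming the two files share an id column and use age and prediction columns; all three column names are assumptions, not the script's documented schema:

```python
from pathlib import Path

import pandas as pd

labels = pd.read_csv(Path.home() / "data/ixi/labels.csv")
predictions = pd.read_csv(Path.home() / "data/ixi/predictions.csv")

# Join on the (assumed) shared id column and compute the mean absolute error
merged = labels.merge(predictions, on="id")
mae = (merged["age"] - merged["prediction"]).abs().mean()
print(f"MAE: {mae:.2f}")
```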
Prior to finetuning on the IXI dataset, we must generate appropriate labels. This can be done with a prepackaged script, after activating the appropriate virtual environment (see Installing locally):

```
python scripts/create_ixi_labels.py \
    -s $HOME/data/ixi/IXI.xls \
    -d $HOME/data/ixi/labels.csv \
    -i $HOME/data/ixi/images
```
Before further finetuning, the images should also be preprocessed using the same pipeline as was used during pretraining.
The prebuilt Docker container for finetuning relies on a very experimental GPU-based TensorFlow parser for MGH/MGZ images from a different repository. Given its developmental status, it's worth sanity-checking this loader on the exact images you intend to use for finetuning before progressing further. This can be done by first generating the FastSurfer crops that will be used for finetuning:
```
python scripts/create_fastsurfer_conformed_crops.py \
    $HOME/data/ixi/outputs/fastsurfer/ \
    -num_threads <Number of threads>
```
And then by running the built-in CLI for sanity checking from the other repository (this requires that library to be installed, either implicitly via installing this repository locally, or by cloning and building that repository directly):

```
verify-mgh-loader $HOME/data/ixi/outputs/fastsurfer -r crop.mgz
```