Official KiTS19 (MICCAI 2019) challenge repo: https://github.com/neheller/kits19
ABOUT: There are more than 400,000 new cases of kidney cancer each year, and surgery is its most common treatment. Due to the wide variety in kidney and kidney tumor morphology, there is currently great interest in how tumor morphology relates to surgical outcomes, as well as in developing advanced surgical planning techniques. Automatic semantic segmentation is a promising tool for these efforts, but morphological heterogeneity makes it a difficult problem. The goal of this challenge is to accelerate the development of reliable kidney and kidney tumor semantic segmentation methodologies. This repository contains my code and my attempt at addressing this problem.
I would like to express my gratitude to the following individuals and organizations for their valuable contributions, support, and inspiration throughout this project:
- Marco Domenico Santambrogio, Full Professor @Politecnico di Milano.
- Eleonora D'Arnese, PhD, Post-Doc Researcher @Politecnico di Milano.
- Isabella Poles, PhD Student in Information Technology and Computer Science @Politecnico di Milano.
- NECSTLab @Politecnico di Milano.
You can obtain the dataset from the official KiTS19 repository linked above; once there, follow its instructions to download the imaging data.
- Import the necessary libraries, including PyTorch and NiBabel for image manipulation.
- Define the `make_img_path(cid)` function to get the path to the imaging file for a given case.
- Define the `make_seg_path(cid)` function to get the path to the segmentation file for a given case.
- Create output folders for the preprocessed data: `train` and `valid`.
- Preprocess the training images and save them as NumPy files (`.npy`) in their respective folders.
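The preprocessing steps above might look like the following sketch. The `data/case_XXXXX/imaging.nii.gz` layout comes from the official kits19 repo; the `preprocess_case` helper and the slice-file naming scheme are hypothetical illustrations, not the project's exact code.

```python
import os
import numpy as np

# Assumed layout of the kits19 data folder: data/case_00000/imaging.nii.gz, etc.
DATA_DIR = "data"

def make_img_path(cid):
    """Path to the imaging volume for case `cid` (e.g. data/case_00001/imaging.nii.gz)."""
    return os.path.join(DATA_DIR, f"case_{cid:05d}", "imaging.nii.gz")

def make_seg_path(cid):
    """Path to the ground-truth segmentation for case `cid`."""
    return os.path.join(DATA_DIR, f"case_{cid:05d}", "segmentation.nii.gz")

def preprocess_case(cid, out_dir):
    """Load one case with NiBabel and save each axial slice as a .npy pair (hypothetical naming)."""
    import nibabel as nib  # imported lazily so the path helpers work without nibabel installed
    img = nib.load(make_img_path(cid)).get_fdata()
    seg = nib.load(make_seg_path(cid)).get_fdata()
    os.makedirs(out_dir, exist_ok=True)
    # KiTS19 volumes store the axial slices along the first axis
    for i in range(img.shape[0]):
        np.save(os.path.join(out_dir, f"case_{cid:05d}_slice_{i:04d}_img.npy"), img[i])
        np.save(os.path.join(out_dir, f"case_{cid:05d}_slice_{i:04d}_seg.npy"), seg[i])
```

Calling `preprocess_case(cid, "train")` (or `"valid"`) for each case id then fills the two output folders.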
- Define the `KitsDataset` class representing the custom dataset for this project.
- Provide the paths to the folders containing images and segmentations.
- Use the `__len__` method to get the dataset's length.
- Define the `__getitem__` method to obtain a sample from the dataset with the features the network needs.
- Load images and segmentations as PyTorch tensors using `torch.tensor`.
- Apply transformations and normalizations to the image. Finally, return a `sample` containing the image and mask.
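A minimal sketch of such a dataset class is shown below. It assumes the `*_img.npy` / `*_seg.npy` naming from the preprocessing step and a simple per-slice standardization; the project's actual transforms and constants may differ.

```python
import os
import numpy as np
import torch
from torch.utils.data import Dataset

class KitsDataset(Dataset):
    """Slice-level dataset over preprocessed .npy files (assumed *_img.npy / *_seg.npy naming)."""

    def __init__(self, img_dir, seg_dir, transform=None):
        self.img_paths = sorted(os.path.join(img_dir, f)
                                for f in os.listdir(img_dir) if f.endswith("_img.npy"))
        self.seg_paths = sorted(os.path.join(seg_dir, f)
                                for f in os.listdir(seg_dir) if f.endswith("_seg.npy"))
        self.transform = transform

    def __len__(self):
        return len(self.img_paths)

    def __getitem__(self, idx):
        # Load as tensors; add a channel dimension to the image for the network
        img = torch.tensor(np.load(self.img_paths[idx]), dtype=torch.float32).unsqueeze(0)
        seg = torch.tensor(np.load(self.seg_paths[idx]), dtype=torch.long)
        # Simple per-slice standardization (illustrative; real normalization may differ)
        img = (img - img.mean()) / (img.std() + 1e-8)
        if self.transform is not None:
            img = self.transform(img)
        return {"image": img, "mask": seg}
```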
- Define the `set_parameter_requires_grad` function to set the `requires_grad` flag for model parameters.
- Create the segmentation model based on the FCN-ResNet101 architecture using the `createModel` function.
- Define the `collate_fn` function to handle batch sample grouping and potential errors.
- Define the `train_model` function to train the model.
- Iterate through each training and validation epoch, calculating losses and evaluation metrics.
- Save the model's weights based on the lowest validation loss.
- At the end of training, load the model with the best weights.
- Create the training and validation datasets using the `KitsDataset` class.
- Define data loaders for training and validation using the created datasets.
- Create the segmentation model using the `createModel` function.
- Define the optimizer and loss function for model training.
- Train the model using the `train_model` function.
- Finally, perform image segmentation using the trained model.
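The training loop described above could be sketched like this: a minimal `train_model` under the assumption that batches are dicts with `"image"` and `"mask"` keys and that validation loss selects the best checkpoint. The exact metrics and scheduling in the project may differ.

```python
import copy
import torch
import torch.nn as nn

def train_model(model, loaders, optimizer, criterion, num_epochs=10, device="cpu"):
    """Train/validate loop that keeps the weights with the lowest validation loss."""
    model.to(device)
    best_loss = float("inf")
    best_weights = copy.deepcopy(model.state_dict())
    for epoch in range(num_epochs):
        for phase in ("train", "valid"):
            model.train() if phase == "train" else model.eval()
            running = 0.0
            for batch in loaders[phase]:
                imgs = batch["image"].to(device)
                masks = batch["mask"].to(device)
                optimizer.zero_grad()
                with torch.set_grad_enabled(phase == "train"):
                    out = model(imgs)
                    # torchvision segmentation models return a dict with key "out"
                    logits = out["out"] if isinstance(out, dict) else out
                    loss = criterion(logits, masks)
                    if phase == "train":
                        loss.backward()
                        optimizer.step()
                running += loss.item() * imgs.size(0)
            epoch_loss = running / len(loaders[phase].dataset)
            if phase == "valid" and epoch_loss < best_loss:
                best_loss = epoch_loss
                best_weights = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_weights)  # reload the best checkpoint at the end
    return model
```

Wiring it together then amounts to building the two `KitsDataset` instances over the `train` and `valid` folders, wrapping them in `DataLoader`s with `collate_fn`, and calling `train_model(createModel(), loaders, optimizer, nn.CrossEntropyLoss())`.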
Please note that this project is part of my academic coursework and is not intended to be a perfect or production-ready solution. This README provides an overview of the project's structure and the processes involved. Feel free to reach out if you have any questions or need further assistance.
