Authors: Alessia Saccardo (s212246@dtu.dk) & Felipe Delestro (fima@dtu.dk)
This notebook demonstrates how to implement a complete deep learning segmentation pipeline using only the capabilities offered by the `qim3d` library. Specifically, it highlights the annotation tool and walks through the process of creating and training a UNet model.
%% Cell type:markdown id: tags:
### Imports
%% Cell type:code id: tags:
``` python
import qim3d
import numpy as np
import os
```
%% Cell type:markdown id: tags:
### Load data
The `qim3d` library contains a set of example volumes which can be easily loaded using `qim3d.examples.{volume_name}`
%% Cell type:code id: tags:
``` python
vol = qim3d.examples.bone_128x128x128
```
%% Cell type:markdown id: tags:
To get a quick overview of what the volume looks like, we can interact with it using the `slicer` function from `qim3d`.
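As a minimal sketch (assuming the slicer is exposed in the `viz` submodule as `qim3d.viz.slicer`), the call could look like this:

%% Cell type:code id: tags:

``` python
# Open an interactive slice-by-slice view of the volume
qim3d.viz.slicer(vol)
```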
%% Cell type:markdown id: tags:

The annotation tool is then used on selected slices of the volume: slice 34 is annotated for training and slice 109 for testing.
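As a hypothetical sketch (the annotation tool is assumed to be exposed as `qim3d.gui.annotation_tool`, and the exact interface may differ between `qim3d` versions), launching it on a slice could look like this:

%% Cell type:code id: tags:

``` python
# Hypothetical interface for the annotation tool; adjust to the qim3d
# version in use
annotation_tool = qim3d.gui.annotation_tool.Interface()

# Annotate the slice used for training (repeat for the test slice)
annotation_tool.launch(vol[34])
```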
%% Cell type:markdown id: tags:
### Getting masks from the annotation tool
The masks are stored inside the annotation tool. Here we extract them and save them to disk in the folder layout expected by the DL model.
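As a sketch of what this could look like, assuming the tool exposes the drawn mask through a hypothetical `get_result()` method and that the expected layout is `train`/`test` folders with `images` and `labels` subfolders (also an assumption):

%% Cell type:code id: tags:

``` python
# Hypothetical extraction and saving of the annotated mask; the method
# name and the folder layout are assumptions, not the confirmed qim3d API
mask = annotation_tool.get_result()  # mask drawn for slice 34

base_path = "dataset"
img_dir = os.path.join(base_path, "train", "images")
lbl_dir = os.path.join(base_path, "train", "labels")
os.makedirs(img_dir, exist_ok=True)
os.makedirs(lbl_dir, exist_ok=True)

qim3d.io.save(os.path.join(img_dir, "slice_34.tif"), vol[34])
qim3d.io.save(os.path.join(lbl_dir, "slice_34.tif"), mask)
```

%% Cell type:markdown id: tags: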
Then we need to decide which type of augmentation to apply to the data (see the sketch below).

`qim3d.ml.Augmentation` allows us to specify how the images should be resized to the appropriate shape and the level of transformation to apply to the train, test and validation sets respectively.

The resize mode must be chosen from [*crop*, *reshape*, *padding*] and the level of transformation from [*None*, *light*, *moderate*, *heavy*]. The user can also specify the mean and standard deviation values used to normalize pixel intensities.
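A sketch of how this configuration could look, where the keyword names (`resize`, `transform_train`, `transform_validation`, `transform_test`, `mean`, `std`) are assumptions about the constructor signature:

%% Cell type:code id: tags:

``` python
# Augmentation settings; keyword names are assumptions about the
# qim3d.ml.Augmentation signature, values follow the options listed above
augmentation = qim3d.ml.Augmentation(
    resize="crop",             # one of: crop, reshape, padding
    transform_train="light",   # one of: None, light, moderate, heavy
    transform_validation=None,
    transform_test=None,
    mean=0.5,                  # assumed normalization mean
    std=0.5,                   # assumed normalization std
)
```

%% Cell type:markdown id: tags: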
The hyperparameters are defined with `qim3d.ml.Hyperparameters`, and the model is trained by calling `qim3d.ml.train_model`, which can also plot the losses at the end of training if the corresponding option is enabled.
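A sketch of this step, assuming the UNet class lives at `qim3d.ml.models.UNet` and that data-preparation helpers such as `qim3d.ml.prepare_datasets` and `qim3d.ml.prepare_dataloaders` are available (class path, helper names and keyword arguments are all assumptions):

%% Cell type:code id: tags:

``` python
# Sketch of model creation and training; class path, helper functions and
# keyword names are assumptions about the qim3d API, not confirmed calls
model = qim3d.ml.models.UNet(size="medium")

# Hypothetical data preparation from the folder created above
train_set, val_set, test_set = qim3d.ml.prepare_datasets(
    path="dataset", val_fraction=0.3, model=model, augmentation=augmentation
)
train_loader, val_loader, test_loader = qim3d.ml.prepare_dataloaders(
    train_set, val_set, test_set, batch_size=1
)

hyperparameters = qim3d.ml.Hyperparameters(model, n_epochs=25, learning_rate=5e-3)

# plot=True is assumed to show the train/validation losses after training
qim3d.ml.train_model(model, hyperparameters, train_loader, val_loader, plot=True)
```

%% Cell type:markdown id: tags: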
To run inference, it is enough to call `qim3d.ml.inference`.
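As a sketch (the argument order of `qim3d.ml.inference` is an assumption):

%% Cell type:code id: tags:

``` python
# Run the trained model on the test set
# (the exact signature of qim3d.ml.inference is an assumption)
results = qim3d.ml.inference(test_set, model)
```

%% Cell type:markdown id: tags: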
The results can be visualized with `qim3d.viz.grid_pred`, which shows the predicted segmentation alongside the ground truth for comparison.
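For example (assuming `qim3d.viz.grid_pred` takes the test set and the predictions in that order):

%% Cell type:code id: tags:

``` python
# Show the predicted segmentations next to the ground-truth masks
qim3d.viz.grid_pred(test_set, results)
```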