Compare revisions: QIM/tools/qim3d
Commits on Source (11)
Showing with 828 additions and 362 deletions
......@@ -13,6 +13,7 @@ build/
.idea/
.cache/
.pytest_cache/
.ruff_cache/
*.swp
*.swo
*.pyc
......
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v3.2.0
hooks:
- id: detect-private-key
- id: check-added-large-files
- id: check-docstring-first
- id: debug-statements
- id: double-quote-string-fixer
- id: name-tests-test
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.4.7
hooks:
# Run the formatter and fix code styling
- id: ruff-format
# Run the linter and fix what is possible
- id: ruff
args: ['--fix']
\ No newline at end of file
docs/assets/screenshots/viz-line_profile.gif

4.74 MiB

......@@ -23,6 +23,7 @@ The `qim3d` library aims to provide easy ways to explore and get insights from v
- plot_cc
- colormaps
- fade_mask
- line_profile
::: qim3d.viz.colormaps
options:
......
%% Cell type:markdown id:ae2a75fe tags:
# Logging system for qim3d
Using proper logging instead of print statements is a recommended practice.
While print statements can be helpful for quick debugging, logging provides a more powerful and versatile approach. Logging allows for better control over output, with options to configure log levels, filter messages, and redirect output to different destinations.
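The behaviour demonstrated below can be sketched with Python's standard `logging` module. This is an illustrative reconstruction of how a package-level logger with a `level()` helper could be wired up, not qim3d's actual internals:

``` python
import logging

# Minimal sketch of a package-level logger (illustrative names only)
log = logging.getLogger('demo')
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(message)s'))
log.addHandler(handler)
log.setLevel(logging.INFO)

def level(name: str) -> None:
    """Set the minimum level that gets logged, e.g. level('debug')."""
    log.setLevel(getattr(logging, name.upper()))

log.debug('hidden while the level is INFO')
log.info('shown')
level('debug')
log.debug('now shown too')
```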
%% Cell type:code id:a31b2245 tags:
``` python
import qim3d
from qim3d.utils._logger import log
```
%% Cell type:code id:94022824 tags:
``` python
# Here we test by sending one message for each level
# Note that DEBUG does not appear
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
info level message
warning level message
error level message
critical level message
%% Cell type:code id:b0856333 tags:
``` python
# Change the level to debug
qim3d.utils._logger.level("debug")
# Now all the levels get logged
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
debug level message
info level message
warning level message
error level message
critical level message
%% Cell type:code id:eb542404 tags:
``` python
# Change the level to error
qim3d.utils._logger.level("error")
# And now only above ERROR is shown
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
error level message
critical level message
%% Cell type:code id:75af4473 tags:
``` python
# Change the level back to info
qim3d.utils._logger.level("info")
# And now only above INFO is shown again
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
info level message
warning level message
error level message
critical level message
%% Cell type:code id:af3cc812 tags:
``` python
# We can increase the level of detail
qim3d.utils._logger.set_detailed_output()
# Note that DEBUG is still not shown
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
INFO 20327955.py:6 info level message
WARNING 20327955.py:7 warning level message
ERROR 20327955.py:8 error level message
CRITICAL 20327955.py:9 critical level message
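With the standard `logging` module, the difference between the simple and detailed output modes amounts to swapping the handler's formatter. A hedged sketch (helper names are illustrative, not qim3d's actual implementation):

``` python
import logging

log = logging.getLogger('demo_detailed')
handler = logging.StreamHandler()
log.addHandler(handler)
log.setLevel(logging.INFO)

def set_simple_output() -> None:
    # Message only, as in the default mode
    handler.setFormatter(logging.Formatter('%(message)s'))

def set_detailed_output() -> None:
    # Level name plus originating file and line number
    handler.setFormatter(
        logging.Formatter('%(levelname)-8s %(filename)s:%(lineno)d %(message)s')
    )

set_detailed_output()
log.error('this line is prefixed with level, file and line number')
```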
%% Cell type:code id:d7239b1b tags:
``` python
# We can switch back to the simple output mode
qim3d.utils._logger.set_simple_output()
# Now the levels are back to simple mode
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
info level message
warning level message
error level message
critical level message
%% Cell type:code id:eaceb5b6 tags:
``` python
# Change back to detailed and DEBUG level
qim3d.utils._logger.set_detailed_output()
qim3d.utils._logger.level("debug")
log.debug('debug level message')
log.info('info level message')
log.warning('warning level message')
log.error('error level message')
log.critical('critical level message')
```
%% Output
DEBUG 3380703693.py:5 debug level message
INFO 3380703693.py:6 info level message
WARNING 3380703693.py:7 warning level message
ERROR 3380703693.py:8 error level message
CRITICAL 3380703693.py:9 critical level message
......
%% Cell type:markdown id:855a28ff tags:
# Get references from DOI
This notebook shows how the `qim3d` library can be used to easily get well-formatted references from a DOI
%% Cell type:code id:35b9fe6b tags:
``` python
import qim3d
qim3d.utils._logger.level("info")
```
%% Cell type:code id:7962731c tags:
``` python
doi = "https://doi.org/10.1007/s10851-021-01041-3"
```
%% Cell type:code id:17720b94 tags:
``` python
bibtext = qim3d.utils._doi.get_bibtex(doi)
```
%% Output
@article{Stephensen_2021, title={Measuring Shape Relations Using r-Parallel Sets}, volume={63}, ISSN={1573-7683}, url={http://dx.doi.org/10.1007/s10851-021-01041-3}, DOI={10.1007/s10851-021-01041-3}, number={8}, journal={Journal of Mathematical Imaging and Vision}, publisher={Springer Science and Business Media LLC}, author={Stephensen, Hans J. T. and Svane, Anne Marie and Villanueva, Carlos B. and Goldman, Steven A. and Sporring, Jon}, year={2021}, month=jun, pages={1069–1083} }
%% Cell type:code id:a15baf83 tags:
``` python
reference = qim3d.utils._doi.get_reference(doi)
```
%% Output
Stephensen, H. J. T., Svane, A. M., Villanueva, C. B., Goldman, S. A., & Sporring, J. (2021). Measuring Shape Relations Using r-Parallel Sets. Journal of Mathematical Imaging and Vision, 63(8), 1069–1083. https://doi.org/10.1007/s10851-021-01041-3
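Lookups like these typically rely on DOI content negotiation: requesting the doi.org URL with a suitable `Accept` header selects the returned format (BibTeX, a formatted citation, or Crossref-style JSON metadata). A minimal stdlib sketch of how this works under the hood (the helper name is illustrative, and no network request is actually sent here):

``` python
from urllib.request import Request

def doi_request(doi_url: str, fmt: str) -> Request:
    # fmt examples: 'application/x-bibtex' for BibTeX,
    # 'text/x-bibliography; style=apa' for a formatted reference,
    # 'application/vnd.citationstyles.csl+json' for metadata
    return Request(doi_url, headers={'Accept': fmt})

req = doi_request('https://doi.org/10.1007/s10851-021-01041-3',
                  'application/x-bibtex')
# urllib.request.urlopen(req).read().decode() would return the BibTeX entry
```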
%% Cell type:code id:6d4d215e tags:
``` python
qim3d.utils._doi.get_metadata(doi)
```
%% Output
{'indexed': {'date-parts': [[2023, 10, 20]],
'date-time': '2023-10-20T12:08:45Z',
'timestamp': 1697803725999},
'reference-count': 26,
'publisher': 'Springer Science and Business Media LLC',
'issue': '8',
'license': [{'start': {'date-parts': [[2021, 6, 26]],
'date-time': '2021-06-26T00:00:00Z',
'timestamp': 1624665600000},
'content-version': 'tdm',
'delay-in-days': 0,
'URL': 'https://www.springer.com/tdm'},
{'start': {'date-parts': [[2021, 6, 26]],
'date-time': '2021-06-26T00:00:00Z',
'timestamp': 1624665600000},
'content-version': 'vor',
'delay-in-days': 0,
'URL': 'https://www.springer.com/tdm'}],
'funder': [{'DOI': '10.13039/100008398',
'name': 'Villum Fonden',
'doi-asserted-by': 'publisher',
'id': [{'id': '10.13039/100008398',
'id-type': 'DOI',
'asserted-by': 'publisher'}]},
{'DOI': '10.13039/501100005275',
'name': 'Region Hovedstaden',
'doi-asserted-by': 'publisher',
'id': [{'id': '10.13039/501100005275',
'id-type': 'DOI',
'asserted-by': 'publisher'}]}],
'content-domain': {'domain': ['link.springer.com'],
'crossmark-restriction': False},
'published-print': {'date-parts': [[2021, 10]]},
'DOI': '10.1007/s10851-021-01041-3',
'type': 'journal-article',
'created': {'date-parts': [[2021, 6, 26]],
'date-time': '2021-06-26T15:02:20Z',
'timestamp': 1624719740000},
'page': '1069-1083',
'update-policy': 'http://dx.doi.org/10.1007/springer_crossmark_policy',
'source': 'Crossref',
'is-referenced-by-count': 3,
'title': 'Measuring Shape Relations Using r-Parallel Sets',
'prefix': '10.1007',
'volume': '63',
'author': [{'ORCID': 'http://orcid.org/0000-0001-8245-0571',
'authenticated-orcid': False,
'given': 'Hans J. T.',
'family': 'Stephensen',
'sequence': 'first',
'affiliation': []},
{'ORCID': 'http://orcid.org/0000-0001-6356-0484',
'authenticated-orcid': False,
'given': 'Anne Marie',
'family': 'Svane',
'sequence': 'additional',
'affiliation': []},
{'ORCID': 'http://orcid.org/0000-0001-9786-9439',
'authenticated-orcid': False,
'given': 'Carlos B.',
'family': 'Villanueva',
'sequence': 'additional',
'affiliation': []},
{'ORCID': 'http://orcid.org/0000-0002-5498-4303',
'authenticated-orcid': False,
'given': 'Steven A.',
'family': 'Goldman',
'sequence': 'additional',
'affiliation': []},
{'ORCID': 'http://orcid.org/0000-0003-1261-6702',
'authenticated-orcid': False,
'given': 'Jon',
'family': 'Sporring',
'sequence': 'additional',
'affiliation': []}],
'member': '297',
'published-online': {'date-parts': [[2021, 6, 26]]},
'reference': [{'key': '1041_CR1',
'doi-asserted-by': 'publisher',
'DOI': '10.1201/b19708',
'volume-title': 'Spatial Point Patterns: Methodology and Applications with R',
'author': 'A Baddeley',
'year': '2015',
'unstructured': 'Baddeley, A., Rubak, E., Turner, R.: Spatial Point Patterns: Methodology and Applications with R. CRC Press, Boca Raton (2015)'},
{'key': '1041_CR2',
'series-title': 'Statistics Reference Online',
'doi-asserted-by': 'publisher',
'DOI': '10.1002/9781118445112.stat07751',
'volume-title': 'Ripley’s $$k$$ Function',
'author': 'PM Dixon',
'year': '2014',
'unstructured': 'Dixon, P.M.: Ripley’s $$k$$ Function. Statistics Reference Online, Wiley, New York (2014)'},
{'issue': '1',
'key': '1041_CR3',
'doi-asserted-by': 'publisher',
'first-page': '1',
'DOI': '10.1016/0962-8924(94)90025-6',
'volume': '4',
'author': 'R Fesce',
'year': '1994',
'unstructured': 'Fesce, R., Grohovaz, F., Valtorta, F., Meldolesi, J.: Neurotransmitter release: fusion or kiss-and-run? Trends Cell Biol. 4(1), 1–4 (1994)',
'journal-title': 'Trends Cell Biol.'},
{'issue': '8',
'key': '1041_CR4',
'doi-asserted-by': 'publisher',
'first-page': '1953',
'DOI': '10.1007/s00138-014-0625-2',
'volume': '25',
'author': 'Y Gavet',
'year': '2014',
'unstructured': 'Gavet, Y., Fernandes, M., Debayle, J., Pinoli, J.C.: Dissimilarity criteria and their comparison for quantitative evaluation of image segmentation: application to human retina vessels. Mach. Vis. Appl. 25(8), 1953–1966 (2014)',
'journal-title': 'Mach. Vis. Appl.'},
{'key': '1041_CR5',
'doi-asserted-by': 'publisher',
'first-page': '248',
'DOI': '10.3389/fncel.2018.00248',
'volume': '12',
'author': 'N Gavrilov',
'year': '2018',
'unstructured': 'Gavrilov, N., Golyagina, I., Brazhe, A., Scimemi, A., Turlapov, V., Semyanov, A.: Astrocytic coverage of dendritic spines, dendritic shafts, and axonal boutons in hippocampal neuropil. Front. Cell. Neurosci. 12, 248 (2018)',
'journal-title': 'Front. Cell. Neurosci.'},
{'issue': '498',
'key': '1041_CR6',
'doi-asserted-by': 'publisher',
'first-page': '754',
'DOI': '10.1080/01621459.2012.688463',
'volume': '107',
'author': 'U Hahn',
'year': '2012',
'unstructured': 'Hahn, U.: A studentized permutation test for the comparison of spatial point patterns. J. Am. Stat. Assoc. 107(498), 754–764 (2012). https://doi.org/10.1080/01621459.2012.688463',
'journal-title': 'J. Am. Stat. Assoc.'},
{'issue': '4',
'key': '1041_CR7',
'doi-asserted-by': 'publisher',
'first-page': '60:1',
'DOI': '10.1145/3197517.3201353',
'volume': '37',
'author': 'Y Hu',
'year': '2018',
'unstructured': 'Hu, Y., Zhou, Q., Gao, X., Jacobson, A., Zorin, D., Panozzo, D.: Tetrahedral meshing in the wild. ACM Trans. Gr. 37(4), 60:1-60:14 (2018). https://doi.org/10.1145/3197517.3201353',
'journal-title': 'ACM Trans. Gr.'},
{'key': '1041_CR8',
'doi-asserted-by': 'publisher',
'unstructured': 'Khanmohammadi, M., Waagepetersen, R.P., Sporring, J.: Analysis of shape and spatial interaction of synaptic vesicles using data from focused ion beam scanning electron microscopy (FIB-SEM). Front. Neuroanatomy (2015). https://doi.org/10.3389/fnana.2015.00116',
'DOI': '10.3389/fnana.2015.00116'},
{'issue': '3',
'key': '1041_CR9',
'doi-asserted-by': 'publisher',
'first-page': '797',
'DOI': '10.1083/jcb.135.3.797',
'volume': '135',
'author': 'J Koenig',
'year': '1996',
'unstructured': 'Koenig, J., Ikeda, K.: Synaptic vesicles have two distinct recycling pathways. J. Cell Biol. 135(3), 797–808 (1996)',
'journal-title': 'J. Cell Biol.'},
{'key': '1041_CR10',
'unstructured': 'Lucchi, A., Li, Y., Becker, C., Fua, P.: Electron microscopy dataset. https://cvlab.epfl.ch/data/data-em/. Accessed 14 Mar 2020'},
{'issue': '2',
'key': '1041_CR11',
'doi-asserted-by': 'publisher',
'first-page': '568',
'DOI': '10.1002/mrm.24477',
'volume': '70',
'author': 'J Marques',
'year': '2013',
'unstructured': 'Marques, J., Genant, H.K., Lillholm, M., Dam, E.B.: Diagnosis of osteoarthritis and prognosis of tibial cartilage loss by quantification of tibia trabecular bone from MRI. Magn. Reson. Med. 70(2), 568–575 (2013). https://doi.org/10.1002/mrm.24477',
'journal-title': 'Magn. Reson. Med.'},
{'issue': '1654',
'key': '1041_CR12',
'doi-asserted-by': 'publisher',
'first-page': '20140047',
'DOI': '10.1098/rstb.2014.0047',
'volume': '369',
'author': 'N Medvedev',
'year': '2014',
'unstructured': 'Medvedev, N., Popov, V., Henneberger, C., Kraev, I., Rusakov, D.A., Stewart, M.G.: Glia selectively approach synapses on thin dendritic spines. Philos. Trans. R Soc. B Biol. Sci. 369(1654), 20140047 (2014)',
'journal-title': 'Philos. Trans. R Soc. B Biol. Sci.'},
{'issue': '3',
'key': '1041_CR13',
'doi-asserted-by': 'publisher',
'first-page': '551',
'DOI': '10.1016/S0896-6273(00)00065-9',
'volume': '27',
'author': 'D Richards',
'year': '2000',
'unstructured': 'Richards, D., Guatimosim, C., Betz, W.: Two endocytic recycling routes selectively fill two vesicle pools in frog motor nerve terminals. Neuron 27(3), 551–559 (2000)',
'journal-title': 'Neuron'},
{'issue': '3',
'key': '1041_CR14',
'doi-asserted-by': 'publisher',
'first-page': '368',
'DOI': '10.1111/j.2517-6161.1979.tb01091.x',
'volume': '41',
'author': 'BD Ripley',
'year': '1979',
'unstructured': 'Ripley, B.D.: Tests of randomness for spatial point patterns. J. Roy. Stat. Soc. Ser. B (Methodol.) 41(3), 368–374 (1979). https://doi.org/10.1111/j.2517-6161.1979.tb01091.x',
'journal-title': 'J. Roy. Stat. Soc. Ser. B (Methodol.)'},
{'key': '1041_CR15',
'doi-asserted-by': 'publisher',
'DOI': '10.1017/CBO9780511624131',
'volume-title': 'Statistical Inference for Spatial Processes',
'author': 'BD Ripley',
'year': '1988',
'unstructured': 'Ripley, B.D.: Statistical Inference for Spatial Processes. Cambridge University Press, Cambridge (1988)'},
{'key': '1041_CR16',
'doi-asserted-by': 'crossref',
'unstructured': 'Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234–241. Springer (2015)',
'DOI': '10.1007/978-3-319-24574-4_28'},
{'key': '1041_CR17',
'doi-asserted-by': 'publisher',
'DOI': '10.1007/978-3-540-78859-1',
'volume-title': 'Stochastic and Integral Geometry',
'author': 'R Schneider',
'year': '2008',
'unstructured': 'Schneider, R., Weil, W.: Stochastic and Integral Geometry. Springer, Heidelberg (2008)'},
{'key': '1041_CR18',
'unstructured': 'Scientific, T.F.: Amira-avizo software (2020)'},
{'key': '1041_CR19',
'doi-asserted-by': 'crossref',
'unstructured': 'Sethian, J.A.: Fast marching methods. SIAM Rev. 41(2), 199–235 (1999)',
'DOI': '10.1137/S0036144598347059'},
{'issue': '2',
'key': '1041_CR20',
'doi-asserted-by': 'publisher',
'first-page': '1',
'DOI': '10.1145/2629697',
'volume': '41',
'author': 'H Si',
'year': '2015',
'unstructured': 'Si, H.: TetGen, a delaunay-based quality tetrahedral mesh generator. ACM Trans. Math. Softw. 41(2), 1–36 (2015)',
'journal-title': 'ACM Trans. Math. Softw.'},
{'key': '1041_CR21',
'doi-asserted-by': 'crossref',
'unstructured': 'Sporring, J., Waagepetersen, R., Sommer, S.: Generalizations of Ripley’s k-function with application to space curves. In: International Conference on Information Processing in Medical Imaging, pp. 731–742. Springer (2019)',
'DOI': '10.1007/978-3-030-20351-1_57'},
{'issue': '1',
'key': '1041_CR22',
'doi-asserted-by': 'publisher',
'first-page': '1',
'DOI': '10.1038/s42003-020-0809-4',
'volume': '3',
'author': 'HJT Stephensen',
'year': '2020',
'unstructured': 'Stephensen, H.J.T., Darkner, S., Sporring, J.: Restoring drifted electron microscope volumes using synaptic vesicles at sub-pixel accuracy. Commun. Biol. 3(1), 1–7 (2020)',
'journal-title': 'Commun. Biol.'},
{'key': '1041_CR23',
'unstructured': 'Stephensen, H.J.T., Sporring, J.: Rodent neuronal volume annotations and segmentations (2020). https://www.doi.org/10.17894/ucph.33bd30d2-5796-48f4-a0a8-96fcc0ce6af5'},
{'key': '1041_CR24',
'doi-asserted-by': 'crossref',
'unstructured': 'Svane, A.M.: Valuations in image analysis. In: Tensor Valuations and Their Applications in Stochastic Geometry and Imaging, pp. 435–454. Springer (2017)',
'DOI': '10.1007/978-3-319-51951-7_15'},
{'key': '1041_CR25',
'doi-asserted-by': 'publisher',
'unstructured': 'Virtanen, P., Gommers, R., Oliphant, T.E., Haberland, M., Reddy, T., Cournapeau, D., Burovski, E., Peterson, P., Weckesser, W., Bright, J., van der Walt, S.J., Brett, M., Wilson, J., Jarrod Millman, K., Mayorov, N., Nelson, A.R.J., Jones, E., Kern, R., Larson, E., Carey, C., Polat, İ, Feng, Y., Moore, E.W., Van der Plas, J., Laxalde, D., Perktold, J., Cimrman, R., Henriksen, I., Quintero, E.A., Harris, C.R., Archibald, A.M., Ribeiro, A.H., Pedregosa, F., van Mulbregt, P., Contributors, S., et al.: SciPy 1.0: fundamental algorithms for scientific computing in python. Nat. Methods 17, 261–272 (2020). https://doi.org/10.1038/s41592-019-0686-2',
'DOI': '10.1038/s41592-019-0686-2'},
{'issue': '1',
'key': '1041_CR26',
'doi-asserted-by': 'publisher',
'first-page': '1',
'DOI': '10.1016/j.patcog.2003.07.008',
'volume': '37',
'author': 'D Zhang',
'year': '2004',
'unstructured': 'Zhang, D., Lu, G.: Review of shape representation and description techniques. Pattern Recogn. 37(1), 1–19 (2004). https://doi.org/10.1016/j.patcog.2003.07.008',
'journal-title': 'Pattern Recogn.'}],
'container-title': 'Journal of Mathematical Imaging and Vision',
'original-title': [],
'language': 'en',
'link': [{'URL': 'https://link.springer.com/content/pdf/10.1007/s10851-021-01041-3.pdf',
'content-type': 'application/pdf',
'content-version': 'vor',
'intended-application': 'text-mining'},
{'URL': 'https://link.springer.com/article/10.1007/s10851-021-01041-3/fulltext.html',
'content-type': 'text/html',
'content-version': 'vor',
'intended-application': 'text-mining'},
{'URL': 'https://link.springer.com/content/pdf/10.1007/s10851-021-01041-3.pdf',
'content-type': 'application/pdf',
'content-version': 'vor',
'intended-application': 'similarity-checking'}],
'deposited': {'date-parts': [[2023, 1, 1]],
'date-time': '2023-01-01T16:48:24Z',
'timestamp': 1672591704000},
'score': 1,
'resource': {'primary': {'URL': 'https://link.springer.com/10.1007/s10851-021-01041-3'}},
'subtitle': [],
'short-title': [],
'issued': {'date-parts': [[2021, 6, 26]]},
'references-count': 26,
'journal-issue': {'issue': '8',
'published-print': {'date-parts': [[2021, 10]]}},
'alternative-id': ['1041'],
'URL': 'http://dx.doi.org/10.1007/s10851-021-01041-3',
'relation': {},
'ISSN': ['0924-9907', '1573-7683'],
'subject': [],
'container-title-short': 'J Math Imaging Vis',
'published': {'date-parts': [[2021, 6, 26]]},
'assertion': [{'value': '7 November 2020',
'order': 1,
'name': 'received',
'label': 'Received',
'group': {'name': 'ArticleHistory', 'label': 'Article History'}},
{'value': '2 June 2021',
'order': 2,
'name': 'accepted',
'label': 'Accepted',
'group': {'name': 'ArticleHistory', 'label': 'Article History'}},
{'value': '26 June 2021',
'order': 3,
'name': 'first_online',
'label': 'First Online',
'group': {'name': 'ArticleHistory', 'label': 'Article History'}},
{'order': 1,
'name': 'Ethics',
'group': {'name': 'EthicsHeading', 'label': 'Declarations'}},
{'value': 'The authors declare that they have no conflict of interest.',
'order': 2,
'name': 'Ethics',
'group': {'name': 'EthicsHeading', 'label': 'Conflict of interest'}}]}
......
%% Cell type:code id:be66055b-8ee9-46be-ad9d-f15edf2654a4 tags:
 
``` python
%load_ext autoreload
%autoreload 2
```
 
%% Cell type:code id:0c61dd11-5a2b-44ff-b0e5-989360bbb677 tags:
 
``` python
from os.path import join
import qim3d
import os
 
%matplotlib inline
```
 
%% Cell type:code id:cd6bb832-1297-462f-8d35-1738a9c37ffd tags:
 
``` python
# Define function for getting dataset path from string
def get_dataset_path(name: str, datasets):
    assert name in datasets, 'Dataset name must be ' + ' or '.join(datasets)
    dataset_idx = datasets.index(name)
    if os.name == 'nt':
        datasets_path = [
            '//home.cc.dtu.dk/3dimage/projects/2023_STUDIOS_SD/analysis/data/Belialev2020/side',
            '//home.cc.dtu.dk/3dimage/projects/2023_STUDIOS_SD/analysis/data/Gaudez2022/3d',
            '//home.cc.dtu.dk/3dimage/projects/2023_STUDIOS_SD/analysis/data/Guo2023/2d/',
            '//home.cc.dtu.dk/3dimage/projects/2023_STUDIOS_SD/analysis/data/Stan2020/2d',
            '//home.cc.dtu.dk/3dimage/projects/2023_STUDIOS_SD/analysis/data/Reichardt2021/2d',
            '//home.cc.dtu.dk/3dimage/projects/2023_STUDIOS_SD/analysis/data/TestCircles/2d_binary'
        ]
    else:
        datasets_path = [
            '/dtu/3d-imaging-center/projects/2023_STUDIOS_SD/analysis/data/Belialev2020/side',
            '/dtu/3d-imaging-center/projects/2023_STUDIOS_SD/analysis/data/Gaudez2022/3d',
            '/dtu/3d-imaging-center/projects/2023_STUDIOS_SD/analysis/data/Guo2023/2d/',
            '/dtu/3d-imaging-center/projects/2023_STUDIOS_SD/analysis/data/Stan2020/2d',
            '/dtu/3d-imaging-center/projects/2023_STUDIOS_SD/analysis/data/Reichardt2021/2d',
            '/dtu/3d-imaging-center/projects/2023_STUDIOS_SD/analysis/data/TestCircles/2d_binary'
        ]

    return datasets_path[dataset_idx]
```
 
%% Cell type:markdown id:7d07077a-cce3-4448-89f5-02413345becc tags:
 
### Datasets
 
%% Cell type:code id:9a3b9c3c-4bbb-4a19-9685-f68c437e8bee tags:
 
``` python
datasets = ['belialev2020_side', 'gaudez2022_3d', 'guo2023_2d', 'stan2020_2d', 'reichardt2021_2d', 'testcircles_2dbinary']
dataset = datasets[3]
root = get_dataset_path(dataset,datasets)
 
# should not use gaudez2022: 3d image
# reichardt2021: multiclass segmentation
```
 
%% Cell type:markdown id:254dc8cb-6f24-4b57-91c0-98fb6f62602c tags:
 
### Model and Augmentation
 
%% Cell type:code id:30098003-ec06-48e0-809f-82f44166fb2b tags:
 
``` python
# defining model
my_model = qim3d.ml.models.UNet(size = 'medium', dropout = 0.25)

# defining augmentation
my_aug = qim3d.ml.Augmentation(resize = 'crop', transform_train = 'light')
```
 
%% Cell type:markdown id:7b56c654-720d-4c5f-8545-749daa5dbaf2 tags:
 
### Loading the data
 
%% Cell type:code id:84141298-054d-4322-8bda-5ec514528985 tags:
 
``` python
# level of logging
qim3d.utils._logger.level('info')
 
# datasets and dataloaders
train_set, val_set, test_set = qim3d.ml.prepare_datasets(path = root, val_fraction = 0.3,
                                                         model = my_model, augmentation = my_aug)

train_loader, val_loader, test_loader = qim3d.ml.prepare_dataloaders(train_set, val_set,
                                                                     test_set, batch_size = 6)
```
 
%% Output
 
The image size doesn't match the Unet model's depth. The image is changed with 'crop', from (852, 852) to (832, 832).
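The crop from (852, 852) to (832, 832) follows from the UNet architecture: after d downsampling steps, each spatial dimension must be divisible by 2**d, and 'crop' trims to the nearest smaller valid size. A small sketch of the arithmetic (d = 5 is an assumption that reproduces 852 → 832; the library computes this internally):

``` python
def crop_size(size: int, depth: int) -> int:
    # Largest size <= `size` that is divisible by 2**depth
    factor = 2 ** depth
    return (size // factor) * factor

print(crop_size(852, 5))  # -> 832
```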
 
%% Cell type:code id:f320a4ae-f063-430c-b5a0-0d9fb64c2725 tags:
 
``` python
qim3d.viz.grid_overview(train_set,alpha = 1)
```
 
%% Output
 
<Figure size 1400x600 with 14 Axes>
 
%% Cell type:code id:7fa3aa57-ba61-4c9a-934c-dce26bbc9e97 tags:
 
``` python
# Summary of model
model_s = qim3d.ml.model_summary(train_loader,my_model)
print(model_s)
```
 
%% Output
 
=======================================================================================================================================
Layer (type:depth-idx) Output Shape Param #
=======================================================================================================================================
UNet [6, 1, 832, 832] --
├─UNet: 1-1 [6, 1, 832, 832] --
│ └─Sequential: 2-1 [6, 1, 832, 832] --
│ │ └─Convolution: 3-1 [6, 64, 416, 416] --
│ │ │ └─Conv2d: 4-1 [6, 64, 416, 416] 640
│ │ │ └─ADN: 4-2 [6, 64, 416, 416] --
│ │ │ │ └─InstanceNorm2d: 5-1 [6, 64, 416, 416] --
│ │ │ │ └─Dropout: 5-2 [6, 64, 416, 416] --
│ │ │ │ └─PReLU: 5-3 [6, 64, 416, 416] 1
│ │ └─SkipConnection: 3-2 [6, 128, 416, 416] --
│ │ │ └─Sequential: 4-3 [6, 64, 416, 416] --
│ │ │ │ └─Convolution: 5-4 [6, 128, 208, 208] --
│ │ │ │ │ └─Conv2d: 6-1 [6, 128, 208, 208] 73,856
│ │ │ │ │ └─ADN: 6-2 [6, 128, 208, 208] --
│ │ │ │ │ │ └─InstanceNorm2d: 7-1 [6, 128, 208, 208] --
│ │ │ │ │ │ └─Dropout: 7-2 [6, 128, 208, 208] --
│ │ │ │ │ │ └─PReLU: 7-3 [6, 128, 208, 208] 1
│ │ │ │ └─SkipConnection: 5-5 [6, 256, 208, 208] --
│ │ │ │ │ └─Sequential: 6-3 [6, 128, 208, 208] --
│ │ │ │ │ │ └─Convolution: 7-4 [6, 256, 104, 104] --
│ │ │ │ │ │ │ └─Conv2d: 8-1 [6, 256, 104, 104] 295,168
│ │ │ │ │ │ │ └─ADN: 8-2 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ └─InstanceNorm2d: 9-1 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ └─Dropout: 9-2 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ └─PReLU: 9-3 [6, 256, 104, 104] 1
│ │ │ │ │ │ └─SkipConnection: 7-5 [6, 512, 104, 104] --
│ │ │ │ │ │ │ └─Sequential: 8-3 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ └─Convolution: 9-4 [6, 512, 52, 52] --
│ │ │ │ │ │ │ │ │ └─Conv2d: 10-1 [6, 512, 52, 52] 1,180,160
│ │ │ │ │ │ │ │ │ └─ADN: 10-2 [6, 512, 52, 52] --
│ │ │ │ │ │ │ │ │ │ └─InstanceNorm2d: 11-1 [6, 512, 52, 52] --
│ │ │ │ │ │ │ │ │ │ └─Dropout: 11-2 [6, 512, 52, 52] --
│ │ │ │ │ │ │ │ │ │ └─PReLU: 11-3 [6, 512, 52, 52] 1
│ │ │ │ │ │ │ │ └─SkipConnection: 9-5 [6, 1536, 52, 52] --
│ │ │ │ │ │ │ │ │ └─Convolution: 10-3 [6, 1024, 52, 52] --
│ │ │ │ │ │ │ │ │ │ └─Conv2d: 11-4 [6, 1024, 52, 52] 4,719,616
│ │ │ │ │ │ │ │ │ │ └─ADN: 11-5 [6, 1024, 52, 52] --
│ │ │ │ │ │ │ │ │ │ │ └─InstanceNorm2d: 12-1 [6, 1024, 52, 52] --
│ │ │ │ │ │ │ │ │ │ │ └─Dropout: 12-2 [6, 1024, 52, 52] --
│ │ │ │ │ │ │ │ │ │ │ └─PReLU: 12-3 [6, 1024, 52, 52] 1
│ │ │ │ │ │ │ │ └─Convolution: 9-6 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ │ └─ConvTranspose2d: 10-4 [6, 256, 104, 104] 3,539,200
│ │ │ │ │ │ │ │ │ └─ADN: 10-5 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ │ │ └─InstanceNorm2d: 11-6 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ │ │ └─Dropout: 11-7 [6, 256, 104, 104] --
│ │ │ │ │ │ │ │ │ │ └─PReLU: 11-8 [6, 256, 104, 104] 1
│ │ │ │ │ │ └─Convolution: 7-6 [6, 128, 208, 208] --
│ │ │ │ │ │ │ └─ConvTranspose2d: 8-4 [6, 128, 208, 208] 589,952
│ │ │ │ │ │ │ └─ADN: 8-5 [6, 128, 208, 208] --
│ │ │ │ │ │ │ │ └─InstanceNorm2d: 9-7 [6, 128, 208, 208] --
│ │ │ │ │ │ │ │ └─Dropout: 9-8 [6, 128, 208, 208] --
│ │ │ │ │ │ │ │ └─PReLU: 9-9 [6, 128, 208, 208] 1
│ │ │ │ └─Convolution: 5-6 [6, 64, 416, 416] --
│ │ │ │ │ └─ConvTranspose2d: 6-4 [6, 64, 416, 416] 147,520
│ │ │ │ │ └─ADN: 6-5 [6, 64, 416, 416] --
│ │ │ │ │ │ └─InstanceNorm2d: 7-7 [6, 64, 416, 416] --
│ │ │ │ │ │ └─Dropout: 7-8 [6, 64, 416, 416] --
│ │ │ │ │ │ └─PReLU: 7-9 [6, 64, 416, 416] 1
│ │ └─Convolution: 3-3 [6, 1, 832, 832] --
│ │ │ └─ConvTranspose2d: 4-4 [6, 1, 832, 832] 1,153
=======================================================================================================================================
Total params: 10,547,273
Trainable params: 10,547,273
Non-trainable params: 0
Total mult-adds (G): 675.50
=======================================================================================================================================
Input size (MB): 16.61
Forward/backward pass size (MB): 4153.34
Params size (MB): 42.19
Estimated Total Size (MB): 4212.15
=======================================================================================================================================
 
%% Cell type:markdown id:a665ae28-d9a6-419f-9131-54283b47582c tags:
 
### Hyperparameters and training
 
%% Cell type:code id:ce64ae65-01fb-45a9-bdcb-a3806de8469e tags:
 
``` python
# model hyperparameters
my_hyperparameters = qim3d.ml.Hyperparameters(my_model, n_epochs=5,
                                              learning_rate = 5e-3, loss_function='DiceCE', weight_decay=1e-3)
 
# training model
qim3d.ml.train_model(my_model, my_hyperparameters, train_loader, val_loader, plot=True)
```
 
%% Output
 
 
Epoch 0, train loss: 0.7937, val loss: 0.5800
 
 
%% Cell type:markdown id:7e14fac8-4fd3-4725-bd0d-9e2a95552278 tags:
 
### Plotting
 
%% Cell type:code id:f8684cb0-5673-4409-8d22-f00b7d099ca4 tags:
 
``` python
in_targ_preds_test = qim3d.ml.inference(test_set,my_model)
qim3d.viz.grid_pred(in_targ_preds_test,alpha=1)
```
 
%% Output
 
<Figure size 1400x1000 with 28 Axes>
%% Cell type:code id:0b73f2d8 tags:
``` python
import qim3d
```
%% Cell type:code id:73db6886 tags:
``` python
vol = qim3d.examples.bone_128x128x128
```
%% Cell type:code id:22d86d4d tags:
``` python
qim3d.viz.orthogonal(vol)
```
%% Output
HBox(children=(interactive(children=(IntSlider(value=64, description='Z', max=127), Output()), layout=Layout(a…
%% Cell type:code id: tags:
``` python
import qim3d
import matplotlib.pyplot as plt
# Load example image
vol = qim3d.examples.bone_128x128x128
# Start annotation tool
annotation_tool = qim3d.gui.annotation_tool.Interface()
# We can directly pass the image we loaded to the interface
app = annotation_tool.launch(vol[0], server_name="10.197.104.229")
app = annotation_tool.launch(vol[0]) # , server_name="10.197.104.229")
```
%% Output
c:\Users\s193396\AppData\Local\miniconda3\envs\qim3d\lib\site-packages\gradio\analytics.py:106: UserWarning: IMPORTANT: You are using gradio version 4.44.0, however version 4.44.1 is available, please upgrade.
--------
warnings.warn(
%% Cell type:code id: tags:
``` python
annotation_tool.get_result()
```
%% Output
Loaded shape: (128, 128)
INFO:qim3d:Loaded shape: (128, 128)
Volume using 16.0 KB of memory
INFO:qim3d:Volume using 16.0 KB of memory
System memory:
• Total.: 31.6 GB
• Used..: 18.0 GB (56.8%)
• Free..: 13.7 GB (43.2%)
INFO:qim3d:System memory:
• Total.: 31.6 GB
• Used..: 18.0 GB (56.8%)
• Free..: 13.7 GB (43.2%)
{'mask_red': array([[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
...,
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False],
[False, False, False, ..., False, False, False]])}
%% Cell type:code id: tags:
``` python
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import clear_output
import numpy as np
import time
while True:
    clear_output(wait=True)
    masks_dict = annotation_tool.get_result()

    if len(masks_dict) == 0:
        masks_dict["No mask"] = np.zeros((32,32))

    fig, axs = plt.subplots(1, len(masks_dict), figsize=(8,3))

    if len(masks_dict) == 1:
        axs = [axs]

    for idx, (name, mask) in enumerate(masks_dict.items()):
        axs[idx].imshow(mask, cmap='gray', interpolation='none')
        axs[idx].set_title(name)
        axs[idx].axis('off')

    plt.tight_layout()
    plt.show()
    time.sleep(2)
```
%% Output
Loaded shape: (128, 128)
INFO:qim3d:Loaded shape: (128, 128)
Volume using 16.0 KB of memory
INFO:qim3d:Volume using 16.0 KB of memory
System memory:
• Total.: 31.6 GB
• Used..: 17.9 GB (56.7%)
• Free..: 13.7 GB (43.3%)
INFO:qim3d:System memory:
• Total.: 31.6 GB
• Used..: 17.9 GB (56.7%)
• Free..: 13.7 GB (43.3%)
......
%% Cell type:code id: tags:
``` python
import qim3d
import qim3d.processing.filters as filters
import qim3d.filters as filters
import numpy as np
from scipy import ndimage
```
%% Cell type:code id: tags:
``` python
vol = qim3d.examples.fly_150x256x256
```
%% Cell type:markdown id: tags:
## Using the filter functions directly
%% Cell type:code id: tags:
``` python
### Gaussian filter
out1_gauss = filters.gaussian(vol,3)
# or
out2_gauss = filters.gaussian(vol,sigma=3) # sigma is positional, but can be passed as a kwarg
out1_gauss = filters.gaussian(vol, sigma=3)
### Median filter
out_median = filters.median(vol,size=5)
out_median = filters.median(vol, size=5)
```
%% Cell type:markdown id: tags:
## Using filter classes
%% Cell type:code id: tags:
``` python
gaussian_fn = filters.Gaussian(sigma=3)
out3_gauss = gaussian_fn(vol)
```
%% Cell type:markdown id: tags:
## Using filter classes to construct a pipeline of filters
%% Cell type:code id: tags:
``` python
pipeline = filters.Pipeline(
filters.Gaussian(sigma=3),
filters.Median(size=10))
out_seq = pipeline(vol)
```
%% Cell type:markdown id: tags:
Filter objects can also be appended to the pipeline after it has been created:
%% Cell type:code id: tags:
``` python
pipeline.append(filters.Maximum(size=5))
out_seq2 = pipeline(vol)
```
%% Cell type:markdown id: tags:
The filter objects are stored in the `filters` dictionary:
%% Cell type:code id: tags:
``` python
print(pipeline.filters)
```
%% Output
{'0': <qim3d.processing.filters.Gaussian object at 0x7b3fbdad7bb0>, '1': <qim3d.processing.filters.Median object at 0x7b3fbdad52a0>, '2': <qim3d.processing.filters.Maximum object at 0x7b40f7d3f6d0>}
{'0': <qim3d.filters._common_filter_methods.Gaussian object at 0x000001AB34BC5F90>, '1': <qim3d.filters._common_filter_methods.Median object at 0x000001AB34BC5FC0>, '2': <qim3d.filters._common_filter_methods.Maximum object at 0x000001AB34BC61A0>}
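The `Pipeline` class chains filter objects and applies them in insertion order, storing them under string keys as shown in the printed dictionary. A minimal sketch of how such a pipeline might be implemented (illustrative only, not the actual qim3d code):

```python
class Pipeline:
    """Chain callables and apply them in insertion order (illustrative sketch)."""

    def __init__(self, *fns):
        self.filters = {}
        for fn in fns:
            self.append(fn)

    def append(self, fn):
        # Keys mirror insertion order: '0', '1', '2', ...
        self.filters[str(len(self.filters))] = fn

    def __call__(self, data):
        for fn in self.filters.values():
            data = fn(data)
        return data

# Compose two simple callables to show the ordering
pipeline = Pipeline(lambda x: x + 1, lambda x: x * 2)
print(pipeline(3))  # -> 8, i.e. (3 + 1) * 2
```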
......
%% Cell type:code id: tags:
``` python
import qim3d
```
%% Cell type:markdown id: tags:
### Structure tensor notebook
%% Cell type:markdown id: tags:
This notebook shows how to compute eigenvalues and eigenvectors of the **structure tensor** of a 3D volume using the `qim3d` library. The structure tensor (matrix) represents information about the local gradient directions in the volume, such that the eigenvectors represent the orientation of the structure in the volume, and the corresponding eigenvalues indicate the magnitude.
The function `qim3d.processing.structure_tensor` returns two arrays `val` and `vec` for the eigenvalues and eigenvectors, respectively.\
When called with `visualize = True`, the function displays a figure with three subplots:
* Slice of volume with vector field of the eigenvectors
* Orientation histogram of the eigenvectors
* Slice of volume with overlaying colors of the orientation
For all three subplots, the colors used to visualize the orientation within the volume are from the HSV colorspace. In these visualizations, the saturation of the color corresponds to the vector component of the slicing direction (i.e. $z$-component when choosing visualization along `axis = 0`). Hence, if an orientation in the volume is orthogonal to the slicing direction, the corresponding color of the visualization will be gray.
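The HSV mapping described above can be sketched as follows (an illustrative reimplementation, not the qim3d code): hue encodes the in-plane angle of each eigenvector, while saturation encodes the magnitude of the component along the slicing axis, so orientations orthogonal to that axis come out unsaturated.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb

def orientation_colors(vec, slicing_axis=0):
    """Map unit orientation vectors of shape (3, N) to RGB colors.

    Illustrative sketch: hue = in-plane angle,
    saturation = |component along the slicing axis|.
    """
    comp = np.abs(vec[slicing_axis])              # saturation: 0 -> gray
    in_plane = np.delete(vec, slicing_axis, axis=0)
    angle = np.arctan2(in_plane[1], in_plane[0])
    hue = (angle % np.pi) / np.pi                 # orientations are sign-invariant
    hsv = np.stack([hue, comp, np.ones_like(hue)], axis=-1)
    return hsv_to_rgb(hsv)

# A vector orthogonal to the slicing axis maps to an unsaturated color
rgb = orientation_colors(np.array([[0.0], [1.0], [0.0]]), slicing_axis=0)
print(rgb)  # -> [[1. 1. 1.]]
```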
%% Cell type:markdown id: tags:
### **Example:** Structure tensor of brain tissue volume
%% Cell type:code id: tags:
``` python
# Import 3D volume of brain tissue
NT = qim3d.examples.NT_128x128x128
# Visualize the 3D volume
qim3d.viz.vol(NT)
qim3d.viz.volumetric(NT)
```
%% Output
%% Cell type:markdown id: tags:
From the visualization of the full volume, it can be seen that the circular structures of the brain tissue are aligned orthogonal to the $z$-axis (`axis = 0`). By choosing to slice the volume in this direction, the structure tensor visualizations will be largely gray, since the $z$-components of the eigenvectors are close to $0$, meaning the saturation of the coloring will be close to $0$ (i.e. gray). This can be seen below.
%% Cell type:code id: tags:
``` python
# Compute eigenvalues and eigenvectors of the structure tensor
val, vec = qim3d.processing.structure_tensor(NT, visualize = True, axis = 0) # Slicing in z-direction
```
%% Output
%% Cell type:markdown id: tags:
By slicing the volume in the $x$-direction (`axis = 2`) instead, the orientation along the length of the structures in the brain tissue can be seen. The structure tensor visualizations will then be largely blue, corresponding to eigenvectors along the $x$-direction with angles of $\approx \frac{\pi}{2}$ radians ($90$ degrees).
%% Cell type:code id: tags:
``` python
# Compute eigenvalues and eigenvectors of the structure tensor
val, vec = qim3d.processing.structure_tensor(NT, visualize = True, axis = 2) # Slicing in x-direction
```
%% Output
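For reference, the standard structure-tensor computation can be sketched in a few lines of NumPy/SciPy (a generic sketch of the algorithm, not the qim3d implementation; the `sigma`/`rho` smoothing scales are illustrative defaults): smooth the volume, form outer products of its gradients, smooth those, and eigendecompose the resulting 3×3 tensor at every voxel.

```python
import numpy as np
from scipy import ndimage

def structure_tensor_eig(vol, sigma=1.0, rho=2.0):
    """Minimal structure-tensor sketch (not the qim3d implementation)."""
    # Gradients of the noise-smoothed volume (scale sigma)
    grads = np.gradient(ndimage.gaussian_filter(vol.astype(float), sigma))
    # Integrate outer products of gradients over a neighborhood (scale rho)
    S = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            S[..., i, j] = ndimage.gaussian_filter(grads[i] * grads[j], rho)
    # Eigendecompose the symmetric 3x3 tensor at every voxel
    val, vec = np.linalg.eigh(S)  # eigenvalues ascending along the last axis
    return val, vec

vol = np.random.rand(16, 16, 16)
val, vec = structure_tensor_eig(vol)
print(val.shape, vec.shape)  # -> (16, 16, 16, 3) (16, 16, 16, 3, 3)
```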
......
# See list of rules here: https://docs.astral.sh/ruff/rules/
[tool.ruff]
line-length = 88
indent-width = 4
[tool.ruff.lint]
# Allow fixes for all enabled rules (when `--fix` is provided).
fixable = ["ALL"]
unfixable = []
# Allow unused variables when underscore-prefixed
dummy-variable-rgx = "^(_+|(_+[a-zA-Z0-9_]*[a-zA-Z0-9]+?))$"
select = [
"F",
"E", # Errors
"W", # Warnings
"I", # Imports
"N", # Naming
"D", # Documentation
"UP", # Upgrades
"YTT",
"ANN",
"ASYNC",
"S",
"BLE",
"B",
"A",
"COM",
"C4",
"T10",
"DJ",
"EM",
"EXE",
"ISC",
"LOG",
"PIE",
"PYI",
"PT",
"RSE",
"SLF",
"SLOT",
"SIM",
"TID",
"TCH",
"INT",
"ERA",
"PGH",
]
ignore = [
"F821",
"F841",
"E501",
"E731",
"D100",
"D101",
"D107",
"D201",
"D202",
"D205",
"D211",
"D212",
"D401",
"D407",
"ANN002",
"ANN003",
"ANN101",
"ANN201",
"ANN204",
"S101",
"S301",
"S311",
"S507",
"S603",
"S605",
"S607",
"B008",
"B026",
"B028",
"B905",
"W291",
"W293",
"COM812",
"ISC001",
"SIM113",
]
[tool.ruff.format]
# Use single quotes for strings
quote-style = "single"
\ No newline at end of file
"""qim3d: A Python package for 3D image processing and visualization.
"""
qim3d: A Python package for 3D image processing and visualization.
The qim3d library is designed to make it easier to work with 3D imaging data in Python.
The qim3d library is designed to make it easier to work with 3D imaging data in Python.
It offers a range of features, including data loading and manipulation,
image processing and filtering, visualization of 3D data, and analysis of imaging results.
......@@ -8,13 +9,14 @@ Documentation available at https://platform.qim.dk/qim3d/
"""
__version__ = "1.0.0"
__version__ = '1.0.0'
import importlib as _importlib
class _LazyLoader:
"""Lazy loader to load submodules only when they are accessed"""
def __init__(self, module_name):
......@@ -48,7 +50,7 @@ _submodules = [
'mesh',
'features',
'operations',
'detection'
'detection',
]
# Creating lazy loaders for each submodule
......
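The lazy-loading pattern used by `_LazyLoader` above can be sketched in a few lines: the real import is deferred until the first attribute access. This is a generic sketch of the pattern (standing in for the collapsed qim3d implementation), shown here with the stdlib `json` module:

```python
import importlib

class LazyLoader:
    """Defer importing a module until one of its attributes is accessed."""

    def __init__(self, module_name):
        self.module_name = module_name
        self._module = None

    def __getattr__(self, attr):
        # Only called when normal attribute lookup fails,
        # i.e. for attributes of the wrapped module
        if self._module is None:
            self._module = importlib.import_module(self.module_name)
        return getattr(self._module, attr)

# The (potentially heavy) import happens on first attribute access
lazy_json = LazyLoader('json')
print(lazy_json.dumps({'a': 1}))  # -> {"a": 1}
```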
import argparse
import webbrowser
import os
import platform
import webbrowser
import outputformat as ouf
import qim3d
import os
QIM_TITLE = ouf.rainbow(
rf"""
......@@ -16,126 +18,123 @@ QIM_TITLE = ouf.rainbow(
""",
return_str=True,
cmap="hot",
cmap='hot',
)
def parse_tuple(arg):
# Remove parentheses if they are included and split by comma
return tuple(map(int, arg.strip("()").split(",")))
return tuple(map(int, arg.strip('()').split(',')))
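`parse_tuple` accepts the chunk shape with or without surrounding parentheses, for example:

```python
def parse_tuple(arg):
    # Remove parentheses if they are included and split by comma
    return tuple(map(int, arg.strip('()').split(',')))

print(parse_tuple('(64,64,64)'))  # -> (64, 64, 64)
print(parse_tuple('32,32,32'))    # -> (32, 32, 32)
```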
def main():
parser = argparse.ArgumentParser(description="qim3d command-line interface.")
subparsers = parser.add_subparsers(title="Subcommands", dest="subcommand")
parser = argparse.ArgumentParser(description='qim3d command-line interface.')
subparsers = parser.add_subparsers(title='Subcommands', dest='subcommand')
# GUIs
gui_parser = subparsers.add_parser("gui", help="Graphical User Interfaces.")
gui_parser = subparsers.add_parser('gui', help='Graphical User Interfaces.')
gui_parser.add_argument(
"--data-explorer", action="store_true", help="Run data explorer."
)
gui_parser.add_argument("--iso3d", action="store_true", help="Run iso3d.")
gui_parser.add_argument(
"--annotation-tool", action="store_true", help="Run annotation tool."
'--data-explorer', action='store_true', help='Run data explorer.'
)
gui_parser.add_argument('--iso3d', action='store_true', help='Run iso3d.')
gui_parser.add_argument(
"--local-thickness", action="store_true", help="Run local thickness tool."
'--annotation-tool', action='store_true', help='Run annotation tool.'
)
gui_parser.add_argument(
"--layers", action="store_true", help="Run Layers."
'--local-thickness', action='store_true', help='Run local thickness tool.'
)
gui_parser.add_argument("--host", default="0.0.0.0", help="Desired host.")
gui_parser.add_argument('--layers', action='store_true', help='Run Layers.')
gui_parser.add_argument('--host', default='0.0.0.0', help='Desired host.')
gui_parser.add_argument(
"--platform", action="store_true", help="Use QIM platform address"
'--platform', action='store_true', help='Use QIM platform address'
)
gui_parser.add_argument(
"--no-browser", action="store_true", help="Do not launch browser."
'--no-browser', action='store_true', help='Do not launch browser.'
)
# Viz
viz_parser = subparsers.add_parser("viz", help="Volumetric visualization.")
viz_parser.add_argument("source", help="Path to the image file")
viz_parser = subparsers.add_parser('viz', help='Volumetric visualization.')
viz_parser.add_argument('source', help='Path to the image file')
viz_parser.add_argument(
"-m",
"--method",
'-m',
'--method',
type=str,
metavar="METHOD",
default="itk-vtk",
help="Which method is used to display file.",
metavar='METHOD',
default='itk-vtk',
help='Which method is used to display file.',
)
viz_parser.add_argument(
"--destination", default="k3d.html", help="Path to save html file."
'--destination', default='k3d.html', help='Path to save html file.'
)
viz_parser.add_argument(
"--no-browser", action="store_true", help="Do not launch browser."
'--no-browser', action='store_true', help='Do not launch browser.'
)
# Preview
preview_parser = subparsers.add_parser(
"preview", help="Preview of the image in CLI"
'preview', help='Preview of the image in CLI'
)
preview_parser.add_argument(
"filename",
'filename',
type=str,
metavar="FILENAME",
help="Path to image that will be displayed",
metavar='FILENAME',
help='Path to image that will be displayed',
)
preview_parser.add_argument(
"--slice",
'--slice',
type=int,
metavar="S",
metavar='S',
default=None,
help="Specifies which slice of the image will be displayed.\nDefaults to middle slice. If number exceeds number of slices, last slice will be displayed.",
help='Specifies which slice of the image will be displayed.\nDefaults to middle slice. If number exceeds number of slices, last slice will be displayed.',
)
preview_parser.add_argument(
"--axis",
'--axis',
type=int,
metavar="AX",
metavar='AX',
default=0,
help="Specifies from which axis will be the slice taken. Defaults to 0.",
help='Specifies from which axis will be the slice taken. Defaults to 0.',
)
preview_parser.add_argument(
"--resolution",
'--resolution',
type=int,
metavar="RES",
metavar='RES',
default=80,
help="Resolution of displayed image. Defaults to 80.",
help='Resolution of displayed image. Defaults to 80.',
)
preview_parser.add_argument(
"--absolute_values",
action="store_false",
help="By default set the maximum value to be 255 so the contrast is strong. This turns it off.",
'--absolute_values',
action='store_false',
help='By default set the maximum value to be 255 so the contrast is strong. This turns it off.',
)
# File Convert
convert_parser = subparsers.add_parser(
"convert",
help="Convert files to different formats without loading the entire file into memory",
'convert',
help='Convert files to different formats without loading the entire file into memory',
)
convert_parser.add_argument(
"input_path",
'input_path',
type=str,
metavar="Input path",
help="Path to image that will be converted",
metavar='Input path',
help='Path to image that will be converted',
)
convert_parser.add_argument(
"output_path",
'output_path',
type=str,
metavar="Output path",
help="Path to save converted image",
metavar='Output path',
help='Path to save converted image',
)
convert_parser.add_argument(
"--chunks",
'--chunks',
type=parse_tuple,
metavar="Chunk shape",
metavar='Chunk shape',
default=(64, 64, 64),
help="Chunk size for the zarr file. Defaults to (64, 64, 64).",
help='Chunk size for the zarr file. Defaults to (64, 64, 64).',
)
args = parser.parse_args()
if args.subcommand == "gui":
if args.subcommand == 'gui':
arghost = args.host
inbrowser = not args.no_browser # Should automatically open in browser
......@@ -152,7 +151,7 @@ def main():
interface_class = qim3d.gui.layers2d.Interface
else:
print(
"Please select a tool by choosing one of the following flags:\n\t--data-explorer\n\t--iso3d\n\t--annotation-tool\n\t--local-thickness"
'Please select a tool by choosing one of the following flags:\n\t--data-explorer\n\t--iso3d\n\t--annotation-tool\n\t--local-thickness'
)
return
interface = (
......@@ -164,31 +163,27 @@ def main():
else:
interface.launch(inbrowser=inbrowser, force_light_mode=False)
elif args.subcommand == "viz":
if args.method == "itk-vtk":
elif args.subcommand == 'viz':
if args.method == 'itk-vtk':
# We need the full path to the file for the viewer
current_dir = os.getcwd()
full_path = os.path.normpath(os.path.join(current_dir, args.source))
qim3d.viz.itk_vtk(full_path, open_browser = not args.no_browser)
qim3d.viz.itk_vtk(full_path, open_browser=not args.no_browser)
elif args.method == "k3d":
elif args.method == 'k3d':
volume = qim3d.io.load(str(args.source))
print("\nGenerating k3d plot...")
print('\nGenerating k3d plot...')
qim3d.viz.volumetric(volume, show=False, save=str(args.destination))
print(f"Done, plot available at <{args.destination}>")
print(f'Done, plot available at <{args.destination}>')
if not args.no_browser:
print("Opening in default browser...")
print('Opening in default browser...')
webbrowser.open_new_tab(args.destination)
else:
raise NotImplementedError(
f"Method '{args.method}' is not valid. Try 'k3d' or default 'itk-vtk-viewer'"
)
elif args.subcommand == "preview":
elif args.subcommand == 'preview':
image = qim3d.io.load(args.filename)
qim3d.viz.image_preview(
......@@ -199,22 +194,21 @@ def main():
relative_intensity=args.absolute_values,
)
elif args.subcommand == "convert":
elif args.subcommand == 'convert':
qim3d.io.convert(args.input_path, args.output_path, chunk_shape=args.chunks)
elif args.subcommand is None:
print(QIM_TITLE)
welcome_text = (
"\nqim3d is a Python package for 3D image processing and visualization.\n"
'\nqim3d is a Python package for 3D image processing and visualization.\n'
f"For more information, please visit {ouf.c('https://platform.qim.dk/qim3d/', color='orange', return_str=True)}\n"
" \n"
' \n'
"For more information on each subcommand, type 'qim3d <subcommand> --help'.\n"
)
print(welcome_text)
parser.print_help()
print("\n")
print('\n')
if __name__ == "__main__":
if __name__ == '__main__':
main()
from qim3d.detection._common_detection_methods import *
\ No newline at end of file
from qim3d.detection._common_detection_methods import *
""" Blob detection using Difference of Gaussian (DoG) method """
"""Blob detection using Difference of Gaussian (DoG) method"""
import numpy as np
from qim3d.utils._logger import log
__all__ = ["blobs"]
__all__ = ['blobs']
def blobs(
vol: np.ndarray,
background: str = "dark",
background: str = 'dark',
min_sigma: float = 1,
max_sigma: float = 50,
sigma_ratio: float = 1.6,
......@@ -56,18 +58,19 @@ def blobs(
# Visualize detected blobs
qim3d.viz.circles(blobs, vol, alpha=0.8, color='blue')
```
![blob detection](../../assets/screenshots/blob_detection.gif)
![blob detection](../../assets/screenshots/blob_detection.gif)
```python
# Visualize binary binary_volume
qim3d.viz.slicer(binary_volume)
```
![blob detection](../../assets/screenshots/blob_get_mask.gif)
"""
from skimage.feature import blob_dog
if background == "bright":
log.info("Bright background selected, volume will be inverted.")
if background == 'bright':
log.info('Bright background selected, volume will be inverted.')
vol = np.invert(vol)
blobs = blob_dog(
......@@ -109,8 +112,8 @@ def blobs(
(x_indices - x) ** 2 + (y_indices - y) ** 2 + (z_indices - z) ** 2
)
binary_volume[z_start:z_end, y_start:y_end, x_start:x_end][
dist <= radius
] = True
binary_volume[z_start:z_end, y_start:y_end, x_start:x_end][dist <= radius] = (
True
)
return blobs, binary_volume
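The masking step above — marking every voxel within `radius` of a detected blob center as `True` — can be isolated into a small sketch (`sphere_mask` is a hypothetical helper for illustration, not part of the qim3d API):

```python
import numpy as np

def sphere_mask(shape, center, radius):
    """Boolean mask that is True for voxels within `radius` of `center`."""
    z, y, x = np.ogrid[:shape[0], :shape[1], :shape[2]]
    dist = np.sqrt(
        (z - center[0]) ** 2 + (y - center[1]) ** 2 + (x - center[2]) ** 2
    )
    return dist <= radius

mask = sphere_mask((10, 10, 10), center=(5, 5, 5), radius=3)
print(mask.sum())  # -> 123 voxels inside the sphere
```

In the `blobs` function this is done per blob on a local `[z_start:z_end, y_start:y_end, x_start:x_end]` window, which avoids allocating a full-volume distance array for every detection.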
""" Example images for testing and demonstration purposes. """
"""Example images for testing and demonstration purposes."""
from pathlib import Path as _Path
from qim3d.utils._logger import log as _log
from qim3d.io import load as _load
from qim3d.utils._logger import log as _log
# Save the original log level and set to ERROR
# to suppress the log messages during loading
_original_log_level = _log.level
_log.setLevel("ERROR")
_log.setLevel('ERROR')
# Load image examples
for _file_path in _Path(__file__).resolve().parent.glob("*.tif"):
for _file_path in _Path(__file__).resolve().parent.glob('*.tif'):
globals().update({_file_path.stem: _load(_file_path, progress_bar=False)})
# Restore the original log level
......