
Spotiflow: accurate and efficient spot detection for fluorescence microscopy with deep stereographic flow regression

Abstract

Identification of spot-like structures in large, noisy microscopy images is a crucial step for many life-science applications. Imaging-based spatial transcriptomics (iST), in particular, relies on the precise detection of millions of transcripts in low signal-to-noise images. Despite recent advances in computer vision, most of the currently used spot detection techniques are still based on classical signal processing and require tedious manual tuning per dataset. Here we introduce Spotiflow, a deep learning method for subpixel-accurate spot detection that formulates spot detection as a multiscale heatmap and stereographic flow regression problem. Spotiflow supports 2D and 3D images, generalizes across different imaging conditions and is more time and memory efficient than existing methods. We show the efficacy of Spotiflow by extensive quantitative experiments on diverse datasets and demonstrate that its increased accuracy leads to meaningful improvements in biological insights obtained from iST and live imaging experiments. Spotiflow is available as an easy-to-use Python library as well as a napari plugin at https://github.com/weigertlab/spotiflow.
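For intuition, the heatmap half of this formulation — predict a Gaussian heatmap, then read out subpixel peak positions — can be sketched in a few lines of NumPy/SciPy. This is a toy illustration with a hand-rolled center-of-mass refinement, not the library's actual inference code; the function name and window size are our own:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peaks_subpixel(heatmap, threshold=0.5, win=2):
    """Local maxima above `threshold`, refined to subpixel precision by a
    center-of-mass step over a small window (a toy stand-in for the paper's
    learned regression)."""
    is_max = (heatmap == maximum_filter(heatmap, size=3)) & (heatmap > threshold)
    pts = []
    for y, x in np.argwhere(is_max):
        y0, y1 = max(y - win, 0), min(y + win + 1, heatmap.shape[0])
        x0, x1 = max(x - win, 0), min(x + win + 1, heatmap.shape[1])
        patch = heatmap[y0:y1, x0:x1]
        ys, xs = np.mgrid[y0:y1, x0:x1]
        w = patch / patch.sum()
        pts.append(((ys * w).sum(), (xs * w).sum()))
    return np.array(pts)

# toy heatmap: one Gaussian spot centered at (10.3, 20.7)
yy, xx = np.mgrid[0:32, 0:32]
hm = np.exp(-((yy - 10.3) ** 2 + (xx - 20.7) ** 2) / 2.0)
print(peaks_subpixel(hm))  # one spot near (10.3, 20.7)
```

In Spotiflow itself the subpixel refinement comes from the regressed stereographic flow rather than a center-of-mass step.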

Fig. 1: Fast, scalable, and accurate fluorescence spot detection with Spotiflow.
Fig. 2: Application of Spotiflow in a variety of iST settings.
Fig. 3: General applications of Spotiflow: live-cell imaging and autofluorescence-aware training.
Fig. 4: Assessment of spot detection performance on volumetric data with Spotiflow.
Fig. 5: Spotiflow accessibility and integration.

Data availability

The benchmark datasets, including images and annotated spots, as well as HybISS images of mouse brain (1 cycle), frog tadpole limb (1 cycle) and POUIV smFISH of P. dumerilii are available at https://doi.org/10.5281/zenodo.10629708 (ref. 61). The reference transcript sequences used in the P. dumerilii experiments can be found on GenBank under accession numbers KC109636 (Pdu-POUIV) and FN357281 (Pdu-Prox). Source data are provided with this paper.

Code availability

Spotiflow is available as an open-source Python library and as a napari plugin at https://github.com/weigertlab/spotiflow.

References

  1. Smal, I., Loog, M., Niessen, W. & Meijering, E. Quantitative comparison of spot detection methods in fluorescence microscopy. IEEE Trans. Med. Imaging 29, 282–301 (2010).

  2. Štěpka, K. et al. Performance and sensitivity evaluation of 3D spot detection methods in confocal microscopy. Cytometry A 87, 759–772 (2015).

  3. Mabaso, M. A., Withey, D. J. & Twala, B. Spot detection methods in fluorescence microscopy imaging: a review. Image Anal. Stereol. 37, 173–190 (2018).

  4. Sage, D. et al. Super-resolution fight club: assessment of 2D and 3D single-molecule localization microscopy software. Nat. Methods 16, 387–395 (2019).

  5. Steinfath, M. et al. Automated image analysis for array hybridization experiments. Bioinformatics 17, 634–641 (2001).

  6. Dowsey, A. W., Dunn, M. J. & Yang, G.-Z. The role of bioinformatics in two-dimensional gel electrophoresis. Proteomics 3, 1567–1596 (2003).

  7. Moses, L. & Pachter, L. Museum of spatial transcriptomics. Nat. Methods 19, 534–546 (2022).

  8. Moffitt, J. R. et al. Molecular, spatial, and functional single-cell profiling of the hypothalamic preoptic region. Science 362, eaau5324 (2018).

  9. Gyllborg, D. et al. Hybridization-based in situ sequencing (HybISS) for spatially resolved transcriptomics in human and mouse brain tissue. Nucleic Acids Res. 48, e112 (2020).

  10. Shah, S., Lubeck, E., Zhou, W. & Cai, L. In situ transcription profiling of single cells reveals spatial organization of cells in the mouse hippocampus. Neuron 92, 342–357 (2016).

  11. Codeluppi, S. et al. Spatial organization of the somatosensory cortex revealed by osmFISH. Nat. Methods 15, 932–935 (2018).

  12. Wang, X. et al. Three-dimensional intact-tissue sequencing of single-cell transcriptional states. Science 361, eaat5691 (2018).

  13. Wang, Y. et al. EASI-FISH for thick tissue defines lateral hypothalamus spatio-molecular organization. Cell 184, 6361–6377 (2021).

  14. Axelrod, S. et al. Starfish: scalable pipelines for image-based transcriptomics. J. Open Source Softw. 6, 2440 (2021).

  15. Imbert, A. et al. FISH-quant v2: a scalable and modular tool for smFISH image analysis. RNA 28, 786–795 (2022).

  16. Bahry, E. et al. RS-FISH: precise, interactive, fast, and scalable FISH spot detection. Nat. Methods 19, 1563–1567 (2022).

  17. Maynard, K. R. et al. dotdotdot: an automated approach to quantify multiplex single molecule fluorescent in situ hybridization (smFISH) images in complex tissues. Nucleic Acids Res. 48, e66 (2020).

  18. Gudla, P. R., Nakayama, K., Pegoraro, G. & Misteli, T. SpotLearn: convolutional neural network for detection of fluorescence in situ hybridization (FISH) signals in high-throughput imaging approaches. Cold Spring Harb. Symp. Quant. Biol. 82, 57–70 (2017).

  19. Eichenberger, B. T., Zhan, Y., Rempfler, M., Giorgetti, L. & Chao, J. A. deepBlink: threshold-independent detection and localization of diffraction-limited spots. Nucleic Acids Res. 49, 7292–7297 (2021).

  20. Wollmann, T. et al. DetNet: deep neural network for particle detection in fluorescence microscopy images. In 2019 IEEE 16th International Symposium on Biomedical Imaging 517–520 (IEEE, 2019); https://doi.org/10.1109/ISBI.2019.8759234

  21. Sofroniew, N. et al. napari: a multi-dimensional image viewer for Python. Zenodo https://doi.org/10.5281/zenodo.7098045 (2022).

  22. Ronneberger, O., Fischer, P. & Brox, T. U-net: convolutional networks for biomedical image segmentation. In Proc. 18th International Conference on Medical Image Computing and Computer Assisted Intervention–MICCAI 2015 Part III (eds Navab, N. et al.) 234–241 (Springer, 2015).

  23. Xie, W., Noble, J. A. & Zisserman, A. Microscopy cell counting and detection with fully convolutional regression networks. Comput. Methods Biomech. Biomed. Eng. Imaging Vis. 6, 283–292 (2018).

  24. van der Walt, S. et al. scikit-image: image processing in Python. PeerJ 2, e453 (2014).

  25. Tinevez, J.-Y. et al. TrackMate: an open and extensible platform for single-particle tracking. Methods 115, 80–90 (2017).

  26. Katz, M. L. & Robison, W. G. What is lipofuscin? Defining characteristics and differentiation from other autofluorescent lysosomal storage bodies. Arch. Gerontol. Geriatr. 34, 169–184 (2002).

  27. Yang, J. et al. Quenching autofluorescence in tissue immunofluorescence. Wellcome Open Res. 2, 79 (2017).

  28. Majchrowska, S. et al. AGAR a microbial colony dataset for deep learning detection. Preprint at https://arxiv.org/abs/2108.01234 (2021).

  29. Wernersson, E. et al. Deconwolf enables high-performance deconvolution of widefield fluorescence microscopy images. Nat. Methods 21, 1245–1256 (2024).

  30. Allan, D. B., Caswell, T., Keim, N. C., van der Wel, C. M. & Verweij, R. W. soft-matter/trackpy: V0.6.3rc1. Zenodo https://doi.org/10.5281/zenodo.11397726 (2024).

  31. Paszke, A. et al. PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems Vol. 32, 8024–8035 (Curran Associates, 2019); http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf

  32. Schmidt, U., Weigert, M., Broaddus, C. & Myers, G. Cell detection with star-convex polygons. In Proc. 21st International Conference on Medical Image Computing and Computer Assisted Intervention–MICCAI 2018 Part II (eds Frangi, A. F. et al.) 265–273 (Springer, 2018).

  33. Stringer, C., Wang, T., Michaelos, M. & Pachitariu, M. Cellpose: a generalist algorithm for cellular segmentation. Nat. Methods 18, 100–106 (2021).

  34. Weigert, M., Schmidt, U., Haase, R., Sugawara, K. & Myers, G. Star-convex polyhedra for 3D object detection and segmentation in microscopy. In 2020 IEEE Winter Conference on Applications of Computer Vision (WACV) (IEEE, 2020); https://doi.org/10.1109/WACV45572.2020.9093435

  35. Saha, D. et al. Practical sensorless aberration estimation for 3D microscopy with deep learning. Opt. Express 28, 29044–29053 (2020).

  36. Finn, E. H. et al. Extensive heterogeneity and intrinsic variation in spatial genome organization. Cell 176, 1502–1515 (2019).

  37. Zhang, M. et al. Spatially resolved cell atlas of the mouse primary motor cortex by MERFISH. Nature 598, 137–143 (2021).

  38. Horvathova, I. et al. The dynamics of mRNA turnover revealed by single-molecule imaging in single cells. Mol. Cell 68, 615–625 (2017).

  39. Virtanen, P. et al. SciPy 1.0: fundamental algorithms for scientific computing in Python. Nat. Methods 17, 261–272 (2020).

  40. Kirillov, A., He, K., Girshick, R., Rother, C. & Dollár, P. Panoptic segmentation. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition 9404–9413 (IEEE, 2019).

  41. Berger, E. D., Stern, S. & Altmayer Pizzorno, J. Triangulating Python performance issues with Scalene. In Proc. 17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23) 51–64 (USENIX Association, 2023).

  42. Beccari, L. et al. Multi-axial self-organization properties of mouse embryonic stem cells into gastruloids. Nature 562, 272–276 (2018).

  43. Gyllborg, D. & Nilsson, M. HybISS: hybridization-based in situ sequencing v1. protocols.io https://doi.org/10.17504/protocols.io.xy4fpyw (2020).

  44. Özpolat, B. D. et al. The nereid on the rise: Platynereis as a model system. Evodevo 12, 10 (2021).

  45. Choi, H. M. T. et al. Third-generation in situ hybridization chain reaction: multiplexed, quantitative, sensitive, versatile, robust. Development 145, 165753 (2018).

  46. Azioune, A., Carpi, N., Tseng, Q., Théry, M. & Piel, M. Protein micropatterns: a direct printing protocol using deep UVs. Methods Cell Biol. 97, 133–146 (2010).

  47. Forster, B., Van De Ville, D., Berent, J., Sage, D. & Unser, M. Extended depth-of-focus for multi-channel microscopy images: a complex wavelet approach. In Proc. 2nd IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI’04) 660–663 (IEEE, 2004).

  48. Muhlich, J. L. et al. Stitching and registering highly multiplexed whole-slide images of tissues and tumors using ASHLAR. Bioinformatics 38, 4613–4621 (2022).

  49. Kuglin, C. & Hines, D. The phase correlation image alignment method. In Proc. IEEE International Conference on Cybernetics and Society 163–165 (IEEE, 1975).

  50. Guizar-Sicairos, M., Thurman, S. T. & Fienup, J. R. Efficient subpixel image registration algorithms. Opt. Lett. 33, 156–158 (2008).

  51. Patterson, H. Multi-modal whole slide image registration (wsireg). GitHub https://github.com/NHPatterson/wsireg

  52. Klein, S., Staring, M., Murphy, K., Viergever, M. A. & Pluim, J. P. W. elastix: a toolbox for intensity-based medical image registration. IEEE Trans. Med. Imaging 29, 196–205 (2010).

  53. Shamonin, D. et al. Fast parallel image registration on CPU and GPU for diagnostic classification of Alzheimer’s disease. Front. Neuroinform. 7, 50 (2014).

  54. Harris, C. R. et al. Array programming with NumPy. Nature 585, 357–362 (2020).

  55. La Manno, G. et al. Molecular architecture of the developing mouse brain. Nature 596, 92–96 (2021).

  56. Schindelin, J. et al. Fiji: an open-source platform for biological-image analysis. Nat. Methods 9, 676–682 (2012).

  57. Hörl, D. et al. BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples. Nat. Methods 16, 870–874 (2019).

  58. Crocker, J. C. & Grier, D. G. Methods of digital video microscopy for colloidal studies. J. Colloid Interface Sci. 179, 298–310 (1996).

  59. Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).

  60. Nah, S., Kim, T. H. & Lee, K. M. Deep multi-scale convolutional neural network for dynamic scene deblurring. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2017).

  61. Dominguez Mantes, A. et al. Spotiflow: manuscript data. Zenodo https://doi.org/10.5281/zenodo.10629708 (2024).

Acknowledgements

We thank members of the Weigert and La Manno labs as well as L. Borm (KU Leuven) for their feedback and discussions of the project. The authors also thank S. Preibisch (HHMI Janelia) for providing access to the EASI-FISH data; and the EPFL BioImaging and Optics Core Facility (BIOP) and the EPFL Histology Core Facility for their assistance in imaging and sample preparation. This project was supported by the EPFL Center for Imaging. M.W. was supported by the ELISIR program of the EPFL School of Life Sciences, and by the German Federal Ministry of Education and Research (BMBF, SCADS22B) and the Saxon State Ministry for Science, Culture and Tourism (SMWK) by funding the competence center for Big Data and AI ‘ScaDS.AI Dresden/Leipzig’. G.L.M. received support from the Swiss National Science Foundation grant PZ00P3_193445. L.S. and D.A. were funded by the Marie Skłodowska-Curie ITN ‘EvoCELL’ 766053 and the ERC Advanced grant NeuralCellTypeEvo 788921. Work in J.L.’s group was supported by the Swiss National Science Foundation (SNSF) [310030_214833] and the SNSF-funded National Centre of Competence in Research RNA and Disease Network [205601]. E.K. was a recipient of a postdoctoral fellowship from the Peter and Traudl Engelhorn Stiftung [532515]. Work in S.M.’s group was supported by the Swiss National Science Foundation (SNSF) [310030_215737] and the European Research Council (ERC CoG Piko). E.B. acknowledges support by the Joachim Herz Foundation through an Add-on Fellowship for Interdisciplinary Life Science.

Author information

Contributions

A.D.M. developed the idea, wrote the software, performed computational experiments, annotated data, performed and interpreted analyses, created figures and wrote the paper. A.H. acquired HybISS data (mouse embryo, frog tadpole and gastruloids), helped to interpret the related analyses, annotated 2D data and created figures. I.K. designed the mouse brain gene panel, acquired HybISS data (mouse embryo), helped to interpret the related analyses, annotated 2D data and ran preliminary computational experiments. A.S. acquired HybISS data (adult mouse), helped to interpret the related analyses and annotated 2D data. E.K. acquired the 2D live-cell imaging movies, annotated 2D data, performed tracking and helped to interpret the related analyses under the supervision of J.L. G.T. prepared limb samples and E.S. generated gastruloids under the supervision of C.A. L.S. acquired the 3D smFISH sample of P. dumerilii under the supervision of D.A. E.B. processed the raw 3D smFISH stack of P. dumerilii, annotated 3D data and helped to interpret the related analyses under the supervision of A.K. E.B.D. acquired the live 3D movies under the supervision of S.M. G.L.M. supervised the project, developed the idea, designed the mouse brain gene panel, interpreted analyses, created figures and wrote the paper. M.W. supervised the project, developed the idea, wrote the software, interpreted analyses, created figures and wrote the paper. All of the co-authors read and approved the paper.

Corresponding authors

Correspondence to Gioele La Manno or Martin Weigert.

Ethics declarations

Competing interests

M.W. holds shares of and is an unpaid advisor for katana labs GmbH. The other authors have no competing interests.

Peer review

Peer review information

Nature Methods thanks Erik Wernersson and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available. Primary Handling Editor: Rita Strack, in collaboration with the Nature Methods team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Architecture of Spotiflow.

a) An input image is processed by a U-Net, which consists of convolutional blocks of depth d = 3 and increasing feature maps (16, 32, 64 channels) and lightweight convolutional blocks of depth d = 3 and non-increasing feature maps (1 channel). All convolution kernels are 3 × 3 (3 × 3 × 3 in the 3D case). The combined loss is the sum of L multiscale heatmap losses \({{\mathcal{L}}}_{heat}^{(i)}\) (binary cross-entropy loss) and the stereographic flow loss \({{\mathcal{L}}}_{flow}\) (L1 loss). b) Ground truth generation from point annotations for training Spotiflow. First, a Gaussian is generated on top of every annotation, yielding the full-resolution Gaussian heatmap Y(0). This heatmap is then further processed to obtain its representation at different resolutions (multiscale heatmaps). Second, a local offset vector field is built in which every position contains the vector directed to the closest ground truth spot. The stereographic flow is then obtained by computing the inverse stereographic projection position-wise.
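The ground-truth construction described above can be sketched in NumPy. This is an illustrative re-implementation, not Spotiflow's internal code; the multiscale downsampling step is omitted for brevity and the function name is our own:

```python
import numpy as np

def make_targets(points, shape, sigma=1.5):
    """Gaussian heatmap and nearest-spot offset field from point annotations
    (an illustrative re-implementation, not Spotiflow's internal code)."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    # squared distance from every pixel to every annotated spot
    d2 = np.stack([(yy - py) ** 2 + (xx - px) ** 2 for py, px in points])
    nearest = d2.argmin(axis=0)                           # index of closest spot
    heatmap = np.exp(-d2.min(axis=0) / (2 * sigma ** 2))  # max over Gaussians
    pts = np.asarray(points, dtype=float)
    offsets = pts[nearest] - np.stack([yy, xx], axis=-1)  # vector to closest spot
    return heatmap, offsets

hm, off = make_targets([(8.0, 8.0), (20.0, 24.0)], (32, 32))
# hm peaks at 1 on each annotation; off vanishes exactly on the annotations
```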

Extended Data Fig. 2 Principle of stereographic flow (2D).

a) For given ground truth center locations (blue dots), the 2D vector field \(\{{({v}_{x},{v}_{y})}_{ij}\}\) is defined as the vector from each pixel ij to the nearest ground truth spot. b) The 2D vector field is embedded into R3 onto the unit sphere S2 via an inverse stereographic projection, yielding the 3D stereographic flow \(({v}_{x}^{{\prime} },{v}_{y}^{{\prime} },{v}_{z}^{{\prime} })\).
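The inverse stereographic projection in b) can be written down directly. The sketch below assumes projection from the north pole (0, 0, 1); the sign and pole convention used by Spotiflow may differ:

```python
import numpy as np

def inverse_stereographic(v):
    """Embed 2D offset vectors (vx, vy) onto the unit sphere S2 in R3 by
    inverse stereographic projection from the north pole (0, 0, 1)."""
    vx, vy = v[..., 0], v[..., 1]
    n2 = vx ** 2 + vy ** 2
    return np.stack([2 * vx, 2 * vy, n2 - 1], axis=-1) / (n2 + 1)[..., None]

v = np.array([[0.0, 0.0], [3.0, 4.0]])
s = inverse_stereographic(v)
# s[0] is the south pole (0, 0, -1); every output row has unit norm
```

With this convention, the zero vector (a pixel lying exactly on a spot) maps to the south pole, and large offsets approach the north pole, which keeps the regression target bounded.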

Extended Data Fig. 3 Example images for 2D FISH benchmark datasets and corresponding spot predictions of compared detection methods.

TP, FP and FN denote true positive, false positive and false negative detections, respectively.

Extended Data Fig. 4 Example images for 2D synthetic and live-cell benchmark datasets and corresponding spot predictions of compared detection methods.

TP, FP and FN denote true positive, false positive and false negative detections, respectively.

Extended Data Fig. 5 Evaluation of 2D spot detection methods on benchmark datasets.

Shown are a) F1 score (higher is better), b) localization error in pixels (lower is better) and c) Panoptic Quality score (higher is better) for different methods on synthetic and real datasets. Compared methods are Starfish, Big-FISH, RS-FISH and deepBlink. Boxplots show the median and interquartile range across all images; dots show individual per-image metrics.
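A detection F1 score of this kind requires a one-to-one matching of predictions to ground-truth spots within a distance cutoff. A minimal sketch using Hungarian matching via SciPy (the paper's exact matching procedure and cutoff may differ):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def f1_at_cutoff(gt, pred, cutoff=3.0):
    """F1 score with one-to-one matching of predicted to ground-truth spots
    within a distance cutoff (illustrative, not the paper's exact code)."""
    if len(gt) == 0 or len(pred) == 0:
        return 0.0
    d = cdist(gt, pred)
    rows, cols = linear_sum_assignment(d)      # minimum-cost matching
    tp = int((d[rows, cols] <= cutoff).sum())  # matches within the cutoff
    fp, fn = len(pred) - tp, len(gt) - tp
    return 2 * tp / (2 * tp + fp + fn)

gt = np.array([[10.0, 10.0], [20.0, 20.0]])
pred = np.array([[10.5, 10.2], [40.0, 40.0]])
print(f1_at_cutoff(gt, pred))  # 1 TP, 1 FP, 1 FN -> 0.5
```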

Extended Data Fig. 6 Assessment of the impact of common image artifacts on spot detection performance.

For a test image of the HybISS dataset, increasing amounts of three types of artifacts were introduced: a) hot pixels, b) uneven illumination (which may cause vignetting effects) and c) vertical-striped noise. Shown is the F1 score (higher is better) of Spotiflow and LoG at each artifact level.

Extended Data Fig. 7 Fine-tuning Spotiflow pretrained on synthetic data.

a) A Spotiflow network pretrained on a realistically simulated synthetic dataset is fine-tuned on several datasets: b) HybISS (iST), c) smFISH (iST) and d) Terra (live-cell). Line plots show the mean F1 score (shaded area denotes standard deviation) across fine-tuning runs, evaluated on the whole test split of each dataset after fine-tuning on training subsets of different sizes.

Extended Data Fig. 8 Application of Spotiflow as a general object detection method: bacterial colony detection.

Spotiflow was trained to detect bacterial colonies on RGB images from the AGAR dataset (N = 3,152). a) Raw RGB images from the test split (N = 1,106) of the dataset. b) Spotiflow detections overlaid on the same test images.

Extended Data Fig. 9 Spotiflow predictions on EASI-FISH data.

a) Overview of a lateral hypothalamus section of a mouse brain processed using EASI-FISH. Data from Wang et al. (ref. 13). b) Pairs of insets from the same stack: raw data (left) and raw data with overlaid Spotiflow detections (right).

Supplementary information

Supplementary Information

Supplementary Figs. 1–11, Tables 1–4, 6 and 7, and Notes 3.1–3.5.

Reporting Summary

Peer Review File

Supplementary Table 5

HybISS (Mus musculus) and HCR RNA-FISH (Platynereis dumerilii) probe sequences.

Supplementary Video 1

Data overview. Visualization of a single channel of an embryonic mouse brain acquired with the HybISS protocol, highlighting the challenges of processing iST data.

Supplementary Video 2

Spotiflow napari plugin. Demonstration of the Spotiflow napari plugin on a crop of 2D HybISS data as well as a 2D+time live-cell movie containing labeled telomeres.

Supplementary Video 3

Stereographic flow. Animation showcasing the stereographic flow principle.

Supplementary Video 4

Live-cell tracking. Side-by-side comparison of telomere tracking results obtained from the detections of LoG, deepBlink and Spotiflow on live-cell time-lapses.

Supplementary Video 5

3D spot detection on smFISH data. Side-by-side comparison of Platynereis dumerilii smFISH data: raw signal, LoG detections and Spotiflow detections.

Supplementary Video 6

3D lipid droplet tracking. Lipid droplet tracking results obtained from the detections of 3D Spotiflow on a live-cell, label-free volumetric movie.

Source data

Source Data Fig. 1 and Source Data Extended Data Fig. 5

Benchmarking results on synthetic and real 2D datasets (Fig. 1d and Extended Data Fig. 5).

Source Data Fig. 2

Decoded transcripts of E13.5 mouse HybISS (Fig. 2c).

Source Data Fig. 4

Benchmarking results on the 3D synthetic dataset (Fig. 4b).

Source Data Fig. 5

Decoded transcripts of starMAP sample data (Fig. 5d).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Dominguez Mantes, A., Herrera, A., Khven, I. et al. Spotiflow: accurate and efficient spot detection for fluorescence microscopy with deep stereographic flow regression. Nat. Methods 22, 1495–1504 (2025). https://doi.org/10.1038/s41592-025-02662-x
