Fig. 1: pySTED simulation platform.
From: Development of AI-assisted microscopy frameworks through realistic simulation with pySTED

a, Schematic of the pySTED microscopy simulation platform. The user specifies the fluorophore properties (for example, brightness and photobleaching) and the positions of emitters in the data map. A simulation is built from several components (excitation and depletion lasers, detector and objective lens) that can be configured by the user. A low-resolution (Conf) or high-resolution (STED) image of a data map is simulated using the provided imaging parameters. The number of fluorophores at each location in the data map is updated according to their photophysical properties and associated photobleaching effects. b, Modulating the excitation beam with the depletion beam shapes the effective PSF (E-PSF) of the microscope. c, A time-gating module implemented in pySTED affects the lasers and the detection unit. The time-gating parameters of the simulation (gating delay, Tdel; gating time, Tg) and the repetition period of the lasers (τrep) are presented. Grey boxes indicate when a component is active. d, A two-state Jablonski diagram (ground state, S0; excited state, S1) presents the transitions included in the fluorescence (spontaneous decay rate, kS1; stimulated emission rate, kSTED) and photobleaching dynamics (photobleaching rate, kb; photobleached state, β) of pySTED. The vibrational relaxation rate (1/τvib) affects the effective saturation factor in STED. e, Image acquisition is simulated as a two-step process at each location. Acquire (i): the E-PSF is convolved with the number of emitters in the data map (Data map: emitters) to obtain the signal intensity (Image: photons). Photobleaching (ii): the number of emitters at each position in the data map is updated according to the photobleaching probability (line profile derived from kb; compare the top and bottom lines). The same colour maps as in a are used. f, Realistic data maps are generated from real images.
A U-Net model is trained to predict the underlying structure from a real STED image. Convolving the predicted data map with the approximated PSF produces a realistic synthetic image. During training, the mean squared error loss (MSELoss) is calculated between the real and synthetic images. Once trained, the convolution step can be replaced by pySTED. Objective lens in panel a created with BioRender.com.
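The E-PSF shaping described in panels b and d can be illustrated with a minimal numpy sketch. It assumes a simple two-state steady-state model in which the fraction of excited fluorophores decaying spontaneously is kS1 / (kS1 + kSTED), with kSTED proportional to the local depletion intensity; the function name, the `saturation_factor` parameter and this suppression form are illustrative assumptions, not the pySTED implementation.

```python
import numpy as np

def effective_psf(exc_psf, depletion_profile, saturation_factor):
    """Sketch of the E-PSF of a STED microscope (illustrative model).

    The depletion beam suppresses fluorescence by a factor
    kS1 / (kS1 + kSTED) = 1 / (1 + s * I_dep), where s * I_dep stands
    for kSTED / kS1 at each position.
    """
    suppression = 1.0 / (1.0 + saturation_factor * depletion_profile)
    return exc_psf * suppression
```

With a doughnut-shaped `depletion_profile` that is zero at the centre, the peak of the excitation PSF is preserved while its periphery is suppressed, which narrows the E-PSF.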
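The effect of the time-gating parameters in panel c (Tdel, Tg) can be sketched by integrating a mono-exponential fluorescence decay over the detection gate. This ignores the laser pulse width and detector response and is not the pySTED implementation; the function name is hypothetical.

```python
import math

def gated_fraction(k_s1, t_del, t_g):
    """Fraction of a mono-exponential decay (rate kS1) collected when
    the detector gate opens t_del after the excitation pulse and stays
    open for t_g: integral of kS1 * exp(-kS1 * t) over [t_del, t_del + t_g].
    """
    return math.exp(-k_s1 * t_del) * (1.0 - math.exp(-k_s1 * t_g))
```

Increasing the gating delay rejects early photons emitted while the depletion pulse is still acting, at the cost of total collected signal.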
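The two-step acquisition in panel e can be sketched as a convolution followed by a stochastic update of the emitter counts. This is not the pySTED API: the Poisson shot noise on the photon counts and the first-order bleaching kinetics exp(-kb * dwell) are assumptions of the sketch.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(seed=0)

def acquire_and_bleach(datamap, e_psf, k_b, dwell_time):
    """Two-step acquisition sketch (illustrative, not the pySTED API).

    (i)  Acquire: convolve the E-PSF with the emitter counts to obtain
         the expected signal, then sample photon counts (Poisson).
    (ii) Photobleach: each emitter survives with probability
         exp(-k_b * dwell_time) (first-order kinetics).
    """
    expected = fftconvolve(datamap.astype(float), e_psf, mode="same")
    photons = rng.poisson(np.clip(expected, 0.0, None))
    p_survive = float(np.exp(-k_b * dwell_time))
    survived = rng.binomial(datamap, p_survive)
    return photons, survived
```

Repeated acquisitions on the returned `survived` map reproduce the progressive signal loss compared in the top and bottom line profiles of panel e.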
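The forward model and training objective of panel f can be sketched as follows; the U-Net itself is omitted and the function names are hypothetical. The rendering step convolves the predicted data map with the approximated PSF, and the loss is the MSE between the real and synthetic images, as stated in the caption.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_synthetic(predicted_datamap, approx_psf):
    """Forward model used during training: convolving the predicted
    data map with the approximated PSF yields a synthetic image."""
    return fftconvolve(predicted_datamap, approx_psf, mode="same")

def mse_loss(real_image, synthetic_image):
    """Mean squared error (MSELoss) between real and synthetic images."""
    return float(np.mean((real_image - synthetic_image) ** 2))
```

Once the U-Net is trained, this convolution step can be swapped for a full pySTED acquisition to generate realistic synthetic images from the predicted data maps.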