Abstract
Noninvasive optical imaging through complex scattering media presents a major challenge across multiple fields. State-of-the-art techniques, such as reflection matrix decomposition and neural networks, rely on multiple measurements with varying illumination within the sample decorrelation time, making their application challenging in rapidly varying dynamic media. Here, we show that due to the commutativity property of the convolution operation, dynamic scattering in isoplanatic imaging is mathematically analogous to varying illumination in static media. This insight allows leveraging matrix-based approaches developed for static scattering in rapidly varying dynamic media. Specifically, we show that the covariance matrix of a set of scattered light camera frames captured through a dynamic scattering sample has the same mathematical form as the reflection matrix of a static medium, with the target object playing the scattering medium’s role. We demonstrate this concept by high-resolution diffraction-limited imaging through dynamic scattering across multiple modalities, from incoherent fluorescence microscopy to coherence-gated holographic reflection imaging.
Introduction
Imaging through scattering media poses a fundamental challenge and opportunity for optical imaging1,2, and is a field with intense active research and significant recent advancements. Imaging objects through rapidly changing environments, such as atmospheric turbulence3,4,5, biological tissues6,7, or fog8, holds unique importance in fields ranging from medical imaging, through remote sensing, to astronomical observations.
Among the recently developed scattering-compensation techniques, reflection-matrix-based methods have emerged as essential tools for noninvasive computational imaging through such media. These approaches rely on dynamic (controlled9,10,11,12 or uncontrolled13,14,15) illumination of a static scene, allowing the measurement of the sample reflection matrix and the subsequent application of scattering compensation and image reconstruction algorithms. As a result of the relatively large number of required consecutive measurements, these techniques are inadequate for tackling rapidly varying scattering, such as that encountered in flowing blood, atmospheric turbulence, or fog.
Recently, approaches based on neural networks or online learning have been proposed to tackle dynamic aberrations or scattering computationally. Neural-network-based approaches include supervised methods16,17, which heavily rely on training data and usually lack interpretability, and unsupervised methods using neural representations18,19 that assume slow, correlated temporal variations in the scattering medium, making them difficult to adapt to rapidly varying, temporally uncorrelated media. Online learning of the transmission matrix of dynamic media20 has been demonstrated, but it too is limited to slowly temporally varying scattering. Impressive efforts to computationally undo dynamic atmospheric turbulence by deep learning have been reported in recent years3,4,5,21,22. However, these methods specialize in low-order atmospheric aberrations and are not aimed at correcting complex scattering, such as that encountered in biological tissues or highly scattering layers.
Alternatively, speckle-correlation imaging techniques, inspired by stellar speckle interferometry23, which exploit the statistical properties of speckle patterns to recover image information, have been applied to varying scattering conditions23,24,25,26,27 and, under certain conditions, can function as single-shot imaging techniques28,29,30,31,32,33,34. However, despite this advantage, these techniques are hindered by their reliance on iterative phase retrieval35, which can require a very large number of iterations to converge, as well as specific support priors and potentially a large number of initial guesses. While deterministic bispectrum reconstruction can address the convergence challenge of phase retrieval, it still requires averaging a large number of speckle grains, limiting the reconstruction to relatively simple objects. These limitations underscore the need for an imaging technique that is inherently adapted to image complex scenes through rapidly varying media.
Here, we present an approach that allows us to directly apply state-of-the-art reflection-matrix techniques to dynamic scattering compensation in both coherent and incoherent imaging modalities. Our method overcomes the limitation of matrix-based approaches to static scenes by exploiting the mathematical equivalence of dynamic illumination of a static scene to dynamic scattering under static illumination, leveraging the commutativity property of the convolution model of isoplanatic imaging. Thus, our approach provides a natural and fully interpretable extension of matrix-based imaging techniques to the case of rapidly dynamic scatterers. It enables the reconstruction of complex, megapixel-scale images through rapidly time-varying scattering. Importantly, unlike state-of-the-art neural-network-based techniques, our approach does not require any assumptions on the temporal variations or other regularization, making it suitable for rapid dynamic scattering.
As our approach is based on a very general principle, it allows versatility in addressing the challenges posed by dynamic scattering media across a wide range of imaging scenarios and modalities. We experimentally demonstrate the approach’s efficacy for both incoherent and coherent imaging modalities, including fluorescence microscopy, widefield transmission imaging, and coherent holographic time-gated imaging.
Results
Principle
Here, we establish the mathematical foundation for our approach, deriving the mathematical analogy between a dynamic scattering medium and dynamic illumination (Fig. 1).
Our approach leverages matrix-based techniques that were developed for imaging through static scattering using multiple illuminations (a), to image through dynamically varying media (b). The enabling underlying mathematical principle is the commutativity property of convolution, making the image formation equation in the static-scattering case ((a), bottom) mathematically equivalent to the dynamic case ((b), bottom), with only the roles of the point spread function (PSF) and the object interchanged. a Conventional matrix-based approaches image through static scattering media by processing a set of captured frames of the scattered light, each obtained by illuminating the object with a different unknown random illumination13,14. In the common case of isoplanatic scattering, each captured image is the convolution of the scattering PSF with the illuminated object ((a), bottom). b The case of rapidly-varying dynamic scattering poses a challenge for conventional matricial approaches, as multiple captures within the sample decorrelation time are impossible. However, mathematically, for a static object, the captured frames at different times are given by the same convolution equation of the static scattering case (a), just with the roles of object and PSF interchanged. Thus, the object and PSF can be reconstructed by applying the conventional matricial algorithms13,14,57 on the captured frames in the dynamic medium case.
In isoplanatic (coherent or incoherent) imaging conditions, the image-plane distribution I is given by a convolution of the object’s optical field (in the coherent case) or intensity (in the incoherent case), denoted as O, with the effective (field or intensity) point spread function (PSF) P:

\(I(r)=\left[O * P\right](r)\)  (1)
It is important to note that this convolution model is strictly valid only for objects within an isoplanatic patch. All experiments in this work were designed within this constraint. Potential extensions to larger fields of view or thick complex media are discussed in the “Discussion” section.
In the common case where no scattering is present, the PSF is a narrow, sharply peaked function. Consequently, the image on the camera sensor I provides a good direct representation of the object, with a resolution given by the PSF. In the case where isoplanatic scattering or aberrations are present, i.e., in the optical memory-effect range2, Eq. (1) still holds. However, the scattering PSF is a complex and potentially spatially extended speckle pattern. This results in a low-contrast, blurry, and seemingly information-less image on the camera sensor2. The goal of computational scattering compensation is to retrieve the object function, O, and potentially the scattering PSF, P, without prior knowledge of either O or P.
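As a minimal numerical sketch of this convolution model (all sizes and values here are illustrative assumptions, not taken from the experiments), the following forms the scattered image of a point-like object by convolving it with a random speckle intensity PSF:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64

# Sparse point-like object (illustrative; values not from the experiments).
obj = np.zeros((N, N))
obj[rng.integers(0, N, 12), rng.integers(0, N, 12)] = 1.0

# Speckle intensity PSF: the far-field intensity of a random phase screen,
# normalized to unit sum.
phase = rng.uniform(0, 2 * np.pi, (N, N))
psf = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
psf /= psf.sum()

# Isoplanatic image formation, Eq. (1): circular convolution via FFT.
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))
```

Because the PSF is an extended speckle pattern rather than a sharp peak, `img` is the low-contrast, seemingly information-less camera image described above.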
The state-of-the-art techniques for computational scattering compensation rely on measuring the reflection matrix of the sample using a set of controlled9,10,36 or random13,14,15 spatial illumination patterns. The reflection matrix is obtained by multiple recordings, m = 1... M, of the scattered complex-valued light field in the coherent case (or scattered light intensity in the incoherent case14) under these different illuminations, with each recorded frame in these measurements being expressed as (Fig. 1a):

\({I}_{m}(r)=\left[P * {O}_{m}\right](r)\)  (2)
where Om denotes the mth realization of the illuminated object, \({O}_{m}(r)=O(r){I}_{m}^{{{{\rm{ill}}}}}(r)\), and \({I}_{m}^{{{{\rm{ill}}}}}(r)\) is the mth illumination pattern. By arranging these measured images into columns of a matrix A, we can write the measured dataset as A = PO, where P is a convolution (Toeplitz) matrix, and O is a matrix containing the different illuminated object realizations in its columns. Following Lee et al. and Weinberg et al.13,14, in the case of uncorrelated illuminations (defined as \({\langle {\hat{O}}_{m}({r}_{i}){\hat{O}}_{m}({r}_{j})\rangle }_{m}\propto | O({r}_{i}){| }^{2}{\delta }_{i,j}\), where \({\hat{O}}_{m}(r)\mathop{=}\limits^{{\mathrm{def}}}{O}_{m}(r)-{\langle {O}_{m}(r)\rangle }_{m}\)), the I-CLASS (Incoherent Closed-Loop Accumulation of Single Scattering) algorithm14 enables simultaneous retrieval of both P(r) and ∣O(r)∣2 by decomposing the covariance matrix of A, Cov(A), as \({\mathrm{Cov}}(A)=P\,{\mathrm{Cov}}(O){P}^{T}\)13,14.
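The static-case factorization \({\mathrm{Cov}}(A)=P\,{\mathrm{Cov}}(O){P}^{T}\) can be verified numerically in a toy 1D model (a sketch under illustrative assumptions: circulant convolution, random illuminations; not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 32, 200

# Toy 1D model: an object and a fixed scattering PSF (illustrative values).
obj = rng.random(N)
psf = rng.random(N)

# Circulant convolution matrix P built from the PSF: P @ x == psf (*) x.
P = np.stack([np.roll(psf, k) for k in range(N)], axis=1)

# Columns of O: the object under M random, unknown illumination patterns.
O = obj[:, None] * rng.random((N, M))

# Measured dataset: each frame is a column of A = P @ O.
A = P @ O

def cov(X):
    # Empirical covariance of the columns (mean-subtracted realizations).
    Xc = X - X.mean(axis=1, keepdims=True)
    return Xc @ Xc.T / X.shape[1]

# The factorization Cov(A) = P Cov(O) P^T holds by construction.
err = np.linalg.norm(cov(A) - P @ cov(O) @ P.T) / np.linalg.norm(cov(A))
```

The relative error `err` is at machine-precision level, since A = PO makes the factorization an exact linear-algebra identity.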
In dynamic scattering scenarios, where the PSF varies in an uncorrelated manner but the target object remains relatively unchanged, the state-of-the-art matrix-based approaches fail. However, in such cases, assuming a sufficiently short exposure time, each camera frame (or hologram in the coherent case), Eq. (1), can be written as (Fig. 1b):

\({I}_{m}(r)=\left[{P}_{m} * O\right](r)\)  (3)
where O is the static object function and Pm is the mth PSF. Due to the convolution commutativity property, Eq. (3) can be written as:

\({I}_{m}(r)=\left[O * {P}_{m}\right](r)\)  (4)
Since Equations (4) and (2) have the exact same form, just with the roles of the object and PSF exchanged (Fig. 1), the CTR-CLASS13 (Compressed Time-Reversal CLASS) or I-CLASS14 algorithm can be applied to the measurements Im(r), allowing the efficient extraction of the object O(r). More specifically, in matrix form, arranging the measurements Im(r) as columns in a matrix A allows us to write it as A = OP, where O is now a Toeplitz matrix with the object function as the convolution kernel, and the columns of P are the different PSF realizations. For uncorrelated PSF realizations (\({\langle {\hat{P}}_{m}({r}_{i}){\hat{P}}_{m}({r}_{j})\rangle }_{m}\propto {\delta }_{i,j}\)), Cov(P) is a diagonal matrix, and the CLASS algorithm can be applied to Cov(A) (for a discussion of the important case of residual spatial correlations in the covariance matrix, see Supplementary Section S2).
Thus, the presented approach is realized by performing the following steps: (1) capture m = 1... M scattered light patterns through a rapidly varying medium; (2) arrange the measurements as columns in a matrix A; (3) apply the I-CLASS14 algorithm on A to retrieve the hidden target object (effectively applying the matrix-based adapted CLASS algorithm on the covariance of A).
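Steps (1) and (2), and the covariance structure that step (3) decomposes, can be sketched in the same toy 1D model (illustrative sizes and values; the I-CLASS decomposition itself is not reimplemented here):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 32, 150

# Toy 1D model: a static object and M uncorrelated random PSF realizations.
obj = rng.random(N)
psfs = rng.random((N, M))

def cconv(a, b):
    # Circular convolution via FFT; commutative: cconv(a, b) == cconv(b, a).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# Step (1): "capture" M frames through the dynamic medium, Eqs. (3)/(4).
frames = [cconv(psfs[:, m], obj) for m in range(M)]

# Step (2): arrange the frames as columns of the measurement matrix A = OP,
# where O is now the circulant (Toeplitz) matrix built from the object.
A = np.stack(frames, axis=1)
Otoep = np.stack([np.roll(obj, k) for k in range(N)], axis=1)

def cov(X):
    Xc = X - X.mean(axis=1, keepdims=True)
    return Xc @ Xc.T / X.shape[1]

# The covariance takes the static-medium form with the roles exchanged,
# Cov(A) = O Cov(P) O^T, which is what step (3) (I-CLASS) decomposes.
err = np.linalg.norm(cov(A) - Otoep @ cov(psfs) @ Otoep.T) / np.linalg.norm(cov(A))
```

This makes the central claim concrete: the measured covariance of dynamically scattered frames has exactly the reflection-matrix form, with the object playing the medium's role.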
Experimental results: incoherent imaging
As a first demonstration, we experimentally validated the effectiveness of our method in conventional transmission imaging through dynamically varying scattering. The optical setup is schematically illustrated in Fig. 2a. A conventional widefield microscope captures images of the various objects through a rotating diffuser illuminated by spatially incoherent LED illumination at a 625 nm central wavelength (see “Methods”). For each imaged object, M = 150 short-exposure (0.5–7 ms; see “Methods”) camera frames were captured and processed by the I-CLASS algorithm14. The diffuser rotation between captures was such that the PSFs of the different captures were uncorrelated (see Supplementary Section S1).
a Experimental setup: a conventional widefield microscope records M = 150 distorted images of incoherently-illuminated targets through a dynamically rotating scattering diffuser. b, c Captured raw camera frames. d I-CLASS corrected image, revealing the fine details and features of the target. The reconstructed PSF is given in Supplementary Section S1 and Supplementary Movie S1. e Reference image of the object as imaged without the diffuser present. f–q Same as (b–e) for different target objects and diffusers. Colormaps are scaled between the minimum and maximum values of each reconstruction. Scale bars, 100 μm.
Note that although these initial experiments utilize a transmission geometry with illumination from behind the target, the addition of a scattering medium between the light source and the target in these experiments would not change the imaging performance as long as a sufficiently high intensity is passed through the scattering medium, as the incoherent imaging configuration only requires homogeneous illumination of the target. Additionally, our subsequent experimental demonstrations in epi-illumination and detection in fluorescence microscopy and coherent holographic imaging demonstrate applicability to noninvasive imaging across diverse optical configurations and imaging modalities.
Figure 2b–q presents the experimental imaging results after applying the I-CLASS reconstruction algorithm. Example raw captured frames of the target objects distorted by the diffuser are given in Fig. 2b, c, f, g, j, k, n, o. As expected, the details of the objects are distorted due to scattering compared to their direct imaging without the diffuser present (Fig. 2e, i, m, q). The reconstructed images obtained by applying the I-CLASS algorithm14 on the captured frames (after multiplying each frame in Fig. 2b, c, f, g by a fixed scalar value between 1 and 2, see “Methods” and Supplementary Section S2), which reveal fine details and features of the objects, are presented in Fig. 2d, h, l, p. The reconstructed object allows estimation of the PSF of each frame by frame-wise deconvolution of each of the raw captured frames with the reconstructed object (Supplementary Section S1). Examples from the full dataset of captured frames, reconstructed objects, and reconstructed PSFs are presented in Supplementary Section S1 and Supplementary Videos S1 and S2.
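The frame-wise PSF estimation by deconvolution with the reconstructed object can be sketched as a Wiener-style division in the Fourier domain (a noise-free toy illustration, not the authors' Supplementary S1 implementation; the regularizer `eps` is a hypothetical choice, and real data would need a larger value):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 64

# Illustrative stand-ins: a "reconstructed" object and a ground-truth PSF
# (values are arbitrary, not from the experiments).
obj = rng.random((N, N))
psf_true = rng.random((N, N))
psf_true /= psf_true.sum()

# Raw frame, per the convolution image model: frame = obj (*) psf.
frame = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf_true)))

# Frame-wise Wiener-style deconvolution with the reconstructed object as
# the kernel; eps is an illustrative regularizer for this noise-free demo.
Fo = np.fft.fft2(obj)
eps = 1e-12
psf_est = np.real(np.fft.ifft2(np.fft.fft2(frame) * np.conj(Fo) / (np.abs(Fo) ** 2 + eps)))
```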
As a second demonstration, we performed transmission imaging through a naturally dynamic scatterer. To this end, we replaced the rotating diffuser with a 1 mm-thick cuvette containing 45 μm-diameter polystyrene beads in solution, creating dynamically varying scattering as the beads flow freely in the suspension. Additionally, a static diffuser with a 0.5∘ scattering angle was placed adjacent to the cuvette to ensure no ballistic component was present in the captured scattered light images. The experimental setup and results for these experiments are presented in Fig. 3. Using this configuration, we imaged and reconstructed two resolution test targets. The captured camera frames (Fig. 3b, c, f, g) appear highly blurred, exhibiting no clear features. However, the I-CLASS corrected images (Fig. 3d, h) successfully reconstruct the fine features of the target objects, demonstrating the effectiveness of the approach in correcting such natural dynamic scattering where no ballistic component is present. For reference, direct images of the same resolution targets captured without the scattering medium, using the same widefield transmission microscope, are provided in Fig. 3e, i.
a Experimental setup: a conventional widefield microscope records M = 150 distorted images of incoherently-illuminated targets through a 1 mm-thick cuvette filled with a solution of 45 μm polystyrene beads. A static diffuser is added in front of the cuvette to ensure no ballistic component is present. b, c Experimental camera frames of target imaged through a dynamically rapidly varying medium taken at distinct times. d I-CLASS reconstructed images. e Images of the object with the cuvette removed. f–i Same as (b–e) for a different target object. Insets in h, i show zoomed-in areas marked by red-dashed lines. Colormaps are scaled between the minimum and maximum values of each reconstruction. Scale bars, 150 μm.
As an additional proof of principle we tested our approach in a fluorescence microscopy experiment performed in epi-detection geometry through dynamic scattering. The setup for this experiment is depicted in Fig. 4a. It is a conventional widefield fluorescence microscope with a rotating diffuser placed between the microscope objective and the fluorescent sample (see “Methods”). The illumination source is a narrowband spatially-incoherent source composed of a 200-mW CW laser (06-MLD-488, Cobolt) and a rapidly rotating diffuser (see “Methods”). An sCMOS camera (Andor Neo 5.5) captures M = 150 images of the scattered fluorescence light through a dichroic mirror and appropriate emission filters.
a Experimental setup: a conventional widefield fluorescence microscope records M = 150 distorted images of fluorescent 10 μm-diameter beads through a dynamically rotating scattering diffuser. b Experimental camera frames of fluorescent objects imaged through an optical diffuser, showing distorted images due to scattering. c I-CLASS corrected images reveal fine details and features of the objects. d Reference images of the objects without scattering layers. e–g Same as (b–d) for different target objects. Insets in b–g show zoomed-in areas marked by red-dashed lines. Colormaps are scaled between the minimum and maximum values of each reconstruction. Scale bars, 100 μm.
Figure 4 presents the result of the fluorescence microscopy experiments. Figure 4b, e show sample captured frames of two targets composed of fluorescent beads (Fluoresbrite YG microspheres 10 μm), as conventionally imaged through the optical diffuser. The significant distortion due to scattering can be observed in both the raw frames and the zoomed-in areas marked by red-dashed lines. The I-CLASS reconstructed images (Fig. 4c, f), after multiplying each frame by a fixed scalar value (see “Methods” and Supplementary Section S2), successfully recover fine details and features of the objects. For comparison, Fig. 4d, g display the direct images of the objects as imaged without the scattering present.
Experimental results: holographic coherent imaging
As a final demonstration of the generality of the approach and its applicability to various imaging modalities, we applied it to coherent holographic reflective imaging. The results of this study are presented in Fig. 5. The experimental setup is schematically depicted in Fig. 5a: a reflective target (USAF resolution target) is illuminated by a wide illumination beam through a dynamically rotating diffuser. The illumination is provided by a 632 nm Gaussian beam from a Helium-Neon laser (HNL210L, Thorlabs), which is focused to a tight spot on the diffuser surface to ensure that the object illumination remains relatively constant while the diffuser is varied (see below). An sCMOS camera holographically records M = 180 reflected scattered light fields by imaging the diffuser back surface with a 4f imaging system. A reference beam with a proper optical path delay matching the target distance is used for off-axis holographic acquisition37.
a Experimental setup: a reflective target is illuminated through a dynamically rotating scattering diffuser. M = 180 reflected light fields are holographically recorded in an off-axis holography configuration using a reference arm. b Example of the recorded distorted fields after computational propagation to the object plane. c One example of the recorded field intensity after computational propagation to the object plane. d Reconstructed object intensity, after applying the I-CLASS algorithm to compensate for scattering, followed by numerical propagation to the object plane (see Supplementary Fig. S6). e Complex-valued field amplitude PSFs (APSFs) estimated from each captured field. f Reference intensity image of the object without the diffuser present. Scale bars, 1 mm.
To reconstruct the target, each captured field was digitally propagated to the target plane using Fresnel propagation. As expected, without scattering compensation, the reconstructed fields are highly distorted, and the target object features cannot be observed (Fig. 5b).
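The digital propagation step can be sketched with a standard Fresnel transfer-function propagator (a minimal sketch: the 632 nm wavelength matches the He-Ne source, but the pixel pitch, distance, and test field are illustrative assumptions):

```python
import numpy as np

def fresnel_propagate(field, wavelength, dx, z):
    # Fresnel transfer-function propagation of a sampled complex field over
    # a distance z (consistent units assumed; negative z propagates back).
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Illustrative usage: propagate a random complex field forward and back.
rng = np.random.default_rng(4)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
prop = fresnel_propagate(field, wavelength=632e-9, dx=5e-6, z=0.05)
back = fresnel_propagate(prop, wavelength=632e-9, dx=5e-6, z=-0.05)
```

Since the transfer function is phase-only, the propagation is unitary: energy is conserved and back-propagation inverts it exactly.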
If the image-forming equation has the same form as Eq. (4), the CTR-CLASS or I-CLASS algorithms can be directly applied to the holographically captured fields. However, in contrast to the spatially incoherent illumination case (Figs. 2–4), where the illumination at the target plane was homogeneous regardless of the scattering layer dynamics, a significant challenge in the coherent imaging configuration is ensuring constant illumination of the target despite the dynamic scattering introduced by the rotating diffuser. Specifically, in the coherent illumination case (Fig. 5), the presence of the dynamic scatterer in the illumination path of a wide beam generates a speckle illumination field at the target plane that may vary between scatterer realizations. Consequently, the measured field at the camera for the mth realization, \({E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})\), can be expressed as:

\({E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})=\left[{P}_{m}^{{{{\rm{coh}}}}} * \left(O\cdot {E}_{m}^{{{{\rm{ill}}}}}\right)\right]({{{\bf{r}}}})\)  (5)
where \({E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\) is the illumination field at the target plane at the mth realization, \({P}_{m}^{{{{\rm{coh}}}}}({{{\bf{r}}}})\) denotes the complex-valued field amplitude point spread function (APSF) introduced by the scattering medium in the mth realization, and O(r) is the target object spatial reflectivity. From Eq. (5) it is evident that if the illumination pattern remains unchanged across scattering realizations, \({E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})={E}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\), then \({O}^{{{{\rm{eff}}}}}({{{\bf{r}}}})=O({{{\bf{r}}}}){E}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\) may be considered as an effective static object, serving as the single object function assumed in the model of Eq. (4). To ensure this is the case, we focused the Gaussian illumination beam on the scattering layer surface such that the beam waist at the scattering layer surface is sufficiently smaller than the correlation length of the scattering layer. This minimizes variations in \({E}_{m}^{{{{\rm{ill}}}}}({{{\bf{r}}}})\) between realizations, allowing the illumination field to be treated as effectively constant. Under this assumption, the measured fields follow the equation:

\({E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})=\left[{P}_{m}^{{{{\rm{coh}}}}} * {O}^{{{{\rm{eff}}}}}\right]({{{\bf{r}}}})\)  (6)
Since Equations (6) and (4) share the same form, the CTR-CLASS algorithm13 can be applied to the measurements \({E}_{m}^{{{{\rm{cam}}}}}({{{\bf{r}}}})\), allowing the efficient reconstruction of the object field phase at the scattering layer plane. To reconstruct both the phase and amplitude of the object field, we applied the I-CLASS algorithm14, which also estimates the object field amplitude from the M captured fields. The I-CLASS reconstructed object field at the scattering layer plane is numerically propagated to the target plane by Fresnel propagation, faithfully retrieving the fine features of the target (Fig. 5d). With the reconstructed target obtained, the APSF for each frame, \({P}_{m}^{{{{\rm{coh}}}}}({{{\bf{r}}}})\), can be estimated. This can be achieved by calculating the phase introduced by the rotating diffuser at each realization, by taking15: \({\hat{P}}_{m}^{{{{\rm{coh}}}}}(r)={{{\mathcal{F}}}}\left\{{e}^{i{\hat{\phi }}_{{{{\rm{diff}}}}}}\right\}={{{\mathcal{F}}}}\left\{{e}^{i\arg \left(\frac{{E}_{m}^{{{{\rm{cam}}}}}({r}_{{{{\rm{cam}}}}})}{{E}_{{{{\rm{CLASS}}}}}({r}_{{{{\rm{cam}}}}})}\right)}\right\}\), where \({E}_{m}^{{{{\rm{cam}}}}}({r}_{{{{\rm{cam}}}}})\) are the measured fields at the diffuser plane, and ECLASS is the CLASS-reconstructed object field at the diffuser plane. As shown in previous works15,38, the CLASS reconstructed object field also contains the spherical phase from the propagation distance between the object and the scattering layer. Thus, \({\hat{\phi }}_{{{{\rm{diff}}}}}\) provides an estimation of the diffuser phase. Several estimated APSFs are shown in Fig. 5e.
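The per-frame APSF estimate can be sketched as follows (a toy, noise-free illustration with random stand-in fields; in the real pipeline the fields come from the holographic measurements and the CLASS reconstruction):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 64

# Illustrative stand-ins: the CLASS-reconstructed object field at the
# diffuser plane and one measured field distorted by a random phase screen.
E_class = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
phi_diff = rng.uniform(0, 2 * np.pi, (N, N))   # diffuser phase, mth frame
E_cam = E_class * np.exp(1j * phi_diff)        # measured camera-plane field

# Diffuser-phase estimate from the phase of the ratio E_cam / E_CLASS,
# and the corresponding APSF as its Fourier transform.
phi_est = np.angle(E_cam / E_class)
apsf = np.fft.fft2(np.exp(1j * phi_est))
```

In this idealized case the estimated phase matches the diffuser phase up to wrapping, so the phasor exp(i·phi_est) reproduces exp(i·phi_diff) exactly.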
Discussion
We have introduced and experimentally demonstrated a computational matricial framework for imaging through dynamic scattering media. The proposed framework addresses an important challenge in both coherent and incoherent imaging, which conventional matrix-based approaches have difficulty in tackling due to their reliance on multiple measurements within the scattering medium decorrelation time.
Importantly, we have shown that under isoplanatic scattering conditions, the covariance matrix of dynamically scattered light fields has the same mathematical form as that of a conventional reflection matrix, with the roles of the medium and the target object interchanged. Thus, any matrix-based technique capable of decomposing the “reflection matrix” into an object field and a scattering layer can be applied to reconstruct the hidden target. We chose the recently introduced I-CLASS algorithm due to the memory-efficient implementation developed in refs. 14,15, which allows high pixel-count processing as well as estimation of the object Fourier amplitude.
Our analysis revealed that energy conservation in phase-only speckle patterns may introduce spatial correlations that can cause background haze in the reconstructions (see Supplementary Fig. S4). Here, we addressed this by simply applying varying intensity scaling across frames during post-processing (see Supplementary Section S2). It would be interesting to study more advanced processing techniques to identify and filter these correlations and/or their effects. In particular, matricial processing approaches, such as singular value decomposition (SVD) filtering of the measurement matrix or the covariance matrix, are promising candidates. It was recently shown that SVD-based analysis and filtering of the reflection matrix or the distortion matrix10,39,40 can effectively separate correlated signals, such as those originating from different isoplanatic patches, or signals that are less affected by noise.
An analysis of the effects of these energy-conservation-induced correlations on the reconstruction, and our approach for mitigating them, is detailed in Supplementary Section S2 and Supplementary Fig. S3.
We note that while single-shot speckle-correlation approaches29 can, in principle, be applied to each of the captured frames, since they only use the spatial autocorrelation of a single frame (or an estimation of a single autocorrelation from a set of frames as in stellar speckle interferometry23), their performance in terms of reconstruction fidelity for complicated objects and convergence stability is significantly inferior to the proposed covariance-matrix-based approach (see “Numerical Study” in Supplementary Section S3).
We note that for coherent imaging through dynamic scattering, our current approach is limited to scenarios where effectively constant illumination can be maintained at the object plane. While demonstrated here with a thin scattering layer by focusing the illumination to a spot that is smaller than the coherence area of the scattering layer, thick volumetric scattering presents a challenge that requires additional or alternative solutions.
One potential solution for obtaining an effectively homogeneous illumination in the case of coherent illumination through thick dynamic media, while maintaining the capability of coherence gating and without requiring focused illumination, is to digitally compound, incoherently, the time-gated frames of several illuminations taken within the correlation time of the medium. This utilizes the same principles used in speckle-reduction techniques in OCT41, and was demonstrated for imaging through scattering layers via correlography31,42,43, albeit without leveraging the potential of matrix-based techniques. In such a solution, for each realization of the dynamic scatterer one would: (I) rapidly acquire multiple (K ≫ 1) coherence-gated holograms under different speckle illuminations (created with an additional diffuser or SLM in the illumination path); (II) incoherently sum the intensity patterns of the holographically measured fields into incoherently-compounded “macro-frames”: \({I}_{m}(x,y,z={z}_{{{{\rm{obj}}}}})=\mathop{\sum }_{k=1}^{K}| {E}_{m,k}(x,y,z={z}_{{{{\rm{obj}}}}}){| }^{2}\), which can then be (III) processed using I-CLASS as in our incoherent experiments. This protocol creates effectively uniform illumination through incoherent compounding, while preserving the important coherence-gating capability of coherent light that is crucial for practical reflection-based microscopy and 3D imaging. The K rapid illumination patterns can be random or complementary speckle illuminations that would result in a more homogeneous illumination distribution44.
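The three-step compounding protocol can be sketched numerically as follows (a toy sketch: the scene, APSF, sizes, and random speckle illuminations are all illustrative assumptions, and the I-CLASS processing of step (III) is omitted):

```python
import numpy as np

rng = np.random.default_rng(6)
N, K = 64, 40

def cconv2(a, b):
    # 2D circular convolution via FFT.
    return np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b))

# Illustrative reflectivity and one fixed scatterer realization (APSF).
obj = rng.random((N, N))
apsf = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))

# (I) K coherence-gated fields under different random speckle illuminations,
# all acquired within one correlation time of the medium.
fields = [cconv2(apsf, obj * np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))))
          for _ in range(K)]

# (II) Incoherent compounding: sum the intensities into one "macro-frame",
# which would then be (III) processed by I-CLASS like an incoherent frame.
macro = sum(np.abs(E) ** 2 for E in fields)
```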
Such a hybrid method offers a potential mitigation strategy for thick scattering media at the price of an increase in the number of acquisitions, or equivalently, at the price of acquisition or dynamic scattering speed. Additionally, we highlight the applicability of the tight focusing approach to lensless imaging through highly dynamic flexible multi-core fiber endoscopes, as previously demonstrated only through relatively static fibers38,45. In this system, uniform illumination can be obtained by single-mode excitation of a single-fiber core in a relatively straightforward fashion38,46.
While we have focused our proof-of-principle demonstrations on isoplanatic scattering conditions, extending our approach to thick dynamic scattering media remains an important challenge. Beyond the direct application of mosaicking approaches, which are effective for moderate scattering13,47, applying a multi-conjugate, “multi-slice” correction45,48 would be very attractive. However, since in our formulation the dynamic medium plays the role that would normally be occupied by the object in conventional reflection-matrix implementation, and vice versa, the conventional multi-conjugate approach would only address a thick target object rather than a thick scattering medium. Addressing a thick scattering medium thus requires a solution analogous to addressing a thick target object in conventional reflection matrix imaging. Interestingly, the recent approach of Park et al.49, where thick target objects are considered, may offer a potential path forward. Alternatively, it may be possible to leverage the recent model-based gradient descent optimization approach45, which can flexibly handle a multi-parameter model, to integrate the case of a thick medium in the dynamic measurements formalism.
Finally, while we have focused our proof-of-principle demonstrations on isoplanatic scattering conditions, the field of view can be extended in anisoplanatic scattering conditions in cases of weakly scattering samples by individually reconstructing and mosaicking different isoplanatic patches13,15,47,50,51.
In conclusion, this work demonstrates the versatility and universality of matrix approaches to imaging in complex media, extending their applicability from static to dynamic scattering environments.
Methods
Experimental setup
Figures 2 and 3 show the experimental imaging configuration that utilizes an incoherent Thorlabs M625L3 LED light source. This LED emits at a central wavelength of λ = 625 nm with an output power of 700 mW and a full width at half maximum (FWHM) bandwidth of Δλ = 17 nm. The emitted light covers an extensive area across the target plane located 3 cm away. Image capture is conducted using an Andor Neo 5.5 sCMOS camera, part of a 4f imaging system equipped with a 10× Mitutoyo objective lens (M Plan APO 10X, NA 0.28) and a Thorlabs LA1256-A tube lens (focal length 300 mm). A Thorlabs FBH630-10 band-pass filter, with a central wavelength of 630 nm and a 10 nm FWHM, is employed to filter the incident light.
During the experiments in Fig. 2, the dynamic medium was created using a Thorlabs K10CR1 rotation mount holding a holographic diffuser 6 mm from the target. For Fig. 4b–g, a Newport 0.5∘ holographic diffuser was used, and Fig. 2n–q utilized an RPC Photonics EDC-1∘ diffuser. For the Fig. 3 experiments, scattering was introduced via a 1 mm path-length cuvette containing polystyrene beads (Fluoresbrite YG microspheres, 45 μm) in a solution with a concentration variability ≫ 7%. A 0.5∘ holographic diffuser was attached to the cuvette to eliminate the scatterer’s ballistic component. This scatterer was placed 10 mm from the target for the measurements in Fig. 3b–e and 15 mm for those in Fig. 3f–i.
The imaging targets varied across experiments. Figure 2b–e and f–i show prepared microscope slides (Maxlapter, Amazon) of a willow stem and a pine stem, respectively. Figures 2j–m and 3f–i show a 3" × 3" Negative 1951 USAF Test Target (R3L3S1N, Thorlabs), and Figs. 2n–q and 3b–e featured custom targets on 1.5 mm thick glass slides coated with Ti (20 nm) and Ag (100 nm) layers, fabricated by E-Beam Lithography.
Figure 4 shows the fluorescence experimental configuration, which consists of a pseudothermal source composed of a 200-mW, 488 nm continuous-wave (CW) laser (06-MLD-488, Cobolt) and a rapidly rotating holographic diffuser (EDC-1∘) at a distance of ≈15 cm from the objective lens. The images were distorted by a discretely rotating holographic diffuser (RD, Newport 0.5∘) on a stepper-motor rotation mount (K10CR2, Thorlabs) placed ≈8 mm from the target object. The images were captured using the same Andor Neo 5.5 sCMOS camera through a 4f imaging system equipped with a 10× Mitutoyo objective lens (M Plan APO 10X, NA 0.28) and a tube lens (focal length 300 mm, Thorlabs). The light was filtered with a dichroic mirror (DMLP505R, Thorlabs) and an emission filter (MF525-39, Thorlabs). The target consisted of fluorescent beads (Fluoresbrite YG microspheres, 10 μm) placed on a cover glass at the objective lens’s focal plane.
Figure 5 shows the experimental configuration for the coherent imaging experiments, where holograms were recorded using an off-axis holography setup. A 21-mW polarized CW He-Ne laser (HNL210L, Thorlabs) with a wavelength of 632.8 nm was used for illumination. To split the beam into reference and object paths, a polarizing beam splitter (PBS, PBSW-633, Thorlabs) was employed, with the path difference between the reference and object paths kept within the coherence length of the laser (≈30 cm). After the PBS, the signal beam was rotated using a half-wave plate (WPQ10ME-633, Thorlabs) to match the polarization of the reference beam, allowing them to interfere at the detector. In the object path, the laser beam passed through a Newport 0.5∘ holographic diffuser mounted on a rotating motor (K10CR1, Thorlabs), ensuring uncorrelated scattering patterns due to the diffuser’s rotation. A 10× Mitutoyo objective lens (M Plan APO 10X, NA 0.28) focused the illumination beam, ensuring the illumination spot on the diffuser was smaller than the diffuser’s correlation length (≈70 μm), maintaining consistent illumination. The negative USAF test target (R1DS1N, Thorlabs) was positioned approximately 7 cm behind the diffuser, with a mirror covered by a diffusive slide placed behind it to simulate a diffusive object. The reflected light traveled back through the diffuser to the camera. A 4f imaging system consisting of two lenses, an f = 200 mm (AC508-200-A-ML, Thorlabs) and an f = 125 mm (LA1384-A, Thorlabs), was used to image the field at the diffuser plane onto the camera sensor (Thorlabs 8051M-USB), providing a magnification of ×1.6. A non-polarizing beam splitter (BS03, Thorlabs) was used to recombine the object and reference beams before interfering at the camera plane. Finally, a band-pass filter (MaxLine Laser Line Filter 633) with a center wavelength of 632.8 nm and a bandwidth of 1 nm was positioned in front of the camera to isolate the laser wavelength and reduce noise.
Experimental parameters
The experimental parameters for the results displayed in Figs. 2–5, including camera exposure times and image pixel counts, are outlined as follows:
The frames in Fig. 2b–e were captured at 1250 × 1250 pixels, each taken at a 0.9 ms exposure time. In Fig. 2f–i, images were captured at a resolution of 1700 × 1700 pixels and then cropped in the Fourier domain to 300 × 300 pixels, with an exposure time of 0.5 ms. In Fig. 2j–m, frames were captured at 700 × 700 pixels with a 1.25 ms exposure time. Figure 2n–q features frames captured at 800 × 800 pixels, cropped in the Fourier domain to 300 × 300 pixels, with an exposure time of 7 ms. For Fig. 3b–e, frames were sized at 800 × 800 pixels, each with a 50 ms exposure time. In Fig. 3f–i, images were captured at a resolution of 1800 × 1800 pixels, first cropped in the Fourier domain to 300 × 300 pixels and then further cropped to 350 × 350 pixels for visualization, with an exposure time of 30 ms per frame. In Fig. 4b–e, frames were captured at 750 × 750 pixels and cropped in the Fourier domain to 500 × 500 pixels, with an exposure time of 0.275 s per frame. Similarly, in Fig. 4f–j, frames were captured at 650 × 650 pixels, cropped in the Fourier domain to 300 × 300 pixels, with an exposure time of 0.25 s. In Fig. 5, the object frames were initially captured at 850 × 850 pixels and cropped to 350 × 350 pixels for visualization, with an exposure time of 12 ms per frame. Experiments shown in Figs. 2b–m, 3, and 4 utilized M = 150 realizations for reconstruction, while results in Figs. 2n–q and 5 used M = 180.
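The Fourier-domain cropping used for several of the frame sets above acts as a combined low-pass filter and resampling step: only the central region of each frame's spectrum is retained. A minimal numpy sketch of such cropping is given below; the function name and the amplitude-rescaling convention are our illustrative assumptions, not the authors' published code.

```python
import numpy as np

def fourier_crop(img, n):
    """Crop a square image to n x n pixels in the Fourier domain.

    Keeps only the central n x n region of the centered spectrum
    (a low-pass filter) and transforms back, yielding a downsampled
    frame. The amplitude rescaling convention here is an assumption.
    """
    # Centered 2D spectrum of the input frame
    F = np.fft.fftshift(np.fft.fft2(img))
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    # Retain only the central n x n spatial-frequency window
    Fc = F[cy - n // 2 : cy + (n + 1) // 2, cx - n // 2 : cx + (n + 1) // 2]
    # Rescale so mean intensity is preserved despite the size change
    scale = (n * n) / (img.shape[0] * img.shape[1])
    return np.fft.ifft2(np.fft.ifftshift(Fc)) * scale
```

For real-valued camera frames, the (numerically negligible) imaginary part of the result can be discarded after cropping.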
We applied intensity modulation by multiplying each camera frame with a fixed scalar factor, linearly varying from 1 to 2 across frames 1–150, to suppress energy conservation-induced correlations in the PSFs for the measurements shown in Figs. 2b–m and 4. This is discussed in more detail in Supplementary Section S2.
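The intensity modulation described above amounts to a per-frame scalar ramp across the frame stack. A short numpy sketch follows; the helper name and signature are our assumptions for illustration.

```python
import numpy as np

def apply_intensity_ramp(frames, start=1.0, stop=2.0):
    """Multiply each frame in an (M, H, W) stack by a fixed scalar
    varying linearly from `start` to `stop` across the M frames,
    as used to suppress energy-conservation-induced PSF correlations.
    Illustrative helper, not the authors' published code.
    """
    weights = np.linspace(start, stop, frames.shape[0])
    # Broadcast one scalar weight per frame over its H x W pixels
    return frames * weights[:, None, None]
```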
The run time of the I-CLASS algorithm on a commercially available GPU (Nvidia RTX4090, 24 GB) was ≈8 ms per iteration for 150 camera frames at a resolution of 300 × 300 pixels and ≈70 ms per iteration for 150 camera frames at a resolution of 850 × 850 pixels. With our standard protocol of 1000 iterations, this yields total processing times of approximately 8 s and 70 s, respectively.
Fresnel propagation via Fourier-domain transfer function
In Fig. 5, we present the object field located at the physical object plane, zobj. However, since, following the principles of conjugate adaptive optics15,52,53,54, we measure the fields at the diffuser plane and input these fields to the I-CLASS algorithm, the I-CLASS algorithm reconstructs the complex object field at the same plane. This is because the object field in our dynamic matrix approach is analogous to the scattering-medium phase function that the CLASS/I-CLASS algorithms retrieve in the conventional static-medium case. We denote this reconstructed object field at the scattering layer plane as Eo(x, y, zscatt). To visualize the field at the object plane, Eo(x, y, zobj), we back-propagate the reconstructed field from the diffuser plane to the object plane using Fresnel propagation under the paraxial approximation.
This propagation is efficiently implemented in the Fourier domain using the Fresnel transfer function:

\({E}_{o}(x,y,{z}_{{{{\rm{obj}}}}})={{{{\mathcal{F}}}}}^{-1}\{{\tilde{E}}_{o}(\;{f}_{x},{f}_{y},{z}_{{{{\rm{scatt}}}}})\cdot H(\;{f}_{x},{f}_{y};\Delta z)\}\)

where:
– \({\tilde{E}}_{o}(\;{f}_{x},{f}_{y},{z}_{{{{\rm{scatt}}}}})={{{\mathcal{F}}}}\{{E}_{o}(x,y,{z}_{{{{\rm{scatt}}}}})\}\) is the 2D Fourier transform of the reconstructed field,
– H(fx, fy; Δz) is the Fresnel transfer function,
– Δz = zobj − zscatt is the propagation distance,
– \({{{\mathcal{F}}}}\) and \({{{{\mathcal{F}}}}}^{-1}\) denote the 2D Fourier and inverse Fourier transforms.
The Fresnel transfer function in terms of spatial frequency is:

\(H(\;{f}_{x},{f}_{y};\Delta z)=\exp \left(i\frac{2\pi }{\lambda }\Delta z\right)\exp \left(-i\pi \lambda \Delta z\left(\;{f}_{x}^{2}+{f}_{y}^{2}\right)\right)\)

where:
– λ is the illumination wavelength,
– (fx, fy) are the spatial frequency coordinates corresponding to the real-space axes (x, y).
This formulation supports forward and backward propagation by simply changing the sign of Δz, and is especially suitable for numerical implementation via Fast Fourier transforms.
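The Fourier-domain propagation step can be sketched in a few lines of numpy. In this illustrative implementation (the function name and arguments are our assumptions, not the authors' published code), the constant phase factor exp(ikΔz) is omitted, as it does not affect the reconstructed image:

```python
import numpy as np

def fresnel_propagate(field, wavelength, dz, pixel_pitch):
    """Propagate a complex field by a distance dz using the Fresnel
    transfer function under the paraxial approximation.

    Multiplies the field's spectrum by H(fx, fy; dz) and transforms
    back; the global phase exp(i*k*dz) is dropped.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)  # spatial frequencies along x [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_pitch)  # spatial frequencies along y [1/m]
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function (paraxial approximation)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Backward propagation (e.g., from the diffuser plane to the object plane) is obtained simply by negating dz, since H(fx, fy; −Δz) is the complex conjugate of H(fx, fy; Δz).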
Data availability
The experimental data generated in this study have been deposited in the Zenodo repository55. All data needed to evaluate the conclusions in the paper are present in the paper and/or the Supplementary Materials.
Code availability
For applying the I-CLASS algorithm, we used the published memory-efficient implementation described in refs. 14,15 and available at ref. 56. The up-to-date version can be found at https://github.com/Imaging-Lab-HUJI/Fluorescence-Computational-Imaging-Through-Scattering-Layers.
References
Yoon, S. et al. Deep optical imaging within complex scattering media. Nat. Rev. Phys. 2, 141–158 (2020).
Bertolotti, J. & Katz, O. Imaging in complex media. Nat. Phys. 18, 1008–1017 (2022).
Mao, Z., Jaiswal, A., Wang, Z. & Chan, S. H. Single frame atmospheric turbulence mitigation: a benchmark study and a new physics-inspired transformer model. In Proc. European Conference on Computer Vision. 430–446 (Springer, 2022).
Cai, H. et al. Temporally consistent atmospheric turbulence mitigation with neural representations. Adv. Neural Inf. Process. Syst. 37, 44554–44574 (2024).
Zhang, X., Mao, Z., Chimitt, N. & Chan, S. H. Imaging through the atmosphere using turbulence mitigation transformer. IEEE Trans. Comput. Imaging 10, 115–128 (2024).
Jang, M. et al. Relation between speckle decorrelation and optical phase conjugation (OPC)-based turbidity suppression through dynamic scattering media: a study on in vivo mouse skin. Biomed. Opt. Express 6, 72–85 (2015).
Horstmeyer, R., Ruan, H. & Yang, C. Guidestar-assisted wavefront-shaping methods for focusing light into biological tissue. Nat. Photonics 9, 563–571 (2015).
Satat, G., Tancik, M. & Raskar, R. Towards photography through realistic fog. In Proc. IEEE International Conference on Computational Photography. 1–10 (IEEE, 2018).
Kang, S. et al. High-resolution adaptive optical imaging within thick scattering media using closed-loop accumulation of single scattering. Nat. Commun. 8, 2157 (2017).
Badon, A. et al. Distortion matrix concept for deep optical imaging in scattering media. Sci. Adv. 6, eaay7170 (2020).
Zhang, Y. et al. Adaptive optical multispectral matrix approach for label-free high-resolution imaging through complex scattering media. Adv. Photonics 7, 046008 (2025).
Lim, S. et al. Multiphoton super-resolution imaging via virtual structured illumination. Preprint at https://doi.org/10.48550/arXiv.2404.11849 (2024).
Lee, H. et al. High-throughput volumetric adaptive optical imaging using compressed time-reversal matrix. Light Sci. Appl. 11, 16 (2022).
Weinberg, G., Sunray, E. & Katz, O. Noninvasive megapixel fluorescence microscopy through scattering layers by a virtual incoherent reflection matrix. Sci. Adv. 10, eadl5218 (2024).
Sunray, E., Weinberg, G., Rosenfeld, M. & Katz, O. Beyond memory-effect matrix-based imaging in scattering media by acousto-optic gating. APL Photonics 9, 096112 (2024).
Sun, Y., Shi, J., Sun, L., Fan, J. & Zeng, G. Image reconstruction through dynamic scattering media based on deep learning. Opt. Express 27, 16032–16046 (2019).
Liu, H. et al. Learning-based real-time imaging through dynamic scattering media. Light Sci. Appl. 13, 194 (2024).
Feng, B. Y. et al. Neuws: neural wavefront shaping for guidestar-free imaging through static and dynamic scattering media. Sci. Adv. 9, eadg4671 (2023).
Xie, M. et al. WaveMo: learning wavefront modulations to see through scattering. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition. 25276–25285 (IEEE, 2024).
Valzania, L. & Gigan, S. Online learning of the transmission matrix of dynamic scattering media. Optica 10, 708–716 (2023).
Jiang, W., Boominathan, V. & Veeraraghavan, A. Nert: implicit neural representations for unsupervised atmospheric turbulence mitigation. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition. 4235–4242 (IEEE, 2023).
Zhang, X., Chimitt, N., Chi, Y., Mao, Z. & Chan, S. H. Spatio-temporal turbulence mitigation: a translational perspective. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2889–2899 (IEEE, 2024).
Labeyrie, A. Attainment of diffraction limited resolution in large telescopes by Fourier analysing speckle patterns in star images. Astron. Astrophys. 6, 85 (1970).
Rosenfeld, M. et al. Acousto-optic ptychography. Optica 8, 936–943 (2021).
Wang, D., Sahoo, S. K., Zhu, X., Adamo, G. & Dang, C. Non-invasive super-resolution imaging through dynamic scattering media. Nat. Commun. 12, 3150 (2021).
Zhang, W. et al. High-throughput imaging through dynamic scattering media based on speckle de-blurring. Opt. Express 31, 36503–36520 (2023).
Guo, E., Zhou, C., Zhu, S., Bai, L. & Han, J. Dynamic imaging through random perturbed fibers via physics-informed learning. Opt. Laser Technol. 158, 108923 (2023).
Bertolotti, J. et al. Non-invasive imaging through opaque scattering layers. Nature 491, 232–234 (2012).
Katz, O., Heidmann, P., Fink, M. & Gigan, S. Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations. Nat. Photonics 8, 784–790 (2014).
Edrei, E. & Scarcelli, G. Optical imaging through dynamic turbid media using the fourier-domain shower-curtain effect. Optica 3, 71–74 (2016).
Salhov, O., Weinberg, G. & Katz, O. Depth-resolved speckle-correlations imaging through scattering layers via coherence gating. Opt. Lett. 43, 5528–5531 (2018).
Wang, X., Liu, H., Chen, M., Liu, Z. & Han, S. Imaging through dynamic scattering media with stitched speckle patterns. Chin. Opt. Lett. 18, 042604 (2020).
Zhu, X. et al. Non-invasive super-resolution imaging through scattering media using object fluctuation. Laser Photonics Rev. 18, 2300712 (2024).
Makowski, A. et al. Low photon number non-invasive imaging through time-varying diffusers. Preprint at https://doi.org/10.48550/arXiv.2409.18072 (2024).
Fienup, J. R. Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett. 3, 27–29 (1978).
Lambert, W., Cobus, L. A., Frappart, T., Fink, M. & Aubry, A. Distortion matrix approach for ultrasound imaging of random scattering media. Proc. Natl. Acad. Sci. 117, 14645–14656 (2020).
Cuche, E., Marquet, P. & Depeursinge, C. Spatial filtering for zero-order and twin-image elimination in digital off-axis holography. Appl. Opt. 39, 4070–4075 (2000).
Choi, W. et al. Flexible-type ultrathin holographic endoscope for microscopic imaging of unstained biological tissues. Nat. Commun. 13, 4469 (2022).
Jo, Y. et al. Through-skull brain imaging in vivo at visible wavelengths via dimensionality reduction adaptive-optical microscopy. Sci. Adv. 8, eabo4366 (2022).
Badon, A. et al. Smart optical coherence tomography for ultra-deep imaging through highly scattering media. Sci. Adv. 2, e1600370 (2016).
Liba, O. et al. Speckle-modulating optical coherence tomography in living mice and humans. Nat. Commun. 8, 15845 (2017).
Idell, P. S., Fienup, J. R. & Goodman, R. S. Image synthesis from nonimaged laser-speckle patterns. Opt. Lett. 12, 858–860 (1987).
Metzler, C. A. et al. Deep-inverse correlography: towards real-time high-resolution non-line-of-sight imaging. Optica 7, 63–71 (2020).
Gateau, J., Rigneault, H. & Guillon, M. Complementary speckle patterns: deterministic interchange of intrinsic vortices and maxima through scattering media. Phys. Rev. Lett. 118, 043903 (2017).
Haim, O., Boger-Lombard, J. & Katz, O. Image-guided computational holographic wavefront shaping. Nat. Photonics 19, 44–53 (2025).
Weinberg, G., Kang, M., Choi, W., Choi, W. & Katz, O. Ptychographic lensless coherent endomicroscopy through a flexible fiber bundle. Opt. Express 32, 20421–20431 (2024).
Najar, U. et al. Harnessing forward multiple scattering for optical imaging deep inside an opaque medium. Nat. Commun. 15, 7349 (2024).
Kang, S. et al. Tracing multiple scattering trajectories for deep optical imaging in scattering media. Nat. Commun. 14, 6871 (2023).
Oh, C., Hugonnet, H., Lee, M. & Park, Y. Digital aberration correction for enhanced thick tissue imaging exploiting aberration matrix and tilt-tilt correlation from the optical memory effect. Nat. Commun. 16, 1685 (2025).
Trussell, H. & Hunt, B. Sectioned methods for image restoration. IEEE Trans. Acoust. Speech Signal Process. 26, 157–164 (1978).
Alterman, M., Bar, C., Gkioulekas, I. & Levin, A. Imaging with local speckle intensity correlations: theory and practice. ACM Trans. Graph. 40, 1–22 (2021).
Mertz, J., Paudel, H. & Bifano, T. G. Field of view advantage of conjugate adaptive optics in microscopy applications. Appl. Opt. 54, 3498–3506 (2015).
Kwon, Y. et al. Computational conjugate adaptive optics microscopy for longitudinal through-skull imaging of cortical myelin. Nat. Commun. 14, 105 (2023).
Katz, O., Small, E. & Silberberg, Y. Looking around corners and through thin turbid layers in real time with scattered incoherent light. Nat. Photonics 6, 549–553 (2012).
Sunray, E., Weinberg, G., Laufer, B. & Katz, O. Experimental data for "Matrix-based imaging through dynamic scattering" [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15766625 (2025).
Weinberg, G., Sunray, E., & Katz, O. I-CLASS: Source Code for ‘Noninvasive megapixel fluorescence microscopy through scattering layers by a virtual reflection-matrix’ (v0.1.0). Zenodo. https://doi.org/10.5281/zenodo.11266090 (2024).
Kang, S., Yoon, S. & Choi, W. Implementation of reflection matrix microscopy: an algorithm perspective. J. Phys. Photonics 7, 023002 (2025).
Acknowledgements
This project was supported by the H2020 European Research Council (ERC) grant no. 101002406 (to O.K.). This research was supported by a scholarship sponsored by the Ministry of Science & Technology, Israel (to B.L.).
Author information
Authors and Affiliations
Contributions
O.K. proposed and conceptualized the project with E.S. and G.W.; E.S., G.W., and O.K. designed the incoherent imaging experimental setup. B.L. and O.K. designed the coherent imaging experimental setup. G.W. carried out the incoherent imaging measurements and data analysis with input from E.S.; B.L. carried out the coherent imaging measurements and data analysis. E.S. wrote the reconstruction algorithm code and numerical simulations with input from G.W.; O.K. supervised the project. All authors contributed to the writing of the manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Alexandre Aubry and Jacopo Bertolotti for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sunray, E., Weinberg, G., Laufer, B. et al. Matrix-based imaging through dynamic scattering. Nat Commun 16, 9413 (2025). https://doi.org/10.1038/s41467-025-64422-x