Abstract
Parallel light detection and ranging (LiDAR) is widely adopted for its low computational burden and rapid three-dimensional (3D) reconstruction. Yet, it remains constrained by inter-channel crosstalk and limited long-distance performance. Here, we introduce a spectrally encoded parallel LiDAR based on super-bunching light, whose different wavelengths exhibit negligible cross-correlation, enabling quasi-orthogonal channel division without crosstalk. This approach supports robust parallel ranging, rapid and accurate 3D reconstruction, and effective target classification. Our scheme achieves high-precision ranging with errors as low as 4 mm and can detect targets moving at velocities as low as 5 mm/s. It further enables reliable ranging and 3D imaging beyond 40 m, with exceptional anti-interference performance, even when noise exceeds the echo signal by three orders of magnitude. Combining high precision, sensitivity, long-range detection, dynamic target acquisition, precise 3D reconstruction, and robust anti-interference, our LiDAR offers significant potential for enhancing environmental perception technologies.
Introduction
Light detection and ranging (LiDAR) forms the backbone of environmental perception1,2,3,4, supporting autonomous capabilities across a wide range of unmanned systems, from autonomous vehicles5,6,7,8,9 and drones to AI-integrated robotics4,5. In contrast to vision-only approaches, LiDAR enables high-precision, real-time three-dimensional (3D) environmental mapping10,11,12,13,14 with low computational overhead15,16,17, ensuring the safety and reliability of autonomous systems. Leveraging its advanced optical ranging and recognition capabilities, LiDAR enhances the operational robustness of autonomous systems18,19, thereby supporting their survivability in complex and rapidly changing environments. A key trend in current LiDAR technology is the adoption of parallelized architectures20,21, which enable multi-channel simultaneous measurements22,23. On the one hand, parallel LiDAR achieves 3D imaging through one-dimensional (1D) scanning24,25,26, significantly improving acquisition rates and angular resolution12,27. On the other hand, its negligible reliance on beam-steering mechanisms28,29,30 ensures robust vibration resistance12,19. Enabled by rapid scanning and precise imaging, parallel LiDAR will catalyze transformative progress in the field of advanced autonomous systems31,32.
Nevertheless, the performance of parallel LiDAR is significantly hindered by inter-channel interference in both the temporal and spectral domains33,34,35. Several approaches have been developed to address this issue. For example, time-stretch technology facilitates channel-isolated parallel ranging by temporally separating multi-wavelength pulses36,37,38. However, the intrinsic tradeoff between detection range and channel count prevents time-stretch technology from realizing high-resolution, long-distance imaging. Furthermore, these systems exhibit diminished robustness in dynamic object detection. Following time-stretch technology, optical code division multiple access (OCDMA) effectively eliminates channel interference through time-domain orthogonal coding of the light, while also offering considerable refresh rates and image resolution11,21. Yet, the pseudo-random modulation employed in the encoding and decoding process of the OCDMA scheme relies on high-speed devices, which place high demands on both system hardware and software. Chaotic microcombs have recently gained significant attention in parallel LiDAR due to their inherent immunity to channel congestion39,40, which stems from their orthogonal optical channels. Nonetheless, the symmetric comb lines centered at the pump frequency must be collected separately to avoid their interference, imposing stringent demands on the number of detectors. Meanwhile, the capacity for long-distance ranging is hindered by the restricted propagation distance of chaotic lasers. Next-generation parallel LiDAR must not only be free from time-frequency congestion across multiple channels but also provide high precision, long-distance detection, and robust dynamic object recognition.
In this work, we introduce a parallel LiDAR equipped with spectrally encoded optical channels, where the natural quasi-orthogonality between different wavelengths of a super-bunching light is leveraged to eliminate time-frequency congestion between channels. This spectrally encoded parallel LiDAR achieves millimeter-level ranging accuracy and enables precise velocity measurements for slow-moving targets with speeds as low as 5 mm/s. We further demonstrate high-resolution 3D imaging utilizing 51 channels, with an integration time of just 10 μs for each individual scanning segment. Parallel LiDAR empowered by a super-bunching light overcomes the distance limitation of ranging imposed by pulse repetition periods without intricate encoding and decoding programs11,21, enabling spatially parallel ranging beyond 40 m. In addition, this system demonstrates exceptional noise immunity even under extreme conditions where noise surpasses the echo signal by over three orders of magnitude. The combination of super-bunching light and parallel LiDAR proposed in this work offers a pathway for advancing high-performance autonomous systems.
Results and Discussion
Super-bunching enabled spectrally encoded optical channels
As a new-generation nonclassical light source, super-bunching light exhibits a giant second-order intensity correlation (g(2)(τ)) and extreme multi-photon emission probabilities. The normalized intensity correlation, g(2)(0), of super-bunching light can be as high as 10, significantly stronger than that of thermal light (for which g(2)(0) equals 2, known as the bunching effect)41. The natural quasi-orthogonality between different wavelengths of a super-bunching light offers an opportunity to overcome the time-frequency congestion between channels. Figure 1a presents the schematic of the experimental setup for generating a super-bunching light source, which was produced via random nonlinear interactions in an optimized photonic crystal fiber (PCF) pumped by a picosecond (ps) laser with a central wavelength of 1064 nm and a repetition rate of 80 MHz (see Methods). Figure 1b presents the spectrum of the super-bunching light, spanning from 480 to 1750 nm, a feature absent in super-bunching light generated via spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM)42,43, which serves as the foundation for multi-channel division in parallel LiDAR systems. An acousto-optic tunable filter (AOTF) was employed to divide the broadband super-bunching light into spectrally separated, artificially selected optical channels. Each optical channel was split into reference and output paths using a 1:9 beam splitter (BS). Channels in both paths were then spatially separated by a blazed grating and detected using avalanche photodetectors (APDs) for subsequent correlation analysis. The inset of Fig. 1b presents the results of the second-order correlation measurement for the 633 nm channel, showing g(2)(0) = 5, which confirms the super-bunching nature (g(2)(0) > 2). In addition, the auto-correlation measurement for this channel was conducted, as shown in Fig. 1c.
The result exhibits delta-function-like behavior, effectively suppressing self-interference within the channel. The ranging resolution Rres of the spectrally encoded parallel LiDAR is governed by the full width at half maximum (FWHM) of the auto-correlation function, which is fundamentally constrained by the detector’s bandwidth18:

$$R_{\mathrm{res}} = \frac{c\,\tau_{\mathrm{FWHM}}}{2}$$
where c is the speed of light. As depicted in the right inset of Fig. 1c, the FWHM of the auto-correlation function for channel 633 nm is 0.6 ns, corresponding to a ranging resolution of 9 cm. (Due to the high uniformity among channels divided by AOTF, the characteristics of channel 633 nm are representative of the entire set. See Supplementary NOTE 1)
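As a quick sanity check on the numbers above, the stated FWHM maps to the stated resolution through Rres = c·τFWHM/2. A minimal sketch, using only the values quoted in the text:

```python
# Values taken from the text: c ≈ 3e8 m/s, autocorrelation FWHM = 0.6 ns.
c = 3.0e8            # speed of light, m/s
tau_fwhm = 0.6e-9    # autocorrelation FWHM of the 633 nm channel, s
r_res = c * tau_fwhm / 2   # R_res = c * tau_FWHM / 2
print(f"ranging resolution: {r_res * 100:.0f} cm")  # → 9 cm
```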
a Experiment setup. A broadband super-bunching light was produced by pumping a photonic crystal fiber (PCF) with picosecond pulses centered at 1064 nm. An acousto-optic tunable filter (AOTF) is then used to select individual optical channels. Each channel was divided into reference and output paths via a 1:9 beam splitter (BS), spatially separated by a grating, and detected by avalanche photodetectors (APDs) for correlation analysis. b Spectrum of the super-bunching light. The arrow indicates the wavelength of the pump. The inset shows the second-order correlation results of channel 633 nm, yielding g(2)(0) = 5.02. c The auto-correlation function of channel 633 nm. Magnified views of the autocorrelation function’s sidelobes and central peak are shown in the left and right insets, respectively. The full width at half maximum (FWHM) of the autocorrelation peak is 0.6 ns. d The cross-correlation coefficients between each pair of the eight individual channels. The data are normalized to the global maximum across all measured data. e The cross-correlation functions between channels 532/532 nm and 532/542 nm. The extremum correlation peak of the 532/532 nm channels shows a delay time of 133.3 ns, which is greater than the pulse period (12.5 ns).
To validate the quasi-orthogonality among the spectrally separated optical channels, we conducted cross-correlation function measurements. Figure 1e shows an extremum correlation peak (quantified by a peak-to-sidelobe ratio (PSLR) greater than 2, see Supplementary Note 3) in the cross-correlation of channels 532 nm and 532 nm. In contrast, no such peak is observed in the cross-correlation between channels 532 nm and 542 nm. Additionally, Fig. 1d demonstrates pairwise cross-correlation measurements among 8 individual channels, with all values normalized to the global maximum. Notably, the presence of extremum correlation peaks in the measurements between identical channels (orange pixels in Fig. 1d) and their absence between different channels (blue/green pixels in Fig. 1d) confirm negligible inter-channel crosstalk. Consequently, the time-domain photon bunching effect of super-bunching light enables the division of frequency-quasi-orthogonal, crosstalk-free optical channels, which serve as spectrally encoded channels. The AOTF imposes a limitation on spectral resolution, resulting in a minimum crosstalk-free channel spacing of 8 nm (see Supplementary Note 4). During the ranging operation, as shown in Fig. 1e, the extremum correlation peak shifts from the zero-delay point because of the differing optical path lengths between the output and reference loops. Although Fig. 1e exhibits results beyond the pulse period, the ranging capability of conventional time-of-flight (ToF) LiDAR, which is driven by a pulsed laser, is strictly limited by the pulse repetition period, making measurements beyond one period difficult36. Although frequency-modulated continuous-wave (FMCW) LiDAR driven by a continuous laser is not limited by the pulse period, it imposes stringent requirements on laser stability, detector sensitivity, and signal processing, all of which substantially increase system complexity, escalate costs, and hinder practical implementation.
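The quasi-orthogonality test above can be mimicked numerically. The sketch below is an illustrative simulation, not the experimental pipeline: independent noise-like intensity traces stand in for two spectral channels, and the trace lengths, seed, delay, and the 0.2 sidelobe threshold are all assumptions. Identical channels yield a sharp correlation peak (PSLR > 2); distinct channels do not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, delay = 4096, 137
ch_a = rng.exponential(size=n)   # stand-in for the 532 nm channel trace
ch_b = rng.exponential(size=n)   # stand-in for the 542 nm channel trace

def norm_xcorr(ref, echo):
    """Normalized linear cross-correlation between two intensity traces."""
    ref = ref - ref.mean()
    echo = echo - echo.mean()
    r = np.correlate(echo, ref, mode="full")
    return r / (np.linalg.norm(ref) * np.linalg.norm(echo))

same = norm_xcorr(ch_a, np.roll(ch_a, delay))   # identical channels, delayed echo
cross = norm_xcorr(ch_a, np.roll(ch_b, delay))  # distinct channels

# A peak-to-sidelobe ratio (PSLR) above 2 marks a genuine correlation peak.
pslr = same.max() / np.abs(np.delete(same, same.argmax())).max()
print(pslr > 2, cross.max() < 0.2)
```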
Leveraging the temporal photon bunching effect of super-bunching light, our system achieves precise distance measurements beyond the pulse period, for example, with a delay time of 133.3 ns for channel 532 nm, as shown in Fig. 1e, corresponding to a distance of 40.0 m. In contrast to optical code division multiple access (OCDMA) systems requiring intricate encoding and decoding11,21, our scheme enables ranging beyond the pulse periods with a simplified design. Moreover, OCDMA loses this capability when its channels share the same temporal code. The ability to perform light ranging beyond the pulse period paves the way for long-distance LiDAR applications.
Spectrally encoded parallel ranging
With spectrally encoded optical channels derived from super-bunching light, we present a parallel LiDAR. Figure 2a is a schematic of the spectrally encoded parallel LiDAR. In the output path, channels are spatially separated by a blazed grating (600 lines/mm) and then steered toward the target by a scanning galvanometer. The multiplexed echo signal reflected by the target is captured through a lens and focused onto a single APD. In the reference path, channels separated by the blazed grating are guided into the photodetector (PD) array, with each channel individually detected by one detector (see Supplementary Note 7). The ranging accuracy of the LiDAR system relies on detectors with superior performance in both the echo and reference paths, aligning with the setup used in ghost imaging44,45. Cross-correlation is performed between the mixed echo signal and each channel’s reference signal to retrieve individual range information22,40:

$$R(\Delta t_i) = \langle x_i(t)\, y(t+\Delta t_i) \rangle$$
where xi(t) is the reference signal of channel i, y(t+Δti) is the echo signal, and R(Δti) is the cross-correlation function of channel i. Given that the extremum correlation peak appears exclusively between matching channels, as depicted in Fig. 2b, the delay time Δti represents the round-trip time of flight for channel i, traveling from the galvanometer to the target and returning to the detector. Therefore, the corresponding target distance Li is calculated as:

$$L_i = \frac{c\,\Delta t_i}{2}$$
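A minimal numerical sketch of this correlation ranging step, under assumed parameters (a 5 GS/s sampling rate, a synthetic exponential-intensity reference, and a chosen delay of 667 samples) rather than the experimental values:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 5e9                                   # assumed sampling rate, 5 GS/s
delay_samples = 667                        # delay to be recovered (synthetic)
ref = rng.exponential(size=8192)           # reference trace x_i(t)
echo = 0.3 * np.roll(ref, delay_samples)   # attenuated, delayed echo y(t)

r = np.correlate(echo - echo.mean(), ref - ref.mean(), mode="full")
lag = int(r.argmax()) - (len(ref) - 1)     # lag of the extremum correlation peak
dt = lag / fs                              # recovered round-trip time Δt_i
distance = 3e8 * dt / 2                    # L_i = c·Δt_i / 2
print(f"Δt = {dt * 1e9:.1f} ns, L = {distance:.2f} m")
```

The peak lag of the cross-correlation recovers the injected delay exactly, and the distance follows from the round-trip relation above.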
a Experiment setup. In the output path, different channels were spatially separated by a blazed grating (600 lines/mm) and subsequently directed to distinct positions on the target by a galvanometer. Vertical scanning was achieved through a galvanometer. The target is a whiteboard placed on a motorized translation stage. b Schematic of the multi-channel correlation detection. Only the corresponding channels in the echo and reference signals yield the extremum value of the cross-correlation function. c Ranging results of channel 633 nm and corresponding errors. d Results of speed measurements for the target moving at different speeds. The black dashed line represents the diagonal line of the axis box. The error bars result from multiple measurements. e Parallel ranging results based on 16 individual channels. The panels, from bottom to top, display the spectral data of 16 channels, the optical power of each channel, the ranging results of 5 measurements, and the corresponding SNR, respectively. The error bars result from multiple measurements.
Performance tests of the LiDAR’s ranging capability were conducted using channel 633 nm. As shown in Fig. 2a, a whiteboard serves as the target, whose horizontal position is controlled by a high-precision stepper motor, moving away from the detector with a step size of 2 cm. The starting position of the whiteboard was set as the reference point to assess its relative distance in the test. Figure 2c presents the results of 22 tests, demonstrating a maximum ranging error of 4 mm, an advanced level for parallel LiDAR systems. The ranging error was defined as the deviation between the ranging result and the known actual distance of the whiteboard. Moreover, with an integration time of just 10 μs per test, the system is capable of completing high-precision ranging tasks in rapidly changing environments. High-precision ranging capability allows for the reliable detection of low-speed targets. In low-speed target detection, the whiteboard was reciprocated at a constant velocity, and echo signals were continuously sampled using a high-speed oscilloscope. The corresponding results in Fig. 2d show an average speed measurement error of 4.1%, which is defined as the percentage deviation between the measured value and the preset speed of the stepper motor. Even when the target moves at a constant speed as low as 5 mm/s, the measured speed is 4.6 mm/s, corresponding to a measurement error of 0.4 mm/s. With its high ranging precision and dynamic detection capability, the spectrally encoded parallel LiDAR offers significant advantages in detecting low-speed dynamic targets.
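The low-speed measurement can be viewed as extracting the slope of repeated range readings over time. A hedged sketch with illustrative numbers (a 5 mm/s target and 2 mm ranging noise; these are assumptions, not the experimental data):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(0.0, 10.0, 0.5)        # measurement times, s (20 readings)
v_true = 0.005                        # 5 mm/s target speed (illustrative)
ranges = 1.0 + v_true * t + rng.normal(0.0, 0.002, t.size)  # 2 mm noise

v_est = np.polyfit(t, ranges, 1)[0]   # slope of the linear fit, m/s
print(f"measured speed: {v_est * 1e3:.2f} mm/s")
```

Averaging many high-precision range readings in the fit is what makes a speed well below the single-shot ranging resolution recoverable.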
Furthermore, we conducted measurements to assess the parallel performance of our scheme. A total of 16 optical channels were configured over the 530–680 nm spectral range with 10 nm spacing. Since all channels performed distance measurements of the whiteboard simultaneously, the ranging results of all channels should exhibit excellent consistency. The ranging results from the 5 tests shown in Fig. 2e are consistent with theoretical expectations, while the signal-to-noise ratio (SNR) for all 16 channels remains above 15 dB. These results demonstrate that our spectrally encoded parallel LiDAR achieves high-precision parallel ranging using only a single detector to collect the echo signals, while maintaining a high SNR.
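The single-detector parallel scheme can be sketched as follows: quasi-orthogonal channel traces are summed into one mixed echo, and correlating that mixture against each channel's own reference recovers only that channel's delay. All traces, wavelengths, and delays below are synthetic assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8192
delays = {532: 150, 542: 310, 552: 475}   # assumed per-channel delays (samples)
refs = {wl: rng.exponential(size=n) for wl in delays}

# Single-detector echo: the delayed channel traces overlap on one APD.
mixed_echo = sum(np.roll(refs[wl], d) for wl, d in delays.items())
mixed_echo = mixed_echo - mixed_echo.mean()

recovered = {}
for wl, ref in refs.items():
    r = np.correlate(mixed_echo, ref - ref.mean(), mode="full")
    recovered[wl] = int(r.argmax()) - (n - 1)   # peak appears only for channel wl

print(recovered)
```

Because the channels are mutually uncorrelated, the non-matching terms in the mixture contribute only low-level background to each correlation, so every channel's delay is read out independently from the same detector record.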
3D imaging based on the spectrally encoded parallel LiDAR
The 3D imaging capabilities of the spectrally encoded parallel LiDAR have also been explored. The 3D imaging targets, as shown in Fig. 3a, included a pedestrian, a cybertruck, and a helicopter, which were spatially arranged in sequence with 10 cm and 20 cm gaps between them (Fig. S8). Each pair of targets was spaced beyond the LiDAR’s ranging resolution, facilitating their distinction in 3D reconstruction. Figure 3b presents the 3D reconstruction results of the three targets based on vertical scanning data (Fig. S10). A total of 51 optical channels and 88 vertical scan segments were employed for 3D imaging, resulting in an image resolution of 51 × 88 pixels to recover finer target details. The horizontal and vertical errors introduced during the merging of multiple scans are negligible (see Supplementary Note 9). The integration time for each scan segment is 10 μs. Considering the constraint imposed by the minimum allowable spectral spacing of 8 nm between adjacent channels, the spectral range of 530–680 nm cannot accommodate 51 mutually independent channels simultaneously. Hence, the 3D reconstruction is achieved through three successive scans, each utilizing 17 channels spaced at 9 nm intervals, as depicted in Fig. 3c. By interleaving and merging the three scans, a composite set of 51 channels with an effective spectral interval of 3 nm is constructed. Although state-of-the-art AOTFs are capable of achieving channel separation below 8 nm without crosstalk46, our approach offers a practical strategy for realizing multi-channel 3D imaging beyond the limitation of minimum channel intervals. While the OCDMA method suffers a reduction in detection distance as the channel count increases21,36, our approach scales the channel count without any such loss. Figure 3d presents the vertical scanning results for the 596 nm, 632 nm, and 659 nm channels.
The distinct separation of data points corresponding to different targets in distance confirms that our LiDAR successfully achieves accurate 3D reconstruction of the three spatially distinct objects. This capacity is further supported by the photon-counting statistics of the echo signals from different distances, as illustrated in Fig. 3e.
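The three-scan interleaving described above reduces to simple arithmetic on channel wavelengths. A short sketch, with the channel grids inferred from the stated 530–680 nm range, 17 channels per scan, and 9 nm intra-scan spacing (the 3 nm scan-to-scan offset is implied by the merged 3 nm grid):

```python
# Each scan covers 17 channels at 9 nm spacing; successive scans are offset
# by 3 nm, and merging the three scans yields the 51-channel, 3 nm grid.
scans = [[530 + offset + 9 * i for i in range(17)] for offset in (0, 3, 6)]
merged = sorted(wl for scan in scans for wl in scan)

print(len(merged), merged[0], merged[-1], merged[1] - merged[0])  # → 51 530 680 3
```

Within any single scan the 9 nm spacing stays above the AOTF's 8 nm crosstalk-free minimum, while the merged grid achieves the finer 3 nm effective interval.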
a Schematic of imaging targets, including a pedestrian, a cybertruck, and a helicopter, which were spatially arranged in sequence with 10 cm and 20 cm gaps between them. b Imaging results based on 51 individual channels. The vertical direction contains 88 segments, and the resolution is 51 × 88 pixels. Three dashed lines represent the corresponding positions of channels 596 nm, 632 nm, and 659 nm, respectively. The dots in the XY plane represent the projections of the three objects. c Schematic of the division of 51 channels. The 51-channel imaging was performed in three successive scans, each covering 17 channels with a spacing of 9 nm, resulting in an effective channel spacing of 3 nm after combining the scan results. d Vertical scanning results of channels 596 nm, 632 nm, and 659 nm, respectively. e The statistical histogram of the pixels contained in the three objects in (b). f Statistical box plot of the correlation intensity between the reference and the echo signals corresponding to each pixel of the three objects.
Beyond accurate 3D reconstruction, spectrally encoded parallel LiDAR can also perform target-specific identification by analyzing the correlation features in the echo signals. (See Supplementary Note 10) In the box plot shown in Fig. 3f, we statistically analyzed the maximum correlation coefficients from the unnormalized cross-correlation functions among all pixels of the three objects in Fig. 3b. The variation in object reflectivity results in differing statistical distributions of the correlation coefficients. Due to the low reflectivity of the pedestrian’s polyester clothing, the correlation coefficients remain below 1. The elevated surface reflectivity of the Cybertruck and helicopter contributes to stronger correlation coefficients, and the Cybertruck’s comparatively smoother surface results in the highest correlation coefficient among the targets. The integration of correlation-based target classification and identification with deep learning will lay the foundation for the advancement of intelligent parallel LiDAR systems.
Long-distance and interference-free parallel LiDAR
Despite growing demand, parallel LiDAR systems operating beyond 10 m remain unreported, even with approaches such as chaotic microcombs and time-stretching22,36,40. Based on the spectrally encoded parallel LiDAR, we demonstrate high-precision ranging at distances exceeding 20 m, addressing this critical challenge. Figure 4a presents the parallel ranging results for 20 whiteboards evenly spaced between 21 m and 40 m, whose positions were set and verified using an electronic total station (ETS) (see Methods). A dual-grating configuration is employed to spatially separate the optical channels and suppress angular dispersion over extended propagation distances (Fig. S11). Each optical channel was directed to a distinct whiteboard, enabling all parallel ranging results shown in Fig. 4a to be captured simultaneously in a single acquisition based on 20 channels. Remarkably, with an integration time of 10 μs, the ranging error remains below 3.5 cm, even at distances of up to 40 m. A performance comparison of advanced LiDAR systems reveals that the spectrally encoded parallel LiDAR achieves long-distance ranging beyond the pulse period while maintaining a lower system cost (see Supplementary Note 15). In long-distance ranging, the triangulation error introduced by the optical path deviation results in all errors in Fig. 4a being non-negative (Fig. S12)47,48. We further demonstrated the 3D imaging capability of the spectrally encoded parallel LiDAR at distances beyond 10 m. Using 16 channels, a vertical scan was performed on an opaque, cross-shaped target located approximately 11 m away, with a whiteboard placed about 10 cm behind the target. The imaging results presented in Fig. 4b, together with the spatial pixel distribution across multiple spectral channels (Fig. S14), validate the system’s ability to perform high-precision 3D reconstruction of targets located over 10 m away.
a Long-distance ranging results of 20 targets based on 20 channels and corresponding errors. The inset is the schematic of the setup. The 20 targets were arranged at equal intervals within a distance range of 21 m to 40 m from the LiDAR. Each channel was assigned to a specific target. b Long-distance imaging results based on 16 channels. The vertical direction consists of 16 segments, resulting in a resolution of 16 × 16 pixels. The inset shows the imaging target composed of a cross placed in front of a background plate, with a 10 cm spacing between them. c, d The anti-interference performance of the spectrally encoded parallel LiDAR. SNR with variations of ISR under the interference of CW laser (c) and pulsed laser (d) signals. The inset shows an experimental setup where the echo and interference signals are combined by a beam combiner before being injected into the detector. The error bars result from multiple measurements.
Owing to the temporal photon bunching effect of super-bunching light and the mutual quasi-orthogonality of its optical channels, the spectrally encoded parallel LiDAR demonstrates robust anti-interference performance. To assess this, we employed two Gaussian-distributed noise sources: a continuous-wave (CW) laser and a pulsed laser with the same repetition rate as the super-bunching light. The noise signal was mixed with the echo and entered the same APD simultaneously, as illustrated in Fig. 4c, d (see Supplementary Note 13). By adjusting the noise intensity using a variable optical attenuator, we investigated the relationship between the interference-to-signal ratio (ISR) and the SNR of the cross-correlation function. The ISR is defined as the power ratio between the interference signal and the echo signal received by the APD22. As illustrated in Fig. 4c, under CW noise, the dynamic detection range of the correlation peak is broad. The SNR remains essentially constant until the ISR reaches 25 dB. Beyond this threshold, further increases in noise lead to a linear degradation of the SNR. Even at an ISR of 30 dB, the SNR remains above the 3 dB detection threshold, demonstrating that our parallel LiDAR can effectively suppress CW noise exceeding the echo signal by over a thousandfold, thereby significantly reducing the required laser radiation power. In the case of pulsed laser noise, as shown in Fig. 4d, although the dynamic detection range is relatively narrow, our parallel LiDAR still demonstrates robust anti-interference capability, maintaining effective detection even when the noise is over 125 times stronger than the echoes. The strong anti-interference capability ensures the reliable operation of the spectrally encoded parallel LiDAR in complex environments.
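The CW-noise immunity can be illustrated with a toy simulation: Gaussian interference 1000 times stronger than the echo (30 dB ISR) is added before correlation, yet the matched peak still stands out after sufficient integration. The trace length, seed, circular-delay simplification, and noise model are assumptions, not the experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(4)
n, delay = 1 << 17, 400
ref = rng.exponential(size=n)       # synthetic channel reference trace
echo = np.roll(ref, delay)          # circularly delayed echo for simplicity

isr_db = 30                         # interference 1000x the echo power
sigma = np.sqrt(10 ** (isr_db / 10) * echo.var())
noisy = echo + rng.normal(0.0, sigma, n)   # strong Gaussian (CW-like) noise

# FFT-based circular cross-correlation between the noisy echo and the reference.
a = np.fft.fft(noisy - noisy.mean())
b = np.fft.fft(ref - ref.mean())
r = np.fft.ifft(a * np.conj(b)).real
print("peak at true delay despite 30 dB ISR:", int(r.argmax()) == delay)
```

The correlation peak grows linearly with the number of integrated samples while the uncorrelated-noise floor grows only as its square root, which is why a long enough record pulls the peak out of interference three orders of magnitude stronger than the echo.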
In summary, we demonstrate a spectrally encoded parallel LiDAR system enabled by a super-bunching light source. Harnessing the temporal photon bunching effect of super-bunching light, our spectrally encoded channels overcome inter-channel interference in both the temporal and spectral domains and surpass the inherent pulse-period constraints of conventional pulsed detection. Our parallel LiDAR system demonstrates a ranging error of 4 mm and enables accurate tracking of low-speed targets with a speed measurement error of 4.1%. By integrating dispersive elements and scanning galvanometers, our parallel LiDAR achieves rapid and high-precision 3D reconstruction of objects. In addition, we demonstrate the long-distance ranging and imaging capabilities of this parallel LiDAR, along with its robust anti-interference performance. Further progress in system integration and the extension of the system’s capabilities to a broader range of spectral bands will accelerate its practical application. With its high sensitivity, precision, adaptability to dynamic targets, extended range, and robust noise immunity, the spectrally encoded parallel LiDAR paves the way for the development of next-generation high-performance parallel LiDAR.
Methods
Schematic diagrams of the experimental setups are shown in Figs. 1a, 2a of the main text. A homemade picosecond laser with a center wavelength of 1064 nm was used to pump the PCF (SC-PRO, YSL Photonics). The pump pulse has an energy of 123 nJ, a pulse duration of ~136 ps, and a repetition rate of 80 MHz, and is coupled into a 10 m-long PCF. An AOTF (AOTF0311, YSL Photonics) was employed to divide the optical channels. The bandwidth of each individual channel output by the AOTF is 3 nm. The minimum channel spacing without crosstalk is 8 nm; all channels are spatially combined and output simultaneously. The configurations for optical channel division of the AOTF are controlled by electrical signals, with a switching time of a few microseconds. A blazed grating (600 lines/mm, BG25-600-500) was used to spatially separate channels in both the output and reference paths. A scanning galvanometer (FSM-300-01, Newport) was used to direct channels toward the target. The mixed echoes were collected by a lens (f = 50 mm) and then focused onto an APD (APD210, Thorlabs). The PD array (DET10A2, Thorlabs) was used in the reference path. A high-speed oscilloscope (MSO58B, Tektronix) was employed to sample the signals continuously.
In the test for Fig. 2c, a linear stepper motor stage (LTS150/M, Thorlabs) was used to move the whiteboard, with a positioning error of ±0.6 μm, which is negligible compared to our measurement errors. In the test for Fig. 4a, an electronic total station (Leica, TZ08) was employed to verify the positions of the whiteboards. The total station has a precision of 1 mm for distances within 1000 m, far smaller than the error in the ranging measurements.
Data availability
All data generated in this study are provided in the paper or the supplementary materials.
References
LiDAR drives forwards. Nat. Photonics 12, 441–441 (2018).
Kim, I. et al. Nanophotonics for light detection and ranging technology. Nat. Nanotechnol. 16, 508–524 (2021).
Takeuchi, N., Sugimoto, N., Baba, H. & Sakurai, K. Random modulation cw lidar. Appl. Opt. 22, 1382–1386 (1983).
Liu, Z. et al. Positive and negative obstacles detection based on Dual-LiDAR in field environments. IEEE Robot. Autom. Lett. 9, 6768–6775 (2024).
Schmitt, R. H. et al. Advances in Large-Scale Metrology–Review and future trends. CIRP Ann. 65, 643–665 (2016).
Behroozpour, B., Sandborn, P. A. M., Wu, M. C. & Boser, B. E. Lidar System Architectures and Circuits. IEEE Commun. Mag. 55, 135–142 (2017).
Chen, Z. et al. Predicting driving comfort in autonomous vehicles using road information and multi-head attention models. Nat. Commun. 16, 2709 (2025).
Coffey, V. C. Integrated Lidar: Transforming Transportation. Opt. Photonics N. 30, 40–47 (2019).
Liang, D. et al. Evolution of laser technology for automotive LiDAR, an industrial viewpoint. Nat. Commun. 15, 7660 (2024).
Ho, H. L. et al. High-speed 3D imaging using a chaos lidar system. Eur. Phys. J. Spec. Top. 231, 435–441 (2022).
Kim, G. & Park, Y. LIDAR pulse coding for high resolution range imaging at improved refresh rate. Opt. Express 24, 23810–23828 (2016).
Mahjoubfar, A. et al. High-speed nanometer-resolved imaging vibrometer and velocimeter. Appl. Phys. Lett. 98, 101107 (2011).
Paynter, I. et al. Observing ecosystems with lightweight, rapid-scanning terrestrial lidar scanners. Remote Sens. Ecol. Conserv. 2, 174–189 (2016).
Trocha, P. et al. Ultrafast optical ranging using microresonator soliton frequency combs. Science 359, 887–891 (2018).
Feldmann, J. et al. Parallel convolutional processing using an integrated photonic tensor core. Nature 589, 52–58 (2021).
Xu, X. et al. 11 TOPS photonic convolutional accelerator for optical neural networks. Nature 589, 44–51 (2021).
Solli, D. R. & Jalali, B. Analog optical computing. Nat. Photonics 9, 704–706 (2015).
Axelsson, S. R. J. Noise radar using random phase and frequency modulation. In: IGARSS 2003. 2003 IEEE International Geoscience and Remote Sensing Symposium. Proceedings (IEEE Cat. No. 03CH37477) (2003).
Tsai, C. M. & Liu, Y. C. Anti-interference single-photon LiDAR using stochastic pulse position modulation. Opt. Lett. 45, 2 (2020).
Shangguan, M., Liang, Y., Li, Y. & Mo, Y. Time-multiplexing single-photon imaging lidar with single-pixel detector. Appl. Phys. Lett. 124, 051104 (2024).
Zang, Z. et al. Ultrafast parallel single-pixel LiDAR with all-optical spectro-temporal encoding. APL Photonics 7, 046102 (2022).
Chen, R. et al. Breaking the temporal and frequency congestion of LiDAR by parallel chaos. Nat. Photonics 17, 306–314 (2023).
Yurtsever, E., Lambert, J., Carballo, A. & Takeda, K. A survey of autonomous driving: common practices and emerging technologies. IEEE Access 8, 58443–58469 (2020).
Chen, J. D. et al. 3-D Multi-Input Multi-Output (MIMO) pulsed chaos LIDAR based on time-division multiplexing. IEEE J. Sel. Top. Quantum Electron. 28, 1–9 (2022).
Schwarz, B. Mapping the world in 3D. Nat. Photonics 4, 429–430 (2010).
Sun, M. J. et al. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Commun. 7, 12010 (2016).
Lukashchuk, A., Riemensberger, J., Karpov, M., Liu, J. & Kippenberg, T. J. Dual chirped microcomb based parallel ranging at megapixel-line rates. Nat. Commun. 13, 3280 (2022).
Hutchison, D. N. et al. High-resolution aliasing-free optical beam steering. Optica 3, 887–890 (2016).
Lukashchuk, A., Riemensberger, J., Stroganov, A., Navickaite, G. & Kippenberg, T. J. Chaotic microcomb inertia-free parallel ranging. APL Photonics 8, 056102 (2023).
Xiong, W. et al. 3D parallel pulsed chaos LiDAR system. Opt. Express 32, 11763–11773 (2024).
Hulme, J. C. et al. Fully integrated hybrid silicon two dimensional beam scanner. Opt. Express 23, 5861–5874 (2015).
Poulton, C. V. et al. Coherent solid-state LIDAR with silicon photonic optical phased arrays. Opt. Lett. 42, 4091–4094 (2017).
Brinon-Arranz, L., Rakotovao, T., Creuzet, T., Karaoguz, C. & El-Hamzaoui, O. A methodology for analyzing the impact of crosstalk on LIDAR measurements. In: 2021 IEEE Sensors (2021).
Li, H. et al. Noise-tolerant LiDAR approaching the standard quantum-limited precision. Light Sci. Appl. 14, 138 (2025).
Zhang, M. & Wang, Y. Review on Chaotic Lasers and Measurement Applications. J. Lightwave Technol. 39, 3711–3723 (2021).
Jiang, Y., Karpf, S. & Jalali, B. Time-stretch LiDAR as a spectrally scanned time-of-flight ranging camera. Nat. Photonics 14, 14–18 (2019).
Wu, J. L. et al. Ultrafast laser-scanning time-stretch imaging at visible wavelengths. Light Sci. Appl. 6, e16196 (2017).
Mahjoubfar, A. et al. Time stretch and its applications. Nat. Photonics 11, 341–351 (2017).
Li, P. et al. Scalable parallel ultrafast optical random bit generation based on a single chaotic microcomb. Light Sci. Appl. 13, 66 (2024).
Lukashchuk, A., Riemensberger, J., Tusnin, A., Liu, J. & Kippenberg, T. J. Chaotic microcomb-based parallel ranging. Nat. Photonics 17, 814–821 (2023).
Qin, C. B. et al. Super-bunching light with giant high-order correlations and multi-photon events. Preprint at https://doi.org/10.48550/arXiv.2409.05419 (2024).
Kim, J. H. et al. Noise-resistant quantum communications using hyperentanglement. Optica 8, 1524–1531 (2021).
Mika, J. et al. Generation of ideal thermal light in warm atomic vapor. New J. Phys. 20, 093002 (2018).
Faccio, D. Temporal ghost imaging. Nat. Photonics 10, 150–152 (2016).
Gianani, I., Sánchez-Soto, L. L., Goldberg, A. Z. & Barbieri, M. Efficient line shape estimation by ghost spectroscopy. Opt. Lett. 48, 3299–3302 (2023).
YSL Photonics, https://www.yslphotonics.com/Index/Product/details/id/82.html.
Ye, G. et al. Improving measurement accuracy of laser triangulation sensor via integrating a diffraction grating. Opt. Lasers Eng. 143, 106631 (2021).
Markus-Christian, A., Thierry, M. B., Marc, L., Risto, A. M. & Marc, R. Laser ranging: a critical review of unusual techniques for distance measurement. Opt. Eng. 40, 10–19 (2001).
Acknowledgements
The authors gratefully acknowledge support from the National Key Research and Development Program of China (Grant No. 2022YFA1404201, L.X.), National Natural Science Foundation of China (Nos. 62222509, C.Q.; U22A2091, C.Q.; U23A20380, L.X.; 62127817, L.X.; 62205187, Z.Y.; and 62575162, G.Z.), the Fundamental Research Program of Shanxi Province (202403021212018, X.H.L.), and 111 projects (Grant No. D18001, L.X.).
Author information
Authors and Affiliations
Contributions
Conceptualization: X.Z., X.H.L., C.Q., and L.X. Methodology: X.Z., X.H.L., J.H., G.Z., C.Q., L.X., and S.J. Validation: X.Z., X.H.L., W.Z., and J.H. Formal analysis: X.D.L., Z.Y., X.H.L., R.C., and C.Q. Investigation: Y.S., Y.L., A.W., and K.H. Resources: A.W., X.D.L., and G.Z. Data curation: X.Z., X.H.L., J.M., and Y.S. Writing - original draft: X.Z., X.H.L., C.Q., and S.J. Writing - review & editing: X.Z., X.H.L., and C.Q. Visualization: X.Z., and X.H.L. Supervision: X.H.L., C.Q., and L.X. Funding acquisition: C.Q., and L.X.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Peer review
Peer review information
Nature Communications thanks Chung-Hyun Lee, Alexander Mrokon, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. A peer review file is available.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Zhang, X., Liu, X., Li, X. et al. Spectrally encoded parallel LiDAR driven by super-bunching light. Nat Commun 17, 1161 (2026). https://doi.org/10.1038/s41467-025-67926-8