Introduction

Light detection and ranging (LiDAR) forms the backbone of environmental perception1,2,3,4, supporting autonomous capabilities across a wide range of unmanned systems, from autonomous vehicles5,6,7,8,9 and drones to AI-integrated robotics4,5. In contrast to vision-only approaches, LiDAR enables high-precision, real-time three-dimensional (3D) environmental mapping10,11,12,13,14 with low computational overhead15,16,17, ensuring the safety and reliability of autonomous systems. Leveraging its advanced optical ranging and recognition capabilities, LiDAR enhances the operational robustness of autonomous systems18,19, thereby supporting their survivability in complex and rapidly changing environments. A key trend in current LiDAR technology is the adoption of parallelized architectures20,21, which enable simultaneous multi-channel measurements22,23. On the one hand, parallel LiDAR achieves 3D imaging through one-dimensional (1D) scanning24,25,26, significantly improving acquisition rates and angular resolution12,27. On the other hand, its minimal reliance on beam-steering mechanisms28,29,30 confers robust vibration resistance12,19. Enabled by rapid scanning and precise imaging, parallel LiDAR is poised to catalyze transformative progress in advanced autonomous systems31,32.

Nevertheless, the performance of parallel LiDAR is significantly hindered by inter-channel interference in both the temporal and spectral domains33,34,35. Several approaches have been developed to address this issue. For example, time-stretch technology facilitates channel-isolated parallel ranging by temporally separating multi-wavelength pulses36,37,38. However, the intrinsic tradeoff between detection range and channel count prevents time-stretch technology from realizing high-resolution, long-distance imaging, and these systems exhibit diminished robustness in dynamic object detection. Beyond time-stretch technology, optical code division multiple access (OCDMA) effectively eliminates channel interference through time-domain orthogonal coding of the light, while also offering considerable refresh rates and image resolution11,21. Yet the pseudo-random modulation employed in the encoding and decoding process of the OCDMA scheme relies on high-speed devices, which places high demands on both system hardware and software. Chaotic microcombs have recently gained significant attention in parallel LiDAR owing to their inherent immunity to channel congestion39,40, which stems from their orthogonal optical channels. Nonetheless, the symmetric comb lines centered at the pump frequency must be collected separately to avoid mutual interference, imposing stringent demands on the number of detectors; meanwhile, the restricted propagation distance of chaotic lasers limits long-distance ranging. Next-generation parallel LiDAR must therefore not only be free from time-frequency congestion across multiple channels but also provide high precision, long-distance detection, and robust dynamic object recognition.

In this work, we introduce a parallel LiDAR equipped with spectrally encoded optical channels, where the natural quasi-orthogonality between different wavelengths of a super-bunching light is leveraged to eliminate time-frequency congestion between channels. This spectrally encoded parallel LiDAR achieves millimeter-level ranging accuracy and enables precise velocity measurements for slow-moving targets with speeds as low as 5 mm/s. We further demonstrate high-resolution 3D imaging utilizing 51 channels, with an integration time of just 10 μs for each individual scanning segment. Parallel LiDAR empowered by a super-bunching light overcomes the distance limitation of ranging imposed by pulse repetition periods without intricate encoding and decoding programs11,21, enabling spatially parallel ranging beyond 40 m. In addition, this system demonstrates exceptional noise immunity even under extreme conditions where noise surpasses the echo signal by over three orders of magnitude. The combination of super-bunching light and parallel LiDAR proposed in this work offers a pathway for advancing high-performance autonomous systems.

Results and Discussion

Super-bunching enabled spectrally encoded optical channels

As a new-generation nonclassical light source, super-bunching light exhibits a giant second-order intensity correlation g(2)(τ) and extreme multi-photon emission probabilities. The normalized intensity correlation g(2)(0) of super-bunching light can be as high as 10, significantly stronger than that of thermal light (for which g(2)(0) = 2, known as the bunching effect)41. The natural quasi-orthogonality between different wavelengths of a super-bunching light offers an opportunity to overcome time-frequency congestion between channels. Figure 1a presents the schematic of the experimental setup for generating the super-bunching light source, which was produced via random nonlinear interactions in an optimized photonic crystal fiber (PCF) pumped by a picosecond (ps) laser with a central wavelength of 1064 nm and a repetition rate of 80 MHz (see Methods). Figure 1b presents the spectrum of the super-bunching light, spanning from 480 to 1750 nm, a feature absent in super-bunching light generated via spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM)42,43, which serves as the foundation for multi-channel division in parallel LiDAR systems. An acousto-optic tunable filter (AOTF) was employed to divide the broadband super-bunching light into spectrally separated, user-selected optical channels. Each optical channel was split into reference and output paths using a 1:9 beam splitter (BS). Channels in both paths were then spatially separated by a blazed grating and detected using avalanche photodetectors (APDs) for subsequent correlation analysis. The inset of Fig. 1b presents the second-order correlation measurement for the 633 nm channel, showing g(2)(0) = 5, which confirms the super-bunching nature (g(2)(0) > 2). In addition, the auto-correlation measurement for this channel was conducted, as shown in Fig. 1c.
The result exhibits delta-function-like behavior, effectively suppressing self-interference within the channel. The ranging resolution Rres of the spectrally encoded parallel LiDAR is governed by the full width at half maximum (FWHM) of the auto-correlation function, which is fundamentally constrained by the detector’s bandwidth18:

$${R}_{{res}}=\frac{c\times {FWHM}}{2},$$
(1)

where c is the speed of light. As depicted in the right inset of Fig. 1c, the FWHM of the auto-correlation function for the 633 nm channel is 0.6 ns, corresponding to a ranging resolution of 9 cm. (Owing to the high uniformity among the channels divided by the AOTF, the characteristics of the 633 nm channel are representative of the entire set; see Supplementary Note 1.)
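Equation (1) can be checked directly against the measured auto-correlation width. The following one-liner is a simple sanity calculation, not experimental code:

```python
# Sanity check of Eq. (1): ranging resolution from the measured
# auto-correlation FWHM of the 633 nm channel.
c = 3e8            # speed of light (m/s)
fwhm = 0.6e-9      # auto-correlation FWHM (s), limited by detector bandwidth
r_res = c * fwhm / 2
print(r_res)       # 0.09 m, i.e. the 9 cm resolution quoted in the text
```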

Fig. 1: Super-bunching enabled spectrally encoded optical channels.

a Experimental setup. A broadband super-bunching light was produced by pumping a photonic crystal fiber (PCF) with picosecond pulses centered at 1064 nm. An acousto-optic tunable filter (AOTF) is then used to select individual optical channels. Each channel was divided into reference and output paths via a 1:9 beam splitter (BS), spatially separated by a grating, and detected by avalanche photodetectors (APDs) for correlation analysis. b Spectrum of the super-bunching light. The arrow indicates the wavelength of the pump. The inset shows the second-order correlation results of the 633 nm channel, yielding g(2)(0) = 5.02. c The auto-correlation function of the 633 nm channel. Magnified views of the auto-correlation function’s sidelobes and central peak are shown in the left and right insets, respectively. The full width at half maximum (FWHM) of the auto-correlation peak is 0.6 ns. d The cross-correlation coefficients between each pair of the eight individual channels. The data are normalized to the global maximum across all measured data. e The cross-correlation functions between the 532/532 nm and 532/542 nm channel pairs. The extremum correlation peak of the 532/532 nm pair shows a delay time of 133.3 ns, which is greater than the pulse period (12.5 ns).

To validate the quasi-orthogonality among the spectrally separated optical channels, we conducted cross-correlation measurements. Figure 1e shows an extremum correlation peak (quantified by a peak-to-sidelobe ratio (PSLR) greater than 2; see Supplementary Note 3) in the cross-correlation between two 532 nm channels. In contrast, no such peak is observed in the cross-correlation between the 532 nm and 542 nm channels. Additionally, Fig. 1d presents pairwise cross-correlation measurements among 8 individual channels, with all values normalized to the global maximum. Notably, the presence of extremum correlation peaks between identical channels (orange pixels in Fig. 1d) and their absence between different channels (blue/green pixels in Fig. 1d) confirms negligible inter-channel crosstalk. Consequently, the time-domain photon bunching effect of super-bunching light enables the division of frequency-quasi-orthogonal, crosstalk-free optical channels, which serve as spectrally encoded channels. The spectral resolution of the AOTF imposes a minimum crosstalk-free channel spacing of 8 nm (see Supplementary Note 4). During ranging operation, as shown in Fig. 1e, the extremum correlation peak shifts from the zero-delay point because of the differing optical path lengths between the output and reference loops. While Fig. 1e exhibits results beyond the pulse period, the ranging capability of conventional time-of-flight (ToF) LiDAR, which is driven by a pulsed laser, is strictly limited by the pulse repetition period, making it difficult to perform measurements beyond that period36. Although frequency-modulated continuous-wave (FMCW) LiDAR driven by a continuous laser is not limited by the pulse period, it imposes stringent requirements on laser stability, detector sensitivity, and signal processing, all of which substantially increase system complexity, escalate costs, and hinder practical implementation.
Leveraging the temporal photon bunching effect of super-bunching light, our system achieves precise distance measurements beyond the pulse period; for example, the delay time of 133.3 ns for the 532 nm channel shown in Fig. 1e corresponds to a distance of 40.0 m. In contrast to optical code division multiple access (OCDMA) systems, which require intricate encoding and decoding11,21, our scheme enables ranging beyond the pulse period with a simplified design. Moreover, OCDMA loses this capability when its channels share the same temporal code. The ability to perform ranging beyond the pulse period paves the way for long-distance LiDAR applications.
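For comparison, the unambiguous range of a conventional pulsed ToF system at the 80 MHz repetition rate used here follows from the pulse period alone; the back-of-the-envelope calculation below illustrates why measurements such as the 133.3 ns delay above lie far outside that limit:

```python
c = 3e8          # speed of light (m/s)
f_rep = 80e6     # pulse repetition rate of the pump laser (Hz)
t_rep = 1 / f_rep                 # 12.5 ns pulse period (cf. Fig. 1e)
r_unambiguous = c * t_rep / 2     # maximum unambiguous range of pulsed ToF
print(r_unambiguous)              # 1.875 m
```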

Spectrally encoded parallel ranging

With spectrally encoded optical channels derived from super-bunching light, we present a parallel LiDAR. Figure 2a is a schematic of the spectrally encoded parallel LiDAR. In the output path, channels are spatially separated by a blazed grating (600 lines/mm) and then steered toward the target by a scanning galvanometer. The multiplexed echo signal reflected by the target is captured through a lens and focused onto a single APD. In the reference path, channels separated by the blazed grating are guided into the photodetectors (PDs) array, with each channel individually detected by one detector (See Supplementary NOTE 7). The ranging accuracy of the LiDAR system relies on detectors with superior performance in both the echo and reference paths, aligning with the setup used in ghost imaging44,45. Cross-correlation is performed between the mixed echo signal and each channel’s reference signal to retrieve individual range information22,40:

$$R\left({\Delta t}_{i}\right)=\frac{{\int }_{0}^{T}{x}_{i}\left(t\right)y\left(t+{\Delta t}_{i}\right){dt}}{\sqrt{\left({\int }_{0}^{T}{\left|{x}_{i}\left(t\right)\right|}^{2}{dt}\right)\left({\int }_{0}^{T}{\left|y\left(t+{\Delta t}_{i}\right)\right|}^{2}{dt}\right)}},$$
(2)

where xi(t) is the reference signal of channel i, y(t + Δti) is the delayed echo signal, and R(Δti) is the normalized cross-correlation function of channel i. Given that the extremum correlation peak appears exclusively between matching channels, as depicted in Fig. 2b, the delay time Δti represents the round-trip time of flight for channel i, traveling from the galvanometer to the target and back to the detector. Therefore, the corresponding target distance Li is calculated as:

$${L}_{i}=\frac{c\times {\Delta t}_{i}}{2}.$$
(3)
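The retrieval of Eqs. (2) and (3) can be sketched numerically as a discrete cross-correlation. The sketch below uses a synthetic heavy-tailed intensity trace, an assumed 10 GS/s sampling rate, and an assumed 20 ns round-trip delay; these are illustrative values, not the experimental parameters:

```python
import numpy as np

def estimate_range(reference, echo, fs, c=3e8):
    """Estimate target distance from the delay of the cross-correlation
    peak between one channel's reference signal and the mixed echo
    (discrete analogue of Eqs. (2) and (3); a simplified sketch)."""
    x = reference - reference.mean()   # mean removal added for robustness
    y = echo - echo.mean()             # (Eq. (2) itself has no centering)
    # full linear cross-correlation; lag k aligns echo delayed by k samples
    corr = np.correlate(y, x, mode="full")
    corr /= np.sqrt(np.dot(x, x) * np.dot(y, y))  # normalization of Eq. (2)
    lags = np.arange(-len(x) + 1, len(y))
    dt = lags[np.argmax(corr)] / fs    # round-trip delay Δt_i
    return c * dt / 2                  # Eq. (3): L_i = c·Δt_i / 2

# synthetic check: heavy-tailed intensity trace, delayed copy as echo
rng = np.random.default_rng(0)
fs = 10e9                              # assumed 10 GS/s sampling rate
x = rng.exponential(size=4096) ** 2    # bunched-intensity-like trace
delay_samples = 200                    # 20 ns round trip -> 3 m
y = np.roll(x, delay_samples)          # circular shift; peak still at true lag
print(estimate_range(x, y, fs))        # ≈ 3.0 m
```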
Fig. 2: Spectrally encoded parallel ranging.

a Experimental setup. In the output path, the channels were spatially separated by a blazed grating (600 lines/mm) and subsequently directed to distinct positions on the target by a scanning galvanometer, which also performed the vertical scanning. The target is a whiteboard placed on a motorized translation stage. b Schematic of the multi-channel correlation detection. Only matching channels in the echo and reference signals yield the extremum value of the cross-correlation function. c Ranging results of the 633 nm channel and the corresponding errors. d Results of speed measurements for the target moving at different speeds. The black dashed line represents the diagonal of the axis box. The error bars result from multiple measurements. e Parallel ranging results based on 16 individual channels. The panels, from bottom to top, display the spectral data of the 16 channels, the optical power of each channel, the ranging results of 5 measurements, and the corresponding SNR, respectively. The error bars result from multiple measurements.

Performance tests of the LiDAR’s ranging capability were conducted using the 633 nm channel. As shown in Fig. 2a, a whiteboard serves as the target, whose horizontal position is controlled by a high-precision stepper motor, moving away from the detector with a step size of 2 cm. The starting position of the whiteboard was set as the reference point to assess its relative distance in the test. Figure 2c presents the results of 22 tests, demonstrating a maximum ranging error of 4 mm, an advanced level of accuracy for parallel LiDAR systems. The ranging error was defined as the deviation between the ranging result and the known actual distance of the whiteboard. Moreover, with an integration time of just 10 μs per test, the system is capable of completing high-precision ranging tasks in rapidly changing environments. This high ranging accuracy allows for the reliable detection of low-speed targets. For low-speed target detection, the whiteboard was reciprocated at a constant velocity, and echo signals were continuously sampled using a high-speed oscilloscope. The corresponding results in Fig. 2d show an average speed measurement error of 4.1%, defined as the percentage deviation between the measured value and the preset speed of the stepper motor. Even when the target moves at a constant speed as low as 5 mm/s, the measured speed is 4.6 mm/s, a measurement error of only 0.4 mm/s. With its high ranging accuracy and dynamic detection capability, the spectrally encoded parallel LiDAR offers significant advantages in detecting low-speed dynamic targets.
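Low-speed tracking from repeated range measurements reduces to a linear fit of range versus time. The sampling interval and noise level below are illustrative assumptions chosen to be consistent with the reported millimeter-level accuracy and 5 mm/s target speed:

```python
import numpy as np

# hypothetical range-vs-time record of a target receding at 5 mm/s,
# with millimeter-level ranging noise as reported for this system
t = np.arange(0, 10, 0.5)                      # measurement times (s)
rng = np.random.default_rng(1)
ranges = 1.0 + 0.005 * t + rng.normal(0, 0.002, t.size)   # distances (m)

speed = np.polyfit(t, ranges, 1)[0]            # slope of fit = radial speed
print(f"{speed * 1e3:.2f} mm/s")               # close to the preset 5 mm/s
```

Averaging over many samples is what makes the slope estimate far more precise than any single range measurement.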

Furthermore, we conducted measurements to assess the parallel performance of our scheme. A total of 16 optical channels were configured over the 530–680 nm spectral range with 10 nm spacing. Since all channels performed distance measurements of the whiteboard simultaneously, the ranging results of all channels should exhibit excellent consistency. The ranging results from the 5 tests shown in Fig. 2e are consistent with theoretical expectations, while the signal-to-noise ratio (SNR) for all 16 channels remains above 15 dB. These results demonstrate that our spectrally encoded parallel LiDAR achieves high-precision parallel ranging using only a single detector to collect the echo signals, while maintaining a high SNR.

3D imaging based on the spectrally encoded parallel LiDAR

The 3D imaging capabilities of the spectrally encoded parallel LiDAR have also been explored. The 3D imaging targets, as shown in Fig. 3a, included a pedestrian, a cybertruck, and a helicopter, which were spatially arranged in sequence with 10 cm and 20 cm gaps between them (Fig. S8). Each pair of targets was spaced beyond the LiDAR’s ranging resolution, facilitating their distinction in the 3D reconstruction. Figure 3b presents the 3D reconstruction results of the three targets based on vertical scanning data (Fig. S10). A total of 51 optical channels and 88 vertical scan segments were employed for 3D imaging, resulting in an image resolution of 51 × 88 pixels that recovers fine target details. The horizontal and vertical errors introduced during the merging of multiple scans are negligible (see Supplementary Note 9). The integration time for each scan segment is 10 μs. Given the minimum allowable spectral spacing of 8 nm between adjacent channels, the 530–680 nm spectral range cannot accommodate 51 mutually independent channels simultaneously. Hence, the 3D reconstruction is achieved through three successive scans, each utilizing 17 channels spaced at 9 nm intervals, as depicted in Fig. 3c. By interleaving and merging the three scans, a composite set of 51 channels with an effective spectral interval of 3 nm is constructed. Although state-of-the-art AOTFs are capable of achieving channel separation below 8 nm without crosstalk46, our approach offers a practical strategy for realizing multi-channel 3D imaging beyond the limitation of minimum channel intervals. While the OCDMA method must contend with a reduction in detection distance as channels are added21,36, our approach delivers additional channels without any associated loss. Figure 3d presents the vertical scanning results for the 596 nm, 632 nm, and 659 nm channels.
The distinct separation of data points corresponding to different targets in distance confirms that our LiDAR successfully achieves accurate 3D reconstruction of the three spatially distinct objects. This capacity is further supported by the photon-counting statistics of the echo signals from different distances, as illustrated in Fig. 3e.
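The interleaving of the three 17-channel scans into a 51-channel composite set can be written out explicitly. The wavelength grid below is an assumption inferred from the stated 530–680 nm range, 9 nm per-scan spacing, and 3 nm effective spacing:

```python
# three successive scans, each 17 channels at 9 nm spacing, offset by 3 nm
scans = [[530 + offset + 9 * k for k in range(17)] for offset in (0, 3, 6)]
merged = sorted(w for scan in scans for w in scan)   # interleave and merge

assert len(merged) == 51                                    # 51 channels
assert all(b - a == 3 for a, b in zip(merged, merged[1:]))  # 3 nm spacing
print(merged[0], merged[-1])                                # 530 680
```

Each individual scan respects the 8 nm crosstalk-free limit, while the merged set samples the spectrum three times more densely.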

Fig. 3: 3D imaging based on the spectrally encoded parallel LiDAR.

a Schematic of the imaging targets, including a pedestrian, a cybertruck, and a helicopter, spatially arranged in sequence with 10 cm and 20 cm gaps between them. b Imaging results based on 51 individual channels. The vertical direction contains 88 segments, and the resolution is 51 × 88 pixels. Three dashed lines indicate the corresponding positions of the 596 nm, 632 nm, and 659 nm channels, respectively. The dots in the XY plane represent the projections of the three objects. c Schematic of the division of the 51 channels. The 51-channel imaging was performed in three successive scans, each covering 17 channels with a spacing of 9 nm, yielding an effective channel spacing of 3 nm after the scan results were combined. d Vertical scanning results of the 596 nm, 632 nm, and 659 nm channels, respectively. e Statistical histogram of the pixels contained in the three objects in (b). f Statistical box plot of the correlation intensity between the reference and echo signals corresponding to each pixel of the three objects.

Beyond accurate 3D reconstruction, the spectrally encoded parallel LiDAR can also perform target-specific identification by analyzing the correlation features in the echo signals (see Supplementary Note 10). In the box plot shown in Fig. 3f, we statistically analyzed the maximum correlation coefficients from the unnormalized cross-correlation functions across all pixels of the three objects in Fig. 3b. The variation in object reflectivity results in differing statistical distributions of the correlation coefficients. Owing to the low reflectivity of the pedestrian’s polyester clothing, its correlation coefficients remain below 1. The elevated surface reflectivity of the cybertruck and helicopter contributes to stronger correlation coefficients, and the cybertruck’s comparatively smoother surface results in the highest correlation coefficient among the targets. The integration of correlation-based target classification and identification with deep learning will lay the foundation for intelligent parallel LiDAR systems.

Long-distance and interference-free parallel LiDAR

Despite growing demand, parallel LiDAR systems operating beyond 10 m remain unreported, even with approaches such as chaotic microcombs and time stretching22,36,40. Based on the spectrally encoded parallel LiDAR, we demonstrate high-precision ranging at distances exceeding 20 m, addressing this critical challenge. Figure 4a presents the parallel ranging results for 20 whiteboards evenly spaced between 21 m and 40 m, whose positions were set and verified using an electronic total station (ETS) (see Methods). A dual-grating configuration is employed to spatially separate the optical channels and suppress angular dispersion over extended propagation distances (Fig. S11). Each optical channel was directed to a distinct whiteboard, enabling all parallel ranging results shown in Fig. 4a to be captured simultaneously in a single acquisition based on 20 channels. Remarkably, with an integration time of 10 μs, the ranging error remains below 3.5 cm, even at distances of up to 40 m. A performance comparison of advanced LiDAR systems reveals that the spectrally encoded parallel LiDAR achieves long-distance ranging beyond the pulse period while maintaining a lower system cost (see Supplementary Note 15). In long-distance ranging, the triangulation error introduced by the optical path deviation results in all errors in Fig. 4a being non-negative (Fig. S12)47,48. We further demonstrated the 3D imaging capability of the spectrally encoded parallel LiDAR at distances beyond 10 m. Using 16 channels, a vertical scan was performed on an opaque, cross-shaped target located approximately 11 m away, with a whiteboard placed about 10 cm behind the target. The imaging results presented in Fig. 4b, together with the spatial pixel distribution across multiple spectral channels (Fig. S14), validate the system’s ability to perform high-precision 3D reconstruction of targets located over 10 m away.

Fig. 4: Long-distance and interference-free parallel LiDAR.

a Long-distance ranging results of 20 targets based on 20 channels and the corresponding errors. The inset is a schematic of the setup. The 20 targets were arranged at equal intervals within a distance range of 21 m to 40 m from the LiDAR. Each channel was assigned to a specific target. b Long-distance imaging results based on 16 channels. The vertical direction consists of 16 segments, resulting in a resolution of 16 × 16 pixels. The inset shows the imaging target, composed of a cross placed in front of a background plate with a 10 cm spacing between them. c, d Anti-interference performance of the spectrally encoded parallel LiDAR: SNR as a function of ISR under interference from CW laser (c) and pulsed laser (d) signals. The inset shows the experimental setup, in which the echo and interference signals are combined by a beam combiner before being injected into the detector. The error bars result from multiple measurements.

Owing to the temporal photon bunching effect of super-bunching light and the mutual quasi-orthogonality of its optical channels, the spectrally encoded parallel LiDAR demonstrates robust anti-interference performance. To assess this performance, we employed two Gaussian-distributed noise sources: a continuous-wave (CW) laser and a pulsed laser with the same repetition rate as the super-bunching light. The noise signal was mixed with the echo and entered the same APD simultaneously, as illustrated in Fig. 4c, d (see Supplementary Note 13). By adjusting the noise intensity with a variable optical attenuator, we investigated the relationship between the interference-to-signal ratio (ISR) and the SNR of the cross-correlation function. The ISR is defined as the power ratio between the interference signal and the echo signal received by the APD22. As illustrated in Fig. 4c, under CW noise, the dynamic detection range of the correlation peak is broad. The SNR remains essentially constant for ISR values up to 25 dB; beyond this threshold, further increases in noise lead to a linear degradation of the SNR. Even at an ISR of 30 dB, the SNR remains above the 3 dB detection threshold, demonstrating that our parallel LiDAR can effectively suppress CW noise exceeding the echo signal by over a thousandfold, thereby significantly reducing the required laser radiation power. In the case of pulsed laser noise, as shown in Fig. 4d, although the dynamic detection range is relatively narrow, our parallel LiDAR still demonstrates robust anti-interference capability, maintaining effective detection even when the noise is over 125 times stronger than the echoes. This strong anti-interference capability ensures the reliable operation of the spectrally encoded parallel LiDAR in complex environments.
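The processing gain underlying this noise immunity can be illustrated with a toy simulation (not the experimental pipeline): even when Gaussian interference carries 100 times the echo power (ISR = 20 dB, a conservative value relative to the 30 dB demonstrated above), the cross-correlation still peaks at the true delay. The trace length, statistics, and delay are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1 << 16
x = rng.exponential(size=n) ** 2               # heavy-tailed intensity trace
x -= x.mean()                                  # remove DC before correlating

true_lag = 500
echo = np.roll(x, true_lag)                    # delayed copy of the reference
noise = rng.normal(0, np.sqrt(100) * x.std(), n)   # 100x echo power (20 dB ISR)
y = echo + noise

# circular cross-correlation via FFT; the peak index recovers the delay
corr = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x))).real
print(int(np.argmax(corr)))                    # 500
```

The correlation peak grows with the number of samples N while the interference floor grows only as √N, which is why long integration suppresses even very strong uncorrelated noise.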

In summary, we demonstrate a spectrally encoded parallel LiDAR system enabled by a super-bunching light source. Harnessing the temporal photon bunching effect of super-bunching light, our spectrally encoded channels overcome inter-channel interference in both the temporal and spectral domains and surpass the inherent pulse-period constraint of pulsed time-of-flight detection. Our parallel LiDAR system achieves a ranging error of 4 mm and enables accurate tracking of low-speed targets with a speed measurement error of 4.1%. By integrating dispersive elements and scanning galvanometers, our parallel LiDAR achieves rapid and high-precision 3D reconstruction of objects. In addition, we demonstrate the long-distance ranging and imaging capabilities of this parallel LiDAR, along with its robust anti-interference performance. Further progress in system integration, together with extension to a broader range of spectral bands, will accelerate its practical application. With its high sensitivity, precision, adaptability to dynamic targets, extended range, and robust noise immunity, the spectrally encoded parallel LiDAR paves the way for next-generation high-performance parallel LiDAR.

Methods

Schematic diagrams of the experimental setups are shown in Figs. 1a and 2a of the main text. A homemade picosecond laser with a center wavelength of 1064 nm was used to pump the PCF (SC-PRO, YSL Photonics). The pump pulse has an energy of 123 nJ, a duration of ~136 ps, and a repetition rate of 80 MHz, and is coupled into a 10 m-long PCF. An AOTF (AOTF0311, YSL Photonics) was employed to divide the optical channels. The bandwidth of each individual channel output by the AOTF is 3 nm, and the minimum channel spacing without crosstalk is 8 nm; all the channels are spatially combined and output simultaneously. The configurations for the optical channel divisions of the AOTF are controlled by electrical signals, with a switching time of a few microseconds. A blazed grating (600 lines/mm, BG25-600-500) was used to spatially separate the channels in both the output and reference paths. A scanning galvanometer (FSM-300-01, Newport) was used to direct the channels toward the target. The mixed echoes were collected by a lens (f = 50 mm) and focused onto an APD (APD210, Thorlabs). A PD array (DET10A2, Thorlabs) was used in the reference path. A high-speed oscilloscope (MSO58B, Tektronix) was employed to sample the signals continuously.

In the test for Fig. 2c, a linear stepper motor stage (LTS150/M, Thorlabs) was used to move the whiteboard, with a positioning error of ±0.6 μm, which is negligible compared to our measurement errors. In the test for Fig. 4a, an electronic total station (Leica TZ08) was employed to verify the positions of the whiteboards. The total station has a precision of 1 mm (for distances within 1000 m), far smaller than the ranging measurement errors.