Abstract
Augmented reality near eye displays (AR NEDs) are highly promising for applications that integrate real-world environments with virtual imagery. However, their commercialization is currently limited by the vergence and accommodation conflict (VAC), which arises from the mismatch between binocular vergence depth and monocular focal depth. In this study, we present a holographic-like AR optical system that generates full parallax multi-viewpoints (FPMVP) with an extended depth of field, achieving sharp virtual images at the observer’s gaze depth and optical blur at other depths, without mechanical actuation or modification of optical components. This holographic-like visual effect effectively reduces VAC by providing monocular depth cues. Simultaneously, the extended depth of field of the FPMVP optical system enables resolution that satisfies the Rayleigh criterion over a broad depth range. The simple optical structure and excellent scalability of this technology demonstrate significant potential for future commercialization in the AR NED market.
Introduction
Near eye display (NED) technologies are expected to play a pivotal role in a wide range of future industries and have accordingly attracted extensive research attention1. Among NED technologies, augmented reality (AR) is anticipated to be especially useful in scenarios that require simultaneous viewing of the real environment and virtual presentation of information, such as entertainment content, military tactics, surgical procedures, image-guided manufacturing, and interactive education. Early-stage research and most currently commercialized AR NED products have employed binocular disparity to generate three-dimensional (3D) effects. However, these systems are perceived as unsuitable for widespread adoption due to the persistent vergence and accommodation conflict (VAC) arising from the mismatch between binocular convergence and monocular focus2,3.
Holographic techniques that reconstruct and reproduce wavefronts to create 3D images in real space inherently avoid VAC and are regarded as the ultimate goal for 3D display technologies, including AR. Despite significant research efforts4,5, the limited capability of current devices to modulate both phase and amplitude of light suggests that commercialization of holography will take considerable time. To overcome these challenges, geometric optics-based approaches that provide monocular depth cues by converging and diverging light at a virtual image depth have been explored to address VAC in NED. Technical methods such as pinhole aperture arrays6, super multi-view7,8, integral imaging9, and light field displays10,11 have been actively investigated.
Among them, integral imaging, first introduced by Lippmann, has been widely studied across various applications including NEDs, owing to its ability to support full parallax and motion parallax and its compatibility with conventional flat displays12. In addition, this concept has inspired further development in various studies based on light field displays. However, since its resolution is determined by the number of lenses, increasing resolution requires smaller lenses, which in turn degrades lens quality and increases optical aberration and distortion. This presents a fundamental limitation for commercialization, where both high resolution and a large depth range must be simultaneously achieved. Thus, numerous studies have been conducted to enhance resolution and depth range, such as dynamically adjusting the position or aperture size of the micro-lens array13,14,15, generating multiple focal planes16, applying convolutional neural network-based data processing17, and using dual displays with a slit mirror array18. However, these methods rely on additional optical actuation, dynamic or electronic modification of components, or computational processing rather than optical principles, potentially introducing secondary issues such as increased design complexity and reduced usability in commercial implementations.
To address this problem, an optical system was devised to generate full parallax multi-viewpoints (FPMVP) with an extended depth of field (EDOF) within the human pupil, enabling holographic-like visualization with a resolution that meets the Rayleigh criterion across a wide depth range19,20,21. To verify that the proposed FPMVP optical system has relatively few hardware constraints, we designed it using only a lens array and a simple AR optical structure based on a birdbath configuration composed of a concave mirror and a beam splitter, without incorporating any tunable elements or structural modifications. The optical performance of the FPMVP AR prototype was then analyzed through Zemax OpticStudio simulations, followed by the fabrication of a physical FPMVP AR prototype for experimental validation.
The FPMVP AR prototype’s lens array was designed with the lens array interval (LAI) optimized to utilize the full pixel array of a full high definition (FHD) organic light-emitting diode (OLED) panel and to form at least two viewpoints (VPs) within the human pupil. Furthermore, in the FPMVP AR prototype, the optical paths emitted from the sub displays were designed to form a common optical path at the main optical element, a concave mirror. This optical arrangement enhances the uniformity of the optical paths while also enabling a compact main optics volume and an increased field of view (FOV) for each VP.
From the Zemax OpticStudio simulations, the proposed FPMVP AR system was found to satisfy the Rayleigh criterion, maintaining an MTF contrast value of 0.14 or higher at a spatial frequency of 12 cycles per degree (cpd) within a depth range of 0.0 to 3.3 D19. Additionally, it was confirmed that the FPMVP AR prototype could provide holographic-like effects from 0.1 to 3.3 D. Subsequently, to experimentally validate the FPMVP AR prototype, a physical prototype was fabricated using real lenses. As a result, the physical FPMVP AR prototype was confirmed to satisfy the Rayleigh criterion within a range of 0.0 to 2.44 D and to implement holographic-like effects for objects at depths from 0.3 to 3.3 D in a real environment.
This FPMVP AR NED leverages a commercially available flat micro-OLED panel, offering advantages in terms of practical implementation. In addition, since the resolution of the virtual image is determined by the resolution of the segmented display rather than the number of lenses, future improvements in micro-OLED display resolution are expected to further enhance the performance of the optical system while avoiding the image degradation caused by lens size reduction. Consequently, the FPMVP optical system, which enables holographic-like imaging with both high resolution and reduced VAC across a wide depth range, holds strong potential to significantly contribute to the commercialization of AR NED.
Results
Monocular accommodation with full parallax over extended depth of field
As shown in Fig. 1a, rays emitted from the display are directed to converge at object depths On, Om, and Of, forming images that overlap at their respective focal planes. Here, On, Om, and Of represent the near, middle, and far object depths, respectively. When the eye focuses monocularly at one of these depths, only the rays corresponding to that depth converge accurately on the retina, resulting in a sharp image, while rays from other depths diverge, producing a blurred image. This geometric optical mechanism enables monocular focus adjustment and provides holographic-like effects. Although integral imaging is widely used to produce this effect, increasing the resolution of each VP requires a large number of lenses. This, in turn, reduces the size of each lens element, leading to increased optical aberrations and distortion, ultimately degrading image quality.
(a) Geometrical optical path difference depending on different monocular focusing depths (On, Om, and Of). (b) Horizontal cross-section and (c) vertical cross-section schematic of the FPMVP AR optics with a paraxial lens. (d) Illustration of the reconstruction method in an FPMVP optical system. All figure images were created by the authors using Microsoft PowerPoint (Microsoft Office LTSC Professional Plus 2021, Version 2108, https://www.microsoft.com/).
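The depth-dependent blur described above can be approximated with a small-angle geometric model: a defocus of ΔD diopters viewed through a pupil of diameter p (in meters) produces a retinal blur circle subtending roughly p·ΔD radians. The sketch below uses this textbook approximation with illustrative values; it is not part of the paper's optical model.

```python
import math

def angular_blur_deg(pupil_mm: float, defocus_diopters: float) -> float:
    """Angular diameter (degrees) of the geometric retinal blur circle.

    Small-angle approximation: a defocus of dD diopters viewed through a
    pupil of p meters subtends roughly p * dD radians.
    """
    return math.degrees((pupil_mm / 1000.0) * abs(defocus_diopters))

# Viewing the near plane On (3.3 D) while focused at Om (1.5 D), 3 mm pupil:
print(round(angular_blur_deg(3.0, 3.3 - 1.5), 3))  # ~0.309 degrees of blur
```

The roughly 0.3° blur for a 1.8 D defocus illustrates why images at unmatched depths appear noticeably spread, while the in-focus plane stays sharp.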
To address this issue, we devised an optical system that generates multiple VPs while preserving the resolution of the segmented display, as shown in Fig. 1b, c. Full parallax can be achieved when virtual images are formed within the full common area (FCA), where all VPs’ screens overlap, providing a visual effect similar to that of a hologram. This FPMVP AR prototype was designed so that the optical paths of all VPs overlap at the concave mirror of the birdbath-structured main optics in the AR optical system. This configuration allows the size of the AR optics to be reduced and improves optical path uniformity, thereby minimizing differences in image quality between VPs. In addition, to extend the depth of field of the FPMVP AR optical system, the aperture stop of the lens array in the FPMVP AR prototype was set to 1.4 mm, and the exit pupil was set to 0.94 mm. These values were chosen to balance geometric and wave-optical blur. Furthermore, the best screen distance was set to 666.7 mm (1.5 D) to ensure depth symmetry between the near and far ends of the depth of focus. The detailed configuration method can be found in our previous work, “Extended Depth of Field in Augmented Reality”21.
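Depths throughout the paper are expressed in diopters (D), the reciprocal of the viewing distance in meters. A minimal helper makes the conversion explicit; the 666.7 mm / 1.5 D pairing is taken from the text, while the other values are illustrative.

```python
def to_diopters(distance_mm: float) -> float:
    """Depth in diopters for a viewing distance given in millimeters."""
    return 1000.0 / distance_mm

def to_mm(depth_diopters: float) -> float:
    """Viewing distance in millimeters for a depth given in diopters."""
    return 1000.0 / depth_diopters

print(round(to_diopters(666.7), 2))  # best screen distance -> 1.5 D
print(round(to_mm(3.3), 1))          # near end of the range -> ~303.0 mm
print(to_mm(0.1))                    # far end -> 10000.0 mm (10 m)
```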
The pupil size varies with ambient illumination and is assumed to be approximately 3–4 mm under typical laboratory conditions22. Based on this assumption, the viewpoint interval (VPI) was determined to ensure that, among the six VPs, a minimum of two and a maximum of four VPs can be accommodated within the pupil. Accordingly, the VPI was calculated using Eq. (1), which expresses a geometric proportional relationship between the sub-display interval (SDI) of the 3 × 2 segmented display and the distances between the FPMVP AR prototype components. As a result, the horizontal viewpoint interval (H_VPI) and vertical viewpoint interval (V_VPI) were determined to be 2.61 mm and 2.20 mm, respectively. The LAI was designed to utilize all pixels of the micro-display panel and was calculated based on Eq. (2), which incorporates the VPI. As a result, the horizontal and vertical lens array intervals (H_LAI and V_LAI) were determined to be 4.04 mm and 3.41 mm, respectively. This indicates that the VP array, also referred to as the eye box, of the proposed system has a size of approximately 5.22 mm × 4.4 mm. Therefore, when the pupil moves, there may be regions where only one VP is present within the pupil.
Here, dDP-LA, dMO-VP, and dLA-MO denote the distance between the display and the lens array, the distance from the main optics to the VP, and the distance from the lens array to the main optics, respectively.
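As a rough check of the design targets above, the sketch below counts how many VP centers of a 3 × 2 grid with the stated H_VPI and V_VPI fall inside a circular pupil. The grid coordinates and pupil positions are illustrative assumptions, not the prototype's exact geometry.

```python
import math

# 3 x 2 VP grid with the intervals reported for the prototype (mm).
H_VPI, V_VPI = 2.61, 2.20
VP_GRID = [(c * H_VPI, r * V_VPI) for r in range(2) for c in range(3)]

def vps_in_pupil(cx: float, cy: float, pupil_diameter: float) -> int:
    """Count VP centers falling inside a circular pupil centered at (cx, cy)."""
    r = pupil_diameter / 2.0
    return sum(1 for x, y in VP_GRID if math.hypot(x - cx, y - cy) <= r)

# A 4 mm pupil centered in a cell of the grid captures four VPs...
print(vps_in_pupil(H_VPI / 2, V_VPI / 2, 4.0))
# ...but centered directly on a corner VP it captures only one,
# consistent with the single-VP regions noted in the text.
print(vps_in_pupil(0.0, 0.0, 4.0))
```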
Figure 1d presents a 3D schematic diagram that illustrates how virtual objects are reconstructed within the FCA. This reconstruction is achieved through six virtual screens generated by a 3 × 2 segmented display and lens array configuration. The VP numbers were assigned sequentially from the top-left to the bottom-right based on the viewing direction of the user observing the AR virtual image, and were labeled as VP 1 through VP 6. This naming convention is consistently applied to the fabricated FPMVP AR prototype as well. Based on this figure, the difference between the FPMVP and integral imaging can be visually explained as follows. In integral imaging, light rays passing through each lens of the micro-lens array propagate parallel to one another, whereas in the FPMVP design, all VP rays are directed to form a common region. Within this common region lies the FCA, where the virtual screens of all VPs overlap. Equations and detailed descriptions of the image generation method for the FPMVP AR prototype, including depth-dependent disparity and virtual screen shifts due to optical path matching in the main optics, are provided in Supplementary Notes 1, 2 and 3 and Supplementary Figs. 1, 2 and 3.
FPMVP AR prototype implementation and its simulation
Figure 2a shows the 3D layout of the FPMVP AR prototype, designed using Zemax OpticStudio. A 16 mm fixed focal length paraxial lens was used to mimic the pupil of the human eye. Based on the footprint and spot diagram simulations shown in Fig. 2b, the final H_VPI and V_VPI were found to be 2.56 mm and 2.19 mm, respectively, and the exit pupil diameter (PD) was found to be 0.94 mm. The resulting horizontal and vertical fields of view (H_FOV and V_FOV) of each VP were 26.4° and 22.4°, respectively, corresponding to an angular frequency of 12 cpd for the LP 1 pattern. In addition, the FCA provides an H_FOV and V_FOV of 19.5° and 21.0°, respectively. These values represent suitable viewing angles for applications in environments such as interactive education, entertainment, military tasks, and manufacturing, where visual information must be integrated with real-world visual tasks. Details of the FPMVP AR prototype, including the components used for its fabrication and the optical specifications, are summarized in Table 1.
(a) 3D layout of the FPMVP AR prototype. (b) Footprint and spot diagram of the simulated prototype. Figures were generated by the authors using Zemax OpticStudio (Version 18.9, Zemax LLC, https://www.zemax.com/).
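The ≈12 cpd angular frequency quoted for the LP 1 pattern follows from dividing the per-VP pixel count of the segmented FHD panel by twice the FOV (two pixels per cycle). A minimal check, assuming the 1920 × 1080 panel is split into 3 × 2 sub-displays of 640 × 540 pixels:

```python
def cycles_per_degree(pixels: int, fov_deg: float) -> float:
    """Angular frequency of the finest line pattern (LP 1, one pixel per
    line, hence two pixels per cycle) spread across the given FOV."""
    return pixels / (2.0 * fov_deg)

# FHD panel (1920 x 1080) segmented 3 x 2 -> 640 x 540 pixels per VP.
print(round(cycles_per_degree(1920 // 3, 26.4), 1))  # horizontal, ~12.1 cpd
print(round(cycles_per_degree(1080 // 2, 22.4), 1))  # vertical, ~12.1 cpd
```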
To assess the DOF range satisfying the Rayleigh criterion for each VP, the focus depth was varied in 0.1 D steps, and the square wave MTF contrast values of all VPs at 12 cpd were analyzed. Figure 3a, b show the results of the sagittal and tangential components for all VPs, respectively. The blue markers in both graphs represent the MTF contrast values of the central VPs (VP 2 and VP 5), which are located near the center of the VP array. The orange markers represent the values of the peripheral VPs (VP 1, VP 3, VP 4, and VP 6), which are relatively farther from the center of the VP array. Additionally, a fitted line is included to show the overall trend. For the central VPs, the best focus was found to be at 1.8 D and 1.9 D for the sagittal and tangential components, respectively, with corresponding ranges of 4.54 D and 4.62 D satisfying the Rayleigh criterion. In contrast, the peripheral VPs had best focus positions at 1.2 D and 0.7 D, with narrower ranges of 3.92 D and 3.31 D, which also satisfy the Rayleigh criterion. Therefore, all VPs achieved a resolution satisfying the Rayleigh criterion, with MTF contrast values exceeding 0.14 across the depth range of 0.0 to 3.3 D for both sagittal and tangential components. Further improvements can be achieved by increasing the focal lengths of the lenses corresponding to the peripheral VPs, thereby aligning the best focus positions and expanding the range satisfying the Rayleigh criterion. To represent the lowest image quality that still satisfies the Rayleigh criterion, the result with an MTF contrast value closest to 0.14 was selected. As confirmed in the simulation image of VP 1, where the sagittal component was focused at 3.9 D and the tangential component at 3.3 D, the LP 1 pattern, which corresponds to the smallest pixel size on the display, could still be distinguished by the naked eye.
Square wave MTF contrast values of all VPs at different VP positions with 0.1 D intervals, and the image simulation result at an MTF contrast value of 0.14. (a) Sagittal and (b) tangential components. Monocular accommodation simulation at depths of (c) 0.1 D, (d) 1.5 D, and (e) 3.3 D.
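The square wave MTF contrast evaluated throughout this analysis is the Michelson contrast of the imaged line pattern, (Imax − Imin)/(Imax + Imin), compared against the 0.14 Rayleigh threshold. A minimal sketch with hypothetical intensity readings:

```python
def michelson_contrast(i_max: float, i_min: float) -> float:
    """Square wave contrast of a line pattern: (Imax - Imin) / (Imax + Imin)."""
    return (i_max - i_min) / (i_max + i_min)

# Hypothetical readings: bright bars at 130 and dark bars at 98 (arbitrary
# units) just clear the 0.14 threshold used in the paper.
c = michelson_contrast(130.0, 98.0)
print(round(c, 3), c >= 0.14)
```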
The following simulation qualitatively verifies whether a holographic-like effect can be provided by the overlap and spread of images in the FPMVP AR optical system. Assuming a pupil size of approximately 3–4 mm under typical indoor laboratory lighting conditions, up to four VPs can fall within the pupil at the given H_VPI and V_VPI. Therefore, to evaluate the full parallax performance, the simulation was conducted using four VPs: VP 2, VP 3, VP 5, and VP 6. For each VP, a white window-shaped image with LP 1 thickness and the word “KIST” were combined into one set and placed at depths of 3.3 D, 1.5 D, and 0.1 D from left to right, with the focus adjusted accordingly for each depth. As shown in Fig. 3c–e, when the focus depth was set to 0.1 D, the window-shaped image and the word “KIST,” both located at the 0.1 D depth (rightmost), appeared in sharp focus, while the images at other depths became blurred. Similarly, when the focus depth was set to 1.5 D and 3.3 D, the corresponding window-shaped image and “KIST” text at each depth (center and leftmost, respectively) came into sharp focus, whereas the images located at different depths appeared blurred. This result confirms that holographic-like effects can be geometrically achieved over a wide depth range from 0.1 to 3.3 D, allowing for monocular focus adjustment. The original image used for the simulation and the simulation results for each VP are provided in Supplementary Notes 4–5 and Supplementary Figs. 4 and 5.
Experimental result
Figure 4a shows the physical FPMVP AR prototype and the experimental setup. As shown in Fig. 4b, the measured average H_VPI, V_VPI, and PD were 2.64 mm, 2.25 mm, and 0.97 mm, respectively, representing increases of 3.13%, 2.74%, and 4.25% compared to the designed values. Nevertheless, under laboratory illumination where the pupil size is assumed to be 3–4 mm, up to four VPs can still exist within the pupil, thereby enabling the provision of full parallax. In addition, Fig. 4b shows that the exit pupils of the peripheral VPs appeared blurred when the camera was focused on the exit pupils of the central VPs, visually confirming the best focus mismatch. To determine the DOF range satisfying the Rayleigh criterion, MTF contrast values were measured for LP 1 patterns at VP 1 through VP 6 by adjusting the focus in 0.1 D steps. The resulting MTF contrast values for the sagittal and tangential components, along with their fitted curves, are shown in Fig. 4c, d. The best focus positions for the sagittal and tangential components were found to be VP 1 (0.93 D/0.35 D), VP 3 (0.72 D/0.59 D), VP 4 (1.41 D/0.65 D), and VP 6 (1.45 D/0.91 D). These best focus values are located farther than those of VP 2 (1.90 D/1.96 D) and VP 5 (1.87 D/1.89 D), which is consistent with the simulation results. However, the best focus positions of the peripheral VPs were not consistent among themselves, nor did they match those of the central VPs. In addition, the depth ranges over which both sagittal and tangential MTF contrast values remained above 0.14 were measured as 2.44 D, 3.86 D, 2.88 D, 2.74 D, 3.85 D, and 3.24 D for VP 1 through VP 6, respectively. These values were reduced by 26%, 15%, 13%, 17%, 15%, and 2%, respectively, compared to the simulation results. These discrepancies are presumed to result from mechanical tolerances, optical axis misalignment, and a real PD that is larger than the designed value.
In both measurements, the standard deviation of the sagittal component was much greater than that of the tangential component, which is presumed to be caused by vertical vibrations of the laboratory building. The best focus values for each VP and the depth ranges meeting the Rayleigh criterion in the simulated and fabricated FPMVP AR prototypes are summarized in Table 2. From the images captured from VP 1 at the 3.0 D focus for the sagittal component and the 2.4 D focus for the tangential component, both showing MTF contrast values closest to 0.14, it can be seen that, as in the simulation, the smallest pixel patterns are distinguishable by the naked eye.
(a) Fabricated FPMVP AR prototype with real lens. (b) VP array at the pupil position of the prototype. MTF characteristics of all VPs and image results at an MTF contrast value of 0.14 for (c) sagittal and (d) tangential components.
Next, the holographic-like effect was experimentally reproduced with the external view blocked. For evaluation, a set consisting of a window-shaped image and the word “KIST” was assigned to three different depths (3.3 D, 1.5 D, and 0.1 D from left to right), and the evaluation was conducted by positioning four VPs (VP 2, VP 3, VP 5, and VP 6) within the entrance pupil of the camera. Supplementary Note 6 and Supplementary Fig. 6 show the full-resolution original image displayed on the micro-OLED panel. As shown in Fig. 5a–c, when the camera focus was set to 0.1 D, 1.5 D, and 3.3 D, only the corresponding virtual object appeared overlapped, while the others were dispersed. For precise quantitative evaluation, the focus of the 16 mm focal length lens was adjusted in 0.1 D steps, and the diagonal size of the window-shaped image at each depth was measured. Figure 5d–f demonstrate that the four VP images overlap most precisely at 3.0 D, 1.5 D, and 0.5 D, leading to the smallest image size. This indicates that, unlike the simulation, the overlap positions of the images in the actual experiment did not exactly match the intended image depths. This error is presumed to be caused by the mechanical tolerances of the physical FPMVP AR prototype and misalignment of the optical axis. Nevertheless, both qualitative and quantitative experiments confirmed that holographic-like effects, capable of providing resolution sufficient to discern fine patterns over a wide depth range with the naked eye, can be achieved without any mechanical actuators or structural modifications of the optical system components.
Evaluation of hologram-like effects in the FPMVP AR prototype. Qualitative assessment at object depths of (a) 0.1 D, (b) 1.5 D, and (c) 3.3 D. Quantitative assessment at object depths of (d) 0.1 D, (e) 1.5 D, and (f) 3.3 D.
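The quantitative overlap evaluation above reduces to locating the focus depth at which the composite image is smallest. A minimal sketch with hypothetical diagonal-size measurements (the 3.0 D minimum mirrors the reported result for the 3.3 D object):

```python
# Hypothetical measurements: diagonal size (px) of the window image rendered
# at 3.3 D, recorded while stepping the camera focus in 0.1 D increments.
measured_size = {2.8: 214.0, 2.9: 209.5, 3.0: 207.1, 3.1: 208.8, 3.2: 212.6}

def best_overlap_depth(size_by_depth):
    """Depth at which the four VP images overlap most tightly, i.e. where
    the composite image diagonal is smallest."""
    return min(size_by_depth, key=size_by_depth.get)

print(best_overlap_depth(measured_size))  # -> 3.0
```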
Subsequently, the hologram-like effect was evaluated in a real-world setting by displaying four VPs within the human pupil diameter and placing real objects alongside virtual images. A nearby virtual object (a spacecraft) and a mid-range virtual object (an astronaut) were positioned at 3.3 D and 1.5 D, respectively, matching the depths used in the previous experiments. However, the farthest virtual object (Saturn) was located at 0.3 D, limited by the physical constraints of the lab environment. Supplementary Note 7 and Supplementary Fig. 7 show the full-resolution original image used for verifying the holographic-like effect in the real-world experiments. Paper depth markers were placed beneath each virtual image to indicate their respective depths, and for comparison with real objects, a pink Loopy figure was placed at 3.3 D and a star illustration at 0.3 D. All virtual 3D objects successfully maintained high image quality, as shown in Fig. 6a–c. In these photographs, a green arrow indicates the virtual image that matches the camera’s depth of focus, whereas a blue arrow marks the real object. When the camera was focused at 3.3 D, the text labeled “3.3 D” and the Loopy figure appeared sharp and the spacecraft images generated from the four VPs overlapped clearly, while the virtual images at other depths appeared misaligned and blurred, resembling the appearance of defocused real objects. Similarly, when the focus was set to 1.5 D, the “1.5 D” label came into clear focus and the astronaut images converged. In contrast, content at other depths became scattered. At a focus setting of 0.3 D, the “0.3 D” label and the star illustration appeared distinctly sharp and the Saturn images were well aligned, while the other real and virtual objects at unmatched depths were perceptually blurred and misaligned. (See Supplementary Movie 1 with explanation in Supplementary Note 8 and Supplementary Fig. 8.)
These results demonstrate that the FPMVP AR prototype can provide holographic-like effects over a wide depth range, even in environments where virtual images and real-world objects are viewed simultaneously. Slight degradation in the clarity of the overlapping virtual images, compared to real objects, was observed due to the misalignment of the VPs’ best focus depths. Nevertheless, this limitation could be mitigated by optimizing the focal length of each lens element in the lens array, potentially enabling a more natural viewing experience. Finally, to compare the visual experience provided by a single VP with that of multiple VPs, measurements were performed using only VP 2 under the same conditions, but with 20% higher brightness. As shown in Fig. 6d–f, good image quality is maintained across all depths as the camera focus is adjusted from 0.0 to 3.3 D. However, a single VP does not allow for monocular accommodation. (See Supplementary Movie 2 with explanation in Supplementary Note 9 and Supplementary Fig. 9 for the real-time experiment.) This comparison suggests that the FPMVP AR optical system, which enables monocular accommodation, can provide a more immersive visual experience than techniques that merely deliver high-quality images over a wide depth range. However, as shown in the camera-captured image of Fig. 6a, the high-quality virtual images produced by the FPMVP optical system may appear dispersed on the retina, potentially being perceived as multiple vision or fragmented images. Nevertheless, because the human visual system processes visual information such that only the foveal region is perceived sharply while peripheral vision remains relatively blurred, this apparent separation of fine image details is likely to be perceived similarly to natural retinal blur23,24.
Qualitative evaluation of hologram-like performance of the FPMVP AR prototype in real environments with focused depths at (a) 0.3 D, (b) 1.5 D, and (c) 3.3 D. Single-viewpoint evaluation of VP 2 in real environments with focused depths at (d) 0.3 D, (e) 1.5 D, and (f) 3.3 D.
Discussion
In this study, we proposed a simple FPMVP AR technology that achieves both EDOF and full parallax using only a lens array and a birdbath-structured AR optical system. Simulation results of the proposed FPMVP AR prototype demonstrated the potential to satisfy the Rayleigh criterion across a depth range from 0.0 to 3.3 D for all VPs and to reduce VAC by generating holographic-like effects without the need for additional mechanical actuators or optical modifications. Although the physical prototype satisfied the Rayleigh criterion over a narrower range compared with the simulation, it was experimentally confirmed that holographic-like effects could still be realized in practical scenarios without any additional mechanical driving or optical control.
In the proposed FPMVP AR prototype, the best focus positions were not perfectly aligned across all VPs. This issue can be resolved by optimizing the effective focal length of each lens in the array, thereby improving the alignment and extending the depth of field that meets the Rayleigh criterion. Furthermore, beyond the current design approach, matching the optical path at the virtual screen’s best focus position instead of the concave mirror could further enhance the resolution of the FCA.
Although the proposed FPMVP AR prototype cannot fully cover vertical pupil movement, this is not a fundamental limitation of the optical principle of the FPMVP system, but rather a hardware constraint of this prototype, which can be readily addressed. By employing a higher-resolution and larger micro-display to increase the number of VPs, it is possible to ensure that multiple VPs exist simultaneously within the pupil, thereby accommodating pupil movements in all directions.
In addition, considering the perceptual characteristics of human vision, it would be necessary to conduct follow-up human-factor studies to quantitatively evaluate visual and perceptual responses associated with the virtual image separation and retinal blur in the FPMVP system. Such studies would provide valuable insights for optimizing the VPI design and enhancing both visual comfort and perceptual realism in future AR NED systems.
Consequently, the proposed FPMVP AR optical system exhibits high manufacturability and operational stability by directly utilizing a commercial flat micro-OLED display as the image source. Moreover, because the resolution at each VP is determined by the segmented resolution of the display rather than the number of lenses, improvements in micro-OLED display resolution can directly enhance the overall image quality. This contributes to achieving high-precision holographic-like images while minimizing aberrations and distortions that commonly occur in conventional integral imaging systems. Therefore, this technology shows strong potential to facilitate the widespread adoption of AR optical systems by reducing VAC across a wide depth range using micro-OLED display–based AR NEDs.
Materials and methods
Implementation of prototype and eye mimicking system
In our FPMVP AR prototype, we used Sony’s ECX335SL (0.71” FHD) micro-OLED display, and Thorlabs’ DT12/M was used to control the micro-OLED display position. Each lens in the lens array employed the Edmund Optics 45-262 model, which has a clear diameter of 3.0 mm and satisfies the LAI conditions with a focal length of 12 mm. In addition, an aperture stop was individually placed on top of each lens in the lens array to achieve the EDOF condition. For the birdbath-type AR optical system, we fabricated the FPMVP AR prototype using a concave mirror with a focal length of 24.8 mm and a 50/50 beam splitter that divides the incident light into transmitted and reflected beams. The focal length of the AR optics was designed to be 24.8 mm in order to provide an eye relief of approximately 18 mm, defined as the distance from the end of the optical housing to the eye. Considering the housing used in this study, this corresponds to a distance of about 40 mm from the main optics to the eye. To experimentally replicate the eye mimicking system modeled in the simulation, we used a combination of an image sensor (Basler Model No. acA2440-35um) and a 16 mm focal length lens (Edmund Optics Model No. 59-870). This setup was used for MTF measurements and for experiments verifying the holographic-like effect using window patterns and the text “KIST”. However, for verifying the holographic-like effect under real-world conditions, the same image sensor was equipped with a 12 mm focal length lens (Edmund Optics Model No. 58-001) to account for the H_FOV and V_FOV of the FCA during imaging.
Measurement with precise depth adjustment
In the simulated FPMVP AR prototype, a 16 mm fixed focal length paraxial lens and an image plane were used to mimic the refractive power of the human eye and the crystalline lens. Square wave MTF contrast values were measured at 0.1 D depth intervals by shifting the image plane of the 16 mm fixed focal length paraxial lens. Verification of holographic-like effects was similarly performed by simulating images at various focal depths for all VPs in Zemax OpticStudio, incorporating disparity information from the VPI, and compositing these images in MATLAB to confirm the overlay.
For the fabricated FPMVP AR prototype, square wave MTF contrast values were measured at 0.1 D depth steps by positioning the entrance pupil of the eye mimicking camera lens at a distance corresponding to dMO-VP and finely adjusting the focal depth using the rotation mount (Model No. PRM2/M). (See Supplementary Note 10 and Supplementary Fig. 10.)
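The image-plane shifts used for this depth stepping can be approximated with the thin-lens equation 1/f = 1/do + 1/di applied to the 16 mm eye-mimicking lens. The sketch below is a paraxial approximation with illustrative endpoints, not the exact Zemax or camera-lens model.

```python
def image_distance_mm(f_mm: float, object_depth_d: float) -> float:
    """Thin-lens image distance for a lens of focal length f_mm focused on
    an object at the given depth in diopters: 1/f = 1/do + 1/di."""
    do_mm = float('inf') if object_depth_d == 0.0 else 1000.0 / object_depth_d
    return 1.0 / (1.0 / f_mm - 1.0 / do_mm)

# Image-plane positions for the 16 mm paraxial lens at the depth extremes:
d0 = image_distance_mm(16.0, 0.0)   # 0.0 D (infinity) -> 16.000 mm
d33 = image_distance_mm(16.0, 3.3)  # 3.3 D            -> ~16.89 mm
print(round(d0, 3), round(d33, 3))
```

Under this approximation, sweeping the full 0.0–3.3 D range moves the image plane by under 1 mm, which is why fine mechanical focus adjustment is required.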
Data availability
All data generated or analyzed during this study are included in this published article and its supplementary information files.
References
Kim, N. et al. 3D display technology. Disp. Imaging 1, 73–95 (2014).
Shibata, T., Kim, J., Hoffman, D. M. & Banks, M. S. The zone of comfort: Predicting visual discomfort with stereo displays. J. Vis. 11, 11. https://doi.org/10.1167/11.8.11 (2011).
Dargan, S. et al. Augmented reality: A comprehensive review. Arch. Comput. Methods Eng. 30, 1057–1080. https://doi.org/10.1007/s11831-022-09829-6 (2023).
Pi, D., Liu, J. & Wang, Y. Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display. Light Sci. Appl. 11, 231. https://doi.org/10.1038/s41377-022-00916-3 (2022).
Park, J.-H. & Lee, B. Holographic techniques for augmented reality and virtual reality near-eye displays. Light Adv. Manuf. 3, 137–150. https://doi.org/10.37188/lam.2022.137 (2022).
Akşit, K., Kautz, J. & Luebke, D. Slim near-eye display using pinhole aperture arrays. Appl. Opt. 54, 3422–3427 (2015).
Ueno, T. & Takaki, Y. Super multi-view near-eye display to solve vergence–accommodation conflict. Opt. Express 26, 30703–30715. https://doi.org/10.1364/OE.26.030703 (2018).
Bolotova, A., Putilin, A. & Druzhin, V. Super multi-view augmented reality glasses. Proc. SPIE 10676, 1067605. https://doi.org/10.1117/12.2314165 (2018).
Li, Q. et al. A reflective augmented reality integral imaging 3D display by using a mirror-based pinhole array. Appl. Sci. 9, 3124. https://doi.org/10.3390/app9153124 (2019).
Lee, H. & Choi, H.-J. Luminance-enhanced pinhole-based light field near eye display using polarization-multiplexed retinal imaging. IEEE J. Sel. Top. Quantum Electron. https://doi.org/10.1109/JSTQE.2024.3388860 (2024).
Lanman, D. & Luebke, D. Near-eye light field displays. ACM Trans. Graph. 32, 1–10 (2013).
Lippmann, G. Épreuves réversibles donnant la sensation du relief. J. Phys. Theor. Appl. 7, 821–825. https://doi.org/10.1051/jphystap:019080070082101 (1908).
Wang, X. & Hua, H. Depth-enhanced head-mounted light field displays based on integral imaging. Opt. Lett. 46, 985–988. https://doi.org/10.1364/OL.416421 (2021).
Lee, B., Jung, S., Min, S.-W. & Park, J.-H. Three-dimensional display by use of integral photography with dynamically variable image planes. Opt. Lett. 26, 1481–1482. https://doi.org/10.1364/OL.26.001481 (2001).
Kim, Y. et al. Point light source integral imaging with improved resolution and viewing angle by the use of electrically movable pinhole array. Opt. Express 15, 18253–18267. https://doi.org/10.1364/OE.15.018253 (2007).
Park, J.-H. et al. Integral imaging with multiple image planes using a uniaxial crystal plate. Opt. Express 11, 1862–1875. https://doi.org/10.1364/OE.11.001862 (2003).
Zhou, R. et al. Depth of field expansion method for integral imaging based on diffractive optical element and CNN. Opt. Express 31, 38146–38164. https://doi.org/10.1364/OE.493024 (2023).
Kheibarihafshejani, S. & Park, J.-H. Integral imaging 3D display with enhanced depth range using slit mirror array and dual display systems. J. Opt. https://doi.org/10.1088/2040-8986/ad4bcb (2025).
Kim, S. & Park, S. Full parallax multi-focus three-dimensional display. US Patent 10,666,930 (2020).
Kim, S.-K., Kim, E.-H. & Kim, D.-W. Full parallax multifocus three-dimensional display using a slanted light source array. Opt. Eng. 50, 114001. https://doi.org/10.1117/1.3655963 (2011).
Kim, S. K., Kwon, Y. & Yoon, K.-H. Extended depth of field in augmented reality. Sci. Rep. 13, 8786. https://doi.org/10.1038/s41598-023-36098-9 (2023).
Winn, B., Whitaker, D., Elliott, D. B. & Phillips, N. J. Factors affecting light-adapted pupil size in normal human subjects. Invest. Ophthalmol. Vis. Sci. 35, 1132–1137 (1994).
Yarbus, A. L. Eye Movements and Vision (Springer, 2013).
Hirsch, J. & Curcio, C. A. The spatial resolution capacity of human foveal retina. Vision Res. 29, 1095–1101 (1989).
Funding
This research was supported by Korea Institute of Science and Technology (KIST) Institutional Program (Project No. 2E33842).
Author information
Contributions
Ki-hyuk Yoon performed the basic optical system design and simulations. Jinho Yoon designed and performed experiments on the core optical system, verified the optical structure, extracted the data, and wrote the manuscript. Sung Kyu Kim conceived and designed the basic optical structure, supervised the project, and is the corresponding author. Wonshik Choi provided valuable advice and guidance. All authors reviewed the manuscript.
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethics approval
This study did not involve human participants or animals.
Additional information
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic Supplementary Material
Below is the link to the electronic supplementary material.
Supplementary Material 1
Supplementary Material 2
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Yoon, J., Yoon, Kh., Choi, W. et al. Holographic effect through full parallax multi-viewpoint augmented reality with extended depth of field. Sci Rep 16, 440 (2026). https://doi.org/10.1038/s41598-025-29734-4