Abstract
While epitaphs recording tomb occupants’ identities and biographies provide critical insights for archeological discoveries, their cinnabar inscriptions often suffer severe degradation during prolonged burial. To address this challenge, this study proposes a hyperspectral processing framework designed to enhance the readability of degraded cinnabar inscriptions on epitaphs. The framework quantifies sample spectral curves using Euclidean distance, enabling classification into high-contrast and low-contrast groups. For high-contrast groups (Branch 1), optimal spectral bands are selected through the optimum index factors, with subsequent integration of imaging differentials and edge detection to enhance text contrast and boundary definition. Branch 2 employs pseudo-color image synthesis, dark channel prior-based feature restoration, and bilateral filtering Retinex algorithms to achieve noise reduction and feature enhancement for low-contrast groups. Experimental results demonstrate the framework’s superiority over conventional methods in significantly improving the readability of cinnabar inscriptions, highlighting its potential as a valuable tool for archeological epigraphy.
Introduction
As invaluable cultural heritage, epitaphs provide crucial primary materials for studying specific historical periods and ancient civilizations1. The underground epitaph tradition in China originated in the Eastern Han Dynasty, gradually proliferated during the Northern and Southern Dynasties, and reached its zenith in the Tang and Song periods. Typically, they consist of two parts: “Zhi” (prose) and “Ming” (verse), which respectively record the deceased’s information and convey mourning sentiments. Before engraving, craftsmen would draft the text using cinnabar—a mineral pigment believed to protect the dead from evil—creating distinctive cinnabar inscriptions2. During the Han-Tang era, due to cinnabar’s scarcity and cultural significance, such epitaphs often served as markers of social status, resulting in their current rarity. Presently, these artifacts face critical conservation challenges3,4: pigment fading, substrate erosion, and structural cracking caused by environmental and anthropogenic factors, leading to irreversible progressive deterioration. These degradation processes severely hinder academic analysis and historical interpretation, urgently necessitating the development of effective preservation and information extraction methods5,6,7.
The cinnabar-written texts convey the most historically significant information in the epitaph. These narratives serve as primary sources that complement and revise official historical records, offering unique perspectives on individual lives, social hierarchies, and religious customs of bygone eras. Current approaches for enhancing this information can be classified into three primary categories: manual restoration techniques8, chemical conservation methods9, and digital imaging analyses10,11,12,13. Manual restoration methods, while benefiting from expert knowledge, are inherently subjective and labor-intensive. Chemical conservation approaches, involving reagent-based treatments14, risk causing secondary damage to the fragile cinnabar inscriptions. Digital imaging techniques, though non-invasive, often require specialized equipment and may necessitate moving the subject from its original position10,11.
The development of portable hyperspectral imaging systems has revolutionized field-based archeological analysis15,16,17. These advanced devices enable comprehensive spectral data acquisition across continuous wavelength ranges while remaining completely non-invasive18. Previous applications of hyperspectral technology in cultural heritage preservation have demonstrated remarkable success: Wang et al.19 effectively eliminated mold-related information from hyperspectral images (HSIs), whereas Fu et al.20 successfully revealed the painted image on a rock surface. Notably, Cao et al.21 achieved significant advances in mural restoration by employing hyperspectral techniques to recover original coloration from smoke-damaged surfaces, and Yu et al.22 automatically labeled diseased areas on the surfaces of ancient murals using hyperspectral technology. However, cinnabar inscriptions present unique analytical challenges distinct from other cultural artifacts such as murals or pottery. Although the cinnabar and the stone substrate (background) of an epitaph initially show vivid color contrast, differing degradation processes and conservation treatments cause cinnabar inscriptions to become visually blurred23. Nevertheless, the fundamental material differences between the cinnabar inscriptions and the background manifest as distinct spectral signatures. This inherent spectral divergence enables precise material discrimination and facilitates targeted enhancement of cinnabar-specific information through advanced spectral processing techniques.
It is evident that specific information from cinnabar inscriptions can be recovered and enhanced in the spectral dimension, and spectral techniques have proven effective in information extraction and enhancement for other cultural relics19,24,25. However, to the best of our knowledge, most existing research on cinnabar inscriptions still uses natural images or applies chemical methods26. In addition, spectral-based processing often relies on matrix transformations, whose accuracy remains insufficient10. To address these problems, this paper proposes a novel spectral-based framework for enhancing cinnabar inscriptions, aimed at improving the interpretation of inscriptions and the analysis of their content. The framework processes images through three sequential stages: data pre-processing, data classification, and image enhancement. During the data classification stage, samples with a large spectral difference between text and background are defined as the high-contrast group, while samples with a small difference are defined as the low-contrast group. In the enhancement stage, targeted enhancement is applied to each group based on the classification results to improve visibility. The main innovations of this paper can be summarized as follows:
First, we propose a hyperspectral enhancement framework tailored for cinnabar inscriptions with varying degrees of degradation, achieving end-to-end enhancement of cinnabar inscriptions.
Second, Branch 1 (for the high-contrast group) employs a band selection mechanism based on the optimum index factor (OIF)27 for feature space selection, combined with a Laplacian-based edge enhancement operator for refining text regions.
Third, Branch 2 (for the low-contrast group) integrates red-channel remap, the dark channel prior model, and an improved Retinex algorithm to enhance the visibility of inscriptions.
Methods
The overall workflow of this study is shown in Fig. 1, which outlines the entire process from data collection to the final enhancement results.
Equipment details and software environment
The equipment is shown in Fig. 2, with detailed information presented in Table 1. During the collection process, halogen lamps (100–1000 W) were used as the primary lighting equipment to simulate daylight for indoor acquisition.
Radiation correction during data pre-processing was performed automatically by the Specim IQ device using the reflective whiteboard data. ENVI software (version 5.3) was used primarily for band clipping and for the minimum noise fraction (MNF) forward and inverse transformations. All subsequent data classification and the operations in Branch 1 and Branch 2 were performed in a Python 3.11 environment.
Experimental materials
The investigated cinnabar inscriptions derive from two archeological sources shown in Fig. 3.
Data pre-processing
Using a spectroscopic imager for data acquisition requires specific calibration due to the non-uniform spectral distribution of the incoming light, the spectral response of the dispersive element and detector, and the presence of dark current noise7,28. This calibration can be executed with the following equation29:
$$X=\frac{{X}_{raw}-{X}_{dark}}{{X}_{white}-{X}_{dark}}$$
where X represents the reflectance-corrected data, Xraw is the original hyperspectral data of the epitaph, Xwhite refers to the standard white-reference data obtained on-site, and Xdark pertains to the dark current data acquired with the light source turned off and the lens covered. The reflectance of the standard reflector is 99%.
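As a concrete sketch, the correction can be implemented in a few lines of NumPy; the function name and array shapes are illustrative assumptions rather than the authors' code:

```python
import numpy as np

def reflectance_correction(x_raw, x_white, x_dark):
    """Flat-field reflectance correction of a hyperspectral cube.

    x_raw and x_dark are (H, W, B) cubes; x_white is a per-band white
    reference, either shape (B,) or a cube broadcastable against x_raw.
    """
    denom = np.clip(x_white - x_dark, 1e-8, None)  # guard against division by zero
    return (x_raw - x_dark) / denom
```

If absolute reflectance is needed, the result can additionally be scaled by 0.99 to account for the non-ideal standard reflector.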
Then, the data X underwent preliminary denoising using the MNF forward and inverse transform.
Data classification
While cinnabar and stone have distinct material compositions that should, in theory, allow spectral differentiation, the degraded states of these materials cause their spectral curves to overlap, complicating practical distinction. Although various classification and spectral unmixing methods have been proposed for material distribution estimation30,31,32, this framework adopts the Euclidean distance, the simplest such measure, to differentiate the two materials for computational efficiency:
$$d=\sqrt{\mathop{\sum }\limits_{i=1}^{n}{({x}_{i}-{x}_{i}^{{\prime} })}^{2}}$$
where \({x}_{i}\) and \({x}_{i}^{{\prime} }\,(i=1,2,...,n)\) are the reflectance spectra of the i-th sampled pixels in the cinnabar inscription (target region) and the background stone tablet (background region), respectively. A and B denote the two image classes, and a is the threshold, set to 0.5. When d > a, the spectral curves differ significantly and the image is classified as class A; otherwise, it is classified as class B.
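A minimal sketch of this classification step; averaging the per-pixel distances before thresholding is an assumption, as the paper specifies only the threshold a = 0.5 on the distance d:

```python
import numpy as np

def classify_contrast(target, background, a=0.5):
    """Classify a sample into the high-contrast group ('A') or the
    low-contrast group ('B').

    target, background: (n, B) arrays of reflectance spectra sampled
    from the inscription and the stone substrate.
    """
    # Euclidean distance per sampled pixel pair, averaged over samples
    d = float(np.linalg.norm(target - background, axis=1).mean())
    return ("A" if d > a else "B"), d
```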
Branch 1 in the proposed framework
Branch 1 contains three components: OIF-based feature band selection, high-differentiated image acquisition for feature enhancement, and image sharpening. Each component is detailed below.
Optimum index factor for feature band selection: OIF33,34 is a model used for selecting spectral bands with the aim of identifying the optimal combination that maximizes information retention while minimizing inter-band correlation. In other words, the ideal spectral band combination should have the highest standard deviation for each band and the lowest correlation coefficients among the bands. Through the OIF model, not only can the computational load required by the framework be reduced, but redundant bands in hyperspectral data can also be eliminated.
In Branch 1, all possible three-band combinations are derived from the pre-screened spectral range R of image X (denoted as \(X\mathop{\to }\limits^{R}M\)). Subsequently, the OIF value of each combination in set M is calculated. Finally, the combination \({X}^{{\prime} }\) with the highest OIF value is selected as the optimal feature band combination for image X, which can be expressed as:
$${X}^{{\prime} }=\arg \mathop{\max }\limits_{m\in M}{\rm{OIF}}(m)$$
Within Branch 1, Eq. (4) is applied separately to the 10 spectral bands preceding 519.25 nm and the 10 bands following 620.30 nm, yielding two feature band combinations \({X}_{1}^{{\prime} }\) and \({X}_{2}^{{\prime} }\). Specifically, \({X}_{1}^{{\prime} }\) represents the feature combination demonstrating no spectral separability between cinnabar and background materials, whereas \({X}_{2}^{{\prime} }\) corresponds to the combination exhibiting maximum spectral differentiation.
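Assuming the standard OIF definition (sum of band standard deviations divided by the sum of absolute pairwise correlation coefficients), the exhaustive three-band search can be sketched as:

```python
import itertools
import numpy as np

def oif(cube, bands):
    """Optimum index factor of a band subset of an (H, W, B) cube:
    sum of band standard deviations / sum of |pairwise correlations|."""
    X = cube[..., list(bands)].reshape(-1, len(bands))
    stds = X.std(axis=0)
    corr = np.corrcoef(X, rowvar=False)
    iu = np.triu_indices(len(bands), k=1)  # upper-triangular band pairs
    return stds.sum() / np.abs(corr[iu]).sum()

def best_three_band_combination(cube, candidate_bands):
    """Exhaustively score all three-band combinations and keep the best."""
    return max(itertools.combinations(candidate_bands, 3),
               key=lambda b: oif(cube, b))
```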
High-differentiated image acquisition in feature enhancement: Principal component analysis (PCA) is a core processing technique in image analysis, leveraging orthogonal linear transformation to project high-dimensional data into a low-dimensional feature space10. By effectively suppressing random noise while maximizing variance retention, PCA has become indispensable for extracting meaningful information from complex datasets.
To isolate primary information and eliminate irrelevant noise, PCA is applied to \({X}_{1}^{{\prime} }\) and \({X}_{2}^{{\prime} }\), with the first principal component extracted from each. Subsequently, to enhance the distinguishability between cinnabar and the background, the two results are subjected to differential processing, yielding a highly differentiated result denoted as \(\hat{X}\) (Fig. 4e):
$$\hat{X}={{\rm{PC}}}_{1}({X}_{2}^{{\prime} })-{{\rm{PC}}}_{1}({X}_{1}^{{\prime} })$$
where \({{\rm{PC}}}_{1}(\cdot )\) denotes extraction of the first principal component.
Image sharpening: While differential processing enhances image details, it compromises the clarity and completeness of cinnabar inscription edges (Fig. 4e). To address this, the Laplacian operator is employed to sharpen the image, represented as \(Y={\rm{Lap}}(\hat{X})\). The final result Y is shown in Fig. 4f.
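The three Branch 1 steps (first principal component per band combination, subtraction, Laplacian sharpening) can be sketched as follows; the 4-neighbour Laplacian kernel and the function names are illustrative assumptions:

```python
import numpy as np

def first_pc(cube):
    """First principal component image of an (H, W, B) band stack.
    (The sign of a principal component is arbitrary.)"""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / max(Xc.shape[0] - 1, 1)
    _, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return (Xc @ vecs[:, -1]).reshape(H, W)

def laplacian_sharpen(img):
    """Sharpen with the 4-neighbour Laplacian (borders wrap via roll)."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return img - lap

def branch1(x1, x2):
    """High-differentiated image: difference of the two PC1 images, sharpened."""
    return laplacian_sharpen(first_pc(x2) - first_pc(x1))
```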
Branch 2 in the proposed framework
Branch 2 contains three components: red-channel remapping using an NIR image, initial enhancement using the dark channel prior model, and detail enhancement using the Retinex model. Each component is detailed below.
Using NIR image for red-channel remapping: Mapping more distinctive features to the RGB channels can significantly enhance the discriminability of target features, although this may introduce color distortion. As shown in Fig. 5b, while there is spectral overlap between the text and background regions beyond the 900 nm wavelength, the text spectrum exhibits a significant response at 929.68 nm. Therefore, in Branch 2, we replace the R channel in RGB with the 929.68 nm near-infrared (NIR) channel (Fig. 5c) to generate the pseudo-color image Xre (Fig. 5d).
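The remapping step amounts to locating the band nearest 929.68 nm and substituting it for the R channel; the wavelength-lookup helper below is an assumption about how the cube is annotated:

```python
import numpy as np

def red_channel_remap(cube, wavelengths, rgb, nir_nm=929.68):
    """Pseudo-color image: replace R in an (H, W, 3) RGB image with the
    hyperspectral band closest to nir_nm.

    cube: (H, W, B) reflectance cube; wavelengths: per-band list in nm.
    """
    idx = int(np.argmin(np.abs(np.asarray(wavelengths) - nir_nm)))
    pseudo = rgb.copy()
    pseudo[..., 0] = cube[..., idx]  # R channel <- nearest NIR band
    return pseudo
```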
Dark channel prior model for initial enhancement: The dark channel prior model35 has been successfully applied in image enhancement. By analyzing the dark channel of an image, the model estimates the atmospheric light and transmission, which are then used to enhance the image, effectively improving its contrast, visual appeal, and overall quality. In Branch 2, it can be expressed as:
$${X}_{d}={\rm{Dark}}({X}_{re})$$
where \({\rm{Dark}}(\cdot )\) denotes the dark channel prior model and \({X}_{d}\) represents the enhanced image.
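A minimal sketch of the dark channel prior enhancement, following the standard formulation (patch-minimum dark channel, atmospheric light from the brightest dark-channel pixels, clamped transmission); the parameter values are illustrative, not the paper's:

```python
import numpy as np

def dark_channel(img, patch=7):
    """Per-pixel channel minimum followed by a patch-wise minimum filter."""
    dc = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dc, pad, mode="edge")
    out = np.empty_like(dc)
    for i in range(dc.shape[0]):
        for j in range(dc.shape[1]):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def dcp_enhance(img, omega=0.95, t0=0.1, patch=7):
    """Dark-channel-prior enhancement of an (H, W, 3) image in [0, 1]."""
    dc = dark_channel(img, patch)
    n = max(1, int(dc.size * 0.001))          # brightest 0.1% dark-channel pixels
    idx = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[idx].mean(axis=0) + 1e-8          # atmospheric light estimate
    t = 1.0 - omega * dark_channel(img / A, patch)
    t = np.clip(t, t0, 1.0)[..., None]        # clamp transmission
    return np.clip((img - A) / t + A, 0.0, 1.0)
```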
Retinex model for detail enhancement: The Retinex theory36 aims to simulate the way the human visual system perceives light and color, with its model designed to enhance the color and contrast of images.
The model can be expressed as:
$${X}_{d}=T({X}_{d})\cdot L({X}_{d})$$
where T(Xd) and L(Xd) represent the reflection component and the incident component of the image, respectively. The imaging quality of T(Xd) determines the image quality of Xd.
In Branch 2, we first map T(Xd) to the HSV domain. Then, we apply bilateral filtering to the value channel V, which can be denoted as:
$${V}_{f}={\rm{BF}}(V)$$
where \({\rm{BF}}(\cdot )\) denotes the bilateral filter and \({V}_{f}\) is the filtered value channel.
Finally, we convert the filtered HSV image back to the RGB domain to obtain the final image Y, represented as \(Y={{\rm{RGB}}}_{domain}\leftarrow {{\rm{HSV}}}_{domain}\leftarrow {V}_{f}\).
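Since the paper does not spell out these internals, the sketch below assumes a single-scale Retinex with a Gaussian-blurred illumination estimate and a brute-force bilateral filter applied to the value channel:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    """Separable Gaussian blur of a 2-D channel with reflective padding."""
    r = max(1, int(3 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    smooth = lambda v: np.convolve(np.pad(v, r, mode="reflect"), k, "valid")
    return np.apply_along_axis(smooth, 1, np.apply_along_axis(smooth, 0, img))

def retinex_reflection(v, sigma=2.0):
    """Single-scale Retinex reflection estimate: log(V) - log(blurred V)."""
    eps = 1e-6
    return np.log(v + eps) - np.log(gaussian_blur(v, sigma) + eps)

def bilateral_filter(v, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter on a single value channel in [0, 1]."""
    pad = np.pad(v, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(v)
    for i in range(v.shape[0]):
        for j in range(v.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight: penalize intensity differences from the center
            w = spatial * np.exp(-(win - v[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * win).sum() / w.sum()
    return out
```

The RGB/HSV conversions themselves are omitted; any standard implementation can supply them.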
Evaluation indicators
In this work, objective evaluation of image quality is performed using non-reference metrics37,38, including average gradient, edge strength, contrast, and information entropy. These metrics are detailed in Table 2.
The average gradient quantifies changes in the sharpness and texture of the image. The edge strength corresponds to the magnitude of the gradient at the edge locations. The contrast reflects image sharpness, with higher values indicating superior quality. Information entropy serves as a statistical measure of image features, reflecting the average content of information within an image.
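Table 2 is not reproduced here, so the sketch below assumes the standard definitions of the four no-reference metrics:

```python
import numpy as np

def average_gradient(img):
    """Mean root-mean-square of the horizontal and vertical gradients."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2)))

def edge_strength(img):
    """Mean gradient magnitude, a proxy for edge prominence."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

def contrast(img):
    """Global (standard-deviation) contrast."""
    return float(np.std(img))

def information_entropy(img_u8):
    """Shannon entropy of the 8-bit gray-level histogram."""
    hist = np.bincount(img_u8.ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```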
Results
Comparative methods and other settings
As comparative methods, this paper selects three classic component analysis approaches: PCA performs linear dimensionality reduction by maximizing data variance; MNF integrates noise suppression and dimensionality reduction through signal-to-noise-ratio-based component ranking; and independent component analysis (ICA) separates statistically independent components under a non-Gaussianity assumption. These three methods construct feature representations from the perspectives of variance optimization, noise minimization, and independence constraints, respectively.
All the parameters of these methods are set to the default values, and the number of components is set to 30.
Experimental results in Branch 1
The Branch 1 method is applied to Fig. 3a. From the spectral curves in Fig. 4b, it can be observed that the difference between the cinnabar inscription and the background is concentrated within the range of 519.25–620.30 nm. Next, PCA transformation is applied to the obtained images to extract the first principal components of the text and background, resulting in Fig. 4c, d. Finally, subtraction yields the maximum-difference information of Fig. 4c, d, as shown in Fig. 4e. It can be observed from Fig. 4e that the text structure is more complete and there is less noise in the background.
Experimental results in Branch 2
The Branch 2 method is applied to Fig. 3b. From Fig. 3b, it can be seen that beyond the 900 nm wavelength the spectral values of the text region and the background region mix with each other, indicating that after 900 nm the information in the two regions is relatively consistent across bands. Therefore, in Branch 2, the NIR channel at 929.68 nm (Fig. 5c) is selected to replace the red channel in RGB to generate a pseudo-color image (Fig. 5d). For Fig. 5d, the background information is first removed using the dark channel prior, resulting in Fig. 5e. Then, the Retinex model is employed to estimate T(x) in Eq. (7) from Fig. 5e. Next, bilateral filtering is applied to T(x) for noise reduction. Finally, T(x) is restored to obtain the final result in Fig. 5f.
Apply to other types of cultural relics
We also applied this method to hyperspectral data from other types of cultural relics: the Zhushu pottery data were provided by the Shaanxi Academy of Archeology, and the Tang Dynasty epitaph data were provided by the School of Cultural Heritage, Northwest University.
The specific effects can be found in Supplementary Fig. 1 to Supplementary Fig. 4 of Supplementary Information.
Comparison between different filters in Branch 1
There are various filtering methods, including high-pass filtering, directional filtering, the Sobel operator39, etc. To find an appropriate filter, we applied different methods to Fig. 4e and evaluated their effects, as shown in Table 3. Directional filtering can enhance specific directions. The Prewitt operator40 can reduce false edges and noise. The Sobel operator can smooth noise and provide edge direction information. The Roberts operator localizes edges precisely but is sensitive to noise. However, as shown in Table 3, the Laplacian filter demonstrated superior performance owing to its ability to highlight areas of rapid intensity change, which makes it particularly suitable for sharpening the subtraction image (Fig. 4e). Therefore, in Branch 1, we opted for the Laplacian filter as the sharpening filter.
Comparison between different methods and Branch 1
Figure 6 shows the results obtained by different methods. From a visual perspective, the text area extracted with ICA (Fig. 6b) is almost invisible, while with PCA (Fig. 6a), some text is missing. Although MNF (Fig. 6c) can effectively extract text, its edge clarity is not as good as that of Branch 1 (Fig. 6d).
Table 4 presents the evaluation metrics of different methods. It can be observed that, except for the information entropy, where Branch 1 slightly underperforms PCA, Branch 1 achieves the best performance in the other three indicators. The improvement in edge strength aligns with the clear preservation of image edges in the visual results (Fig. 6), indicating that the method of Branch 1 can enhance the visual effect of text.
Comparison between RGB and pseudo-color images in Branch 2
As discussed previously, NIR images in HSIs typically contain more information. To demonstrate the additional information provided by NIR images, we applied the same processing workflow to RGB images; the results are shown in Fig. 7. Despite the high resolution of Fig. 7b, its enhancement effect is compromised by interference from the background information in Fig. 7a. In contrast, the information introduced by the NIR band in Fig. 7c significantly reduces background interference, thus improving the overall visual quality of Fig. 7d.
Comparison between different filters in Branch 2
To demonstrate the effectiveness of the filtering method selected for Branch 2, this study compared its performance indicators with those of other filters. Gaussian filtering, a commonly used linear smoothing filter, can remove high-frequency noise from images but loses image detail; consequently, its results have the lowest information entropy and edge strength. Homomorphic filtering, a method for enhancing image brightness and contrast, can make images clearer and easier to analyze; however, for this task, enhancing edge information is more crucial than enhancing contrast. As an edge-preserving filter, bilateral filtering avoids the edge blurring that can occur with traditional filters while accurately removing noise and preserving image details. Accordingly, as Table 5 shows, bilateral filtering achieves the best indicators, making it the preferred choice for Branch 2.
Comparison between different methods and Branch 2
Figure 8 shows the color display results of different methods. To ensure effective visualization, the three most relevant components were selected for RGB mapping. As shown in Fig. 8, the text information in PCA, ICA, and MNF is not intuitively displayed. Although the colors in MNF (Fig. 8c) are closer to reality and text information can be observed, they are not as distinct as those in Branch 2 (Fig. 8d). This indicates that Branch 2 can effectively enhance text information. Table 6 lists the evaluation indicators of different methods. Similar to Branch 1, Branch 2 yields good results in all indicators except for slightly lower information entropy. Furthermore, the visualizations in Fig. 8 demonstrate the effectiveness of the Branch 2 method for cinnabar inscription enhancement.
Discussion
We propose a cinnabar inscription enhancement method via spectral curve classification (CIEM-SCC), which has made significant progress in improving the readability and interpretability of faded cinnabar inscriptions using hyperspectral imaging. Moreover, the application of this method to Zhushu pottery and epitaphs from different dynasties has further confirmed its excellent visualization results, demonstrating that the method is not limited to cinnabar inscriptions but also has broad applicability to other cinnabar-inscribed artifacts. This discovery highlights the potential application value of the method in archeological research, particularly in the preservation and interpretation of cultural heritage.
The method employs a dual-branch strategy to address two different types of images: (1) for images with high contrast, an optimized process that includes band selection, noise suppression, and edge enhancement effectively restores textual details; (2) for images with low contrast, a refined technique that integrates pseudo-color composition, dark channel optimization, and Retinex enhancement successfully reveals obscured information. Experimental results on Zhushu pottery and epitaphs from various dynasties have further confirmed the method's significant effectiveness in improving text visibility and feature extraction, showing notable improvements in both quantitative metrics and visual quality.
Despite its outstanding performance, the method does have limitations, such as the impact of residual noise in severely degraded areas and the dependence on pigment composition for spectral band selection. Future research will focus on refining noise reduction techniques and expanding the application of spectral analysis in archeological research to include the preservation and study of Zhushu pottery and epitaphs from diverse historical periods.
Data availability
The inscription data is available at https://github.com/kiferpan/datasets. All data presented in the manuscript are original and do not involve any copyright conflicts.
References
Hsu, Y.-H. Searching for traces of the shi in tombs of the Jin Dynasty. J. Chin. Hist. 8, 269–296 (2024).
Minami, T. et al. Identification of source mine using sulfur, mercury, and lead isotope analyses of vermilion used in three representative tombs from the Kofun period in Japan. J. Archaeol. Sci. Rep. 37, 102970 (2021).
Hou, M.-l et al. Review of hyperspectral imaging technology in analyzing painted artifacts. Spectrosc. Spectr. Anal. 37, 1852–1860 (2017).
Zheng, L., Liang, X., Li, S., Li, Y. & Hu, D. Fading and showing mechanisms of ancient color relics based on light scattering induced by particles. RSC Adv. 8, 1124–1131 (2018).
Peng, L., Bo, W., Yang, H. & Li, X. Deep learning-based image compression for enhanced hyperspectral processing in the protection of stone cultural relics. Expert Syst. Appl. 271, 126691 (2025).
Wu, J., Xia, Z., Zhang, H. & Li, H. Blind quality assessment for screen content images by combining local and global features. Digit. Signal Process. 91, 31–40 (2019).
Qiao, K., Hou, M., Lyu, S. & Li, L. Extraction and restoration of scratched murals based on hyperspectral imaging-a case study of murals in the east wall of the sixth grotto of Yungang grottoes, Datong, China. Herit. Sci. 12, 123 (2024).
Li, Q., Wang, H. & Zou, Q. A murals inpainting algorithm based on sparse representation model. Geomat. Inf. Sci. Wuhan. Univ. 43, 1847–1853 (2018).
Chen, X., Zhu, Q., Zhang, N. & Chen, Q. Selection of cleaning and restoration techniques for calligraphy and painting relics—application of the principle of minimal intervention in the restoration of calligraphy and painting relics. Sci. Conserv. Archaeol. 29, 56–64 (2017).
Pei, Z., Huang, Y. M. & Zhou, T. Review on analysis methods enabled by hyperspectral imaging for cultural relic conservation. Photonics 10, (2023).
Zhang, Y. et al. Using infrared photography to study the smoky murals in muti-temple. Spectrosc. Spectr. Anal. 40, 3628 (2020).
Qiu, S., Zhang, P., Li, S. & Hu, B. Extraction and analysis algorithms for Sanxingdui cultural relics based on hyperspectral imaging. Comput. Electr. Eng. 111, 108982 (2023).
Yu, Z. et al. Agd-gan: adaptive gradient-guided and depth-supervised generative adversarial networks for ancient mural sketch extraction. Expert Syst. Appl. 255, 124639 (2024).
Artioli, G. Scientific Methods and Cultural Heritage: an Introduction to the Application of Materials Science to Archaeometry and Conservation Science (Oxford University Press, 2010).
Schmitt, B., Souidi, Z., Duquesnoy, F. & Donzé, F.-V. From RGB camera to hyperspectral imaging: a breakthrough in Neolithic rock painting analysis. Herit. Sci. 11, 91 (2023).
Behmann, J. et al. Specim IQ: evaluation of a new, miniaturized handheld hyperspectral camera and its application for plant phenotyping and disease detection. Sensors 18, 441 (2018).
Donzé, F.-V., Audin, L., Schmitt, B., Taillefer, A. & Truche, L. Contribution of hyperspectral analysis in relative dating applied to miculla petroglyphs (Peru). J. Archaeol. Sci. 169, 106023 (2024).
Picollo, M., Cucci, C., Casini, A. & Stefani, L. Hyper-spectral imaging technique in the cultural heritage field: new possible scenarios. Sensors 20, 2843 (2020).
Wang, S. et al. Virtual restoration of ancient mold-damaged painting based on 3d convolutional neural network for hyperspectral image. Remote Sens. 16, 2882 (2024).
Fu, Y. et al. Secrets on the rock: analysis and discussion of the Dunde Bulaq rock art site. Herit. Sci. 12, 38 (2024).
Cao, N. et al. Restoration method of sootiness mural images based on dark channel prior and retinex by bilateral filter. Herit. Sci. 9, 1–19 (2021).
Yu, K. et al. Automatic labeling framework for paint loss disease of ancient murals based on hyperspectral image classification and segmentation. Herit. Sci. 12, 192 (2024).
Khalid, S. et al. A review on traditional and artificial intelligence-based preservation techniques for oil painting artworks. Gels 10, 517 (2024).
Macalintal, J. N., Hassanzadeh, A. & Messinger, D. Hyperspectral VNIR-SWIR image fusion on cultural heritage and remote sensing datasets using image sharpening techniques. In Proc. SPIE 13031, Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imaging XXX, 130310O (7 June 2024); https://doi.org/10.1117/12.3012711.
Grabowski, B., Masarczyk, W., Głomb, P. & Mendys, A. Automatic pigment identification from hyperspectral data. J. Cult. Herit. 31, 1–12 (2018).
Pannocchia, C. P. et al. New evidence reveals the earliest use of cinnabar in the western mediterranean: the Neolithic settlement of La Marmotta (Lazio, Italy). Quat. Sci. Rev. 335, 108746 (2024).
Yousif, S. R. & Shneen, W. F. Using optimum index factor and determinant covariance methods and compare to PCA on satellite images to determine the earth’s landmarks (part of Tar- an najaf and its neighbours). J. Phys. Conf. Ser. 1660, 012077 (2020).
Gao, Z. et al. Application of hyperspectral imaging technology to digitally protect murals in the Qutan Temple. Herit. Sci. 11, 8 (2023).
Li, X., Yang, H., Chen, C., Zhao, G. & Ni, J. Deterioration identification of stone cultural heritage based on hyperspectral image texture features. J. Cult. Herit. 69, 57–66 (2024).
Feng, X.-R. et al. Hyperspectral unmixing based on nonnegative matrix factorization: a comprehensive review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 15, 4414–4436 (2022).
Bhatt, J. S. & Joshi, M. V. Deep learning in hyperspectral unmixing: a review. In Proc. IGARSS International Geoscience and Remote Sensing Symposium 2189–2192 (IEEE, 2020).
Kumar, V., Singh, R. S., Rambabu, M. & Dua, Y. Deep learning for hyperspectral image classification: a survey. Comput. Sci. Rev. 53, 100658 (2024).
Patel, N. & Kaushal, B. Classification of features selected through optimum index factor (oif) for improving classification accuracy. J. For. Res. 22, 99–105 (2011).
Wang, Y. & Song, B. et al. Study on the application of the optimum index factor in the electric power benchmarking. J. Phys. Conf. Ser. 1346, 012002 (2019).
Lee, S., Yun, S., Nam, J.-H., Won, C. S. & Jung, S.-W. A review on dark channel prior-based image dehazing algorithms. EURASIP J. Image Video Process. 2016, 1–23 (2016).
Hussein, R. R., Hamodi, Y. I. & Rooa, A. S. Retinex theory for color image enhancement: a systematic review. Int. J. Electr. Comput. Eng. 9, 5560 (2019).
Deng, C.-X., Wang, G.-B. & Yang, X.-R. Image edge detection algorithm based on improved Canny operator. In Proc. International Conference on Wavelet Analysis and Pattern Recognition 168–172 (IEEE, 2013).
Zhang, R., Xiao, Q., Du, Y. & Zuo, X. Dspi filtering evaluation method based on Sobel operator and image entropy. IEEE Photonics J. 13, 1–10 (2021).
Vairalkar, M. K. & Nimbhorkar, S. Edge detection of images using Sobel operator. Int. J. Emerg. Technol. Adv. Eng. 2, 291–293 (2012).
Saadatmorad, M., Rezaei-Sedehi, E. & Khatir, S. Crack detection in historic structures using an improved Prewitt filter: case study of the historic si-o-se-pol bridge in Iran. HCMCOU J. Sci.–Adv. Comput. Struct. 14, 3–9 (2024).
Acknowledgements
The authors extend special thanks to Professor Yang Lu of the School of Cultural Heritage at Northwest University and Associate Researcher Geng Qinggang of the Shaanxi Academy of Archeology for their assistance in data collection. This work is supported by the National Key Research and Development Program of China (No. 2022YFE0203800); the National Natural Science Foundation of China (62101446); the Key Research and Development Program of Shaanxi (No. 2024GH-ZDXM-44); Shaanxi Postdoctoral Research Grant (234012300004).
Author information
Authors and Affiliations
Contributions
Y.F. and J.W. conceived the research concept, with Y.F. developing the methodology and preparing visualizations. Y.F. wrote the original draft, which was critically revised and edited by J.W. S.P. implemented the software, performed validation, and conducted formal analyses. S.Q. carried out investigation activities while J.Y. managed resources and curated datasets. J.W., S.Q., and J.P. jointly secured funding for the project, with J.P. additionally overseeing project administration. All authors contributed to the manuscript review and approved the final version.
Corresponding authors
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Wang, J., Fu, Y., Yan, J. et al. CIEM-SCC: cinnabar inscription enhancement method via spectral curve classification. npj Herit. Sci. 13, 416 (2025). https://doi.org/10.1038/s40494-025-01972-9
Received:
Accepted:
Published:
DOI: https://doi.org/10.1038/s40494-025-01972-9