Abstract
With the rapid development of 3D scanning technologies, high-density point clouds of cultural heritage artifacts such as stone carvings and statues pose significant challenges in storage, processing, and accurate reconstruction. This paper proposes a point cloud simplification method tailored for cultural heritage applications, combining clustering and saliency analysis to preserve intricate surface details critical for archaeological studies. By segmenting point clouds into clusters with normal vector constraints and evaluating saliency through roughness and curvature metrics, our method adaptively retains primary features, including engraved patterns and weathered textures, while simplifying non-feature regions. Experiments on stone carvings from the Northern Song Imperial Mausoleum, Terracotta Warriors, and Stanford datasets demonstrate that the algorithm effectively avoids mesh holes and maintains geometric fidelity, enabling efficient 3D reconstruction for heritage conservation. This work bridges advanced point cloud processing with practical archaeological needs, offering a robust tool for digitizing and analyzing cultural relics with minimal loss of historically significant details.
Introduction
Cultural heritage serves as a vital witness to human civilization, and its digital preservation1 and reconstruction have emerged as a central focus in archaeology, cultural heritage conservation, and digital humanities2. Three-dimensional (3D) scanning technologies3, such as laser scanning and photogrammetry4, provide non-contact, high-fidelity digital archiving solutions for cultural relics through high-precision point cloud data5. However, while high-density point clouds capture intricate surface details of artifacts6—such as stone carving patterns, textures of terracotta warrior armor, and bronze inscriptions—they also introduce challenges like storage redundancy and computational inefficiency. For instance, the point cloud of a stone-carved artifact can reach tens of millions of points, leading to time-consuming 3D reconstruction processes and potential masking of critical historical information by redundant data7. Traditional point cloud simplification methods (e.g., uniform sampling, curvature filtering) effectively reduce data volume but struggle to balance the trade-off between simplification rate and detail preservation. Uniform sampling8 indiscriminately removes points regardless of local geometric significance, often eroding fine-grained features in complex regions. Curvature-based9 approaches tend to oversimplify planar areas and generate holes in low-saliency regions due to abrupt sampling density changes. Such over-simplification risks eroding archaeologically critical features like weathering traces and engraved boundaries, thereby compromising artifact authenticity and research credibility.
Point cloud simplification is a crucial preprocessing step in 3D reconstruction, particularly for cultural heritage artifacts10. Existing point cloud simplification methods can be broadly categorized into four groups: mesh-based, clustering-based, point-based, and deep learning-based approaches.
Common approaches include the uniform mesh method11, which employs a uniform grid (i.e., equidistant spatial partitioning) to downsample points while maintaining structural regularity, and octree downsampling, which hierarchically partitions space into voxels of varying resolutions. Herráez et al. 12 performed a two-stage simplification of the point cloud within the grid. In addition, Lv et al. 13,14 introduced the Approximate Intrinsic Voxel Structure (AIVS), a method that preserves geometric features by approximating the intrinsic geometric properties of surfaces. Their two-stage framework first performs intrinsic resampling to retain curvature-aware features, followed by isotropic resampling to ensure a uniform point distribution. However, voxel-based methods often neglect the fine-grained geometric details critical for cultural relics. In contrast, polygonal mesh-based methods15 simplify the original point cloud by constructing a polygonal mesh and removing redundant faces according to certain rules, but the mesh generation process is very time-consuming16.
Clustering-based methods partition the point cloud into groups of similar points17, from which representative points are selected. Shi et al. 18 utilized the maximum deviation of normal vectors to segment the point cloud into subclusters. Sachdeva et al. 19 employed the k-means algorithm to partition the point cloud and used entropy values to identify and remove low-information clusters. Yang et al. 20 enhanced clustering-based approaches by optimizing fuzzy c-means (FCM) algorithms using a gravity search strategy. While these methods can retain significant structural information, their performance heavily depends on the initial cluster center selection, making them susceptible to noise and local feature loss.
Point-based approaches directly evaluate and retain points with higher importance based on geometric properties. Ji et al. 21 and Chen et al. 22 applied multiple geometric operators to assess point significance, while Xuan et al. 23 proposed a method based on local entropy of normal angles. Zhang et al. 24 introduced a simplification entropy measure to prioritize key points based on geometric features. Although point-based methods provide fine-grained control over simplification, they often struggle to maintain a balance between data reduction and feature preservation. In cases involving highly detailed cultural relics, these methods may inadvertently discard subtle but important surface details, compromising the accuracy of subsequent reconstructions.
Recent advances in deep learning have led to the development of data-driven point cloud simplification techniques. Nezhadarya et al. 25 introduced a hierarchical sampling framework using critical points extracted from a max-pooling operation. Lang et al. 26 proposed SampleNet, which generates reduced point clouds by learning a sampling matrix through feature-aware modules. Li et al. 27 further refined this approach with PCS-Net, which optimizes sampling through a feature-preserving mechanism. While deep learning-based methods demonstrate impressive performance on small datasets, they face challenges in scalability and generalization when applied to large, detail-rich point clouds, such as those of cultural heritage artifacts. Additionally, the lack of large-scale annotated datasets limits their practical applicability.
Although the above methods have achieved good results for some point cloud models, problems such as low accuracy, loss of information, high complexity of the algorithm, and high time cost may occur when dealing with large data volumes and detail-rich models. For example, when simplifying point clouds of highly detailed stone carvings, traditional point cloud simplification algorithms may significantly lose important features. While Arav et al. 28 proposed a saliency measure for natural scenes based on center-surround contrast, their method lacks explicit clustering to isolate structural features. To address these challenges, this paper proposes a novel point cloud simplification algorithm that combines clustering and saliency analysis. By integrating Euclidean clustering with normal vector constraints and utilizing roughness and curvature metrics for saliency evaluation, our approach preserves critical surface details while reducing computational complexity. This method is specifically tailored for the 3D reconstruction of cultural heritage artifacts, ensuring the retention of historically significant features while improving processing efficiency.
This paper presents a novel point cloud simplification algorithm based on clustering and saliency, which we abbreviate as CSS, aiming to balance data reduction and detail preservation in 3D reconstruction of cultural heritage. The main contributions of this work include:
(1) The introduction of a new point cloud simplification algorithm that retains detailed features on complex models.

(2) By combining adaptive region partitioning with hierarchical voxel sampling, the algorithm ensures the uniformity of local regions while maintaining essential surface details.

(3) Experimental results show the effectiveness of the proposed algorithm on point clouds acquired from different sensors and on point cloud models with complex details, such as stone carvings and public point cloud datasets.
The remaining sections of this paper are organized as follows: in Section “Methods”, we introduce the proposed method and explain the specific principles underlying each step. Section “Results” presents the experimental data, the experimental results, and a comparative analysis against other methods, and discusses the advantages and limitations of the algorithm. Finally, in Section “Discussion”, we summarize the features of the proposed method and provide an outlook on future work.
Methods
In this section, we systematically present the proposed method; its flowchart is shown in Fig. 1. The method comprises five core stages: (1) initial simplification: voxel downsampling to reduce the computational load; (2) saliency calculation: a joint roughness and curvature metric of feature importance; (3) point cloud clustering: Euclidean clustering constrained by normal vector angles; (4) adaptive thresholding: cluster-wise saliency thresholds for feature classification; and (5) hierarchical simplification: region-specific voxel sampling.
Point cloud initial simplification
Cultural heritage scan point clouds often contain tens of millions or even hundreds of millions of points. Directly performing the subsequent clustering, segmentation, and saliency calculation on such high-density data imposes a huge storage and computational burden. Initial voxel downsampling removes many redundant points, significantly reducing the point count and improving the running speed and memory use of subsequent algorithms (such as normal estimation and KNN search) without repeatedly computing highly similar neighborhood information. Herráez et al. 12 also use voxel downsampling to reduce the number of points before performing a secondary simplification. We choose a voxel size in the range of 1–2 mm, while the width of the engraved textures and inscriptions on stone carvings is usually several millimeters or more. Downsampling therefore mainly removes nearly overlapping or closely spaced redundant points and does not destroy the overall outline or key features; in the subsequent normal-constrained clustering and saliency evaluation, the detailed point cloud can still be accurately identified and retained. This approach allows us to overcome the difficulties of handling large, complex datasets and facilitates subsequent data analysis.
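As a concrete illustration, the sketch below shows this initial stage with a voxel-grid filter. It assumes the Point Cloud Library (PCL), which the paper does not name (only a C++ implementation is stated), and a placeholder input file; the 2 mm leaf size is one value from the 1–2 mm range stated above.

```cpp
#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>

int main() {
  pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
  pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);

  // "carving.pcd" is a placeholder file name, not from the paper.
  pcl::io::loadPCDFile<pcl::PointXYZ>("carving.pcd", *cloud);

  // Initial simplification: keep one representative point per 2 mm voxel.
  pcl::VoxelGrid<pcl::PointXYZ> grid;
  grid.setInputCloud(cloud);
  grid.setLeafSize(0.002f, 0.002f, 0.002f);  // 2 mm, within the stated 1-2 mm range
  grid.filter(*filtered);
  return 0;
}
```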
Saliency calculation
In this paper, the saliency value of each point is calculated by combining two characteristic parameters, roughness and surface curvature, to reflect its importance.
Roughness, defined as the Euclidean distance between a point and its locally fitted plane, is critical for preserving fine surface irregularities in cultural relics. For example, weathered textures or engraved boundaries exhibit high roughness due to abrupt elevation changes. Combined with curvature, which highlights broader geometric transitions, roughness ensures a holistic representation of both micro- and macro-features.
Roughness Calculation: In this paper, roughness is defined as the distance between each point \({p}_{i}\) and the best-fit plane of its neighborhood, denoted \({R}_{s}\). As shown in Fig. 2, a spherical neighborhood of radius \(r\) centered at the point is created, the equation of the best-fit plane of that neighborhood is computed, and the distance between the point and this plane is taken as the roughness of the point. The specific steps are as follows:
(1) Create a spherical neighborhood of radius \(r\) centered at point \({p}_{i}\) and use a kd-tree to find the indices of, and distances to, the points in the neighborhood.

(2) Check the number of points in the neighborhood; if it is less than 3, skip the point. Otherwise, proceed to step 3.

(3) Calculate the centroid \({p}_{c}\) of the point cloud data in the neighborhood of point \({p}_{i}\) as \({p}_{c}=\left(\frac{1}{k}\mathop{\sum }\nolimits_{i=1}^{k}{x}_{i},\frac{1}{k}\mathop{\sum }\nolimits_{i=1}^{k}{y}_{i},\frac{1}{k}\mathop{\sum }\nolimits_{i=1}^{k}{z}_{i}\right)\), where k is the number of points in the neighborhood and \({x}_{i}\), \({y}_{i}\), \({z}_{i}\) are the coordinates of the points.

(4) Use Eq. (1) to compute the covariance matrix of the neighboring points. Then apply the eigen decomposition of Eq. (2) to obtain the eigenvalues and eigenvectors. The eigenvector corresponding to the smallest eigenvalue gives the normal direction (a, b, c) of the best-fit plane. Using this normal vector and the neighborhood centroid, the coefficients of the local fitted plane equation (Eq. (3)) are determined, with the constant term d computed from the dot product of the normal vector and the centroid.

$$M=\frac{1}{k}\mathop{\sum }\limits_{i=1}^{k}\left({p}_{i}-{p}_{c}\right){\left({p}_{i}-{p}_{c}\right)}^{T}$$ (1)

$$M\cdot {\vec{V}}_{j}={\lambda }_{j}\cdot {\vec{V}}_{j},\quad j\in \left\{0,1,2\right\}$$ (2)

$${aX}+{bY}+{cZ}+d=0$$ (3)

(5) Finally, the roughness \({R}_{s}\) at this point is calculated using Eq. (4):

$${R}_{s}=\frac{\left|a{x}_{i}+b{y}_{i}+c{z}_{i}+d\right|}{\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}}$$ (4)
Point cloud curvature is a property that describes the local geometry around each point and characterizes the degree of local surface variation. Curvature varies little in flat or smooth surface regions and strongly at irregularities, depressions, boundaries, and vertices29. As shown in Fig. 3, curvature is often used as a metric of each point's importance during simplification, to better preserve the geometric features and structure of the original point cloud30. However, directly using curvature to simplify point clouds often leaves data holes in regions with low curvature. Therefore, in this work, curvature is used as one of the parameters for calculating the saliency of each point.
The surface curvature is calculated as follows: first, the covariance matrix is constructed using Eq. (1); then eigenvalue decomposition is performed on the covariance matrix M as in Eq. (2) to obtain its eigenvalues. If the eigenvalues satisfy \({\lambda }_{0}\le {\lambda }_{1}\le {\lambda }_{2}\), the surface curvature of the point is:

$$C=\frac{{\lambda }_{0}}{{\lambda }_{0}+{\lambda }_{1}+{\lambda }_{2}}$$ (5)
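The following minimal sketch brings Eqs. (1)–(5) together for a single point. It assumes the Eigen library (the paper states only a C++ implementation) and a neighborhood already gathered by the radius search of step (1); Eigen's solver returns eigenvalues in ascending order, matching \({\lambda }_{0}\le {\lambda }_{1}\le {\lambda }_{2}\).

```cpp
#include <Eigen/Dense>
#include <vector>
#include <cmath>

// Roughness (Eq. 4) and surface curvature (Eq. 5) of point p, computed from
// the covariance matrix of its spherical neighborhood (Eqs. 1-2).
void roughnessAndCurvature(const Eigen::Vector3d& p,
                           const std::vector<Eigen::Vector3d>& neighbors,
                           double& roughness, double& curvature) {
  const std::size_t k = neighbors.size();
  if (k < 3) { roughness = curvature = 0.0; return; }  // step (2): skip sparse points

  // Centroid p_c of the neighborhood (step 3).
  Eigen::Vector3d pc = Eigen::Vector3d::Zero();
  for (const auto& q : neighbors) pc += q;
  pc /= static_cast<double>(k);

  // Covariance matrix M (Eq. 1).
  Eigen::Matrix3d M = Eigen::Matrix3d::Zero();
  for (const auto& q : neighbors) {
    Eigen::Vector3d diff = q - pc;
    M += diff * diff.transpose();
  }
  M /= static_cast<double>(k);

  // Eigen decomposition (Eq. 2); eigenvalues come back in ascending order.
  Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es(M);
  Eigen::Vector3d lambda = es.eigenvalues();     // lambda0 <= lambda1 <= lambda2
  Eigen::Vector3d n = es.eigenvectors().col(0);  // unit normal of best-fit plane

  // Plane aX + bY + cZ + d = 0 through the centroid (Eq. 3);
  // roughness is the point-to-plane distance (Eq. 4), with |n| = 1.
  double d = -n.dot(pc);
  roughness = std::abs(n.dot(p) + d);

  // Surface curvature (Eq. 5).
  double sum = lambda.sum();
  curvature = (sum > 0.0) ? lambda(0) / sum : 0.0;
}
```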
Saliency value calculation: This paper combines roughness and curvature to compute the saliency value of each point. To avoid the influence of dimension (units) on the saliency value, the two characteristic parameters are standardized. Equations (6) and (7) make the two parameters dimensionless and map their values to [0, 1]:

$${R}_{i}^{* }=\frac{{R}_{i}-{R}_{\min }}{{R}_{\max }-{R}_{\min }}$$ (6)

$${C}_{i}^{* }=\frac{{C}_{i}-{C}_{\min }}{{C}_{\max }-{C}_{\min }}$$ (7)
Where \({R}_{i}^{* }\) and \({C}_{i}^{* }\) are the standardized roughness and curvature, \({R}_{i}\) and \({C}_{i}\) are the unstandardized roughness and curvature, \({R}_{\max }\) and \({C}_{\max }\) correspond to the maximum roughness value and curvature value, respectively, and \({R}_{\min }\) and \({C}_{\min }\) are the corresponding minimum values.
Finally, the saliency value of each point is calculated from the normalized roughness and surface curvature, as defined in Eq. (8):

$${S}_{i}=a{R}_{i}^{* }+b{C}_{i}^{* }$$ (8)
Where a, b are the weight adjustment coefficients and \(a+b=1\).
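A compact sketch of Eqs. (6)–(8), assuming per-point roughness and curvature have already been computed; the default a = b = 0.5 follows the experimental setting reported later in the paper.

```cpp
#include <algorithm>
#include <vector>

// Min-max normalization (Eqs. 6-7) followed by the weighted combination
// of Eq. (8); a + b = 1 as required by the saliency definition.
std::vector<double> saliency(const std::vector<double>& R,
                             const std::vector<double>& C,
                             double a = 0.5, double b = 0.5) {
  if (R.empty() || R.size() != C.size()) return {};

  auto [rMinIt, rMaxIt] = std::minmax_element(R.begin(), R.end());
  auto [cMinIt, cMaxIt] = std::minmax_element(C.begin(), C.end());
  const double rMin = *rMinIt, rRange = *rMaxIt - rMin;
  const double cMin = *cMinIt, cRange = *cMaxIt - cMin;

  std::vector<double> S(R.size());
  for (std::size_t i = 0; i < R.size(); ++i) {
    double rStar = (rRange > 0) ? (R[i] - rMin) / rRange : 0.0;  // Eq. (6)
    double cStar = (cRange > 0) ? (C[i] - cMin) / cRange : 0.0;  // Eq. (7)
    S[i] = a * rStar + b * cStar;                                // Eq. (8)
  }
  return S;
}
```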
Point cloud clustering
For an object, the complexity of its different parts differs significantly, so directly simplifying the whole point cloud can easily lose many critical features. To overcome this problem, this study uses a clustering method to divide the original point cloud into multiple clusters.
Euclidean clustering31 of point clouds is a commonly used method for segmenting point clouds, primarily dividing them into different clusters by calculating the Euclidean distance between points. However, in some cases, solely relying on Euclidean distance may not accurately segment the point cloud. Euclidean clustering based on constraints of normal vector angles adds considerations for point cloud normal vectors on top of traditional Euclidean clustering, aiming to better distinguish points with different normal vector directions.
Normal vector angle constraint32 is a similarity measurement method based on the direction of normal vectors. To determine whether two neighboring points belong to the same planar or structural cluster, we compute the angle between their respective normal vectors. Specifically, for a point p and a neighboring point q, their normals \({\vec{n}}_{p}\) and \({\vec{n}}_{q}\) are both estimated using the same neighborhood parameters (radius r) via PCA. The angle \(\theta\) between \({\vec{n}}_{p}\) and \({\vec{n}}_{q}\) is calculated as Eq. (9):

$$\theta =\arccos \left(\frac{{\vec{n}}_{p}\cdot {\vec{n}}_{q}}{\left|{\vec{n}}_{p}\right|\left|{\vec{n}}_{q}\right|}\right)$$ (9)

A smaller angle \(\theta\) indicates higher similarity, implying that p and q lie on the same local surface or cluster.
Figure 4 shows the flowchart of Euclidean clustering based on normal vector angle constraints. Specifically, for a point P in space, its k-nearest neighbors are searched, and the angle between the normal vectors of this point and the point within its neighborhood is calculated. If the angle is smaller than a threshold, the point is clustered into the set Q. If the number of points in Q no longer increases, the entire clustering process ends; otherwise, points other than P need to be selected from set Q, and the process is repeated until the number of points in Q no longer increases.
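The sketch below outlines this region-growing loop under the normal-angle constraint. It is an assumption-laden outline rather than the authors' implementation: it uses PCL's kd-tree for the neighbor search, takes precomputed unit normals as input, and applies an absolute value to the cosine so that inconsistently oriented normals are still treated as similar.

```cpp
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <Eigen/Dense>
#include <cmath>
#include <queue>
#include <vector>

// Euclidean clustering with a normal-angle constraint (cf. Fig. 4):
// a neighbor within `radius` joins the growing set Q only if the angle
// between its normal and the current point's normal (Eq. 9) is below
// `angleThresh` (radians). normals[i] is assumed to be the unit normal
// of cloud->points[i], estimated beforehand by PCA over radius r.
std::vector<std::vector<int>> clusterWithNormals(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud,
    const std::vector<Eigen::Vector3f>& normals,
    float radius, float angleThresh) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(cloud);

  std::vector<bool> visited(cloud->size(), false);
  std::vector<std::vector<int>> clusters;

  for (int seed = 0; seed < static_cast<int>(cloud->size()); ++seed) {
    if (visited[seed]) continue;
    std::vector<int> cluster;
    std::queue<int> frontier;
    frontier.push(seed);
    visited[seed] = true;

    while (!frontier.empty()) {  // grow Q until it stops increasing
      int idx = frontier.front(); frontier.pop();
      cluster.push_back(idx);

      std::vector<int> nnIdx;
      std::vector<float> nnSqrDist;
      tree.radiusSearch(cloud->points[idx], radius, nnIdx, nnSqrDist);
      for (int j : nnIdx) {
        if (visited[j]) continue;
        // abs() guards against inconsistently oriented normals.
        float c = std::fabs(normals[idx].dot(normals[j]));
        if (std::acos(std::min(1.0f, c)) < angleThresh) {
          visited[j] = true;
          frontier.push(j);
        }
      }
    }
    clusters.push_back(cluster);
  }
  return clusters;
}
```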
Adaptive threshold setting and region division
The feature region mainly contains detailed characteristics of objects, encompassing complex geometric designs, while the non-feature region primarily consists of the planar parts of objects. The above process segments the original point cloud into multiple point cloud clusters. To determine the feature and non-feature regions of each point cloud cluster, the average saliency value of each point cloud cluster is calculated and used as the threshold for partitioning the region of that point cloud cluster.
If a single threshold were set for the entire point cloud, some feature points would inevitably be classified into non-feature regions. The strategy described above instead adaptively sets a different threshold for each part of the point cloud, sharpening the distinction between feature and non-feature regions and avoiding the misclassification of feature points into non-feature regions.
Each point cloud cluster can thus be divided into feature and non-feature regions based on its average saliency value. However, because the point clouds of stone-carved cultural relics are obtained through high-precision scanning and are very dense, the feature region may still contain a large number of points after this partitioning. To subdivide the feature region further, we compute the average saliency value of the feature region again: points with saliency above this second mean form the primary feature region, and points below it but above the original cluster mean form the secondary feature region. This two-level adaptive threshold strategy lets the algorithm handle cultural relic point clouds with high-density fine structures flexibly, while avoiding the misjudgments that a single fixed threshold applied over a large range could cause. In this way, each point cloud cluster is subdivided into primary feature regions, secondary feature regions, and non-feature regions. Figure 5 shows the result of partitioning one of the point cloud clusters.
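The two-level thresholding can be written as a short routine. The sketch below assumes per-point saliency values and per-cluster index lists produced by the previous stages; the names are hypothetical.

```cpp
#include <vector>

// Per-cluster region division with the two-level adaptive threshold:
// the cluster-mean saliency separates feature from non-feature points,
// and the mean over the feature subset splits primary from secondary.
struct Regions { std::vector<int> primary, secondary, nonFeature; };

Regions divideCluster(const std::vector<int>& clusterIdx,
                      const std::vector<double>& saliency) {
  Regions r;
  if (clusterIdx.empty()) return r;

  // First-level threshold: mean saliency of the whole cluster.
  double mean1 = 0.0;
  for (int i : clusterIdx) mean1 += saliency[i];
  mean1 /= clusterIdx.size();

  std::vector<int> feature;
  for (int i : clusterIdx)
    (saliency[i] > mean1 ? feature : r.nonFeature).push_back(i);

  // Second-level threshold: mean saliency of the feature region only.
  double mean2 = 0.0;
  for (int i : feature) mean2 += saliency[i];
  if (!feature.empty()) mean2 /= feature.size();

  for (int i : feature)
    (saliency[i] > mean2 ? r.primary : r.secondary).push_back(i);
  return r;
}
```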
Hierarchical simplification
The core idea of the region-partitioning simplification strategy is to simplify the primary feature regions, secondary feature regions, and non-feature regions obtained above separately. In this process, it is essential to preserve the local uniformity of the simplified point cloud to obtain better model reconstruction results.
By adopting a voxel downsampling scheme, potential hole issues in the point cloud can effectively be avoided. Specifically, different voxel sizes are set for each region to control the sampling quantity. Three different voxels are set for sampling in the primary feature region, secondary feature region, and non-feature region, respectively. Finally, the sampling results from the three regions are merged to obtain the simplified point cloud. Figure 6 shows the partitioning and simplification results of a point cloud cluster, demonstrating a significant improvement achieved by the region partitioning simplification strategy. It is evident that while retaining the main feature points, the point cloud data volume has been effectively reduced.
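A sketch of this merging step, again assuming PCL; the three leaf sizes are illustrative placeholders (the paper does not report the exact values), chosen only to show that finer voxels retain more points in more salient regions.

```cpp
#include <pcl/filters/voxel_grid.h>
#include <pcl/point_types.h>

// Region-specific voxel sampling: each region is downsampled with its own
// leaf size, then the three results are merged into the simplified cloud.
pcl::PointCloud<pcl::PointXYZ>::Ptr simplifyRegions(
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& primary,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& secondary,
    const pcl::PointCloud<pcl::PointXYZ>::Ptr& nonFeature) {
  auto sample = [](const pcl::PointCloud<pcl::PointXYZ>::Ptr& in, float leaf) {
    pcl::PointCloud<pcl::PointXYZ>::Ptr out(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::VoxelGrid<pcl::PointXYZ> grid;
    grid.setInputCloud(in);
    grid.setLeafSize(leaf, leaf, leaf);
    grid.filter(*out);
    return out;
  };

  pcl::PointCloud<pcl::PointXYZ>::Ptr result(new pcl::PointCloud<pcl::PointXYZ>);
  *result += *sample(primary, 0.002f);     // finest voxels: primary features
  *result += *sample(secondary, 0.004f);   // medium voxels: secondary features
  *result += *sample(nonFeature, 0.008f);  // coarsest voxels: non-feature region
  return result;
}
```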
Results
Experiment settings and dataset
To test the applicability of the algorithm, we acquired different types of point clouds using different devices. Specifically, we acquired point clouds of several stone carvings in the imperial tombs of the Northern Song Dynasty (civil officials, auspicious poultry, stone elephant, and stone horse) using the Structure from Motion (SfM) technique33; Fig. 7 shows the appearance of the selected stone carvings. Point clouds of terracotta warriors and stelae were acquired with a structured light scanner, and point clouds of stone carvings (civil officials, auspicious poultry, and stone horse) were acquired with a terrestrial laser scanner (FARO Focus Premium 350). We also selected Dragon and Lucy from the Stanford public dataset as experimental data. Table 1 lists the point cloud data sources used for the experiments.
For the software platform, the entire algorithm was implemented in C++ under Windows 10. The hardware configuration was an AMD R7-5800H CPU with 16 GB RAM.
Experimental parameter setting
Two parameters must be set manually: the neighborhood search radius r and the weights in the saliency calculation. Both are held fixed across all point cloud models simplified in this paper and do not need to be re-tuned.
The radius r in the sphere neighborhood is a key parameter in the KNN (K-nearest neighbors) algorithm, directly determining the number of neighboring points and affecting the calculation of roughness and curvature. If the search radius is too large, the model may underfit and fail to capture the complex structure of the data, while also increasing computational load. Conversely, selecting a smaller radius makes the model more sensitive, potentially leading to overfitting and significant discrepancies between the fitted surface and the actual surface. Hence, choosing an appropriate search radius is crucial.
Since both roughness and curvature calculations use the same neighborhood, this study opted to experiment with curvature to find an appropriate search radius. The experiment focused on a stone carving of a civil official from the Northern Song Dynasty Imperial Mausoleum, with approximately 760,000 points. Radius values were set at 0.01 m, 0.02 m, 0.04 m, and 0.08 m, and the curvature values were used as gradients for rendering. The distribution of different colors was observed to determine if the radius was suitable.
Figure 8 illustrates the curvature rendering results obtained under different radii, where blue indicates lower curvature values and red indicates higher ones. From the figure, it can be observed that as the radius increases, the differences in curvature values gradually decrease. At radii of 0.01 m, 0.04 m, and 0.08 m, the obtained curvatures fail to capture the details of the model effectively, whereas at a radius of 0.02 m the curvature captures the model's details well. Therefore, the curvature rendering is best when the radius is set to 0.02 m, and the calculation is also fast at this setting.
Equation (8) is the core formula for saliency calculation, with two parameters \(a\) and \(b\); different settings affect the simplification result. We set different values of \(a\) and \(b\) and compare the resulting primary feature regions of the model; panels a–e in Fig. 9 correspond to the different parameter settings. From the visual results, the feature regions of the model in Fig. 9a, b are severely incomplete, while Fig. 9d, e contain many non-feature points; Fig. 9c shows the best result. However, different models may require different parameter settings, and to avoid a complicated tuning process we set \(a\) and \(b\) to 0.5 in all experiments in this paper.
Experimental results
To demonstrate the validity of the proposed method, we introduce four other methods for comparison: Poisson Disk Sampling (PDS)34 and Monte Carlo Sampling (MS), implemented in MeshLab 2022, and Curvature Sampling (CS) and Voxel Sampling (VS), implemented in Geomagic 2021. The clustering- and saliency-based simplification method proposed in this paper is abbreviated as CSS. The experimental results of some models are shown in the Supplementary Material. The comparison experiments require the same simplification rate, defined35 as:

$$\eta =\left(1-\frac{{N}_{r}}{{N}_{s}}\right)\times 100 \%$$ (10)

Where \({N}_{r}\) and \({N}_{s}\) are the numbers of points in the simplified and original point clouds, respectively.
Simplification results of the SfM point cloud: Table 2 shows the details of the point clouds based on SfM. Figure 10 shows the simplification results of the Civil officials point cloud at the same or approximate simplification rates for the five methods. Additional experimental results can be found in Figures A.1 to A.3 in the Supplementary Material. Even at a very high simplification rate, the proposed method shows a clear difference between feature and non-feature regions: in regions with features such as protrusions and depressions, it retains far more points than in non-feature regions, while still achieving a good balance between highlighting feature regions and maintaining the integrity of the entire point cloud. In contrast, the curvature-based algorithm retains too many points in the feature regions, creating holes and an imbalance between feature-region saliency and overall integrity. Although Poisson disk sampling, Monte Carlo sampling, and voxel sampling do not produce holes in the point cloud, they show no clear distinction between feature and non-feature regions and cannot preserve the prominence of feature regions.
Simplification results of structured light point cloud: Table 3 shows the details of the point clouds based on the structured light. Figure 11 shows the simplification results of the No.1 Terra-cotta warriors point cloud with the same or approximate simplification rates for these five methods. Additional experimental results can be found in Figures B.1 to B.3 in the Supplementary Material. It can be clearly seen that the simplified method proposed in this work has significant advantages in terms of feature retention. A large number of point clouds are preserved, whether in the hat, texture, or eyes, while many features are lost in the other methods. Although the curvature-based method can also retain more feature points, its visualization is much less effective than the method in this work.
Simplification results of terrestrial laser scanning point cloud: Table 4 shows the details of the point clouds based on terrestrial laser scanner (TLS). There are obvious differences between TLS point clouds, SfM point clouds, and structured light point clouds. As shown in Fig. 12, TLS point cloud has non-uniform density and point cloud holes, while the SfM point cloud and the structured light point cloud do not have these problems. The simplified results of these five methods for the Civil official point cloud from TLS are shown in Fig. 13. Additional experimental results are presented in Supplementary Material Figs. C.1 and C.2. The method in this paper still has significant advantages even in the case of non-uniform density and point cloud holes.
Simplified results for other data: Fig. 14 demonstrates the simplification results of our method on other datasets, proving the robustness of our approach.
Visual and quantitative evaluation
Visual evaluation: To visually compare the simplified results, we used Magic3D software to generate surface wraps of the simplified models. For the simplified results obtained by different methods on the same model, the mesh is reconstructed with the same parameter settings to ensure a fair comparison (Figs. 15–17). The blue areas represent the surface of the wrapped model, while the yellow areas represent holes.
The Civil officials and Terra-cotta warriors both have rich textures and grooves, displaying complex detailed features. Comparing the simplified models in Figs. 15–17 shows the effects of the different methods. Both the MS and CS methods leave holes of varying degrees in the simplified models, with the CS method producing the most obvious holes. The PDS and VS methods do not produce holes, but the contours of the detailed parts are not clear enough. Compared with the other methods, CSS retains more points in the detailed parts and thus shows clear contours. This means that, in the modeling results, the proposed method handles rich detail features such as textures and grooves better. Additional experimental results are presented in Supplementary Material Figs. A.4 to A.6, B.4 to B.6, C.3 to C.4, and Fig. D.2.
Quantitative indicators: In current studies on point cloud simplification, accuracy is usually evaluated by comparing mesh models. In this work, we also use mesh models to compare the error between the simplified and original point clouds. The point clouds are wrapped with the same parameters using Magic3D software. Then the surface area, maximum error, average geometric error, RMS, and Hausdorff distance are used as evaluation metrics, computed with Geomagic Studio and the Metro tool. The details are described below.
Surface Area Deviation: We compute the total surface area of the mesh reconstructed from the simplified point cloud and compare it with that of the original. A smaller area deviation indicates that the overall geometry has been better preserved:

$$\varDelta A=\left|{A}^{* }-A\right|$$ (11)

Where A is the surface area of the original point cloud, \({A}^{* }\) is the surface area of the simplified point cloud, and \(\varDelta A\) is the area change.
Maximum Error (Max): This measures the largest point-wise deviation between the simplified and original point clouds. It reflects the worst-case distortion caused by simplification. Average Geometric Error (Avg): The mean of all point-wise distances between the simplified and original models. It provides a general indication of overall accuracy. For each point p, the geometric error \(d\left(p,{S}^{{\prime} }\right)\) is defined as the Euclidean distance between the sampled point p and its projection onto the simplified surface \({S}^{{\prime} }\). Equations (12) and (13) give the maximum error and the average geometric error:

$${E}_{\max }=\mathop{\max }\limits_{p\in S}d\left(p,{S}^{{\prime} }\right)$$ (12)

$${E}_{avg}=\frac{1}{\left|S\right|}\mathop{\sum }\limits_{p\in S}d\left(p,{S}^{{\prime} }\right)$$ (13)
Where S is the original point cloud surface, and \({S}^{{\prime} }\) is the simplified point cloud surface.
Hausdorff Distance: This is a global metric that quantifies the maximum geometric discrepancy between the surfaces of the simplified and original models. The approximation error between two meshes can be defined as the distance between the corresponding sections of the mesh28. Given a point p and a surface S, the distance \(e\left(p,S\right)\) is defined as Eq. (14), where \(d\left(\cdot \right)\) is the Euclidean distance between two points. The one-sided distance between two surfaces \({S}_{1}\) and \({S}_{2}\) is defined as Eq. (15), and the Hausdorff distance is the maximum of \(E\left({S}_{1},{S}_{2}\right)\) and \(E\left({S}_{2},{S}_{1}\right)\):

$$e\left(p,S\right)=\mathop{\min }\limits_{{p}^{{\prime} }\in S}d\left(p,{p}^{{\prime} }\right)$$ (14)

$$E\left({S}_{1},{S}_{2}\right)=\mathop{\max }\limits_{p\in {S}_{1}}e\left(p,{S}_{2}\right)$$ (15)
RMS: The RMS of triangle edge lengths is a concise indicator of mesh balance. For well-balanced point sets, the areas and edge lengths of the generated mesh should be distributed within a small range of values, while the opposite holds for unbalanced point sets; therefore, we use the root mean square error (RMS) of the triangle edge lengths to evaluate the imbalance of the simplified point surface19. The RMS is defined as Eq. (16):

$${RMS}=\sqrt{\frac{1}{n}\mathop{\sum }\limits_{i=1}^{n}{\left({l}_{i}-\bar{l}\right)}^{2}}$$ (16)

Where n is the number of triangle edges, \({l}_{i}\) is the length of the i-th edge, and \(\bar{l}\) is the average edge length.
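The paper computes these metrics on wrapped meshes with Geomagic Studio and the Metro tool; as a rough, point-based approximation of the same quantities, the sketch below evaluates the one-sided distances of Eqs. (12)–(15) between two point clouds with a PCL kd-tree. Running it in both directions and taking the larger maximum yields the symmetric Hausdorff distance.

```cpp
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_types.h>
#include <algorithm>
#include <cmath>
#include <vector>

// One-sided distances from cloud A to cloud B via nearest-neighbor search:
// the maximum is the one-sided term E(A, B) (Eq. 15, with e(p, B) per Eq. 14),
// and the mean approximates the average geometric error (Eq. 13).
void oneSidedErrors(const pcl::PointCloud<pcl::PointXYZ>::Ptr& A,
                    const pcl::PointCloud<pcl::PointXYZ>::Ptr& B,
                    double& maxErr, double& avgErr) {
  pcl::KdTreeFLANN<pcl::PointXYZ> tree;
  tree.setInputCloud(B);

  maxErr = 0.0;
  avgErr = 0.0;
  std::vector<int> idx(1);
  std::vector<float> sqrDist(1);
  for (const auto& p : A->points) {
    // e(p, B): distance to the nearest point of B (Eq. 14).
    tree.nearestKSearch(p, 1, idx, sqrDist);
    double d = std::sqrt(sqrDist[0]);
    maxErr = std::max(maxErr, d);
    avgErr += d;
  }
  if (!A->points.empty()) avgErr /= A->points.size();
}
```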
Quantitative evaluation results: For quantitative evaluation, we used the surface area, maximum error, and average geometric error as metrics, and evaluated the imbalance of the simplified point surface using the RMS of the triangle edge lengths. Since the original point cloud data of some models are too large, we use the model generated from the initially simplified point cloud as the reference model.
To analyze the differences between the simplified models, the quantitative metrics mentioned earlier were calculated. Figure 18 shows the amount of area deviation for different methods at the same or approximate simplification rates. The method in this paper is closest to the original model area compared to the other methods, regardless of whether it is the stone carving model or the model in the Stanford dataset.
Figures 19, 20, and 21 show the histograms of maximum error, average geometric error, and RMS, respectively. As the graphs show, the maximum error, average geometric error, and RMS of the proposed method are the smallest among all compared algorithms, demonstrating its strong performance in point cloud simplification. The histogram of Hausdorff distance in Fig. 22 shows that the Poisson disk method also achieves good results, but overall the proposed method performs better.
Discussion
This paper introduces a new point cloud simplification method specifically for 3D reconstruction of cultural relics. The core innovation lies in the combination of normal vector constrained Euclidean clustering and saliency analysis based on comprehensive roughness and curvature metrics. The method adaptively identifies the main and secondary features in each cluster. A hierarchical voxel sampling strategy is then applied to significantly reduce the amount of data while retaining fine surface details such as incisions and weathering textures that are crucial in archaeological research. The effectiveness and superiority of the proposed method are confirmed by experiments and evaluations on different point cloud data.
Future work will focus on enhancing the algorithm’s efficiency for even larger datasets, improving its robustness against noise, and exploring the integration of additional geometric descriptors to further refine the saliency evaluation process. In addition, further research will be conducted on the basis of the feature regions extracted by the method in this paper to achieve the generation of line drawings of cultural relics.
Data availability
Part of the data supporting the findings of this study is available from Zhengzhou University; however, access to these data is restricted. The data were used under license for the current study and are not publicly accessible. Interested researchers may request the data from the authors, subject to reasonable requests and approval from Zhengzhou University. The remaining datasets generated during the study are available in the Stanford 3D Scanning repository, [https://graphics.stanford.edu/data/3Dscanrep/].
Code availability
The source code developed and utilized in this study is not publicly available at this time. However, the authors plan to release the complete codebase in a public repository upon final publication. Researchers interested in accessing the code prior to its public release may submit a reasonable request to the corresponding author. All requests will be reviewed, and access will be granted subject to any necessary institutional or licensing approvals.
References
Cameron, F. R. The future of digital data, heritage and curation: in a more-than-human world. (Routledge, 2021).
Brusaporci, S. Digital Innovations in Architectural Heritage Conservation: Emerging Research and Opportunities (IGI Global, 2017). https://books.google.com/books?id=iG5xDgAAQBAJ.
Georgopoulos, A. Data Acquisition for the Geometric Documentation of Cultural Heritage. In: Ioannides, M., Magnenat-Thalmann, N., Papagiannakis, G. (eds) Mixed Reality and Gamification for Cultural Heritage. (Springer, 2017).
Shao, J. et al. Automated markerless registration of point clouds from TLS and structured light scanner for heritage documentation. J. Cult. Herit. 35, 16–24 (2019).
Spring, A. P. History of laser scanning, part 2: the later phase of industrial and heritage applications. Photogrammetric Eng. Remote Sens. 86, 479–501 (2020).
Sulzer, R. et al. A survey and benchmark of automatic surface reconstruction from point clouds. IEEE Trans. Pattern Anal. Mach. Intell. (2024).
Luo, H. et al. Large-scale 3d reconstruction from multi-view imagery: a comprehensive review. Remote Sens. 16, 773 (2024).
Martin, R., Stroud, I. & Marshall, A. Data reduction for reverse engineering. RECCAD, Deliverable Doc. 1, 111 (1997).
Kim, S., Kim, C. & Levin, D. Surface simplification using a discrete curvature norm. Comput. Graph. 26, 657–663 (2002).
Yang, S., Hou, M. & Li, S. Three-dimensional point cloud semantic segmentation for cultural heritage: a comprehensive review. Remote Sens. 15, 548 (2023).
Chen, G. et al. Automatic schelling point detection from meshes. IEEE Trans. Vis. Comput. Graph. 29, 2926–2939 (2022).
Herráez, J. et al. Optimal modelling of buildings through simultaneous automatic simplifications of point clouds obtained with a laser scanner. Measurement 93, 243–251 (2016).
Lv, C., Lin, W. & Zhao, B. Intrinsic and isotropic resampling for 3d point clouds. IEEE Trans. Pattern Anal. Mach. Intell. 45, 3274–3291 (2022).
Lv, C., Lin, W. & Zhao, B. Approximate intrinsic voxel structure for point cloud simplification. IEEE Trans. Image Process. 30, 7241–7255 (2021).
Zhou, G., Yuan, S. & Luo, S. Mesh simplification algorithm based on the quadratic error metric and triangle collapse. IEEE Access 8, 196341–196350 (2020).
Li, M. & Nan, L. Feature-preserving 3D mesh simplification for urban buildings. ISPRS J. Photogramm. Remote Sens. 173, 135–150 (2021).
Aljumaily, H., Laefer, D. F. & Cuadra, D. Urban point cloud mining based on density clustering and MapReduce. J. Comput. Civ. Eng. 31, 04017021 (2017).
Shi, B., Liang, J. & Liu, Q. Adaptive simplification of point cloud using k-means clustering. Comput. Aided Des. 43, 910–922 (2011).
Sachdeva, I. et al. Computational AI models in VAT photopolymerization: a review, current trends, open issues, and future opportunities. Neural Comput. Appl. 34, 17207–17229 (2022).
Yang, Y., Li, M. & Ma, X. A point cloud simplification method based on modified fuzzy C-means clustering algorithm with feature information reserved. Math. Probl. Eng. 2020, 5713137 (2020).
Ji, C. et al. A novel simplification method for 3D geometric point cloud based on the importance of point. IEEE Access 7, 129029–129042 (2019).
Chen, H. et al. Point cloud simplification for the boundary preservation based on extracted four features. Displays 78, 102414 (2023).
Xuan, W. et al. A new progressive simplification method for point cloud using local entropy of normal angle. J. Indian Soc. Remote Sens. 46, 581–589 (2018).
Zhang, K. et al. Feature-preserved point cloud simplification based on natural quadric shape models. Appl. Sci. 9, 2130 (2019).
Nezhadarya, E. et al. Adaptive hierarchical down-sampling for point cloud classification. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. (IEEE, 2020).
Lang, I., Manor, A. & Avidan, S. SampleNet: differentiable point cloud sampling. in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (IEEE, 2020).
Li, Y. et al. Deep point cloud simplification for high-quality surface reconstruction. Preprint at https://arxiv.org/abs/2203.09088 (2022).
Arav, R., Filin, S. & Pfeifer, N. Content-aware point cloud simplification of natural scenes. IEEE Trans. Geosci. Remote Sens. 60, 1–12 (2022).
Gatzke, T. M. & Grimm, C. M. Comparing Features of Three-Dimensional Object Models Using Registration Based on Surface Curvature Signatures. Report No. WUCS-2007-7 (Washington University in St. Louis, 2007). https://openscholarship.wustl.edu/cse_research/921.
Nguyen, V.-S., Bac, A. & Daniel, M. Simplification of 3D point clouds sampled from elevation surfaces. in Proceedings of the 21st International Conference on Computer Graphics, Visualization and Computer Vision (WSCG 2013) 60–69 (Plzeň, Czech Republic, 2013). https://hal.science/hal-01311430.
Liu, H. et al. Point cloud segmentation based on Euclidean clustering and multi-plane extraction in rugged field. Meas. Sci. Technol. 32, 095106 (2021).
Liu, M. Robotic online path planning on point cloud. IEEE Trans. Cybern. 46, 1217–1228 (2015).
Nyimbili, P. H. et al. Structure from motion (SfM)-approaches and applications. in Proceedings of the international scientific conference on applied sciences, Antalya, Turkey. (2016).
Corsini, M., Cignoni, P. & Scopigno, R. Efficient and flexible sampling with blue noise properties of triangular meshes. IEEE Trans. Vis. Comput. Graph. 18, 914–924 (2012).
Wang, G. et al. Point cloud simplification algorithm based on the feature of adaptive curvature entropy. Meas. Sci. Technol. 32, 065004 (2021).
Acknowledgements
The work is supported by the National Natural Science Foundation of China (No. 42241759, No. 42001405), Key R & D and promotion project of Henan Province (CN) (No. 232102240017), the Natural Science Foundation of Henan Province (CN) (No. 242300420212), China Postdoctoral Science Foundation (No. 2024M752938), Archaeological Innovation and Enhancement Plan 2024 of the Archaeological Innovation Center of Zhengzhou University.
Author information
Authors and Affiliations
Contributions
Jian Li: Conceptualization, Investigation, Writing—review and editing. Chenyang Peng: Methodology, Software, Data processing, Writing—original draft. Wanfa Gu: Resources, Supervision. Guohe Han: Resources, Revision. Jin Zhu: Revision, Supervision. Yiwen Tao: Revision, Software. Hao Cui: Methodology, Supervision, Writing—review and editing. Xiaoqian Jin: Resources.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary information
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Li, J., Peng, C., Gu, W. et al. A point cloud simplification method using clustering and saliency for cultural heritage reconstruction. npj Herit. Sci. 13, 445 (2025). https://doi.org/10.1038/s40494-025-02016-y