Table 9 The importance of the three parts of the background attention module (BAM), evaluated using mIoU (%) on the SAA dataset

From: Learning discriminative universal background knowledge for few-shot point cloud semantic segmentation of architectural cultural heritage

| Different combinations | S0    | S1    | Mean            |
|------------------------|-------|-------|-----------------|
| BAM                    | 71.04 | 66.29 | 68.66           |
| Only BMM               | 58.72 | 60.88 | 59.80 (−8.86)   |
| Only BFM               | 56.34 | 59.21 | 57.77 (−10.89)  |
| Only BMM + ABL         | 68.16 | 63.98 | 66.07 (−2.59)   |
| Only BMM + BFM         | 54.87 | 58.02 | 56.44 (−12.22)  |

“BMM” denotes the background modeling module, “ABL” the adaptive background loss function, and “BFM” the background filtering module.