Table 4 Comparison of existing methods and our proposed approach.
From: Graph-enhanced implicit aspect-level sentiment analysis based on multi-prompt fusion
| Method name | Core mechanism/improvement | Advantages | Limitations |
|---|---|---|---|
| Extraction Models (DP-ACOS/Extract-Classify-ACOS/SGTS) | Multi-stage extraction | Intuitive method, stable training, effective for explicit information extraction | Multi-step structure prone to error propagation, relies on explicit annotations, difficult to handle implicit content |
| Seq2Path | Path sequence generation | Suitable for explicit information, clear structure | High miss rate when path clues are absent, difficult to handle implicit quadruples |
| BART-CRN | Revisiting mechanism | Can correct some initial wrong predictions | Implicit quadruples lack annotation, so the revisiting mechanism struggles to perceive them |
| ILO + UAUL | Hard sample optimization + Unlikelihood learning | Reduces repeated errors, focuses on complex instances | Contextual dependencies not modeled; effectiveness limited to within-sentence context |
| DLO + UAUL | Positive/negative likelihood joint optimization + Unlikelihood learning | Better distinguishes correct from incorrect predictions, handles complex sentences well | Still limited in capturing long-distance dependencies |
| Special_Symbols | Special symbols marking aspect/opinion/sentiment + Unlikelihood learning | Explicit markers help the model recognize implicit clues | Symbols are insufficient to express deeper meanings in complex contexts |
| Paraphrase | Text paraphrasing to assist implicit reasoning | Enhances semantic diversity, improves adaptability to varied expressions | Easily introduces noise, may misassociate unmentioned content |
| Proposed Method | Multi-order prompt fusion + Graph Neural Network + Pointer-index generation mechanism | Strong implicit reasoning, accurate long-distance dependency capture, complete structural information | Higher demands on model reasoning capacity, slightly higher training complexity |
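The pointer-index generation mechanism listed for the proposed method can be illustrated with a minimal sketch: instead of generating span text freely, the decoder emits start/end *indices* into the input sentence, so every predicted aspect or opinion span is anchored to actual tokens. This is a simplified, hypothetical illustration of the general idea, not the paper's implementation; the function name and the dot-product scoring are assumptions.

```python
import numpy as np

def pointer_select(H, q_start, q_end):
    """Select a (start, end) token span via pointer scoring.

    H: (T, d) token representations for a T-token sentence.
    q_start, q_end: (d,) decoder query vectors (hypothetical here).
    Each position is scored by a dot product; the argmax index is
    emitted instead of token text, so the output always points back
    into the input sentence rather than hallucinating a span.
    """
    start = int(np.argmax(H @ q_start))
    end_scores = H @ q_end
    end_scores[:start] = -np.inf  # constrain end >= start for a valid span
    end = int(np.argmax(end_scores))
    return start, end

# Toy usage: 6 tokens with 4-dimensional representations.
rng = np.random.default_rng(0)
H = rng.standard_normal((6, 4))
s, e = pointer_select(H, rng.standard_normal(4), rng.standard_normal(4))
```

Because the predicted indices are constrained to the input, this style of decoding avoids the span-hallucination risk that free-text generation methods such as Paraphrase face.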