Sequential translation-based multimodal sentiment analysis under uncertain missing modalities
  • Article
  • Open access
  • Published: 07 May 2026


  • Yan Hai¹,
  • Shanqi Lu¹,
  • Zhizhong Liu²,
  • Ling Shang³,
  • Hongxiang Sun² &
  • Jing Wang¹

Scientific Reports (2026)

We are providing an unedited version of this manuscript to give early access to its findings. Before final publication, the manuscript will undergo further editing. Please note there may be errors present which affect the content, and all legal disclaimers apply.

Subjects

  • Engineering
  • Mathematics and computing

Abstract

Multimodal Sentiment Analysis (MSA) aims to fuse information from multiple modalities to achieve precise sentiment classification. Recently, uncertain missing modalities have emerged as a new challenge in MSA. Previous studies address the problem by building information interactions over modality pairs, relying on the remaining paired modalities to compensate for the missing information; without guidance from the third modality, however, such pairwise representations struggle to reconstruct the true cross-modal semantics. In addition, existing approaches underexploit the text modality and tend to have high model complexity. To tackle these issues, we propose a sequential translation-based MSA model (STMSA) with two key designs. First, a text-centric bidirectional translation mechanism leverages the dominant role of the text modality in affective tasks, sequentially establishing bidirectional mappings between text and the audio and video modalities. Guided by textual semantics, it explores the deep connections among the three modalities and produces cross-modal representations that align more closely with real affective distributions. Second, a low-complexity completion architecture avoids explicitly generating the missing modality: it fits the distribution of joint representations in a shared space using only an encoder-decoder. Extensive experiments on two public datasets, CMU-MOSI and IEMOCAP, demonstrate that the proposed model outperforms 10 state-of-the-art baselines.
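The abstract describes the two components only at a high level. As a rough illustration (this is not the authors' released code; every module name, dimension, pooling choice, and loss below is an assumption), the text-centric bidirectional translation can be read as cross-attention in both directions between text and each non-text modality, followed by a plain encoder-decoder over the joint representation standing in for the low-complexity completion module:

import torch
import torch.nn as nn

class BidirectionalTranslator(nn.Module):
    """Cross-attention in both directions between text and one other modality."""
    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.t2m = nn.MultiheadAttention(dim, heads, batch_first=True)  # text -> modality
        self.m2t = nn.MultiheadAttention(dim, heads, batch_first=True)  # modality -> text

    def forward(self, text, modality):
        to_modality, _ = self.t2m(text, modality, modality)  # text queries the modality
        to_text, _ = self.m2t(modality, text, text)          # modality queries the text
        return to_modality, to_text

class STMSASketch(nn.Module):
    def __init__(self, dim=128, num_classes=3):
        super().__init__()
        self.text_audio = BidirectionalTranslator(dim)  # sequential step 1: text <-> audio
        self.text_video = BidirectionalTranslator(dim)  # sequential step 2: text <-> video
        self.encoder = nn.Linear(4 * dim, dim)  # encoder-decoder standing in for the
        self.decoder = nn.Linear(dim, 4 * dim)  # low-complexity completion module
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text, audio, video):
        t2a, a2t = self.text_audio(text, audio)
        t2v, v2t = self.text_video(text, video)
        joint = torch.cat([t2a, a2t, t2v, v2t], dim=-1).mean(dim=1)  # pool over time
        z = self.encoder(joint)
        recon = self.decoder(z)  # train with e.g. MSE(recon, joint) plus the task loss
        return self.classifier(z), recon, joint

# Toy usage: 2 utterances, 10 time steps, 128-dim features per modality.
model = STMSASketch()
t, a, v = (torch.randn(2, 10, 128) for _ in range(3))
logits, recon, joint = model(t, a, v)
print(logits.shape)  # torch.Size([2, 3])

In the paper's setting, the reconstruction term would presumably be computed so that the shared-space representation remains well-fitted even when audio or video features are absent at test time; the sketch above simply returns the reconstruction alongside the classification logits.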


Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant no. 62273290), the Special Funding Program of the Shandong Taishan Scholars Project, the Key Technology Research and Development Program of Shandong (Grant no. 2025CXPT077), the National Cultural and Tourism Technology Innovation Research and Development Project, and the Henan Province Science and Technology Research Project (no. 252102210138).

Author information

Author notes
  1. Shanqi Lu, Ling Shang, Hongxiang Sun and Jing Wang contributed equally to this work.

Authors and Affiliations

  1. School of Information Engineering, North China University of Water Resources and Electric Power, Zhengzhou, 450056, China

    Yan Hai, Shanqi Lu & Jing Wang

  2. The School of Computer and Control Engineering, Yantai University, Yantai, 264005, China

    Zhizhong Liu & Hongxiang Sun

  3. Digital Smart Creative Department, Henan Culture and Tourism Investment Group Co., Ltd, Luoyang, 471026, China

    Ling Shang


Corresponding author

Correspondence to Zhizhong Liu.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.


About this article


Cite this article

Hai, Y., Lu, S., Liu, Z. et al. Sequential translation-based multimodal sentiment analysis under uncertain missing modalities. Sci Rep (2026). https://doi.org/10.1038/s41598-026-46910-2


  • Received: 19 October 2025

  • Accepted: 28 March 2026

  • Published: 07 May 2026

  • DOI: https://doi.org/10.1038/s41598-026-46910-2


Keywords

  • Multimodal sentiment analysis
  • Uncertain missing modalities
  • Sequential translation
  • Transformer