Introduction

Cybercrimes such as phishing, identity theft, and spoofing are rising in Malaysia (MyCERT, 2021); among these, phishing accounts for the highest incidence of cyberattack victimization (Statista, 2019). Popular mobile instant messaging apps like WhatsApp and Facebook Messenger enable real-time and convenient communication within virtual communities (Leon, 2018; Suganya, 2016). Alas, this characteristic makes these platforms a hunting ground for scammers (Suganya, 2016). A primary reason why cybercriminals, such as phishers, favor mobile instant messaging for targeting vulnerable and unsuspecting victims (Central Bank of Malaysia, 2017) is that, unlike emails, instant messaging applications have limited to no spam filters (Verkijika, 2019). When trusting victims disclose confidential information to scammers via messaging apps, they risk losing money (Aiman, 2020). Indeed, mobile phishing victimization is more common than traditional phishing, with mobile phishing success rates three times higher than those of general phishing attacks (Goel and Jain, 2018). In light of these trends, it is hardly surprising that phishing is treated as a matter of utmost importance among security researchers and practitioners (Verkijika, 2019; Arachchilage and Love, 2014; Chen et al. 2020; Musuva et al. 2019), given that it significantly threatens personal and organizational information security (Verkijika, 2019).

Traditionally, phishing entails sending fake emails to trick users into disclosing personal information (Sarker et al. 2024), often by impersonating an authorized person from a legitimate institution (Frauenstein and Flowerday, 2020). However, modern phishing has evolved to include channels such as mobile devices and instant messaging apps. Social media (i.e., instant messaging platforms) supports both synchronous (i.e., embedded chat features) and asynchronous (i.e., private/personal messages) communication modes (Frauenstein and Flowerday, 2020; Kuss and Griffiths, 2011). Consequently, instant messaging serves as a transmission channel for phishing messages, exposing users to motivated offenders and their phishing attacks (Verkijika, 2019; Frauenstein and Flowerday, 2020; Ahmad et al. 2023). From a scholarly standpoint, while a significant amount of research effort has gone into determining the predictors of phishing email victimization (Musuva et al. 2019; De Kimpe et al. 2018; Ge et al. 2021; Jansen and Leukfeldt, 2016; Ngo et al. 2020; Zhang et al. 2012), research on phishing via instant messaging is relatively limited (Frauenstein and Flowerday, 2020), particularly in Malaysia.

A lack of cybersecurity knowledge and awareness among Internet users is a primary factor contributing to their vulnerability to cybercrime (Zolkiffli et al. 2023). The recent surge in phishing cases in Malaysia highlights the prevalence of this issue, indicating a lack of phishing awareness and knowledge among Malaysian Internet users (Kaur, 2024; Mohd and Mohd, 2021; Olivia Tan et al. 2020). Instant messaging applications, such as WhatsApp and Facebook Messenger, are commonly used for both personal and professional communication (Wei, 2014). This multipurpose nature makes them attractive to phishers, who can anonymously disseminate false information and clickbait to target Malaysian users (Saudi et al. 2007; Singh, 2013; Tan, 2023). Recent reports indicate that young Malaysians are particularly vulnerable to phishing attacks via instant messaging platforms, facing a heightened risk of financial loss due to investment and loan scams (Adamu et al. 2020; Bernama, 2019; Bernama, 2022; Goh, 2022; Singh, 2021; Wong, 2021; Yeoh, 2023; Zainal et al. 2022).

Given the extensive usage of instant messaging, increasing knowledge and awareness of fraud prevention is critical. This involves providing information on useful online security measures, such as how to use instant messaging apps safely and avoid typical risks (Pascal, 2024). Despite the increasing prevalence of cybercrime, the understanding of how individuals’ awareness (i.e., knowledge) and cybersecurity protection behavior vary in response to diverse cyber threats remains limited (Moti et al. 2020). To the best of our knowledge, there is little empirical evidence on the relationship between threat knowledge and cyber protection behavior. To investigate the relationship between cybersecurity knowledge and online security management on instant messaging platforms, this study therefore examines whether knowledge of the phishing domain is a significant predictor of effective security practices.

While numerous studies have been conducted on the governance and implications of cyber fraud, few have examined the risk of phishing victimization in Malaysia (Mohd and Mohd, 2021). This research gap is unfortunate because phishing threats have increased markedly in Malaysia compared to other cybercrimes (Singh et al. 2021), resulting in significant financial losses (Mohd and Mohd, 2021; Singh et al. 2021). Although phishing cannot be completely eliminated, it can be mitigated and prevented to some extent (Mohd and Mohd, 2021). This study provides a first crucial step in this direction by identifying the factors driving the risk of instant messaging phishing victimization in Malaysia, thereby informing potential risk factors and recommending effective preventive interventions. Increasing phishing knowledge could make users more vigilant and savvy, reducing the likelihood of phishing victimization. To achieve this, we propose a research model for predicting instant messaging phishing victimization risk (phishing susceptibility). The following section outlines the research background, theoretical framework, and the development of the hypotheses.

Literature review and theoretical foundation

Phishing susceptibility

Phishing susceptibility represents the risk that a phishing attack will dupe Internet users (Chen et al. 2020). It denotes the likelihood that a person will respond to phishing attacks, including interacting with or being lured by clickbait (Wang et al. 2012). The respective literature places susceptibility to phishing as a dependent variable (Chen et al. 2020; Frank-Chou et al. 2021; Musuva et al. 2019; Parsons et al. 2019), as phishing victims are expected to update their susceptibility to phishing by combining previous victimization experience with a new phishing encounter (Chen et al. 2020).

Previous research on phishing susceptibility and victimization has primarily employed two types of dependent variables: (1) phishing susceptibility, assessed using a five-point Likert scale to measure respondents’ likelihood of being deceived by a phishing attack (Musuva et al. 2019); and (2) phishing victimization experience, measured using dichotomous scales to determine whether respondents have been victims of phishing attacks (Ngo et al. 2020).

Chen et al. (2020) highlight the frequent use of the term “phishing susceptibility” in studies examining phishing victimization. Numerous studies have explored the relationship between deception detection processes and phishing susceptibility (Alseadoon et al. 2014; Chen et al. 2020; Frank-Chou et al. 2021; Frauenstein and Flowerday, 2020; Musuva et al. 2019; Vishwanath et al. 2011). This is because, from a temporal perspective, individuals must engage in deception detection before determining their susceptibility to phishing attacks (Chen et al. 2020).

Given the current study’s primary objective of identifying significant predictors of phishing susceptibility, a dichotomous dependent variable (yes or no) measuring whether respondents responded to the phisher is not applicable. Instead, the endogenous variable in this study is phishing susceptibility, measured using a five-point Likert scale. This approach aligns with previous research that has employed phishing susceptibility to examine respondents’ perceptions of their vulnerability to phishing attacks (Algarni, 2019; Alseadoon et al. 2014; Chen et al. 2020; Frauenstein and Flowerday, 2020; Frauenstein et al. 2023).

Systematic review of phishing empirical studies

This paper adhered to the guidelines for conducting systematic literature reviews proposed by Webster and Watson (2002). To develop a comprehensive theory of phishing susceptibility, this study employs a concept-centric literature review approach, synthesizing existing empirical research on phishing susceptibility. A comprehensive literature review conducted via Scopus, Web of Science, Springer, and Google Scholar, focusing exclusively on studies that employed survey methodologies, identified several theoretical frameworks commonly employed in phishing victimization research (Table 1). These include the Theory of Deception (TOD), Routine Activities Theory (RAT), Lifestyle-Routine Activities Theory (LRAT), Elaboration Likelihood Model (ELM), Heuristic-Systematic Model (HSM) of information processing, Social Judgment Theory (SJT), Big Five Personality Traits, Protection Motivation Theory (PMT), Social Cognitive Theory (SCT), and Stimulus Interpretation Response (SIR).

Table 1 Empirical Studies of Phishing Detection and Victimizations.

The TOD, which emphasizes the significance of assessing targeted victims’ domain-specific knowledge of deception detection factors, was a foundational theory in explaining phishing susceptibility (Chen et al. 2020). According to Musuva et al. (2019), this theory suggests that individuals’ ability to make informed decisions when confronted with phishing attacks is influenced by their understanding of deception tactics. Previous empirical studies have primarily focused on three categories of phishing susceptibility: (1) phishing target characteristics, utilizing the RAT (Leukfeldt 2014) and LRAT (Ngo et al. 2020; Ribeiro et al. 2024) to explore how individual lifestyles and habits may increase vulnerability to phishing attacks; (2) phishing message characteristics, employing ELM (Algarni, 2019; Musuva et al. 2019), HSM (Farkhondeh et al. 2020; Frauenstein and Flowerday, 2020; Frauenstein et al. 2023; Ribeiro et al. 2024) and SIR (Frank-Chou et al. 2021) to investigate how individuals process and evaluate phishing messages; and (3) individual intentions to engage in protective behaviors, applying PMT (Manoharan et al. 2022) and SCT (Kwak et al. 2020) to understand factors influencing individuals’ willingness to adopt protective measures against phishing.

While RAT and LRAT have been frequently employed as theoretical frameworks for predicting cybercrime such as phishing, their applicability in the online context has been debated. Yar (2005) and Hsieh and Wang (2018) argued that RAT’s emphasis on physical proximity may be less relevant in the virtual world. RAT rests on the principle of physical co-presence between offender and victim (Choi and Lee, 2017), yet technological advancements have rendered physical proximity less relevant in the online context (Pratt et al. 2010; Reyns and Henson, 2015). The virtual environment thus challenges RAT’s applicability in cybercrime investigations due to the lack of physical interaction between offender and victim (Choi, 2015; Choi and Lee, 2017; Hsieh and Wang, 2018).

Previous research suggests that RAT may not be sufficient for explaining criminal victimization, particularly phishing (Hutchings and Hayes, 2009; Leukfeldt, 2014; Ngo and Paternoster, 2011). While RAT and lifestyle exposure theory can provide insights into victimization, future research may benefit from developing an integrated theory to predict cybercrime victimization (Choi, 2008; Choi and Lee, 2017; Choi et al. 2019; Lee and Choi, 2021). Cyber-RAT, a modified version of RAT, has been applied to cyber-interpersonal offending and violence victimization. Despite its potential, the application of Cyber-RAT to phishing remains unexplored. This study aims to address this theoretical gap by investigating the relevance of Cyber-RAT in predicting the risk of instant messaging phishing victimization.

Theory of deception

The Theory of Deception explains how Internet users recognize deception (Musuva et al. 2019), and researchers adopt this theory to investigate online deception (i.e., phishing susceptibility) occurring in virtual networks (Chen et al. 2020; Grazioli, 2004; Vishwanath et al. 2011; Wang et al. 2012). Four stages characterize the model: “activation,” “postulate generation,” “postulate evaluation,” and “global assessment” (Johnson et al. 2001). The first process (activation) manifests when targeted victims receiving deceptive information recognize inconsistent cues that differ from the expectation of an authentic message (Vishwanath et al. 2011); the targeted users generate interpretative data based on their knowledge of the threat domain (Musuva et al. 2019; Wang et al. 2012). The second process, postulate generation, entails detecting anomalies and then developing deception hypotheses based on prior knowledge to explain the inconsistencies (Wang et al. 2012). The third stage, postulate evaluation, is when targeted users use their competencies and knowledge of the threat domain to compare or evaluate the deception cues against some criteria (Musuva et al. 2019; Wang et al. 2012; Wright et al. 2009). The final process is the global assessment, where the evaluated messages or information are combined to form an overall judgment of whether deception is present (Musuva et al. 2019; Wang et al. 2012). The targeted victims combine the anticipated outcomes and then use the results to assess a phishing postulate (Chen et al. 2020). The phishing attempt is confirmed once the phishing evaluation is completed (Chen et al. 2020).

Vishwanath et al. (2011) used the Theory of Deception to explain how deception techniques (i.e., cues) can be used to detect deception, and the model is appropriate for comprehending phishing-based deception (Vishwanath et al. 2011). Similarly, Musuva et al. (2019) postulate that the Theory of Deception fits very well in examining social engineering cases because it systematically guides the assessment of the mental processes undertaken by message recipients to recognize the gaps that lead to successful attacks. This theory contends that it is critical to assess targeted victims’ domain-specific knowledge (i.e., the targeted victims’ understanding of deception detection cues) to assist them in making final decisions when confronted with phishing attacks (Musuva et al. 2019).

Cyber routine activities theory

Choi (2008) modeled the Cyber-Routine Activities Theory (Cyber-RAT) based on the Routine Activity Theory (RAT) (Cohen and Felson, 1979) and the Lifestyle Routine Activities Theory (LRAT) (Hindelang et al. 1978). According to the conceptual model, risky online behaviors and digitally capable guardianship (e.g., online security management) are significant predictors of computer crime victimization (Choi, 2008; Choi and Lee, 2017; Choi et al. 2019). The three main elements of RAT are a “motivated offender,” “suitable targets,” and the “absence of capable guardianship” (Choi, 2008; Cohen and Felson, 1979). The “motivated offender” element refers to “proximity to the motivated offender” and “exposure to risk situations” associated with Internet users’ online frequency (Ngo et al. 2020; Leanna, 2020; Milani et al. 2020). A suitable target signifies the visibility of victims based on their activities, contributing to the extent to which they appear susceptible to potential offenders (Leukfeldt and Yar, 2016). Choi and Lee (2017) denote capable guardianship as the proficient and proactive management of online security measures, encompassing various practices and strategies to safeguard digital assets, minimize vulnerabilities, and ensure a robust defense against cyber threats, ultimately maintaining a secure digital environment. The Routine Activity Theory posits that a crime is likely to occur when three elements converge: a motivated offender, a suitable target, and the absence of a capable guardian, leading to victimization (Leukfeldt and Yar, 2016; Leukfeldt, 2014; Holt and Bossler, 2009; Lastdrager, 2014). However, if one of the components is missing, cybercrime victimization might not occur.

The LRAT was developed around individuals’ daily social interactions and lifestyles (Hindelang et al. 1978). When Internet users engage in risky cyber activities, they are particularly exposed to offenders and thus victimized (Choi and Lee, 2017; Choi et al. 2019). Online risky activities include professional and recreational pursuits (Hindelang et al. 1978). Both the RAT and the LRAT posit that poor security management increases the likelihood of Internet users being victimized online and vice versa (Choi and Lee, 2017). Effective online security management (i.e., online privacy) reduces the risk of being a victim of cybercrime, particularly phishing (Leanna, 2020; Kabiri et al. 2020; Naci and Christopher, 2020).

The Cyber-Routine Activity Theory (Cyber-RAT) has garnered significant attention in cybercrime victimization research and offers an apt predictive model for comprehending cybercrime victimization (Choi, 2008; Choi and Lee, 2017; Choi et al. 2019). This research focuses on the roles of online digital guardianship (online security management) and risky online activities (vocational, leisure, and instant messaging activities) in affecting instant messaging phishing victimization.

Heuristic-systematic information processing model

The Heuristic-Systematic Model (HSM) proposed by Chen and Chaiken (1999) has been widely used in social psychology, particularly in persuasion studies examining how a received message or piece of information can change people’s attitudes (Luo et al. 2013; Rahman, 2018). The HSM posits that when people are persuaded, they first determine the validity of the acquired information by combining systematic and heuristic processing in a composition determined by different predictors (Luo et al. 2013). The HSM is pervasive in the field of information security (Frauenstein and Flowerday, 2020) and is recognized as an appropriate framework for understanding phishing victimization (Harrison et al. 2016; Farkhondeh et al. 2020; Luo et al. 2013; Valecha et al. 2015; Vishwanath et al. 2018; Zhang et al. 2012).

The HSM entails two information processing modes: systematic and heuristic (Frauenstein and Flowerday, 2020; Luo et al. 2013). The heuristic mode is a limited form of information processing that demands reduced cognitive effort and fewer cognitive resources, typically resorted to by people lacking cognitive or motivational resources (Frauenstein and Flowerday, 2020). The heuristic processing mode enables message receivers to decide based on specific indicators. Heuristic cues have been subjected to extensive research, given that phishing attackers frequently use persuasive cues to deceive targeted victims (Musuva et al. 2019; Wright et al. 2020).

On the other hand, systematic processing occurs when Internet users thoroughly evaluate the message’s content while investigating and validating the authenticity of the phishing messages (Luo et al. 2013). However, phishing or suspicious messages are typically intended to slow down systematic processing (Workman, 2008). Message involvement is a systematic cue that has been studied (Chen et al. 2020; Farkhondeh et al. 2020; Musuva et al. 2019; Wang et al. 2012).

Justification of integration of theories

The Theory of Deception (TOD) delves into the cognitive processes of individuals encountering deceptive communication, accentuating the importance of understanding their cognitive functioning and reasoning capacity to navigate deception effectively. The theory investigates deception strategies and indicators for recognizing fraud (Musuva et al. 2019; Vishwanath et al. 2011), rendering it relevant in the social engineering domain by analyzing the cognitive processes of message recipients to identify vulnerabilities contributing to successful attacks. In sum, the TOD underlines the significance of evaluating the message recipient’s subject knowledge and comprehension of recognition indicators (Wang et al. 2012; Shang, et al. 2023; Vishwanath et al. 2011).

A constraint of the TOD lies in its lack of discernment between the various indicators deemed essential for recognizing fraud (Musuva et al. 2019). Deception works when a scammer preys on the target’s information processing deficiency and actively undermines the target’s mental efforts (Johnson et al. 2001). Thus, victimization is attributed to an error in knowledge processing, an absence of the cognitive ability to recognize false information, or both. Implicatively, recipients are more likely to fall prey to fraud if they emphasize persuasive cues instead of threat-detection signs and the quality of the arguments inherent in the phishing messages (Luo et al. 2013; Vishwanath et al. 2011).

The Heuristic-Systematic Model potentially addresses the preceding concern (Musuva et al. 2019) by discerning the types of cognitive processing modes (heuristic and systematic) involved in the evaluation of persuasive communication (Chaiken, 1980). The HSM may offer a more integrative view when applied in conjunction with the Theory of Deception’s one-process approach within social engineering studies (Workman, 2008). Wang et al. (2012) integrated TOD and HSM and confirmed significant robustness in predicting individual phishing email susceptibility. In light of this, this research integrates TOD and HSM to predict the risk of instant messaging phishing victimization in the Malaysian context.

Nonetheless, while phishing message processing and knowledge relevant to phishing detection are claimed to play an essential role in predicting phishing susceptibility (Frauenstein and Flowerday, 2020; Luo et al. 2013), many Internet users have developed, most likely as a result of their daily experience, an ability to recognize conventional mass phishing (Rizzoni et al. 2022). However, this notion may be less valid for more carefully crafted, targeted phishing messages (Rizzoni et al. 2022). Phishing still dupes many people, even though it is a widely publicized cybercrime (De Kimpe et al. 2018). Because receiving phishing messages is a precursor to victimization (De Kimpe et al. 2018), we consider phishing target characteristics a crucial factor in this study. Sommestad and Karlzén’s (2019) meta-analysis indicates that message attributes and phishing recipient characteristics affect susceptibility. Unstructured online activities, analogous to “hanging out on the street” offline, are conducive to crime, particularly phishing victimization (De Kimpe et al. 2018; Ngo et al. 2020; Leukfeldt, 2014; Hutchings and Hayes, 2009). Therefore, this study delves into the relationship between various unstructured online activities (e.g., online risky activities) and susceptibility to phishing in instant messaging. In sum, our research model integrates the TOD, HSM, and Cyber-RAT to predict instant messaging phishing victimization.

Echoing Musuva et al. (2019), who highlighted the lack of theoretical frameworks in existing phishing studies, particularly those focusing on Malaysia (Asfoor et al. 2018), this study was undertaken to fill this research gap. The current study presents a theoretically grounded empirical analysis of phishing target characteristics, phishing message characteristics, and individual phishing knowledge to understand why individuals fall victim to phishing attacks. To the best of our knowledge, and in contrast to the majority of the available literature, which relies on single pre-existing theories, this study integrates these theories to explore the causal relationships between various predictors and thereby better understand phishing susceptibility among Malaysian youth. Figure 1 depicts the current study’s overall research framework.

Fig. 1: The present study’s research framework.

This diagram depicts individual prior phishing knowledge, phishing message characteristics, and phishing target characteristics influencing phishing susceptibility (H = Hypothesis).

Individual prior knowledge

Knowledge of the threat domain

Knowledge of the threat domain characterizes an individual’s acquired skills and information for detecting a threat, such as a phishing attack (Musuva et al. 2019). Knowledge about threat techniques and terminologies is one crucial aspect of such knowledge (Musuva et al. 2019; Grazioli, 2004). Empirical studies have been conducted to determine whether knowledge plays a significant role in phishing detection (Musuva et al. 2019; Wang et al. 2012). Educating Internet users about these strategies could reduce their susceptibility to the phishing threat (Verkijika, 2019). Internet users with more knowledge about phishing attacks can discern phishing threats (Wang et al. 2012) and confidently mitigate the risks of phishing victimization (Musuva et al. 2019).

Interestingly, one counterintuitive study found a positive relationship between phishing knowledge and susceptibility to phishing (Diaz et al. 2019); that is, the greater an individual’s knowledge of phishing, the greater his or her susceptibility to phishing. The authors speculated that Internet users who have experienced phishing attacks may be likelier to overestimate their phishing knowledge, resulting in a higher victimization rate. Although phishing-related knowledge reduces the likelihood of susceptibility to phishing scams, this awareness of suspicious messages may not reduce the likelihood of clicking on phishing messages (Downs et al. 2006; Sturman et al. 2023). This study aims to clarify this conundrum by examining the link between threat domain knowledge and phishing susceptibility. The current study, guided by the Theory of Deception (Johnson et al. 1992), seeks to determine whether knowledge of phishing or scams can reduce the risk of phishing victimization (Wang et al. 2012). Thus,

H1: Knowledge of the threat domain is negatively related to instant messaging phishing susceptibility.

Phishing target’s characteristics

Cyber risky behaviors

The central tenets of risky cyber activities are cyber-vocational activities, cyber-leisure activities, and cyber social media activities (Choi, 2008; Choi and Lee, 2017; Choi et al. 2019). One’s daily online activities influence exposure to the risk of cybercrime victimization (Choi, 2008), including phishing victimization (Hutchings and Hayes, 2009; Holt and Bossler, 2009; Leanna, 2020; Leukfeldt, 2014; Ribeiro et al. 2024), cyber-interpersonal violence victimization (Choi and Lee, 2017), and cyber-bullying victimization (Choi et al. 2019). Internet users’ visibility from various online activities contributes to the extent to which the victim is a suitable target from the perspective of a would-be offender (Leukfeldt and Yar, 2016; Ngo et al. 2020). Empirical evidence indicates that cyber-risky activities predict cybercrime (Choi and Lee, 2017; Choi et al. 2019; Goede et al. 2023), cyber-interpersonal violence (Choi and Lee, 2017), and cyberbullying (Choi and Lee, 2017) victimization. Therefore, this study predicts that:

H2: Engaging in cyber-risky social media (instant messaging) activity is positively related to instant messaging phishing susceptibility.

H3: Engaging in cyber-risky leisure activity is positively related to instant messaging phishing susceptibility.

H4: Engaging in cyber-risky vocational activity is positively related to instant messaging phishing susceptibility.

Online security management

Capable guardianship can be classified into two categories: physical guardianship and digital (i.e., cybersecurity) guardianship, with the latter emphasizing effective online security management (Kabiri et al. 2020). This encompasses cybersecurity practices and security applications/software (i.e., cybersecurity management). Regarding cybercrime victim behavior, cybersecurity management has been identified as the most critical factor in predicting cybercrime victimization among Internet users (Choi and Lee, 2017; Back, 2016). Internet users frequently employ information security management techniques to protect themselves from cybercrime attacks (Abu-Ulbeh et al. 2021), as affirmed by the Routine Activities Theory, which documents guardianship as the most influential and critical factor in reducing victimization (Leukfeldt and Yar, 2016). Online security management has garnered significant empirical support as a critical factor impacting cybercrime victimization. Insufficient online security management, including neglecting to use privacy protection on social media, enables motivated offenders to gather the potential victim’s information (Choi and Lee, 2017; Choi et al. 2019). Whitty (2019) found that Internet users engage in online guardianship behaviors to reduce their risk of online victimization. Recent findings confirmed that digitally capable guardianship significantly predicts cybercrime victimization (Smith and Stamatakis, 2021; Guedes et al. 2022).

Studies have demonstrated the pivotal role of knowledge in effective guardianship, shaping decision-making processes. The concept of guardianship functions along a range of capacities, including accessibility, observation, and involvement (Reynald, 2010). A fundamental requirement for effective guardianship, as outlined by Felson (2006), is a comprehensive knowledge of the immediate environment and its associated risks. In line with Felson’s (2006) perspective, capable guardianship in the online realm involves a deep understanding of online security practices and their application within social media platforms in preventing crime (Choi and Lee, 2017; Choi et al. 2019). In this way, online security management complements the concept of capable guardianship from routine activity theory.

Capable guardianship is an attitude that reflects an individual’s willingness to actively engage in crime prevention efforts (Marzbali et al. 2020). Possessing a capable guardianship attitude indicates that one is willing to take an active role in efforts to prevent crime. The association between knowledge and attitude is well established and is supported by the Knowledge, Attitude, and Behavior (KAB) model (Schafeitel-Tähtinen et al. 2024). Knowledge is anticipated to transform a person’s mindset (i.e., attitudes) and lead to changes in behavior. An individual’s attitude towards information security is shaped by their level of expertise (i.e., knowledge) in cybersecurity awareness (McCormac et al. 2017).

Inadequate online security management, such as neglecting to engage in privacy protection on social media, can leave individuals vulnerable to online exploitation (Choi and Lee, 2017). Therefore, possessing knowledge of cyber threats is crucial for developing the capability to effectively prevent such attacks (Moti et al. 2020). A strong understanding of cyber threats can empower individuals to adopt effective protective behaviors against phishing attacks and clickbait, and ultimately achieve optimal levels of cybersecurity (Martens et al. 2019). In addition, Internet users with a higher level of cyber knowledge are more likely to recognize potential cyber threats and, consequently, engage in more effective cyber protection behaviors (Moti et al. 2020). This study hypothesizes that respondents with a higher level of cybersecurity knowledge (i.e., knowledge of the threat domain) are more likely to engage in preventive measures (i.e., practicing online security management) against phishing attacks. Therefore, this study assumes that:

H5: Effective online security management is negatively related to instant messaging phishing susceptibility.

H6: Knowledge of the threat domain is positively correlated with good online security management on instant messaging platforms.

Phishing messages characteristics

Message involvement

Message involvement represents how individuals perceive the information’s relevance within the context of their interests (Chen et al. 2020). It concerns the level of engagement and interest individuals feel toward the message, reflecting their subjective evaluation of how meaningful the content is to their specific concerns, preferences, or areas of interest. High-involvement messages are those deemed more pertinent to individuals’ interests, while low-involvement messages are perceived as holding little personal relevance and evoke relatively weaker personal connections (Wang et al. 2012). Individuals are less likely to engage in information processing when they perceive information as less relevant to their needs, whereas high-involvement messages or information prompt deeper cognitive effort (Wang et al. 2012).

Message involvement, as a systematic information cue (Franz and Croitor, 2021; Xiao et al. 2018), significantly impacts phishing susceptibility. Highly-involved messages incentivize individuals to devote higher cognitive effort to be confident in the thoroughness of their judgment and decision-making (Chaiken, 1980). Framed differently, a person will expend as much cognitive effort as is required to achieve adequate levels of confidence for messages with high involvement (Wang et al. 2012). Different levels of message involvement have varying effects on an Internet user’s susceptibility to phishing (Franz and Croitor, 2021). A higher level of message involvement, in particular, increases susceptibility to phishing victimization (Wang et al. 2012; Franz and Croitor, 2021), given that a higher level of message involvement is more likely to elicit a favorable response. Therefore,

H7: Phishing messages with a higher level of message involvement are positively related to instant messaging phishing susceptibility.

Persuasive cues

Persuasive cues are cues in a message that can influence one’s perception (Musuva et al. 2019), encompassing layout, grammar, spelling, genre conformity, and message source (Luo et al. 2013; Vishwanath et al. 2011). Unlike argument quality, persuasive cues allow recipients to judge a message instantly without scrutinizing its content (Musuva et al. 2019). Despite not triggering a thorough inspection of the message content, these cues significantly affect recipients’ trust in the message (Musuva et al. 2019).

There is a significant relationship between persuasive cues and phishing susceptibility (Wright et al. 2020). Scholars have investigated the impact of various persuasive cues on phishing susceptibility (Grazioli, 2004; Workman, 2008; Vishwanath et al. 2011; Wang et al. 2012). Musuva et al. (2019) discovered that persuasive cues affect susceptibility to phishing victimization. People relying on cognitive shortcuts (i.e., heuristic cues) to evaluate phishing messages are likelier to fall victim to phishing attacks (Hanus et al. 2022). This is because specific cues embedded in deceptive messages can disrupt the systematic processing of their content, which could otherwise potentially reveal the deception in phishing messages. Hence,

H8: Phishing messages with persuasive cues are positively related to instant messaging phishing susceptibility.

Methodology

Participants and data collection

The target demographic for this study was Generation Z instant messaging users. According to recent reports, young Malaysians aged 18 to 29 have lower awareness and perceptions of cybercrime, making them easy targets for cybercrime attackers (Ghani and Ghazali, 2019; Hasan et al. 2020). Similarly, studies have found that young adults are more likely than older adults to fall victim to fraud (MCMC, 2023; Digi, 2023; Maxis, 2023). In 2023, more than 95% of Malaysian instant messaging users were between the ages of 18 and 34, with over 50% belonging to Generation Z (ages 18 to 24) (Start.io, 2024). In addition, phishing cyberthreats in Malaysia were carried out on instant messaging platforms, with WhatsApp being the most commonly used method for delivering phishing attacks (MCMC, 2023; Maxis, 2023; Digi, 2023).

People born in 1996 and later are categorized as Gen-Zers (Cilliers, 2017). According to Nagy (2017), Gen-Zers are people born between 1995 and 2012. Noble et al. (2009) define Gen-Zers as those born between 1995 and 2009. Considering the suggested age range for Gen-Zers, this study defined Gen-Zers as individuals born in 1995 or later.

The present study applied a non-probability purposive sampling technique for data collection. Purposive sampling allows for more reflective and situation-specific data (Lew et al. 2020). Participants were chosen based on their knowledge of cybercrime phishing victimization (Ghazi-Tehrani and Pontell, 2021), specifically having received phishing messages through instant messaging. As a result, purposive sampling in conjunction with pre-set screening criteria was deemed appropriate for the current study. Screening questions ensured that study participants met the following criteria:

(a) Malaysians born between 1995 and 2004;

(b) have used mobile instant messaging for online communication;

(c) have ever received phishing messages.

This study followed ethical guidelines and was approved by the university’s Research Ethics Committee. Respondents had to be at least 18 years old and complete an informed consent form before participating in the survey. All participants were fully informed about the study’s purpose, and the survey ensured anonymity by not collecting respondents’ personal information. Respondents were also adequately informed of their other rights, including confidentiality, privacy, voluntary participation, and the right to withdraw from this study without explanation. Because Malaysian young adults actively use social media (MCMC, 2020), data were collected via an online survey using Google Forms posted on social media platforms such as Facebook and WhatsApp.

Measures

The online survey was developed using various sub-scales (Table 2). Items from the knowledge of the threat domain scale developed by Musuva et al. (2019) were used to measure individual prior knowledge. Items from the risky cyber activities scale (social media: instant messaging, vocational, and leisure activities) developed by Choi and Lee (2017) were used to measure risky cyber behavior. This study operationalized cyber-risky social media activity as cyber-risky instant messaging activity. Cybersecurity management was measured using the scales developed by Kabiri et al. (2020). The message involvement and persuasive cues scales were used to measure the phishing message characteristics. The message involvement and phishing susceptibility scales were taken from Chen et al.’s (2020) study. The message involvement scale was scored on a seven-point scale ranging from strongly disagree (1) to strongly agree (7). The persuasive cues scale was adapted from Musuva et al. (2019), ranging from no influence at all (1) to influence to a very great extent (5).

Table 2 Research construct’s measurement and normality.

An expert review ensured the content validity of the survey. A pilot test (n = 54) performed before the main data collection confirmed that the reliability of all research constructs was above 0.70 (Hair et al. 2014).
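As an illustration of this reliability check, the minimal sketch below computes Cronbach's alpha from raw item responses; the construct label, item columns, and simulated data are hypothetical placeholders rather than the study's actual pilot data.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical pilot responses: 54 respondents x 4 Likert items of one construct
rng = np.random.default_rng(1)
pilot = pd.DataFrame(rng.integers(1, 6, size=(54, 4)),
                     columns=[f"KTD{i}" for i in range(1, 5)])
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")  # flag constructs below 0.70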

Results

Descriptive analysis

Of the 386 responses collected, twenty-five were removed due to straight-lining (Hair et al. 2014). The results indicated that the skewness of all research construct indicators ranged between −0.912 and 1.031, and the kurtosis of all indicators ranged from −1.024 to 0.293. Both skewness and kurtosis values fall within the normality criteria of ±2 for skewness and ±7 for kurtosis (Hair et al. 2010). In addition, Harman’s single-factor test was conducted to examine common method bias (CMB). An unrotated principal component factor analysis showed that a single factor accounted for 29.91% of the variance (less than 50%), indicating that CMB is not a concern in the current research framework (Podsakoff et al. 2003).
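A minimal sketch of these screening steps is shown below, assuming a hypothetical indicator matrix in place of the actual survey data; it checks item skewness and kurtosis against the stated cut-offs and approximates Harman's single-factor test with the variance explained by the first unrotated principal component.

import numpy as np
import pandas as pd
from scipy.stats import skew, kurtosis
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical indicator matrix: respondents x Likert items (placeholder data)
rng = np.random.default_rng(7)
items = pd.DataFrame(rng.integers(1, 6, size=(361, 30)),
                     columns=[f"item{i}" for i in range(1, 31)])

# Normality screening: |skewness| <= 2 and |kurtosis| <= 7 (Hair et al. 2010)
screen = pd.DataFrame({"skew": items.apply(skew),
                       "kurtosis": items.apply(kurtosis)})  # Fisher (excess) kurtosis
print(screen.agg(["min", "max"]).round(3))

# Harman's single-factor check: common method bias is suspected when one
# unrotated factor explains more than 50% of the total variance.
pca = PCA().fit(StandardScaler().fit_transform(items))
print(f"First factor explains {pca.explained_variance_ratio_[0]:.1%} of the variance")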

This study follows Kock’s (2015) recommendation to test for full collinearity. Variance inflation factor (VIF) values greater than 3.3 indicate potential collinearity problems (Kock, 2015). All the research constructs were regressed on a common variable. The VIF values for cyber-risky instant messaging activities (1.535), cyber-risky leisure activities (2.234), cyber-risky vocational activities (2.372), knowledge of the threat domain (1.357), message involvement (1.833), online security management (1.442), persuasive cues (1.591), and phishing susceptibility (1.466) were all less than 3.3. Thus, although single-source data can give rise to common method variance, the results indicate that common method bias is not an issue for the current data set.
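The full-collinearity test can be reproduced with ordinary VIF computations on construct scores, as in the sketch below; the construct names mirror those in the text, but the scores are simulated placeholders rather than the study's data.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical construct scores (e.g., item averages per respondent)
rng = np.random.default_rng(11)
constructs = ["im_activity", "leisure", "vocational", "knowledge",
              "involvement", "security_mgmt", "persuasive_cues", "susceptibility"]
scores = pd.DataFrame(rng.normal(size=(361, len(constructs))), columns=constructs)

# Full collinearity check (Kock, 2015): each construct regressed on all others;
# VIF values above 3.3 would indicate collinearity or common method bias.
X = sm.add_constant(scores)
vif = pd.Series({col: variance_inflation_factor(X.values, i)
                 for i, col in enumerate(X.columns) if col != "const"})
print(vif.round(3))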

Demographic profile

A total of 361 valid responses were used for the final analyses. Of those who responded, 29.6% were male (n = 107) and 70.4% were female (n = 254). The majority of participants (80.1%) were students, and 96.1% of those surveyed had a tertiary education (diploma, bachelor’s degree, master’s degree, or doctorate). WhatsApp (n = 321), Facebook Messenger (n = 241), and Telegram (n = 159) were the top three instant messaging platforms chosen by respondents for online communication. 30.2% of respondents said they rarely received phishing messages, while 5.8% said they received phishing messages more than once a week. One hundred sixty-eight respondents said they received phishing messages once or twice a month, and sixty-three said they received phishing messages once or twice every two weeks.

Measurement model assessment

To determine convergent validity, the factor loading, composite reliability (CR), and average variance extracted (AVE) were all evaluated. The outer loading of PC1 (0.405) for measuring persuasive cues was found to be less than 0.50 (Chin, 1998); thus, it was removed. As shown in Table 3, the CR and AVE values of each research variable all meet the 0.7 and 0.5 thresholds (Hair et al. 2011).

Table 3 Construct outer loading, composite reliability, and average variance extracted.
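The CR and AVE figures reported in Table 3 follow directly from the standardized outer loadings; the sketch below shows the computation with hypothetical loading values, not the study's estimates.

import numpy as np

def composite_reliability(loadings: np.ndarray) -> float:
    # CR = (sum of loadings)^2 / [(sum of loadings)^2 + sum of error variances]
    s = loadings.sum()
    return s ** 2 / (s ** 2 + (1 - loadings ** 2).sum())

def average_variance_extracted(loadings: np.ndarray) -> float:
    # AVE = mean of the squared standardized outer loadings
    return float(np.mean(loadings ** 2))

# Hypothetical outer loadings for one construct after dropping an item below 0.50
loadings = np.array([0.72, 0.78, 0.81, 0.69])
print(f"CR  = {composite_reliability(loadings):.3f}")         # threshold >= 0.70
print(f"AVE = {average_variance_extracted(loadings):.3f}")    # threshold >= 0.50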

The Fornell-Larcker criterion and the heterotrait-monotrait ratio of correlations (HTMT) were used to determine discriminant validity. Table 4 shows that the square roots of the AVEs (bold diagonal values) are greater than the correlations with other constructs, demonstrating discriminant validity (Fornell and Larcker, 1981). Furthermore, as shown in Table 5, the HTMT was used to assess the correlations between the research constructs; none of the values exceeded the 0.85 threshold (Henseler et al. 2015). As such, the Fornell-Larcker criterion and HTMT results provided sufficient evidence of discriminant validity for all variables, indicating that the measurement items are reliable and valid.

Table 4 Fornell Larcker Criterion.
Table 5 HTMT.
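To make the discriminant-validity checks concrete, the sketch below computes the HTMT ratio for one pair of constructs from an indicator correlation matrix; the item labels and data are hypothetical placeholders. (The Fornell-Larcker check simply compares the square root of each construct's AVE against its correlations with the other constructs.)

import numpy as np
import pandas as pd

def htmt(item_corr: pd.DataFrame, block_a: list, block_b: list) -> float:
    # Mean heterotrait correlation divided by the geometric mean of the
    # average monotrait correlations of the two indicator blocks.
    hetero = np.abs(item_corr.loc[block_a, block_b].values).mean()
    mono_a = np.abs(item_corr.loc[block_a, block_a].values[np.triu_indices(len(block_a), 1)]).mean()
    mono_b = np.abs(item_corr.loc[block_b, block_b].values[np.triu_indices(len(block_b), 1)]).mean()
    return hetero / np.sqrt(mono_a * mono_b)

# Hypothetical indicators for two constructs (placeholder data)
rng = np.random.default_rng(3)
data = pd.DataFrame(rng.normal(size=(361, 6)),
                    columns=["KTD1", "KTD2", "KTD3", "OSM1", "OSM2", "OSM3"])
ratio = htmt(data.corr(), ["KTD1", "KTD2", "KTD3"], ["OSM1", "OSM2", "OSM3"])
print(f"HTMT = {ratio:.3f}")  # values above 0.85 would signal a discriminant-validity problem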

Figure 2 indicates that six path relations have t values ≥ 1.645 and are thus significant at the 0.05 level. The data analysis results affirmed hypotheses 2, 3, 4, 6, 7, and 8, as summarized in Table 6. The findings indicated that knowledge of the threat domain (β = −0.041; p = 0.225) does not significantly predict instant messaging phishing susceptibility; thus, H1 was refuted. Engaging in cyber-risky instant messaging activities (β = 0.104; p = 0.034), cyber-risky leisure activities (β = 0.122; p = 0.020), and cyber-risky vocational activities (β = 0.115; p = 0.037) was positively related to instant messaging phishing susceptibility; thus, H2, H3, and H4 were supported. Effective online security management (β = −0.004; p = 0.470) does not significantly influence instant messaging phishing susceptibility; thus, H5 was rejected. Knowledge of the threat domain (β = 0.502; p = 0.042) significantly influenced online security management; thus, H6 was supported. Phishing messages with a high level of message involvement (β = 0.248; p < 0.001) and persuasive cues (β = 0.130; p = 0.025) were positively related to instant messaging phishing susceptibility; thus, H7 and H8 were supported.

Fig. 2: Full model of antecedents of phishing susceptibility.

***p < 0.001; *p < 0.05. This figure depicts the significant factors found influencing phishing susceptibility.

The research variables’ variance inflation factor (VIF) ranges from 1.000 to 2.353. All of the VIF values are less than 5 (Kock, 2015). The VIF result indicates that there is no multicollinearity between the exogenous variables (see Table 6). In sum, the model explains 31.8% of the variance in instant messaging phishing susceptibility and 25.2% of the variance in online security management. Figure 2 shows this study’s structural model.

Table 6 Bootstrapping Results.
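The path estimates and t values in Figure 2 and Table 6 come from the PLS-SEM bootstrapping routine in dedicated software; as a conceptual stand-in only, the sketch below bootstraps regression paths on simulated construct scores and applies the same one-tailed cut-off (t ≥ 1.645). All names and values are placeholders, and OLS here merely illustrates the resampling logic, not the PLS estimator itself.

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical construct scores; the actual estimates come from the PLS algorithm.
rng = np.random.default_rng(5)
df = pd.DataFrame(rng.normal(size=(361, 4)),
                  columns=["involvement", "persuasive_cues", "knowledge", "susceptibility"])
predictors = ["involvement", "persuasive_cues", "knowledge"]

def path_coefficients(sample: pd.DataFrame) -> np.ndarray:
    X = sm.add_constant(sample[predictors])
    return sm.OLS(sample["susceptibility"], X).fit().params.values[1:]

estimate = path_coefficients(df)
boot = np.array([path_coefficients(df.sample(len(df), replace=True, random_state=i))
                 for i in range(1000)])
t_values = estimate / boot.std(axis=0, ddof=1)
print(pd.DataFrame({"beta": estimate, "t": t_values}, index=predictors).round(3))
# A path is deemed significant at the 5% level (one-tailed) when t >= 1.645.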

PLS-predict

This study assessed PLS-predict (Q²) to examine the current research framework’s predictive relevance, using a ten-fold procedure repeated ten times. The Q² value of the latent variable (phishing susceptibility) is 0.285, greater than zero (Shmueli et al. 2019). Next, this study followed Shmueli et al.’s (2019) recommendation by comparing the prediction errors of all items against a linear model (LM) benchmark (PLS-LM). All items had lower RMSE values under the PLS model than under the LM benchmark, indicating that this study’s framework has strong predictive power (Shmueli et al. 2019). Table 7 presents the PLS-predict result.

Table 7 PLS-Predict Summary.
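As a rough illustration of the PLSpredict logic (normally run inside PLS-SEM software), the sketch below estimates an out-of-sample Q²_predict via ten-fold cross-validation repeated ten times, using a linear regression as a stand-in for the structural model; the data are simulated placeholders.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression

# Hypothetical predictor scores X and a phishing-susceptibility indicator y
rng = np.random.default_rng(9)
X = rng.normal(size=(361, 7))
y = X @ rng.normal(size=7) * 0.3 + rng.normal(size=361)

def q2_predict(X, y, n_splits=10, n_repeats=10):
    # Q2_predict = 1 - SSE(out-of-sample predictions) / SSE(training-mean benchmark)
    sse_model = sse_mean = 0.0
    for r in range(n_repeats):
        for train, test in KFold(n_splits, shuffle=True, random_state=r).split(X):
            model = LinearRegression().fit(X[train], y[train])
            sse_model += ((y[test] - model.predict(X[test])) ** 2).sum()
            sse_mean += ((y[test] - y[train].mean()) ** 2).sum()
    return 1 - sse_model / sse_mean

print(f"Q2_predict = {q2_predict(X, y):.3f}")  # values above zero indicate predictive relevance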

Discussions

Phishers prowl the Internet, crafting fake messages that prey on people’s desires to entice them to reveal personal information (De Kimpe et al. 2018). To keep people from falling into phishing traps, it is critical to learn more about the types of Internet users who are more likely to become phishing targets or victims (De Kimpe et al. 2018). According to this study, engaging in risky cyber activities (instant messaging and vocational activities) increases the risk of instant messaging phishing victimization. This is consistent with previous research indicating that engaging in risky online activities may increase the likelihood of becoming a phishing target (Paek and Nalla, 2015; Reyns, 2015). As a result, sharing or posting personal information on instant messaging platforms is a significant predictor of becoming a phishing victim.

There is a statistically significant link between online leisure activity and phishing victimization. It is possible to conclude that downloading items such as music and movies from any website significantly affects instant messaging phishing susceptibility. This finding is consistent with the previously reported link between online leisure activities and cybercrime victimization (Choi and Lee, 2017; Kai et al. 2023). On the other hand, this observation contradicts other studies indicating a lack of association between online activities and phishing susceptibility (Akdemir and Lawless, 2020; Leukfeldt, 2014). A recent study discovered that engaging in risky leisure activities online significantly predicts offender behavior rather than victim behavior (Llinares and Moneva, 2019). As individuals devote more time to online leisure activities, their likelihood of criminal victimization decreases (Llinares and Moneva, 2019).

Users’ online security and safety practices (online security management) were found to have no significant relationship with phishing susceptibility. This finding corroborates previous research that found guardianship measures do not predict cybercrime victimization (Choi et al. 2019), particularly phishing victimization risk (Leukfeldt, 2014). Crime occurs when “a motivated offender,” “a suitable target,” and “the absence of a capable guardian” come together (Leukfeldt and Yar, 2016). According to Choi et al. (2019), “cybercrime victimization is a relatively new form of victimization that presents unique and different situations than offline crime victimization” (Aizenkot, 2021; Choi et al. 2019). Cybercrime can occur when the above criteria are present asynchronously through the online network; in other words, cybercrime can occur when only one of the criteria is met (Choi et al. 2019). The findings of this study imply that motivated offenders are ever-present and that anyone (a targeted victim) may become a victim of cybercrime without the protection of a digitally capable guardian (Choi et al. 2019).

The current finding suggests that knowledge of the phishing domain is a significant predictor of effective online security management behavior among individuals using instant messaging platforms. Counterintuitively, this knowledge did not appear to correlate with reduced susceptibility to phishing attacks: individuals’ prior knowledge, such as threat domain knowledge, does not significantly predict phishing susceptibility. This finding contravenes previous research indicating that threat domain knowledge (Butler and Butler, 2018) significantly affects phishing victimization risk. Internet users may be unable to detect phishing attacks due to a lack of awareness and phishing knowledge (Butler and Butler, 2018) and a lack of ability to gain phishing-related knowledge (Sun et al. 2016). This study demonstrates that having relevant phishing knowledge does not affect instant messaging phishing susceptibility. One plausible explanation can be attributed to the advent of more advanced and sophisticated phishing techniques. Phishing methods are constantly evolving, particularly those that use psychological techniques to exploit the trust of individuals on instant messaging platforms, making these phishing forms challenging to detect (Prasad and Rohokale, 2020). Scholars have observed that people’s judgments about phishing attacks are not always entirely rational (Metzger and Suh, 2017; Lei et al. 2022). Even if individuals possess pertinent information, it will not directly impact their final decision-making (Ge et al. 2021).

This study’s findings underscore the significance of phishing domain awareness in enabling individuals to effectively manage their online security on instant messaging platforms. This includes the ability to swiftly block or restrict unwanted contacts, report harmful content, and adjust privacy settings to mitigate the risks associated with phishing attacks. Our findings align with previous research by Kennison and Chan-Tin (2020), which demonstrated that possessing cybersecurity knowledge can influence Internet users’ behavior and enhance their grasp of fundamental cybersecurity principles. Similarly, existing studies have shown that knowledge of threat domains can empower Internet users to adopt proactive protection measures, such as controlling privacy settings (Ahamed et al. 2024).

It was discovered that message involvement (systematic processing cues) and persuasive cues (heuristic processing cues) influence phishing victimization risk. Previous research has shown that message involvement (Franz and Croitor, 2021) and persuasive cues (Vishwanath et al. 2011; Wang et al. 2012) play essential roles in predicting phishing susceptibility. This could imply that a variety of persuasive cues, such as resemblance to other official websites or emails, as well as urgency, can effectively persuade and convince people to believe the phishing message (Luo et al. 2013; Musuva et al. 2019; Vishwanath et al. 2011). As a result, the risk of being a victim of instant messaging phishing increases. Furthermore, a phishing message with a high message involvement increases the risk of instant messaging phishing victimization. An inference could be drawn: when Internet users are drawn to the contents of the messages, they are more likely to pay attention to the phishing message or information (Chen et al. 2020), making them more vulnerable to phishing traps.

Theoretical and practical implications

This study offers an integrative, holistic, and comprehensive understanding of instant messaging phishing victimization vulnerability. Our findings clarify how people deal with phishing messages while identifying potential risk factors influencing Gen-Zers’ susceptibility to phishing. Despite the importance of phishing awareness and prevention, many Internet users lack knowledge of how to process information concerning phishing threats (Alseadoon et al. 2014; Kritzinger and von Solms, 2010). This study investigates whether information processing in phishing detection influences susceptibility to instant messaging phishing victimization, thereby empowering Internet users to detect and avoid phishing attacks.

One implication, derived from our data and synthesized with the Heuristic-Systematic Model (HSM) (Wall and Warkentin, 2019), is that users frequently rely on, and focus only on, heuristics when evaluating and handling messages. Users may not carefully and methodically evaluate message contents, warranting more theoretically grounded research to systematically clarify how individuals process phishing communication (Musuva et al. 2019; Wang et al. 2012). Using the HSM, this empirical study investigates how people process phishing information or communication in ways that lead them into phishing traps. This study focuses on Gen-Zers’ information/message processing, which includes message involvement as a factor of systematic information processing (Luo et al. 2013). Furthermore, our study posits that persuasive cues are a heuristic information processing factor (Chen et al. 2020; Musuva et al. 2019).

According to researchers, when people are deeply involved with a suspicious message, they are more likely to devote more resources to the message and take the necessary actions (Chen et al. 2020). In this case, the deception is more difficult to detect because the cues are harder to identify, exposing Internet users to a higher risk of phishing victimization (Chen et al. 2020). Phishing susceptibility is increased by highly involving phishing messages with persuasive cues such as account suspension notifications, financial reward prospects, and resemblance to legitimate emails or websites. As a result, this study advocates for more education and training to help Internet users assess messages’ authenticity more effectively. Relevant agencies responsible for combating cybercrime may regularly review the cues of phishing and genuine messages and then expose these tactics through phishing awareness campaigns to improve users’ responses to suspicious messages.

In contrast to physical (offline) activities, cyber activities, regardless of usage capacity, have the potential to influence Internet users (Choi and Lee, 2017). In the United States, the Cyber-Routine Activities Theory (Cyber-RAT) has been used to predict cybercrime (cyberbullying) behavior, whether offending or victimization behavior (Choi, 2008; Choi and Lee, 2017; Choi et al. 2019). The current study validates the use of Cyber-RAT as a theoretical model for examining instant messaging phishing susceptibility. In line with empirical studies showing that exposure to motivated offenders increases the likelihood of cybercrime phishing victimization (Graham and Triplett, 2016; Leukfeldt, 2014; Ngo et al. 2020), the findings of this study imply that users’ daily online social activities (participating in leisure activities, vocational activities, and instant messaging activities) predict instant messaging phishing susceptibility. This work demonstrates that risk-taking behavior, such as clicking on phishing links or infected files (vocational activities), is a root cause of successful phishing attacks (Abdelhamid, 2020; Williams and Polage, 2018), contributing to higher phishing victimization risks (Abroshan et al. 2021).

Furthermore, this study discovered that Gen Z Internet users will indulge in illicit downloads of movies, music, and other material if they are unaware of the negative consequences of the unlawfully downloaded product, making them more susceptible to instant messaging phishing. This is because users’ personal information may be captured when they download free movies, games, music, or other material from any website. As a consequence, their probability of becoming a phishing victim increases because phishers have obtained their personal information. These observations inform policymakers and regulatory bodies in Malaysia, such as the Malaysian Communications and Multimedia Commission (MCMC) and CyberSecurity Malaysia, which are in charge of combating cybercrime (MCMC, 2024; Mohd and Mohd, 2021). Agencies may use the findings of this study to develop anti-phishing programs and awareness campaigns to help Internet users avoid being victimized by phishers.

This study focuses on Malaysian Gen-Zers because they are more vulnerable to cybercrime attacks (Mohd et al. 2016; Lalitha et al. 2017). According to the official report of the Federation of Malaysian Consumers Associations (FOMCA), the young Malaysian generation is becoming more involved in the digital economy by engaging in online activities and digital financial services (Raj, 2021). Consumers exposed to these online activities face numerous risks, including spamming and phishing (Raj, 2021). As a result, it is critical to investigate youth behaviors within this research domain, including their daily online social activities (i.e., online posting, online vocational activities, and online leisure activities). Gen-Zers are Malaysia’s future workforce (Lalitha et al. 2017); therefore, they must be equipped with cybersecurity knowledge (Verkijika, 2019) to protect themselves from financial losses due to phishing. Research on the risk of instant messaging phishing victimization can help organizations guide employees to avoid phishing traps that cause financial losses.

Finally, the current study found that knowledge of the threat domain, drawn from the Theory of Deception (TOD), did not statistically predict instant messaging phishing susceptibility. Despite this non-significant finding, phishing-related knowledge and the ability to acquire anti-phishing knowledge remain essential factors in predicting phishing susceptibility, given the extensive use of TOD in phishing victimization risk research (Chen et al. 2020; Musuva et al. 2019; Wright et al. 2009; Vishwanath et al. 2011). Furthermore, knowledge is vital in avoiding phishing victimization (Butler and Butler, 2018; Sun et al. 2016; Vishwanath et al. 2011). TOD emphasizes that people will use their prior knowledge to interpret a suspicious message and decide whether to respond (Chen et al. 2020). Prior experiences (i.e., knowledge) significantly impact how people react to phishing attacks (Chen et al. 2020; House and Raja, 2019). On the other hand, evidence suggests that less experienced Internet users cannot interpret and deal with unfamiliar or novel phishing attacks (Ebot, 2018). Without denying that TOD is a grounded theory capable of explaining phishing victimization risk, the findings of this study add to the body of knowledge in this field by indicating that this study’s respondents may have had less prior phishing victimization experience. As a result, the antecedent drawn from TOD (knowledge of the phishing domain) did not significantly predict Gen-Zers’ susceptibility to phishing.

This study discovered that respondents who have knowledge of cybersecurity (i.e., phishing domain knowledge) engage in more sophisticated protection activities (i.e., a digitally capable guardianship attitude, or online security management), including profile and user controls that allow users to restrict access to their online profile to specific individuals or parties. Although knowledge of the phishing domain has no direct effect on phishing susceptibility, we suggest that respondents’ cyber knowledge may explain this gap by allowing Gen-Zers to strengthen their protection mechanisms. This study suggests that tertiary education could empower Gen-Zers by encouraging critical thinking: younger social media users should be encouraged to use critical thinking skills to assess the reliability of content and identify potential phishing attempts. Furthermore, universities might promote a security culture by encouraging students to be vigilant about online security and to share best practices with their social networks. Government organizations may conduct education and awareness efforts to help instant messaging users understand and implement security settings on their instant messaging accounts, such as two-factor authentication and privacy controls.

Limitations and suggestions for future research

Some limitations and avenues for further study are discussed in this section. This study focuses on individuals susceptible to a specific form of phishing victimization, namely instant messaging phishing victimization. Its findings may therefore not generalize to other domains, such as phishing conducted on other online platforms. Future studies could employ longitudinal designs or qualitative methods, such as in-depth interviews, to add further insights into how individuals react to phishing content and how to minimize phishing victimization risk. Additionally, further work can include other potential predictors of phishing susceptibility, including behavioral comprehensiveness (Hong and Furnell, 2021). For instance, studies can assess whether precautionary measures taken by Internet users, such as refraining from downloading unknown files, can avert phishing victimization risks. This study is scoped to Gen-Zers; hence, comparative studies may extend this model to other generational cohorts to examine Internet users’ risk of phishing victimization.

Future research could extend this framework by examining whether the characteristics of phishing messages moderate the susceptibility of different target groups to phishing attacks. For example, existing empirical studies (Luo et al. 2013) have identified the psychological mechanism of need for cognition as a potential moderator in the relationship between phishing message cues and susceptibility. Additionally, recent research (Franz and Croitor, 2021) has demonstrated that social networking site use can influence users’ processing of heuristic and systematic cues, exacerbating their susceptibility to decision-making errors and making them more vulnerable to phishing attacks.