Introduction

In wireless systems, mobile users move about and connect to different peers, either directly or via intermediate stations. As different peers are encountered in this time-varying mobile world, it is necessary to determine very rapidly which peers can be trusted and which should be avoided. The concept of trust is a useful tool in such contexts, and a rapid method for determining and updating mutual trust metrics is needed. The Random Neural Network Trust Model proposed in this paper aims to offer a computationally efficient tool that meets this need.

Trust is, however, a sophisticated notion with emotional1 and cognitive2 origins that is considered in different areas3 and frequently used in business4. To evaluate trust, the entities concerned must be connected to each other, so that trustworthiness and dependability can be confirmed by multiple entities, each of which provides its own independent evaluation. As proposed in the American Express trust model5, trustworthy entities should provide consistently satisfactory services, demonstrate the competence needed to deliver results with the expected Key Performance Indicators (KPI), and show care or concern for the opinions of others through collective decision-making.

Trust in the Internet also relies on cryptography that provides a trustworthy tool for identification of entities, privacy with regard to the communications between entities, and secure payments6,7,8. In social networks that are run over the Internet, trust is also crucial for the validation of political or commercial opinions, and the development of new computational and dynamic models of trust can be useful9.

A recent discussion paper10 recalls the role of trust as an element of human and institutional identity, and points to an analysis11 that stresses the importance of Temporal Embeddedness, implying that a trustworthy party can benefit from another entity’s trust into the future, and hence value and nurture the trust being placed in itself, while Social Embeddedness allows the trusted entity to benefit from the propagation of its trustworthiness through social networks. On the other hand, Institutional Embeddedness of trust refers to stable social institutions, such as the legal system and the courts of law, regulatory bodies, professional organizations and universities, that can certify trustworthiness within specific contexts through the award of certifications and degrees related to knowledge and professional capabilities, and propose codes of behaviour that can reinforce the role of trust within human society.

Early work has discussed how interactions in social networks can enhance trust relations12, and an analysis of the links between social networks and trust was examined in the context of medical practice13. On-line feedback that replaces human interaction for the establishment and management of reputation was considered in ref. 14, while the manner in which specialized “recommender agents” can be constructed and used is examined in ref. 15. More recent work has examined how trust representations can enhance collective intelligence and successful search in social systems16. In ref. 17, the analytical techniques that can help evaluate reputation in peer-to-peer systems are discussed, while other work has studied the effects of personalities and human bias on the dynamics of trust18,19.

The recent computer science literature on trust is also abundant20,21, and the concept of trust has also been frequently considered for the Internet of Things (IoT)22. Trust has been long studied in the context of social networks23, because trust influences the manner in which information spreads and is repeated among a set of connected entities such as P2P networks24,25.

Trust among n entities is typically modelled as a directed “trust graph” (TG) with n entities represented by the nodes of the graph, where arcs represent the trust of some entity regarding another entity, and numerical weights or other labels on the arcs represent the strength or degree of trust, or the qualifications or restrictions of the trust relationship among the entities. Time-varying and probabilistic trust relationships have also been represented with TGs26.

Much work has been carried out regarding the use of large sets of TGs representing empirical data regarding the trust values among entities, to learn the “true” trust values that emerge from the data with a machine learning (ML) model such as an artificial neural network27. Such ML models can then be used to test whether certain TGs conform with the data learned by the ML model, and also to detect implausible or non-conforming TGs, or to discover arcs in a given TG that do not agree with the datasets that have been learned by the ML model28,29.

Since trust data is usually represented as a set of TGs, much of the research on building trained ML models that can store and classify trust data is based on training with a set of TGs that represent trust relations among a given set of entities, and Graph Neural Networks (GNN), which use graphs as their input, have been used as useful tools for learning the characteristics of TGs30. Practical applications of these approaches to recruiting employees31 and to the recommendation of items for usage and purchase32 have been developed, and in ref. 33 a dynamic model of trust in the presence of malevolent entities has been investigated.

In most of the literature, a TG represents the “a priori” known or measured trust relationships between entities, and does not include the effect of changes in the trustworthiness of the entities on the opinions they hold about the trustworthiness of other entities. Thus, several challenging issues are worth considering when we build trust models: the dynamic modulation of the value of an entity’s opinions as a function of its own level of trust; the need to avoid giving too much weight to the opinions of entities that express themselves more frequently than others and may dominate the opinions of others; and the complex direct and indirect feedback-based impact of the trust level of individual entities on the trustworthiness of other entities.

The purpose of this paper is to develop an operational model of trust for future networks composed of many connected devices, including servers that aim to produce secure and reliable services to all the other entities, in a manner that limits the influence of untrustworthy entities within the model, allowing all entities to express their trust or distrust of others through their votes, which are distributed in a rationed manner to each entity. To this effect, we propose a new time-dependent and networked model of trust, the RNN Trust Model (RNNTM), which uses the mathematical structure of the Random Neural Network Model34,35,36 to incorporate the dynamics of trust formation through a sequence of “votes” from each entity to all other entities, that mimic the successive expressions of trust and distrust that any entity may formulate about other entities.

In our model, which was first described in ref. 37, trust in a given entity is based on the opinion expressed by other entities as well as on the frequency with which each entity expresses its trust or distrust of others. Each entity (a device or server) is fed with a flow of “permits” that allow it to dispense its opinion regarding others, and it can indicate its trust about another entity if it observes that this other entity is accomplishing its normal work, or when it receives a normal message from it, while it may express distrust about the other entity if the other entity becomes non-responsive or if it experiences a cyber-attack. Entities that express themselves more frequently than others can also lose their trust value, so that the model discourages excessive “gossip” or attempts to influence opinions, and entities with low trust value will have a smaller chance of influencing the trust level of others.

Thus, in the ‘Methods’ section, we first introduce the RNNTM Dynamics and its analytical solution. We also discuss its Initialization and Implementation, and the effect of alternative approaches for implementing the model. Cyberattacks are then introduced as the cause of modifications in the evaluation of trust among a collection of entities, causing changes in the trust interaction rates between entities.

To illustrate these results with simulations using realistic data, the ‘Results’ section first describes the CIC-IDS2017 real attack dataset38 that is used to evaluate the RNNTM. Then, we consider the effect of different real datasets that drive the RNNTM with a succession of DoS, DDoS and Botnet cyberattacks. In particular, DoS and DDoS attacks against a server within an Internet of Things (IoT) system can result in significant drops in the trust values of the different entities, which may later recover progressively after the attacks end. The experimental effect of Botnets is also examined.

Finally, the ‘Discussion’ section summarizes the main results of the paper, and presents suggestions for further work.

Methods

The RNNTM Dynamics is based on a computational model for the intrinsic and mutual trust associated with a set of n entities E = {e1, . . . , en}. Similar to the Random Neural Network formulation34, the trust for entity ei at some real-valued time t ≥ 0 is expressed as a non-negative integer Ki(t) ≥ 0: the entity cannot be trusted at all at time t if Ki(t) = 0, while if Ki(t) > 0 it is worthy of some trust at time t, described by the value of Ki(t). Thus, the trust in all of the entities at time t is given by the n-vector K(t) = (K1(t), . . . , Kn(t)), and as we shall see below, the trust of each entity depends on the trust value of the other entities. The RNNTM concerns all the n entities, can be installed as a software API in each of the n entities, and each of the entities can use the same rules for updating it based on external events. These external events include periodic broadcast messages sent, for instance, every ten seconds, from each entity to all of the other entities. The repeated lack of an acknowledgement message in response to a sent message, or a lack of messages coming from a specific entity due to message losses or errors, can then be viewed as an indication of a malfunction or cyberattack concerning the non-responsive entity, which results in a reduction of the trust that is associated with it.

Interactions between entities occur asynchronously at different instants of time, and lead to changes in the value of Ki(t) for each ei, which also limits the “right to express” itself of each entity so that more trustworthy entities may express themselves more frequently. Furthermore, events such as cyberattacks on certain entities will modify the parameters that express the trust or distrust between entities, which in turn affect the values of each Ki(t). The original and novel features of the RNNTM, as compared to previous trust models, are:

  • RNNTM treats the trust level of each entity as a “right to vote”, which is replenished by rights to vote that arrive externally to entity ei at a positive constant rate Λi > 0. These rights to vote may also be depleted by a constant (non-negative) flow of depletion messages at rate λi ≥ 0. Thus, the parameters Λi, λi can be viewed as the rules imposed by an external regulator on each entity ei in the system.

  • When Λi = Λ, λi = λ for all 1 ≤ i ≤ n, this means that all entities are placed on an equal footing regarding the number of voting rights they receive per unit time and the rate at which these voting rights are taken away from them.

  • The trust level and right to vote are reduced each time the entity expresses trust or distrust towards another entity, or when some other entity expresses distrust towards the given entity. Thus, the “right to vote regarding the trust of others” is modulated by the current trust level of the voter, and entity ei cannot express its trust or distrust of the other entities, and hence affect their trust values at some time t, if ei’s trust value Ki(t) = 0. More generally, the capability of any ei to influence the trust level of another entity ej depends on the probability that ei itself has a positive trust level, Prob[Ki(t) > 0]. Thus, non-trustworthy entities are not allowed to affect the trust level of other entities.

  • On the other hand, each time an entity receives an expression of trust from another entity, its trust level increases by one.

Thus, in the RNNTM the trust level of an entity is determined by the external replenishment or depletion of its rights to vote, the frequency with which it expresses itself about other entities, and the trust or distrust that is expressed about it by other entities. The RNNTM models the manner in which the mutual trust among a set of entities evolves over time in response to the opinions expressed by the entities regarding each other under the effect of external events, such as lost messages, system failures, or Cyberattacks.

The trust system we describe is affected by external prior knowledge represented by the real-valued external trust parameter Λi ≥ 0 and the external distrust parameter λi ≥ 0, for each entity ei, i = 1, . . . , n, and by the parameters that govern the interactions between entities that are defined below. In our system, entity ei can express an opinion at time t about some other entity as long as its trust value is positive, i.e. Ki(t) > 0; when it does so, its own trust level drops by one, i.e. Ki(t+) = Ki(t) − 1. Thus, each entity has a “number of voting rights” Ki(t) regarding other entities, which is identical to its own trust value: the higher its own value Ki(t), the more trustworthy it is and the more votes ei has to express its trust or distrust of others. Note that Ki(t) = 0 means that at time t, ei cannot be trusted (it is untrustworthy) and that it is also not allowed to express an opinion about other entities’ trustworthiness.

Thus, the trust level Ki(t) of entity ei is also the number of “votes” or expressions of trust or distrust that it is allowed to express about other entities at a given time; this resembles a “plutocracy of trust”, where trustworthy individuals are allowed to more frequently express their trust or distrust of others. In this model, a probabilistic n × n connection matrix \({P}^{+}=[{p}_{ij}^{+}]\) also represents for each entity ei the probability that it may express trust about another entity ej, and similarly the probabilistic n × n connection matrix \({P}^{-}=[{p}_{ij}^{-}]\) represents for each entity ei the probability that it may express distrust about another entity ej. These matrices are constrained as follows for each i = 1, . . . n:

$${p}_{ij}^{+}\ge 0,\,{p}_{ij}^{-}\ge 0,\,{p}_{ii}^{+}={p}_{ii}^{-}=0,\,\,\mathop{\sum }\limits_{j=1}^{n}[{p}_{ij}^{+}+{p}_{ij}^{-}]=1,$$
(1)

representing the opinion of each entity ei regarding its trust or distrust for other entities. Finally, each entity ei has a specific rate ri or speed at which it may express its trust or mistrust about another entity. We use these parameters to define the “weights” with which each entity expresses its trust or distrust concerning other entities:

$${w}_{ij}^{+}\equiv {r}_{i}{p}_{ij}^{+},\,{w}_{ij}^{-}\equiv {r}_{i}{p}_{ij}^{-},\,{\text{and}}\,{r}_{i}=\mathop{\sum }\limits_{j=1}^{n}[{w}_{ij}^{+}+{w}_{ij}^{-}].$$
(2)
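To make constraint (1) and the weight definition (2) concrete, the following sketch builds uniform trust and distrust probability matrices and derives the corresponding weights; the uniform choice of probabilities and the function name are our own illustration, not part of the model:

```python
import numpy as np

def make_uniform_trust_weights(n, r=None):
    """Build P+ and P- satisfying constraint (1), with equal trust and
    distrust probabilities for every pair i != j, then derive the
    weights w+_ij = r_i * p+_ij and w-_ij = r_i * p-_ij of (2)."""
    if r is None:
        r = np.full(n, 2.0 * (n - 1))      # r_i as later chosen in (16), with w = 1
    P_plus = np.full((n, n), 1.0 / (2 * (n - 1)))
    np.fill_diagonal(P_plus, 0.0)          # p_ii^+ = 0
    P_minus = P_plus.copy()                # p_ii^- = 0, same off-diagonal values
    # each row of [P+ | P-] sums to 1, as required by (1)
    assert np.allclose((P_plus + P_minus).sum(axis=1), 1.0)
    W_plus = r[:, None] * P_plus
    W_minus = r[:, None] * P_minus
    return W_plus, W_minus

W_plus, W_minus = make_uniform_trust_weights(5)
# with r_i = 2(n-1) and uniform probabilities, every off-diagonal weight is 1
```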

In general, the weights \({w}_{ij}^{+},\,{w}_{ij}^{-}\) may change, and the parameters Λi, λi may be updated or changed during the long periods of usage of a given model. For instance, a cyberattack on an entity may result in a loss of trust by the other entities towards the entity that has been the victim of a successful cyberattack, because the success of the attack implies that the entity was not well protected to detect or mitigate a cyberattack, and after an attack the entity itself may be compromised.

In the following, we will use the notation [X]+, which is commonly defined as [X]+ = X, when X ≥ 0 and [X]+ = 0, when X < 0. The n entities interact with each other using the parameters that we have defined, in the following manner at any given time t:

$${K}_{i}(t+\Delta t)={K}_{i}(t)+1,\,with\,probability\,{\Lambda }_{i}\Delta t+o(\Delta t),$$
(3)
$${K}_{i}(t+\Delta t)={[{K}_{i}(t)-1]}^{+},\,with\,probability\,{\lambda }_{i}\Delta t+o(\Delta t),$$
(4)
$$If\,{K}_{j}(t) > 0,\,then\,{K}_{i}(t+\Delta t)={K}_{i}(t)+1,\,and\,{K}_{j}(t+\Delta t)={K}_{j}(t)-1,\,with\,probability\,{r}_{j}{p}_{ji}^{+}\Delta t+o(\Delta t),$$
(5)
$$\begin{array}{l}Finally,\,if\,{K}_{j}(t) > 0,\,then\,{K}_{i}(t+\Delta t)={[{K}_{i}(t)-1]}^{+},\,and\\ {K}_{j}(t+\Delta t)={K}_{j}(t)-1,\,with\,probability\,{r}_{j}{p}_{ji}^{-}\Delta t+o(\Delta t).\end{array}$$
(6)

Thus (3) indicates that the external opinion of trust Λi regarding ei increases its trust level by one, while the external opinion of distrust λi reduces it by one, as indicated in (4). The expression of trust by some entity ej for ei will increase its trust level by one as shown in (5), while the expression of distrust will reduce its trust level by one, as in (6).
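The transition rules (3)–(6) can also be simulated directly. The sketch below is our own illustration, not the authors’ code: it uses a Gillespie-style event loop and, following the text’s statement that every expression of trust or distrust consumes one of the voter’s own voting rights, decrements the voter on each vote:

```python
import numpy as np

def simulate_rnntm(Lam, lam, W_plus, W_minus, t_end, seed=0):
    """Event-driven simulation of transitions (3)-(6); returns the
    fraction of time each K_i(t) > 0, an estimate of q_i in (7)."""
    rng = np.random.default_rng(seed)
    n = len(Lam)
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)   # r_i from (2)
    K = np.zeros(n, dtype=int)
    t, pos_time = 0.0, np.zeros(n)
    while t < t_end:
        active = K > 0
        rates = Lam + lam + np.where(active, r, 0.0)  # per-entity event rates
        total = rates.sum()
        dt = rng.exponential(1.0 / total)
        pos_time[active] += dt
        t += dt
        i = int(rng.choice(n, p=rates / total))       # which entity's event
        u = rng.random() * rates[i]
        if u < Lam[i]:                                # (3): external trust
            K[i] += 1
        elif u < Lam[i] + lam[i]:                     # (4): external distrust
            K[i] = max(K[i] - 1, 0)
        else:                                         # e_i spends one vote
            K[i] -= 1
            w = np.concatenate([W_plus[i], W_minus[i]])
            j = int(rng.choice(2 * n, p=w / w.sum()))
            if j < n:
                K[j] += 1                             # (5): trust vote for e_j
            else:
                K[j - n] = max(K[j - n] - 1, 0)       # (6): distrust vote
    return pos_time / t

# quick sanity check with the symmetric initialization introduced in (15)-(16):
n = 4
W = np.full((n, n), 1.0); np.fill_diagonal(W, 0.0)
q_est = simulate_rnntm(np.full(n, 0.75 * (n - 1)), np.zeros(n), W, W, t_end=300.0)
# the analytical solution of (8) for these parameters is q_i = 0.5
```

For the symmetric parameters above, the estimated fractions of time with Ki(t) > 0 cluster around the analytical value qi = 0.5.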

Using this definition of the RNNTM, and the results from refs. 34,39, the following key result allows us to compute the trust value in steady-state for a set of n interacting entities.

Theorem Let:

$${q}_{i}=\mathop{{\rm{lim}}}\limits_{t\to \infty }Prob[{K}_{i}(t) > 0],\,1\le i\le n.$$
(7)

Then if the solution to the following non-linear system of equations exists:

$${q}_{i}\equiv \frac{{\Lambda }_{i}+{\sum }_{j=1}^{n}{q}_{j}{w}_{ji}^{+}}{{r}_{i}+{\lambda }_{i}+{\sum }_{j=1}^{n}{q}_{j}{w}_{ji}^{-}} < 1,$$
(8)

such that 0 ≤ qi < 1, for 1 ≤ i ≤ n, then:

$$\begin{array}{l}\mathop{{\rm{lim}}}\limits_{t\to\infty}Prob[{K}_{1}(t)={k}_{1},...,{K}_{n}(t)={k}_{n}]\\={\Pi}_{i=1}^{n}(1-{q}_{i}){q}_{i}^{{k}_{i}},\end{array}$$
(9)
$$and\,\mathop{{\rm{lim}}}\limits_{t\to \infty }E[{K}_{i}(t)]=\frac{{q}_{i}}{1-{q}_{i}}\,.$$
(10)

Note: For a particular model or application, one can set thresholds for trustworthiness such as:

  • ei is untrustworthy if qi≤ θ1,

  • ei is undetermined if θ1 < qi≤ θ2,

  • ei is trustworthy if qi > θ2, where:

    $$0\le {\theta }_{1} < {\theta }_{2} < 1.$$
    (11)
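A minimal sketch of this three-way classification (the function name is ours; θ1 = 0.5 and θ2 = 0.7 are the threshold values used later in the ‘Results’ section):

```python
def classify(q_i, theta1=0.5, theta2=0.7):
    """Map a trust probability q_i to a label using the thresholds of (11),
    with 0 <= theta1 < theta2 < 1."""
    if q_i <= theta1:
        return "untrustworthy"
    if q_i <= theta2:
        return "undetermined"
    return "trustworthy"
```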

Comment on the significance of (8): The Theorem states that, without using a lengthy discrete event simulation that would have to be programmed and run with different sets of parameters, we can predict the trust values of each of the n entities in the system by solving the system of n equations (8), using an off-the-shelf tool such as Matlab. This also gives us direct insight, using the thresholds (11), into which of the n entities are trustworthy, which of them are the most trustworthy, and which of them are untrustworthy. Once the qi are calculated, the expression (10) provides us with the average trustworthiness of each of the entities, while (9) gives us an n-dimensional profile of the trustworthiness of all of the interconnected entities.
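As one concrete alternative to an off-the-shelf solver, the system (8) can be solved by simple fixed-point iteration; the sketch below is our own illustration, and the convergence of this particular scheme is an assumption rather than a statement of the Theorem:

```python
import numpy as np

def solve_trust(Lam, lam, W_plus, W_minus, tol=1e-10, max_iter=10_000):
    """Solve the non-linear system (8) for q = (q_1, ..., q_n) by
    fixed-point iteration, starting from the 'perfect ignorance' value 0.5."""
    n = len(Lam)
    r = W_plus.sum(axis=1) + W_minus.sum(axis=1)   # r_i from (2)
    q = np.full(n, 0.5)
    for _ in range(max_iter):
        # q @ W gives, for each i, the sum over j of q_j * w_ji
        q_new = (Lam + q @ W_plus) / (r + lam + q @ W_minus)
        if np.max(np.abs(q_new - q)) < tol:
            break
        q = q_new
    return q

n, w = 10, 1.0
W = np.full((n, n), w); np.fill_diagonal(W, 0.0)
Lam = np.full(n, 0.75 * (n - 1) * w)    # initialization (15) with lambda = 0
lam = np.zeros(n)
q = solve_trust(Lam, lam, W, W)
# with the initialization of (15)-(16), every q_i = 0.5
mean_K = q / (1 - q)                    # steady-state E[K_i] from (10)
```

With the symmetric parameters of (15) and (16), q = 0.5 is already a fixed point of the iteration, and (10) then gives E[Ki] = 1 for every entity.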

We also note that the examples and simulations developed in the following sections directly use the Theorem given above, and in particular solve the system (8) to compute the trust values (7) of the n interacting entities.

The Initialization of the model corresponds to an initial situation where we have no evidence regarding whether any of the entities should be trusted or not; in this case, we initialize the parameter values as follows:

  • We set the values \({w}_{ij}^{+}={w}_{ij}^{-}=w,\,{\lambda }_{i}=\lambda ,\,{\Lambda }_{i}=\Lambda\) for all distinct pairs of entities ei, ej, i ≠ j.

  • To show “perfect ignorance” we also select qi = 0.5, i = 1, . . . , n representing the probability of whether any entity ei is trustworthy or not.

Thus, using the definition (7) of qi, we can seek the set of parameter values that we should take by substituting into (8) and setting:

$$0.5=\frac{\Lambda +0.5(n-1)w}{2(n-1)w+\lambda +0.5(n-1)w}\,,$$
(12)

which yields:

$$2\Lambda +(n-1)w=2(n-1)w+\lambda +0.5(n-1)w,$$
(13)

so that:

$$\Lambda =0.75(n-1)w+0.5\lambda .$$
(14)

To simplify the calculations, we set λ = 0, while w > 0 can be fixed at any convenient positive value, so that we obtain:

$$\Lambda =0.75(n-1)w,\,with\,\lambda =0,\,and\,w > 0.$$
(15)

In the sequel, we will assume that w in (15) is chosen such that:

$$\begin{array}{r}{w}_{ij}^{+}+{w}_{ij}^{-}=2w,\,\,\forall \,i,j=1,\,...n,\,i\ne j,\,w=1\,and\\ {r}_{i}=2(n-1),\,{\Lambda }_{i}=0.75(n-1),\,i=1,\,...\,,n.\end{array}$$
(16)

With this initialization, we also need to understand the maximum values that can be allowed for any \({w}_{ij}^{+},\,{w}_{ij}^{-}\), i ≠ j. We know from (8) that qi is an increasing function of each \({w}_{ji}^{+}\) when \({w}_{ji}^{+}+{w}_{ji}^{-}=2\) as fixed in (16), and that qi, being a probability, cannot exceed 1. Therefore, we compute the value M, 0 ≤ M ≤ 2, that each \({w}_{ji}^{+}\) cannot exceed, by setting all the qi = 1 with the parameter values that have been chosen in (15) and (16). We then use (8) to obtain:

$$\begin{array}{l}2(n-1)+(n-1)(2-M)=0.75(n-1)+M(n-1),\\ and\,M=1.625.\end{array}$$
(17)

Since we must maintain qi < 1 for all i, we set the maximum and minimum values, for all i, j, i ≠ j, to:

$$0\,\le \,{w}_{ij}^{+}\,\le \,1.55,\,and\,0.45\,\le \,{w}_{ij}^{-}\,\le \,2.$$
(18)
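The value M = 1.625 of (17) can be verified numerically: substituting qj = 1, \({w}_{ji}^{+}=M\), \({w}_{ji}^{-}=2-M\), λi = 0 and the parameters of (15) and (16) into (8) balances its numerator and denominator exactly, independently of n (a sanity check of (17), not part of the model):

```python
n = 20                                  # any n > 1 gives the same balance
M = 1.625
r_i = 2.0 * (n - 1)                     # from (16)
Lam_i = 0.75 * (n - 1)                  # from (15) with w = 1
numerator = Lam_i + (n - 1) * M         # all q_j = 1 and w_ji^+ = M
denominator = r_i + (n - 1) * (2 - M)   # lambda_i = 0 and w_ji^- = 2 - M
assert abs(numerator - denominator) < 1e-9   # q_i = 1 is reached exactly at M
```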

The Implementation of the RNNTM represents the trust among a set of entities that need to cooperate in a network, such as a set of interconnected IoT devices monitoring a common physical system (a factory floor or a building). It can be implemented either in a centralized manner, on a dedicated server that maintains and updates the trust level of each of the IoT devices, or in a distributed manner, where each entity maintains a RNNTM locally based on the data it has sent and received. Clearly, a centralized implementation that is distinct from the actual entities whose trust is being monitored avoids the risk that each device enhances its own trust level or that of its “friends”, which would arise if each device were allowed to update its own trust data. The server in charge of storing and updating the RNNTM could also be protected with additional security software, including enhanced encryption and cyberattack detection, to offer a greater degree of security and resilience for the trust assessment of the different entities.

As an example of such a system, each IoT device may express its trust in some other device when the information it has received from the other device turns out to be accurate. Similarly, it may express distrust of another IoT device if it receives data which later turns out to be inaccurate or incomplete. In the centralized server solution for the RNNTM, in both cases the IoT device sends a message to the server to inform it of its trust or distrust regarding some other device, and the RNNTM located at the server is updated accordingly. The RNNTM located at the server also rations the messages that will be accepted from a given IoT device ei by increasing the trust value Ki(t) at regular intervals as described in Eqs. (3)–(6). The RNNTM at the server also forwards updates regarding the trust level of all entities to the IoT devices in the network, and the lack of trust in a specific IoT device ei may be used to request additional verification of the data it forwards, or even to discard the data that is received from ei. In the context of cybersecurity, a lack of trust in an entity may result in the data or messages that it forwards being systematically discarded, since the messages it sends may contain malware.

Thus, we recommend that the RNNTM be implemented in a centralized manner, e.g., on a specific server of the sub-network that encompasses the entities in the RNNTM. The entities can then express their trust or distrust by sending messages to that server, which updates the single copy, updates the trust level of each entity and broadcasts the information to all the connected entities. There may also be two copies of the RNNTM on two distinct servers, with the secondary copy being updated after the primary copy is updated. If one of the two servers fails, then the second one takes the primary role, and the other one is restarted from the valid copy.
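In such a centralized implementation, the server-side handling of a single trust or distrust message might look as follows. This is a hypothetical sketch: the function name and the representation of trust levels as a simple list are our assumptions, and the rate-based dynamics (3)–(6) are reduced here to their per-event effect:

```python
def handle_vote(K, voter, target, trust):
    """Apply one trust/distrust vote at the server. Votes are rationed by the
    voter's own trust level K[voter], as described in the text: an entity with
    K = 0 cannot vote, and every vote cast costs one voting right."""
    if K[voter] == 0:
        return False                       # untrustworthy entities cannot vote
    K[voter] -= 1                          # the vote consumes one right to vote
    if trust:
        K[target] += 1                     # expression of trust, as in (5)
    else:
        K[target] = max(K[target] - 1, 0)  # expression of distrust, as in (6)
    return True
```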

Cyberattacks are critical events that allow us to illustrate the RNNTM in an environment where denial of service (DoS) attacks occur against some entity ei. We assume that a DoS attack against an entity ei occurs at random and unexpectedly, on average every \(\frac{1}{{\alpha }_{i}}\) time units, where αi > 0 can be interpreted as the attack rate, i.e. the average number of attacks per unit time. Equivalently, we can consider that in a time interval [t, t + Δt[, the probability of a DoS attack against ei is αiΔt + o(Δt). Note that the time scale of the weight parameters, i.e. \(\frac{1}{{w}_{ij}^{+}},\,\frac{1}{{w}_{ij}^{-}}\) and hence of the parameters \(\frac{1}{{r}_{i}}\), as well as \(\frac{1}{{\Lambda }_{i}},\,\frac{1}{{\lambda }_{i}}\), would be in the range of seconds or tens of seconds, since the RNNTM is updated frequently (say every 10 s) whenever the communication updates are sent and received by the different entities ei.

On the other hand, the time that elapses between cyberattacks may be hours, days or even weeks (e.g. once every 2 or 3 weeks). After an attack, the entity ei that came under attack will have to recover, and this will take a time of average length Ti which may be as long as an hour or more, so that in practice we have \(\frac{1}{{\alpha }_{i}} > {T}_{i}\) and \(\frac{1}{{\alpha }_{i}} > > \frac{1}{{r}_{i}},\,\frac{1}{{\Lambda }_{i}},\,\frac{1}{{\lambda }_{i}}\). Thus, the RNNTM will typically have reached its steady-state probability distribution between two successive cyberattacks.

When a DoS attack occurs against any entity ei, the attacked entity becomes unavailable and cannot communicate with the other entities. Also, until it recovers from the cyberattack, it will not be able to modify its outgoing weights \({w}_{ij}^{+},\,{w}_{ij}^{-},\,j\ne i\). On the other hand, each other entity ej, j ≠ i, reacts to the lack of communication from ei by modifying its connection weights towards ei as follows:

$${w}_{ji}^{+}\leftarrow {w}_{ji}^{+}-{\eta }_{ji},\,{w}_{ji}^{-}\leftarrow {w}_{ji}^{-}+{\eta }_{ji},$$
(19)

where \(0 < {\eta }_{ji}\le {w}_{ji}^{+}\). As a result, assuming λi = 0 for all i as suggested in the initialization, after an attack the trust probability qi of ei obtained from (8) is updated to the value \({q}_{i}^{u}\) of ei as follows:

$$\begin{array}{rcl}{q}_{i}^{u} & = & \frac{{\Lambda }_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}({w}_{ji}^{+}-{\eta }_{ji})}{{r}_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}({w}_{ji}^{-}+{\eta }_{ji})},\\ & = & \frac{{\Lambda }_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}{w}_{ji}^{+}}{{r}_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}{w}_{ji}^{-}}\times \frac{1-\frac{{\sum }_{j=1,j\ne i}^{n}{q}_{j}{\eta }_{ji}}{{\Lambda }_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}{w}_{ji}^{+}}}{1+\frac{{\sum }_{j=1,j\ne i}^{n}{q}_{j}{\eta }_{ji}}{{r}_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}{w}_{ji}^{-}}},\\ & = & {q}_{i}\times \frac{1-\frac{{\sum }_{j=1,j\ne i}^{n}{q}_{j}{\eta }_{ji}}{{\Lambda }_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}{w}_{ji}^{+}}}{1+\frac{{\sum }_{j=1,j\ne i}^{n}{q}_{j}{\eta }_{ji}}{{r}_{i}+{\sum }_{j=1,j\ne i}^{n}{q}_{j}{w}_{ji}^{-}}} < {q}_{i}.\end{array}$$
(20)

We will set \({\eta }_{ji}={w}_{ji}^{+}\), which is its maximum value. From (16) we have \({w}_{ij}^{+}+{w}_{ij}^{-}=2\) for all i ≠ j. When we also use (15), we obtain the updated Trust Probability for the attacked entity ei as:

$$\begin{array}{rcl}{q}_{i}^{u} & = & \frac{0.75(n-1)}{2(n-1)+2{\sum }_{j=1,j\ne i}^{n}{q}_{j}},\\ & = & \frac{0.375(n-1)}{(n-1)+{\sum }_{j=1,j\ne i}^{n}{q}_{j}}.\end{array}$$
(21)
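Expression (21) is easy to evaluate; the sketch below (our own illustration) computes the post-attack trust probability when, for example, all other entities currently have qj = 0.5:

```python
import numpy as np

def post_attack_q(q, i):
    """Updated trust probability q_i^u of (21) after a DoS attack on e_i,
    with eta_ji = w_ji^+ and the initialization of (15) and (16)."""
    n = len(q)
    others_sum = np.delete(q, i).sum()   # sum over j != i of q_j
    return 0.375 * (n - 1) / ((n - 1) + others_sum)

q = np.full(10, 0.5)
qu = post_attack_q(q, 0)
# with all q_j = 0.5 and n = 10: 0.375*9 / (9 + 4.5) = 0.25
```

In this example the attacked entity’s trust probability drops from 0.5 to 0.25.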

Recovery from a Cyberattack may occur after a time of average length Ti for entity ei, which then sends “all clear” messages to all the other entities. These other entities will then change their weights in successive steps s = 1, 2, 3, . . . , following each of the messages that arrive from ei to ej. These weight changes take the following form:

$${w}_{ji}^{[+,s]}={w}_{ji}^{[+,(s-1)]}+{[\frac{{\eta }_{ji}}{1+{\eta }_{ji}}]}^{s}={w}_{ji}^{+}-{\eta }_{ji}+\mathop{\sum }\limits_{k=1}^{s}{[\frac{{\eta }_{ji}}{1+{\eta }_{ji}}]}^{k},$$
(22)
$${w}_{ji}^{[-,s]}={w}_{ji}^{[-,(s-1)]}-{[\frac{{\eta }_{ji}}{1+{\eta }_{ji}}]}^{s}={w}_{ji}^{-}+{\eta }_{ji}-\mathop{\sum }\limits_{k=1}^{s}{[\frac{{\eta }_{ji}}{1+{\eta }_{ji}}]}^{k},$$
(23)
$$\begin{array}{l}where\,{w}_{ji}^{[+,0]}={w}_{ji}^{+}-{\eta }_{ji},\,{w}_{ji}^{[-,0]}={w}_{ji}^{-}+{\eta }_{ji},\\ hence\,\mathop{{\text{lim}}}\limits_{s\to \infty }{w}_{ji}^{[+,s]}={w}_{ji}^{+},\,\mathop{{\text{lim}}}\limits_{s\to \infty }{w}_{ji}^{[-,s]}={w}_{ji}^{-}.\end{array}$$
(24)
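The recovery updates (22)–(24) can be iterated directly; a small sketch (our own illustration) showing that the weights return to their pre-attack values, since the geometric series with ratio η/(1 + η) sums to η:

```python
def recovery_weights(w_plus, w_minus, eta, steps):
    """Iterate the recovery updates (22)-(23), starting from the post-attack
    values w^{[+,0]} = w+ - eta and w^{[-,0]} = w- + eta of (24)."""
    wp, wm = w_plus - eta, w_minus + eta
    rho = eta / (1.0 + eta)              # common ratio of the geometric steps
    history = [(wp, wm)]
    for s in range(1, steps + 1):
        wp += rho ** s
        wm -= rho ** s
        history.append((wp, wm))
    return history

hist = recovery_weights(1.0, 1.0, 1.0, 50)
# with eta = 1, rho = 1/2, and the geometric series sums back to eta = 1,
# so the weights return to their pre-attack values w+ = w- = 1
```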

We can also calculate τi, the average trustworthiness of ei over a long period of time which includes alternating intervals when the system has been attacked and is recovering, and intervals when it is operating normally:

$${\tau }_{i}\approx \frac{{\alpha }_{i}{T}_{i}}{({\alpha }_{i}{T}_{i}+1)}\frac{{q}_{i}^{u}}{1-{q}_{i}^{u}}+\frac{1}{({\alpha }_{i}{T}_{i}+1)}\frac{{q}_{i}}{1-{q}_{i}}\,.$$
(25)

Note that (25) is an approximate expression which neglects the gradual growth of the trust that each entity has in ei after an attack occurs, until the “all clear” signal is broadcast by ei.
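Expression (25) is straightforward to evaluate; the sketch below is our own illustration, with numerical values for αi and Ti chosen only to be consistent with the time scales discussed above:

```python
def average_trust(alpha, T, q, qu):
    """Long-run average trustworthiness tau_i of (25): a time-weighted mix of
    the post-attack mean trust q_u/(1 - q_u) and the normal-operation mean
    trust q/(1 - q), both obtained via (10)."""
    frac_attacked = alpha * T / (alpha * T + 1.0)
    return frac_attacked * qu / (1.0 - qu) + (1.0 - frac_attacked) * q / (1.0 - q)

# illustrative values: one attack every two weeks (alpha = 1/1,209,600 s^-1),
# recovery time T = 3600 s, q = 0.5 and post-attack qu = 0.25
tau = average_trust(1.0 / 1209600.0, 3600.0, 0.5, 0.25)
# attacks are rare, so tau stays close to the normal-operation value q/(1-q) = 1
```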

Results

We first present simulation results for the RNNTM Trust Graph (RNNTM-TG) containing 50 entities, including 48 IoT devices numbered N0 to N47 and two gateways. Then, we use the CIC-IDS2017 Dataset, which emulates an enterprise network over five successive days with raw network traffic (PCAP files) and several cyberattacks, including Denial-of-Service (DoS), Distributed Denial-of-Service (DDoS), Botnet, Brute Force, Port Scanning and Infiltration, in order to carry out experiments regarding the effect on the RNNTM of the three most common attacks, namely DoS, DDoS and Botnets.

The 48 IoT devices, numbered N0 to N47, are shown around a circle, with the state of the two gateways shown at the centre of the various diagrams. The trust graph is simulated over a long period of time, and Fig. 1 shows the corresponding RNNTM-TG sampled at time units 1000, 3000, 5000, 7000 and 10,000, where each time unit is 10 s. We have set the values of θ1, θ2 from (11) to θ1 = 0.5, θ2 = 0.7. The colour code that is used is Red for an untrustworthy gateway and Green when the gateway is deemed trustworthy, while trustworthy IoT nodes are Light Blue and untrustworthy IoT nodes are Pink. IoT devices whose trustworthiness is “undecided” are Yellow.

Fig. 1: Each of the five disks that are shown, represent the RNNTM-TG for 50 entities, namely 48 IoT devices numbered N0 to N47, that are in a circle, with the two gateways in the middle.

The colour code is Red for untrustworthy gateways, Green for trustworthy gateways, Light Blue for trustworthy IoT nodes, Pink for untrustworthy IoT nodes, and Yellow for IoT devices whose trustworthiness is “undecided”. From left to right and top to bottom, each circle corresponds to a snapshot taken at 1000, 3000, 5000, 7000 and 10,000 time units, respectively, where each time unit is 10 s. We have set θ1 = 0.5, θ2 = 0.7 as in (11). Each IoT device omits sending a message at each 10-s time unit with probability 0.2.

The simulation results we present in Fig. 1 use the settings in (16), with \(0\le {w}_{ij}^{+}\le 1.55\) from (18) and \({\eta }_{ji}={w}_{ji}^{+}\), so that:

$${w}_{ji}^{[+,(0)]}=0,\,{w}_{ji}^{[-,(0)]}=2,\,{w}_{ji}^{[+,(0)]}+{w}_{ji}^{[-,(0)]}=2.$$
(26)

In the simulation results presented in Fig. 1, each IoT device is expected to send a message every ten seconds, but omits sending it with probability 0.2.

When any entity in the network does not receive a message from another IoT device at a given time unit, it will express distrust regarding this entity, and update its weights according to the rules in (19), (20) and (21). If an entity sends a message as expected and the message is received at its destination, each receiving entity expresses its trust regarding the sending entity and updates its weights according to the rules described in (22), (23) and (24).
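Since the update rules (19)–(24) are not reproduced in this section, the following is only a schematic stand-in for the mechanism just described: a received message shifts weight mass towards the excitatory connection, a missed message shifts it towards the inhibitory one. The bound \(w^{+}\le 1.55\) comes from (18) and the invariant \(w^{+}+w^{-}=2\) from (26), but the constant-sum additive update and the step size `delta` are assumptions of this sketch, not the actual equations.

```python
W_MAX = 1.55  # upper bound on w_plus, from (18)

def express_trust(w_plus, w_minus, delta=0.05):
    """Schematic stand-in for the trust updates (22)-(24): a received
    message shifts weight mass from the inhibitory to the excitatory
    connection, preserving w_plus + w_minus = 2 as in (26).
    The step size `delta` is a hypothetical parameter."""
    w_plus = min(w_plus + delta, W_MAX)
    return w_plus, 2.0 - w_plus

def express_distrust(w_plus, w_minus, delta=0.05):
    """Schematic stand-in for the distrust updates (19)-(21): a missed
    message shifts weight mass back to the inhibitory connection."""
    w_plus = max(w_plus - delta, 0.0)
    return w_plus, 2.0 - w_plus
```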

The five diagrams in Fig. 1 enable us to observe the changing trust levels for all of the entities over a total period of 100,000 s.

The CIC-IDS2017 Dataset is a comprehensive and realistic benchmark for evaluating intrusion detection systems40. Created by the Canadian Institute for Cybersecurity (CIC), it includes a wide range of modern attack scenarios, as well as network traffic capturing real-world network behaviour. This dataset is based on two separate networks: the Victim-Network and the Attacker-Network. The Victim-Network emulates a high-security enterprise environment, equipped with a firewall, router, switches, and multiple operating systems, each running a benign agent to simulate normal user activity. The Attacker-Network is an isolated infrastructure consisting of a router, switch, and several PCs with public IP addresses and various operating systems configured to launch diverse attack scenarios.

The CIC-IDS2017 Dataset was generated to emulate typical enterprise network activities over five successive days, and includes both raw network traffic (PCAP files) and over 80 extracted flow-based features. It contains detailed records of several cyberattacks, including Denial-of-Service (DoS), Distributed Denial-of-Service (DDoS), Botnet, Brute Force, Port Scanning and Infiltration attacks. Each attack is labelled and timestamped, making the dataset suitable for trust-based security modelling and supervised machine learning. Using the traffic data from the CIC-IDS2017 Dataset, we have conducted experiments regarding the effect of three attacks on the dynamic evolution of the RNNTM for a set of interconnected nodes.
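Because each flow in the dataset is labelled and timestamped, extracting the attack traffic for an experiment reduces to filtering on the label column. The sketch below illustrates this on a two-row stand-in; the column names are assumptions, and the real CSV files carry over 80 flow features per record.

```python
import csv
import io

# A two-row stand-in for a CIC-IDS2017 flow CSV; the real files hold
# 80+ extracted flow features.  The column names are assumptions.
sample = io.StringIO(
    "Timestamp,Destination IP,Label\n"
    "7/7/2017 10:02,192.168.10.9,Bot\n"
    "7/7/2017 10:03,192.168.10.50,BENIGN\n"
)

def attack_flows(csv_file):
    """Keep only the flows whose label marks an attack."""
    return [row for row in csv.DictReader(csv_file)
            if row["Label"] != "BENIGN"]

flows = attack_flows(sample)
```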

The setup that we have chosen is configured according to the contents of the CIC-IDS2017 Dataset, which comprises 13 nodes, namely one Gateway, three Servers, and ten PC-like devices. Based on the data contained in this dataset, we simulate the response of the RNNTM for three specific attacks that it contains: a Denial-of-Service (DoS) attack, a Distributed Denial-of-Service (DDoS) attack, and a Botnet attack. Both the DoS and DDoS attacks flood their victims with a massive flow of packets, which largely exceeds the victim’s incoming bandwidth and packet processing capacity, resulting in heavy congestion, the dropping of legitimate packets at the node, and the saturation of the victim’s hardware and software. In the case of a DDoS attack, several network nodes “gang up” against the victim, sending it a massive flow of packets, while in a DoS attack, a single node acts alone. In both DoS and DDoS attacks, the attacking node will attempt to conceal its identity by “spoofing” its IP address.

The diagram in the upper part of Fig. 2 illustrates the simulation of the RNNTM for successive DoS attacks, that were extracted from the CIC-IDS2017 Dataset. It shows that, before the attack, the trust values for all network entities initially increase steadily from a baseline of 0.5, surpassing the trust threshold of 0.7 and stabilizing near 0.9. During this data-driven simulation experiment, four successive DoS attacks are launched against the web server (Server 2), resulting in varying durations of service disruption. Consequently, the trust value of the targeted server drops significantly, and remains at a low value throughout the attack periods. Once the attack ends, the server enters a recovery phase, during which its trust value gradually increases. This behaviour shows that the RNNTM reflects the effect of the DoS attacks on the trust values during the DoS phases, as indicated by the sharp dips below the trust threshold and then its rise after the attack on Server 2 comes to an end.
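The dip-and-recover shape described above can be illustrated with a simple first-order relaxation: trust drifts towards a high steady value in normal operation and towards a low value while an attack is in progress. This is purely illustrative and is NOT the RNNTM dynamics; the rate and target values are hypothetical parameters chosen to mimic the qualitative behaviour of Fig. 2.

```python
def trust_trace(attack_windows, horizon, q0=0.5, q_hi=0.9, q_lo=0.2, rate=0.05):
    """Illustrative first-order dynamic (not the RNNTM equations):
    trust relaxes toward q_hi in normal operation and toward q_lo
    while an attack is in progress, reproducing the dip-and-recover
    shape seen in Fig. 2.  All parameters are hypothetical."""
    q, trace = q0, []
    for t in range(horizon):
        under_attack = any(a <= t < b for a, b in attack_windows)
        target = q_lo if under_attack else q_hi
        q += rate * (target - q)   # relax toward the current target
        trace.append(q)
    return trace
```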

Fig. 2: Simulation of the RNNTM when a DoS Attack (above), and a DDoS Attack (below) occurs against Server 2.

These attacks occur at different epochs within the CIC-IDS2017 Dataset; they affect the trust level of Server 2, and indirectly affect the trust levels of the 13 devices and of Server 1.

In the diagram shown in the lower part of Fig. 2, we view the RNNTM simulation for the Distributed Denial-of-Service (DDoS) attack scenario extracted from the same attack database. Multiple simultaneous attacks against Server 2 result in the repeated sharp declines of its trust value below the distrust threshold of 0.5, with values dropping down to 0.2 after each attack. Meanwhile, other nodes also experience fluctuations in their trust values, in correlation with the attacks on Server 2. The drop in trust level of Server 2 also causes smaller drops in the trust values of other entities which have not been directly attacked; however, their trust values still remain above 0.7.

Botnet attacks are also contained in the CIC-IDS2017 Dataset; they target five different nodes of the 13-node network consecutively and in close succession. The corresponding data, collected on Friday, July 7, 2017, corresponds to the Botnet ARES attack (10:02 a.m.–11:02 a.m.), where the attacker carries the IP address 205.174.165.73.

In this data, Node 4 (IP address 192.168.10.9) is attacked first at time instant 391, then the following attacks occur: Node 8 (192.168.10.15) at instant 403, Node 7 (192.168.10.14) at instant 511, Node 5 (192.168.10.5) at time 541, and finally Node 6 (192.168.10.8) is attacked at instant 583. According to the data in the CIC-IDS2017 Dataset, once any of these nodes is attacked, the malicious activity on that node persists continuously until the end of the dataset at epoch 1446.
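The attack schedule just described can be written down directly from the dataset; the snippet below encodes it as a lookup, together with the "once attacked, compromised until the end" rule stated above.

```python
# Botnet ARES attack schedule recorded in the CIC-IDS2017 Dataset:
# node -> (victim IP address, epoch at which the attack on it begins).
# Once attacked, a node stays compromised until the final epoch 1446.
ATTACKS = {
    4: ("192.168.10.9", 391),
    8: ("192.168.10.15", 403),
    7: ("192.168.10.14", 511),
    5: ("192.168.10.5", 541),
    6: ("192.168.10.8", 583),
}
END_EPOCH = 1446

def compromised(node, epoch):
    """True if `node` is under attack at `epoch`."""
    return node in ATTACKS and ATTACKS[node][1] <= epoch <= END_EPOCH
```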

In the corresponding simulation shown in Fig. 3, all the nodes initially start at the trust value of 0.5, which rises gradually towards a high value, until the attacks begin. A distinct and clear drop in the trust values of the attacked nodes is observed once the attack begins, and the attacked nodes pull the trust values of the other nodes in the same direction. In the circular snapshots below the simulation results, we see that the trust colour code moves to red by epoch 400 for the first node, which is attacked at epoch 391. The snapshot at epoch 600 shows that all the attacked nodes’ colour codes are red, as they remain in the third circular network snapshot taken at epoch 1400.

Fig. 3: In the figure above, we show the simulation of the RNNTM under a Botnet attack that attacks five nodes in close succession, starting at epoch 391 until epoch 403.

The three snapshots of the RNNTM-TG show the effects of the Botnet attack: the first snapshot illustrates the network trust levels at 400 time units, just after the attack begins, followed by the snapshots at epochs 600 and 1400. We notice that the Botnet attack is still persisting at the last snapshot.

Discussion

The concept of trust among a finite number of entities is often represented by a directed graph whose nodes represent the entities, with labels as well as numerical values on the arcs to represent the type and level of trust expressed by some entities regarding others. Various algorithmic techniques can then be used to deduce the level of trust enjoyed by each of the entities represented in the graph. In this paper, we introduce the RNNTM, which represents trust as a dynamic, time-dependent quantity for the different entities. In the RNNTM, the trust level of an entity is determined by the external replenishment and deletion rate of its rights to vote, the frequency with which it expresses itself about other entities, and the trust or distrust that is expressed by other entities towards itself. Its purpose is to model the evolving trust level of a set of entities, in response to the opinions expressed by the different entities with respect to each other, and to their own behaviour.

The RNNTM aims to allow the expressions of trust to vary over time as a function of various significant events, such as the exchange of information through message broadcasts, and the possible existence of external adversarial effects, such as cyberattacks, which will affect the trust that can be attributed to different entities. We have therefore constructed a mathematical model where trust levels of each entity are time-varying, where all entities are treated fairly by the attribution of an equal number of “voting rights per unit time” to all of them, and the possibility for each of the entities to express both trust (a positive vote) and distrust (a negative vote) to each other, while each time an entity gives its opinion it also reduces its ability for further votes. In the resulting dynamics, it turns out that trustworthy entities have more impact on the overall resulting “opinion” about other entities.
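The fairness mechanism described above can be sketched as a simple token scheme: every entity receives the same replenishment of voting rights per unit time, and each trust or distrust vote it casts consumes one right. This sketch only illustrates the bookkeeping, not the RNNTM dynamics themselves; the replenishment rate and the cap are hypothetical parameters.

```python
class VotingRights:
    """Sketch of the equal-voting-rights mechanism: each entity gains
    `rate` voting rights per time unit (up to `cap`), and every vote
    it casts, whether trust or distrust, consumes one right.
    Both parameters are hypothetical."""

    def __init__(self, rate=1.0, cap=10.0):
        self.rate, self.cap, self.rights = rate, cap, 0.0

    def tick(self):
        """One time unit elapses: replenish voting rights up to the cap."""
        self.rights = min(self.rights + self.rate, self.cap)

    def vote(self):
        """Cast one (dis)trust vote if a right is available."""
        if self.rights >= 1.0:
            self.rights -= 1.0
            return True
        return False  # no rights left: the vote is dropped
```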

The details of the model are introduced and developed, and are then illustrated with simulations regarding a network with IoT devices and gateways, which can be subject to communication errors and cyberattacks. Even though we do not exploit its AI capabilities in this paper, the RNNTM is also a machine learning system35,36, so that future work will use existing datasets about trust evaluations to estimate the parameters of the RNNTM that best match existing measured data. Furthermore, it will be useful to consider other “voting modalities” which can be studied by extending the RNNTM to cases where entities are required to express clear preferences, e.g., trust in some entities and distrust in all of the others. Another possible extension may include the rationing of voting rights based on the energy consumption due to the messages exchanged by each entity41.

The model and examples developed in this paper show how such a system can result in significant time variations of the trust level. In future work, we plan to show how the RNNTM may be used to dynamically decide how different IoT gateways may be chosen by IoT devices as a function of the dynamically varying trust metrics of the gateways. We will also examine how these dynamic trust variations may cause workload imbalances and additional delays in the processing of IoT data sent from the devices to IoT gateways.