Table 2 An example of an event in the dataset.

From: A unified ontological and explainable framework for decoding AI risks from news data

Event attribute
Content of the example: This record shows that on April 4, 2015, Google, a technology provider, used an AI-powered search algorithm worldwide. The AI technology produced outcomes that reflected sexist bias.
Recoding process: The event involves a major global technology provider and the use of an AI-powered search algorithm. The outcome suggests algorithmic sexism, constituting a representative AI-related risk incident.

Harm attribute
Content of the example: The psychological harm is a conventional harm; it is reversible and persistent, and the victims belong to vulnerable groups. It influences self-identity and values. The event may not involve physical harm, economic loss, or privacy violations. The equal rights violation is a conventional harm.
Recoding process: The representation of a Barbie doll as the first female CEO, following multiple male CEO images, may perpetuate stereotypes and reinforce gender bias. Such representations can cause psychological harm, particularly to individuals from underrepresented groups, by affecting their self-identity and perceived societal value.

Impact attribute
Content of the example: The harm of this event is transmissible. The scope of the event is local.
Recoding process: Although originating from a localized platform interaction, the outcome (biased image ranking) is inherently shareable and discussable via screenshots or media coverage, implying potential transmissibility of harm beyond the initial user.

AI characteristic attribute
Content of the example: This event is caused by untimely maintenance of training data.
Recoding process: The biased ranking may plausibly result from outdated or imbalanced training data that failed to capture evolving representations of gender in leadership roles, suggesting insufficient updates or oversight in data curation.

Lifecycle attributes
Content of the example: The event spans multiple lifecycle stages, including data acquisition and preprocessing, model building, verification and validation, operation and supervision, and user experience and interaction.
Recoding process: The presence of representational bias implicates multiple lifecycle stages, from biased data acquisition and model construction to insufficient validation, lack of post-deployment monitoring, and ultimately the delivery of biased outputs to end users.
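To make the attribute ontology in Table 2 concrete, the sketch below shows one way this coded record could be represented programmatically. The paper does not prescribe a machine-readable schema; the class name AIRiskEvent and all field names are illustrative assumptions, and the values simply restate the example above.

```python
# Minimal sketch of a machine-readable encoding of the Table 2 record.
# The paper does not define such a schema; all names here are assumed for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class AIRiskEvent:
    """One coded AI risk incident, grouped by the five attribute types in Table 2."""
    # Event attribute
    date: str
    actor: str
    technology: str
    scope: str
    outcome: str
    # Harm attribute
    harm_types: List[str]
    harm_reversible: bool
    harm_persistent: bool
    vulnerable_victims: bool
    # Impact attribute
    transmissible: bool
    impact_scope: str
    # AI characteristic attribute
    cause: str
    # Lifecycle attributes
    lifecycle_stages: List[str] = field(default_factory=list)


# The Google image-search example from Table 2, encoded with the fields above.
example = AIRiskEvent(
    date="2015-04-04",
    actor="Google",
    technology="AI-powered search algorithm",
    scope="worldwide",
    outcome="search results reflected sexist bias",
    harm_types=["psychological harm", "equal rights violation"],
    harm_reversible=True,
    harm_persistent=True,
    vulnerable_victims=True,
    transmissible=True,
    impact_scope="local",
    cause="untimely maintenance of training data",
    lifecycle_stages=[
        "data acquisition and preprocessing",
        "model building",
        "verification and validation",
        "operation and supervision",
        "user experience and interaction",
    ],
)

if __name__ == "__main__":
    print(example)
```

Grouping the fields by the five attribute types keeps each coded news record aligned with the ontology, so downstream analyses can filter or aggregate incidents by harm type, lifecycle stage, or cause without re-parsing free text.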