Table 1 Informational and ethical problems of ChatGPT.

From: ChatGPT and the digitisation of writing

Category

Concerns/Impacts

Accuracy and Reliability

• It “hallucinates”: it produces inaccurate information, fails to acknowledge its sources, and can even fabricate citations.

• Its training data only extends to September 2021.

Transparency and Bias

• It makes biased statements, e.g., studies have shown it has political bias but also reproduces sexist and racist stereotypes (Deshpande et al., 2023; Motoki et al., 2023).

• It is unexplainable: little is disclosed about what data it is based on or how it works.

• It is currently impossible to identify material as machine-generated.

Information Culture

• It can be used to create misinformation, fakes, or even harmful information.

• It could also accelerate the content-creation explosion, worsening information overload, and potentially increase the homogenisation of content.

• It is “multilingual but monocultural” (Rettberg, 2022): it is proficient in multiple languages but has American cultural assumptions trained into it.

• Better tools are available to those who can afford to subscribe, creating inequality of access to its benefits.

Impact on Learning

• It could create lazy and superficial learning by making learning tasks like writing too easy.

• Privacy is at risk if users share personal data with it.

Ethical Concerns

• It may violate intellectual property rights by using copyright material in its training without permission; training data sources are not openly declared.

• Very low-paid Kenyan workers were required to view disturbing material as part of the process of “detoxifying” the data used to train ChatGPT (Perrigo, 2023).

• GPT technologies have a substantial environmental impact (Ludvigsen, 2022).