Every lazy writer will by now have discovered what pleasant phrases Artificial Intelligence can produce on their behalf. Yet a critical look soon turns pleasure into fright. Let us take the example of ChatGPT.
AI services such as ChatGPT reproduce and multiply inaccuracies and biases in a quite dreadful manner.
Here is an example I was confronted with on my very first try. I typed the words “Pliny” and “Vesuvius eruption” into ChatGPT. As everyone knows, Pliny the Younger wrote two letters to the historian Tacitus describing how the eruption of Vesuvius unfolded and how his famous uncle, Pliny the Elder, died. Mark this: there are two Plinys. The Elder, an admiral, who died; and the Younger, the nephew, who wrote about it…
Now here is what the AI (ChatGPT) gave me:
“Pliny the Elder wrote about the eruption of Mount Vesuvius in 79 AD in his work “Naturalis Historia” (Natural History).”
That, I thought, would be surprising news, given that Pliny the Elder died during the Vesuvius eruption.
The next eye-opener came instantly:
“Pliny also wrote about the death of his nephew, Pliny the Younger’s father, who was a naval commander.”
Evidently, the AI got entangled in the Pliny family relations, confusing uncle and nephew and inventing a third person out of thin air.
Even worse were my attempts to get any kind of arguments pro and contra a question out of the AI. As a test, I stated that women should have children. It was practically impossible to get anything but well-meaning (read: woke) reproaches out of the AI service, and I sank into a kind of discussion with the machine. That made me wonder whether our opinions and discussions will now be directed by AI. Does whoever constructs it decide what society has the right to write and think?
Wikipedia, evidently fed with information from the producers, is quite open about this mind control: “To prevent offensive outputs from being presented to and produced from ChatGPT, queries are filtered through OpenAI’s company-wide moderation API, and potentially racist or sexist prompts are dismissed.” And what counts as racist or sexist is decided at home…
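The filtering described above can be sketched in a few lines. This is only a toy illustration of the general idea of prompt moderation, not OpenAI’s actual system; the blocklist terms, function names, and the dismissal message are all hypothetical.

```python
# Toy sketch of prompt moderation: every user query is checked against a
# blocklist before it reaches the model, and flagged prompts are dismissed.
# Whoever maintains BLOCKED_TERMS decides what may be asked at all.

BLOCKED_TERMS = {"slur_example", "insult_example"}  # hypothetical placeholders

def moderate(prompt: str) -> bool:
    """Return True if the prompt is allowed, False if it is dismissed."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)

def answer(prompt: str) -> str:
    """Dismiss flagged prompts; otherwise pass the query on to the model."""
    if not moderate(prompt):
        return "Your query was dismissed by the moderation filter."
    return "... model response ..."
```

The point of the sketch is not the mechanism, which is trivial, but who fills the blocklist: the filter’s author, not the user, draws the line.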
My reactions to ChatGPT’s “that is true” and “that is correct” answers grew increasingly furious. Not because I shared or rejected the views expressed, but because of the sheer determination of the creators of the AI service to control what is and is not written in public.
And here is what the AI said to calm me:
“AI systems are programmed based on the data and instructions they receive. If the data provided to an AI system contains errors or biases, the AI system may produce results that reflect those errors or biases.”
Yes. That is right, dear AI.
And that rather makes me feel that we somehow need to control what is fed into these AI services, and better fast…
Oops. There it is again: the word “control”.
What about freedom of expression? And freedom of research? Will they be destroyed by the “normative power of the factual” (Jellinek)? By the fact that AI tells us what to write, while most people will be too lazy to check it and will simply copy it?
For the moment, the focus of ChatGPT’s development is to make the chatbot mimic a human conversationalist. But that conversationalist is easily used as a replacement for humans. And it also easily talks crap.
The worst of that crap (besides the pressure of opinion imposed on users) are the so-called hallucinations.
In artificial intelligence, a hallucination is a confident response by an AI that is not justified by its data. In my example, that would be the claim that the father of Pliny the Younger was an admiral… In truth that father was Lucius Caecilius Celio, who died nine years before the Vesuvius eruption and was certainly not in Misenum in the year 79.
A hallucinating chatbot with no knowledge of a fact picks a piece of information it deems plausible and then repeatedly and falsely insists that it is true, with no sign of any awareness that the information is a product of its own invention.
ChatGPT states that it is “a widely used AI language model, and available to individuals, businesses, and organizations around the world.”
Be sure of it: our children already know ChatGPT and similar tools. And they are already writing their homework with them. Let’s fear the consequences…
The UN’s science organization, UNESCO, is now working on the matter. It seems there will be work to do. In any case, the USA is hurrying back to rejoin the Organization, as it did in 2005 over the cultural diversity issue, in order to have a say in the question.
We will keep you posted.