My background in the philosophy of artificial intelligence and cognitive science of language led me to explore the new ChatGPT. Short for Chat Generative Pre-trained Transformer, ChatGPT is a large language model-based chatbot developed by OpenAI and launched in 2022. The tool allows users to shape conversations in terms of length, format, style, detail, and language, making it versatile for answering questions or generating creative content.
ChatGPT and similar programs are conversational language programs that operate by predicting the next word in a sentence based on vast amounts of training data. It is like an advanced version of Grammarly or online word predictive tools. This predictive ability extends to crafting extended paragraphs, entire essays, poems, and more.
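The next-word prediction idea can be illustrated with a toy sketch. The following is a minimal bigram model over a tiny hand-made corpus (both the corpus and the code are illustrative assumptions; real systems use neural networks trained on billions of words, not frequency tables):

```python
from collections import Counter, defaultdict

# A tiny hand-made "training corpus" (an assumption for illustration only).
corpus = "the cat sat on the mat and the cat ate the food".split()

# Count which word follows each word (a simple bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" -- the most common follower of "the"
```

The toy model has no notion of truth or meaning; it simply reproduces the statistical patterns in its training text, which is the same basic limitation, at vastly greater scale and sophistication, that the rest of this article discusses.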
In my experimentation, I found ChatGPT to provide remarkably insightful, in-depth, and well-crafted responses to various challenging questions, from the meaning of critical race theory, to the beliefs of particular philosophers, to why the English punk band Discharge was so influential. Some argue that ChatGPT and similar AI language programs might eventually replace search engines. In practical use, it resembles a search engine such as Google, though its responses lack references or citations, which is a problem.
Beyond factual inquiries, I explored its creative capabilities, requesting a limerick about my home city, Seattle.
I also asked it to write a short story about a dairy cow in my native Wisconsin.
AI Hallucinations
A major problem is that ChatGPT often produces entirely false information. When I asked it to describe me, it presented a flattering but partially fabricated biography, crediting me with attending two universities I never attended and working for an institution where I never worked. These inaccuracies are called AI hallucinations.
There have been notable instances of AI Hallucinations in the news. ChatGPT falsely accused a prominent American law professor of sexual assault during a non-existent trip, referencing a non-existent Washington Post article as the information source, and misidentified the professor's school. An Australian mayor was wrongly accused of serving prison time for bribery, and a New York lawyer faced court consequences for incorporating fabricated cases into legal research done using ChatGPT. The lawyer’s legal brief cited ChatGPT-generated court cases that did not exist. Even in basic arithmetic, ChatGPT occasionally produces errors.
ChatGPT falsely accuses law professor of sexual assault (nypost.com)
Australian mayor readies world's first defamation lawsuit over ChatGPT content | Reuters
Lawyer apologizes for fake court citations from ChatGPT | CNN Business
Why does ChatGPT generate false information?
ChatGPT has no understanding of what it is writing. It cannot distinguish fact from falsehood, and it has no common sense. It merely follows learned word patterns and associations. In this sense, it is dumb: it blindly applies the methods it was trained to follow.
While ChatGPT excels in creativity, its inability to tell fact from fiction becomes apparent when it handles factual information. It is inventive when composing fanciful fiction, and just as inventive when responding to factual questions. For mathematical problems, it is not a calculator; it applies the same purely linguistic method. Even when it produces the correct numerical answer, it does not arrive at it mathematically.
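The contrast between pattern-recall and genuine computation can be sketched with a toy example (a hypothetical illustration of the distinction, not a description of how ChatGPT is actually implemented):

```python
# Toy contrast (illustrative assumption, not ChatGPT's real mechanism):
# a pattern memorizer recalls only sums it has seen before,
# while a calculator actually computes the result.

memorized = {("2", "+", "2"): "4", ("3", "+", "5"): "8"}  # "training" examples

def pattern_answer(a, op, b):
    # Recalls an answer only if this exact pattern appeared in "training".
    return memorized.get((a, op, b), "a plausible-looking guess")

def calculator(a, op, b):
    # Actually performs the arithmetic.
    return str(int(a) + int(b)) if op == "+" else None

print(pattern_answer("2", "+", "2"))    # "4" -- seen in training
print(pattern_answer("17", "+", "25"))  # a guess -- never seen
print(calculator("17", "+", "25"))      # "42" -- computed
```

The pattern memorizer can look right on familiar inputs and fail silently on unfamiliar ones, which mirrors why a language model can sometimes state a correct sum without having done any arithmetic.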
When composing in a particular format or style, such as a poem or newspaper-style article, it will sometimes invent details to fit the format. In the false accusation of the law professor, it fabricated a reference, the fictitious Washington Post article, because such a reference is expected in that format.
Trained on diverse and immense internet text, ChatGPT lacks access to real-time information or databases, and any misinformation and biases present during training become part of its knowledge base. If a user's question is ambiguous, the model may generate a response based on its own interpretation of the question, leading to inaccuracies. In my experience, it sometimes gives detailed answers that do not directly address the question asked.
Despite its imperfections, ChatGPT is a useful tool when applied appropriately. However, it should not replace thorough research, as it can generate factual inaccuracies, and its output should always be double-checked. To its credit, the program issues warnings about these limitations, including that its information should be cross-referenced, and it sometimes declines to answer questions it is incapable of answering.