
How ChatGPT works and the problems with non-explainable AI.

February 3, 2023

Since the start of the year, ChatGPT has been on the minds of many people – traversing past the typically engaged tech community, and even getting in the hands of non-tech oriented users who are just as impressed by its capabilities. 

First released by a company called OpenAI in November 2022, this language model lets users ask it questions and responds in a rather conversational manner. Users can pose questions, ask follow-up questions, and give feedback. They can ask the model to write them essays, stories, cover letters, and more. 

ChatGPT has rightly received a lot of applause in the last few weeks, since the results are quite impressive at first glance. From the user’s point of view, the bot magically pulls appealing sentences out of its hat, and the dialog doesn’t feel bad at all. But stop! Think again for a moment: what’s happening right now?

Andreas Blumauer

CEO and Co-Founder

What ChatGPT fails to address.

Here is the fundamental problem with ChatGPT: it can provide answers and information that no one can ever verify as true, because its output is not referenceable. The sources and genesis behind the information always remain hidden. ChatGPT cannot trace how it knows something and does not know why it gives its answers. Would you go to a doctor like that?

So this AI works fundamentally differently from any other information provider in the world. We should never allow it to enter the discourse directly and unfiltered when substantive issues are at stake. However, I have little hope that we humans will resist this temptation. After all, the AI itself admits that it has this problem:

This image shows a series of questions and responses from ChatGPT.

ChatGPT is the ultimate and so far most dangerous version of an agent that answers "from the Internet" every time you ask where it got its information. It becomes a self-referential system as soon as this output is put back on the net, and so on. Mankind has just started to forget who it is, where it comes from, and where it is going.

Why we need explainable AI.

ChatGPT is a non-explainable AI: ask it, for example, for "the most important EU directives related to ESG", and you may get completely wrong answers, even though they look correct. ChatGPT is a great example of how non-referenceable and non-explainable AI greatly exacerbates the problem of information overload instead of mitigating it. 

This image is a response from ChatGPT

ChatGPT is the antithesis of XAI (explainable AI); it is not a tool that should be used in situations where trust and explainability are critical requirements. The Semantic Web, as a place and method to conduct and comprehend discourse and consensus building on a global scale, has arguably gained additional importance alongside the boom of Large Language Models (LLMs).

ChatGPT about ChatGPT with regards to XAI: “In terms of explainable, it can be difficult to understand the specific reasoning behind the specific outputs, but it is possible to understand how the model works in general, and how it generates text based on the input it receives. However, it is not a decision-making AI and it’s not meant to be deployed in an operational context where safety, security, or explainability are critical requirements. Overall, ChatGPT is a powerful language generation tool, but it is important to use it responsibly and be aware of its limitations. It is not a tool that should be used in situations where trust and explainability are critical requirements.”

Still, there are enough people who do not blindly accept every innovation as "great", but also think about the problems that such services can cause.

However, there is also great potential for using LLMs to feed information into knowledge graphs and contribute to their extension. In this way, LLMs link information to knowledge that has already been referenced and verified, preferably in a traceable way. Knowledge graphs thus serve as knowledge reference models, while ChatGPT helps to extend them by suggesting further potentially meaningful assertions.
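This pattern of using knowledge graphs as reference models for LLM output can be sketched in a few lines. The following is a minimal, hypothetical illustration (the entity names, triples, and function names are my own assumptions, not part of any PoolParty API): a curated graph of verified triples filters LLM-suggested assertions, accepting only those that link entities the graph already knows, so every extension stays traceable; everything else is routed to human review.

```python
# Minimal sketch: a verified knowledge graph filters LLM-suggested triples.
# All names and data here are illustrative assumptions.

# The verified knowledge graph: a set of (subject, predicate, object) triples.
knowledge_graph = {
    ("CSRD", "type", "EU-Directive"),
    ("CSRD", "relatesTo", "ESG"),
    ("NFRD", "type", "EU-Directive"),
}

def known_entities(graph):
    """Entities already referenced and verified in the graph."""
    return {s for s, _, _ in graph} | {o for _, _, o in graph}

def accept_suggestions(graph, suggestions):
    """Accept an LLM-suggested triple only if both its subject and object
    are already known to the graph, keeping extensions traceable to
    verified knowledge; everything else is flagged for human review."""
    known = known_entities(graph)
    accepted, needs_review = [], []
    for triple in suggestions:
        subject, _, obj = triple
        if subject in known and obj in known:
            accepted.append(triple)
        else:
            needs_review.append(triple)
    return accepted, needs_review

# Candidate assertions as an LLM might propose them.
llm_suggestions = [
    ("NFRD", "relatesTo", "ESG"),         # links two already-known entities
    ("Directive-X", "relatesTo", "ESG"),  # unknown subject -> human review
]

accepted, needs_review = accept_suggestions(knowledge_graph, llm_suggestions)
print(accepted)      # [('NFRD', 'relatesTo', 'ESG')]
print(needs_review)  # [('Directive-X', 'relatesTo', 'ESG')]
```

In a real system the graph would live in a triple store queried via SPARQL, and "known" would mean resolvable to a referenced, verified resource; the point of the sketch is only the division of labor: the LLM proposes, the knowledge graph disposes.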

Organizations generally prefer not to rely on probabilistic models, but rather on high-quality information, e.g., technical documentation written by experts. They will bet on composite AI, a fusion of statistical and symbolic AI, and in that light I see ChatGPT as just another candidate to be wed to semantic systems and knowledge graphs.

Want to learn more about PoolParty? Subscribe to our newsletter!