Even the scientists creating AI can’t explain to you how it works

Artificial intelligence (AI) systems are becoming increasingly common and impressive, but even the scientists who create them can't fully explain how they work. Systems like ChatGPT can write essays, pass bar exams, and assist with scientific research. Yet when researchers are asked exactly how ChatGPT arrives at its answers, they admit they don't fully know.

Sam Bowman, a professor at New York University and researcher at Anthropic, an AI research company, explains that ChatGPT is powered by an artificial neural network, a type of system loosely modeled on the human brain. Unlike traditional computer programs, whose rules are explicitly coded by programmers, a neural network is trained on examples and learns to detect patterns by gradually adjusting millions of internal numbers. Because its behavior is learned rather than written down, it is difficult to explain precisely how it works.
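
To make that contrast concrete, here is a minimal sketch in Python, nothing like ChatGPT's actual scale or architecture, of a tiny neural network learning the XOR function. No rule for XOR is ever written into the code; the behavior emerges purely from nudging random numbers:

```python
# A toy neural network learning XOR by gradient descent.
# The "knowledge" ends up in the weight matrices, not in any explicit rule.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights start as random numbers, not hand-coded rules.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(size=(4, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass: numbers flow through the network.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight slightly to reduce the error.
    grad_out = (out - y) * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

print(out.round(2))  # should be close to [[0], [1], [1], [0]] -- learned, not programmed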

Bowman explains that ChatGPT is trained primarily as an autocomplete system. The model is fed long passages of text from the Internet and must guess the next word at every step. This process consumes enormous time and computing resources, and the result is a very capable autocomplete tool. To turn that tool into a useful assistant, a second stage is applied: reinforcement learning.
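
Here is a toy illustration of that "guess the next word" objective. It uses simple word counts instead of the billions of learned weights a real model uses, and the corpus is invented for illustration, but the objective is the same idea:

```python
# A toy autocomplete: count which word follows which in a corpus,
# then predict the most frequently seen next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

next_word = defaultdict(Counter)
for word, following in zip(corpus, corpus[1:]):
    next_word[word][following] += 1  # "training" on raw text alone

def autocomplete(word):
    # Predict the word most often observed after `word`.
    counts = next_word[word]
    return counts.most_common(1)[0][0] if counts else None

print(autocomplete("the"))  # -> "cat" (seen twice after "the")
```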

Reinforcement learning is based on feedback from the people who interact with the system. Users vote for or against the model's responses, and the model adjusts itself based on that feedback: responses that users like become more likely, and responses they dislike become less likely.
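
Here is a deliberately simplified sketch of that feedback loop. It is not Anthropic's or OpenAI's actual training pipeline, and the candidate responses are invented; it only shows the core idea that upvotes raise a response's probability and downvotes lower it:

```python
# A toy feedback loop: each candidate response has a score; thumbs-up
# raises it, thumbs-down lowers it, so liked answers grow more probable.
import math
import random

scores = {"helpful answer": 0.0, "rude answer": 0.0, "off-topic answer": 0.0}

def sample_response():
    # Softmax: higher-scored responses are proportionally more likely.
    weights = {r: math.exp(s) for r, s in scores.items()}
    total = sum(weights.values())
    return random.choices(list(weights), [w / total for w in weights.values()])[0]

def feedback(response, liked, lr=0.5):
    # Nudge the score up for a thumbs-up, down for a thumbs-down.
    scores[response] += lr if liked else -lr

for _ in range(200):  # simulated users voting
    r = sample_response()
    feedback(r, liked=(r == "helpful answer"))

print(max(scores, key=scores.get))  # -> "helpful answer"
```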

However, despite their best efforts, scientists still can't fully explain how ChatGPT arrives at any particular answer. Generating a response involves millions of numbers flowing through the network, hundreds of passes per second, and researchers cannot say what any individual number means.
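
A toy forward pass with stand-in random weights hints at why: the computation is nothing but repeated matrix arithmetic, and no individual number carries a human-readable meaning on its own. A real model simply has billions of such numbers:

```python
# Inspecting the numbers inside a network yields values, not explanations.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(1, 8))               # stand-in for an input
for layer in range(3):
    W = rng.normal(size=(8, 8))           # stand-in for learned weights
    x = np.maximum(0, x @ W)              # ReLU activation
    print(f"layer {layer}:", x.round(2))  # just numbers, no "why"
```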

This opacity can lead to risky outcomes. For example, ChatGPT may offer incorrect information or repeat biased views absorbed from the texts it was trained on, which raises questions about the safety and ethical use of such systems.

Speaking on the Unexplainable podcast, Bowman argues that we should be aware of the limitations of artificial intelligence and be cautious when using it. He calls for more research and for strategies to manage the risks that AI poses.

The black box of artificial intelligence remains a mystery. Researchers build complex systems that can do amazing things, yet they cannot fully explain how those systems work, which raises questions about transparency and accountability in the field.
