AI and neural networks

ChatGPT will learn how to say “I don’t know.”

Large language models still struggle with the reliability of their answers. Hallucinations – confidently presenting false information as fact – have plagued the technology since its inception. GPT-5, however, introduces a new behavior: the model can now acknowledge gaps in its knowledge instead of generating fictitious responses.

OpenAI reports that the new version of ChatGPT has learned to say “I don’t know” in response to questions where there isn’t enough information. This became apparent after a viral example in which the model responded, “I don’t know – and I can’t reliably figure it out.”

Technically, hallucinations are rooted in the architecture of language models, which predict the next word from statistical patterns rather than retrieving facts from a database. GPT-5’s new ability to recognize its own limits reflects an evolution in how it handles obscure or complex queries.
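The distinction can be illustrated with a toy sketch. This is not OpenAI’s implementation – the token names, logits, and threshold below are invented for illustration – but it shows why a pure next-token predictor always emits *something*, and how an abstention rule can turn low confidence into an explicit “I don’t know”:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def answer(logits, threshold=0.5):
    """Pick the most likely token, or abstain when no token is confident.

    A plain language model would always return the top token, even when
    the distribution is nearly flat - that is where hallucinations come
    from. The threshold (a hypothetical value) adds an escape hatch.
    """
    probs = softmax(logits)
    token, p = max(probs.items(), key=lambda kv: kv[1])
    return token if p >= threshold else "I don't know"

# One token clearly dominates: answer confidently.
print(answer({"Paris": 5.0, "London": 1.0, "Berlin": 0.5}))   # → Paris
# Probability mass is spread evenly: abstain instead of guessing.
print(answer({"1912": 1.1, "1913": 1.0, "1914": 0.9}))        # → I don't know
```

In a real model the “confidence” signal is far subtler than a single softmax threshold, but the principle is the same: the model needs an explicit mechanism that maps uncertainty to a refusal rather than to a plausible-sounding guess.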

Trust in AI chatbots remains a critical issue. ChatGPT has long displayed warnings about potential inaccuracies, but the model now has a more explicit mechanism for avoiding hallucinations: acknowledging what it does not know.
