AI Chatbots' Inability to Spot a Joke Fuels Bogus Answers

As AI chatbots become more popular, concerns about their ability to interpret information and provide accurate facts continue to rise. Different AI products are citing each other and demonstrating an inability to differentiate between satire and serious stories, creating an environment where their responses lack credibility.

In recent months, Big Tech firms have rushed to release chatbots such as ChatGPT in the quickly evolving world of AI, potentially jeopardizing the integrity of the web's information ecosystem. Distrust in AI is rising because these tools frequently fail to distinguish between real information and false or satirical information — not to mention the notoriously woke bias demonstrated by most AI systems.

The OpenAI logo seen on a screen with the ChatGPT website displayed on a mobile phone in this illustration in Brussels, Belgium, on December 12, 2022. (Photo by Jonathan Raa/NurPhoto via Getty Images)

The Verge reported that Microsoft's Bing chatbot recently claimed that Google's Bard chatbot had been deactivated. The chatbot supported its response with a news article discussing a tweet, which itself was based on a joke comment from Hacker News. This situation highlights the problem of AI misinformation telephone, in which chatbots inadvertently misinterpret stories about themselves and exaggerate their own abilities, frequently based on a single joke or dubious source.

Although this latest AI mistake may seem absurd, it highlights a worrying weakness in these systems. If a single joke comment can start such a chain of false information, these systems are likely to become filled with misinformation, especially when that misinformation is spread by the corporate media.

Although Big Tech companies label their chatbots as "experiments" and "collaborations" rather than search engines, these warnings do not sufficiently address the problem. AI chatbots have cited fake books and made up stories, demonstrating their potential to spread false information. The latest instance of chatbots citing one another's errors only makes the issue worse.

Breitbart News has reported on Silicon Valley rushing into AI to capitalize on the ChatGPT craze, a trend that is likely to make this problem worse before it gets better.

Read more at the Verge here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship. Follow him on Twitter @LucasNolan