As AI chatbots become increasingly prevalent, experts warn of the dangers of disinformation networks spreading false information through these platforms. Can you trust what you read from a chatbot?
Understanding AI Chatbots: How to Spot Errors
AI-based chatbots have become increasingly popular for finding information online, but they are not immune to errors. Some of these errors stem from the chatbots' reliance on data scraped from the internet, a portion of which has been deliberately seeded by disinformation networks, including pro-Russian ones.
AI chatbots are computer programs designed to simulate human-like conversations with users.
They use natural language processing (NLP) and machine learning algorithms to understand and respond to user inputs.
Chatbots can be integrated into various applications, including messaging platforms, websites, and mobile apps.
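To make the idea concrete, here is a deliberately minimal, rule-based sketch of a chatbot's request-response loop. It is a toy illustration only: production chatbots replace the keyword matching below with NLP pipelines and machine learning models, and the canned responses are invented for this example.

```python
# Toy rule-based chatbot: maps keywords found in user input to canned answers.
# Real chatbots use NLP and ML models instead of simple keyword matching.
RESPONSES = {
    "hours": "We are open 9:00-17:00, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
}

def reply(user_input: str) -> str:
    """Return the first canned answer whose keyword appears in the input."""
    text = user_input.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Could you rephrase?"

print(reply("What are your opening hours?"))
```

Even this toy version shows the core limitation the article discusses: the bot can only be as reliable as the data behind its answers.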
According to a report by Grand View Research, the global chatbot market size is expected to reach $1.3 billion by 2025.
AI chatbots are used in customer service, e-commerce, and healthcare industries for tasks such as answering frequently asked questions, providing product recommendations, and assisting with medical diagnoses.
The Role of Disinformation Networks
Pro-Russian websites and propaganda networks, such as Portal Kombat, have been found to seed false information that AI chatbots then repeat. This is distinct from hallucination (also called confabulation), in which a model invents information on its own; here, the false content originates in the data the models ingest. These networks recycle third-party content from many sources, including social media accounts, Russian news agencies, and the official websites of local institutions.
(The network's name echoes Mortal Kombat, the American video-game franchise, but the two are unrelated.)
The Impact on Chatbot Outputs
Analysts believe that the sheer volume of articles published by these networks affects chatbot outputs. They suspect that this is precisely the purpose of the entire network: to spread disinformation through large language models (LLMs) used by chatbots.

A disinformation network is a complex system of individuals, organizations, and online platforms that spread false or misleading information.
These networks often use social media to disseminate propaganda, manipulate public opinion, and influence decision-making processes.
According to a study by the Oxford Internet Institute, 70% of adults in the US have encountered 'fake news stories' on social media.
Disinformation networks can be difficult to detect, as they often employ sophisticated tactics to evade detection and maintain their anonymity.
How to Protect Yourself from AI-Generated Errors
To avoid falling prey to these errors, experts recommend several strategies:
- Check sources: Verify the credibility and transparency of the websites and web services a chatbot cites.
- Compare sources: Cross-reference chatbot responses with reliable, independent sources before acting on them.
- Stay vigilant: Be aware that misinformation circulates online and take steps to verify information before sharing it.
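The "check sources" step can be partly automated. The sketch below flags cited URLs whose domains appear on a blocklist; the blocklist entries here are hypothetical placeholders, since real fact-checking organizations maintain their own curated lists.

```python
from urllib.parse import urlparse

# Hypothetical blocklist for illustration; in practice, use a curated list
# from a fact-checking organization.
SUSPECT_DOMAINS = {"example-propaganda.net", "fake-news.example"}

def flag_suspect_sources(urls):
    """Return the subset of cited URLs whose domain is on the blocklist."""
    flagged = []
    for url in urls:
        domain = urlparse(url).netloc.lower()
        if domain in SUSPECT_DOMAINS:
            flagged.append(url)
    return flagged

cited = ["https://example-propaganda.net/story", "https://www.bbc.co.uk/news"]
print(flag_suspect_sources(cited))
```

A blocklist check is only a first filter: it catches known bad actors but not newly created sites, which is why the manual cross-referencing step above remains essential.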
Conclusion
AI-based chatbots are not infallible, and their outputs can be influenced by various factors. By understanding how these chatbots work and taking steps to verify information, users can protect themselves from errors and stay informed in a rapidly changing world.