Last week, Google released a Super Bowl ad showcasing its AI model, Gemini, generating product descriptions for a local Wisconsin cheese mart. However, the ad quickly drew online scrutiny as it appeared to contain an erroneous fact about gouda consumption.
Google executives initially defended the accuracy of the statistic but later quietly edited the YouTube version of the ad to correct the error. This move raised eyebrows, as it seemed to contradict Google-owned YouTube’s policies.
Further investigation revealed that Gemini did not even generate the product description. Instead, it plagiarized the cheese mart’s existing web copy, which had been published years before Gemini was released, and before generative AI was making waves in the industry.

Archived versions of the cheese mart’s website show that it has been using the exact same product description since at least 2020. Here is the archived webpage, and here is the original text that appeared in the Super Bowl ad, as captured by travel blogger Nate Hake.
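For readers who want to verify this kind of claim themselves, the Internet Archive exposes a public availability endpoint for finding the snapshot closest to a given date. A minimal sketch in Python (the domain here is a placeholder, not the cheese mart’s actual site):

```python
import json
import urllib.parse
import urllib.request  # only needed for the live request shown in comments


def wayback_query_url(page_url: str, timestamp: str) -> str:
    """Build a Wayback Machine availability-API query for the snapshot
    of page_url closest to the given YYYYMMDD timestamp."""
    params = urllib.parse.urlencode({"url": page_url, "timestamp": timestamp})
    return f"https://archive.org/wayback/available?{params}"


def closest_snapshot(api_response: dict):
    """Pull the closest snapshot URL out of the API's JSON response,
    or return None if nothing is archived."""
    snap = api_response.get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None


# Build the query for a hypothetical page as of early 2020:
query = wayback_query_url("example-cheese-mart.com", "20200101")

# To actually fetch it (requires network access):
# with urllib.request.urlopen(query) as resp:
#     print(closest_snapshot(json.load(resp)))
```

Comparing the snapshot's product copy against the ad text then comes down to an ordinary side-by-side reading, which is exactly what surfaced the match here.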
The situation is beyond bizarre. Either Google faked the ad entirely, prompted its AI to reproduce the web page’s existing copy word-for-word, or asked the AI to come up with original copy only for it to lift the old version instead. In the publishing industry, that is referred to as plagiarism.
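Word-for-word overlap of this kind is trivial to quantify with a plain text-similarity check. A minimal sketch using Python’s standard `difflib` (the strings are illustrative placeholders, not the actual cheese mart copy):

```python
import difflib

# Hypothetical snippets standing in for the website copy and the ad copy.
site_copy = "Our smoked gouda is aged to perfection and hand-cut daily."
ad_copy = "Our smoked gouda is aged to perfection and hand-cut daily."

# SequenceMatcher.ratio() returns 1.0 for identical strings; anything
# near 1.0 on multi-sentence text suggests verbatim copying rather than
# independently generated prose.
ratio = difflib.SequenceMatcher(None, site_copy, ad_copy).ratio()
print(f"similarity: {ratio:.2f}")  # identical strings -> 1.00
```

A ratio this high on text of any meaningful length is essentially never coincidental, which is why verbatim matches like the one in the ad are so damning.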
Why didn’t Google trust its own technology for a simple Super Bowl slot? The altered ad isn’t entirely different; only the first two sentences were changed, meaning that half of the allegedly AI-generated copy was still written by a human years before e-commerce shops could have gotten their hands on Gemini.
The incident is a bad look for Google, raising questions about the company’s trust in its own technology and underscoring how much transparency and accuracy matter in AI-generated content.