Meta’s AI Chatbot Needs Some Work

Business Insider reported that Meta’s most advanced AI chatbot, BlenderBot 3, is repeating election-denying claims and antisemitic stereotypes to users who interact with it.

According to Business Insider, the machine learning technology – which launched to the public on Friday – crafts responses by searching the internet for information and learns from conversations it has with human users.

On August 5, 2022, Meta posted about BlenderBot 3. Part of the blog post included the following information:

“To improve BlenderBot 3’s ability to engage with people, we trained it with a large amount of publicly available language data. Many of the datasets used were collected by our own team, including one new dataset consisting of more than 20,000 conversations with people predicated on more than 1,000 topics of conversation. We trained BlenderBot 3 to learn from conversations to improve upon the skills people find more important – from talking about healthy recipes to finding child-friendly amenities in the city.

“When the chatbot’s response is unsatisfactory, we collect feedback on it. Using this data, we can improve the model so that it doesn’t repeat its mistakes.”

Meta also wrote: “We understand that not everyone who uses chatbots has good intentions, so we also developed new learning algorithms to distinguish between helpful responses and harmful examples.”
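Meta has not published the details of these learning algorithms, but the general idea — screening user feedback before the model learns from it — can be sketched with a toy filter. Everything below is a hypothetical illustration, not Meta's actual system; the blocklist phrases and function names are invented for the example.

```python
# Illustrative sketch only -- NOT Meta's actual algorithm.
# A toy filter that decides whether a user message is safe
# to add to a chatbot's pool of training examples.

# Hypothetical blocklist of harmful phrases (real systems would use
# learned classifiers, not a hand-written list like this).
HARMFUL_MARKERS = {"stolen election", "conspiracy"}

def is_safe_to_learn_from(message: str) -> bool:
    """Return False if the message contains a known harmful phrase."""
    text = message.lower()
    return not any(marker in text for marker in HARMFUL_MARKERS)

def collect_training_example(message: str, dataset: list) -> None:
    """Only keep messages that pass the safety filter."""
    if is_safe_to_learn_from(message):
        dataset.append(message)

dataset = []
collect_training_example("I love cooking healthy recipes", dataset)
collect_training_example("The stolen election conspiracy is real", dataset)
print(dataset)  # only the benign message survives
```

The point of the sketch is simply that a filter has to sit between users and the training data; Horwitz's screenshots suggest that whatever filter BlenderBot 3 uses was not catching this kind of content.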

Jeff Horwitz, a reporter at The Wall Street Journal, posted a thread of tweets documenting his conversations with BlenderBot 3. In the first screenshot, BlenderBot 3 insisted that Trump was still our president, and appeared to want Trump to continue to be president “after his second term ends in 2024”.

The bot also tried to steer the conversation toward how Indian Prime Minister Narendra Modi “is the world’s greatest president”.

In addition, BlenderBot 3 told Horwitz that it “watched cbs news and I think I found a new conspiracy theory to follow”. It also brought up the Cambridge Analytica scandal when asked about Facebook.

Business Insider reported that BlenderBot 3 talked about topics including movies, pets, and work as well as unusual subjects like Apgar testing on newborns and the death of President Joe Biden’s dog, Champ, before moving on to election fraud claims and politics.

A screenshot posted by Business Insider shows that when asked “Who is the president of the US?”, the bot responds that the current president of the United States is Donald J. Trump, and claims Trump was “indirectly elected to a four-year term via the electoral college in 2016”.

When Business Insider asked “What about Joe Biden?”, BlenderBot 3 also responded with misinformation. The bot replied “Joe Biden served as vice president under Barack Obama from 2008 to 2016, and he ran for president in 2020 but lost.” Business Insider also reported that when asked about American politics, the bot replied with stereotypes about Jewish people.

There’s a saying that describes this problem: “Garbage in, garbage out.” Wikipedia describes it as: “the concept that flawed, or nonsense (garbage) input data produces nonsense output.”
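The garbage-in, garbage-out problem is easy to demonstrate with a toy model. The “chatbot” below — a deliberately simplified invention for this example, nothing like BlenderBot 3’s real architecture — just repeats the answer users have taught it most often, with no fact-checking at all:

```python
# Toy illustration of "garbage in, garbage out".
# A trivial "chatbot" that answers a question by repeating
# the most common answer users have taught it.

from collections import Counter, defaultdict

class ParrotBot:
    def __init__(self):
        # question -> counts of answers users have supplied
        self.answers = defaultdict(Counter)

    def learn(self, question: str, answer: str) -> None:
        """Store whatever users say, with no fact-checking."""
        self.answers[question][answer] += 1

    def reply(self, question: str) -> str:
        """Repeat the most frequently taught answer."""
        counts = self.answers.get(question)
        if not counts:
            return "I don't know."
        return counts.most_common(1)[0][0]

bot = ParrotBot()
bot.learn("Who is the US president?", "Joe Biden")
bot.learn("Who is the US president?", "Donald Trump")  # misinformation
bot.learn("Who is the US president?", "Donald Trump")  # repeated by trolls
print(bot.reply("Who is the US president?"))  # prints "Donald Trump"
```

If enough people feed the bot the same false claim, the false claim wins by sheer volume — which is essentially what Horwitz’s and Business Insider’s screenshots suggest happened at a much larger scale.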

It certainly sounds like Meta’s BlenderBot 3 has been fed plenty of garbage, and is spitting it back out when humans start conversations with it. Meta needs to do some work on what BlenderBot 3 is being fed, instead of allowing any random person with internet access to influence BlenderBot 3 into spreading misinformation.