Meta announced that, building on its AI research and advancements, it has developed the first model capable of automatically scanning hundreds of thousands of citations at once to check whether they truly support the corresponding claims. Volunteers double-check Wikipedia’s footnotes, but as the site continues to grow, it is challenging to keep pace with the more than 17,000 new articles added each month.
Automated tools can help identify gibberish or statements that lack citations, but helping human editors determine whether a source actually backs up a claim is a much more complex task – one that requires an AI system’s depth of understanding and analysis.
Meta AI states that it has already begun to develop the building blocks of the next generation of citation tools. Last year, it released an AI model that integrates information retrieval and verification, and it is training neural networks to learn more nuanced representations of language so they can pinpoint relevant source material in an internet-size pool of data.
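To make the retrieval half of that idea concrete, here is a deliberately simplified sketch of ranking candidate sources for a claim by similarity. This is not Meta’s method: Sphere uses learned dense neural representations over an internet-size corpus, while this toy version uses plain bag-of-words cosine similarity, purely to illustrate the shape of the task (embed the claim, embed each candidate source, rank by similarity).

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words token counts. Real systems like
    # Sphere use learned dense neural representations instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def rank_sources(claim: str, sources: list[str]) -> list[tuple[float, str]]:
    # Score every candidate source against the claim, best first.
    claim_vec = embed(claim)
    return sorted(((cosine(claim_vec, embed(s)), s) for s in sources),
                  reverse=True)

# Hypothetical claim and candidate sources, for illustration only.
claim = "the eiffel tower was completed in 1889"
sources = [
    "construction of the eiffel tower finished in 1889 in paris",
    "the louvre is a famous museum in paris",
]
best_score, best_source = rank_sources(claim, sources)[0]
```

In this toy run the first source ranks highest because it shares the claim’s key tokens; a learned model would instead capture paraphrase and meaning, which is exactly the “more nuanced representations of language” Meta says it is training for.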
TechCrunch reported that Sphere’s first user is Wikipedia, which is using it to automatically scan entries and identify when citations are strongly or weakly supported.
According to TechCrunch, the Wikimedia Foundation, which oversees Wikipedia, has been weighing up new ways of leveraging all that data. Last month, it announced an Enterprise tier and its first two commercial customers, Google and the Internet Archive, which use Wikipedia-based data for their own business-generating interests and will now have more formal service agreements wrapped around that.
TechCrunch also stated: “On Meta’s part, the company continues to be weighed down by a bad public perception, stemming in part from accusations that it enables misinformation and toxic ideas to gain ground freely. …It’s a mess for sure, but in that regard launching something like Sphere feels like a PR exercise for Meta, as much as a potentially useful tool.” According to TechCrunch, if it works, it shows that there are people in the organization trying to work in good faith.
I find it interesting that Meta posted a “NOTE” at the end of its announcement. “Wikipedia and Meta are not partnering on this project. The project is still in the research phase and not being used to automatically update any content on Wikipedia.”
The thing about AI doing work previously done by humans is that an AI lacks discernment. A human can easily spot when a cited source turns out to be misleading (or has a dead link). Personally, I’m not comfortable allowing an AI to decide whether one cited source is more or less valid than another on the same topic. I’m unconvinced that an AI has the nuance to discern why one source is better than another.