Wikipedia is the largest online encyclopedia, with content ranging from academic research to pop culture. It serves as a source of information on almost any topic. However, managing millions of articles and ensuring their credibility through verified sources can be a difficult task. To address this challenge, Fabio Petroni, Samuel Broscheit, and colleagues developed an artificial intelligence (AI) system called SIDE, which helps to evaluate the accuracy and relevance of Wikipedia references.

SIDE is trained on a massive dataset of Wikipedia references, learning from the contributions of thousands of editors. SIDE works by checking whether references properly support the claims to which they are connected. If they don't, the tool suggests better references. Here's how SIDE accomplishes this task:

  1. The user provides a claim that requires citation.
  2. SIDE retrieves potential references from the web using Sphere, an open-source search engine.
  3. SIDE ranks the references using its verification engine, which is a neural network that predicts how well each source supports the claim.
  4. If the original reference is not ranked highest, SIDE suggests the better option it found.
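The steps above can be sketched as a simple ranking loop. This is an illustrative toy, not SIDE's actual implementation: `score_support` stands in for the neural verification engine (here it is just word overlap), and the candidate list stands in for results retrieved via Sphere.

```python
# Hypothetical sketch of SIDE's citation-checking loop.
# score_support is a toy stand-in for the neural verification engine;
# the candidates list stands in for passages retrieved with Sphere.

def score_support(claim: str, passage: str) -> float:
    """Toy support score: fraction of the claim's words
    that also appear in the candidate passage."""
    claim_words = set(claim.lower().split())
    passage_words = set(passage.lower().split())
    return len(claim_words & passage_words) / len(claim_words)

def suggest_reference(claim, original_ref, candidates):
    """Rank the original reference alongside retrieved candidates;
    return a better reference if one outranks the original, else None."""
    ranked = sorted([original_ref] + candidates,
                    key=lambda ref: score_support(claim, ref),
                    reverse=True)
    best = ranked[0]
    return None if best == original_ref else best

claim = "The Eiffel Tower was completed in 1889"
original = "A guide to Paris restaurants and cafes"
candidates = [
    "The Eiffel Tower was completed in 1889 for the World's Fair",
    "Paris is the capital of France",
]
print(suggest_reference(claim, original, candidates))
```

If the existing reference already scores highest, the function returns `None`, mirroring step 4: SIDE only proposes a change when it finds a source it predicts supports the claim better.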

In the researchers' evaluations, SIDE's top-ranked reference matched the one already cited in the article almost half the time. In cases where SIDE suggested a different reference, people preferred SIDE's recommendation more than 60% of the time, and more than 80% of the time when SIDE predicted the existing reference was highly likely to be wrong. These findings demonstrate that SIDE is effective at providing citation recommendations, which, if implemented widely on the site, could improve the credibility and accuracy of information on Wikipedia. This improvement is particularly important today, when misinformation and fake news are rampant and accurate information dissemination is crucial.

This study was led by Fabio Petroni and Samuel Broscheit. Petroni is a founder of the London-based company Samaya AI, and Broscheit is an Applied Scientist at Amazon Alexa AI and a PhD candidate at the University of Mannheim in Baden-Württemberg, Germany.

Managing Correspondent: Marwa Osman

Press article: AI tidies up Wikipedia’s references — and boosts reliability (News from Nature)

Original article: Improving Wikipedia verifiability with AI (Nature Machine Intelligence)

Image Credit: Gerd Altmann from Pixabay
