How Wikimedia is combating misinformation using AI and human intervention

From bots and extensions to assisted editing programs and web applications, the Wikimedia Foundation is monitoring Wikipedia this election season.

More than 60 countries and the European Union are holding elections in 2024. This year more people than ever before—almost half the world’s population—are eligible to vote, which means more people than ever will be looking for information to shape their decisions. India is in the final phase of its general election, with votes to be counted on June 4. For many election-related queries, a Wikipedia page is among the first results a search engine shows.

To understand how Wikipedia is tackling misinformation and disinformation at such a crucial time, The Indian Express spoke to Costanza Sciubba Caniglia, head of counter-disinformation strategy at the Wikimedia Foundation, the non-profit that hosts Wikipedia.

The information on Wikipedia is created and compiled by a community of more than 2,65,000 volunteers around the world. Together, they compile and share information on notable topics, citing reliable sources. Sciubba Caniglia said the volunteers work vigilantly to remove content that does not comply with the site’s policies. She also noted that the entire process of content moderation by Wikipedia volunteers is open and transparent.

Sciubba Caniglia said the organisation believes artificial intelligence (AI) should support the work of humans, not replace them. The approach to AI on Wikipedia has always been a “closed-loop” system in which humans are in the loop: they edit, improve, and audit the work done by the AI. While all content on Wikipedia is created and curated by humans, since 2002 some volunteers have used AI and machine learning (ML) tools to support their work, especially on time-consuming and repetitive tasks.
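To make that human-in-the-loop idea concrete, here is a minimal, hypothetical Python sketch. It is not Wikimedia’s actual tooling: the edit fields, damage scores, and review threshold are assumptions for illustration. The point it demonstrates is the division of labour Sciubba Caniglia describes: a model only flags potentially damaging edits, and a volunteer makes and records the final, auditable decision.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Edit:
    edit_id: int
    summary: str
    damage_score: float            # hypothetical ML score: probability the edit is damaging
    needs_review: bool = False
    reviewed_by: Optional[str] = None
    reverted: bool = False

REVIEW_THRESHOLD = 0.7             # assumed cutoff; a real project would tune this per wiki

def triage(edits: List[Edit]) -> List[Edit]:
    """Machine step: flag suspicious edits for review, never revert automatically."""
    for edit in edits:
        edit.needs_review = edit.damage_score >= REVIEW_THRESHOLD
    return [e for e in edits if e.needs_review]

def human_review(edit: Edit, reviewer: str, revert: bool) -> None:
    """Human step: a volunteer inspects the flagged edit and makes the final call."""
    edit.reviewed_by = reviewer
    edit.reverted = revert

if __name__ == "__main__":
    queue = [
        Edit(1, "fix typo in infobox", damage_score=0.05),
        Edit(2, "blank section on election results", damage_score=0.92),
    ]
    for flagged in triage(queue):
        # The model only surfaces the edit; the decision and the reviewer are recorded.
        human_review(flagged, reviewer="VolunteerA", revert=True)
        print(flagged)
```

In this sketch the model’s output is advisory: every reversion carries a human reviewer’s name, which mirrors the open, auditable moderation process the article describes.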
