For centuries, encyclopedias were the exclusive domain of a small group of scholars and experts: only they could condense human knowledge into those printed volumes. With the internet and Wikipedia, everything changed. Suddenly anyone could access an open encyclopedia, and anyone could contribute to it. Wikipedia has become an excellent information resource, though not one free of problems, and now a new threat looms over it: it’s called ChatGPT.
The community, divided. As reported by Vice, a recent meeting of the Wikipedia community discussed the potential use of generative AI models to assist the work of Wikipedia editors and administrators. The risk that texts generated by models like ChatGPT contain errors – or even fabrications and “hallucinations” – is high, and that demands human supervision that could end up creating more work than it saves.
ChatGPT can be useful… if you keep an eye on it. Amy Bruckman, a professor and the author of a book on Wikipedia, explained the danger: “the risk for Wikipedia is that people end up lowering the quality by posting things that haven’t been verified. I don’t think there’s anything wrong with using [ChatGPT] for a first draft, but every point needs to be checked.”
Setting limits. Those responsible for Wikipedia have drafted a document on large language models (LLMs) such as ChatGPT and their potential application to editors’ work. The basic guideline is not to publish content generated by prompting an LLM, although these models can be used as “advisors” during the writing process. If these tools are used, that must be made clear in the edit summary of the affected article.
But AI could make Wikipedia even better. Although there are opponents of this type of technology, spokespeople for the Wikimedia Foundation indicated in that Vice article that AI represents an opportunity to support the work of Wikipedia volunteers and to help Wikimedia projects grow. One of those spokespeople noted that “AI can work better as a complement to the work that humans do in our project.”
Meta already made its proposal. In fact, there has already been another AI project aimed at making Wikipedia better. It was announced last summer by Meta, which developed an artificial intelligence system called Sphere. After being trained on 4 million Wikipedia text snippets and a large dataset, the goal was for it to help verify the sources and references of Wikipedia articles.
Cite and verify, paramount. It seems, then, that Wikipedia does not oppose or prohibit the use of models such as ChatGPT to help improve the platform, but there is suspicion and the risks are well known. As Bruckman put it, “content will be only as reliable as the number of people who have verified it with strong citation practices [to original and valid sources].”
In Xataka | The “great” redesign of Wikipedia: everything that changes in the encyclopedia in its first reform in decades