Amidst growing concerns over the dissemination of false information, researchers say it is now more important than ever to curb the propagation of fabricated claims.
UW graduate students Jason L. Deglint and Ibrahim Ben Daya, in collaboration with Chris Dulhanty and Alexander Wong, the Canada Research Chair in Artificial Intelligence and Medical Imaging, have developed an automated fact-checking AI tool that can detect fake news with high accuracy.
The screening tool sets a new benchmark for accuracy: the researchers say it correctly flags fake news nine times out of ten.
The tool uses a technique called stance detection, an emerging area of research that determines the degree of relationship between the principal claim of one story and other news stories. In other words, the learning algorithm compares the content of the claim with news from a variety of different sources on the same topic and estimates whether these stories substantiate or refute the claim.
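To make the idea concrete, here is a deliberately simple sketch of scoring how related an article is to a claim. This is not the researchers' model, which is a trained neural network; it is a toy word-overlap baseline, and all names, example texts, and thresholds below are illustrative assumptions.

```python
# Toy stance-detection baseline: word-overlap (Jaccard) similarity between
# a claim and candidate articles. The real system uses a learned model;
# this sketch only illustrates scoring claim/article relatedness.

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(text.lower().split())

def stance_score(claim, article):
    """Jaccard overlap between claim and article vocabularies (0..1)."""
    a, b = tokenize(claim), tokenize(article)
    return len(a & b) / len(a | b) if a | b else 0.0

def label_stance(score, related_threshold=0.15):
    """Map a similarity score to a coarse stance label."""
    return "discusses" if score >= related_threshold else "unrelated"

claim = "the new vaccine is safe for children"
articles = [
    "study finds the new vaccine safe for children under twelve",
    "local sports team wins championship game",
]
for article in articles:
    print(label_stance(stance_score(claim, article)))
```

A production stance detector would also distinguish *agree* from *disagree*, which requires understanding negation and paraphrase, not just vocabulary overlap.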
Alexander Wong, a professor of Systems Design Engineering at UW, said the automated fact-checking system consists of a series of sequential sub-tasks.
First, during the document retrieval stage, the AI gathers news stories pertaining to the claim.
Next, in the stance detection stage, the stance, or relative position, of each article with respect to the claim is identified.
Then the penultimate step, reputation assessment, determines the trustworthiness or reputation of each news source.
Finally, the claim verification step determines the veracity of the claim in question.
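The four sequential stages above can be sketched as a simple pipeline. Every function here is a stub standing in for a trained model, and the sources, weights, and decision rule are invented for illustration; the sketch only shows how the stages compose.

```python
# Minimal sketch of the four-stage fact-checking pipeline. Each stage is a
# stand-in: the actual system uses trained models and real retrieval.

def retrieve_documents(claim):
    """Stage 1: gather news stories pertaining to the claim (stubbed)."""
    return [
        {"source": "outlet-a", "text": "evidence supporting the claim"},
        {"source": "outlet-b", "text": "evidence refuting the claim"},
    ]

def detect_stance(claim, article):
    """Stage 2: position of the article relative to the claim (stubbed)."""
    return "agree" if "supporting" in article["text"] else "disagree"

def assess_reputation(source):
    """Stage 3: trustworthiness weight for a news source (stubbed)."""
    return {"outlet-a": 0.9, "outlet-b": 0.6}.get(source, 0.5)

def verify_claim(claim):
    """Stage 4: reputation-weighted vote over the article stances."""
    score = 0.0
    for article in retrieve_documents(claim):
        weight = assess_reputation(article["source"])
        score += weight if detect_stance(claim, article) == "agree" else -weight
    return "likely true" if score > 0 else "likely false"

print(verify_claim("example claim"))
```

The design point is that reputation acts as a weight on each stance vote, so a disagreement from a low-reputation outlet counts for less than agreement from a high-reputation one.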
Wong said fact-checkers could use this screening tool as an efficient means to flag fake news, an otherwise laborious task when done manually.
The learning algorithm leverages the features of an open-source, bidirectional language model, which allows the neural network to gain a deeper understanding of the context of the principal claim and the referenced articles, compared to earlier unidirectional models.
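The distinction between unidirectional and bidirectional context can be illustrated with a toy masked-word example. Real bidirectional models (BERT-style architectures) learn this with attention over all positions; the sentence and mask position below are invented, and the snippet only shows which tokens each kind of model may condition on.

```python
# Toy contrast between unidirectional and bidirectional context for a
# masked word. The only point is which tokens are visible to the model.

sentence = ["the", "bank", "[MASK]", "the", "loan", "application"]
mask_index = 2

# A unidirectional (left-to-right) model sees only tokens before the mask.
unidirectional_context = sentence[:mask_index]

# A bidirectional model conditions on tokens on both sides of the mask.
bidirectional_context = sentence[:mask_index] + sentence[mask_index + 1:]

print(unidirectional_context)  # ['the', 'bank']
print(bidirectional_context)   # ['the', 'bank', 'the', 'loan', 'application']
```

With only the left context, "bank approved" and "bank denied" are equally plausible completions; the right-side tokens ("the loan application") are what narrow the prediction, which is the advantage a bidirectional model exploits.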
Garnering reliable datasets for training the model was one of the major challenges the research team had to deal with, as bias in the data leads to undesirable outcomes. For that reason, the intended primary end-users of the automated tool will be journalists and fact-checkers.
“That said, a simple user interface in the form of a browser plug-in will be rolled out in the near future,” Wong said.
Although there have been some fascinating improvements in the algorithm's capability to flag fake news, keeping up with the barrage of disinformation in its varying forms remains a challenge. This prompts the researchers to work toward more sophisticated calibration of the tool's fake news detection.
The UW researchers are determined to further elevate the algorithm's performance by training their model on much larger datasets, including media outlets that report news in other languages.