A team of researchers from the University of Waterloo is building a blockchain system to publish news that is assured of being “truthful.” The researchers are developing the technology to combat “misinformation,” or “fake news,” which they say is threatening democratic institutions.
The technology has three elements: blockchain (which enables decentralized publication by parties with transparent interests), human judgment through a quorum of “validators,” and finally a check on the validators themselves through an “entropy-based incentive mechanism.” In essence, it boils down to “popular opinion,” though the researchers would be loath to describe it in that manner.
From TechXplore:
Ultimately, the article would be validated as trustworthy on the platform only if most of the validators’ opinions align with the truth. As for the user who published the article in the first place, that person might then receive a reward using the entropy-based incentive mechanism. But if the article is exposed as fake news, the user who published it could be penalized. Meanwhile, this entropy measure would convey to the end-user the degree of uncertainty in the output.
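The excerpt does not spell out how the entropy measure works, so the following is only a minimal sketch of what such a mechanism might look like: Shannon entropy over the validators’ votes as the “degree of uncertainty,” combined with a hypothetical quorum rule. The function names, thresholds, and vote labels are my own assumptions, not details from the researchers’ system.

```python
from collections import Counter
from math import log2

def vote_entropy(votes):
    """Shannon entropy (in bits) of validator votes.
    0.0 = unanimous; 1.0 = a maximally split binary vote."""
    counts = Counter(votes)
    total = len(votes)
    return -sum((c / total) * log2(c / total) for c in counts.values())

def validate(votes, quorum=0.5, max_entropy=0.6):
    """Hypothetical quorum rule: mark an article 'trustworthy' only if a
    majority of validators vote 'real' AND disagreement (entropy) stays
    below a threshold. Returns (verdict, uncertainty)."""
    share_real = votes.count("real") / len(votes)
    uncertainty = vote_entropy(votes)
    return share_real > quorum and uncertainty <= max_entropy, uncertainty
```

On this sketch, a 9-to-1 vote passes (entropy about 0.47 bits), while a 5-to-5 split fails (entropy 1.0 bit), which is one plausible reading of how an entropy score could both gate rewards and signal uncertainty to the end-user.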
At this point the researchers have built and are working with an early prototype. While initial test results are promising, their system is still in the development stage and needs significant effort before it is usable in the real world. Even so, an industrial partner, Ripple Labs Inc.—a leading provider of crypto solutions for businesses—is part of the Waterloo research.
Chien-Chih Chen, one of the project’s lead researchers, said of the project, “We are confident our system has the potential to be applied in practical situations within the next few years. We believe it can provide a robust solution to fake news. I hope my research can impact the world to make a positive difference.”
The use of the term “fake news,” and the belief that “truth” is something popular opinion can calibrate, show the hubris behind this project, which some might categorize as fake news in and of itself. Promising a machine to assure truthfulness in news is fake news, right from the start. Whole philosophies have risen and fallen on such claims.
To emphasize my point, here is how the publication that perpetuated this fake news categorized examples of “fake news” that this project might debunk:

The danger of disinformation—or fake news—to democracy is real. There is evidence fake news could have influenced how people voted in two important political events in 2016: Brexit, the exit of the United Kingdom from the European Union; and the U.S. presidential election that put Donald Trump in power. More recently the Canadian government has warned Canadians to be aware of a Russian campaign of disinformation surrounding that country’s war against Ukraine. Although big tech companies—including Facebook and Google—have established policies to prevent the spread of fake news on their platforms, they’ve had limited success.
First of all, the definition of democracy and the practice of democracy are as varied as the political factions that claim to be bastions of that term. Second of all, the notion that “disinformation” is a threat to democracy is fake news, once again. What is dangerous to some form of democratic spirit is the blocking of information under the name of protecting people from “disinformation.”
Totalitarians the world over have long used claims of misinformation to justify preventing the free and open exchange of information, for as long as totalitarian states have existed.
This project seems more like the nascence of a new method for governments to ensure only their narrative is perpetuated: a “disinformation-safe” machine that might be “decentralized” on the platform itself, but the platform will be licensed by the state that builds it.
Protecting the people from fake news is not real, not possible, and no government actually wants to do that (for they would be undermining their own justifications for power). What is real is your ability to discern efforts to approach truth in reporting (for pure truth is not accessible even in the very language that we use, which has ambiguity and subjectivity unavoidably built into it), not the ability of any machine or aggregate of human committees to discern truth.
You don’t need to protect the world from disinformation. You need to equip humans to protect themselves from disinformation, imperfect though that would be. No government (and these researchers are all dependent on that “government”) has any incentive to equip its citizens to discern approaches to truth from actual disinformation, for no government can stand, including the United States, without, in part, disseminating “fake news.”

