
When the news media first started to build the fake news narrative, the thought of many at the time was this: if Americans are unable to discern real news from fake news, that is a reflection of the poor quality of our government schools.  When the response to this ‘threat’ was to push for censorship on social media platforms, many responded that training people to discern fake news from real news (really, just training them to be critical thinkers and to use the internet to find more objective data and figure things out for themselves) would be far more effective, and not fundamentally in violation of the spirit of the Bill of Rights, which is to live dangerously in freedom, ESPECIALLY when it comes to expression, and even more essentially when it comes to news reporting.

A new study might give the public schools a bit of a defense, while indicting those who would have you believe humans are too stupid to figure out when people are making things up.  You might fool them to some degree, but ultimately people have a heuristic understanding of inauthenticity, of fakeness.

A study from MIT suggests that regular readers have proven to be virtually as effective as professional fact-checkers at discerning fake stories from real ones.  The study itself, however, is not being used to push back against the gatekeeper approach, protecting the dumbs from the dangerous misinformations, but rather to develop models in which reader groups of proportionally representative size fact-check the content users post on a given social media platform (in this case, Facebook).
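The model described above, small, politically balanced panels of lay readers whose ratings are aggregated, can be sketched in a few lines. To be clear, this is a minimal illustration, not the study's actual protocol: the "party" labels, the 1–7 accuracy scale, and plain averaging are all assumptions made for the sake of the sketch.

```python
import random
from statistics import mean

def balanced_panel(readers, size):
    """Draw a politically balanced panel of lay readers.

    Assumes each reader dict carries a self-reported "party" label
    ("D" or "R") -- a hypothetical stand-in for however the study
    actually balanced its groups.
    """
    dems = [r for r in readers if r["party"] == "D"]
    reps = [r for r in readers if r["party"] == "R"]
    half = size // 2
    return random.sample(dems, half) + random.sample(reps, half)

def crowd_score(story_id, panel):
    """Aggregate lay accuracy ratings (1 = definitely false ...
    7 = definitely true) by simple averaging -- one plausible
    aggregation rule, chosen here for illustration."""
    return mean(r["ratings"][story_id] for r in panel)
```

The intuition is that individual readers are noisy, but averaging across a balanced panel washes much of that noise out, which is roughly why a small crowd can approach the judgment of a professional fact-checker.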

Scienceblog itself refers to the problem of misinformation in the opening sentence of the article we blurb from below:

In the face of grave concerns about misinformation, social media networks and news organizations often employ fact-checkers to sort the real from the false. But fact-checkers can only assess a small portion of the stories floating around online.

“Grave concerns” is the tone this site chose to reflect, missing, like perhaps MIT itself, the broader ramifications of its study: that no filters are needed at all, and that if you took the time to educate people on how to discern stories for themselves, you would be far more effective at actually screening out misinformation.  But then again, it’s not misinformation these platforms hope to contain, for misinformation is regularly approved at the highest levels of government, media, and news on an almost daily basis.  Rather, it’s the orthodox version of reality, in terms of the phenomenology of it, as well as the moral, social, and cultural version of reality.

The DNC theorizes, based on the CCP model of recent decades, that it need not worry about a loss of productive excellence at the highest levels as it takes ever more control over the lives of the people of the land it coercively controls.

Thus, where concentrated power perceives it can extend itself and cut itself off from competition going forward, it will do so.  This is the reality of the DNC’s assumption of power, an ambition that would be fundamentally threatened if the perceived threat of misinformation to the welfare of others no longer held sway over any significant portion of the humans who live in this land, and this study certainly lends credence to those opposing the fear-based narrative of the DNC.

 

Study: Crowds can wise up to fake news

From scienceblog.com
2021-09-01 19:02:09
MIT
Excerpt:

 

In the face of grave concerns about misinformation, social media networks and news organizations often employ fact-checkers to sort the real from the false. But fact-checkers can only assess a small portion of the stories floating around online.

A new study by MIT researchers suggests an alternate approach: Crowdsourced accuracy judgements from groups of normal readers can be virtually as effective as the work of professional fact-checkers.

“One problem with fact-checking is that there is just way too much content for professional fact-checkers to be able to cover, especially within a reasonable time frame,” says Jennifer Allen, a PhD student at the MIT Sloan School of Management and co-author of a newly published paper detailing the study.

But the current study, examining over 200 news stories that Facebook’s algorithms had flagged for further scrutiny, may have found a way to address that problem, by using relatively small, politically balanced groups of lay readers to…

 

Read Full Article