Michael Lopez Brow and Caltech undergraduate student Alex Kwai Michael claim to have created a program to defeat “Fake News”.
Here is how they explained their software:
“When you’re looking for fake news [versus] perfect news, we’re looking for a lot of very subtle biases that might be present in articles, and how these biases in the articles might affect the users reading them.”
Right off the bat, we see an issue: they are very particular about using the word “bias”. Let me explain why this is a problem. Bias doesn’t mean something is “Fake News”. Bias means something different to different people. So how exactly will they create a computer program that makes sure it is fair?
They were asked this question:
“How can you tell, or how does the program tell, the difference between something in the Washington Post or CNN that may be leaning left because the author has liberal tendencies, versus something that is just straight up not?”
Here is their response:
“Right, so I think it’s important for our definition of fake news in our software: we don’t just look at biased news, we’re looking at news sites that deliberately try to deceive their viewers. An example is the website MSNBC.co, which is trying to impersonate MSNBC, and that’s the kind of site-level fake news that we’re trying to tackle.”
Notice how, when confronted, they switch their language. No longer do they point to bias; they now point to news sites like MSNBC.co that try to masquerade as MSNBC. Obviously, I would never support a website that attempts to deceive its readers by posing as a different outlet. My question is: will this only be used to detect obvious impersonators like that, or will it be expanded to the point where we are shutting down opinions we disagree with?
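For what it’s worth, the site-level impersonation they describe is the easiest kind of detection to imagine. Here is a minimal sketch, entirely my own assumption and not their actual code, of how a program might flag a look-alike domain by measuring how close it is to a short list of known outlets (the outlet list and distance threshold are invented for illustration):

```python
# Hypothetical sketch: flag domains that closely imitate well-known outlets.
# KNOWN_OUTLETS and max_distance are illustrative assumptions, not anything
# from the students' actual software.

KNOWN_OUTLETS = ["msnbc.com", "cnn.com", "washingtonpost.com"]

def edit_distance(a, b):
    """Classic Levenshtein distance, computed row by row."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def looks_like_impersonation(domain, max_distance=2):
    """True if the domain is near, but not identical to, a known outlet."""
    return any(0 < edit_distance(domain.lower(), outlet) <= max_distance
               for outlet in KNOWN_OUTLETS)

print(looks_like_impersonation("msnbc.co"))   # one character away from msnbc.com
print(looks_like_impersonation("msnbc.com"))  # exact match, so not flagged
```

A check this crude would obviously need a much larger outlet list and smarter domain parsing in practice; the point is only that catching MSNBC.co is a far narrower task than judging “bias”.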
They further explain how it operates:
“Yeah, so when we started this project, previous attempts had focused on trying to use machines to classify an article as either fake or real, but none of them had actually tried to target the users, to see if we can train users in being more careful and skeptical when they’re reading news sites, regardless of whether or not it agrees with them.”

The interviewer then asked what the reader actually sees: “So based on your technology, what happens? You enter a search term for a certain news story, you go to the site, and what’s different about the interface for the reader?”

“Sure. So if the news site matches against our database of fake news sites, we show a pop-up warning that tells them why this site is being marked as fake news, and of course they can dismiss this pop-up and continue reading. On the other hand, in the background we’re analyzing the news sites that they’re reading, looking at the major topics and their biases. Let’s say they’re always reading negative news about a politician; we might suggest a positive take on that politician from another news site, to give them a more balanced diet of news, because we think that gives them a clearer view of both sides of the story.”
We don’t know a ton about this new software, but we will certainly keep an eye on it. The idea behind it seems to be a good one: keeping actual fake news from being consumed is certainly a good thing, and the idea of showing both good and bad coverage of a certain topic isn’t bad on its own. My issue is where it will lead. It will lead to Conservative speech being censored on all platforms. They have already hinted at it, and once in the hands of Facebook it will be used for their political ends. I can guarantee you that if a Mainstream Media article gets debunked, this software will do nothing to explain that to the people consuming the information.