Ah, the Internet.
While the technology has certainly done wonders for the world in terms of educating folks in less developed areas and bringing the world closer together, it has also facilitated the spread of misinformation. It’s simple: Anyone on the Internet can post whatever information they want, whether it’s true or not.
Given that the service accommodates more than 1.4 billion users each month, it’s fairly safe to say that Facebook is a platform where a great many folks get their news and other information. But what you might not have known is that, when posting links to news content on their own walls, Facebook users could easily alter the headlines and sub-headlines, changing the meaning of the story altogether.
Facebook Has a Plan…
Since an estimated 60 percent of people get all of their news from headlines alone, this is a pretty big deal.
But here’s the good news: The juggernaut social media network recently announced that it was updating the feature that allowed that kind of editing, doing what it can to help combat the spread of misinformation.
According to Facebook personnel, the editing feature had been around since 2011. Back then, it was needed because Facebook wouldn’t automatically display headlines, sub-headlines, and the correct photos to accompany news stories when users linked to them. Over the last few years, Facebook’s technology has evolved, and now that’s no longer the case.
…But Where Do You Draw the Line?
Not every single person may have been aware of just how drastically Facebook users could edit content, making it appear, for instance, that a well-respected news organization like NPR had run a headline claiming President Obama was cancelling baseball’s All-Star Game. Most folks would probably agree, however, that Facebook’s decision to disallow that functionality is a good one.
Still, it’s worth asking what role, if any, Facebook should have in “censoring” what its users post, if only because so many people rely on the social network for their news. This isn’t the first time the company has announced plans to do all it can to reduce the likelihood that misinformation will pop up in your news feed.
At the beginning of 2015, the company announced that its team of researchers found that misinformation and hoax stories (e.g., celebrity death news) travel considerably faster than regular content. While a user’s friends will usually debunk false news stories in the comments by linking to trusted sources like Snopes or PolitiFact, that debunking, unfortunately, doesn’t seem to travel as fast as the misinformation, the researchers found.
So Facebook decided it would attach warning labels to content that many users flagged as containing false information. That might sound good the first time you hear it, but closer inspection gives cause for concern.
In today’s hyper-partisan political climate, for example, who’s to say that scores of Republicans won’t report that a positive story about Bernie Sanders is full of lies? The same goes for Democrats and stories praising Donald Trump (they do exist—sort of).
It’s one thing for Facebook to update the features it gives users so that they’re unable to pass off a story from a legit news source as the ramblings of a delusional man who wears tinfoil hats (remember, again, people only read headlines).
But perhaps we can agree that Facebook algorithmically limiting access to certain content or labeling it as fake might be construed as the social network overstepping its bounds. Just think: What if a bunch of Christians labeled articles about evolution fake? Or a bunch of atheists labeled articles about Christianity fake?
The bottom line: Facebook is a platform that exists first and foremost to connect people. If it’s really committed to the truth, it probably shouldn’t get too deep into the censorship business.