On Misinformation and Responsibility

With the pandemic still ongoing, I’m not sure this is the most fitting time. But I want to talk about misinformation and ecosystem responsibility.

We’ve been seeing some crazy things since the rise of the internet. Home remedy articles that are more poison than medicine: drinking bleach, or nonsensical herb mixes that at best do nothing and at worst are harmful. But tamer, almost-reasonable-sounding misinformation exists too, for those who think they’re more educated or prepared: information and “tips” like infection via pets or sanitizing masks with alcohol spray.

And unless you work in chemistry or biology, or handle masks often in your daily life, this sounds perfectly reasonable. After all, we’re all told alcohol kills 99% of germs and viruses. If it works on my tabletop, why wouldn’t it work on masks? But it doesn’t, and because we’re not experts, we defer the responsibility of verifying to those we think know better.

Such misinformation exists on every side of the spectrum and in every demographic. What’s more, it spreads not because of the malicious intent of every propagator, but because of the benevolence and hope of its believers. Facebook and Google have been doing everything they can to filter and delete this information. And I really do believe they’ve been putting in an honest effort.

Yeah, places like the NYT might still be able to find something, but that’s just how the internet is. Unless they’re asking for the internet to become a gated, walled garden, that’s how it’ll always be. Never mind the fact that they have every incentive to hope and wish for a return to that reality.

But we don’t want that, because information goes both ways. Case in point: Twitter became one of the places where Dr. Chu was able to blow the whistle on cases already existing in Washington, and WeChat was where Dr. Li raised the first alarms.

So we have both misinformation and valuable information, and until vetted, each has the very real possibility of being mistaken for the other. Yet for some reason the loudest voices screaming for more technological intervention seem hell-bent on just the one: “fix it, block it, stop it.” They don’t know the “how” but are absolutely sure on the “who,” “when,” “why,” and “what.”

The internet is about the democratization of information, whether that’s right OR wrong information. And just like for a political democracy, FDR’s quote rings true: “Democracy cannot succeed unless those who express their choice are prepared to choose wisely. The real safeguard of democracy, therefore, is education.”

The internet is the same exact thing. On one hand, that’s a scary, scary thought. Because there ARE things that are atrocious: livestreaming terrorist attacks, for example, or things that present an immediate and obvious physical harm, like bleach remedies.

But we have to balance that with the possible and practical. Of course we don’t want false information on the internet. Of course we don’t want bad actors to be able to rally others to their cause. Of course, of course, of course. But just like there’s an infinite array of good things, there’s an infinite number of bad, and it’s just not possible to automatically identify and remove all of it in a prompt manner. What’s more, it’s ridiculous to have that expectation, because we’ve never done it successfully at any point in history.

TV still has scams and directly misleading shows. Newspapers and magazines get successfully sued for defamation and slander. Radio has always had “pirate stations” that broadcast some not-so-savory things. And what are rumors but misinformation via the spoken word?

Yes, the frequency used to be much lower because the investment costs were higher. Yes, spread and reach are now greater and easier than ever. Anyone can make a blog or share a post. But while that means there’s a lot more junk, it also means there are a lot more people who are and can be whistleblowers. Who can lead charges and make change.

Maybe the reason we’re extra hard on the internet nowadays is that it’s the most recent, the most viral, and, at the same time, the one we see as the most addressable, because we see how scalable and amazing technology has become. But I would say it’s the opposite. Just look at what happened to Facebook when their algorithms blocked an iconic anti-war photo due to nudity. At that scale, technology doesn’t mean a solution or perfection; it means a magnification of flaws and deficiencies. There are two ways to look at this: the algorithm blocked the image on purpose and correctly, because it WAS nudity, or the engineers accidentally blocked it because the algorithm they designed could not take political, historical, and emotional context into consideration. When even humans struggle to read and make that context, how can we possibly think that dry code can?

So I don’t think we can push this as a purely supply-side solution. As consumers, we have to take up our own responsibility for recognizing and identifying this sort of information.

And this is how it ties back to the pandemic. I’ve been very heartened recently to see this adjustment made by my own parents. My mother, who previously fell for almost every piece of medicinal gossip about which diets would lead to a 120-year life, now eyes every shared health article with caution. She gets her information confirmed by other sources. She checks it against her own experience and logic.

She actually said to me on the phone: “Hey, you can’t trust news on [XYZ App].” She’s not adept with technology. She never graduated college. Nor is she particularly scientifically or politically informed. And yet she’s developed the new skill of passive information filtering that our day and age requires.

We need to recognize that a real, working solution is going to require fundamental changes on the consumer side, and we need to accept that as soon as possible so that we can push for and incentivize such change immediately.

Companies can and should ban, be wary of, and design their product to disincentivize bad behavior. But they shouldn’t be the ones to dictate what information is or isn’t correct. Nor should we want the consequences that come with such a solution. By giving up responsibility, we are at the same time giving up our own power and the promises of the internet. Maybe this is just my upbringing as an American, but that’s just such an unnerving idea.

We have to be able to accept this weight of responsibility. And that’s uncomfortable. It’s going to be a rough transition, especially now, during a pandemic, when people are scared and bad information can mean life or death. But I don’t think there is a technical solution. At the end of the day, humans need to adapt, learn, and evolve. To a certain extent, we’ve stopped evolving biologically since we exited nature. But now we have to adapt and evolve mentally, cognitively, and as a society.