Can we “fix” social media while maintaining arenas of discourse?

David Brin
6 min read · Jun 2, 2020


Is impartial truth-checking possible?

Twitter’s decision to slap warning labels on some Trumpian tweets — those seeming to incite violence — “was the culmination of months of debate inside the company over developing protocols to limit the impact of objectionable messages from world leaders — and what to do when Mr. Trump inevitably broke it.”

Perspective time. The problem of toxicity in media is not a new one. Every new medium of communication was misused for nefarious ends before it eventually lived up to its elevating promise. The printing press was first used to spread horrible hate tracts exacerbating Europe’s religious wars. Only across subsequent centuries did the spread of books truly elevate an increasingly literate population. Similar bad beginnings attended the arrival of newspapers and newsreels. In the 1930s, loudspeakers and radio amplified gifted orators with godlike voices, sparking humanity’s worst era. It always starts by empowering predators. But over time, citizens became better at culling wheat from chaff from poison in each technology, and we all grew better for it.

Today (as some of us predicted in the 1980s) a similar transition is happening in digital media at 10x the speed and 10,000x the sheer volume of crap and lying misuse, leaving us with very little time to make the same transition. Meanwhile, evil or fanatical or insane manipulators twist the very concept of “fact” or “truth” out of all recognition. We need tools of maturity and we need them fast.

There are two general ways to achieve this. The first was used in almost every society before ours — to set up a caste of censors, gatekeepers, regulators of what the masses may see or know. Our entire Enlightenment Experiment has been a rejection of that approach, which stifled discourse and brought nothing but error and calamity across history. All our values rail against it — e.g., in every Hollywood film. Indeed, our enemies are using this Suspicion of Authority (SoA) reflex against us, by attacking even the very idea of professional expertise.

The other approach is lateral criticism. Argument (ideally based at least somewhat on facts) can apply reciprocal accountability via markets, democracy and now the innovation of the web. It can work! We and all our vast array of miracles are proof. But the whole thing breaks down when we huddle in separated ghettoes of ignorance, reciting incantations and nostrums that are fed to us by evil men.

Can we innovate ways to save innovative media?

In early 2017, I was invited to Facebook HQ, where executives and designers were wringing their hands. They fretted over how thoroughly their platform had been hijacked and abused — much of it by hostile foreign powers — with clear intent to warp American democracy. And yes, for a brief time, folks at Facebook seemed serious about trying to find solutions, hoping to achieve a three-way win, starting with their top priority:

1 — Protect user growth and profitability.

2 — Maximize user freedom of self-expression.

3 — Reduce the amount and impact of deliberate or inadvertent campaigns of falsehood or incitement.

During my hour-long meeting with executives, I offered possible ways to achieve this trifecta. But I might as well have saved my breath. As the Trump Era became a new (if bizarre) normal, goal number three simply floated away.

And so we now approach another U.S. election. Seeing all their efforts to wreck the Western Enlightenment hanging in the balance, our enemies will concentrate on spreading tsunamis of lies via social media. And while Facebook will likely remain obdurate until the end, Twitter and other platforms are beginning to take this seriously.

And so, it is for them that I’ll trot out one — just one — of the proposals I offered Facebook on that futile day.

The simplest method

Consider that list of three desiderata that a social media company might have, prioritized in that order. They will only seek #2 — service to users — if #1 — profitability — is secure. And only if user self-expression at least appears to be safe will they consider #3… any responsibility to reduce the efficacy of lies that spread across their platform.

Hence, one of the interventions that I offered might seem very small… a slight modification that countless users would simply ignore. It does not fix the problem of deliberate incitement or lies! Not altogether. There must be other remediations, especially for the worst cases. But it would apply a subtle “thumb on the scale,” reminding users that there is more to most controversial topics than just one side.

And, as an added benefit, this intervention can be done 99%+ by algorithm! It seems likely there’s no need for a vast infrastructure of living arbiters. Ready?

Envision just a pair of symbols added next to the Thumbs-Up indicator, as in the example below. Say a small exclamation point and a question mark. Generally innocuous, these clickables allow the user to seek more information… or alternative points of view. Note that they have minimal footprint on the user’s precious screen space.

In Figure 1 (above), the two symbols are empty and easily ignored.
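(For the technically inclined: here is one way the two indicators might be modeled, sketched in TypeScript. Every name and field below, AlertState, ProblemKind and so on, is my own illustrative assumption, not anything any platform actually implements.)

```typescript
// Hypothetical data model for the two clickable indicators.
// Names and fields are illustrative only, not any real platform's API.

type ProblemKind = "disputed" | "unverified-source" | "incitement";

interface AlertState {
  certainty: number;         // 0..1: the host's confidence that something is wrong
  kind?: ProblemKind;        // what sort of problem was detected, if known
  suspiciousSource: boolean; // true when bad actors appear to be at work
  rebuttalUrl?: string;      // where the "other side" of the meme lives
}

interface InfoState {
  followUpUrl?: string;      // deeper (or lighter) background on the topic
}

// Empty states render as faint glyphs beside the Thumbs-Up, easily ignored.
function isEmpty(alert: AlertState, info: InfoState): boolean {
  return alert.certainty === 0 && !alert.suspiciousSource && !info.followUpUrl;
}
```

The crucial design choice is that the default state is empty: no label, no accusation, just two dormant glyphs.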

Only now do we get to the part where I lean on insights from Edward Tufte’s classic book The Visual Display of Quantitative Information. Because there are many dimensions of useful information that can be conveyed via a mere exclamation point!

In Figure 2 (below) we see how the exclamation point can convey several spectra of information, perhaps throbbing when the company has detected a suspicious source or bad actors at work. Fullness — as in a thermometer — can show the host’s level of certainty that there’s a problem, while color or texture can indicate the type of problem.

Users do not have to memorize any of the meanings! But they’ll learn, over time, that a tiny, flashing red exclamation point means there’s another side to whatever meme they are relishing. Moreover, it’s hard to accuse the host company of partisan bias when the same thing happens to every side.
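(Continuing the hypothetical sketch above: the mapping from detection state to those visual channels could be a few lines of deterministic code. The color scheme and problem categories here are, again, my own assumptions.)

```typescript
// Hypothetical mapping from an AlertState (sketched earlier) to the visual
// channels described above: fill for certainty, color for problem type,
// and a throbbing animation when a suspicious source is detected.

interface AlertVisual {
  fillFraction: number; // thermometer-style fill, clamped to 0..1
  color: string;        // hue hints at the kind of problem
  throbbing: boolean;   // draws the eye when bad actors are at work
}

const colorByKind: Record<ProblemKind, string> = {
  "disputed": "orange",
  "unverified-source": "yellow",
  "incitement": "red",
};

function renderAlert(alert: AlertState): AlertVisual {
  return {
    fillFraction: Math.min(1, Math.max(0, alert.certainty)),
    color: alert.kind ? colorByKind[alert.kind] : "gray",
    throbbing: alert.suspiciousSource,
  };
}
```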

Is an offer of rebuttal enough to cancel toxic memes? Well, it can’t hurt to lure a few of the more curious to sample refutations. And that tiny nonpartisan nag could be enough to crack the wall of a Nuremberg rally.

The second kind of clickable Alert-o-meter — a question mark — links to sites that are less contradictory and more informative than those linked by the exclamation point. Here, user preferences play a role. The follow-up path may be encyclopedic, or lighter, or even entertaining. The aim is to encourage curiosity and add depth to the topic.

Again, the user is free to ignore the small Alert-o-meter symbol. Still, it lurks there, serving as a reminder that there’s more to this!
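(One last sketch: the question-mark path might choose its follow-up link according to the user’s stated preference. The three styles below mirror the encyclopedic / lighter / entertaining options mentioned above; everything else, including the placeholder URLs, is assumed for illustration.)

```typescript
// Hypothetical selector for the question-mark path: given the user's stated
// preference, pick a follow-up link that informs rather than merely rebuts.

type FollowUpStyle = "encyclopedic" | "lighter" | "entertaining";

interface FollowUpLink {
  style: FollowUpStyle;
  url: string;
}

function pickFollowUp(
  candidates: FollowUpLink[],
  preferred: FollowUpStyle,
): FollowUpLink | undefined {
  // Prefer the user's chosen register, falling back to whatever is available.
  return candidates.find((c) => c.style === preferred) ?? candidates[0];
}

// Example: a user who prefers lighter treatments (URLs are placeholders).
const links: FollowUpLink[] = [
  { style: "encyclopedic", url: "https://en.wikipedia.org/wiki/Example" },
  { style: "lighter", url: "https://example.com/explainer" },
];
console.log(pickFollowUp(links, "lighter")?.url); // https://example.com/explainer
```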

Not only does this help, at least a little, to re-establish the notion of argument and verifiability… that some sources are more verified and trustworthy than others… but we are also entering an era when society may decide to modify the blanket protections that currently shield social media companies from all responsibility for malicious content. An approach like this one might be just enough to protect the site host from liability for helping to spread lies with dire consequences.

And there you have it. Just one of a dozen ideas I offered mavens at Facebook in their panic after the 2016 elections (see also my article “Disputation Arenas: Harnessing Conflict and Competitiveness”) … before they realized that the winners of that stolen contest actually wanted no meaningful changes at all, and that their best (commercial) interest lay in leaving things alone.

Think about that. And realize that nothing is likely to happen via self-regulation or reform or tweaks like mine, no matter how logical and helpful. I am wasting my time.

We all know this dire moment will be resolved massively, in one direction or another. And when it is, a mere couple of innocuously flashing symbols just won’t do.
