One of the most ubiquitous features of the internet seems to be that nobody enjoys arguing on it, but everyone does anyway. Just recently, I was scrolling through a Twitter thread, for god knows what reason, on a paper that uses semantic alignment (measuring how closely words correspond to each other across languages with math) to measure cultural influences on how languages create categories. It was a pretty cool paper! The Twitter discussion, on the other hand, not so much.
Replies to a tweet sharing this paper were generally along the lines of "OMG tech people think they just discovered linguistic relativism yesterday! Clearly these people have no understanding of linguistics or psychology!" And the narrative quickly became that this work was some clumsy attempt to study language by some glorified statisticians with no understanding of linguistics. They were trampling all over our (linguists') territory! And doing it badly! And it got into Nature Human Behaviour!
Of course, if you actually read the paper (which is freely available online, no paywall), or even just glanced at the first page, you would see that three of the four authors have PhDs in linguistics. If you read the first paragraph of the introduction, they make it abundantly clear that they are aware of the relevant work in linguistics. The paper is novel in that it tests those theories using large-scale computational methods.
So hundreds of people posted replies and retweets criticizing the authors for their "lack of background in linguistics," and in so doing made it clear that they had not even looked at the first page of the paper they felt so strongly about. And this is the subset of Twitter users who follow accounts that write about scientific papers!
This phenomenon, where hordes of social media users make impassioned arguments about something they know nothing about, drowning out the few who actually know about the topic, is commonplace. But how did we get here? How has the digital agora become such a cacophony of uninformed yelling?
I suppose a good starting question would be "Who participates in internet conversation and how?" I'm not old enough to remember the internet of the late 90s and early 2000s, but I get the sense that discussion was generally better then: more long-form blog posts, less snark. Since then, the internet has expanded massively. On one hand, it's great that more people can access the vast wealth of information. On the other, the fact that the constituency of the internet now includes your grandparents and teenagers in rural Siberia seems to necessitate some coordination mechanisms that weren't needed earlier.
Suppose that everyone's potential contribution to a discussion is summarized by two values: knowledge and confidence. Knowledge describes how much one knows about a topic and how well they understand it, while confidence is one's self-perceived knowledge value. These can be very different.
The psychologists Dunning and Kruger (1999) studied precisely the difference between these two values in various domains. They found that the relationship between true and perceived skill looks more or less like this:
Let's break this curve into thirds. In the first third, people are incompetent. However, because the skills they need to assess their own competence are the same as the skills they need to be competent, they don't view themselves as incompetent. This is akin to someone who has never played a piano before thinking "How hard can this be? I'd be a decent pianist." In the second third, our naive pianist has taken a few lessons, tried to play a few complicated pieces, and realized just how much they don't know. This person has developed enough skill to assess their own ability more accurately, but not enough to actually be good. Finally, in the last third, we have the pianists who have been practising for years and have truly mastered their craft. They are good and they know they are good.
Let's assume that people contribute to a discussion if they believe their potential contribution would be valuable, that is, if their confidence is above a certain threshold. So the first and third groups contribute, while the second prefers to read and learn. People with PhDs in linguistics and people with no linguistics training both think they understand language deeply: the former because of their years of linguistics training, the latter because they learned a bit of grammar in their high school English class. The second group, which consists of people who took just enough linguistics in undergrad to realize how deep the rabbit hole goes, tends not to express opinions.
So the middle falls out and public conversation consists of those with the most and least understanding. To make things worse, the uninformed far outnumber the well-informed in the general population. This is true for almost any topic.
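The knowledge/confidence model above is simple enough to play with in code. Here's a toy simulation; the confidence curve, the participation threshold, and the skewed knowledge distribution are all numbers I made up purely for illustration (none of them come from Dunning and Kruger's actual data):

```python
import random

def confidence(knowledge):
    # Toy Dunning-Kruger-shaped curve (an assumption, not fitted to data):
    # novices are overconfident, the middle is humble, experts are accurate.
    if knowledge < 1/3:
        return 0.7 - 0.6 * knowledge  # starts high, falls with learning
    elif knowledge < 2/3:
        return 0.2                    # the humble middle stays quiet
    else:
        return knowledge              # experts know they're good

THRESHOLD = 0.5  # contribute only if confidence exceeds this

random.seed(0)
# Knowledge is scarce: squaring a uniform draw skews the population low.
population = [random.random() ** 2 for _ in range(100_000)]
contributors = [k for k in population if confidence(k) > THRESHOLD]

print(f"{len(contributors) / len(population):.0%} of the population contributes")
novice_share = sum(k < 1/3 for k in contributors) / len(contributors)
print(f"{novice_share:.0%} of contributors are in the bottom third of knowledge")
```

Even though the expert tail contributes at a high rate, sheer numbers mean roughly three quarters of all contributions come from the bottom third, which is exactly the pattern from the Twitter thread.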
So online conversation consists of a small group of knowledgeable people conversing with a large group of uninformed people. I think this effect is also fractal. Among people with PhDs in linguistics, a small fraction bothered to read the paper in question, while a larger group didn't but still felt qualified to discuss it.
Things really get bad when we introduce the final piece of the puzzle: the structure of social networks. Generally, platforms provide no indication of who has expertise in a subject and who doesn't. There is no little "linguistics PhD" badge next to someone's username on Reddit. There's no "certified reader of the paper under discussion" badge.
This flat structure often works well when it empowers honest autodidacts to compete or collaborate with credentialed experts on a level playing field. But the level playing field also extends to cranks, ideologues, and the simply uninformed. To an outside observer who is somewhat curious about the topic, there are no markers to distinguish the careful from the careless. There's no grounding, no sign saying "THESE PEOPLE KNOW WHAT THEY'RE TALKING ABOUT".
The first two sections read like an argument in favour of gatekeeping and a more hierarchical structure on social media. I'm a big fan of the whole speaking-truth-to-power thing; I'm just worried about our capacity to collectively agree on what truth is.
So how might we solve this? Fundamentally reconfiguring human psychology seems lofty, but maybe we could try to design an online environment that (a) doesn't suffer from the effect I just outlined and (b) isn't excessively hierarchical. How can we design social systems that acknowledge expertise without relying on existing institutions? Solving this problem would vastly improve the quality of communication on the internet.
To avoid simply privileging anyone with an advanced degree, any solution to the expertise identification problem would have to check people's statements against the world somehow. I also don't see a way to do this without the platform being explicitly designed to assess knowledgeability. Any small band of people trying to establish norms of honesty and humility will quickly be overwhelmed by the selection pressures in favour of sensationalism.
If I were the Social Media Czar, here are some ideas I'd be interested in trying:
- a joint prediction market and social network: Users all have Elo ratings and they can make bets against each other like it's a Bayesian utopia. If you win a bet, your Elo goes up. If you lose, it goes down. The higher your Elo rating is, the closer your posts and comments appear to the top of people's feeds. This means that conversations become disproportionately dominated by those who have made accurate predictions in the past.
- a web-of-trust style system: Everyone designates some people they trust, and then, using some statistics on the trust graph, the platform surfaces content from the most-trusted people. Twitter follows are somewhat like this, except that (a) people follow people they don't trust very much, just because they are important to the world, and (b) to the best of my knowledge, Twitter doesn't preferentially show you people who your followers follow. A system with separate mechanisms for endorsement and following might behave quite differently.
- an upvote-downvote system (like Reddit) where each user has a persistent score: Each post would have a score that is a function of both the number of upvotes and downvotes the post got and the score of the user who posted it. Getting upvotes increases a user's reputation and getting downvotes decreases it. The higher the score of a post, the more likely other people are to see it. This would create a reputation mechanism that surfaces people with a track record of making good posts.
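For the prediction-market idea, bet settlement could simply reuse the standard Elo update from chess. Here's a minimal sketch; the function names, the K-factor of 32, and the feed-ranking rule are my choices for illustration, not a spec:

```python
def expected_score(r_a, r_b):
    # Probability that the player rated r_a wins, under the standard Elo model.
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def settle_bet(r_winner, r_loser, k=32):
    # Transfer rating points from the loser to the winner. An upset
    # (low-rated winner) moves more points than an expected result.
    delta = k * (1 - expected_score(r_winner, r_loser))
    return r_winner + delta, r_loser - delta

def rank_feed(posts):
    # Posts by users with strong prediction records float to the top.
    return sorted(posts, key=lambda p: p["author_rating"], reverse=True)

# A 1000-rated newcomer wins a bet against a 1400-rated forecaster
# and collects a large chunk of rating for the upset:
new_winner, new_loser = settle_bet(1000, 1400)
```

One nice property: the update is zero-sum, so ratings can't be inflated by grinding safe bets against weak opponents; beating someone rated far below you earns almost nothing.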
What would your social media utopia look like? Feel free to comment below or email me at [email protected]