We live in a world now where there is an economic model that strongly incentivizes online platforms
like Facebook, Google, Twitter to capture as much of our attention as possible.
The way to do that is to promote content that is the most engaging.
And what is the most engaging?
A recent study out of NYU, led by William Brady, Jay Van Bavel, and colleagues, characterized the language used in tweets and found that
each “moral emotional” word in a tweet increased the likelihood of a retweet by about 20 percent.
So content that has moral and emotional qualities to it, of which moral outrage is the poster
child, is the most engaging content.
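To make that 20 percent figure concrete, here is a rough back-of-the-envelope sketch, not the study's actual model: suppose the boost compounds multiplicatively, so each additional moral-emotional word multiplies a tweet's relative retweet odds by 1.2. The function name and the compounding assumption are mine, not the researchers'.

```python
# Hypothetical illustration: assume each moral-emotional word multiplies
# the relative odds of a retweet by 1.2 (a 20 percent boost per word).
def relative_retweet_odds(n_words, per_word_boost=1.2):
    """Retweet odds relative to a tweet with zero moral-emotional words."""
    return per_word_boost ** n_words

for n in range(4):
    print(n, round(relative_retweet_odds(n), 2))
# 0 words -> 1.0, 1 -> 1.2, 2 -> 1.44, 3 -> 1.73
```

Under this assumption, a tweet with three moral-emotional words would be roughly 1.7 times as likely to be retweeted as one with none.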
And so that means that the algorithms that select for what is shown to all of us in our
newsfeeds are selecting for the content that’s going to be the most engaging, because that
draws the most attention—because that creates the most revenue through ad sales for these
platforms.
And so this creates an information ecosystem where there’s a kind of natural selection
process going on, and the most outrageous content is going to rise to the top.
So this suggests that the kinds of stories that we read in our newsfeeds online might
be artificially inflated in terms of how much outrage they provoke.
And I’ve actually found some data that speaks to this.
So there was a study a few years ago by Will Hofmann and Linda Skitka, colleagues at the
University of Chicago where they tracked people’s daily experiences with moral and immoral events
in their everyday lives.
And they pinged people’s smartphones a few times a day and had them rate whether in the
past hour they had had any moral or immoral experiences.
And they had people rate how emotional they felt, how outraged they felt, how happy they felt, and so on.
This data became publicly available and so I was able to reanalyze the data, because
these researchers had asked them: “Where did you learn about these immoral events?
Online, in person, on TV, radio, newspaper, et cetera?”
And so I was able to analyze this data and show that immoral events that people learn
about online trigger more outrage than immoral events that they learn about in person or
through traditional forms of media like TV, newspaper and radio.
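The reanalysis described here can be pictured with a toy sketch: group the outrage ratings by the source through which people learned about the immoral event, then compare group means. The ratings and labels below are invented for illustration; this is not the authors' code or data.

```python
# Hypothetical sketch of a group-means comparison. All values are made up.
from statistics import mean

# Each record: (source of the immoral event, outrage rating, 0-5 scale)
reports = [
    ("online", 4.1), ("online", 3.8), ("online", 4.4),
    ("in_person", 2.9), ("in_person", 3.1),
    ("tv_radio_paper", 3.0), ("tv_radio_paper", 2.7),
]

# Collect ratings per source, then average each group.
by_source = {}
for source, rating in reports:
    by_source.setdefault(source, []).append(rating)

means = {source: mean(ratings) for source, ratings in by_source.items()}
print(means)  # in this invented data, online events carry the most outrage
```

The actual study would, of course, use the full experience-sampling dataset and appropriate statistical controls; this only shows the shape of the comparison.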
So this supports the idea that the algorithms that drive the presentation of news content
online are selecting content that provokes perhaps higher levels of outrage than what we
see on the news and, of course, than what we see normally in our daily lives.
It’s an open question, “What are the long term consequences of this constant exposure
to outrage triggering material?”
One possibility that has been floated in the news recently is: outrage fatigue—and I
think many of us can relate to the idea that if you’re constantly feeling outraged, it’s
exhausting.
And there may be a limit to how much outrage we’re able to experience day to day.
That is potentially harmful in terms of the long term social consequences, because if
we are feeling outraged about relatively minor things and that’s depleting some kind of
reserve, that may mean that we’re not able to feel outraged for things that really matter.
On the other hand there’s also research in aggression showing that if you give people
the opportunity to vent their aggressive feelings about something that’s made them mad, that
actually can increase the likelihood of future aggression.
So in the literature on anger and outrage there are two possibilities.
One being this long term depletion, “outrage fatigue”.
The other being a kind of sensitization.
And we need to do more research to figure out which of those might be operating in the
context of online outrage expression.
It may be different for different people.
Social media is very unlikely to go away, because it taps into the things that we find most
rewarding: connection with others, expressing our moral values, sharing those moral values
with others, building our reputation.
And, of course, what makes social media so compelling, and so addictive even, is the
fact that these platforms are really tapping into very ancient neural circuits that
we know are involved in reward processing, in habit formation.
One intriguing possibility arises because these apps are designed to be so streamlined: you
have stimuli, icons that are recognizable and familiar to all of us who use these apps, and
very effortless responses to like, to share, to retweet.
And then we get feedback, and that feedback in the form of likes and shares is delivered
at unpredictable times.
And unpredictable rewards, we know from decades of research in neuroscience, are the fastest
way to establish habit.
Now habit is a behavior that is expressed without regard to its long term consequences.
Think of someone habitually reaching for the bag of potato chips: they’re eating those
chips not to achieve some goal of satisfying their hunger, but just mindlessly.
In the same way, we might be mindlessly expressing moral emotions like outrage without actually
experiencing them strongly or desiring to express them as broadly as we do on social media.
And so I think it’s really worth considering and having a conversation about this: do we
want some of our strongest moral emotions, which are so core to who we are, under the
control of algorithms whose main purpose is to generate advertising revenue for big tech
companies?
Why Social Media Makes Us So Angry | Molly Crockett