This summer marked the launch of Cinderly, a new fashion app for girls that doubles as a personal fashion consultant. According to a press release, the startup “uses data and technology to match everyday girls with users their same style and dress size, building a community of real role models to inspire new fashion choices…It’s like a celebrity fashion stylist at one’s fingertips.” Picture this: you’re trying on a new outfit and want some honest feedback, so you turn to your Cinderly community for comments. On one hand, it’s a great resource. On the other, it could quickly turn hostile, especially if trolls are waiting with their nasty reviews. That is why Cinderly has made the following declaration: “The Cinderly kingdom is a troll-free zone. If you troll here, you’ll be banished to a kingdom far away from far, far away.” Users must sign a no-trolling pledge.
Commenting online about articles, products, companies and concepts is a great way to explore topics and communicate with the Internet community through social media. However, trolling — a term that has become a catchall for a spectrum of bad online behavior — is the dark side of online commenting. Internet trolls are people (often hidden behind the veil of a generic username) who start arguments and/or upset people by making hurtful, sometimes even violent comments. The language and the tone often turn intense and offensive.
Take, for example, Milo Yiannopoulos, the technology editor at the conservative news site Breitbart who has built a strong online following. Last month, Yiannopoulos led a Twitter campaign against Ghostbusters actress Leslie Jones. Hundreds of trolls heeded his call, hurling racist comments and ugly memes, compelling Jones to leave the social networking service.
A 2015 podcast from This American Life (see Related Links in the toolbar) tells the story of writer Lindy West, harassed online by hundreds of trolls. During the podcast, West tracks down and contacts a particularly cruel troll to ask him why he has targeted her. Among other things, he says it is because he is threatened by the fact that her writing shows no fear.
Trolling is a juicy discussion topic on many levels, starting with why random people feel the need to spread such negativity online. The practice also has important business implications.
Trolls and their invective pose major challenges to social media sites, publishers and retailers. “When trolling gets bad, it can really wreck the experience of any customer who wants to use the online service. Not many people want to go to or participate in a place where others are being mean or acting like idiots,” says Rider University psychologist John R. Suler, author of Psychology of the Digital Age: Humans Become Electric. “How much trolling is a problem for a business depends on how it runs its social media site. If there is a space for people to speak their mind anonymously, trolls will likely appear. It will also depend on the reputation of the company and the nature of the products or services they offer. Some companies, products and services draw more fire from trolls than others.”
And then there is the question of how much a company actually wants to discourage trolls, says Wharton professor of legal studies and business ethics Kevin Werbach. “Trolls and their followers often generate a large volume of activity. Services that monetize based on eyeballs may be concerned about cutting down on their traffic or user growth,” says Werbach. “Companies need to consider their revenue model, how much activity the trolls actually represent and the overall impacts in both directions.” Cutting down on abuse may make the platform more attractive to current and potential users, for example. “Ultimately, these firms have to decide what kind of company they want to be,” he adds. “Sometimes pursuing every drop of short-term revenue obscures the most profitable strategy over the long term.”
Some are trying to have their cake and eat it, too. Entrepreneurs and Google alumni Bindu Reddy and Arvind Sundararajan have co-founded a new social app called Candid that aims to create a digital safe space by using artificial intelligence to monitor and curate conversations. Users are anonymous, but earn “badges” based on past posts that tag them as influencers, givers or socializers — or gossips and haters.
Trolling is worse than ever, but it has been present “since the very beginning of the Internet, when chat rooms and discussion boards ruled,” says Suler. “Before the Internet, we didn’t see much trolling on TV, but it did happen on radio, especially during call-in shows that allowed people to be anonymous. Trolling has always existed in the social life of humans and always will exist, because there will always be people who antagonize and hurt others, either because they feel compelled to or simply because they enjoy it.”
Trolls are a busy breed. Nearly three-quarters of 2,849 respondents to a 2014 Pew Research Center survey said they had seen someone harassed online, with 40% saying they had experienced it personally — ranging from name-calling to stalking and threats of physical violence. The report showed that men are “more likely to experience name-calling and embarrassment, while young women are particularly vulnerable to sexual harassment and stalking.”
Trolls come in a variety of shapes and sizes, Suler says, though the basic categories are immature teenagers, chronically angry and frustrated people who take it out on others, narcissists and sociopaths. “The hardcore troll is a sociopath who enjoys hurting people, who wants people to get upset, angry and depressed,” says Suler. “It’s a deliberate act of manipulation and control in order to feel powerful. In fact, such sociopaths want to destroy other people as best they can.”
The challenge for companies such as Twitter and Reddit is that “the less you try to control what users do on the platform, the easier it is for some of them to engage in abuse,” says Werbach. “Requiring real names is one way to cut down on abuse, but there are many legitimate reasons to allow pseudonyms or anonymous speech. And it’s very tough to write a set of rules that distinguish appropriate and inappropriate activities in every context. Add to that the enormous volumes of traffic on these social media services, and it’s a real challenge.”
The recent Twitter episode involving Leslie Jones blended two Internet plagues into one hot mess — trolling and cyberbullying. After it was revealed that the new Ghostbusters movie would feature an all-female cast, both Jones and her co-stars were targeted by trolls. The racist and misogynist messages fired at Jones’ account intensified after the film’s release in July.
Twitter eventually suspended Yiannopoulos, and the company admitted that it needed to do more to “prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted.” The company, which has long been criticized for not doing enough to protect its users, says it will announce more details shortly.
For businesses, the consequences of allowing unmediated nasty comments may be quite severe. Research has shown that for some people, the act of reading rude or angry comments actually changes opinions about the subject matter at hand. In “The ‘Nasty Effect:’ Online Incivility and Risk Perceptions of Emerging Technologies,” published in the Journal of Computer-Mediated Communication, the authors had 1,183 participants read a news post on a fictitious blog about the risks and benefits of a substance called nanosilver.
Afterward, half of the readers were shown civil comments, the other half rude ones. Those who read the civil comments were not swayed from their original opinions on the question of nanosilver’s risk-benefit proposition. But readers exposed to the rude comments ended up being much more likely to think that the downside of nanosilver was greater than they had thought upon first reading the article.
Trolling is also not necessarily good for commerce. “A good tool that many social media companies have used is giving customers a way to ‘tune out’ anyone who is a troll,” says Suler. “But in a customer site for discussing or reviewing products, many customers probably won’t use it much. They just won’t come back if the trolling is out of control.”
Companies can do a variety of things to prevent, detect and manage trolling, but no solution is perfect, says Suler. “Relying on automated interventions, such as algorithms, to detect and delete offensive language works OK, but not great. It’s always a good idea to make it easy for customers to report inappropriate behavior. But then how does the company intervene? Try to reason with trolls? Ban trolls from the site? Both of these turn out to be tricky issues, because ‘trolling’ will have to be defined with rules and then the rules must be consistently enforced. That takes paid workers to carry out those strategies. Paid moderators is always a good strategy for monitoring, regulating and intervening in social media, but that does cost money.”
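To see why Suler calls automated interventions “OK, but not great,” consider a deliberately simplistic sketch of the two tools he mentions: a keyword filter that flags offensive language, and a report queue that routes complaints to paid human moderators. Everything here — the word list, the function names, the logic — is invented for illustration, not taken from any real platform’s rules.

```python
# Toy moderation sketch: a keyword filter plus a human-review queue.
# The blocked-term list and all names here are hypothetical examples.

BLOCKED_TERMS = {"idiot", "loser"}  # illustrative word list only

def flag_comment(text: str) -> bool:
    """Return True if the comment contains a blocked term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKED_TERMS)

# Comments reported by users, awaiting a paid human moderator.
report_queue = []

def report_comment(comment: str, reporter: str) -> None:
    """Queue a user-reported comment for human review."""
    report_queue.append({"comment": comment, "reporter": reporter})

print(flag_comment("You are an idiot"))  # flagged
print(flag_comment("Great outfit!"))     # not flagged
```

The gap Suler describes shows up immediately: a misspelling like “id1ot” slips past the filter, while a harmless quotation containing a blocked word gets flagged — which is why the human-review queue, and the paid moderators behind it, remain the costly but necessary backstop.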
Ultimately, social media platforms must choose what they want to be, says Werbach: Anything goes, regardless of the human cost, or a safe space for open communication. “While management of these companies may be ideologically reluctant to be the ones limiting speech, if they don’t, they’re allowing the loudest and most abusive voices to silence other users.”
You can access a longer version of this article, which originally appeared in our sister publication Knowledge@Wharton, under ‘Related Links.’
What is a troll? What are some of the reasons offered for why trolls do what they do? Why do you think the Internet has unleashed a whole new generation of trolls?
How do trolls affect businesses? What can businesses do to control troll chatter? Will all businesses choose to do so? Why or why not?
Were you aware of the Twitter uproar involving Ghostbusters’ Leslie Jones? Research this topic in Related Links. How did the trolls fuel this drama? Can you think of another example of the negative influence of trolls on social media? How do you think Twitter will ultimately respond to the troll debate?
Wharton Professor Kevin Werbach says that ultimately social media platforms must choose what they want to be: “Anything goes, regardless of human cost, or a safe space for open communication.” What do you think they should be? When does someone step over the line? Consider any personal experiences, including being targeted by trolls or cyberbullies. Also, consider the Candid startup example. Debate this topic with your friends and/or classmates.