YouTube’s tougher harassment policy aims to address hate speech, veiled threats and repeat offenders
In YouTube CEO Susan Wojcicki's quarterly letter last month, the exec said the company was working to develop a new harassment policy. Today, YouTube is sharing the results of those efforts with the release of an updated policy that takes a stronger stance against threats and personal attacks, addresses toxic comments, and gets tougher on repeat violators.
“Harassment hurts our community by making people less inclined to share their opinions and engage with each other. We heard this time and again from creators, including those who met with us during the development of this policy update,” wrote YouTube’s Matt Halprin, Vice President, Global Head of Trust & Safety, in an announcement.
YouTube claims it will continue to be an open platform, as Wojcicki had earlier described it. However, it will not tolerate harassment, and is laying out several steps it believes will better protect YouTube creators and the community on that front.
The company says it met with a range of experts to craft its new policy, including organizations that study online bullying, groups that advocate on behalf of journalists, free speech proponents, and organizations from across the political spectrum.
The first change to the policy focuses on veiled threats.
Before today, YouTube prohibited videos that explicitly threatened someone, revealed confidential personal information (aka "doxxing"), or encouraged people to harass someone. Now, the policy also covers "veiled or implied threats," including threats that simulate violence toward an individual or use language suggesting physical violence could occur.
The new policy will also now prohibit language that “maliciously insults” someone based on their protected attributes — meaning things like their race, gender expression, sexual orientation, religion, or their physical traits.
This is an area where YouTube has received much criticism, most recently with the Steven Crowder controversy, in which the conservative commentator repeatedly used racist and homophobic language to describe journalist Carlos Maza.
YouTube demonetized the channel but said the videos weren’t in violation of its policies. It later said it would revisit its policies on the matter.
YouTube's decisions around its open nature were raised again this month after Wojcicki went on "60 Minutes" to defend the platform's policies.
As reporter Lesley Stahl rightly pointed out, YouTube operates in the private sector and is therefore not legally bound to uphold the First Amendment's free speech protections. That means it can make up its own rules about what is and isn't allowed on its platform. Yet over the years, YouTube has chosen to design a platform where hateful content and disinformation can flourish, whether that's white supremacists looking to indoctrinate others or conspiracy theorists peddling wacky ideas that have even translated into real-world violence, as with #pizzagate.
Notably, YouTube says its policy will apply to everyone — including private individuals, YouTube creators, and even "public officials." Contrast that with Twitter's policy on this matter, which leaves up tweets from public officials that violate its rules but places them behind a screen that users have to click through in order to read.
Another big change involves tougher consequences for those whose videos don’t necessarily cross the line and break one of YouTube’s rules, but repeatedly “brush up against” its harassment policy.
That is, any creators who regularly and repeatedly harass others in either their videos or their comments will be suspended from YouTube's Partner Program (YPP). This gives YouTube a way to deal with creators whose behavior isn't appropriate or "brand-safe" for advertisers — and potentially allows YouTube to make calls on an individual basis, at times, based on how it chooses to interpret this rule.
Removing the creator from YPP may be the first step, but if they continue to harass others, they may begin to receive strikes or see their channel terminated, YouTube says.
Of course, YouTube has demonetized channels before as a punishment mechanism, but this solidifies that action into a more formal policy.
What isn't clear is how YouTube will specifically define and enforce its rules around these borderline channels. Will "tea" channels come under threat? Will creators who get involved in feuds be impacted? It's unknown at this time, as these rules are open to interpretation.
YouTube says it’s also now changing how it handles harassment taking place in the comments section.
Both creators and viewers encounter harassment in the comments, which YouTube says not only impacts the person being targeted, but can have a chilling effect on the conversation.
Last week, it turned on a new setting that holds potentially toxic comments for review by default across YouTube's biggest channels. It now plans to roll out this setting, enabled by default, to all channels by year-end. Creators can opt out if they choose, or they can ignore the held comments altogether if they don't want to decide whether they're toxic.
YouTube says the early results from this feature are positive, as channels that enabled it saw a 75% reduction in user flags on comments.
The company acknowledges that it will likely make decisions going forward that will be controversial, and reminds creators that an appeals process exists to request a second look for any actions they believe were made in error.
“As we make these changes, it’s vitally important that YouTube remain a place where people can express a broad range of ideas, and we’ll continue to protect discussion on matters of public interest and artistic expression,” said Halprin. “We also believe these discussions can be had in ways that invite participation, and never make someone fear for their safety.”
Policies like this sound good on paper, but enforcement remains YouTube's biggest issue. YouTube has around 10,000 people focused on controversial content, yet its decisions to date have defined "harmful" in the narrowest of ways. It's unclear if this tightening of policies will actually change what YouTube does, rather than what it says it will do.
Source: TechCrunch