TikTok says it removed 104M videos in H1 2020, proposes harmful content coalition with other social apps

As the future of ByteDance’s TikTok ownership continues to get hammered out between tech and retail leviathans, investors and government officials, the video app today published its latest transparency report. In all, over 104.5 million videos were taken down in the first half of this year; the company also fielded nearly 1,800 legal requests and received 10,600 copyright takedown notices.

Alongside that, possibly to offset those high numbers of illicit videos and to coincide with an appearance today in front of a parliamentary committee in the UK over harmful content, TikTok also announced a new initiative, potentially in partnership with other social apps, against harmful content.
The figures in the transparency report underscore an important point about the impact of the popular app. The US government may want to shut down TikTok over national security concerns (unless ByteDance finds a new non-Chinese controlling structure that satisfies lawmakers).

But in reality, just like other social media apps, TikTok has another not-insignificant fire to fight: it is grappling with a lot of illegal and harmful content published and shared on its platform, and as it continues to grow in popularity (it now has more than 700 million users globally), that problem will also continue to grow.
That will be an ongoing issue for the company, regardless of how its ownership unfolds outside of China. While one of the big issues around TikTok’s future has been its algorithms and whether these can or will be part of any deal, the company has made other efforts to appear more open about how it works. Earlier this year it opened a transparency center in the US that it said would help experts observe and vet how it moderates content.
TikTok said that the 104,543,719 videos it removed globally for violating either community guidelines or its terms of service made up less than 1% of all videos uploaded to TikTok, which gives you some idea of the sheer scale of the service.
The volume of videos being taken down has more than doubled over the previous six months, a reflection of how the total volume of videos has also doubled.
In the second half of 2019, the company took down more than 49 million videos, according to its last transparency report (which, for reasons that aren’t clear, took a lot longer to publish and only came out in July 2020). The proportion of total videos taken down was roughly the same as in the previous six months (“less than 1%”).
TikTok said that 96.4% of the total were removed before they were reported, with 90.3% removed before they received any views. It doesn’t specify whether these were found via automated systems, by human moderators, or a mix of both, but it sounds like it made a switch to algorithm-based moderation at least in some markets:
“As a result of the coronavirus pandemic, we relied more heavily on technology to detect and automatically remove violating content in markets such as India, Brazil, and Pakistan,” it noted.
The company notes that the biggest category of removed videos was adult nudity and sexual activities, at 30.9%, with minor safety at 22.3% and illegal activities at 19.6%. Other categories included suicide and self-harm, violent content, hate speech and dangerous individuals. (Videos could count in more than one category, it noted.)

The biggest origination market for removed videos was, perhaps unsurprisingly, the one in which TikTok has since been banned: India took the lion’s share at 37,682,924 videos. The US, meanwhile, accounted for 9,822,996 (9.4%) of videos removed, making it the second-largest market.
Currently, it seems that misinformation and disinformation are not the main ways TikTok is getting abused, but the numbers are still significant: some 41,820 videos (less than 0.5% of those removed in the US) violated TikTok’s misinformation and disinformation policies, the company said.
Some 321,786 videos (around 3.3% of US content removals) violated its hate speech policies.
Legal requests, it said, are on the rise: it received 1,768 requests for user information from 42 countries/markets in the first six months of the year, with 290 (16.4%) coming from US law enforcement agencies, including 126 subpoenas, 90 search warrants and six court orders. In all, it had 135 requests from government agencies to restrict or remove content across 15 countries/markets.

Social media coalition proposal

Along with the transparency report, the harmful content coalition announcement came on the same day that TikTok appeared before the Digital, Culture, Media and Sport Committee, a UK parliamentary group.
Practically speaking, that interrogation, which featured the company’s head of public policy in EMEA, Theo Bertram, doesn’t have a lot of teeth, but it speaks to the government’s growing awareness of the app and its impact on consumers in the UK.
TikTok said that the harmful content coalition is based on a proposal that Vanessa Pappas, the acting head of TikTok in the US, sent out to nine executives at other social media platforms. It doesn’t specify which companies, nor what the response was. We are asking and will update as we learn more.
Meanwhile, the letter, published in full by TikTok and reprinted below, underscores a response to current thinking about how proactive and successful social media platforms have actually been in curtailing abuse on their platforms. It’s not the first effort of this kind: there have been several other attempts where multiple companies, erstwhile competitors for consumer engagement, have come together with a united front to tackle things like misinformation.
This one specifically targets non-political content, proposing a “collaborative approach to early identification and notification amongst industry participants of extremely violent, graphic content, including suicide.” The MOU proposed by Pappas suggested that social media platforms communicate to keep each other notified of such content, a smart move considering how much material gets cross-posted from one platform to another.
The company’s effort on the harmful content coalition is one more example of how social media companies are trying to take the initiative and show that they are being responsible, a key way of lobbying governments to stay out of regulating them. With Facebook, Twitter, YouTube and others continuing to be in hot water over the content shared on their platforms, despite their attempts to curb abuse and manipulation, it’s unlikely that this will be the final word on any of this.
Full memo below:

Recently, social and content platforms have once again been challenged by the posting and cross-posting of explicit suicide content that has affected all of us – as well as our teams, users, and broader communities.
Like each of you, we worked diligently to mitigate its proliferation by removing the original content and its many variants, and curtailing it from being viewed or shared by others. However, we believe each of our individual efforts to safeguard our own users and the collective community would be boosted significantly through a formal, collaborative approach to early identification and notification amongst industry participants of extremely violent, graphic content, including suicide.
To this end, we would like to propose the cooperative development of a Memorandum of Understanding (MOU) that will allow us to quickly notify one another of such content.
Separately, we are conducting a thorough analysis of the events as they relate to the recent sharing of suicide content, but it’s clear that early identification allows platforms to more rapidly respond to suppress highly objectionable, violent material.
We are mindful of the need for any such negotiated arrangement to be clearly defined with respect to the types of content it could capture, and nimble enough to allow us each to move quickly to notify one another of what would be captured by the MOU. We also appreciate there may be regulatory constraints across regions that warrant further engagement and consideration.
To this end, we would like to convene a meeting of our respective Trust and Safety teams to further discuss such a mechanism, which we believe will help us all improve safety for our users.
We look forward to your positive response and working together to help protect our users and the wider community.
Sincerely,
Vanessa Pappas
Head of TikTok

More to come.
Source: TechCrunch
