How is Elon Musk changing what you see on Twitter?
Elon Musk disbanded the Trust and Safety Council, a group of civil rights, human rights, and other organisations formed to address issues on the platform such as hate speech, harassment, child exploitation, suicide, and self-harm.
The information in your Twitter feed changes all the time. But how, precisely?
By granting select journalists access to some of the company’s internal communications, known as “The Twitter Files,” Elon Musk, the social media platform’s new owner, has attempted to show that members of the previous leadership team were responsible for the alleged suppression of right-wing voices.
Earlier this week, Musk disbanded the Trust and Safety Council, a significant advisory body made up of numerous independent civil rights, human rights, and other organisations. The council was formed in 2016 to address issues on the platform such as hate speech, harassment, child exploitation, suicide, and self-harm.
What are the main implications of the recent changes for the daily content that appears in your feed? Musk’s actions suggest that he wants to change how Twitter is perceived by the American political right. He does not promise unrestricted free speech but rather a change in how messages are amplified or obscured.
What are the Twitter Files, exactly?
Musk bought Twitter for $44 billion in late October. Since then, he has worked with a small group of journalists, including former Rolling Stone writer Matt Taibbi and columnist Bari Weiss. Earlier this month, they began tweeting about previous actions Twitter had taken against accounts suspected of violating its content guidelines, including screenshots of emails, message board posts, and internal Twitter conversations about the decisions.
The authors “have broad and expanding access to Twitter’s files,” Weiss wrote on December 8. The only requirement was that the information be shared on Twitter first.
Weiss published the fifth and most recent instalment, covering the discussions that led to Twitter’s decision on January 8, 2021, to permanently suspend then-President Donald Trump’s account “due to the risk of further incitement of violence,” two days after the deadly US Capitol uprising. Internal communications show executives’ reactions to some employees’ lobbying campaign for tougher action against Trump, with at least one unnamed employee questioning whether one of Trump’s tweets was an incitement to violence.
What is missing?
Musk’s Twitter Files detail internal company discussions, mostly about right-wing Twitter accounts that the company determined had violated its hate speech policies or its policies against disseminating inaccurate information about COVID-19.
The reports, however, are based primarily on anecdotes about a small number of prominent accounts, and the tweets provide no information on the exact number of suspensions or which viewpoints were most likely to be impacted. Despite appearing to have unrestricted access to the company’s public Slack messaging board, the journalists have had to rely on Twitter executives to deliver other documents.
The Twitter Files mention shadowbanning. What is that, exactly?
In 2018, Twitter announced a new strategy to mitigate the impact of disruptive users, or trolls, by reading “behavioural signals” that typically indicate when users are more interested in disrupting conversations than contributing to them. The move followed then-CEO Jack Dorsey’s announcement that Twitter would prioritise the “health” of its conversations.
Twitter has long claimed that it employs a process known internally as “visibility filtering” to limit the reach of accounts that may violate its policies but do not warrant suspension. However, it denied allegations that it had secretly “shadowbanned” conservative views.
Screenshots from the Twitter Files showing an employee’s view of well-known user accounts illustrate how that filtering works in practice. In response, Musk has called for greater transparency.
He announced on Twitter that the company was working on a software update that would reveal your true account status, including why you had been shadowbanned and how to appeal.
Who is reviewing tweets now?
Musk let go of roughly half of Twitter’s employees shortly after buying the company, and he later cut an undetermined number of contract employees who primarily worked in content moderation. Yoel Roth, Twitter’s former head of trust and safety, was among those retained but quickly left.
With so many employees leaving, the platform’s ability to enforce its rules against harmful misinformation, hate speech, and violent threats was called into question both domestically and internationally. While automated tools can detect some suspicious accounts and spam, others require a more thorough human review.
According to Bhaskar Chakravorti, dean of global business at Tufts University’s Fletcher School, the cuts will likely force Twitter to focus its content moderation efforts on regions with stricter laws governing social media platforms, such as Europe, where tech companies face significant fines under the new Digital Services Act if they fail to act against hate speech and misinformation.
According to Chakravorti, the staff has been destroyed. “Because Europe is the most squeaky wheel, the few remaining content moderators will focus on it.”
Is there a reaction?
Since Musk’s purchase of Twitter, researchers and advocacy groups have noticed an increase in posts that use racial slurs or attack Jews, gays, lesbians, and transgender people.
The posts were frequently made by users claiming to be testing Twitter’s new restrictions.
The researchers dispute Musk’s claims that Twitter responded quickly to reduce the overall visibility of the posts and that hate speech engagement has decreased since he purchased the company.
The most visible sign of change is the reinstatement of previously banned accounts, including those of Trump, the parody website The Babylon Bee, comedian Kathy Griffin, Canadian psychologist Jordan Peterson, and Ye, before he was banned again. Twitter has also restored the accounts of neo-Nazis, white supremacists, and QAnon supporters that the platform’s old guard had removed en masse to prevent the spread of hate and false information. Andrew Anglin, creator of the white supremacist website Daily Stormer, is among them.
Furthermore, some well-known Twitter users, including Republican Rep. Marjorie Taylor Greene, who had previously been barred for making false claims about vaccine efficacy and hoax treatments, have resumed making such claims.
Musk, who has himself spread false information about COVID-19, returned to the subject this week with a tweet mocking transgender pronouns and calling for criminal charges to be brought against Dr. Anthony Fauci, the country’s top infectious disease expert and one of its COVID response leaders.
Musk, a self-proclaimed “free-speech absolutist,” has stated that he wants Twitter to allow all legally permissible content while demoting hateful and negative posts. Musk’s proclamation of “free speech, not reach” raises the prospect that, rather than removing harmful content, Twitter will leave it up without promoting or amplifying it to other users.
Although Twitter has removed most of its policy-making executives and outside advisers, Musk appears to be the one deciding what crosses the line. Musk banned Ye last month after the rapper formerly known as Kanye West posted an image of a swastika combined with a Star of David, content that was offensive but not illegal. The episode has also raised concerns about how the platform’s policies governing what can and cannot be posted will be applied.
Edited by Prakriti Arora