Twitter, the social media platform owned by billionaire Elon Musk, has been called out by Australia’s online safety regulator for its handling of online hate. Julie Inman Grant, Australia’s eSafety Commissioner, identified Twitter as the platform attracting the most complaints and demanded an explanation from the company within 28 days, or it could face significant fines.
Despite having fewer users than platforms such as TikTok, Facebook, and Instagram, Twitter accounted for one-third of all complaints about online hate received by the regulator, prompting Inman Grant to issue the company a legal notice.
Inman Grant expressed concern that Twitter has failed to address hate speech effectively, pointing to reports that previously banned accounts have been reinstated, emboldening extremist and hateful individuals, including neo-Nazis, both in Australia and abroad.
The regulator’s demand for accountability follows its ongoing campaign to hold social media companies more responsible for their platforms. Twitter, however, did not provide a statement in response to the announcement when approached by the BBC for comment.
Twitter’s management of content moderation has been under scrutiny since Elon Musk acquired the company for $44 billion last year. In June, Ella Irwin, Twitter’s second head of trust and safety under Musk’s ownership, resigned. Her predecessor, Yoel Roth, had left the position in November 2022, a month after Musk assumed control.
The head of trust and safety at Twitter is responsible for content moderation, a crucial aspect that has garnered attention since the ownership change. Although Irwin did not publicly disclose her reasons for leaving, her departure followed Musk’s public criticism of a content moderation decision.
Musk labeled the decision to limit the visibility of a video due to allegations of misgendering as “a mistake by many people at Twitter.” He argued that not using someone’s preferred pronouns, while potentially rude, does not break any laws.
Shortly after Irwin’s resignation, Linda Yaccarino, the former head of advertising at NBCUniversal, assumed the position of Twitter’s chief executive, replacing Musk. Yaccarino is known for navigating NBCUniversal through technological disruptions, revamping advertising sales, and contributing to industry-wide discussions about data gaps as audiences shift online.
Since Musk’s acquisition, Twitter has witnessed a significant reduction in its workforce, with approximately 75% of employees being let go, including teams responsible for tracking abusive content. Musk has also implemented changes to the company’s verification process. In addition, Twitter has experienced a notable exodus of advertisers.
The Australian regulator’s demand that Twitter address concerns about online hate reflects a growing trend of regulators holding social media platforms accountable for the content shared on their services, and it adds to the mounting pressure on those platforms to take more proactive measures against harmful content. As public scrutiny grows and the debate around online hate speech intensifies, it remains to be seen how Twitter will respond and what actions it will take to address the issue effectively.
Twitter’s response to the Australian cyber watchdog’s inquiry will be closely watched, as it may set a precedent for how social media companies are expected to handle online hate in the future. The potential fines of up to A$700,000 per day for ongoing breaches serve as a stark reminder of the financial repercussions that platforms could face if they fail to address the issue adequately.
While Elon Musk pledged to protect free speech on Twitter when he acquired the company, striking a balance between free expression and preventing the spread of hate speech has proven to be a complex challenge. As a platform that thrives on real-time conversations and the amplification of voices, Twitter must grapple with the responsibility of curbing harmful content while respecting users’ right to express their opinions.
The departure of Twitter’s previous heads of trust and safety under Musk’s ownership raises questions about the company’s internal dynamics and its commitment to effective content moderation. It remains to be seen how Linda Yaccarino, the newly appointed chief executive, will navigate these challenges and shape Twitter’s approach to combating online hate.
Moreover, Twitter’s dwindling advertiser base underscores the significance of addressing this issue promptly. Advertisers are increasingly concerned about brand safety and reputation, and they are likely to shift their advertising investments to platforms that demonstrate a stronger commitment to creating a safe and inclusive online environment.
The case of Twitter in Australia also highlights the broader global debate surrounding the responsibility of social media platforms in addressing online hate speech. Governments and regulators worldwide are considering legislative measures to hold platforms accountable and ensure they actively combat hate speech, misinformation, and other harmful content.
Ultimately, the outcome of the Australian cyber watchdog’s inquiry into Twitter’s handling of online hate may have far-reaching implications for the regulation of social media platforms worldwide. It could contribute to shaping new policies and standards that prioritize user safety and the promotion of positive online discourse.
As the deadline for Twitter’s response approaches, all eyes are on the social media giant to see how it will address the concerns raised by the Australian regulator. The actions Twitter takes in the aftermath of this inquiry will not only determine its standing in Australia but also influence the global discourse on combating online hate and shaping the future of social media.