Twitter's new owner might not change the platform as much as either side imagines. Under Jack Dorsey, Twitter had the laxest content moderation rules and the least sophisticated enforcement of any major social media platform, and it still does. For years, its official internal policy was to allow Republican politicians to tweet white supremacist talking points, and it long believed that "counter speech" could combat racism and hate speech, the favored and failed strategy of free speech absolutists. Twitter has been notoriously bad at finding and deleting the accounts of literal terrorists from ISIS and avowed neo-Nazi militias. Its enforcement against harassment and threats has been terrible, and its latest strategy has largely been to let users more easily hide the threats and harassment directed at them rather than ban the offending accounts altogether. Twitter's spam and crypto-scam problem is well documented. Its moderation in any language besides English is horrendous.
As Motherboard has reported in depth, advertisers do not like putting their brands alongside hate speech, harassment, terrorism, suicide, self-harm, or violent or otherwise explicit content. Legally, Twitter will have to keep removing things like child sexual abuse material and copyrighted material (if hit with a copyright takedown request), meaning the company will have to have some rules. Twitter is a for-profit business, and Musk has indicated that he intends to make it more profitable. So unless he wants to run Twitter as a charity for free speech absolutists amid a potential mass advertiser exodus, Musk will face the same problem every other social media company has faced: balancing "free speech" with the ability to run a sustainable, profitable company. Additionally, Apple and Google have shown they are willing to ban social media apps that allow hate speech and other violent content.
"Twitter and all other user-generated content services must constantly classify content as illegal, 'lawful but awful,' or completely permissible on the service. Things like child sexual abuse material and copyright infringing files are illegal and usually must be removed when the service recognizes their illegality. Completely permissible content isn't a problem," Eric Goldman, Associate Dean for Research and Professor, Santa Clara University School of Law, told Motherboard. "It's the middle category, 'lawful but awful' content, that poses so much trouble for everyone. Most 'harassing,' 'threatening,' or 'violent' content fits into this category (except in extreme cases). Because it's lawful, there's usually no obligation to remove the content; indeed, the Constitution may prohibit imposing any liability. Nevertheless, most regulators want that content removed; as do advertisers and many users."
"If Musk thinks he can change Twitter's procedures to accept more lawful-but-awful content, the law may permit this choice, but I don't expect it will be a financially prudent one," he continued. "Instead, by driving away customers and advertisers, a choice to embrace lawful-but-awful content could reduce Twitter's overall valuation substantially."