At 9.13 pm on 3 April 2019, the Manager of Government Business introduced the Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill 2019 into the Australian Senate. It passed at 9.15 pm. No debate. No amendments. No referral to committee. The next day the House of Representatives performed its scrutiny duty with a similar level of rigour: four brief speeches. The bill received royal assent the day after.
This was a serious piece of law, unlike anything else in the world. It requires social media sites, ISPs and web hosts to remove ‘abhorrent violent material’: recordings depicting terrorist acts, murder, attempted murder, torture, rape or kidnapping. This is content that is otherwise legal to broadcast on television or print in newspapers.
Social media company executives who fail to comply could be fined up to $2.1 million and face three years in jail; companies could face fines of up to $10.5 million or 10 per cent of annual turnover. The law was justified as a response to the horrific Christchurch mosque shootings, which led to (false) claims that Facebook did not take down the material for hours.
This law raises serious free speech issues. The United Nations special rapporteurs on counter-terrorism and human rights and on freedom of expression wrote to the Australian government warning that the law endangers free speech because it could encourage social media companies to excessively remove content to avoid liability.
Those who are concerned about online free speech have – often rightly – pointed to problems with the content moderation policies of technology firms, the apparent political bias of algorithms, and the issues presented by tech groupthink.
While the actions of social media companies may be less than ideal, the real threat to online freedom comes from politicians hell-bent on changing the rules of the internet in response to a moral panic that blames the internet for all our social ills.
It is now commonplace to hear Republican politicians like Ted Cruz call for the removal of Section 230 of the US Communications Decency Act for ‘biased’ companies. Section 230 means that content hosts are not legally liable – think defamation or harassment – for material posted by their users. This means they do not have to pre-moderate all online material, which would be prohibitively expensive and encourage excessive removal of content to avoid legal costs.
Cruz and others say this is a special privilege. It is not. It is the equivalent of not holding postal companies liable for the contents of all letters – as otherwise, they would have to open every letter before delivery and make a judgement about the content.
The Section 230 debate shows how deeply unwise it would be to respond to concerns about tech company censorship with state power. It is almost certain to backfire. Once a greater role for the state is established in the online space, the inevitable result will be less freedom, not more. You do not create additional state powers that you cannot imagine entrusting to your political opponent – as more likely than not they will someday be in power.
The solution to concerns about the actions of private technology companies – which, we must admit, have every right to decide what goes on their platforms – is to develop alternatives, such as Jordan Peterson’s Thinkspot.
The risk of establishing an online content regime – like that proposed in the UK’s Online Harms White Paper, which would impose an expansive ‘duty of care’ on tech firms and lead to censorship – is that it would apply to alternative websites as well.
What matters most at this point is avoiding the premise that the government has any further role, beyond ensuring the enforcement of existing laws, in moderating what can and cannot be read online.
The alternative is unlikely to end in anything but tears for supporters of free speech and a free society.