Is Online Harms/Safety Legislation really about ‘hate speech’?

Or is it just the culmination of a long game against our digital rights and freedoms?

I and several other legally minded folk have spent a few days looking at the various pre-trial and new appeal- and retrial-related reporting restrictions on the Letby case.

The advice I’ve received is that unlike some others who have unfortunately been served by the Cheshire Constabulary (one of whom even seems to think incorporation as a company will magically provide a protective shield), my careful approach appears to have stayed on the legal side of this situation.

That said, and with the return of a warning banner at the top of posts and some careful avoidance of particular issues that we will have to discuss only after any retrial, my series on the Letby case will resume in the next few days.

So… What have I been doing?

For the last few days, while awaiting legal opinions, I have been looking into the Online Harms/Online Safety laws being rolled out in countries like Australia and the United Kingdom (UK).

First, I was surprised to see that many people (and many in the mainstream media) seem unaware of some of the additional ‘gotchas’ buried in these laws. For example, it seems the UK government solved their ‘encryption issue’ by granting themselves what might amount to legislative backdoor access to all end-to-end encryption in their version of the online harms law.

Second, I was taken aback by how many people seem to be blinkered – believing that online harms/online safety legislation is a new and singular regulatory approach all about protecting people from the nebulous concept known as hate speech – and remaining completely blind to the fact that these laws are simply the culmination of two or more decades of gradual and directed encroachment on your digital rights and freedoms.

Step 1: Government Mandated Firewalls

That gradual encroachment started with government mandated firewalls in the 1990s.

People are usually familiar with China’s Golden Shield Project (also known as the Great Firewall of China), which employs hundreds of thousands of people and AI-based systems to censor online content, systematically probes for and shuts down programs and VPNs that might help Chinese citizens access outside information or websites on the dark web, and monitors the digital communications of all Chinese people.

In western countries like Australia, the UK and Denmark we were told these firewalls were going to block illegal child pornography. However, they have operated more like a government-run censorship list, blocking access to domains and websites that the politicians of the day decide they don’t want you to see.

For example, while they have been used to block content on a wide variety of morality grounds, such as material that a standing government official or member of a royal family felt was lèse-majesté 1, they have also been abused by politicians to block otherwise perfectly legal content like online poker and anti-abortion websites, Wikipedia entries, websites about euthanasia and suicide, religious websites, and even the websites of a tour operator and an innocuous Australian dentist (here).
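
As a rough illustration of the mechanism involved, the sketch below shows how a blocklist-based filter of the kind described above might operate at the DNS level. The domain names, sinkhole address and function are hypothetical placeholders, not any government’s actual implementation.

```python
# Illustrative sketch of blocklist-based DNS filtering (hypothetical names and addresses).

# A secret, government-supplied blocklist of domains.
BLOCKLIST = {"example-poker.com", "example-dentist.com.au"}

# Address of a 'block page' or sinkhole returned instead of the real site.
SINKHOLE_IP = "192.0.2.1"  # TEST-NET-1 address, used here purely as a placeholder

def resolve(domain: str, real_lookup) -> str:
    """Return the sinkhole address for blocked domains, otherwise resolve normally."""
    if domain.lower().rstrip(".") in BLOCKLIST:
        return SINKHOLE_IP
    return real_lookup(domain)

if __name__ == "__main__":
    # Stand-in for a real upstream DNS lookup.
    fake_dns = lambda d: "203.0.113.5"
    print(resolve("example-poker.com", fake_dns))   # -> 192.0.2.1 (silently blocked)
    print(resolve("example-news.org", fake_dns))    # -> 203.0.113.5 (allowed)
```

The point of the sketch is that the user never sees an error explaining the block; the lookup simply returns whatever address the list’s operator chooses.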

Step 2: Compelling Passwords

In the early 2000s courts around the world were regularly hearing complaints from police and prosecutors seeking orders to compel suspects to provide passwords that would enable access to devices like phones and laptops.

While courts in countries like the United States of America (USA), Germany and Canada have generally made it difficult for law enforcement to force people to hand over their passwords, predominantly on the basis that compelled disclosure breaches a person’s right against self-incrimination, governments in the UK and Australia were enacting laws that, on production of a warrant, compel exactly that disclosure.

Updated versions of these laws also compel production of digital keys and assistance with decryption of encrypted devices and files.

Step 3: Data Retention

During 2006 the European Union (EU) adopted the Data Retention Directive. While acknowledging its own inconsistency with previous EU directives intended to maintain the privacy of an individual’s personal electronic communications data, the Data Retention Directive prescribed the collection of data about communications between legal entities and natural persons, including location data and information necessary to identify the subscriber or registered user, but excluding the actual content of the communication.

The Directive was extended to cover internet access, internet email and internet (VoIP) telephony, with member states given an extended deadline of 15 March 2009 to implement these provisions into local law.

The EU Data Retention Directive did not see smooth transposition into the laws of some member states. Constitutional courts in the Czech Republic, Germany and Romania quickly moved to annul domestic legislation based on the directive because they found operation of these laws to be an unconstitutional infringement on people’s right to privacy.

Even the EU’s own European Court of Justice (ECJ) eventually declared the Directive invalid for breaching Articles 7 and 8 of the EU Charter of Fundamental Rights (the rights to respect for private life and to protection of personal data). Undeterred, countries both within and outside the EU have continued to enact, maintain and expand data retention laws consistent with the Data Retention Directive.

Newer versions in jurisdictions like Australia mandate a minimum of two years’ retention of an expansive set of data points and provide warrantless access to telecommunications and ISP metadata for a very broad group of government, law enforcement, private and non-government organisations 2.
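
To give a sense of what ‘metadata’ retention can cover, here is a minimal sketch of the kind of record a provider might be required to keep for every communication. The field names are hypothetical, loosely based on the categories the Directive and the Australian regime describe (subscriber identity, source, destination, time, duration, service type and location); note the deliberate absence of any content field.

```python
# Illustrative sketch of a retained communications-metadata record.
# Field names are hypothetical; there is deliberately no 'content' field,
# because retention covers who, when and where, not what was said.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RetainedRecord:
    subscriber_id: str        # identifies the account holder / registered user
    source: str               # originating number, IP address or account
    destination: str          # number, IP address or account contacted
    started_at: datetime      # date and time the communication began
    duration_seconds: int     # how long it lasted
    service_type: str         # e.g. "voice", "sms", "email", "internet"
    cell_or_location: str     # location data, e.g. the serving cell tower

record = RetainedRecord(
    subscriber_id="SUB-0001",
    source="+61-400-000-000",
    destination="+61-400-000-001",
    started_at=datetime(2015, 10, 13, 9, 30),
    duration_seconds=185,
    service_type="voice",
    cell_or_location="cell-tower-1234",
)
print(record)
```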

Step 4: Digital Identification

During the late 2010s, several countries including Australia, Canada and the USA began trialling and implementing digital state identification, predominantly the digital driver’s license.

Digital or mobile driver’s licenses (DDL or MDL) move us away from the physical document or photocard to using a state-operated mobile ‘app’ on our smartphones, with the potential for serious privacy and security issues that governments hope we haven’t noticed them intentionally overlooking.

These include situations where, for one reason or another, your smart device cannot reach the government server needed to verify your digital ID (such as when you are out of range of cell service, have no data credit on your account, or the servers or network providing the service are unavailable or ‘down’), leaving you potentially detained by law enforcement until your identity and, in this case, your legal right to drive can be verified.
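
As a rough illustration of that failure mode, here is a minimal sketch of a purely online verification flow of the kind described above, in which the check simply cannot complete if the government verification endpoint is unreachable. The URL, endpoint behaviour and function names are hypothetical.

```python
# Illustrative sketch: an online-only digital ID check fails when the
# (hypothetical) government verification server cannot be reached.
import urllib.request
import urllib.error

VERIFY_URL = "https://id.example.gov/verify"  # hypothetical endpoint

def verify_digital_id(credential_token: str, timeout_seconds: float = 5.0) -> str:
    req = urllib.request.Request(VERIFY_URL, data=credential_token.encode())
    try:
        with urllib.request.urlopen(req, timeout=timeout_seconds):
            return "VERIFIED"        # server answered and accepted the credential
    except urllib.error.HTTPError:
        return "REJECTED"            # server answered but refused the credential
    except OSError:
        # No signal, no data allowance, DNS failure, or the server is down:
        # the holder simply cannot prove their identity at the roadside.
        return "UNVERIFIABLE"

print(verify_digital_id("example-token"))
```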

More seriously, the risk extends to the fact that law enforcement have been trying to compel us to unlock our smart devices for years (see Step 2), and several trialled state-run digital ID apps require the user to unlock their device in order to access and display the digital ID and verification screen to officers.

It has been suggested that a situation could arise whereby the officer insists his hand scanner is not working and that he must ‘take your [unlocked] device back to his vehicle to use the vehicle’s data terminal’ – thus removing the device from your protection and potentially allowing an unlawful search of the information stored within it.

In the Australian context, proposed Digital ID laws incorporated a need for digital online ID – requiring all access to the internet (social media user accounts, online transactions, access to online banking etc.) to be ‘verified’ through the use of a government digital identity service and effectively placing the government at the centre of everything we do on the internet.

Step 5: Encryption Backdoors

One of the final remaining unsolved pieces of the digital surveillance puzzle is the problem posed to the surveillance state by ubiquitous and integrated end-to-end encryption (E2EE). Users at each end of an E2EE communication have a public and private key pair.

The public key is shared and is used by other users to encrypt messages being sent to the intended recipient user, while the private key is necessary to decrypt those messages and is known only to the recipient user’s computing device.

E2EE is a cheap and effective way to prevent the app provider’s servers, internet service providers and anyone who intercepts the communication in transit from being able to read its content.
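
As a concrete illustration of the key-pair mechanics described above, here is a minimal sketch using the PyNaCl library. The party names and message are made up, and real messaging apps layer further protocols (key agreement, ratcheting, authentication) on top of this basic idea.

```python
# Minimal sketch of end-to-end encryption with public/private key pairs (PyNaCl).
from nacl.public import PrivateKey, Box

# Each party generates a key pair; the private key never leaves their device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Public keys are exchanged openly.
alice_public = alice_private.public_key
bob_public = bob_private.public_key

# Alice encrypts for Bob using her private key and Bob's public key.
ciphertext = Box(alice_private, bob_public).encrypt(b"meet at noon")

# Only Bob, holding his private key, can decrypt the message.
plaintext = Box(bob_private, alice_public).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```

Anyone relaying the ciphertext in the middle (the app provider, an ISP, or an interceptor) sees only unreadable bytes, which is exactly the property the next step seeks to undermine.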

For many years government departments, regulators and even the UK’s central bank, the Bank of England, recommended or required the use of E2EE to secure critical activities like online and mobile banking or email communications.

However, governments rapidly realised that encryption was a clear threat to ongoing intelligence programs that, under the nebulous banner of national security and terrorism prevention, were collecting and analysing vast quantities of information generated by their citizens.

Some governments have called for an outright ban on encryption. Others are demanding that technology and telecommunications companies provide ways into our encrypted communications, including so-called ‘backdoors’ that would allow state actors to decrypt our communications on demand, and ‘ghost’ protocols added to E2EE that would see a communication encrypted twice, with the second copy delivered to a third ‘end’ such as an app developer, telecommunications company or government server.
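
To illustrate why a ‘ghost’ participant defeats the purpose of E2EE, here is a minimal sketch (again using PyNaCl, with hypothetical party names): the sender’s software silently encrypts a second copy of every message to an extra public key it has been instructed to include, so the third ‘end’ can read everything without breaking the underlying cryptography.

```python
# Illustrative sketch of a 'ghost' recipient silently added to an E2EE conversation.
from nacl.public import PrivateKey, Box

alice = PrivateKey.generate()   # sender
bob = PrivateKey.generate()     # intended recipient
ghost = PrivateKey.generate()   # covert third 'end', e.g. a government server

def send_with_ghost(message: bytes):
    """Encrypt the message for Bob, and covertly encrypt a second copy for the ghost."""
    to_bob = Box(alice, bob.public_key).encrypt(message)
    to_ghost = Box(alice, ghost.public_key).encrypt(message)  # the user never sees this
    return to_bob, to_ghost

to_bob, to_ghost = send_with_ghost(b"strictly between us")

# Bob decrypts his copy as normal...
print(Box(bob, alice.public_key).decrypt(to_bob))
# ...but the ghost can decrypt its copy too, defeating the end-to-end guarantee.
print(Box(ghost, alice.public_key).decrypt(to_ghost))
```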

For almost a decade governments have repeatedly proposed legislation mandating backdoors in encryption.

It started with the failed 2016 Burr-Feinstein bill in the US Senate, which demanded weakened security or backdoor access in all apps and services and was opposed as ‘ludicrous’, ‘dangerous’ and ‘technically illiterate’. It continued with the failed Graham-Cotton-Blackburn Lawful Access to Encrypted Data Act (LAED) of 2020, which sought to put everyone’s privacy and security at risk by banning both E2EE and any device or service that could not be decrypted for law enforcement. And it has led to recent arguments in the Council of the EU over whether service providers that have incorporated E2EE into their communication services can lawfully be required to weaken it sufficiently that they can proactively access encrypted communications passing over their networks, under the guise of monitoring for chatter that might suggest child sex offences are taking place.

Step 6: Online Harms/Online Safety

Since 2018, the UK and Australia have been leading the charge in making the internet a so-called ‘safe space’.

A place where holding any sort of informed opinion that deviates from what the government, mainstream media or law enforcement officer of the day says is ‘right and proper’ is unlawful and could see you disconnected from online services, fined, or thrown in jail.

So, this is where we are right now…

Conclusion

I hope it is now clear that we haven’t arrived at these Online Harms/Online Safety laws through coincidence or as a merely reactionary response, but rather as the sixth and most recent step in a decades-long long game being played out against our digital rights and freedoms.

See more here: substack.com

Header image: The Guardian



Comments (1)


    Howdy


    “The blacklist is maintained by ACMA and provided to makers of internet filtering software that parents can opt to install on their PCs.”
    Opt being the choice.
    If your ISP uses filtering (it probably does) that cannot be disabled or edited, and it impedes your browsing or anything else, then it is likely done at the DNS level. In this case, use a different DNS server. I use secure DNS. While I’ve read about the supposed cons of such services, it suits me just fine. Add a few browser plugins and you’re already making a better place for yourself, but it requires knowledge.

    There are DNS services I would never use, but it’s up to you. Does the company running the service have a good reputation for privacy? Will the service combat malware and bad actors? Is it speedy enough? Does it have family safety features? Some do specifically, but not all.
    There are quite a few to choose from, but do your homework and make an informed choice.

    Sony has a case against the ‘Quad9’ DNS service that is being challenged as a first, because Quad9 was forced to block IPs belonging to perpetrators providing counterfeit media. This should not be possible, since a DNS provider is simply a ‘pass-through’ of sorts and not an enforcer. It is up to the ISP, as the provider, to enforce that. The DNS service is supposed to be free of that, as I understand it. Still, you know what media companies the likes of Sony are, and the depths they will stoop to.
