Riot police officers push back anti-immigration protesters outside on August 4, 2024 in Rotherham, UK
Christopher Furlong | Getty Images
LONDON — Ofcom, the U.K.'s media regulator, was last year tasked by the government with policing harmful and illegal content online under tough new internet safety rules.
But even as online disinformation related to a stabbing attack in the U.K. has led to real-world violence, the internet safety regulator has found itself unable to take effective enforcement action.
Last week, a 17-year-old knifeman attacked several children attending a Taylor Swift-themed dance class in the English town of Southport on Merseyside.
Three girls were killed in the attack. Police later identified the suspect as Axel Rudakubana.
Shortly after the attack, social media users were quick to falsely identify the attacker as an asylum seeker who had arrived in the U.K. by boat in 2023.
On X, posts sharing the false name of the perpetrator were widely shared and viewed by millions.
This in turn helped spark far-right, anti-immigrant protests, which have since turned violent, with shops and mosques attacked and bricks and petrol bombs thrown.
Why can’t Ofcom take action?
British officials then issued warnings to social media companies, calling on them to tackle false information online.
Peter Kyle, the U.K.'s technology minister, has held talks with social media companies including TikTok, Facebook parent Meta, Google and X over their handling of misinformation spread during the riots.
However, Ofcom, the regulator responsible for taking action over failures to tackle misinformation and other harmful material online, is unable at this stage to take effective enforcement action against tech giants allowing harmful posts that incite the ongoing riots, because not all of its powers under the new online safety law have yet come into force.
New duties under the Online Safety Act requiring social media platforms to actively identify, mitigate and manage the risks of harm from illegal and harmful content on their platforms have not yet taken effect.
Once the rules are fully in place, Ofcom will have the power to impose fines of up to 10% of companies' global annual revenue for breaches, and even jail time for individual senior executives in cases of repeated breaches.
But until that happens, the watchdog is unable to punish companies for online safety breaches.
Under the Online Safety Act, sending false information with intent to cause non-trivial harm is a criminal offense, which is likely to cover disinformation aimed at inciting violence.
How did Ofcom react?
An Ofcom spokesman told CNBC on Wednesday that it is moving quickly to implement the law so it can be enforced as soon as possible, but that the new duties requiring tech companies to actively screen their platforms for harmful content will not fully come into force until 2025.
Ofcom is still consulting on risk assessment guidance and codes of practice on illegal harms, which it says it needs to put in place before it can effectively enforce the Online Safety Act's measures.
“We are speaking to the relevant social media, gaming and messaging companies about their responsibilities as a matter of urgency,” the Ofcom spokesman said.
"Although platforms' new duties under the Online Safety Act don't come into force until the new year, they can act now; they don't have to wait for new laws to make their sites and apps safer for users."
Gill Whitehead, Ofcom's group director for online safety, echoed that statement in an open letter to social media companies on Wednesday, warning of an increased risk of platforms being used to incite hatred and violence amid recent violent incidents in the U.K.
"In a few months, new safety duties under the Online Safety Act will be in place, but you can act now; there is no need to wait to make your sites and apps safer for users," Whitehead said.
She added that while the regulator works to ensure companies rid their platforms of illegal content, it still recognizes the "importance of protecting free speech."
Ofcom says it plans to publish its final codes of practice and guidance on online harm in December 2024, after which platforms will have three months to carry out risk assessments of illegal content.
The codes will be subject to scrutiny by the UK Parliament, and unless lawmakers oppose the draft codes, the online safety obligations on the platforms will become enforceable shortly after this process is completed.
Provisions to protect children from harmful content will come into force from spring 2025, while fees on larger services are set to take effect from 2026.