What Is the Online Safety Bill And Will It Protect Social Media Users?

The legislation will try to ban “legal but harmful” content amid warnings it will create a “censor’s charter”.
The online safety bill is being introduced in parliament after a number of delays

For years now, MPs and members of the public have complained about the “Wild West” that has been allowed to flourish on the internet due to a lack of regulation.

Social media firms have long been able to escape responsibility for the content that is published on their sites by arguing that they are simply neutral platforms and not publishers.

But that could soon be about to change with the publication of the online safety bill on Thursday, which is designed to strengthen the duty of care social media giants owe to their users for what they view online.

As well as giving sweeping new powers to media regulator Ofcom, the bill will also make it a criminal offence for social media bosses not to take action over harmful content posted on their sites — with a failure to act leading to a potential jail sentence.

Here HuffPost UK takes you through the main talking points in the bill and how it has been received.

What does the bill do?

At its heart, the online safety bill aims to create a “duty of care” for social media giants to abide by when it comes to their users’ experience of harmful and illegal content.

Gone are the days of so-called self-regulation: instead, social media platforms like Meta’s Facebook, Twitter and Instagram will be legally obliged to protect children and adults from viewing content that breaches their own internal terms and conditions.

Key measures in the bill include:

  • A new legal duty requiring the largest social media platforms and search engines to prevent paid-for fraudulent adverts appearing on their services

  • Making sure all websites which publish or host pornography, including commercial sites, carry out robust checks to ensure users are aged 18 or over

  • Making cyberflashing a criminal offence

  • A duty on companies to report child sexual exploitation and abuse they detect on their platforms to the National Crime Agency

How will social media giants be held to account?

Ofcom will be given powers to demand information and data from tech firms, including on how their algorithms select and display potentially harmful content, so that such content can be tackled.

The regulator will also be able to enter companies’ premises to access data and equipment, demand interviews with employees and force companies to undergo external assessments on how they are keeping users safe.

Any senior manager who destroys evidence, fails to attend an Ofcom interview or provides the regulator with false information could also be held criminally liable.

Sites that fail to comply with Ofcom’s demands face being blocked and fined up to 10 per cent of global turnover, while social media bosses who fail to take action face up to two years in prison — an offence that will come into force within two months of the bill becoming law.

What does the bill not do?

Ministers ultimately resisted calls to ban anonymous accounts online, arguing that this could inadvertently shut out people who need to conceal their identity — such as whistleblowers or victims of domestic abuse.

Instead, social media users will be given more control over what they see online and who can interact with them.

The bill also does not — at this stage — make it a criminal offence to encourage self harm and suicide, as called for by some campaigners.

However, online giants will be responsible for ensuring such “legal but harmful” content does not appear on their platforms in the first place.

What is deemed “legal but harmful” will now be set out in secondary legislation approved by parliament, which the government says will prevent social media executives from determining what appears online.

What is the government saying?

Ahead of the bill being unveiled in parliament, culture secretary Nadine Dorries said: “The internet has transformed our lives for the better. It’s connected us and empowered us.

“But on the other side, tech firms haven’t been held to account when harm, abuse and criminal behaviour have run riot on their platforms. Instead they have been left to mark their own homework.

“We don’t give it a second’s thought when we buckle our seat belts to protect ourselves when driving.

“Given all the risks online, it’s only sensible we ensure similar basic protections for the digital age. If we fail to act, we risk sacrificing the wellbeing and innocence of countless generations of children to the power of unchecked algorithms.”

How has it been received?

The bill has had a frosty reception from free speech campaigners, who are concerned about the “legal but harmful” provisions and the impact they could have on freedom of expression.

Journalists too have expressed concern that online giants could unfairly remove their posts without justification, but Dorries has said that journalists will have an expedited right to appeal if this happens.

Jim Killock, executive director of the Open Rights Group, said using the term amounted to the creation of a “censor’s charter”.

“Unbelievably while acknowledging the sheer amount of power (Facebook executive) Nick Clegg and other Silicon Valley bigwigs already have over what we can say online, Nadine Dorries has created a bill that will grant them even more,” he said.

Meanwhile, Labour said the government’s delay to implementing the bill had already allowed disinformation from Russia to spread online.

“Delay to the online safety bill has allowed the Russian regime’s disinformation to spread like wildfire online,” shadow culture secretary Lucy Powell said.

“Other groups have watched and learned their tactics, with Covid conspiracy theories undermining public health and climate deniers putting our future at risk.

“The big tech companies will not regulate themselves. The government must ensure the bill can tackle disinformation online.”

Julian Knight, chair of the Digital, Culture, Media and Sport Select Committee, said: “We welcome the introduction today of legislation that has been a long time in the making.

“That the government has listened to our concerns, particularly around cyber-flashing and age assurance, shows the real value of our pre-legislative scrutiny.

“We are particularly pleased that parliament and not tech companies will play the key and deciding role on what constitutes legal but harmful content.

“We look forward to continuing to help shape the bill as it goes through parliament to ensure it strikes the right balance between protecting people online from the most pernicious types of harmful content whilst also preserving freedom of speech.”
