
Government will require platforms to take down revenge images within two days

The UK government will amend the Crime and Policing Bill so platforms must take down intimate images shared without consent within 48 hours or face fines and potential blocking.

The UK government has introduced an amendment to the Crime and Policing Bill that requires online services to remove non-consensual intimate images within a maximum of 48 hours after they are reported. The change treats such images with the same severity as child sexual abuse material and terrorism content, increasing legal and technical obligations for platforms.

Regulators and providers will be expected to deploy digital tools to prevent reposting and to take enforcement action when services fail to comply.

What the amendment requires

Under the amendment, platforms must acknowledge reports promptly and remove confirmed non-consensual intimate images within the stipulated 48-hour window.

The proposal ties legal priority to operational measures, including the use of automated filters, hashing databases and other technical means to detect and block reuploads. Regulators would gain powers to demand evidence of compliance and to order further measures where removal mechanisms prove inadequate.
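As a rough illustration of how a hashing database can be used to block reuploads, the Python sketch below checks each new upload against a store of digests for images already confirmed as non-consensual. The exact-digest approach and the function names are simplifying assumptions; deployed systems typically rely on perceptual hashes and shared industry hash lists rather than a simple set lookup.

    # Minimal sketch of a hash-based reupload filter, assuming a shared
    # store of digests for images already confirmed as non-consensual.
    # Real systems use perceptual hashes (e.g. PhotoDNA or PDQ) rather
    # than exact digests; this is an illustration only.
    import hashlib

    # Hypothetical store of digests for previously removed images.
    blocked_digests: set[str] = set()

    def register_removed_image(image_bytes: bytes) -> str:
        """Record the digest of an image confirmed for removal."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        blocked_digests.add(digest)
        return digest

    def should_block_upload(image_bytes: bytes) -> bool:
        """Return True if an uploaded image matches a known removed image."""
        return hashlib.sha256(image_bytes).hexdigest() in blocked_digests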

Consequences for repeated non-compliance can include fines and targeted network measures, such as service blocking, aimed at sites that consistently ignore takedown requirements. The amendment also encourages providers to share intelligence and blocking information with authorities to reduce the risk of persistent reposting.

Single-report mechanism for victims

The amendment introduces a single-report mechanism designed to let victims notify one service and prompt wider removals. Platforms would be expected to act on that single report to identify and remove the same intimate image across linked services and mirror sites.

Under the proposal, regulators and designated industry bodies would coordinate information sharing to reduce repeated reposting. The mechanism would rely on verified reporting channels, automated matching tools and agreed procedures for escalating complex or cross-border cases.
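In outline, a single-report mechanism of the kind described could turn one verified report into a record that is pushed to every linked service. The Python sketch below is a hypothetical outline; the record fields and the notification callbacks are assumptions for illustration, not part of any published specification.

    # Illustrative sketch of a single-report fan-out: one verified report
    # produces a takedown record that is delivered to each participating
    # service. All names here are hypothetical.
    import hashlib
    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class TakedownRecord:
        image_digest: str   # fingerprint of the reported image
        report_id: str      # identifier of the verified victim report
        reason: str = "non-consensual intimate image"

    def fan_out_report(image_bytes: bytes, report_id: str,
                       services: list[Callable[[TakedownRecord], None]]) -> TakedownRecord:
        """Build one takedown record and deliver it to all linked services."""
        record = TakedownRecord(
            image_digest=hashlib.sha256(image_bytes).hexdigest(),
            report_id=report_id,
        )
        for notify in services:
            notify(record)  # each service applies its own removal workflow
        return record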

Safeguards are outlined to minimise wrongful takedowns. These include verification checks before removal, appeals processes for account holders and requirements for transparency reporting on takedown decisions. Law enforcement and victim support organisations would be notified when a report indicates criminality or immediate risk.

Campaigners and privacy experts have urged clear rules on data retention and strict limits on how intelligence is reused. Regulators will need to balance rapid removals with due-process protections and safeguards against abuse of the reporting system.

Implementation details, including technical standards for automated detection and the roles of different regulators and platforms, will be set out in guidance accompanying the legislation.

Single report to reduce victim burden

The government says victims will need to report an abusive image only once, rather than notifying every platform where it appears. This change aims to reduce the administrative burden on victims and remove the need for continuous monitoring.

Ofcom is considering the use of digital marking techniques so that a marked image can be automatically identified and removed when it is reuploaded. The proposed system would link a single report to platform-level removal actions across services.
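Ofcom has not published the marking technique it is considering, but one common approach is a perceptual fingerprint that survives re-encoding and small edits. The Python sketch below uses a basic average hash with the Pillow imaging library purely to illustrate the idea; the hash size and matching threshold are arbitrary assumptions.

    # Minimal average-hash ("aHash") sketch showing how a marked image can
    # be re-identified after re-encoding or minor edits. This is an assumed
    # illustration, not Ofcom's actual scheme.
    from PIL import Image

    def average_hash(path: str, size: int = 8) -> int:
        """Fingerprint an image as a 64-bit hash of its brightness pattern."""
        pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
        mean = sum(pixels) / len(pixels)
        bits = 0
        for pixel in pixels:
            bits = (bits << 1) | (1 if pixel >= mean else 0)
        return bits

    def is_match(hash_a: int, hash_b: int, max_distance: int = 5) -> bool:
        """Treat two images as the same if their hashes differ in few bits."""
        return bin(hash_a ^ hash_b).count("1") <= max_distance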

Enforcement and technical measures

The government will publish guidance for internet service providers setting out how to block access to sites that host flagged material. The guidance targets websites that may operate outside the scope of the Online Safety Act.

The government intends to combine digital marking, provider-level blocking and statutory takedown timelines to create multiple layers of protection.

Fines and blocking as incentives

Statutory timelines for removal are to be backed by enforcement powers, including fines and blocking of non-compliant sites. Regulators will be expected to use these measures to encourage prompt action by platforms and providers.

Officials say the layered approach is designed to speed removals, limit repeat circulation and reduce harm to victims.

The government will enforce the new rules with substantial penalties aimed at large global platforms. Under the proposals, firms face fines of up to 10 percent of qualifying worldwide revenue or having their services blocked in the UK.

Ministers say the measures are intended to create a strong incentive for rapid compliance and to end what officials describe as a period in which tech companies enjoyed a “free pass”. The government argues that financial exposure or service disruption will prompt faster removal of harmful material.

Industry sources caution that the sanctions could have wide-reaching consequences for platform operations and cross-border data flows. Legal teams at several global firms say they will seek clarity on how liability and fines will be calculated.

Campaigners welcome tougher enforcement but urge robust safeguards and independent oversight. Rights groups stress the need for clear appeal mechanisms and transparency about takedown decisions to protect lawful speech.

Further detail on implementation and oversight will appear in the guidance accompanying the legislation, which will set out enforcement thresholds, appeals processes and the role of regulators in monitoring compliance.

Context and reactions

The policy response follows incidents in which generative AI tools produced sexualised images of real people without consent, prompting public outrage and regulatory scrutiny. One high-profile case involved a chatbot linked to Elon Musk that generated explicit images, leading to a public reversal by the company involved. The European Union is investigating related behaviour under the Digital Services Act, underscoring transnational concern about AI-generated explicit content and deepfakes.

Companies named in the EU probe have issued statements committing to safety and to a zero-tolerance approach toward child sexual exploitation and non-consensual material. Watchdogs and victim advocates say those statements fall short, calling for enforceable timetables, independent audits and transparent reporting mechanisms.

Voices from legal and victim communities

Legal experts warn that existing liability gaps could allow harmful content to spread before platforms act. They urge clearer rules on platform responsibility and faster notice-and-takedown processes. Scholarly commentators highlight the technical difficulty of distinguishing manipulated from genuine images at scale, and they recommend mandatory third-party testing of generative systems.

Groups representing survivors demand stronger remedies and access to redress. They seek fast takedown rights, mandatory notification when identifiable images are generated, and statutory obligations for platforms to fund victim support services. Advocates also press for clearer child-protection standards embedded in AI safety designs.

Regulators and campaigners propose several policy measures. These include binding reporting timelines, independent compliance audits, whistleblower protections and penalties proportional to global turnover for persistent breaches. Those proposals broadly align with the upcoming legislative framework.

Legal representatives for victims of so-called revenge porn welcomed the government’s announcement. One lawyer described the policy as a step forward but questioned whether a 48-hour maximum is adequate, saying each hour that content remains online increases the harm. She urged platforms to publish clear, accessible reporting routes so victims can find and use them quickly.

Charities and campaigners called for faster takedowns and stronger platform accountability. They argued that technical measures must be paired with user-facing transparency and simpler reporting options. Campaigners stressed that victims often cannot identify platform contact points and that automated content marking and cross-platform coordination are essential to prevent repeated circulation.

What to expect next

Regulators will translate the policy into an enforcement regime that defines thresholds for action, notification requirements and appeals processes. The upcoming legislative framework will specify how regulators monitor compliance and impose sanctions. Platforms will be required to demonstrate both technical safeguards and accessible user remedies.

Experts expect platforms to roll out three core changes. First, clearer reporting pathways prominently visible across apps and websites. Second, improved automation to flag and remove non-consensual intimate imagery more rapidly. Third, systems for sharing alerts and verified takedown requests across services to limit reuploads.

Charities said they will press for independent audits of platform compliance and for publicly available transparency reports. Legal groups indicated they will monitor whether the 48-hour window is met in practice and whether appeals procedures protect victims from further harm.

Beyond the question of whether the 48-hour window will be met in practice, the amendment would require platforms operating in the UK to overhaul moderation workflows and reporting interfaces.

Under the proposal, companies must deliver rapid removal processes and stronger inter-platform cooperation to stop images spreading across services. Regulators such as Ofcom could set technical standards for digital marking and cross-platform detection, while internet providers could be directed to block persistently non-compliant sites.

Victims and campaigners will prioritise speed, clear reporting pathways and enforceable remedies. Faster takedowns, transparent appeals processes and proportionate penalties aim to limit circulation of abusive content and increase accountability for platforms that fail to act.

The package combines legal classification, technical measures and financial sanctions to reduce harm and raise the cost of sharing non-consensual intimate images. Independent monitoring and regular compliance reports are expected to determine whether the new rules achieve those goals.

