Bank of England to probe AI agents in trading with a focus on herding, as the FCA shares good practice and MPs press HM Treasury on the Critical Third Parties Regime

The UK financial system has moved a step closer to formal testing of algorithmic behaviour. In a response to the Treasury Committee, the Bank of England confirmed it will run trials to understand how AI agents behave in trading environments, with a particular focus on correlated actions known as herding.
The same correspondence set out how the Financial Conduct Authority (FCA) plans to circulate examples of good practice to the sector. These developments come amid sustained parliamentary pressure for clearer oversight and targeted stress-testing of autonomous systems in finance.
What regulators have pledged
The official replies make three things clear: the Bank of England will investigate AI-driven market risks, the FCA will offer practical guidance to firms, and HM Treasury has so far declined to set a binding deadline for designating major providers under the Critical Third Parties Regime.
The Bank says the Financial Policy Committee (FPC) will continue to track how the Critical Third Parties Regime is used and whether it strengthens systemic resilience. Despite that oversight commitment, the Treasury Committee criticised the pace of progress and highlighted the potential harm from outages at large cloud and AI suppliers if regulatory powers remain unused.
FPC scenario work and the herding risk
What the Bank is modelling
The FPC noted in its April 2026 record that advanced AI is not yet creating systemic instability, but warned the risk could rise quickly as adoption accelerates. The Bank has begun bespoke scenario analysis and simulations to explore how multiple AI agents might synchronise trading decisions and amplify price moves — a phenomenon often described as herding. Officials emphasise that such behaviour could produce fast, self-reinforcing sell-offs, similar in impact to a flash crash but driven by automated decision-making rather than human panic. The Bank is also collaborating with overseas authorities to examine how cross-border agent behaviour could transmit shocks across markets.
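The herding dynamic the Bank describes can be illustrated with a toy agent-based simulation. The sketch below is purely hypothetical — it is not the Bank's model, and every parameter name is an illustrative assumption. Each agent's order mixes a private signal with the crowd's previous average order; as the herding weight rises, orders synchronise and the resulting price swings grow, which is the self-reinforcing amplification the FPC is worried about.

```python
import random
import statistics

def simulate(n_agents=50, n_steps=200, herding=0.8, seed=1):
    """Toy market: each agent's order is a private signal plus
    `herding` times the previous step's average order. A higher
    herding weight couples the agents, so net order flow becomes
    persistent and price moves amplify."""
    rng = random.Random(seed)
    price, prev_avg = 100.0, 0.0
    prices = [price]
    for _ in range(n_steps):
        orders = [rng.gauss(0, 1) + herding * prev_avg
                  for _ in range(n_agents)]
        prev_avg = sum(orders) / n_agents
        price += prev_avg            # net order flow moves the price
        prices.append(price)
    returns = [b - a for a, b in zip(prices, prices[1:])]
    return statistics.pstdev(returns)

# Return volatility rises as agents weight the crowd more heavily.
for h in (0.0, 0.5, 0.9):
    print(f"herding={h:.1f}  return volatility={simulate(herding=h):.3f}")
```

The point of such a sketch is qualitative: even with identical private signals, strengthening the coupling between automated agents turns independent noise into correlated, trend-amplifying order flow — the flash-crash-like behaviour described above.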
Parliamentary pressure and key concerns
The Treasury Committee has been vocal, arguing that regulators must move from a reactive stance to proactive testing and supervision. Committee chair Dame Meg Hillier highlighted recent AI developments such as Anthropic’s Project Mythos as evidence of rapid technological change and warned that the UK’s financial stability institutions need to understand the risks before an incident occurs. The committee urged designation of major AI and cloud firms under the Critical Third Parties Regime, but reported that HM Treasury would not commit to doing so before the end of 2026 — a delay that alarms many policymakers.
How the FCA is engaging firms
The FCA has reaffirmed that it expects firms to align AI deployment with existing rules, rather than wait for a bespoke AI rulebook. To help with that, the regulator will publish concrete examples of how to implement conduct requirements when using AI. The FCA also launched its voluntary AI Live Testing service in April 2026, which lets firms trial models in a controlled, real-world setting before full roll-out. Under the Senior Managers and Certification Regime, senior staff retain accountability for functions delegated to algorithms, underscoring the need for clear governance and documented oversight.
What firms should do today
Businesses should treat these regulatory signals as a call to action. Practical steps include creating an inventory of active AI systems, assigning named accountable owners, and running targeted scenario or stress tests that consider synchronised agent behaviour. Firms operating or selling into the EU must also track the EU AI Act timetable: obligations for high-risk systems in Annex III are set to become enforceable on 2 August 2026, though proposals to delay some requirements to December 2027 have been discussed. Meanwhile, as of March 2026 only eight EU member states had named national enforcement authorities, so cross-border compliance planning remains essential.
In short, regulators have moved from words to concrete steps: the Bank of England will stress-test market agent risk, the FCA will share best-practice examples and run live-testing facilities, and Parliament will continue to press HM Treasury over the pace of designating critical providers under the Critical Third Parties Regime. For firms, the right response is pragmatic: document what is in use, who is responsible, and whether current controls would withstand regulatory scrutiny. Those that act now will find themselves far better placed when supervisors turn testing into enforcement.
