
Why many savers rely on AI chatbots for investing and what regulators plan to change

More Britons, notably Gen Z and millennials, are consulting AI chatbots for investing and money management, prompting concerns from industry experts and a regulatory push for affordable, tailored support

AI chatbots are reshaping where many people, especially younger cohorts, seek personal finance guidance in the UK. The shift reflects lower cost, easier access and the rapid rise of conversational models that operate largely outside the regulated advice framework. The change carries implications for consumer protection, market conduct and regulatory policy.

Industry bodies and financial professionals acknowledge that chatbots can clarify concepts and outline broad strategies, but caution that these tools do not provide the personalised, regulated safeguards available from authorised human advisers. The Financial Conduct Authority is developing reforms to broaden lower-cost, practical guidance through what it calls targeted support.

Who is using AI chatbots and why it matters

Young adults are among the most frequent users of conversational tools for pensions, savings and investment queries, drawn by fast, on-demand answers and lower fees. Ease of access and cost sensitivity drive the behaviour.

Cost barriers and limited access to traditional advisers push consumers toward informal channels. Chatbots fill a demand for quick explanations and basic planning help, yet they do not perform regulated suitability assessments or the due diligence expected from authorised advisers.

Industry representatives warn that reliance on these tools can create gaps in protection. Without regulated oversight, users may receive generic guidance that overlooks individual circumstances, liquidity constraints or tax implications. For regulators, those gaps raise concerns about consumer harm and market integrity.

Marco Santini, a former Deutsche Bank analyst who now focuses on fintech, says these trends echo lessons from the 2008 crisis. “In my Deutsche Bank experience, weak oversight and unchecked products amplified risk,” he said. “The current mix of innovation and limited regulation requires careful calibration of compliance, transparency and consumer safeguards.”

Reporting in subsequent sections will examine how the FCA’s targeted support proposals seek to balance affordability and protection, how fintech firms are responding, and what practical steps advisers and platforms can take to mitigate risks.

Younger users increasingly rely on AI for everyday financial decisions

Recent surveys indicate a clear generational shift in how people handle basic money tasks. A Finder study found that 65% of Gen Z (18–28) and 61% of millennials (29–44) use AI tools for personal finance. Across the UK, about 40% of adults consult unregulated AI assistants such as ChatGPT, Google Gemini or Microsoft Copilot when making financial choices.

Cost and convenience are the main drivers. Many consumers turn to AI because human advisers increasingly focus on larger portfolios, leaving smaller investors to seek affordable alternatives. Across the industry, service models have shifted toward higher-margin clients as compliance costs and margin pressures have risen.

Rapid adoption creates new operational and consumer-protection questions. Widespread use of unregulated assistants raises concerns about accuracy, liability and suitability for complex products. Firms and platforms must balance accessibility with robust due diligence.

From a regulatory standpoint, the trend tests existing frameworks and supervision tools. Financial firms face tensions between innovation and the need for disclosure, auditability and clear red lines on automated advice. Lessons from the 2008 crisis underscore the importance of transparency in risk allocation and client protection.

Advisers and digital platforms can mitigate risks by enhancing disclosure, retaining human oversight for high-risk cases and deploying audit logs for AI outputs. Market observers expect firms to publish compliance frameworks and usage metrics as evidence of responsible deployment.
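To make the audit-log idea concrete for platform operators, the sketch below shows one way a firm might record each chatbot output for later review. This is an illustrative Python sketch only: the field names, the hashing of queries and the whole schema are assumptions chosen for clarity, not any FCA-mandated format.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_ai_output(model_version: str, user_query: str, output: str,
                  high_risk: bool) -> dict:
    """Record one chatbot response as an audit-log entry.

    Hypothetical schema for illustration; a real deployment would follow
    the firm's own compliance policy.
    """
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the query so the log is auditable without storing
        # personal data verbatim.
        "query_hash": hashlib.sha256(user_query.encode()).hexdigest(),
        "output": output,
        # Flag entries that should be routed to human oversight.
        "needs_human_review": high_risk,
    }

entry = log_ai_output("v1.2",
                      "Should I move my pension into one stock?",
                      "Concentrated positions increase risk...",
                      high_risk=True)
print(json.dumps(entry, indent=2))
```

Keeping only a hash of the user's query is one possible design choice here: it lets auditors match a log entry to a complaint without the log itself becoming a store of personal financial data.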

Data from asset manager Schroders highlights the exclusion driving demand: over six years the share of advisers accepting clients with under £50,000 in investable assets fell from 52% to 25%, while the proportion serving only those with £200,000 or more rose from 11% to 30%. For people priced out of traditional advice, AI often appears the only accessible option.

Strengths and limitations of chatbot-driven guidance

AI assistants can explain financial concepts, model scenarios and propose basic asset-allocation frameworks. In controlled tests, chatbots delivered sensible high-level principles such as assessing risk tolerance, diversifying holdings and using tax-efficient wrappers like ISAs. They also generated starter portfolios and suggested platforms that match simple investor profiles.

Where chatbots perform well

Chatbots excel at scaling basic education. They can convert technical terms into plain language within seconds, and that rapid clarification is what newer investors value most. Clear explanations reduce entry barriers.

Chatbots also provide consistent, low-cost access. Automated guidance can reach thousands of users at marginal cost, which matters when access and inclusion are policy goals.

Key limitations and risks

Chatbots struggle with personalisation beyond surface-level metrics. They lack the contextual judgement a human adviser brings to complex cases such as concentrated stock positions, illiquid assets or multi-generational wealth. The models may understate execution costs, tax frictions or behavioural biases that affect real-world outcomes.

Data quality and model transparency remain concerns. Poor inputs produce poor recommendations. Firms must show due diligence on training data, backtesting and error rates. From a regulatory standpoint, that transparency will determine whether chatbots are treated as tools or as regulated advice.

There are also liability and compliance gaps. If an automated recommendation causes loss, accountability is unclear when platforms, model vendors and users all play a role. Market participants expect clearer reporting on usage, error rates and remediation procedures.

Implications for younger investors

For Gen Z investors priced out of human advice, AI fills an important gap. Yet the trade-off is simpler guidance in place of tailored planning. Scaling advice without robust oversight can amplify harm.

Regulators and firms must therefore balance access with protection. Firms that publish compliance frameworks and usage metrics will likely shape market standards. The expectation is for measurable disclosures on model performance, client outcomes and remediation pathways.

The next milestone to watch is whether major platforms adopt standardized reporting on chatbot accuracy and client suitability. Those metrics will drive whether automated guidance becomes a credible complement to traditional financial advice.

Regulatory response: targeted support and consumer protection

Regulators are moving to close gaps that could leave retail investors exposed.

Authorities are focusing on three priorities: accuracy of inputs, transparency of model limitations and clear liability channels. Flawed data feeds and opaque training sets can produce plausible but unsafe outputs, and even small model biases can compound into large portfolio distortions.

Practical consumer safeguards under consideration

Regulators and consumer bodies are proposing measures to force clearer disclosures about an AI tool’s data vintage, typical error rates and scenarios where human advice remains necessary. Firms may be required to publish usage metrics and post-deployment monitoring results similar to the compliance dashboards asset managers already use.

Proposals include mandatory suitability checks before a recommendation is delivered, explicit cost and tax-impact summaries for retail investors and limits on automated portfolio concentration. Concentrated exposures increase tail risk and raise an investor’s effective spread and liquidity costs.

Liability and redress

Policy makers are debating how to assign liability when an AI-driven suggestion proves inappropriate. Current consumer protection frameworks assume a regulated adviser sits behind recommendations. Automated tools break that assumption and create a regulatory blind spot.

Suggested remedies range from minimum guaranteed oversight by a licensed adviser to compulsory compensation mechanisms funded by firms offering automated guidance. From a compliance standpoint, firms will need stronger due diligence on third-party models and clearer audit trails to demonstrate safe deployment.

Industry reaction and next steps

Firms are preparing for more stringent rules by tightening model governance and increasing transparency in product marketing. Operationalising these controls adds cost and complexity, but it also reduces the risk of reputational and regulatory penalties.

Expect regulators to consult industry and publish draft standards that require demonstrable evidence of consumer benefit, robust monitoring and effective redress channels. The forthcoming guidance will determine whether automated advice scales without repeating past market failures from the 2008 crisis.

The Financial Conduct Authority is proposing a new middle option between costly one-to-one advice and generic leaflets. The reforms would introduce targeted support: standardised, scenario-based guidance tailored to common customer situations such as decumulation or windfalls. Regulators say the model aims to be more prescriptive than educational materials while remaining cheaper than full suitability assessments.

From a regulatory standpoint, the change seeks to widen access to regulated guidance and to curb reliance on unregulated AI outputs. The FCA argues the approach could enable firms to deliver more useful help at lower cost, potentially improving decision-making for millions who would not otherwise pay for personalised advice.

Industry perspectives

The gap is a familiar one: firms need a standard playbook that still allows for individual circumstances. Industry figures broadly welcome a clearer classification that lets firms scale guidance without crossing into advice where suitability obligations apply.

Those working in the sector note practical trade-offs. Standardisation reduces delivery costs and can broaden availability, but it narrows the scope firms can cover without increased compliance risk. Firms will measure take-up, error rates and complaint volumes to judge whether the model reduces consumer harm.

From a regulatory standpoint, the details will matter. Rules on disclosure, consumer eligibility and permissible use cases will define how prescriptive firms can be. Any allowance for algorithmic tools will carry due diligence and governance requirements to limit reliance on unregulated outputs.

The forthcoming FCA guidance will determine whether firms can scale automated and semi-automated services while maintaining consumer protection and clear boundaries between guidance and full advice.

Practical guidance for consumers

Consumers face a choice between newly proposed targeted support and traditional one-to-one advice. The regulator’s option aims to sit between generic information and full advisory services. Firms should make those differences explicit at the first contact.

Clarity about scope reduces consumer harm. Firms must state plainly whether a service offers personalised recommendations or only scenario-based guidance. That disclosure should appear before any data entry or payment.

Technology can cut delivery costs and broaden access, yet many consumers undervalue the reassurance that comes from a professional’s tailored questions. Firms should therefore combine automated interfaces with easy access to human review for complex or high-stakes decisions.

Error rates and misinterpretation risks rise when consumers rely solely on free tools with limited scope. Providers should therefore publish simple metrics on accuracy, typical outcomes and escalation pathways so users can compare options.

Operationally, firms should adopt a clear consumer journey. Start with a plain-language scope statement, follow with an interactive checklist that flags complexity, and offer an immediate route to a qualified adviser where needed. This preserves consumer protection while allowing firms to scale semi-automated services.
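The triage step in that journey can be sketched in a few lines. The Python below is a hypothetical illustration: the complexity flags and routing labels are assumptions invented for clarity, not regulatory criteria or any firm's actual eligibility rules.

```python
def triage(answers: dict) -> str:
    """Route a consumer after the interactive checklist.

    `answers` maps hypothetical checklist flags to booleans; any flagged
    complexity sends the consumer straight to a qualified adviser.
    """
    complexity_flags = [
        answers.get("has_defined_benefit_pension", False),
        answers.get("illiquid_assets", False),
        answers.get("needs_tax_planning", False),
    ]
    if any(complexity_flags):
        # Immediate route to a qualified human adviser.
        return "refer_to_adviser"
    # Otherwise the standardised, scenario-based guidance applies.
    return "scenario_guidance"

print(triage({"illiquid_assets": True}))   # refer_to_adviser
print(triage({}))                          # scenario_guidance
```

The point of the sketch is the ordering the article describes: the scope statement and checklist come first, and escalation to a human is the default whenever any complexity flag fires, rather than an opt-in afterthought.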

Regulators will focus on boundaries between guidance and advice and on firms’ due diligence processes. Expect supervisory scrutiny of firms’ disclosures and escalation procedures as the new option is implemented.

Using AI as a research assistant, not a substitute

Treat AI guidance as a starting point, not a final arbiter. Use chatbot outputs to learn terminology, sketch scenarios and flag topics to discuss with a regulated adviser.

Due diligence matters. Prioritise source verification, check the timeliness of data and confirm assumptions with qualified professionals for complex areas such as retirement planning, tax-efficient wrappers and risk profiling. DIY research without regulated oversight can widen the spread of outcomes and expose consumers to unexpected liquidity or compliance gaps.

Layering professional input onto preliminary research improves outcomes. The combination of selective professional advice and careful self-directed research can bridge the gap until wider, low-cost regulated solutions become more available.

Firms, for their part, should strengthen disclosure clarity and escalation pathways. The practical test will be whether disclosures translate into measurable improvements in consumer outcomes and whether firms can demonstrate robust audit trails and escalation metrics.

Regulators will look for documented due diligence, evidence of timely data refreshes and transparent limits to automated guidance. Firms that embed those controls should reduce supervisory friction and improve consumer trust.

Consumers using AI should keep one principle in mind: digital tools can increase access, but they do not yet replace tailored, accountable regulated advice. Expect a hybrid landscape where digital assistants and regulated products coexist, with ongoing supervisory review shaping how the market evolves.

