Trust Over Fear: Overcoming AI Anxiety in Asset Finance

How leaders can navigate the emotional resistance to automation and build teams ready for an AI‑enabled future

Across the UK asset finance ecosystem, the biggest barrier to AI adoption isn’t capability; it’s confidence. This article examines how leaders can turn apprehension into acceptance by pairing credible use cases with clear governance and humane workforce design. The payoff is not just faster funding cycles; it’s a workforce that trusts the tools shaping tomorrow’s decisions.

Introduction: the promise, and provocation, of AI

Across the asset finance industry, AI has become both a promise and a provocation. While leaders see opportunity in automation, analytics, and intelligent decisioning, many teams feel the quiet unease of replacement anxiety. It’s not the technology that’s stalling progress, but rather the human response to it. True digital transformation will depend less on system architecture and more on emotional architecture: trust, communication, and the willingness to redefine roles in an AI‑augmented world.

“AI has become a much‑misunderstood term – often used as a catch‑all for any tech‑enabled change or automation. Its capabilities are undeniably impressive, and the pace of progress is extraordinary. But with that comes natural hesitation. Like any major shift, it presents as much of an emotional and cultural challenge as it does technical. The key is not to fear replacement, but to redefine relevance. By engaging with realistic use cases and focusing on outcomes rather than legacy processes, teams can move from feeling threatened by AI to feeling empowered by it. If AI can help you reach the same outcome in a more efficient way, the real question becomes: how will you reinvest that saved time to create even greater value?”

– Jim Higginbotham, CEO, NACFB

1) Where the UK really is on AI (and why that matters for asset finance)

Two things can be true at once: AI uptake across UK businesses is still early, and UK financial services are further ahead than most sectors. Boards are asking for productivity gains while teams read headlines about job risk; the gap between ambition and anxiety is the real programme risk.

For asset finance, that means starting where the evidence is strongest, setting expectations for human‑in‑the‑loop decisioning, and showing the workforce how success will be measured and shared.

2) The psychology of social rejection: why good AI still gets blocked

Even when algorithms outperform humans, people often refuse to use them after seeing a single error – classic algorithm aversion. The remedy isn’t just better models; it’s agency, transparency and sensible guardrails. Let staff edit, escalate and override; publish simple, comprehensible model summaries; and share examples where human judgement prevails.

A parallel dynamic is ‘shadow AI’: staff adopt tools informally when trust and governance are unclear. The fix is sanctioned tools, role‑based training, and clear guidance, not more prohibition.

3) What credible adoption looks like in asset finance & auto finance

A) Funding & documentation (IDP + workflow): Tackle not‑in‑good‑order (NIGO) defect rates, time‑to‑fund and rework hours. Intelligent Document Processing (IDP) can classify documents, extract fields and push clean files into the loan origination system (LOS); this is often the lowest‑risk, fastest‑evidence starting point.

B) Underwriting augmentation (not wholesale replacement): LLMs summarise files, draft credit memos and propose actions; humans set cut‑offs and approve exceptions. If a model contributes to a decline, ensure reasons are specific and explainable; that discipline serves both internal trust and regulatory outcomes.

C) Servicing & collections triage: Conversational AI handles simple queries and flags possible vulnerability; human agents manage complex negotiations. Design for assisted work, not bot‑only channels.

D) Risk, fraud and AML pre‑checks: Combine bureau data, bank feeds and explainable features to surface high‑quality alerts with audit trails.

4) Two anonymised management stories (UK examples)

Case 1 – “Orion Auto Finance” (mid‑tier UK auto)

  • Problem: 8‑hour average time‑to‑fund; rising broker complaints.
  • Intervention: IDP for stips; checklist bots; six‑week shadow mode before any auto‑action.
  • Outcome (9 months): median time‑to‑fund <1 hour on clean files; NIGO −35%. Four roles consolidated; eight redeployed into dealer success; four exits managed with severance and training vouchers.
  • Human moment: A simple model card and weekly drop‑in clinic for staff turned scepticism into suggestions.

Case 2 – “NorthRiver Equipment Finance” (independent UK lender)

  • Problem: Inconsistent credit‑memo quality; backlogs.
  • Intervention: LLM drafts memos; factor‑level ‘why’ for declines; four‑eyes remains mandatory.
  • Outcome (6–9 months): underwriter throughput +60%; straight‑through decisions at ~40% for small‑ticket; attrition absorbed workload.
  • Human moment: Monthly ‘error show‑and‑tell’ normalised model fallibility—reducing algorithm aversion.

“The fear around AI is completely understandable – it’s not just the natural human resistance to change, but the relentless drumbeat of headlines proclaiming that AI will make entire professions obsolete. Every day brings another breathless announcement about incredible AI capabilities, creating an atmosphere of existential anxiety that’s almost impossible to escape.

But in my experience, the reality is quite different from the headlines: yes, AI can replace or even eliminate lots of tasks, but I’ve yet to see any AI that can even come close to doing an entire role. The nature of roles might change – less manual data entry, fewer repetitive document reviews – but the human in that role remains irreplaceable: human relationships, contextual judgment, and the ability to navigate ambiguity are fundamental to how business actually works.

The organisations that succeed with AI adoption will be those that are transparent with their teams about their plans – explaining to teams where and how AI will be deployed, how it will change their day-to-day work, and ensuring they have the new skills required. And remember that “AI” is quite an intangible concept to most people, so I strongly recommend taking a ‘show, don’t tell’ approach; once teams see AI in action as their assistant rather than their replacement, you’ll find the conversation shifts from ‘will I lose my job?’ to ‘what else could we improve with this?’.”

– Richard Huston, MD, VAMOS

5) Named case references (outside the UK, but instructive)

  • All In Credit Union (US): a multi‑year journey pairing guardrails with automation, now reporting ~70% automated consumer lending decisions across products. The lesson isn’t the number; it’s that governance and clear reason codes came first.
  • Santander US Auto: modernised credit‑risk workflows on a single analytics platform, underscoring that governance and tooling, not just models, drive speed and scale.

6) Regulation & trust: build for the rules you actually face

  • FCA Consumer Duty: outcomes and fairness first. Evidence how automation improves understanding and outcomes—not just speed. Boards remain accountable for ongoing monitoring and annual assessment.
  • UK GDPR (Article 22): be cautious with solely automated decisions that have legal or similar effects. Ensure lawful basis, meaningful human involvement, and a visible path to human review.
  • EU AI Act (if relevant to your markets): credit scoring/creditworthiness systems are high‑risk, triggering obligations on data quality, documentation, human oversight and robustness.
  • Direction of travel: the FCA aims to enable safe adoption under existing rules—and is itself using AI as a smarter regulator. Leaders should expect guidance to evolve; document and monitor.

“Lending institutions increasingly are operating like technology companies, managing vast amounts of data. However, data silos, poor quality data, ever-changing regulations, and other challenges hinder their ability to turn data into actionable insights that drive decision-making and revenue growth.

A new data-driven business model – integrating cloud computing, machine learning, and AI – is reshaping the industry. Lending businesses have evolved from legacy monolithic systems to AI first platforms with customer centric features, poised to transform the financial landscape. This shift paves the way for a fully digital financial services ecosystem, where data, not money, drives operations.”

– Stuart Taylor, Senior Director, Oracle Financial Services

7) A one‑year people‑first blueprint (you can copy this)

Days 0–90 – Build confidence

  • Pick two low‑controversy use cases: IDP in funding; a policy/knowledge assistant for staff.
  • Run shadow mode for 4–6 weeks; publish weekly findings and error types.
  • Stand up an AI Working Group (Credit, Ops, Risk, Compliance, HR).
  • Draft your Responsible AI standard: model cards, monitoring, Article‑22 escalation, adverse‑action templates.
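The shadow‑mode step above is, at its core, a comparison exercise: log what the model would have done against what humans actually did, act on none of it, and publish the disagreement patterns weekly. A minimal sketch of that reporting loop, assuming a simple decision feed (field names and values are illustrative, not a real system’s schema):

```python
from collections import Counter

def shadow_mode_report(cases):
    """Compare model recommendations against human decisions without
    acting on model output; tally agreement and disagreement types."""
    total = len(cases)
    agree = sum(1 for c in cases if c["model"] == c["human"])
    errors = Counter(
        f'{c["model"]}->{c["human"]}' for c in cases if c["model"] != c["human"]
    )
    return {
        "cases": total,
        "agreement_rate": agree / total if total else 0.0,
        "disagreements_by_type": dict(errors),
    }

# One illustrative week of shadow-mode decisions (hypothetical data)
week = [
    {"model": "approve", "human": "approve"},
    {"model": "approve", "human": "refer"},
    {"model": "decline", "human": "decline"},
    {"model": "refer", "human": "approve"},
]
print(shadow_mode_report(week))
```

Publishing the `disagreements_by_type` breakdown each week, rather than a single accuracy figure, is what turns the exercise into a trust‑building conversation about where human judgement still differs.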

Months 4–6 – Wire in governance

  • Move from recommend → approve with thresholds; keep four‑eyes on edge cases.
  • Evidence Consumer Duty improvements for any AI‑touched journeys (understanding, support, outcomes).
  • Launch role‑based training: underwriters (policy + XAI), funding ops (IDP QA), frontline (AI‑assisted service scripts).

Months 7–12 – Scale & restructure humanely

  • Re-tier queues by complexity; target exceptions‑based underwriting in small‑ticket.
  • Publish a redeploy‑reskill‑hire‑exit plan by role family; measure internal fills into new roles (AI Product Owner, Model Risk).
  • Share a simple trust KPI set monthly: override rates, human‑review SLA, staff sentiment, customer complaints related to automation.

8) Evidence you can show the board (and your people)

Operational value: time‑to‑fund (median & p90), NIGO rate, straight‑through % with human override rate, approval‑lift at constant loss, cure/roll‑rate in collections.
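The headline operational metrics above are simple to compute from per‑deal records. A minimal sketch of median and p90 time‑to‑fund plus the NIGO rate, assuming each deal carries an hours‑to‑fund figure and a NIGO flag (field names and values are illustrative):

```python
import math
from statistics import median

def funding_metrics(deals):
    """Median and p90 (nearest-rank) time-to-fund, plus NIGO rate.
    Each deal: {"hours_to_fund": float, "nigo": bool}."""
    times = sorted(d["hours_to_fund"] for d in deals)
    p90 = times[math.ceil(0.9 * len(times)) - 1]  # nearest-rank percentile
    nigo_rate = sum(d["nigo"] for d in deals) / len(deals)
    return {"median_hours": median(times), "p90_hours": p90, "nigo_rate": nigo_rate}

# Illustrative batch of funded deals (hypothetical data)
batch = [
    {"hours_to_fund": 0.5, "nigo": False},
    {"hours_to_fund": 1.0, "nigo": False},
    {"hours_to_fund": 2.0, "nigo": True},
    {"hours_to_fund": 8.0, "nigo": False},
    {"hours_to_fund": 12.0, "nigo": True},
]
print(funding_metrics(batch))
```

Reporting both the median and p90 matters: automation often improves the median quickly while the p90 (the slow tail of exception cases) is where human workload, and staff anxiety, actually lives.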

Human outcomes: % roles reskilled/redeployed/hired/exited; training coverage; internal fills into new roles.

Governance: model stability/drift, factor‑level reasons on declines, Article‑22 reviews completed, Consumer Duty outcomes pack.

9) Talking about exits without breaking trust

If automation reduces manual workloads, some roles will change. The difference between resentment and respect usually comes down to three behaviours:

1) Name the work, not the person (which tasks are automated; which human tasks grow).

2) Offer agency (training, redeployment windows, fair exits).

3) Keep humans in the loop (visible escalation; publish examples where human judgement overrode the model).

10) The big objections you’ll hear, answered

“We can’t prove ROI yet.” In early deployments, ROI concentrates where the work changes (queue re‑tiering, exception design), not where tools are trialled. Start with NIGO and time‑to‑fund, then expand.

“The regulator isn’t clear.” UK rules already cover outcomes (Consumer Duty) and automated decisions (UK GDPR). Don’t wait for a bespoke AI rulebook—document, evidence, monitor.

“Staff will resist.” They’re often already using AI—sometimes covertly. Replace shadow AI with safe, sanctioned tools and role‑based training; measure trust like a KPI.

“Data is the New Gold and Data Quality is mission critical if you want to adopt AI successfully into your Lease or Loan Management System for automation. Almost everyone will adopt AI in some form to their operations over the next 24 months; however, the most successful will be those that create/demand the best data quality – without this then human oversight will be required. Until then I see most using AI to supplement human-led activities where AI provides insight and short-cuts to achieve the intended outcome. Whether or not you trust or fear AI it will, and already is, part of your everyday life, the key is to understand where to deploy AI for the best RoI AND where the data quality will give trust to the outcome.”

– Robert Taylor ACIB, FLF, FRSA, Managing Director, LTi UK

Conclusion: Resilient’s view

Start with people, not pilots.

The firms that win in 2026–27 won’t be those that rushed to deploy the most models—they’ll be the ones that invited their teams in early. When you include underwriters, funding ops, risk, compliance and front‑line colleagues in the evaluation and adoption process, three things happen:

  • Fear drops—because people can see and shape the change.
  • Value arrives faster—because you redesign roles and workflows together.
  • Trust grows—because explainability, Consumer Duty evidence and Article‑22 safeguards are baked into the journey, not bolted on at the end.

Resilient partners with lenders, lessors and vendors to do exactly that—build the talent, structure and leadership behaviours that make AI adoption credible, compliant and human‑centered.

“The people closest to your customers and processes know where the friction lives and what builds trust—involve them early and they become your strongest advocates, spotting use cases that drive operational effectiveness and financial growth. That’s the real competitive edge: not the models you deploy, but the culture you build—a workforce that’s curious, confident and capable of continuous improvement. Trust earned through inclusion de-risks the transformation and turns a one-time project into sustained competitive advantage.”

– Colin Tovey, Managing Director, Resilient Management Solutions


If you wish to take part in the ‘AI Adoption & Workforce Impact Survey – 2025’, please click here.

By taking part, you will be sent a copy of the Survey Results and a full report of the findings.

