Leveraging Artificial Intelligence to Enhance and Understand Regulatory Compliance in the Investment Management Industry 

The Challenge 

Regulatory compliance in investment management is increasingly complex. Since the Global Financial Crisis, financial regulation has expanded significantly across jurisdictions, creating overlapping, evolving, and often ambiguous rule sets. Firms must interpret lengthy regulatory texts, translate them into operational controls, and ensure ongoing compliance while managing costs and risks. 

At the same time, Artificial Intelligence (AI) vendors are promoting automated compliance solutions, yet firms lack robust frameworks to evaluate whether these tools are reliable, auditable, and safe for deployment. A critical question emerges: 

Can AI meaningfully support regulatory rule extraction and compliance workflows without introducing new systemic risks? 

This project addressed that challenge by investigating whether AI systems can extract structured, machine-readable compliance rules from regulatory texts and, crucially, how safely they can do so. 

The Project 

Led by Dr Barry Quinn (Queen’s University Belfast) in collaboration with Funds Axis Ltd, the project evaluated six AI architectures on a benchmark set of 36 European Fund Classification rules.

Instead of asking AI to simply summarise long regulatory documents, the project focused on something more practical: turning complex rules into clear, structured instructions that compliance teams can search, check, and use in their systems. This direction came directly from industry partners, who explained that compliance teams need rules they can track, audit, and apply consistently. 
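To make the idea of "structured instructions" concrete, here is a minimal, hypothetical sketch of how a prose rule might be encoded as a machine-checkable record. The rule text, field names, and identifier are invented for illustration and are not drawn from the project's actual schema.

```python
from dataclasses import dataclass

# Hypothetical illustration: one way a prose rule such as
# "a fund may not invest more than 10% of assets in a single issuer"
# could be encoded as a structured, searchable, machine-checkable record.
@dataclass(frozen=True)
class ComplianceRule:
    rule_id: str      # stable identifier for audit trails
    subject: str      # what the rule constrains
    metric: str       # the quantity being limited
    operator: str     # comparison, e.g. "<="
    threshold: float  # limit as a fraction of assets

    def check(self, observed: float) -> bool:
        """Return True if the observed value satisfies the rule."""
        if self.operator == "<=":
            return observed <= self.threshold
        raise ValueError(f"unsupported operator: {self.operator}")

rule = ComplianceRule("UCITS-5-10", "fund", "single_issuer_exposure", "<=", 0.10)
print(rule.check(0.08))  # True: a holding at 8% of assets passes
print(rule.check(0.12))  # False: a holding at 12% breaches the limit
```

Because every field is explicit, such records can be indexed, queried, and diffed when regulations change, which is precisely what free-text summaries cannot offer.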

To support this, the team built: 

  • A structured testing system to compare how well different AI models perform 
  • A carefully prepared set of example regulatory rules to act as a benchmark 
  • A framework to ensure AI tools are used safely and responsibly 
  • A model to decide which tasks can be automated, which need human review, and which should stay fully human-led 

The key question was: “Can it do this in a way that is accurate, dependable, and safe to use in a regulated financial environment?”

Key Findings  

AI is good at structure, but struggles with “vocabulary drift”

The researchers found that AI is quite good at turning complex legal text into organised maps called rule graphs. However, AI often struggles with using the exact same words consistently. This “vocabulary drift” is a major problem because, if the AI uses slightly different terms from existing systems, the rules it creates can’t be easily used or searched. 
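A simple sketch of why vocabulary drift matters: if a firm's systems key on a controlled vocabulary, any near-synonym the model invents becomes unsearchable. The term lists below are invented for illustration, not taken from the project's benchmark.

```python
# Hypothetical vocabulary-drift check: compare the terms an AI model emits
# in a rule graph against a firm's controlled vocabulary. Any term outside
# the vocabulary will not match existing lookups, even if it is a synonym.
CONTROLLED_VOCAB = {"single_issuer_exposure", "net_asset_value", "ucits_eligible"}

def vocabulary_drift(extracted_terms: set[str]) -> set[str]:
    """Return terms the model used that are not in the controlled vocabulary."""
    return extracted_terms - CONTROLLED_VOCAB

model_output = {"single_issuer_exposure", "nav", "eligible_ucits"}
drifted = vocabulary_drift(model_output)
print(sorted(drifted))  # ['eligible_ucits', 'nav']: near-synonyms that break lookups
```

In practice a real pipeline would map such drifted terms back to the canonical vocabulary or flag them for human review rather than silently accepting them.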

AI can be “dangerously confident” 

One of the most important findings was that AI is not always a good judge of its own work. In many cases, the AI was most confident when it was most wrong. This means you cannot simply trust an AI just because it “thinks” it has the right answer, as this could lead to serious automated errors in financial compliance. 
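The failure mode can be made concrete with a toy diagnostic: bin predictions by the model's stated confidence and compare accuracy across bins. The numbers below are invented to illustrate inverse calibration, where high-confidence answers are less accurate than low-confidence ones.

```python
# Illustrative sketch (invented numbers): group model predictions by stated
# confidence and compare against actual accuracy. If the high-confidence band
# is *less* accurate than the low-confidence band, the model is inversely
# calibrated and confidence cannot be used as an automation gate.
predictions = [  # (stated confidence, was the extraction correct?)
    (0.95, False), (0.92, False), (0.90, True),
    (0.60, True), (0.55, True), (0.50, False),
]

def accuracy_by_band(preds, cutoff=0.8):
    """Return (accuracy above cutoff, accuracy below cutoff)."""
    high = [ok for conf, ok in preds if conf >= cutoff]
    low = [ok for conf, ok in preds if conf < cutoff]
    return sum(high) / len(high), sum(low) / len(low)

high_acc, low_acc = accuracy_by_band(predictions)
print(f"high-confidence accuracy: {high_acc:.2f}")  # 0.33
print(f"low-confidence accuracy:  {low_acc:.2f}")   # 0.67
```

A well-calibrated model would show the opposite ordering, which is why calibration diagnostics like this belong in any pre-deployment evaluation.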

The “Traffic Light” approach to automation 

To keep things safe, the project suggests a graduated approach to using AI, rather than letting it do everything at once: 

  • Green (Full Automation): Only for very simple, clear-cut mathematical rules. 
  • Amber (AI Drafts): For rules that are a bit more complex, the AI writes a draft that a human then checks. 
  • Red (Human-Led): For very complicated or “grey area” rules, humans should still lead the process from the start. 
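The three-tier policy above can be sketched as a small routing function. The feature names and decision logic here are a simplified assumption for illustration, not the project's actual routing criteria.

```python
from enum import Enum

class Route(Enum):
    GREEN = "full automation"
    AMBER = "AI draft, human review"
    RED = "human-led"

# Hypothetical routing policy sketching the graduated automation model.
# Real criteria would be richer (e.g. calibration scores, rule history).
def route_rule(is_numeric: bool, is_ambiguous: bool) -> Route:
    if is_ambiguous:
        return Route.RED    # grey-area rules stay human-led from the start
    if is_numeric:
        return Route.GREEN  # simple, clear-cut quantitative limits
    return Route.AMBER      # everything else: AI drafts, humans check

print(route_rule(is_numeric=True, is_ambiguous=False).value)   # full automation
print(route_rule(is_numeric=False, is_ambiguous=True).value)   # human-led
```

Encoding the policy as code also makes the routing auditable: every rule's tier assignment can be logged and reviewed.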

Practical tools for the industry 

The project created a toolkit for companies to test how reliable different AI tools actually are. This helps firms see past marketing claims and understand where an AI tool might fail before using it in real-world regulation. 

Collaboration 

The collaboration with Funds Axis Ltd was central to the project’s success. Funds Axis brought operational knowledge of fund classification workflows, compliance bottlenecks, and client requirements. Early discussions led to a critical reframing of the task: rather than optimising for natural-language answers, the system needed to generate structured, queryable rule graphs.

This industry grounding ensured that evaluation criteria reflected deployment realities (calibration, auditability, and integration) rather than abstract accuracy metrics.

A key lesson from the collaboration was the importance of investing time upfront to understand the partner’s workflow constraints. Academic framing often begins at a theoretical level; industry partners sharpen the focus toward operational relevance. 

The Impact 

For the industry, the project provides a rigorous evidence base for evaluating AI-assisted compliance tools. Rather than relying on vendor claims, Funds Axis now has a structured benchmark dataset, evaluation framework, and calibration diagnostics that can be used to assess AI systems more objectively. The graduated automation design offers a practical governance model for the responsible adoption of AI in regulated environments. 

For academia, the project reoriented research toward decision-theoretic evaluation of AI systems. It demonstrated that metrics such as calibration, vocabulary alignment, and auditability are critical for safe deployment, yet are often overlooked in traditional machine learning benchmarks. The findings reinforce the need for stronger statistical foundations in AI governance, particularly in uncertainty quantification and cost-sensitive decision-making. 

At a broader level, the project has begun influencing regulatory thinking. Engagement with the Financial Conduct Authority is planned to discuss the implications of AI reliability and inverse calibration for compliance oversight.

The work also contributes to national conversations on AI governance in financial services. 

Going Forward 

Discussions are underway with Funds Axis to operationalise the graduated automation model within their platform and to evaluate real-world routing distributions and review times.

Future funding applications aim to: 

  • Expand the benchmark dataset to include FCA and SEC rules 
  • Pilot AI-assisted compliance workflows in production settings 
  • Develop authoritative regulatory rule encodings in collaboration with regulators 

The long-term ambition is not full automation of compliance but safer, calibrated, and auditable AI support systems that enhance regulatory adherence without compromising trust. 

UKFin+ Role 

UKFin+ feasibility funding enabled the project to test a high-risk, high-impact idea in a structured and collaborative way.

By supporting early-stage experimentation grounded in industry reality, UKFin+ helped surface a critical insight about AI reliability that might otherwise have remained undetected. 

The project demonstrates how targeted academic–industry collaboration can transform a technical feasibility study into a meaningful contribution to AI governance and regulatory innovation. 

Completed Project Video

Following the completion of the project, Dr Barry Quinn has shared his findings and his experience of collaborating with their non-HEI partner.


Original Project Summary

Our goal is to create an AI system that simplifies and improves regulatory compliance in global investment management whilst maintaining the highest standards of responsible and trustworthy AI development. We will use advanced AI techniques to convert complex regulatory texts into clear, consistent rules that reflect current international standards. Our system will employ probabilistic methods to identify inconsistencies in regulations and generate accurate compliance rules based on reliable data.

We prioritise the development of AI models with enhanced reasoning capabilities to ensure the coherent generation of regulatory content. By incorporating economic principles and rigorous testing procedures, we aim to create a reliable tool that maintains regulatory adherence, enhances risk evaluation, and operates ethically and responsibly. Throughout the project, we will emphasise human oversight, accountability, and transparency in the AI-generated content, establishing clear protocols for human intervention and decision-making, and implementing methods to enhance interpretability and enable stakeholders to understand and audit the decision-making process behind the generated content. 

Meet The Team

Dr Barry Quinn CStat

Queen’s University Belfast

Senior Lecturer of Finance, Technology, and Data Science, Queen’s Business School 

Jesus Martinez Del Rincon     

Queen’s University Belfast

Senior Lecturer of Computer Science, The Centre for Secure Information Technologies (CSIT) at Queen’s University Belfast 

Partner Organisation

Funds Axis

Research Showcase 2025 Video

Presented by Dr Barry Quinn – Leveraging Artificial Intelligence to Enhance and Understand Regulatory Compliance in the Investment Management Industry.