Introduction
During our recent event, Douglas Dick and India Bullock, leading experts in AI governance and risk, presented to a room of Heads of Audit on the evolving role of AI in internal audit. The session covered the fundamental differences between traditional and generative AI, the risks each introduces, and the governance frameworks needed to manage those risks effectively. This summary provides a more detailed breakdown of the key topics covered.
Defining AI in Audit
One of the key challenges organisations face is how AI itself is defined. Leaders across industries define AI differently, leading to inconsistent strategic implementation. To realise AI's value fully, it is essential to establish a clear definition within the organisation and to differentiate between traditional and generative AI.
- Traditional AI learns from static data, producing consistent outputs based on past patterns.
- Generative AI produces probabilistic outputs, meaning responses to the same input can vary between runs and shift as models are retrained, introducing complexities in governance and compliance.
Many organisations struggle with AI definitions, affecting their ability to implement effective controls and strategic oversight.
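The distinction above can be made concrete with a toy sketch (not a real model; the threshold, phrasings, and function names are illustrative assumptions): a traditional model applies a fixed learned rule, so identical inputs always yield identical outputs, whereas a generative model samples from a distribution, so its outputs can vary from run to run.

```python
import random

# Toy "traditional" model: a fixed learned rule, so the same input
# always produces the same output.
def traditional_classify(transaction_amount):
    return "flag" if transaction_amount > 10_000 else "clear"

# Toy "generative" model: sampling introduces run-to-run variation
# unless decoding is made deterministic (temperature = 0).
def generative_answer(prompt, temperature=0.8):
    phrasings = [
        f"{prompt}: see policy section 4.",
        f"Regarding '{prompt}', policy section 4 applies.",
        f"Policy section 4 covers {prompt}.",
    ]
    if temperature == 0:
        return phrasings[0]          # greedy decoding: deterministic
    return random.choice(phrasings)  # sampling: output may differ each call
```

The governance implication is that controls built for the first kind of system (test once, deploy, rely on stable behaviour) do not automatically transfer to the second.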
Key Considerations
- How does your organisation define AI?
- Does your strategy account for both traditional and generative AI models?
- Are governance frameworks designed to accommodate AI’s evolving nature?
Challenges in AI Governance and Risk Management
Many organisations approach AI risk management by applying traditional regulatory frameworks, such as financial risk assessments. However, AI introduces new challenges, including:
- Continuous Model Evolution – Unlike static models, generative AI evolves with new data, requiring ongoing monitoring to ensure outputs remain relevant and accurate.
- Data Integrity and Source Verification – Ensuring that AI-generated outputs are based on accurate and complete data is critical. Inconsistent or biased data can lead to flawed decision-making.
- Regulatory Compliance & Auditing Challenges – AI risks extend beyond financial impacts to ethical considerations, bias, and data privacy, which traditional frameworks may not fully capture.
- Explainability & Transparency – AI models must be transparent in their decision-making process. Internal audit teams must evaluate whether these decisions align with regulatory and ethical standards.
- Bias & Fairness – AI models trained on historical data may reinforce past biases. Addressing bias requires rigorous testing and monitoring of AI decisions.
- Cybersecurity & AI – AI systems can be vulnerable to cyber threats, necessitating stronger security protocols to prevent model manipulation.
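The "continuous model evolution" point above is often operationalised with drift metrics. A minimal sketch, using the population stability index (PSI), a common drift statistic, compares the distribution of model outputs in a recent window against a baseline; the 0.2 threshold is a widely cited rule of thumb, not a standard:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline and a recent score distribution.
    A PSI above roughly 0.2 is a common rule-of-thumb signal of drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def bucket_fracs(scores):
        counts = [0] * bins
        for s in scores:
            i = min(int((s - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty buckets
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = bucket_fracs(expected), bucket_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Illustrative data: model confidence scores at deployment vs today.
baseline = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.6, 0.7]
recent   = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
psi = population_stability_index(baseline, recent)
```

Running a check like this on a schedule, and escalating when the index breaches an agreed threshold, is one way internal audit can evidence the "ongoing monitoring" the list calls for.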
Practical Example: AI in Banking
The session included a real-world example of an AI-powered customer service chatbot. This chatbot analysed bank policies and provided automated responses to customer inquiries. While AI enhanced efficiency, it also introduced several risks:
- Data Quality Issues – Ensuring that the chatbot references accurate and up-to-date policy documentation.
- Auditability – Tracking changes in AI-generated responses over time and ensuring compliance with internal and external regulations.
- Brand Consistency & Customer Trust – Maintaining alignment with brand messaging and regulatory requirements.
- Hallucinations & Incorrect Outputs – Ensuring AI-generated responses are factually correct and do not mislead customers.
Key Audit Considerations
- What is the source of data for AI-generated responses?
- How is the chatbot’s accuracy monitored over time?
- Are there controls in place to prevent misleading or non-compliant responses?
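The audit questions above can be partly answered by design. A minimal sketch of one such control (the register, function names, and hashing scheme are illustrative assumptions, not a prescribed pattern): require every chatbot answer to cite a policy from an approved register, and log each exchange with the policy version and a tamper-evident digest so auditors can later reconstruct what was said and which source it relied on.

```python
import datetime
import hashlib

# Hypothetical register of approved policy documents and their versions.
POLICY_REGISTER = {
    "POL-007": {"title": "Overdraft fees", "version": "2024-03"},
    "POL-012": {"title": "Complaints handling", "version": "2023-11"},
}

audit_log = []

def record_response(question, answer, cited_policy_id):
    """Reject answers that cite no approved policy; log accepted ones."""
    if cited_policy_id not in POLICY_REGISTER:
        raise ValueError(f"Response cites unknown policy {cited_policy_id!r}")
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "policy_id": cited_policy_id,
        "policy_version": POLICY_REGISTER[cited_policy_id]["version"],
    }
    # Digest makes after-the-fact tampering with a log entry detectable.
    entry["digest"] = hashlib.sha256(
        repr(sorted(entry.items())).encode()
    ).hexdigest()
    audit_log.append(entry)
    return entry

entry = record_response(
    "What are overdraft fees?", "Fees are set out in policy POL-007.", "POL-007"
)
```

Recording the policy version alongside each answer addresses the auditability point directly: when a policy document changes, the log shows which responses were generated against the old version.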
The Role of Internal Audit in AI Oversight
AI adoption requires a structured governance framework that includes:
- Ownership & Accountability – Clear designation of who is responsible for AI oversight within the organisation, ensuring accountability.
- Risk-Based Approach – Applying AI risk assessments based on potential impact, complexity, and likelihood of adverse outcomes.
- Monitoring & Testing – Continuous validation of AI models to prevent drift (unintended shifts in AI decision-making over time).
- Integration with Business Strategy – Aligning AI adoption with organisational goals and regulatory expectations.
- Model Explainability & Interpretability – Ensuring that AI decision-making processes can be explained in a way that aligns with audit and regulatory requirements.
- AI Use Case Prioritisation – Organisations must determine which AI applications are most beneficial and align with strategic objectives.
- Regulatory Readiness – Internal audit teams must stay ahead of evolving AI regulations and assess compliance measures proactively.
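The risk-based approach above is often implemented as a simple tiering exercise. A sketch of one possible scheme (the scales, thresholds, and tier names are assumptions, not a standard): score each AI use case on impact and likelihood, and map the product to an audit tier that drives review frequency.

```python
def risk_tier(impact, likelihood):
    """Map impact x likelihood (each on a 1-5 scale) to an audit tier."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must be on a 1-5 scale")
    score = impact * likelihood
    if score >= 15:
        return "high"    # e.g. customer-facing generative AI: continuous monitoring
    if score >= 8:
        return "medium"  # periodic validation and sample-based testing
    return "low"         # standard change-management controls
```

A customer-facing chatbot might score `risk_tier(5, 4)` and land in the high tier, while an internal document-summarisation tool might sit comfortably in the low tier; the value of the exercise is that the tier, not ad hoc judgement, determines the depth of oversight.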
Many organisations are developing AI Centres of Excellence within internal audit teams to ensure AI initiatives align with governance and risk management best practices.
Looking Ahead: AI Regulations and Compliance
The regulatory landscape around AI is evolving rapidly. Some jurisdictions are adopting more prescriptive regulation, such as the EU with its AI Act, which requires transparency in AI model development and risk assessment. In contrast, the UK is taking a principles-based approach, focusing on responsible AI use rather than rigid compliance rules.
Key Regulatory Trends:
- EU AI Act – Focused on risk classification and transparency in AI decision-making.
- UK Principles-Based Approach – Encourages responsible AI innovation while keeping regulation proportionate.
- US & Global AI Standards – Varying regulatory approaches depending on jurisdiction and sector.
- Ongoing Monitoring – Regulatory expectations will continue to evolve, requiring internal audit teams to stay agile and informed.
As AI adoption accelerates, internal audit functions must proactively implement governance frameworks, engage with business leaders, and ensure AI deployments align with corporate risk appetite and ethical standards.
Conclusion
AI presents both opportunities and risks for internal audit. While it can enhance efficiency, improve risk assessments, and streamline processes, it also introduces new challenges that require ongoing monitoring and governance.
To navigate these complexities, internal audit teams must:
- Develop adaptive risk frameworks that account for AI’s dynamic nature.
- Stay informed about regulatory changes and emerging best practices.
- Engage with leadership to ensure AI adoption aligns with corporate strategy.
- Implement proactive monitoring tools to track AI performance and risk indicators.