
Auditing Artificial Intelligence Systems: A General Guide

This guide sets out the key considerations for auditing Artificial Intelligence (AI) systems and the skills the Internal Audit (IA) team needs to do so effectively.




1. Introduction

As our organisation increasingly incorporates Artificial Intelligence (AI) systems into operations to drive efficiency, innovation, and decision-making, it is crucial that our Internal Audit (IA) function evolves to provide assurance over these powerful technologies. AI introduces unique risks and complexities that require specific attention. This post outlines key factors management should consider regarding AI governance and risk management, and the corresponding skills our IA team needs to effectively audit these systems.


2. Why Audit AI?

AI systems, while beneficial, present significant risks, including:

* Data Bias and Fairness: Potential for biased data leading to discriminatory or unfair outcomes.

* Lack of Transparency: Difficulty in understanding how complex AI models (often called "black boxes") arrive at decisions.

* Data Privacy and Security: Handling vast amounts of potentially sensitive data increases exposure to breaches and regulatory non-compliance (e.g., GDPR).

* Model Accuracy and Reliability: Risk of model degradation over time or failure under unexpected conditions, leading to poor decisions or operational disruption.

* Ethical Concerns: Ensuring AI aligns with organisational values and ethical principles.

* Regulatory Compliance: Navigating the evolving landscape of AI-specific regulations and existing legal frameworks.

* Accountability: Difficulty in assigning responsibility when AI systems make errors.

Effective IA provides independent assurance that these risks are identified, assessed, and appropriately managed.
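The bias and fairness risk above can be screened with simple analytics. As an illustrative sketch (the outcomes, group labels, and threshold below are hypothetical), the disparate impact ratio compares favourable-outcome rates between two groups; a ratio below roughly 0.8, the so-called "four-fifths rule", is a common flag for further review:

```python
# Illustrative only: screening a decision log for disparate impact.
# The records, group labels, and threshold are hypothetical.

def favourable_rate(outcomes):
    """Share of decisions in a group that were favourable (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower favourable-outcome rate to the higher one.
    Values below ~0.8 often trigger review under the 'four-fifths rule'."""
    rate_a = favourable_rate(group_a)
    rate_b = favourable_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan-approval outcomes for two applicant groups.
group_a = [True, True, True, False, True, True, False, True]     # 6/8 approved
group_b = [True, False, False, True, False, False, True, False]  # 3/8 approved

ratio = disparate_impact_ratio(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold - flag for review")
```

A screen like this does not prove discrimination, but it tells the auditor where deeper root-cause testing of the data and model is warranted.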


3. Key Factors for Management Consideration

To ensure robust AI implementation and oversight, management should focus on the following areas, which will also form the basis of IA's reviews:

* AI Governance Framework:

* Is there a clear AI strategy aligned with business objectives?

* Are there defined roles, responsibilities, and accountability for AI development, deployment, and monitoring (including board/executive oversight)?

* Are there specific policies and procedures governing AI use, ethics, and risk management?

* Risk Management Integration:

* Have AI-specific risks been formally identified, assessed, and integrated into the overall enterprise risk management (ERM) framework?

* Is there a defined risk appetite for AI initiatives?

* Are risk mitigation strategies appropriate and actively monitored?

* Data Management and Quality:

* Are the data sources used to train and operate AI systems accurate, complete, relevant, secure, and ethically sourced?

* Are there controls over data input, processing, storage, and lineage?

* How is data privacy ensured throughout the AI lifecycle?

* Model Development, Validation, and Monitoring:

* Is there a documented and controlled process for developing or acquiring AI models?

* Are models independently validated for performance, bias, and robustness before deployment?

* Is there ongoing monitoring of model performance, drift (degradation), and outcomes in production?

* Is there a process for retraining, updating, or retiring models?

* Ethics and Fairness:

* Has the potential for bias been actively assessed and mitigated during development and operation?

* Are ethical principles defined and embedded in the AI lifecycle?

* Is the impact on stakeholders (customers, employees, society) considered?

* Is there appropriate transparency and explainability, considering the context and risk of the AI application?

* Security and Resilience:

* Are AI systems and underlying data protected against unauthorised access, manipulation, and cyber threats?

* Are appropriate business continuity and disaster recovery plans in place for critical AI applications?

* Legal and Regulatory Compliance:

* Does the use of AI comply with all relevant laws, regulations (e.g., data protection, emerging AI acts), and industry standards?

* Are contractual agreements with third-party AI vendors robust?
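Several of the monitoring questions above lend themselves to straightforward analytics tests. As a minimal sketch (the bin edges and score samples are hypothetical), the Population Stability Index (PSI) compares the distribution of a model score at validation time against production; by a common rule of thumb, values above roughly 0.2 indicate material drift worth investigating:

```python
import math

def psi(expected, actual, bins):
    """Population Stability Index between two samples over fixed bins.
    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate, > 0.2 material drift."""
    def proportions(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = len(values)
        # Floor at a tiny proportion so an empty bin doesn't blow up the log.
        return [max(c / total, 1e-6) for c in counts]

    p_exp = proportions(expected)
    p_act = proportions(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(p_exp, p_act))

# Hypothetical model scores at validation time vs. in production.
baseline   = [0.1, 0.2, 0.25, 0.3, 0.4, 0.45, 0.5, 0.6, 0.7, 0.8]
production = [0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
bins = [0.0, 0.25, 0.5, 0.75, 1.0]

value = psi(baseline, production, bins)
print(f"PSI: {value:.2f}")
if value > 0.2:
    print("Material drift - check retraining and model approval records")
```

A test like this gives IA an independent view of whether the monitoring management describes is actually detecting degradation in production.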


4. Required Internal Audit Skills and Capabilities

To effectively audit AI systems, the IA team needs to enhance its skillset beyond traditional audit competencies. Key areas include:

* Foundational AI/ML Understanding: A conceptual grasp of different AI types (e.g., Machine Learning, Natural Language Processing), how algorithms work, model training/validation concepts, and inherent limitations. Auditors need not be data scientists, but they must understand the concepts and risks.

* Data Analytics Proficiency: Advanced skills to analyse large datasets, assess data quality, identify potential biases in data, and understand data lineage relevant to AI models.

* IT Audit Fundamentals: Strong understanding of IT general controls (ITGC), cybersecurity principles, cloud computing environments (as many AI platforms are cloud-based), and system development lifecycle (SDLC) controls adapted for AI.

* Risk Management Expertise: Ability to identify, assess, and evaluate controls for complex, technology-driven risks specific to AI.

* Ethical and Regulatory Awareness: Knowledge of AI ethics frameworks, data privacy regulations (like GDPR), and the evolving landscape of AI-specific laws and standards.

* Business Acumen: Understanding how AI systems support business processes and strategic objectives to assess their effectiveness and alignment.

* Critical Thinking & Professional Skepticism: Essential for challenging assumptions, evaluating model limitations, and assessing the "black box" nature of some AI.

* Communication and Collaboration: Ability to effectively communicate complex technical findings to non-technical stakeholders (including management and the board) and collaborate with data scientists, IT specialists, legal, compliance, and business units.
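The data analytics proficiency listed above translates directly into fieldwork. As an illustrative sketch (the field names, records, and tolerances are hypothetical), a few lines of scripting can profile a training-data extract for completeness, duplicate keys, and implausible values before deeper testing:

```python
# Illustrative only: basic data-quality profiling of a training extract.
# Field names, records, and the valid age range are hypothetical.

def profile(records, key_field, numeric_field, valid_range):
    """Return simple quality indicators for a list of dict records."""
    lo, hi = valid_range
    missing = sum(1 for r in records if r.get(numeric_field) is None)
    keys = [r[key_field] for r in records]
    duplicates = len(keys) - len(set(keys))
    out_of_range = sum(
        1 for r in records
        if r.get(numeric_field) is not None
        and not (lo <= r[numeric_field] <= hi)
    )
    return {"missing": missing, "duplicates": duplicates,
            "out_of_range": out_of_range}

# Hypothetical applicant records feeding a credit-scoring model.
records = [
    {"id": 1, "age": 34},
    {"id": 2, "age": None},   # missing value
    {"id": 2, "age": 51},     # duplicate key
    {"id": 3, "age": 199},    # implausible age
]

report = profile(records, key_field="id", numeric_field="age",
                 valid_range=(18, 100))
print(report)  # {'missing': 1, 'duplicates': 1, 'out_of_range': 1}
```

Simple profiling of this kind lets the auditor quantify data-quality exceptions rather than rely solely on management's representations.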


5. Recommendations

* Invest in Training: Provide targeted training and development opportunities for the IA team in the areas outlined above.

* Strategic Sourcing: Consider co-sourcing audits with specialist third-party firms or hiring individuals with specific AI/data science expertise, especially in the initial stages.

* Update IA Methodology: Integrate AI risks into the audit universe, risk assessments, and audit methodologies. Develop specific audit programs for AI systems.

* Foster Collaboration: Encourage close collaboration between IA, risk management, IT, data science teams, legal/compliance, and the business units implementing AI.

* Pilot Audits: Begin with audits of lower-risk or less complex AI applications to build experience and refine the approach.


6. Conclusion

AI offers transformative potential, but its effective and responsible use requires robust governance, risk management, and control. A skilled and prepared Internal Audit function is essential to provide management and the board with independent assurance that AI risks are being adequately addressed. Proactive investment in developing IA's capabilities in this area is critical for navigating the complexities of AI adoption successfully.



© 2025 by ASD Consulting
