AI – both planned and unplanned – is now embedded in the systems and processes auditors encounter, introducing complex ethical, governance, and audit risks that are not always well understood or controlled.
AI challenges fundamental audit concepts, including the reliability of evidence, the explainability of outputs, and the exercise of professional scepticism.
Models that lack transparency, are poorly documented, or change frequently increase the risk of inappropriate reliance and insufficient challenge.
AI also raises ethical issues, including bias, confidentiality breaches, data misuse, and over-reliance on automated outputs.
This session examines the practical implications of AI for auditors and how the fundamental principles in APES 110 apply in AI-enabled environments.
This session will:
- Explain how the fundamental principles in APES 110 apply when using or relying on AI
- Examine how AI affects audit evidence, risk assessment, and professional scepticism
- Identify key risk areas, including bias, model drift, explainability, and cybersecurity
- Outline governance frameworks and control considerations for AI-enabled systems
- Discuss emerging regulatory expectations and areas of focus, and
- Analyse scenarios where AI has failed, and why.