AI Risk Management Bootcamp: Turning AI Ambition Into Action

Mary Carmichael
Director, Technology Strategy & Risk

Momentum Technology

Workshop Outline

Workshop Overview


AI adoption is accelerating across every industry, but many organizations are still struggling to manage the risks that come with it. From unreliable outputs and data bias to regulatory scrutiny and vendor dependency, AI introduces new challenges that traditional risk programs were never designed to handle.
This hands-on bootcamp helps leaders move beyond theory and take concrete steps toward AI adoption. Participants will learn how to support innovation while building the guardrails needed to protect their organization, customers, and reputation.
Through real-world scenarios, interactive assignments, and proven governance approaches, attendees will gain the tools and confidence to manage AI risk across the entire lifecycle, from strategy and procurement to deployment, monitoring, and oversight.


Who Should Attend
This workshop is designed for leaders and professionals responsible for overseeing technology, risk, governance, and accountability in AI-enabled organizations. It supports those seeking practical approaches to confidently govern AI initiatives, reduce risk exposure, strengthen oversight, and ensure innovation aligns with organizational trust and responsibility.
Learning Objectives
By the end of this bootcamp, participants will be able to:
1. Recognize Where AI Creates Business Risk: Identify how AI introduces unique operational, regulatory, ethical, and reputational risks that traditional risk frameworks may not fully address.
2. Assess AI Risk Across the System Lifecycle: Gain practical tools to identify, assess, and prioritize AI risk exposures from strategy and procurement through deployment and ongoing monitoring.
3. Implement AI Governance and Controls: Learn how to implement governance structures and control mechanisms that improve AI system reliability, transparency, accountability, and organizational trust.
4. Strengthen Vendor Oversight and Risk Monitoring: Develop practical approaches to managing third-party AI risk and establishing monitoring, reporting, and assurance processes that support proactive oversight.
5. Build an Actionable AI Governance Roadmap: Understand how to develop a tailored roadmap that helps the organization adopt AI responsibly while enabling innovation and business value.


Course Outline


Day 1: Building the Foundation — Understanding AI Risk & Governance
Focus: Creating a shared foundation for understanding how AI changes organizational risk, governance responsibilities, and oversight expectations.
Participants explore:
• The evolving AI landscape: how organizations are adopting AI, where competitive pressures are increasing, and why traditional risk approaches often fall short
• AI risk across the lifecycle: understanding how risk emerges from use-case selection and model design through data sourcing, deployment, monitoring, and system updates
• Distinct AI risk characteristics: examining issues such as model uncertainty, unreliable outputs, data bias, explainability limitations, ethical concerns, regulatory scrutiny, and operational dependency
• Governance and accountability structures: clarifying decision ownership, oversight responsibilities, and the role of cross-functional collaboration among business, technology, risk, legal, and compliance teams
• Practical governance integration: embedding AI oversight into existing enterprise risk management, internal control frameworks, compliance programs, and assurance activities without creating parallel structures
The day includes guided discussions and applied working sessions where participants evaluate AI scenarios, identify risk exposures, and discuss appropriate governance and oversight responses.


Day 2: From Awareness to Action — Implementing AI Risk Management
Focus: Translating AI risk awareness into practical controls, oversight mechanisms, and implementation strategies that work in real organizational environments.
Participants explore:
• Designing effective AI control environments: establishing preventive and detective controls that reduce the likelihood and impact of AI failures, strengthening model reliability, managing performance drift, ensuring meaningful human oversight, and maintaining documentation that supports transparency and audit readiness
• Managing third-party and vendor AI risk: understanding hidden dependencies in outsourced AI solutions, conducting practical vendor due diligence, addressing accountability through contractual safeguards, and implementing ongoing monitoring of third-party AI performance and risk exposure
• Monitoring, metrics, and early warning indicators: identifying what meaningful AI risk measurement looks like, selecting indicators that provide actionable insight, detecting issues before they escalate, and establishing structured escalation and response protocols
• Internal assurance and independent oversight: clarifying the role of internal audit and second-line risk functions in AI governance, supporting independent model validation and testing, strengthening executive and board reporting, and preparing for regulatory and external scrutiny
• Implementation and organizational rollout: translating governance principles into practical action plans, prioritizing quick wins and longer-term initiatives, defining roles and accountability structures, and aligning change management efforts to support responsible and sustainable AI adoption
The day includes applied working sessions where participants evaluate organizational scenarios, assess control effectiveness, and develop approaches to strengthening AI oversight and implementation readiness.

Contact Us

ISACA Malaysia Chapter

Unit 916, 9th Floor, Block A
Damansara Intan, No. 1, Jalan SS 20/27
47400 Petaling Jaya
Selangor, Malaysia

Tel. +6017 219 6225 

© 2026 by CIAG Committee