

Human Risk Management Workshop: Addressing Deepfakes, AI‑Enabled Threats, and Safer AI Use
Brian Hay
Executive Director, Cultural Cyber Security Pty Limited
Workshop Outline
This two‑day Human Risk Management workshop is designed to strengthen organisational performance by addressing the human drivers of risk, safety, and reliability in complex, high‑pressure environments, with a specific focus on AI‑enabled threats such as deepfakes, synthetic media, and AI‑driven deception.
As artificial intelligence becomes embedded in everyday business activity, the nature of human risk is changing rapidly. Deepfakes, voice cloning, synthetic identities, and highly convincing AI‑generated communications now target human trust, judgement, and authority structures, bypassing many traditional technical controls. This workshop recognises that defending against these threats is not primarily a technology problem; it is a human risk and education challenge.
Rather than concentrating solely on systems, processes, or cyber controls, the workshop examines how people interpret information, assess authenticity, respond to urgency, and make decisions under pressure. Participants explore how deepfakes exploit normal human behaviours such as trust in senior leaders, reliance on familiar voices or faces, deference to authority, time pressure, and fear of delaying action. These same behaviours, if not understood and managed, can undermine even the most mature security environments.
Participants examine how human factors, including workload, assumptions, distractions, handovers, leadership behaviours, cognitive bias, and organisational culture, can either amplify AI‑driven risk or act as powerful defences. Through a practical, scenario‑based approach, the workshop demonstrates that incidents involving AI deception and deepfakes rarely succeed because people are careless or untrained. Instead, they succeed because organisations have not yet adapted their Human Risk Management programs to the realities of AI‑enabled threat actors.
A central theme of the workshop is the dual role of AI:
- AI as a threat, where deepfakes, synthetic media, voice cloning, and generative content enable highly targeted social engineering, executive impersonation, and fraud. Participants explore how over‑trust in digital communications, automation bias, and a lack of verification norms increase exposure to these risks.
- AI as a defence, where AI tools can assist with detection, verification, anomaly identification, and decision support, but only when people understand their limitations, know when to challenge outputs, and apply them safely.
The workshop explicitly positions AI safety and deepfake resilience as core components of Human Risk Management. Participants explore why traditional awareness training is insufficient, and why Human Risk Management programs must continuously monitor how AI is actually used across the organisation, including informal, unapproved, or “shadow AI” use. Emphasis is placed on understanding how people verify information, escalate concerns, and challenge suspicious or AI‑generated content in real operational contexts.
Day One focuses on building a shared understanding of Human Risk Management in an AI‑enabled threat landscape. Participants are introduced to key human risk concepts alongside practical education on deepfakes, synthetic media, and AI‑driven manipulation. A facilitated desktop exercise demonstrates how realistic deepfake‑style scenarios exploit human trust, authority gradients, and decision pressure, often without triggering suspicion until it is too late.
Day Two deepens this focus through a more complex simulation that shows how AI‑enabled human risk accumulates over time if not actively monitored and managed. Participants observe how small verification failures, unchallenged assumptions, and informal workarounds can compound into significant operational, financial, or reputational harm. Strong emphasis is placed on practical defences: establishing verification behaviours, strengthening challenge and escalation norms, integrating AI safety into everyday workflows, and embedding human oversight into AI use.
Structured debriefs connect individual decisions to broader organisational outcomes and highlight how Human Risk Management programs must evolve to keep pace with AI‑driven threats, not through fear or restriction, but through education, behavioural reinforcement, and cultural change.
By the end of the workshop, participants can expect to:
- Understand how deepfakes and AI‑enabled deception exploit human behaviour, trust, and authority
- Recognise early warning signs of synthetic media, impersonation, and AI‑driven social engineering
- Apply practical verification and challenge techniques to reduce deepfake and impersonation risk
- Understand how Human Risk Management programs must monitor, measure, and address AI‑related behaviours
- Contribute to a culture that supports critical thinking, safe AI use, and confident challenge of suspicious activity
The workshop delivers measurable value by helping organisations move beyond awareness‑based or technology‑centric responses toward proactive, human‑centred management of AI risk. Participants leave with a shared language, practical tools, and actionable insights that can be applied immediately to strengthen deepfake resilience, improve decision quality, and support the safe, responsible use of AI across the organisation.