March 26, 2026

The AI Governance Convergence


How Regulatory Waves in AI Governance Are Reshaping Board and General Counsel Accountability

By 10 December 2026, organisations will be legally required to disclose what personal data they use in automated decision-making. But that compliance deadline is just the surface. Three regulatory waves are converging at once: privacy transparency requirements, competition and consumer fairness expectations, and emerging national security considerations around sovereign AI. Together, they create a governance and risk landscape that boards and general counsel are largely unprepared to navigate.

Introduction

The regulatory environment for artificial intelligence in Australia has shifted from experimental to prescriptive. The Privacy Act amendments commence in December 2026, the Australian Competition and Consumer Commission has sharpened its focus on AI transparency claims, and the federal government has signalled expectations around AI governance in high-risk settings and data security through frameworks like the National Framework for the Assurance of AI in Government. These are not separate initiatives; they are parallel movements that will converge on organisations simultaneously, each creating its own compliance obligations and governance demands.


But here is the uncomfortable part: most organisations have no visibility into where AI is actually being used in their business. While boards are discussing generative AI strategies, employees are quietly deploying uncontrolled AI tools that feed proprietary data, customer records, and strategic information into third-party systems. This gap between official AI governance and the reality of shadow AI deployment is where the real risk sits. It is also where regulatory enforcement is likely to focus first.

The Problem: Three Regulatory Tracks, One Compliance Deadline

The Australian regulatory convergence is happening across three distinct but overlapping domains.


1. Privacy Transparency and Automated Decision-Making


The Privacy and Other Legislation Amendment Act 2024 introduces mandatory disclosure obligations effective 10 December 2026. Australian Privacy Principle 1 (APP 1) will require organisations to publish in their privacy policies the kinds of personal information used in automated decision-making processes. The Office of the Australian Information Commissioner (OAIC) has been clear: this obligation applies to any decision made on or after that date, regardless of when the underlying algorithm or data collection infrastructure was deployed.


2. Consumer and Competition Scrutiny


The ACCC has explicitly positioned AI transparency as a consumer protection issue. Misleading or unexplained AI claims risk being treated as deceptive conduct under the Competition and Consumer Act 2010 (Cth). The ACCC’s December 2024 Digital Platform Services Inquiry reiterated the regulator’s intent to monitor AI systems for unfair practices, undisclosed algorithmic decision-making, and overstated capability claims. This creates parallel liability risk alongside privacy non-compliance.


3. National Security and the Sovereign AI Horizon


The Australian Government’s 2024 proposals for mandatory guardrails in high-risk AI settings, combined with emerging expectations around data residency and foreign ownership scrutiny through FIRB, signal an evolving governance landscape. Today, these obligations land hardest on defence primes, critical infrastructure operators, and financial services entities. But governance precedents established for heavily regulated sectors reliably become expectations for the broader market: what is mandatory for a defence contractor today is best practice for a mid-market enterprise tomorrow. Boards that understand this trajectory are better positioned to build durable governance architecture now, rather than retrofit it under regulatory pressure later.


These three tracks converge on the same underlying problem: a lack of visibility and control over AI use within the organisation. And that is before we examine shadow AI.

The Shadow AI Blind Spot

Shadow AI is the elephant in every boardroom that nobody is talking about. Ninety-eight per cent of organisations report unsanctioned AI use, according to 2025 research. But shadow AI takes two distinct forms, and the second is far harder to detect than the first.


Form One: In-Network Shadow AI


Employees are spinning up ChatGPT, Claude, Microsoft Copilot, bespoke language models, and AI agents integrated into enterprise systems, all without IT approval, security assessment, or governance controls. Gartner forecasts that 40 per cent of enterprise applications will feature task-specific AI agents by the end of 2026, up from under 5 per cent in 2025. The majority of those deployments will be uncontrolled. These tools leave a digital footprint inside enterprise systems, which means they can, in principle, be detected and audited, but only if someone is looking.
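
What "looking" might involve is illustrated by the minimal sketch below: scanning outbound proxy logs for traffic to known AI service domains. The domain list, log format, and column names are illustrative assumptions, not a vetted detection rule set, and this approach catches only in-network use.

```python
# Hypothetical sketch: flag outbound requests to known AI service domains
# in a CSV proxy log. The log schema and domain list are illustrative
# assumptions; adapt both to the organisation's actual environment.
import csv
from collections import Counter

# Illustrative, non-exhaustive set of AI service domains (assumption).
AI_DOMAINS = {
    "chat.openai.com", "api.openai.com",
    "claude.ai", "api.anthropic.com",
    "copilot.microsoft.com", "gemini.google.com",
}

def scan_proxy_log(path: str) -> Counter:
    """Count requests per (user, AI domain); assumes 'user' and 'host' columns."""
    hits: Counter = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in AI_DOMAINS:
                hits[(row["user"], host)] += 1
    return hits

if __name__ == "__main__":
    # Top ten user/domain pairs: a starting point for the governance census.
    for (user, host), count in scan_proxy_log("proxy.csv").most_common(10):
        print(f"{user} -> {host}: {count} requests")
```

A scan like this surfaces Form One only; as the next section explains, the second form of shadow AI leaves no such trace.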


Form Two: Personal Device AI and Data Exfiltration


The second form is more insidious and almost entirely invisible to governance controls. Employees photograph sensitive data (customer records, financial spreadsheets, strategic plans, personal information) using personal devices and feed those images into AI systems on their own laptops or phones. A loan officer photographs a customer file and uploads it to an AI tool for summarisation. A developer snaps a screenshot of a database schema and feeds it into an AI coding assistant. A compliance officer photographs a regulatory report and asks an AI to extract key risks.


None of this leaves a trace in enterprise systems. It is completely invisible to IT audits and shadow AI detection tools. Yet it constitutes direct data exfiltration and a potential breach of privacy obligations, information security policies and, in sensitive contexts, national security obligations.

Three Governance Failures Shadow AI Creates

Data leakage and intellectual property exposure


In-network shadow AI tools operate on an organisation’s systems but outside the organisation’s controls. Personal device AI use exfiltrates data entirely. Either way, sensitive customer data, financial records, strategic plans, and technical specifications end up in third-party systems. Once data enters those systems, it may be logged, cached, or used for model retraining. For a financial services firm handling customer information subject to privacy laws, or a technology company with commercially sensitive intellectual property, this is existential risk.


Compliance gaps and liability chains


Shadow AI tools, whether in-network or on personal devices, operate outside governance frameworks. When an employee uses an unapproved AI system to make or inform a customer decision, that decision falls outside documented AI governance processes. If that decision causes harm, the organisation’s defence that it maintains AI governance controls collapses. The organisation becomes liable for decisions made by tools it did not know existed.


Regulatory exposure


Shadow AI in both forms represents exactly the kind of uncontrolled automated decision-making and data handling that privacy regulators, competition authorities, and national security agencies will scrutinise first. When the OAIC investigates AI use, organisations will be required to produce an inventory of all AI systems processing personal data. In-network shadow AI cannot be fully disclosed because it was never tracked; personal device exfiltration cannot be disclosed at all because it was never visible.

The Case: A Financial Services Scenario

Consider a mid-tier financial services firm with 300 employees. The compliance team has documented AI use in credit decisioning and fraud detection. Their privacy policy is updated, their algorithms are tested, their governance committee meets quarterly. From the outside, it looks clean.


But unbeknownst to the board, loan officers across the business have individually subscribed to advanced AI tools to summarise customer applications and flag credit risks. Customer data (names, account histories, incomes, previous credit decisions) is being fed into these tools daily. Developers have built bespoke AI agents using open-source frameworks to automate routine compliance checks, feeding regulatory reports into systems that were never security assessed. The marketing team has deployed an AI-powered lead scoring tool integrated into their CRM without IT approval.


None of this was malicious. It was productivity driven.


It is now December 2026. The OAIC sends a compliance inquiry requesting a full inventory of all AI systems processing personal data. The organisation discovers it has 47 active AI deployments: it can document and govern 12; the other 35 exist in shadow.


Suddenly, the organisation faces a choice: disclose the inventory gap (admitting lack of control to a regulator) or provide incomplete disclosure (misleading a regulator). Either path carries enforcement risk that was entirely avoidable.

The Solution: From Audit to Architecture

Three steps move an organisation from vulnerability to resilience.


Step One: Map and Categorise


The first action is an honest AI inventory:


  • What systems exist?
  • Where are they deployed?
  • What data do they process?
  • Who owns them?


This is not a technical audit; it is a governance census, conducted by general counsel and risk, not IT, because the goal is understanding decision rights and data handling, not the technology stack. Systems should be categorised into three tiers:


  1. Official (documented and governed);
  2. Shadow (known but ungoverned); and
  3. Unknown (yet to be discovered).


The census will not eliminate all shadow AI, but it establishes the foundation: knowledge is the prerequisite for control.
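
For teams that want to operationalise the census, here is a minimal sketch of an inventory record capturing the four questions above and the three-tier categorisation. The tiers and questions come from the text; the field names and structure are illustrative assumptions, not a prescribed schema.

```python
# Minimal sketch of a governance-census record; tiers and questions follow
# the text above, while field names and structure are illustrative.
from dataclasses import dataclass
from enum import Enum

class Tier(Enum):
    OFFICIAL = "official"   # documented and governed
    SHADOW = "shadow"       # known but ungoverned
    UNKNOWN = "unknown"     # yet to be discovered

@dataclass
class AISystem:
    name: str                   # what system exists?
    deployment: str             # where is it deployed?
    data_processed: list[str]   # what data does it process?
    owner: str                  # who owns it?
    tier: Tier = Tier.UNKNOWN

def census_summary(inventory: list[AISystem]) -> dict[Tier, int]:
    """Count systems per tier: the headline figure for the board."""
    summary = {tier: 0 for tier in Tier}
    for system in inventory:
        summary[system.tier] += 1
    return summary

# Example mirroring the scenario firm: 12 governed systems, 35 in shadow.
inventory = [
    AISystem("credit-decisioning", "lending", ["personal"], "risk", Tier.OFFICIAL)
] * 12 + [
    AISystem("loan-summariser", "unknown", ["personal"], "unknown", Tier.SHADOW)
] * 35
print(census_summary(inventory))  # counts per tier: 12 official, 35 shadow
```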


Step Two: Establish Governance Gates


Create a mandatory approval process for any AI deployment that processes personal data, customer information, or commercially sensitive material. This does not mean banning employee use of AI tools. It means any tool handling in-scope data must pass a simple governance checklist:


  • Is data being transmitted outside the organisation?
  • Is the vendor based in Australia or a Five Eyes jurisdiction?
  • Is the tool trained on organisational data?
  • Can data be deleted from the system on request?
  • Is the use consistent with how customers understand their data is handled?


The checklist also needs to address personal device use: employees must understand that using a personal phone or laptop to photograph or copy organisational data into external AI systems is a policy breach, not a productivity shortcut. A sketch of how the checklist could be encoded follows.
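
To make the gate concrete, here is a minimal sketch that encodes the checklist above as a pre-deployment check. The questions mirror the list; the pass/fail logic, field names, and jurisdiction set are illustrative assumptions, not legal advice on what a compliant gate requires.

```python
# Hypothetical governance gate: a tool must clear every checklist question
# before handling in-scope data. Fields and logic are illustrative.
from dataclasses import dataclass

@dataclass
class AIToolAssessment:
    transmits_data_externally: bool
    vendor_jurisdiction: str               # e.g. "Australia", "UK", "US"
    trains_on_org_data: bool
    supports_data_deletion: bool
    consistent_with_customer_expectations: bool

# Australia or a Five Eyes jurisdiction, per the checklist (assumption).
APPROVED_JURISDICTIONS = {"Australia", "Canada", "New Zealand", "UK", "US"}

def governance_gate(a: AIToolAssessment) -> list[str]:
    """Return the list of checklist failures; an empty list means the gate passes."""
    failures = []
    if a.transmits_data_externally and a.vendor_jurisdiction not in APPROVED_JURISDICTIONS:
        failures.append("data leaves the organisation to a non-approved jurisdiction")
    if a.trains_on_org_data:
        failures.append("vendor trains models on organisational data")
    if not a.supports_data_deletion:
        failures.append("no deletion-on-request capability")
    if not a.consistent_with_customer_expectations:
        failures.append("use inconsistent with customer expectations")
    return failures

# Example: an unvetted summarisation tool fails on two questions.
tool = AIToolAssessment(True, "US", True, False, True)
print(governance_gate(tool))
```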


Step Three: Align to Established Frameworks


Organisations do not need to invent AI governance from scratch. The MindForge AI Risk Management and Governance Framework, developed by the Monetary Authority of Singapore in collaboration with a consortium of major financial institutions and released in January 2026, provides a proven and internationally credible architecture. MindForge establishes four governance pillars:


  1. Governance and oversight (clarity of roles and accountability for AI);
  2. AI risk management (identification, materiality assessment, and inventorisation);
  3. AI lifecycle management (controls covering the full lifecycle from deployment through retirement); and
  4. Organisational enablers (capability, infrastructure, and resources for ongoing responsible AI use).


Mapping an organisation’s AI environment against these four pillars, and documenting where gaps exist, gives a structured basis for remediation and, critically, for demonstrating to regulators that reasonable and proportionate steps have been taken.
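
A minimal sketch of that gap-mapping exercise follows, recording a status and supporting evidence against each pillar. The pillar names are taken from the framework as described above; the status values, evidence entries, and prioritisation logic are illustrative assumptions.

```python
# Illustrative gap map against the four pillars described above; status
# values and evidence fields are assumptions, not framework mandates.
PILLARS = (
    "governance and oversight",
    "AI risk management",
    "AI lifecycle management",
    "organisational enablers",
)

# Hypothetical self-assessment for the scenario firm.
gap_map = {
    "governance and oversight": {"status": "partial",
                                 "evidence": "quarterly committee minutes"},
    "AI risk management":       {"status": "gap",
                                 "evidence": "no shadow AI inventory"},
    "AI lifecycle management":  {"status": "gap",
                                 "evidence": "no retirement process"},
    "organisational enablers":  {"status": "partial",
                                 "evidence": "policy published, no training"},
}

def remediation_priorities(gap_map: dict) -> list[str]:
    """List pillars with open gaps, worst first, for the remediation plan."""
    order = {"gap": 0, "partial": 1, "aligned": 2}
    open_items = [p for p in PILLARS if gap_map[p]["status"] != "aligned"]
    return sorted(open_items, key=lambda p: order[gap_map[p]["status"]])

print(remediation_priorities(gap_map))
```

Kept current, a record like this doubles as the evidence trail for a regulator inquiry.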


The key to this approach is that it is anticipatory. Organisations that move now are moving with the regulatory curve. Those that wait will face the December 2026 deadline under pressure, with an inadequate inventory, limited time for remediation, and difficult choices about disclosure.

Conclusion

The convergence of privacy, competition, and national security expectations around AI governance is not a threat to innovation. It is a forcing function for discipline. Organisations that map their AI environment, establish governance gates, and align to proven frameworks will find the December 2026 Privacy Act amendments straightforward to implement. Those that do not will face a compliance crunch: a hard deadline, no inventory of what is in scope, and rapid decisions about disclosure made under regulatory scrutiny.


The board’s role is clear: demand that general counsel and risk owners report on three questions by mid-2026:


  1. What AI systems is the organisation officially deploying, and are they governed?
  2. What shadow AI exists, in-network and on personal devices, and what is the remediation plan?
  3. Is the organisation aligned with established governance frameworks, and what evidence can be demonstrated to a regulator?


The answers will reshape how organisations approach both innovation and risk. The time to act is now, moving with the regulatory curve rather than behind it.

References

Australian Government, National Framework for the Assurance of AI in Government (Data and Digital Ministers’ Meeting, 21 June 2024).


Privacy and Other Legislation Amendment Act 2024 (Cth).


Office of the Australian Information Commissioner, ‘Chapter 1: APP 1 — Open and Transparent Management of Personal Information’ (OAIC, 2024) <https://www.oaic.gov.au>.


Australian Competition and Consumer Commission, ‘Digital Platform Services Inquiry — Final Report’ (ACCC, March 2025).


Australian Government, ‘Introducing Mandatory Guardrails for AI in High-Risk Settings’ (Department of Industry, Science and Resources, September 2024).


Enterprise AI Governance Research (various), cited in ‘Shadow AI Explained: Risks, Costs, and Enterprise Governance’ (2025).


Gartner, ‘Top Strategic Technology Trends 2026’ (Gartner, 2025).


CIO, ‘Shadow AI: The Hidden Agents Beyond Traditional Governance’ (2025) <https://www.cio.com>.


Monetary Authority of Singapore and MindForge Consortium, AI Risk Management and Governance Framework: Operationalisation Handbook (MAS, January 2026).

