
Artificial intelligence transparency statement

ASIC’s Artificial intelligence (AI) transparency statement is issued in accordance with the Australian Government’s Policy for the responsible use of AI in government (the policy). The policy sets mandatory requirements for government departments and agencies. This page details ASIC’s implementation of those requirements.

ASIC is Australia’s integrated corporate, financial services, consumer credit and markets regulator. ASIC’s work helps maintain the integrity of Australia’s financial system and protects consumers from harm. We do this by undertaking a range of regulatory activities that can be improved by developing and deploying AI systems.

ASIC is committed to the safe, responsible and transparent use of AI and will comply with applicable laws and regulations.

Why ASIC uses AI or is considering its adoption

ASIC has an extensive remit, and AI can make us more effective and efficient. Given the rapid adoption of AI in financial services, our ability to regulate the use of AI in this sector will also be enhanced by gaining expertise in and experience with AI.

Classification of AI use according to usage patterns and domains

ASIC has adopted the definition of AI used in the policy. This definition is broad and potentially includes technologies and systems not commonly considered as AI. We manage AI systems by taking a risk-based approach. For example, we scrutinise more closely and apply tighter controls to AI systems that present the most risk.

As a law-enforcement agency, there may be occasions when we do not disclose the use of AI in connection with surveillance and enforcement activity, and this transparency statement should be read subject to that qualification.

The following lists describe how ASIC is currently using AI, classified according to the usage patterns prescribed by the Digital Transformation Agency (DTA), and the domains in which those patterns are applied.

AI usage patterns at ASIC:

  • Analytics for insights: identifies patterns, produces insights, and extracts information within structured and unstructured data
  • Workplace productivity: automates routine tasks, helps manage workflows and supports staff with administrative activities
  • Decision making and administrative action: supports decision making by guiding, assessing, or making a recommendation to a human decision maker
  • Image processing: identifies patterns and objects within images (such as nudity) to prevent exposure of ASIC staff to inappropriate material

Domains in which ASIC uses AI:

  • Compliance and fraud detection: supports identification and summarisation of patterns and anomalies in data to detect fraudulent activity and risk
  • Corporate and enabling: supports corporate functions by automating processes, optimising resource allocation, and improving operational efficiency
  • Law enforcement, intelligence, and security: supports enforcement and intelligence activities by analysing data from various sources and aiding in intelligence gathering
  • Service delivery: enhances the efficiency of internal services

Classification of use where the public may directly interact with, or be significantly impacted by AI or its outputs, without human review

ASIC does not use AI in any manner that allows direct interaction with the public or significantly impacts the public, without human oversight or involvement.

Measures to monitor the effectiveness of deployed AI systems and protect the public against negative impacts

ASIC has governance structures and processes in place and continues to develop them in alignment with whole-of-government initiatives. For example:

ASIC’s AI policy

Our AI policy sets out organisational responsibilities and considerations for developing, deploying and using AI. It is aligned with Australia’s AI Ethics Principles and implements the Australian Government’s Policy for the responsible use of AI in government.

AI assurance and risk management process

We are participating in whole-of-government initiatives to develop an AI assurance process comprising AI controls, measurement and assessment, monitoring and reporting, and roles and responsibilities.

For our regulatory and enforcement activities, ASIC uses well-understood systems that have clear and proven benefits. Our risk-based approach is aligned to the National framework for the assurance of AI in government.

ASIC’s data and AI governance practices include privacy, ethics, and security assessments. In all cases of AI use, human review or oversight is involved before any action is taken.

AI board

Our AI board is the primary governance body for AI at ASIC. It oversees the design, development, deployment and use of AI, including the approval of AI use cases and the making of recommendations for improvements, risk mitigation, and ongoing monitoring and compliance.

Foundational technologies

We are investigating and adopting fit-for-purpose technology to support the use of AI. This has involved:

  • regular monitoring and evaluation of performance,
  • adoption of robust security measures and access control, including monitoring for abnormal security activities, and
  • implementation of explainable AI methods to ensure the AI systems are interpretable, explainable and understandable.

Compliance with the requirements under the Policy for responsible use of AI in government

AI accountable official and chief AI officer

ASIC’s Senior Executive Leader, Data, Analytics and AI is the accountable official under the policy and the chief AI officer under the APS AI Plan 2025.

Strategic position on AI adoption

ASIC is currently developing a Data, Digital and AI Strategy which will outline ASIC’s strategic position on AI adoption.

Internal AI use case register and use case accountability

ASIC maintains an internal register of AI use cases that captures the minimum required fields outlined in the Standard for accountability. Each use case has a designated accountable use case owner.

Operationalising the responsible use of AI and AI impact assessment

ASIC has established an AI policy, an AI board and an AI use case assessment process that integrates all provisions of the DTA’s AI impact assessment tool. This includes assessment of AI use cases by ASIC’s AI board and reporting of any high-risk use cases to the DTA. ASIC also has existing risk incident management processes, which include AI incidents.

Staff training on AI

ASIC will be implementing the Australian Public Service Commission’s (APSC) AI in government fundamentals as mandatory training for all ASIC staff. In addition, we deliver role-based training programs and a series of organisation-wide AI training sessions. Data literacy is also a core capability within ASIC’s learning and development syllabus.

Updates and more information

This transparency statement was most recently updated on 27 February 2026. It will be updated to reflect significant changes in our approach to AI, and at least every twelve months.

If you have questions or would like more information about this AI transparency statement, please submit an online enquiry.

Update history:

  • 28 February 2025: Publication of first AI transparency statement
  • 27 February 2026: Reviewed to reflect the updated Policy for responsible use of AI released in December 2025