Country Profile – Australia
Introduction
The Australian Electoral Commission (AEC) engages with AI within a national regulatory environment that is increasingly complex. The Australian Government (Commonwealth Government) has developed a range of governance frameworks, assurance mechanisms, support structures and implementation directives to shape the responsible adoption of AI across the public sector.
In alignment with emerging policies, the AEC’s approach to AI is deliberately conservative, ensuring that any potential use case complies with governance and accountability standards well before deployment. Core electoral administration remains manual. Voting is conducted using paper ballots, which are counted by hand, reflecting the Commission’s emphasis on maintaining the integrity of the ballot through auditable paper records (AEC 2025). Digital technologies play only a limited supporting role, such as the use of rule-based optical recognition software to scan voter preferences on Senate (Upper House) ballot papers, with all results verified through manual capture and human oversight before the preference distribution is calculated.
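The dual-capture verification the Commission applies to scanned Senate ballots can be illustrated with a minimal sketch. This is a hedged illustration only: the function name, data shapes and preference values below are hypothetical and do not describe the AEC’s actual systems.

```python
# Illustrative sketch of dual-capture verification: preferences captured
# by optical scanning are accepted only when they match an independent
# manual capture; any discrepancy is flagged for human review.
# All names and values are hypothetical.

def reconcile(scanned: list[int], manual: list[int]) -> str:
    """Return 'verified' only when both captures agree exactly."""
    return "verified" if scanned == manual else "human review"

print(reconcile([1, 3, 2], [1, 3, 2]))  # captures agree
print(reconcile([1, 3, 2], [1, 2, 3]))  # discrepancy flagged
```

The design point is that the automated capture never stands alone: agreement between two independent channels, with humans resolving mismatches, is what makes the result auditable.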
This manual foundation continues to shape the Commission’s perspective on AI adoption. Rather than pursuing automation of core electoral functions, the AEC has focused on exploring AI primarily for administrative efficiency, internal productivity, communications, and the analysis and planning of some voter services. In doing so, the Commission maintains a clear boundary around fundamental elements of the electoral process: vote casting, counting and tabulation, as well as electoral roll and boundary determinations and compliance and enforcement decision making. This caution does not mean that the AEC overlooks AI’s transformative potential; the Commission is concurrently exploring more substantive applications. As of early 2026, the Commission is developing an AI strategy intended to define potential use cases, governance arrangements and institutional capacity requirements. In parallel, it has begun modernizing its underlying data infrastructure to support future digital systems, ensuring that any prospective AI applications are built on reliable foundations.
AI tools currently in use
The AEC’s initial AI projects have been characterized by the Commission as low-risk, productivity-oriented tools that offer gains in daily operations, serving primarily to support the work of human practitioners.
The Commission has so far used AI in two main ways:
- Deploying GitHub Copilot for coding assistance: The AEC’s first AI deployment was the programming tool GitHub Copilot, introduced as part of the Commission’s program to modernize election systems and funded by the Australian Government. GitHub Copilot was initially trialed with a small cohort of software developers working on the .NET and ServiceNow platforms, who tested the tool’s efficacy before it was scaled up. Its usage centers on augmentation rather than generation: as developers write code, GitHub Copilot reviews it to identify improvement opportunities and suggests revisions. During the trial, staff reported increases in both the speed and the quality of their code, as well as perceived improvements in their personal skill sets. Following an evaluation of the trial, GitHub Copilot was rolled out to all developers across the modernization program and to the Chief Information Officer Division.
- Using Microsoft 365 Copilot to support productivity: The AEC has deployed Microsoft 365 Copilot to support staff productivity in two forms. First, a licensed version is currently under trial among 200 staff members. Trial participants are required to contribute to an active in-house community, ensuring that the technology is thoroughly evaluated and that staff actively explore possible use cases. Pulse surveys are conducted every four to six weeks, and regular showcase sessions allow participants to present their ideas for use cases and prompts. Second, a lighter chat version of Copilot has been made available to all AEC staff. Both forms of deployment focus on internal, administrative work rather than election-specific operations.
Aside from these two key examples of AI use, the AEC reports using AI to support staff members in need of adaptive or assistive technologies. This usage predominantly entails text-to-speech AI for visually impaired employees. The AEC also uses generative AI in its graphics department for the development of visuals in educational content (AEC 2026).
AI applications under consideration
While definitive plans for future AEC initiatives involving AI will be made concrete with the publication of a formal AI strategy later in 2026, the Commission has already scoped a set of potential areas in which AI might benefit its activities. These applications primarily focus on improving the accessibility and convenience of voter services, but they also include measures to improve election planning as well as programmes that support the responsible implementation of AI.
The AEC is currently considering four primary examples of AI use, along with a sandbox environment for AI testing and evaluation:
- Mis- and disinformation monitoring and generative engine optimization (GEO): The AEC has identified AI-enhanced social media monitoring for the purpose of conducting sentiment analysis and threat detection as a potential use case. While such monitoring offers a powerful tool for reducing pollution in the information environment by mitigating the spread of mis- and disinformation, the AEC equally recognizes the importance of ensuring that its own information reaches voters as early as possible. The Commission notes that the introduction of AI-mediated search-result summaries in traditional search engines creates new challenges for guaranteeing that verified information reaches voters. As fewer users click through to the AEC website, the Commission is deprioritizing search engine optimization (SEO) in favor of generative engine optimization (GEO). Part of this strategy involves structural changes to web content, such as migrating videos from video hosting platforms back to the AEC website and implementing file structures that are better recognized by AI search tools.
- Agentic AI for basic voter information: The AEC will explore the use of agentic AI to improve voter and stakeholder information reach, especially by microtargeting relevant information at specific audiences. An example of how AI might serve voters and stakeholders relates to the upcoming commencement of amendments to Australia’s electoral funding and disclosure legislation. The revised legislation will introduce real-time reporting requirements during elections and lower the threshold for declaring donations from AU$16,000 to AU$5,000. This will significantly expand the number of entities subject to regulatory obligations, including many who might be unaware of the new requirements. By deploying public-facing agentic AI via official channels, the AEC can more easily ensure that relevant information reaches the affected parties. This AI capability would continue to be complemented by AEC staff managing all complex inquiries through the Commission’s current phone and email services.
- Agentic AI for public service delivery: The AEC is investigating how integrating agentic AI into its website might help voters access services beyond the straightforward capabilities of a large language model (LLM) chatbot. Beyond answering basic voter queries on the electoral process, the Commission will explore various self-service functionalities, such as enabling the chat interface to redirect users to the appropriate service portal. Overall, the AEC is exploring a multi-agent architecture, drawing on specialized, purpose-built LLMs for basic inquiries related to its various service offerings, including funding and disclosure, enrollment, and postal voting. All decision making relating to voter entitlement and issuing votes will remain part of the AEC’s current manual processes and be undertaken by a human decision maker.
- Geospatial and predictive operational modelling: The AEC is introducing master data management and geospatial capabilities to internal systems under its modernization programme and is considering how these could be combined with AI to support predictive modelling. AI may, for example, be used to analyse voter enrolment patterns and provide insights on the possible allocation of resources to polling places. While this predictive analytical capability will act as an input into the development of potential electoral service settings, the AEC’s senior executive will continue to make all decisions on electoral service offerings.
- GovAI sandbox experimentation: The AEC is planning to use the government-wide platform ‘GovAI’, operated by the Department of Finance, as a secure sandbox environment for testing and prototyping AI concepts prior to their development or procurement. GovAI provides secure infrastructure specifically designed for Australian Public Service (APS) bodies to experiment with AI tools while remaining within government data sovereignty requirements. The GovAI platform was specifically designed to reduce vendor dependency by offering a neutral platform for exploring solutions (Australian Government, GovAI 2025).
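The GEO-oriented structural changes mentioned above typically involve publishing machine-readable markup alongside page content. As a hedged sketch only (the schema.org FAQ pattern is a common approach to making content legible to AI search tools, and the question and answer text below are hypothetical, not actual AEC content), such markup can be generated as follows:

```python
import json

# Illustrative sketch: schema.org-style JSON-LD of the kind embedded in
# web pages so that AI-mediated search tools can recognize authoritative
# question-and-answer content. All field values are hypothetical.
faq_markup = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "How do I enrol to vote?",  # hypothetical question
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Enrolment can be completed online via the "
                        "official electoral website.",
            },
        }
    ],
}

# The serialized JSON-LD would be embedded in a <script> tag on the page.
print(json.dumps(faq_markup, indent=2))
```

Structured markup of this kind complements, rather than replaces, the human-readable page: the same verified information is simply exposed in a form that generative search tools can parse reliably.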
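The multi-agent architecture described above can be sketched in miniature: a lightweight router classifies an inquiry and hands it to a purpose-built agent, while anything touching voter entitlement is escalated to a human. This is a hedged illustration under stated assumptions, not an AEC design; the agent names, keywords and routing logic are all hypothetical.

```python
from typing import Callable

# Hypothetical specialized agents, one per service offering. In a real
# multi-agent system each would wrap a purpose-built LLM; here they
# return placeholder strings.
def disclosure_agent(query: str) -> str:
    return "Routed to funding-and-disclosure assistant."

def enrolment_agent(query: str) -> str:
    return "Routed to enrolment self-service portal."

def postal_voting_agent(query: str) -> str:
    return "Routed to postal vote application service."

def human_escalation(query: str) -> str:
    # Entitlement and vote-issuing decisions stay with human staff.
    return "Escalated to a human officer."

ROUTES: dict[str, Callable[[str], str]] = {
    "donation": disclosure_agent,
    "enrol": enrolment_agent,
    "postal": postal_voting_agent,
}

def route(query: str) -> str:
    """Dispatch a query to a specialized agent; escalate by default."""
    lowered = query.lower()
    # Hard boundary first: entitlement questions never reach an agent.
    if "entitle" in lowered:
        return human_escalation(query)
    for keyword, agent in ROUTES.items():
        if keyword in lowered:
            return agent(query)
    return human_escalation(query)

print(route("How do I enrol at my new address?"))
# prints: Routed to enrolment self-service portal.
```

The notable design choice, mirroring the AEC’s stated boundary, is that the default path and all entitlement-related queries fall through to a human rather than to another model.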
Government-wide and Australian Electoral Commission-specific AI policy
The AEC is subject to a multilayered AI regulatory landscape, with administrative powers distributed across Commonwealth departments. The overlapping instruments that jointly govern the AEC’s AI activities are designed at different levels of abstraction, from high-level ethical principles to operational technical standards. The Commission’s forthcoming AI strategy is intended to consolidate these obligations into concrete plans for its future projects.
Key instruments that govern AI at the AEC are:
- Australia’s AI Ethics Principles: Published by the Department of Industry, Science and Resources, this voluntary, government-wide framework sets out ethical principles that apply to the design and use of AI systems in the public sector, covering human well-being, fairness, privacy, transparency, contestability and accountability. The principles serve as the normative foundation for all AI governance instruments in the Australian public sector. The AEC’s internal AI policy maps each principle to specific AEC obligations, translating the high-level commitments into actionable guidelines.
- Policy for the Responsible Use of AI in Government: Administered by the Australian Government Digital Transformation Agency (DTA), this binding policy sets baseline requirements for all Commonwealth agencies. This includes a mandate for appointing dedicated AI officials and for publishing a public transparency statement detailing how AI is used at the agency. Its key principles cover accountability channels, harm prevention and cross-governmental coordinating mechanisms.
- Technical Standard for Government’s Use of AI: Released by the DTA in late 2025, the standard provides practical, end-to-end technical guidance for entities deploying AI systems in a government context, covering design, data management, deployment, monitoring and disengagement. The AEC’s internal policy designates the Technical Standard as a key source of guidance on the technical terms and conditions governing AI tools used at the Commission, making it the most operationally specific instrument within the AEC’s AI governance architecture.
- National Framework for the Assurance of AI in Government: Agreed to by Commonwealth, state and territory governments, this framework establishes a consistent, principles-based approach to AI assurance across all levels of government. It maps practical assurance steps to each of the principles outlined in Australia’s AI Ethics Principles. The AEC has built its internal assurance procedures directly on this framework, including an AI assurance assessment template that must be completed and endorsed before any new tool reaches the Senior Executive Committee (SEC) for approval.
- Evaluation of the whole-of-government trial of Microsoft 365 Copilot: Based on the evaluation of an initial Copilot trial among 21 volunteer agencies conducted by the DTA in 2024, this report contains a draft AI impact assessment tool designed to help agencies compare use cases against Australia’s AI Ethics Principles at each stage of the AI lifecycle. While the AEC was not among the volunteer agencies that took part in the DTA trial, the evaluation report has been influential in shaping the assessment criteria for the Commission’s own ongoing Copilot trial.
- AI Plan for the Australian Public Service 2025: The AI Plan for the Australian Public Service is jointly led by the Department of Finance, the DTA and the Australian Public Service Commission, and it is intended to support public agencies with tools, resources, knowledge and guidance on AI. Complementing the Policy for Responsible Use of AI in Government, it mandates the appointment of specific focal points for managing AI in public agencies. The plan also prescribes investment in the GovAI platform, which the AEC intends to use for AI prototyping and testing.
- AEC’s internal use of AI policy: The AEC’s internal policy sits as an agency-specific layer beneath the national instruments. It maps Australia’s AI Ethics Principles to concrete AEC responsibilities, specifies role-level accountabilities for AI across branches of the Commission, and sets out guidance on both the acceptable and prohibited uses of AI. The policy is regularly updated to reflect the swift pace at which AI is changing at the Commission, accommodating the introduction of new tools or changes in governance arrangements.
Organisational arrangements to support AI
As part of the regulatory obligations under the Policy for Responsible Use of AI in Government, the AEC has appointed two roles to supervise AI use. First, the Chief AI Officer (CAIO) is responsible for implementing the Commission’s AI programme. That includes designing use cases, overseeing deployment, assuring the accountable officer that tools are being used responsibly, and leading the institutional push for AI transformation (Australian Department of Finance 2025). Second, the Chief AI Accountable Officer (CAAO) ensures that AI is used responsibly across the Commission, building institutional trust in the AI tools that are in place, serving as the contact point for inter-agency AI coordination and notifying the DTA of any new high-risk AI use cases (Australian Government 2025).
In addition to the responsibilities undertaken by the CAIO and CAAO, the AEC has formed a dedicated AI working group. This cross-departmental forum allows staff members to discuss, develop and test AI ideas before they are escalated to the Investment Committee or the Senior Executive Committee of the Commission for formal approval. The working group’s formal mandate is to identify use cases, exchange updates on policy developments, consider options for policy implementation, provide guidance to staff on AI procurement, and improve organizational awareness and literacy concerning AI. As part of the AI strategy development process, the AEC expects to recommend the creation of a more defined AI division as the Commission’s engagement with AI technology grows.
Authored by
Cecilia Hammar – International IDEA
Region or country
Australia
Key takeaways
- Low-risk approach: The Australian Electoral Commission’s (AEC) approach to AI is deliberately conservative, focusing its initial adoption on internal productivity tools while avoiding the use of AI for vote counting and tabulation. This approach also embraces a strict testing and assurance process before deploying new AI applications.
- Current AI use: GitHub Copilot assists developers by improving the speed and quality of coding, and Microsoft 365 Copilot is being trialed for streamlining internal administrative tasks.
- AI under consideration: The AEC is developing an AI Strategy to guide the development of further AI applications to support internal administration, voter services and election planning. Potential use cases include the AI-driven monitoring of mis- and disinformation, agentic tools to deliver targeted voter and service information, predictive models for operational planning, and secure AI experimentation through the Australian Government’s ‘GovAI’ sandbox.
- Multilayered AI governance landscape: The AEC must navigate a complex landscape of ethical principles, government policies, technical standards and cross-government assurance frameworks before integrating a new AI application.
References
Australian Electoral Commission (AEC), Central Senate Scrutiny – security and integrity, updated 11 August 2025, accessed 13 March 2026
—, Australian Electoral Commission artificial intelligence (AI) transparency statement, updated 9 February 2026, accessed 13 March 2026
—, The Senate counting process, [n.d.], accessed 13 March 2026
Australian Government, Standard for accountability, updated 2 December 2025, accessed 13 March 2026
Australian Government, Department of Finance, Establishing Chief AI Officers for the APS, 19 December 2025, accessed 13 March 2026
Australian Government, GovAI, About GovAI, updated 19 November 2025, accessed 13 March 2026
The AI + Elections Clinic case studies were developed by International IDEA in partnership with national electoral management bodies (EMBs). The information is primarily based on one-on-one interviews with AI experts from these EMBs and has been corroborated with internal documents provided by EMBs as well as relevant public sources.
International IDEA publications are independent of specific national or political interests. Views expressed in this text do not necessarily represent the views of International IDEA, its Board or its Council members.