Country Profile – United Kingdom
Introduction
The Electoral Commission of the United Kingdom has experienced rapid organizational growth in recent years. Despite its comparatively modest size relative to other UK public bodies, the Commission has been accelerating its digital maturity, driven by expanding legislative obligations, a renewed institutional mandate under new leadership, and the uptake of new technologies (Department for Levelling Up, Housing and Communities 2024; The Electoral Commission 2025). As reflected in the UK government’s new ‘strategy for modern and secure elections’, emerging technologies are presented as conduits for more efficient public services, facilitating democratic engagement (Ministry of Housing, Communities and Local Government 2025). Consistent with this vision, the Commission has made structured efforts to integrate AI into its operations, concentrating its initial activities on four strands that improve core statutory processes within the constraints of a growing organization.
Underpinning the Commission’s approach to AI integration is an understanding that digital infrastructure alone is insufficient to reap the potential benefits of AI solutions. Institutional adoption requires equal attention to staff acceptance, digital literacy and changes to institutional culture. To address the multifaceted challenge of responsible AI development, the Commission has established an AI Working Group as a central coordinating unit. The Working Group serves as the primary forum through which AI initiatives are raised, conceptualized and evaluated, and it reflects the deliberative nature of the Commission’s approach to AI technologies.
How is AI currently used at The Electoral Commission?
The Commission has established four main modalities of work on AI, governed by dedicated teams, all of which are represented in the AI Working Group. These four strands reflect considerations on how to use limited resources efficiently to address high-priority issues, as well as what tools can be adequately prototyped while the Commission’s AI architecture is still nascent:
- Financial oversight and political spending tool: One of the Commission’s most distinctive AI applications concerns the auditing of political spending. Under UK electoral law, political parties and candidates are required to submit receipts and invoices documenting campaign expenditure. During a general election cycle, this can result in upwards of 10,000 documents requiring review and data entry. To streamline this process, The Electoral Commission is developing an AI-powered tool to extract information from these documents systematically, run compliance checks and reformat the extracted information to match a standard template. The information is then uploaded to the Political Finance Online database, which publishes campaign expenditure (The Electoral Commission 2020). As part of the project, the Commission is collaborating with Microsoft to incorporate a data-entry agent for processing financial documents.
- Regulatory media monitoring: The Electoral Commission is developing an AI tool for exclusive use by its Regulatory Action Team to support media monitoring. The system automates the labor-intensive task of sifting through online media to identify content relevant to the Commission’s operational needs, primarily news coverage relating to political finance regulations and matters of potential regulatory concern. At the technical level, the model employs a deep-learning framework that learns from historical compilations and applies pre-programmed exclusion rules. It extracts relevant information from incoming articles, transforms it to comply with a standardized tone and format, and distributes the results as an internal report to a dedicated mailing list. The deep-learning component is designed to improve over time as it builds on prior selections, user feedback and other forms of fine-tuning. Newsletter writing has historically been handled by staff, so the Commission has designed the tool as an assistant rather than a fully autonomous replacement: staff in the Regulatory Action Team are encouraged to work collaboratively with the model, reviewing and correcting its output. This human-in-the-loop model is considered vital for quality assurance and for maintaining practitioner ownership of, and accountability for, the process.
- AI Deepfake Project: In preparation for upcoming elections in May 2026, The Electoral Commission plans to pilot AI-powered tools to support its detection and analysis of the impact of mis- and disinformation driven by political deepfakes, and to build its evidence base on the scale of the threat deepfakes pose to the UK’s electoral system. These tools will involve (1) a deepfake detection tool, assessing whether material is authentic; (2) a social media monitoring tool, scanning online platforms to identify narrative patterns, locate coordinated amplification and discern how narratives affect public sentiment on election campaigns; and (3) a structured repository that stores flagged content and associated metadata in an evidence base. This ‘evidence locker’ would inform appropriate responses in line with the Commission’s mandate or in collaboration with law enforcement and regulatory agencies. The repository also serves as a research corpus for drafting post-electoral reports, a task that falls under the statutory obligations of The Electoral Commission.
- Copilot: The Commission is planning to introduce an institution-wide Microsoft Copilot license, allowing practitioners to streamline the management of everyday tasks. The introduction of Copilot will be coupled with an educational program and institutional support to ensure that the software is utilized in compliance with the Commission’s rules on AI usage.
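The spending-audit strand described above amounts to an extract-validate-upload pipeline. The Commission’s actual tool is built with Microsoft tooling and is not public, so the sketch below is purely illustrative: the field patterns, record fields and compliance rules are all assumptions.

```python
# Illustrative sketch only: the Commission's actual tool is not public, so
# every field name, pattern, and compliance rule here is hypothetical.
import re
from dataclasses import dataclass

@dataclass
class SpendingRecord:
    supplier: str
    amount_gbp: float
    category: str

def extract_record(document_text: str) -> SpendingRecord:
    """Pull key fields out of a receipt or invoice with simple patterns.
    A production system would use a trained extraction model instead."""
    supplier = re.search(r"Supplier:\s*(.+)", document_text)
    amount = re.search(r"Total:\s*£?([\d,]+\.\d{2})", document_text)
    category = re.search(r"Category:\s*(.+)", document_text)
    return SpendingRecord(
        supplier=supplier.group(1).strip() if supplier else "UNKNOWN",
        amount_gbp=float(amount.group(1).replace(",", "")) if amount else 0.0,
        category=category.group(1).strip() if category else "UNKNOWN",
    )

def compliance_flags(record: SpendingRecord) -> list[str]:
    """Hypothetical rule checks to run before upload to the public database."""
    flags = []
    if record.supplier == "UNKNOWN" or record.category == "UNKNOWN":
        flags.append("missing-field")
    if record.amount_gbp <= 0:
        flags.append("invalid-amount")
    return flags
```

In such a design, only records with no flags would be passed through to data entry; flagged documents would be routed to a human reviewer, preserving the human-in-the-loop principle the Commission applies elsewhere.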
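The ‘evidence locker’ in the deepfake strand is, at its core, a store of flagged content plus metadata whose integrity can later be verified. A hypothetical sketch of what a single entry might look like; the schema is not public, so every field here is an assumption.

```python
# Hypothetical sketch of an 'evidence locker' entry; the Commission's actual
# schema is not public, so all fields here are assumptions.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class EvidenceEntry:
    content: bytes        # the flagged media item
    platform: str         # where the item was observed
    detector_score: float # deepfake-detector confidence, 0..1
    observed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def content_hash(self) -> str:
        """Stable fingerprint of the stored bytes, so duplicate submissions
        can be de-duplicated and evidence integrity verified later."""
        return hashlib.sha256(self.content).hexdigest()
```

A content hash of this kind is what would let the repository serve both as a chain-of-custody record for referrals to law enforcement and as a deduplicated research corpus for post-electoral reporting.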
AI use under consideration at The Electoral Commission
The Electoral Commission has limited its current AI activities to the four strands described above in recognition of resource constraints and the need to demonstrate clear proofs of concept before scaling up. However, the Commission expects these initial efforts to be the first in a continued trajectory toward the greater integration of AI. Most notably, the Commission anticipates that AI will be considered for voter outreach and information provision in future discussions. While the four strands currently under development mainly serve to enhance the capacity of the Commission’s internal operations, external-facing AI applications could improve the quality of public service delivery if implemented successfully. Nonetheless, the Commission stresses that any public-facing AI use could introduce a distinct set of challenges—ranging from risks to public trust to concerns about misinformation or ‘technoskepticism’—that will need to be addressed, so as to ensure that new systems are reliable and well-received.
Institutional inertia and changing the work culture
The Electoral Commission underscores that the fruitful implementation of AI tools involves changes to institutional culture. Change management and the challenges associated with reorganizing workflows are acknowledged as important factors that need to be addressed to realize the potential benefits of AI solutions. Even with considerable investment in acquiring and integrating AI systems, projects may not achieve their intended outcomes if end-users are hesitant, uncertain or unwilling to adopt them. Without early attention to these concerns, the introduction of new technologies could encounter resistance from practitioners who may be concerned about job changes, unfamiliar mandatory systems or significant adjustments to work processes.
When internal communication is not centralized, staff often receive information about AI initiatives through informal or fragmented channels, which risks blurring institutional direction. The Commission has responded by making internal communications a central element of AI governance, describing AI tools as productivity aids intended to decrease repetitive tasks and allow more time for complex or meaningful work. All processes involving AI retain human oversight, with practitioner review and involvement required to maintain professional control over system functions.
Before deploying Copilot, the Commission is conducting an assessment of digital literacy among all staff, which will inform the development of a targeted training program. This program will aim to help staff understand the practical aspects of the systems, as well as how they fit within existing policy frameworks and regulatory requirements. Importantly, the program will clarify that new AI systems do not alter the fundamentals of different staff functions but rather are intended to complement current workflows. The primary goal is to establish a baseline level of AI literacy prior to introducing new tools, in order to ensure that staff members have the appropriate skill sets to utilize AI effectively. Additionally, by ensuring that practitioners are comfortable with AI usage and basic troubleshooting, the Commission aims to decrease the demand for IT support.
Internal AI policy and the AI Working Group
Alongside the practical development of AI systems, The Electoral Commission is undertaking two parallel efforts to build a governance framework around the technology. First, the Commission is in the later stages of developing new policy instruments to govern the internal use of AI. These instruments will establish provisions for how AI technologies are to be managed and protected, including functions such as data labeling, search systems, and role-based access and purview. In addition to AI-specific policy, the Commission’s governance approach is guided by central data-protection principles, including clarity of business purpose for any data processed by AI systems, rules on appropriate periods for the retention of data, and principles regarding the minimization of data collected.
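The governance functions listed above (role-based access and data-retention rules tied to labeled data) can be made concrete as simple rule checks. A minimal sketch: the roles, data labels and retention periods below are invented for illustration and are not Commission policy.

```python
# Minimal sketch of role-based access and retention checks. The roles,
# data labels, and retention periods are invented, not Commission policy.
from datetime import date, timedelta

# Which data labels each role may access (role-based access and purview).
ACCESS = {
    "regulatory_action": {"media-monitoring", "political-finance"},
    "finance_team": {"political-finance"},
}

# Maximum retention period per data label.
RETENTION = {
    "media-monitoring": timedelta(days=365),
    "political-finance": timedelta(days=365 * 7),
}

def can_access(role: str, data_label: str) -> bool:
    """A role may only read data whose label falls within its purview."""
    return data_label in ACCESS.get(role, set())

def is_expired(data_label: str, collected_on: date, today: date) -> bool:
    """Data held longer than its label's retention period should be deleted."""
    return today - collected_on > RETENTION[data_label]
```

Expressing the rules as data rather than code is one way such a policy could stay auditable: the access and retention tables can be reviewed against the written policy instruments without reading program logic.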
Outside of policy, the Commission’s main institutional change in response to AI is the formation of a dedicated AI Working Group. Chaired by the Chief Executive, the AI Working Group was established specifically to create a cross-institutional forum for AI distinct from the Commission’s broader Portfolio Assurance Board (PAB), which otherwise oversees technology-related projects. To make the AI Working Group institutionally democratic and multi-stakeholder, its membership has been structured to include heads of directorates from across the Commission. Individual teams within the Commission are invited to present AI integration proposals, allowing the Working Group to deliberate collectively about the feasibility and prioritization of investments.
The AI Working Group’s remit encompasses both the inward-facing and outward-facing dimensions of AI procurement. Regarding the former, it is tasked with identifying existing manual processes across the Commission that are suitable candidates for AI solutions, and with determining the initiatives that should be elevated to the PAB for formal approval. Regarding the outward-facing dimensions, the Working Group holds a product awareness function, monitoring the state of the field to discover opportunities relevant to the Commission’s mandate. The AI Working Group has open dialogues with service providers to assess possible AI solutions in parallel with other relevant stakeholders across the Commission—including PAB, Digital Information and Technology, and Digital Products and Innovation—who are likewise charged with appraising new technologies from third-party vendors.
Authored by
Cecilia Hammar – International IDEA
Region or country
United Kingdom
Key takeaways
- Helping organizational change: The Electoral Commission’s AI integration process goes beyond building digital infrastructure to include programs intended to support organizational change and help staff embrace AI. Initiatives include a structured internal communication plan, emphasizing the benefits of AI integration into daily workflows, as well as the creation of targeted AI training programs.
- Current AI use: The Electoral Commission has four main areas of ongoing AI application, all of which serve to streamline internal operations. These comprise the financial oversight of political spending, regulatory media monitoring, disinformation detection and analysis, and productivity tools for everyday tasks.
- Institutional AI response: The Electoral Commission is developing policy instruments to govern internal AI use, including functions such as data labeling systems and role-based access and purview. It has also created a dedicated AI Working Group, tasked with inward- and outward-facing AI application appraisal and procurement.
References
Department for Levelling Up, Housing and Communities, Electoral Commission strategy and policy statement, policy paper, 29 February 2024, accessed 11 March 2026
Electoral Commission, The, Political Finance Online, updated 7 October 2020, accessed 12 March 2026
—, Electoral Commission makes key director appointments to its executive team, 5 September 2025, accessed 11 March 2026
Ministry of Housing, Communities and Local Government, Restoring trust in our democracy: Our strategy for modern and secure elections, policy paper, 17 July 2025, accessed 11 March 2026
The AI + Elections Clinic case studies were developed by International IDEA in partnership with national electoral management bodies (EMBs). The information is primarily based on one-on-one interviews with AI experts from these EMBs and has been corroborated with internal documents provided by EMBs as well as relevant public sources.
International IDEA publications are independent of specific national or political interests. Views expressed in this text do not necessarily represent the views of International IDEA, its Board or its Council members.