AI in NZ’s Public Sector:
Opportunities, Challenges, and Ethical Concerns
Artificial intelligence (AI) is increasingly being adopted by government agencies worldwide, with the potential to enhance efficiency, streamline processes, and improve decision-making. However, its rapid adoption also raises concerns about transparency, accountability, and ethical risk. New Zealand has recently introduced guidelines to ensure the safe and responsible use of AI in the public sector. Developed by the Government Chief Digital Officer, the Responsible AI Guidance aims to help agencies balance the benefits of AI against its inherent risks.
AI adoption in government is not new, but its acceleration presents challenges. A 2023 study found that 61% of public sector organisations globally are already using AI in some form, while 74% anticipate increasing its use in the next two years. In New Zealand, AI is being explored in areas such as data analysis, service delivery automation, and even environmental monitoring. Despite its potential, concerns persist around privacy, bias in decision-making, and the need for clear accountability when AI-driven systems impact citizens’ lives. The discussion around whether AI implementation is a strategic move or a dangerous gamble remains at the forefront of policy debates.
“It sounds very pragmatic, but there’s always that risk that we do the first part—the light touch—and don’t get to the proportionate and risk-based part.”
During a recent discussion on AI’s role in the New Zealand public sector, Shannon Barlow, NZ Managing Director at Frog Recruitment, and Stuart Mackinnon, Strategic Analyst at Analysis One, explored the implications of AI adoption in government agencies.
Barlow highlighted the country’s approach to AI regulation, which takes a “light-touch, proportionate, and risk-based” stance. However, Mackinnon cautioned that while this approach appears pragmatic, it could result in only partial implementation. “It sounds very pragmatic, but there’s always that risk that we do the first part—the light touch—and don’t get to the proportionate and risk-based part,” he noted.
The conversation touched on how AI applications in the public sector vary significantly in terms of risk. For instance, Mackinnon shared an example of AI being used for bird monitoring by the Department of Conservation, where “even if the AI fails, the consequences are minimal.” However, he contrasted this with AI tools used to assess a citizen’s eligibility for government assistance, where the stakes are far higher. “You can immediately get that sense of a shift in risk,” he explained.
Another key issue raised was the challenge of navigating AI frameworks. While agencies already have existing obligations around privacy, security, and reliability, there is uncertainty about what is genuinely new with AI-specific regulations. “Agencies might be asking themselves, ‘Where can I leverage existing expertise, and where do I need to seek specialist advice?’” Mackinnon remarked. This lack of clarity could slow adoption or lead to misinterpretations of AI’s role in governance.
Accountability and transparency were also hot topics. While AI can improve government efficiency, it can also obscure decision-making processes. Mackinnon pointed out that New Zealand could learn from other countries, such as the US, which provides a public, machine-readable database of federal AI use cases. “The path to social licence passes through the valley of transparency. There are no shortcuts,” he stressed. He advocated for a shift from agencies being advised that they “should” be transparent to them being required to do so.
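The US inventory Mackinnon refers to publishes each AI use case as a machine-readable record that the public and auditors can query programmatically. As a purely illustrative sketch of what one such record might look like, here is a minimal example in Python; the field names and the DOC bird-monitoring entry are assumptions drawn from this article, not any official schema.

```python
import json

# A hypothetical, minimal record for a public AI use-case register.
# Field names are illustrative assumptions, not an official schema.
use_case = {
    "agency": "Department of Conservation",
    "use_case": "Automated bird-call identification from field recordings",
    "purpose": "Environmental monitoring",
    "risk_tier": "low",       # e.g. low / medium / high
    "human_oversight": True,  # a human reviews AI outputs
    "vendor_model": False,    # built in-house, weights available for review
}

# Publishing the register as JSON (rather than a PDF) is what makes it
# "machine-readable": anyone can filter, say, all high-risk vendor models.
print(json.dumps(use_case, indent=2))
```

The point of such a register is less the specific fields than the commitment: every deployment is disclosed in a form that can be searched and audited without filing an information request.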
Barlow added that while AI can enhance operational efficiency, agencies must not neglect the human oversight aspect. “You still need that human aspect, and you can’t leave it all up to AI,” she said. “That skillful input from your good humans is really important.” This underscores the necessity of AI serving as an aid rather than a replacement for human judgement.
Mackinnon also highlighted the risk of proprietary AI models limiting transparency. “If something goes wrong, who’s liable? The person who designed the AI, or the user? If we’re relying on an external vendor who isn’t exposing their code or weightings, then that’s a problem,” he warned. As AI systems become more integrated into critical public services, clarity on liability will become increasingly crucial.
Best Practices for AI Implementation in the Public Sector
To ensure AI adoption in the public sector is both effective and responsible, organisations must consider several best practices:
- Prioritise Transparency – Government agencies must provide clear documentation on how AI systems operate, including their decision-making processes, data sources, and potential biases. This openness builds public trust and ensures that AI-driven actions can be audited.
- Implement Robust Oversight – A strong governance framework should be established to oversee AI applications, ensuring ethical standards are upheld. This includes independent reviews and mechanisms to challenge AI-based decisions when necessary.
- Mitigate Bias and Errors – AI models must be carefully trained and tested to prevent biases that could unfairly impact citizens. This involves regular audits, diverse training datasets, and mechanisms to correct errors before they cause harm.
- Define Clear Accountability – When AI is used in government decision-making, there must be clear lines of responsibility. If an AI-driven system makes an incorrect determination, it should be clear whether accountability lies with the agency, the technology provider, or the individuals overseeing the system.
- Ensure AI Enhances, Not Replaces, Human Decision-Making – AI should be used as a tool to assist professionals rather than entirely replacing human judgement. Skilled oversight is necessary to interpret AI-generated insights correctly and apply them within a broader decision-making framework.
- Monitor AI Development and Adapt Regulations Accordingly – AI is advancing at an exponential rate. Governments must continuously monitor these developments and adjust frameworks to ensure that policies remain relevant and effective in managing risks and maximising benefits.
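The "enhance, not replace" and accountability points above can be sketched as a simple pattern: the AI system produces only a recommendation, and a named human officer records the final decision. This is an illustrative assumption of how such a workflow might be structured, not any agency's actual system; the names and fields are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """An AI system's suggested outcome — never a final decision."""
    outcome: str       # e.g. "eligible" / "not eligible"
    confidence: float  # model confidence, 0.0 to 1.0
    rationale: str     # plain-language explanation, kept for auditing

def decide(rec: Recommendation, reviewer: str, approve: bool) -> dict:
    """Record a decision. The named human reviewer, not the model,
    carries accountability for the final outcome; rejected
    recommendations are escalated rather than silently overridden."""
    return {
        "final_outcome": rec.outcome if approve else "escalated",
        "ai_confidence": rec.confidence,
        "ai_rationale": rec.rationale,
        "decided_by": reviewer,  # clear line of accountability
        "ai_assisted": True,     # disclosed for transparency
    }

rec = Recommendation("eligible", 0.87, "Income below assistance threshold")
record = decide(rec, reviewer="case.officer@agency.govt.nz", approve=True)
```

The design choice worth noting is that the audit record stores both the AI's rationale and the reviewer's identity, so that when a determination is challenged, it is always clear who decided and on what basis.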
New Zealand’s journey with AI in the public sector is still evolving, and how the government navigates these challenges will determine whether AI becomes an asset or a liability. While the potential benefits are significant, only a balanced, transparent, and well-regulated approach will ensure AI delivers real value to both government agencies and the public.