Artificial intelligence (AI) is increasingly becoming a tool for government agencies worldwide, with the potential to enhance efficiency, streamline processes, and improve decision-making. However, its rapid adoption also raises concerns about transparency, accountability, and ethical risks. New Zealand has recently introduced new guidelines to ensure the safe and responsible use of AI in the public sector. Developed by the Government Chief Digital Officer, the Responsible AI Guidance aims to help agencies balance the benefits of AI with its inherent risks.
AI adoption in government is not new, but its acceleration presents challenges. A 2023 study found that 61% of public sector organisations globally are already using AI in some form, while 74% anticipate increasing its use in the next two years. In New Zealand, AI is being explored in areas such as data analysis, service delivery automation, and even environmental monitoring. Despite its potential, concerns persist around privacy, bias in decision-making, and the need for clear accountability when AI-driven systems impact citizens’ lives. The discussion around whether AI implementation is a strategic move or a dangerous gamble remains at the forefront of policy debates.
During a recent discussion on AI’s role in the New Zealand public sector, Shannon Barlow, NZ Managing Director at Frog Recruitment, and Stuart Mackinnon, Strategic Analyst at Analysis One, explored the implications of AI adoption in government agencies.
Barlow highlighted the country’s approach to AI regulation, which takes a “light-touch, proportionate, and risk-based” stance. However, Mackinnon cautioned that while this approach appears pragmatic, it could result in only partial implementation. “It sounds very pragmatic, but there's always that risk that we do the first part—the light touch—and don’t get to the proportionate and risk-based part,” he noted.
The conversation touched on how AI applications in the public sector vary significantly in terms of risk. For instance, Mackinnon shared an example of AI being used for bird monitoring by the Department of Conservation, where “even if the AI fails, the consequences are minimal.” However, he contrasted this with AI tools used to assess a citizen’s eligibility for government assistance, where the stakes are far higher. “You can immediately get that sense of a shift in risk,” he explained.
Another key issue raised was the challenge of navigating AI frameworks. While agencies already have existing obligations around privacy, security, and reliability, there is uncertainty about what is genuinely new with AI-specific regulations. “Agencies might be asking themselves, ‘Where can I leverage existing expertise, and where do I need to seek specialist advice?’” Mackinnon remarked. This lack of clarity could slow adoption or lead to misinterpretations of AI’s role in governance.
Accountability and transparency were also hot topics. While AI can improve government efficiency, it can also obscure decision-making processes. Mackinnon pointed out that New Zealand could learn from other countries, such as the US, which provides a public, machine-readable database of federal AI use cases. “The path to social licence passes through the valley of transparency. There are no shortcuts,” he stressed. He advocated for a shift from agencies being advised that they “should” be transparent to them being required to do so.
Barlow added that while AI can enhance operational efficiency, agencies must not neglect the human oversight aspect. “You still need that human aspect, and you can't leave it all up to AI,” she said. “That skillful input from your good humans is really important.” This underscores the necessity of AI serving as an aid rather than a replacement for human judgment.
Mackinnon also highlighted the risk of proprietary AI models limiting transparency. “If something goes wrong, who’s liable? The person who designed the AI, or the user? If we’re relying on an external vendor who isn’t exposing their code or weightings, then that’s a problem,” he warned. As AI systems become more integrated into critical public services, clarity on liability will become increasingly crucial.
To ensure AI adoption in the public sector is both effective and responsible, agencies should follow several best practices drawn from the discussion above: assess each use case on its actual risk rather than applying a one-size-fits-all approach, keep humans in the loop for decisions that affect citizens, be transparent about where and how AI is used, and establish clear lines of accountability and liability before systems go live.
New Zealand’s journey with AI in the public sector is still evolving, and how the government navigates these challenges will determine whether AI becomes an asset or a liability. While the potential benefits are significant, only a balanced, transparent, and well-regulated approach will ensure AI delivers real value to both government agencies and the public.
Get in touch
Find out more by contacting one of our specialist recruitment consultants across Australia, New Zealand, and the United Kingdom.
Copyright © 2024, Frog Recruitment.