Short, considered pieces on topics that matter in governance and advisory work — written to be practical, direct, and grounded in real experience.

Overview

Artificial intelligence is already reshaping how students learn, how teachers teach, and how knowledge is accessed and applied. Its impact is immediate, visible, and accelerating.

In Aotearoa New Zealand, early steps have been taken to respond to this shift. Guidance has been developed for schools, assessment bodies have introduced rules around AI use, and elements of AI-related learning are beginning to appear within the curriculum. These are important developments.

However, the current approach remains fragmented. There is not yet a clear, system-level framework that brings together curriculum, assessment, teacher capability, governance, and implementation in a coherent and practical way. As a result, responses are uneven, and the burden of navigating complexity is often carried at the school or classroom level.

This creates risk — but also opportunity.

The Case for a Structured Approach

AI presents both significant opportunities and complex challenges for the education system.

Opportunities include:

  • improved access to knowledge and personalised learning
  • new forms of creativity and problem-solving
  • increased efficiency in teaching and administrative tasks

Challenges include:

  • maintaining academic integrity
  • ensuring equity of access and outcomes
  • supporting teacher capability and confidence
  • managing risk, bias, and appropriate use

These are not issues that can be addressed through isolated guidance or policy adjustments alone. They require a coordinated and structured approach that recognises the education system as an interconnected whole.

A Governance and Systems Perspective

The introduction of AI into education is not simply a curriculum issue, nor solely a technology issue. It is a system-level change that affects how decisions are made, how learning is structured, and how outcomes are achieved.

This requires:

  • clear governance and oversight
  • alignment across institutions and agencies
  • practical implementation pathways
  • an understanding of how change is experienced at every level of the system

Without this, there is a risk that AI adoption will be inconsistent, reactive, and driven by local capacity rather than national direction.

Purpose of This Paper

This paper outlines a practical framework for integrating AI into the New Zealand education system. It is intended to:

  • bring together key considerations across curriculum, assessment, capability, governance, and implementation
  • provide a structured approach to implementation
  • support decision-making at system and organisational levels

The focus is not on technical detail, but on how AI can be introduced in a way that is coherent, equitable, and workable in practice.

Positioning

This paper is written from a governance and systems perspective, informed by experience across education, public-sector environments, and complex stakeholder settings. It reflects a practical view of how change occurs in real-world systems — where priorities compete, capacity varies, and implementation matters as much as intent.

A Practical Framework for Implementation

A structured approach to integrating AI into the New Zealand education system requires alignment across five key areas. These areas are interconnected and should be considered together rather than in isolation.

1. Governance and Oversight

Clear governance is essential to ensure consistency, accountability, and alignment across the system. This includes:

  • establishing clear roles and responsibilities across agencies and institutions
  • setting expectations for appropriate and ethical use of AI
  • ensuring alignment with national priorities, including equity and Te Tiriti o Waitangi
  • providing oversight of risk, including bias, misuse, and unintended consequences

Without clear governance, AI adoption is likely to be uneven and driven by local interpretation rather than shared direction.

2. Curriculum Integration

AI should be integrated into the curriculum in a way that supports learning, rather than treated as a standalone or optional topic. This includes:

  • embedding AI literacy across subjects and year levels
  • supporting students to understand both the capabilities and limitations of AI
  • encouraging critical thinking, ethical awareness, and responsible use
  • ensuring that curriculum design reflects real-world applications of AI

The focus should be on equipping students with the skills to engage with AI confidently and thoughtfully, rather than on treating AI simply as a tool.

3. Teacher Capability and Support

Teachers are central to successful implementation. Their confidence and capability will determine how effectively AI is integrated into learning environments. This includes:

  • providing practical, accessible professional development
  • supporting teachers to understand how AI can be used in teaching and assessment
  • creating opportunities for shared learning and collaboration across schools
  • ensuring that expectations are realistic and aligned with capacity

Without adequate support, there is a risk that AI will be either underused or used inconsistently.

4. Assessment and Integrity

Assessment frameworks need to adapt to ensure that they remain valid, fair, and meaningful in an environment where AI is widely available. This includes:

  • reviewing assessment approaches to reflect the realities of AI use
  • maintaining integrity while recognising new forms of learning and expression
  • providing clarity for students and teachers on appropriate use
  • balancing innovation with trust in the system

Assessment should evolve in a way that supports learning while maintaining confidence in outcomes.

5. Implementation and System Alignment

Successful integration depends on how well change is implemented across the system. This includes:

  • coordinating efforts across agencies, schools, and stakeholders
  • ensuring consistency while allowing for local flexibility
  • sequencing implementation in a way that is manageable and sustainable
  • providing clear communication and guidance at each stage

Implementation should be treated as an ongoing process rather than a one-off change.

A Connected Approach

These five areas are interdependent. Progress in one area without alignment in others is unlikely to be effective. For example:

  • curriculum changes without teacher support will not translate into practice
  • governance without implementation pathways will not result in change
  • assessment changes without clear communication may undermine confidence

A connected approach ensures that AI integration is coherent, practical, and sustainable.

What Good Looks Like

A well-implemented approach to AI in the education system is not defined by the technology itself, but by how confidently and consistently it is used across the system. In practice, this would mean:

  • Students are able to use AI tools responsibly, understand their limitations, and apply critical thinking in their work
  • Teachers feel confident in how AI supports learning, and are able to integrate it into teaching and assessment in a structured way
  • Schools operate with clear guidance and shared expectations, rather than developing their own approaches in isolation
  • Assessment systems remain credible, fair, and reflective of real-world skills
  • Decision-makers have visibility of risks, progress, and system-wide alignment

Importantly, AI is not treated as an add-on, but as part of a broader shift in how learning, teaching, and knowledge are understood.

A Phased Approach to Implementation

A structured rollout supports consistency while allowing for learning and adaptation. A practical approach could include:

  • Phase 1 — Establish Foundations: Develop clear national guidance aligned with governance expectations; define principles for appropriate use across the system; identify key risks and areas requiring immediate attention.
  • Phase 2 — Build Capability: Provide targeted professional development for teachers and school leaders; support shared learning across schools and regions; develop practical tools and resources for classroom use.
  • Phase 3 — Integrate and Align: Embed AI literacy within curriculum areas; align assessment approaches with evolving practice; ensure consistency across agencies and institutions.
  • Phase 4 — Review and Adapt: Monitor implementation and outcomes; gather feedback from educators and learners; refine approaches as capability and understanding develop.

Key Considerations

Several cross-cutting considerations should inform implementation:

  • Equity: ensuring all students and schools have access to tools, support, and opportunities
  • Te Tiriti o Waitangi: embedding partnership, participation, and protection within governance and practice
  • Clarity: providing guidance that is practical and easy to apply
  • Consistency: balancing national direction with local flexibility
  • Sustainability: ensuring that implementation is manageable and supported over time

Conclusion

AI presents a significant shift for the education system. The question is not whether it will be integrated, but how.

A structured, governance-informed approach provides the best opportunity to ensure that integration is coherent, equitable, and effective. This requires alignment across governance, curriculum, assessment, capability, and implementation — supported by clear direction and practical pathways.

With the right framework, AI can be integrated in a way that strengthens learning outcomes, supports teachers, and prepares students for a rapidly changing environment.

Most workplace conflict doesn't start with a disagreement — it starts with a conversation you were never part of.

Most workplace dispute resolution frameworks are designed for visible conflict — disagreements, formal complaints, clear breaches of conduct. But what happens when the conflict is subtle? When it shows up as silence, exclusion, quiet reputation damage, or shifting alliances before you've even had the chance to establish yourself?

These dynamics — often described as passive-aggressive behaviour, relational aggression, or workplace bullying — are harder to name, and even harder to address. Yet their impact can be significant, both for individuals and for organisational culture.

In my own experience across different roles, I've observed how quickly narratives can form — sometimes before a person has even had the opportunity to contribute. Informal conversations, assumptions, and untested claims can shape perceptions in ways that are difficult to challenge once established.

What makes this particularly complex is the role of bystanders. Research consistently shows that workplace behaviour is not just driven by individuals, but by group dynamics. When bystanders remain silent — or are drawn into reinforcing a narrative — behaviour can become normalised, even when it is misaligned with organisational values.

Traditional dispute resolution approaches often struggle in these situations. They rely on clear evidence, formal escalation, and identifiable incidents. Subtle behaviours — tone, exclusion, influence — rarely fit neatly into those frameworks.

So the question becomes: how do we respond more effectively?

Moving from Reaction to Early Intervention

One of the most important shifts is moving from reactive dispute resolution to early, informal intervention. Research in workplace conflict resolution highlights the effectiveness of early conversations, facilitated dialogue, and interest-based approaches (Fisher & Ury, Getting to Yes). Rather than focusing on positions ("what each party claims happened"), these approaches focus on underlying interests and perceptions ("what is driving this dynamic").

In practice, this might involve:

  • Creating space for direct, respectful conversations early
  • Encouraging individuals to test assumptions rather than rely on second-hand information
  • Using neutral facilitators where needed to support dialogue before issues escalate

Strengthening the Role of Bystanders

Bystanders are not neutral — they are influential. Studies of organisational behaviour (e.g. Amy Edmondson's work on psychological safety) show that cultures where people feel safe to question behaviour are more resilient and more effective.

Practical strategies include:

  • Encouraging a norm of "checking, not assuming"
  • Providing language for constructive intervention (e.g. "I'm not sure that's accurate — have you spoken to them directly?")
  • Reinforcing leadership expectations that integrity includes how we speak about others

Clarity, Documentation, and Calm Response

For individuals experiencing these dynamics, one of the most effective responses is often the least instinctive: staying calm and grounded. This doesn't mean ignoring behaviour — it means responding in a way that maintains credibility.

Helpful approaches can include:

  • Documenting interactions and patterns (facts, not emotion)
  • Seeking clarification directly where appropriate
  • Avoiding reactive responses that may reinforce existing narratives
  • Identifying trusted allies or mentors for perspective and support

These actions create both clarity and resilience, even in challenging environments.

The Role of Leadership

Ultimately, the effectiveness of dispute resolution depends on leadership. Leaders set the tone for:

  • What behaviour is acceptable
  • How concerns are addressed
  • Whether people feel safe to speak up

Organisations that take a proactive approach often:

  • Invest in conflict capability training, not just policy
  • Recognise and address informal dynamics, not just formal complaints
  • Promote transparent communication and accountability

A More Constructive Approach

Workplace conflict will always exist. The goal is not to eliminate it, but to manage it in a way that is fair, respectful, and aligned with organisational values. That requires moving beyond formal processes alone and recognising the importance of early conversation, shared responsibility, and thoughtful leadership.

Because in many workplaces, the most damaging conflicts are not the loudest ones — but the quiet dynamics that shape who is heard, who is trusted, and who is given the opportunity to succeed.

Having worked in governance, I've often seen how much of decision-making relies not just on data, but on judgement — context, experience, and the ability to weigh competing priorities.

That's why a recent paper by Stephen Goldsmith and Juncheng Yang really resonated with me. It explores how artificial intelligence is reshaping the relationship between discretion and accountability in urban governance — and it raises an important question for Aotearoa New Zealand: are we thinking deeply enough about how we govern AI, not just how we use it?

AI is often framed as a tool for efficiency. But in practice, it does something more complex — it redistributes decision-making. It can empower frontline staff with better information, while also increasing oversight and control at a managerial level. This creates a new dynamic. Rather than a simple trade-off between discretion and accountability, AI has the potential to increase both — but only if the systems around it are designed intentionally.

The concept that stands out is what the authors describe as accountable discretion — the ability to exercise judgement flexibly while maintaining transparency and ethical oversight. This feels particularly relevant as we consider how AI might be used across local government, public services, and community-facing systems.

From a governance perspective, a few things feel critical.

Capability. AI can amplify decision-making power — but not equally. Without deliberate investment in skills and access, it risks widening gaps, both within organisations and across communities.

Adaptability. Many of our governance and administrative structures were not designed for the pace of technological change we are now experiencing. More flexible, cross-functional approaches will be needed.

Data integrity. Trust in AI systems ultimately rests on trust in the data behind them — how it is collected, managed, and used.

Human judgement. AI can inform decisions, but it cannot replace the need for ethical reasoning, context, and accountability. That responsibility must remain with people.

Public trust. In a New Zealand context, this includes meaningful engagement with communities — ensuring people understand how systems affect them and have a voice in shaping them.

What this highlights is that AI in governance is not primarily a technology project. It is a governance project.

For those of us working in leadership and governance roles, the challenge is not simply to adopt AI — but to ensure it strengthens trust, fairness, and accountability in the systems we are responsible for. Because ultimately, the success of AI in public life will not be judged by its sophistication — but by the confidence people have in how it is used.

Decision-making in public and stakeholder-driven environments is rarely straightforward.

Issues often involve multiple organisations, competing priorities, and differing perspectives. Decisions may carry long-term implications while also being subject to immediate public scrutiny. In these contexts, the process of decision-making becomes as important as the outcome itself.

Effective decision-making requires a clear understanding of context — not just the issue at hand, but how it sits within a wider system. It involves identifying the key considerations, understanding where constraints lie, and recognising that there may not be a single "right" answer.

In practice, this means working through trade-offs. What is achievable within current constraints? What are the risks of acting — or not acting? How will a decision be experienced by those affected by it?

Clarity is critical. When issues are complex, it is easy for decision-making to become clouded by the sheer volume of information or by competing viewpoints. The role of governance and advisory work is often to help bring structure to that complexity — to identify what matters most and support decisions that are both informed and workable.

Process also matters. Decisions that are made through clear, fair, and transparent processes are more likely to be understood and accepted, even where there is not full agreement.

Ultimately, decision-making in these environments is less about finding perfect solutions and more about exercising sound judgement — balancing information, context, and consequence to support outcomes that are practical, defensible, and sustainable.

Many roles involve working across stakeholders. Fewer involve doing so where those stakeholders have fundamentally different priorities, constraints, and expectations.

In these environments, progress depends less on alignment in principle and more on the ability to work constructively across difference.

Different organisations operate within different frameworks. Public sector entities, private organisations, community groups, and iwi partners each bring their own perspectives, obligations, and ways of working. These differences are not obstacles to be removed, but realities to be understood.

Effective stakeholder work begins with clarity. Understanding what matters to each group, what constraints they are operating under, and where there is flexibility is essential. Without that, it is difficult to move beyond surface-level engagement.

Trust is built over time, through consistency and a willingness to engage constructively. It is also built through clarity — being clear about what is possible, what is not, and where trade-offs exist.

In practice, this often means navigating tension rather than resolving it completely. Not all interests will align, and not all outcomes will satisfy every perspective. The role of leadership in these situations is to support progress while maintaining relationships and keeping dialogue open.

Working across stakeholders is therefore not simply about coordination. It is about understanding systems of relationships, and contributing in a way that enables movement within those systems.

Governance in high-visibility environments brings a particular set of challenges.

Decisions are often made in settings where there is significant public interest, where differing views are strongly held, and where outcomes may be subject to scrutiny or critique. In these contexts, judgement becomes central.

Good governance is not only about process, although process is important. It is also about how decisions are approached — the care taken in understanding issues, the willingness to consider different perspectives, and the ability to remain measured in situations that may be complex or contested.

Public accountability adds another layer. Decisions must not only be sound, but able to be understood and explained. This requires clarity of reasoning and a disciplined approach to how information is considered and communicated.

There is also a balance to be maintained between responsiveness and consistency. Issues may evolve quickly, but decision-making needs to remain grounded and considered.

In practice, governance in these environments relies on a combination of structure and judgement. Structure provides the framework — hearings, reporting, process. Judgement provides the ability to navigate complexity within that framework.

It is this combination that supports decisions that are not only procedurally sound, but also balanced, credible, and capable of withstanding scrutiny over time.