AI at the Table: Simulating Stakeholder Voices in Policy Design
Dr. Rolly Alfonso-Maiquez, Director of Technology and Data Protection Officer, Verso International School


When building school policies around artificial intelligence, it’s tempting to begin with the tool itself.
“Write an AI policy for my school,” we might type into an AI chatbot and voilà — a decent-looking document appears.
But here’s the catch — words on a page don’t equal wisdom. What’s missing is the real conversation — the friction, the reflection, the diverse perspectives that policies are meant to carry. In schools, policy is not just about compliance. It’s about culture. And culture, at its best, is co-created.
The challenge, of course, is that not every school has the capacity to easily and quickly gather all the right voices in one room. Even when we do — schedules, hierarchy and logistics get in the way. So, I explored something different: using Generative AI not only to generate content, but to simulate the process of collaborative meaning-making.
Letting the AI Host the Meeting

Instead of asking the AI to produce a policy, I asked it to act as a facilitator of a roundtable discussion — a simulated space where key stakeholders could "speak." I created a cast of characters: a Head of School, other school leaders, parents, learners across grade levels, learning designers with divergent views and even a local community expert. Each had a defined persona, a realistic voice and sometimes, passionate disagreement.
I gave my AI assistant the task of moderating the conversation. In this simulation, each character offered their views on Generative AI in school — from concerns about over-reliance and surveillance, to hopes for personalization and equity. They responded to one another, challenged assumptions and sometimes shifted positions.
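For readers who want to try this themselves, the setup can be sketched in a few lines of code. Everything below is illustrative: `ask_model` is a hypothetical placeholder for whatever chat API you use, and the personas are examples rather than my actual cast.

```python
# Sketch of a persona-based roundtable simulation.
# `ask_model` is a hypothetical stand-in; swap in your provider's chat call.

PERSONAS = {
    "Head of School": "You weigh every proposal against the school's mission and risk appetite.",
    "Parent": "You worry about screen time, data privacy and over-reliance on AI.",
    "Grade 10 Learner": "You already use AI tools daily and want clear, fair rules.",
    "Learning Designer": "You see AI as a chance to personalize learning, but fear shallow use.",
}

TOPIC = "Should our school adopt Generative AI, and under what conditions?"

def ask_model(system_prompt: str, conversation: str) -> str:
    """Placeholder for a real LLM call. Here it just returns a stub reply."""
    return f"[simulated reply shaped by: {system_prompt[:40]}...]"

def run_round(transcript: list) -> list:
    """One round: each persona sees the shared transcript and responds in turn."""
    for name, stance in PERSONAS.items():
        system = f"You are the {name} at a school roundtable. {stance} Topic: {TOPIC}"
        reply = ask_model(system, "\n".join(transcript))
        transcript.append(f"{name}: {reply}")
    return transcript

transcript = [f"Moderator: Welcome. Today's question: {TOPIC}"]
for _ in range(3):  # several rounds let positions develop and shift
    transcript = run_round(transcript)
```

The key design choice is the shared transcript: because every persona responds to the full conversation so far, characters can react to one another rather than produce isolated monologues.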
The results were surprisingly human. Not perfect, not always elegant — but textured. The conversation felt alive. And because it unfolded over multiple rounds, deeper layers of belief, fear and aspiration emerged.
In the next iteration of the simulation, I inserted myself into the dialogue — not just as a facilitator, but as a participant. In this "interactive mode," my AI assistant paused after each round and invited me to reflect: Whose perspective do you want to challenge or support? Do you want to ask a follow-up question? Would you like to reframe the discussion?
It was a powerful shift — from observer to participating co-creator. I could slow the pace of the exchange, linger on certain dilemmas or introduce insights that wouldn’t have surfaced otherwise. And when I was ready, I’d give the cue to continue with the conversation.
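In code terms, interactive mode amounts to a pause in the round loop where the human facilitator can interject before the conversation resumes. A minimal, self-contained sketch, with a hypothetical `run_round` stub standing in for the persona replies:

```python
# Minimal sketch of "interactive mode": the loop pauses after each round
# and folds the facilitator's interjection into the shared transcript.

def run_round(transcript: list) -> list:
    """Placeholder: a real version would call an LLM once per persona."""
    transcript.append("Parent: [simulated reply]")
    return transcript

def interactive_session(rounds: int, get_input=input) -> list:
    transcript = ["Moderator: Welcome to the roundtable."]
    for n in range(rounds):
        transcript = run_round(transcript)
        # Pause: the human can challenge a speaker, ask a follow-up question,
        # or press Enter to let the conversation continue unchanged.
        note = get_input(f"Round {n + 1} done. Your interjection (Enter to skip): ")
        if note.strip():
            transcript.append(f"Facilitator: {note}")
    return transcript
```

Passing `get_input` as a parameter keeps the sketch testable; in live use the default `input` gives you the pause-and-respond rhythm described above.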
In this way, the simulation became a form of intellectual choreography — my AI assistant managed the rhythm, but I led the meaning-making. The result? A more nuanced, context-aware draft policy grounded in community-informed reasoning — and deeply aligned with our school’s values.
Designing for Multiple Entry Points

What excites me most is that this approach can meet users at different levels of confidence: For starters, you can let the simulation run on its own — producing a transcript filled with diverse, multi-perspective input. Think of it as “passive mode,” perfect for when you’re gathering ideas and letting the conversation unfold without jumping in.
And if you'd like to join the party, the AI pauses after each round to invite your input — giving you the chance to steer the dialogue, respond to a speaker or shift the direction entirely. You’re not just observing the scene — you’re part of it. This tiered approach makes the simulation accessible yet powerful. It’s not about AI replacing human input. It’s about creating space for it, in a way that’s scalable and deeply personal.
Why This Matters for Policy-Makers and School Leaders

This method isn’t just a neat trick — it solves real problems:
• It surfaces complexity. Real communities rarely speak in unison. Simulated conversations help us capture contradiction and diversity early, before policy becomes too brittle to adapt.
• It strengthens legitimacy. When you can show that your policy reflects different viewpoints — not just what’s efficient — you build trust.
• It prototypes stakeholder engagement. Even when time or access is limited, you get a meaningful rehearsal of what a real-world conversation might sound like — and what you might be missing.
• It shifts AI’s role. Rather than treating AI as a document machine, we start to see it as a collaborative thinking tool — a space to listen, rehearse and test our assumptions before the high-stakes decision is made.
A Thought to Leave With You

While this began as a tool for leadership and governance, the potential doesn’t stop there. Imagine bringing this into the classroom.
Picture your learners engaging in rich dialogues — not just with one another, but with simulated voices of classical philosophers, Nobel-winning scientists or even modern sports heroes; debating policies with world leaders, asking follow-up questions to historical figures and role-playing ethical dilemmas with tech CEOs.
The boundaries of discussion no longer stop at the classroom door — or at time, access or availability. This is more than drafting policies. It’s practicing empathy, argument and perspective-taking. It’s rehearsing democracy.
And maybe that’s what AI is best at right now — not giving us the answers, but helping us ask better questions — together.