Every so often, a project arrives that looks—on the surface—like a technology story, but turns out to be something different.

This is what happened when we realised we needed a better way to help staff navigate documentation. We initiated a project to build a GenAI agent, using the UTS Recast platform, capable of surfacing important content that is often difficult to find by searching manually through SharePoint.

While it was a complex task, the most useful lesson for me was simpler than I expected: GenAI isn't the centre of the system. People are.

The bot is not the point. The point is what happens when you reduce friction between people trying to do good work, and the information that’s currently scattered across documents, pages, channels, and institutional memory. If you’ve ever watched a capable colleague burn time triangulating between “the policy”, “the guidance”, “the latest update”, and “what we actually do”, you’ll recognise the problem immediately. 

What problem were we actually solving? 

In our world, “support documentation” is rarely one coherent thing. It’s a constellation: policies that disagree by omission, process notes that are out of date by one restructure, local workarounds that are brilliant but undocumented, and “the way we do it” that lives in someone’s head until they go on leave. 

So the project wasn’t fundamentally about “building a chatbot.” It was about designing a reliable interface to practice: a way for staff to get to a defensible, current, context-aware answer without having to become archaeologists of SharePoint. 

In a university setting—where risk, compliance, student impact, and reputational stakes are real—that distinction matters. A fluent answer is not the same thing as a safe or useful answer. 

Partnership (because this wasn’t a solo sport)

Work like this doesn't succeed on enthusiasm alone. We were able to progress because we had expert partners, especially valued colleagues in DAIU such as Kathryn Fogarty, who brought deep capability in governance, platform architecture, risk framing, and the kind of disciplined "yes, but how will this behave in production?" thinking that separates an interesting demo from an institutionally safe service.

Their expertise didn’t merely support the project; it improved its shape, its safeguards, and its sustainability.

Learning #1: Authority is designed, not generated 

The most persistent misconception about GenAI in organisations is that it knows things. It doesn’t. It predicts plausible text. That can be useful, and it can also be confidently wrong in exactly the way you don’t want when someone is asking, “What’s the correct process?” or “What are we allowed to do here?” 

So the design constraint wasn’t “make it smart.” The constraint was make it trustworthy. 

That meant doing the unglamorous work up front: 

  • defining what is in scope (and what is explicitly out) 
  • identifying what counts as authoritative or approved documentation when there are competing directives 
  • ensuring the agent signals uncertainty or gaps rather than smoothing them over 
  • treating governance as part of the design, not an afterthought 

If you want an agent to synthesise disparate directives into something reliable, you have to build the conditions for that reliability. GenAI can amplify clarity. It cannot conjure it out of organisational ambiguity. 

Learning #2: Retrieval beats brilliance 

The real value wasn’t in clever summarisation. It was in retrieval that actually works. People don’t ask questions in the language of your taxonomy. They ask in the language of their day: 

  • “Where’s the latest guidance on…?” 
  • “What do I do if a student…?” 
  • “Is this allowed under our current settings…?” 
  • “Who owns this process…?” 

A well-designed Recast solution translates those messy questions into the right slice of information—fast—and returns it in a form that supports action. In practice, this meant treating the content corpus as something to be engineered: structured, curated, tagged, and written for reuse. 

The agent didn’t become useful because the model improved. It became useful because the information architecture did. 
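The translation step described above can be sketched very simply: normalise the words people actually use onto the curated taxonomy, then rank tagged documents by overlap. The synonym table and corpus below are invented examples, not the real vocabulary or content, and a production system would use proper semantic retrieval rather than word matching.

```python
SYNONYMS = {  # staff language -> taxonomy tag (illustrative)
    "latest": "current", "guidance": "policy", "allowed": "policy",
    "owns": "owner", "process": "procedure",
}

CORPUS = [  # documents tagged during content curation (illustrative)
    {"id": "doc-1", "tags": {"current", "policy", "assessment"}},
    {"id": "doc-2", "tags": {"owner", "procedure", "enrolment"}},
]

def retrieve(question: str, corpus=CORPUS) -> list[str]:
    """Map messy phrasing onto taxonomy tags, rank docs by tag overlap."""
    words = {w.strip("?.,'\u2019").lower() for w in question.split()}
    tags = {SYNONYMS.get(w, w) for w in words}
    ranked = sorted(corpus, key=lambda d: len(d["tags"] & tags), reverse=True)
    return [d["id"] for d in ranked if d["tags"] & tags]
```

Notice that all the leverage is in the data: the synonym table and the tags are products of curation, which is exactly why the information architecture, not the model, determined usefulness.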

Learning #3: Human-in-the-loop isn’t a compliance tax 

You often hear: “We need a human in the loop.” It’s sometimes said like a brake—like we’re reluctantly stapling a person onto an automated process. 

But my takeaway is the opposite: the human-in-the-loop is the entire point of doing this well. 

When an agent reduces time spent hunting, compiling, and reformatting, it gives expertise back to people. It multiplies the capacity of staff who are already doing high-value work: supporting academics, interpreting guidance in context, resolving edge cases, applying judgement. 

If the agent is doing its job, staff don’t disappear. They reappear—more available for the work only humans can do: sense-making, relational support, ethical judgement, and designing conditions for learning that hold under pressure. 

What “success” actually looks like

The test I keep coming back to is pragmatic: after someone uses the agent, what changes? 

  • Do they take the next step sooner? 
  • Do they escalate less often (or escalate more appropriately)? 
  • Do they make fewer avoidable errors? 
  • Do they feel more confident they’re aligned with current expectations? 

These are unglamorous measures. They’re also the ones that distinguish a novelty from a service. 
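If you wanted to track those measures, even a very plain aggregation over an interaction log would do. The event fields below are hypothetical; real instrumentation would depend on what the platform can log.

```python
def summarise(events: list[dict]) -> dict:
    """Aggregate per-interaction outcomes into the measures listed above.
    Each event is assumed to record time to next step, whether the user
    escalated, and whether an avoidable error occurred (illustrative)."""
    n = len(events)
    if n == 0:
        return {}
    return {
        "median_minutes_to_next_step": sorted(
            e["minutes_to_next_step"] for e in events)[n // 2],
        "escalation_rate": sum(e["escalated"] for e in events) / n,
        "error_rate": sum(e["error"] for e in events) / n,
    }
```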

Where this leaves me

I’m pro-GenAI—but only in the way you’re pro-anything that helps people do their work with less friction and more care, while staying within safe and accountable boundaries. 

The forward-looking view isn’t “everyone gets a bot.” It’s: we treat institutional knowledge as infrastructure, we design for trust, and we use Recast/GenAI as a multiplier for human service—not as a substitute for it. 

The bot is not the point. 

The point is the person who needed an answer at 4:55pm on a Friday, found it quickly, and used the time they saved to support someone else—with more consistency, less stress, and a stronger sense that they’re standing on solid ground. 
