In the first part of this series, I explored how Generative AI tools are helping some neurodivergent individuals to ‘help themselves’ by providing new forms of support for navigating communication and challenges with executive function. In this post, I focus on how GenAI can support another invisible challenge for neurotypical and neurodivergent people alike: learning how things are actually done.

You’ve likely encountered this scenario yourself. The official documentation outlines a process in clean, bureaucratic steps – but when you try to follow it, things unravel. Unwritten expectations, workarounds and opaque gatekeeping appear. The real procedure is a mix of half-documented rules and the passed-down practices of tribal knowledge. For neurodivergent people, students, newcomers to an organisation or anyone trying to find their feet in a complex system, this can be a frustrating and exclusionary experience.

This isn’t a problem of intelligence. It’s a problem of access. And this is where GenAI is beginning to offer unexpected value – not just as a source of information, but as a tool for procedural sense-making.

From “What does this mean?” to “What do I do next?”

While GenAI’s headline use cases (e.g. writing support, summarising readings, coding) are now well known, one of its subtler strengths is helping users bridge the gap between abstract instruction and concrete action.

Consider some common real-world challenges:

  • You’ve been asked to write an essay/response/briefing note, but no one can agree on what that means
  • You’re facing an academic integrity process, but the policy documents are more intimidating than informative
  • You’ve been assigned a group project or cross-functional task with seven unclear stakeholders and no timeline

GenAI and similar tools can act as cognitive scaffolds, as these example prompts illustrate:

  • Clarifying vague requests: “I’ve been asked to prepare a briefing note – what would that typically include in a university setting?”
  • Modelling lived procedures: “What’s the experience of going through an academic integrity investigation, step by step?”
  • Simulating social scripts: “How do I write a polite but firm email asking for overdue feedback from a teacher or senior colleague?”

For neurodivergent users, this kind of modelling can be transformative. The AI doesn’t just explain; it walks alongside, providing a judgement-free space to try, revise and rehearse. This reflects principles familiar to learning designers: signposting, scaffolding and fourth-wall-breaking to reduce cognitive load and anxiety.

Why this matters

The potential here goes beyond individual convenience. It reflects a deeper shift in how support is being reimagined. As feedback on my first post pointed out, we are moving towards a model where GenAI operates less as a tool and more as personal infrastructure – ambient, flexible and user-driven.

This shift is particularly meaningful for users who have historically had patchy or inadequate access to traditional forms of support. As one colleague noted, more privileged learners often have easier access to mentors or “insider” guidance. For others, AI becomes a kind of leveller: always on, non-judgemental and free from the social risks of “asking a stupid question.”

Proceed with caution

There are good reasons to be sceptical. These systems are not neutral – GenAI is produced and maintained by for-profit tech companies whose goals don’t always align with public good. Today’s free support may become tomorrow’s gated service – or worse, surveillance.

There’s also the risk of overreliance. Tools that scaffold can easily become crutches. If learners lean too heavily on GenAI to decode every task, we risk hollowing out essential practices such as critical thinking or peer collaboration. There’s a world of difference between being supported to do the thing and outsourcing the thing entirely.

Many critics rightly call out the insufficient attention paid to a major issue: GenAI companies training their models on intellectual property, including that of academics, without consent. Others note the environmental impacts and the exploitation of data workers by the companies profiting from the proliferation of this technology.

Design, don’t default

If GenAI is here to stay (and it looks like it is), then the challenge is not just about using it, but designing with it ethically. That means involving neurodivergent users and others with diverse needs as co-designers, not just consumers. It means embedding AI ethics and literacy into learning and workplace systems – not to replace support, but to enhance and extend it.

As we move towards an active AI–human partnership, questioning and implementing new concepts and processes along the way, we must remain committed to human connection. AI can simulate process, even empathy – but it cannot offer community. We shouldn’t let it replace those awkward, generous moments where someone leans over and says, “Hey, let me show you how it’s really done.”

Learning by doing

Elliot Eisner reminded us that formal learning is only a fraction of what gets transmitted in any educational setting. The same is true of the workplace. Whether it’s learning how to submit a grant, run a team meeting or navigate a performance review, much of what matters isn’t written down. Let’s change that – not just with better documentation, but with smarter tools, more inclusive practices and a willingness to name what’s currently unsaid.

So, what’s a process you only learned by doing, awkwardly, publicly and without support? What if we could scaffold those moments for others, and maybe even ourselves?
