Over the past two years generative AI has become ubiquitous in public consciousness, occupying much space and time in our imaginations of how the nature and experience of teaching, learning, work and decision-making might evolve. It is not at all surprising, then, that ‘AI’ featured as a prominent theme in work produced by students in this year’s TD Elective, Envisioning Futures Worth Wanting.

In this subject, teams are invited to identify present-day issues and signals of change and explore possible futures. They then design and stage “experiential futures” to engage in critical reflection on whether these possible futures are indeed worth wanting, for whom, and why. As with any good futures studies work, the real value lies in the discussion, reflection and insights generated when people encounter possibilities that diverge from their present-day experience and expectations about the future.

Of the 57 experiential futures presented by this year’s cohort, two that foregrounded future experiences with AI stood out for me. One, set in an unnamed university in the year 2050, involved industry consultants implementing a program to replace human tutors with AI on the basis of cost effectiveness and sufficiently high student satisfaction scores. The other, set in a not-very-distant future, involved young people relying on GenAI chat interfaces for counselling and emotional support when dealing with stressful situations and important personal decisions.

How students use GenAI now

In both of these cases, the discussion quickly shifted from whether these futures are worth wanting to how they reflect present-day behaviours. Participants in the discussions shared not only some of the ways they currently use GenAI, but also their reasons for doing so. Ease of use and immediate access were mentioned – students studying or working on assignments late at night or on the weekend can receive an instant response from a chatbot, and young people who want or need counselling can skip the potentially long delay between contacting a professional and actually seeing one by simply typing or speaking into a GenAI interface.

The deeper, more interesting, and frankly more concerning insight emerged when participants talked about how much safer they felt interacting with GenAI than with teachers, psychologists, friends or family members. Paraphrasing some of the discussion, participants said that they sometimes felt scared to ask teachers and tutors questions about subject content or assessment tasks because those educators might realise that they did not understand. Similarly, participants reflected that a GenAI chatbot would not judge them or be disappointed in them for feeling stressed out, uncertain, or mentally unwell.

Hearing these reflections, I felt sad. Whether in educational or social settings, when people feel unsafe and unable to express their uncertainty or confusion, to share that they do not already have answers or that they might be struggling or suffering, we have failed. When our systems of socialisation and education produce situations where people avoid seeking help from other people for fear of being judged, those systems are broken. My sadness here is not only for individuals who feel unsafe and who evidently do not feel cared for by friends, family, teachers, or other people. My sadness is also for our collective capacity to respond to stress, uncertainty, and disaster, all of which benefit from social capital and community connectedness.

Imagining GenAI futures

These in-class discussions connect with my deep discomfort that far too many of the proposed use cases for GenAI boil down to “how might we make it easier to avoid engaging with other humans?”. We hear that GenAI tools can be “thought partners,” “collaborators,” “research assistants,” and “personal assistants.” They can ostensibly be used to increase productivity, generating a larger volume of output, faster, and at lower immediate cost than human individuals or teams. For efficiency-minded organisations, such as businesses, governments, or universities, where the primary goal of the past several decades has been to scale service delivery while reducing costs, the potential economies of scale here must be appealing.

Given that GenAI systems are designed to produce outputs that give the appearance of agreeableness – to the point of sycophancy (see Towards Understanding Sycophancy in Language Models) – the appeal to users may also be high. An automated thought partner or staff member that politely produces what you demand without question or challenge might be a very comfortable collaborator for a range of people, from those struggling with imposter syndrome to those who resent insubordination.

Combining these two potential outcomes of ubiquitous GenAI – scaling institutions and individualising services through automation, and reducing people’s exposure to psychological discomfort and disagreement – leads me to envision a future I do not want. In this possible future, we can imagine people reliant on GenAI assistants for everything from information and analysis to emotional support, and progressively losing any ability to encounter and consider contrasting viewpoints or critique. When we can turn to an always-on, always-agreeable interlocutor, why suffer the delays and risk of discomfort that can arise from trying to engage with human teachers, counsellors, or friends?

Considering consequences

Decreasing the frequency of human interaction and our exposure to diversity, critique, and challenge could have significant negative consequences. For individuals, resilience and psychological safety go hand in hand: psychological safety is about creating spaces in which people feel safe expressing uncertainty, taking risks, making mistakes, being wrong, engaging in rich and robust discussion, and learning, and it is through these experiences that resilience develops. For communities, social interaction across difference, which can be uncomfortable, is crucial for building and maintaining social capital and cohesion. These, in turn, are critical resources for collective resilience.

We are facing a changing and challenging world, where climate change is almost certain to create unprecedented hardships and where the foundations of a dominant but failing social, political, and economic order are being questioned and remade.

In this difficult context, we need to invest in and develop the resilience and social capital that may help us realise a future worth wanting. When we consider the potential uses of GenAI, we need to challenge the assumption that there is no alternative to scaling institutions through individualising and automating service delivery. And we need to carefully reckon with what we risk losing when we try to implement technological solutions for social problems.

  • Thank you Scott for this thoughtful piece that really resonated with me. I think you have beautifully articulated thoughts that have been trying to form in my head. How sad it is indeed if students feel they cannot talk to tutors when they do not understand something, when they do not feel safe to disclose that they are struggling with their studies or that they do not understand assessment tasks. I agree that if that is the case, we have failed as educators and as humans! I know that many people prefer to interact with bots of some kind, but as you point out, we need to think about what we are losing (as well as sometimes gaining) if we prioritise bots over human interaction. Maybe your blog would be a great one to read with students, to get their reactions to your thoughts. It could start interesting discussions about what students prioritise for a good student experience, and help us all think about how we can foster human relationships in a world where GenAI is becoming ever more prevalent.

  • Thanks Scott for a really engaging piece. I wholeheartedly agree that there’s a real risk that, in our rush to develop students’ AI literacy, we end up inadvertently discouraging students from making the effort to build meaningful relationships with each other or their teachers.

    My hope is that the requirement for assurance of learning pushes us to focus on dialogic methods of assessment, and therefore dialogic methods of learning, so our students get used to discussing with peers and teachers their processes, their understandings and even their feelings as they attempt to make sense of what they’re learning.
