Even before the ramifications of Generative AI for higher education were fully realised, we could never be entirely certain that a student had genuinely done all their own work. Apart from the more obvious issues like contract cheating, we rarely knew how much help students might have received from family, friends, housemates or classmates. Third parties may have provided answers for an online quiz, rewritten sections of an essay, or advised them on how to fix problems in their coding. Perhaps the student had looked at a previous paper that had been downloaded from a file-sharing website or passed on by a former student.
We knew even less about how much any given student had actually learnt. Maybe they learnt to play the game: I’ll support argument X because I know my teacher agrees with it; I’ll write a reflection about how useful this project was because I know that’s what they want to hear. Often, we end up assessing students’ ability to meet our expectations at the expense of genuine learning.
Maybe they did the work, genuinely tried to learn and got decent grades, but still walked away feeling they hadn’t learnt anything, or without any confidence that they could apply it in the real world. I once got an HD for a subject where, in every assessment, I felt like I was guessing what I was supposed to do; every time I’d hold my breath, wait to see if I might fail, then be genuinely puzzled when another HD arrived. Did I learn anything? Nothing I could usefully apply even back then, let alone so many years later.
Responsibility for learning and assessment
So maybe we need to rethink some of our long-held beliefs about how we measure learning. Maybe we need to encourage a culture where students take more responsibility for being aware of what they’ve learnt. After all, according to one of our UTS policies, “students are responsible for their own learning (and) making decisions about their learning journey”.
Apparently that responsibility stops when it comes to assessment. Suddenly, we decide that we are solely responsible for measuring how much students have learnt – even though we’re not in their heads, we haven’t usually witnessed (let alone been involved in) most of their processes for completing assessable work, and we rarely speak with them about what they’ve learnt, how they might apply it, or how confident they feel about using it outside a classroom.
When we decide that we should be the sole arbiters of what students have learnt, it’s not much more of a step to decide that we should also be the sole arbiters of whether students have acted with integrity throughout the learning process. Do you know your students well enough to make those kinds of judgements? If we’re honest, we’d admit that sometimes we don’t even know our students’ names, let alone anything about them as people.
Preparing our students for the future workplace
Of course we need to assure, as best we can, that students have sufficient skills and understanding to gain meaningful employment opportunities. We owe this to our students and to the broader community. We must also be mindful, however, that our primary purpose is to help our students develop new ways of thinking, not to act as gatekeepers – especially as it’s increasingly unclear what lies beyond the gate.
And of course in the GenAI era we need to provide opportunities for students to critically engage with relevant tools. We need to prepare students for their future, not our past. But given that machines are increasingly capable of completing complex tasks more efficiently and effectively than humans, perhaps we also need to provide students with more opportunities to be human.
What might this look like? Well for a start, it would avoid any pedagogical technique or technology that presumes an adversarial relationship between students and teachers. It would be built on meaningful relationships with our students and genuine dialogue about the challenges that both we and they are facing in adapting to a world where disruption is profound, rapid and constant. It would acknowledge that none of us know what jobs will be around or what skills might be required in 10 years, let alone the 30-50 years that many of our students are looking ahead to in their professional lives.
But to be human is not to have answers. It is to have questions – and to live with them. The machines can’t do that for us. Not now, not ever.
D. Graham Burnett, Will the Humanities Survive Artificial Intelligence?, The New Yorker
It would mean worrying less about how much of a submitted task was generated by AI, and more about finding space in the curriculum to probe students about their understanding. This would not be an attempt to “catch them out”, but a way of engaging with them, of stimulating their intellectual curiosity, of expanding their thinking, and even – if we’re brave enough to be open to it – of learning from them. Are we ready for that?
Resources and support
Academic integrity at UTS (on SharePoint; requires UTS log-in) offers staff guidance on:
- Enhancing curriculum and assessments – practical ideas to help you design and regularly review curriculum and assessments with academic integrity in mind
- Educating and supporting students – ways to provide students with the relevant knowledge, skills and support to maintain academic integrity
- Fostering connection and belonging – how to get to know your students and enhance a sense of belonging in all learning environments at UTS
The Student misconduct and appeals site on SharePoint has a wealth of clearly organised information and updates covering key concepts, processes and practical advice, including the latest information about GenAI tools. For advice about any aspect of the student misconduct and appeals process at UTS, or about training options, contact the Student Misconduct and Appeals team.
If your students need to get on top of the basics or brush up their understanding of academic integrity, the Academic Integrity at UTS Canvas modules are a great place to start.
In the era of AI, it becomes even more critical to rethink the role of assessment in engineering education. Rather than treating assessment as a tool to certify outcomes or rank students based on final results, we should reposition it as a dynamic driver of learning, growth, and capability development.
In this context, assessment should not be the exhaust of the learning process (something that comes after all the meaningful work is done), but the engine that powers it. When thoughtfully designed, assessment can actively shape student motivation, guide reflection, and deepen engagement.
Especially in engineering education, where real-world problem solving, teamwork, and adaptability are paramount, assessments should capture the richness of the learning journey. This includes evaluating how students think, collaborate, apply knowledge creatively, and leverage tools like AI, not just what final answers they produce.
By shifting from grading what students know to assessing how they learn and contribute, we can create an environment that fuels curiosity, encourages collaboration, and builds lasting competence. Isn’t this the kind of learning culture we should be aiming for in the age of intelligent systems?
Love your metaphor, Jianchun, of assessment as the engine rather than the exhaust!