In contemporary higher education — particularly in contexts shaped by Generative AI, academic integrity and external scrutiny — traditional approaches to assessment validity are no longer sufficient. At a recent online seminar (video; 70 mins) hosted by the UTS Business T&L Team, Monash University’s Tim Fawns argued that validity should be a practical, ongoing judgement about whether assessment evidence meaningfully supports claims about what students have learned.

The PPPP framework

To highlight the strengths and weaknesses of the many ways we evaluate student learning, Tim Fawns developed a framework in collaboration with CRADLE’s David Boud and Phill Dawson. The framework offers a practical guide for designing assessment strategies that provide stronger evidence of student achievement of learning outcomes. It comprises four proxies: product, process, performance and practice.

1. Product

‘Products’ are tangible outputs or artefacts created by students – e.g., essays, reports, websites and video presentations. They show that students can create something useful, valuable or of high quality, and they can be marked asynchronously. It’s less clear, however, how these artefacts were created or what each student’s individual, independent contribution was.

2. Process

This proxy refers to the steps, strategies and approaches students take in completing a task, or in preparing a product or performance. It provides information on how students did the work of assessment, and can show evidence of approaches to problem-solving, decision-making, disciplinary thinking and developmental trajectories. It’s important to know why you are capturing the process, and to allow for diverse processes where appropriate.

3. Performance

Performance is the situated demonstration of knowledge and skills in action, ranging from clinical procedures and teaching demonstrations to role-plays and musical recitals. It is observable, reveals real-time application of skills, and can capture adaptation to immediate and unpredictable circumstances. It may, however, require synchronous observation, is context-dependent, and calls for sampling across different situations.

4. Practice

Practice comprises patterned ways of doing things in authentic environments, including placements, community projects, lab work, studio work and teamwork projects. It reflects ongoing professional, disciplinary or general capabilities in applied settings, and can evidence relational and dispositional dimensions of learning. On the downside, practice can be challenging to observe and assess, and makes more sense when viewed over a longer period of time.

Potential misalignments

The following failures can arise when educators focus on task formats rather than interrogating what types of learning the assessment evidence actually sheds light on.

  • Mismatching – an inappropriate proxy is selected for assessing learning outcomes (e.g., oral communication skills marked via a written explanation of communication principles)
  • Mismanagement – an appropriate proxy is chosen but assessment fails to properly assess the learning outcomes (e.g., journal club is used to evaluate students’ ability to critique the literature, but they are awarded marks based on how well they facilitate the session)
  • Misinterpreting – drawing inappropriate conclusions from assessment evidence (e.g., inferring sound methodological understanding from correct mathematical answers)
  • Slippage – the intention is to assess one kind of proxy, but a different proxy is assessed (e.g., evaluating the quality of a software project via a pitch to hypothetical investors)
  • Spillage – additional proxies are unintentionally assessed (e.g., a process-based assessment of a student’s work in a laboratory includes inferences from a presentation of results or the accuracy of the results)
  • Over-saturation – one type of assessment proxy is repeatedly employed across a curriculum (e.g., submitted assignments, written tests, quizzes, exams, oral exams)

Further reading

By foregrounding evidence rather than tasks – and judgement rather than measurement – the 4Ps framework provides educators with a shared language for designing, critiquing and justifying assessment decisions in complex learning environments. Explore the framework further in the full paper from Fawns, Boud and Dawson: Identifying what our students have learned: A framework for practical assessment validation.
