As a lecturer in Construction Automation and Property Technology, I introduce undergraduate construction and property students to emerging technologies in the built environment. In my Property Technology and Innovation class, one of the activities involved creating custom AI agents using Microsoft Copilot. The aim was to help students develop the AI literacy the industry increasingly demands. Around 70% of these students already work in the industry, which meant they could immediately see the potential of what they were learning. 

The challenge: LeaseMate GPT

Commercial leases provided the perfect test case. They are long and detailed, full of clauses that can be costly to overlook, and often take hours to review manually. Students were tasked with building a custom AI agent using Copilot called LeaseMate GPT – Commercial Lease Analyser. The purpose was to automate the review and summarisation of lease contracts, highlight key obligations, critical dates and clauses.  

I ran a live demonstration in class and showed them high-level instructions on building a custom Copilot AI agent. From there, students were encouraged to refine and expand the approach, developing their own prompt strategies and adding enhancements. To test their creations, I provided a sample lease agreement I had drafted. Students then adapted and restructured prompts, with some adding different prompts on lease clauses that felt were critical to include. What mattered was that they were designing the agent for a professional task, much like industry teams do when tailoring AI for specific workflows. Here are the slides I used for this activity (PDF, 430KB).

This practical approach reflects the findings of Qian (2025) in Pedagogical applications of generative artificial intelligence in higher education: a systematic review. Qian (2025) emphasises that AI in education is most effective when it develops learner autonomy and critical thinking. 

The starting point – and what changed

When we began, student confidence with Generative AI varied widely although they all had experimented with ChatGPT or Copilot at work or university. Concerns ranged from accuracy and ethics to the perception that AI was a “one-size-fits-all” product with little scope for tailoring. 

By the end of the activity, their views had shifted. Students saw that Copilot could identify key clauses in seconds but also recognised that it could make mistakes. Rather than undermining trust in the tool, these errors became opportunities to refine their agents and improve accuracy.  

Questions that sparked discussion

One question came up repeatedly was: is there a difference between simply uploading a lease document to Copilot chat for analysis and creating a custom Copilot AI agent for the same task? 

We discussed that uploading a lease into a one-off chat is quick and useful for ad-hoc analysis. It works well if the requirement is to get the overview of a single document or find a simple extraction. Creating a custom AI agent however, allows for consistency, repeatability, and shareability. It can be set up with standard instructions, connect to approved data sources, and produce outputs in a uniform format. 

Another question followed: can any document be uploaded, or does it require permission from the relevant parties? This opened an important discussion about ethics and compliance. Students recognised that even if Copilot operates within enterprise boundaries, they remain responsible for ensuring they have the right to use the document. In practice, that means seeking permission from the document owner or confirming that the file is already approved for analysis. For those in industry, it also means respecting confidentiality clauses, client agreements, and data protection laws. 

The student voice

Students were surprised by how quickly they could adapt the AI agent to their needs. Once they realised they could change the instructions, they took ownership of the process. They were also struck by the time saved, with tasks that might have taken hours completed in minutes. They began thinking about other uses such as analysing asset management reports, reviewing requests for proposals, and preparing client summaries. They valued how the activity demystified AI, showing it as something they could control and adapt within the ethical and legal boundaries. 

Where we go next and tips for other educators

In future, the exercise will expand to include different contract types and technical reports using publicly available samples. Role-play exercises will be added, where one student acts as the AI-assisted analyst and another as the client, ensuring that outputs can be explained in clear, professional language. This approach could be adopted in other disciplines, as the ability to configure, test and refine AI agents blends technical expertise with professional judgement. Educators should choose relevant challenges, allow learners to adapt, address ethics, bias and data privacy early, and build in opportunities for refinement to track progress.

To learn more about Biyanka’s use of GenAI in teaching, join the upcoming Teaching with GenAI showcase on 25 September.

  • Great Blog sharing your teaching practice to develop student agency, critical thinking and applied practices in Prop Tech @Biyanka. Your engaging pedagogical approaches affirm that AI activities can enhance and sharpen critical literacy and thinking skills while bridging AI applied in industry with AI developed in higher education. Very kind of you to share your teaching slides & instructions to enable teachers to explore, experiment and build into their teaching, where suitable. Great work. Melinda

  • Thanks, Biyanka. Love seeing how you’re bringing emerging technologies into the classroom. Getting students to create their own AI agents is such a practical way to build AI literacy, and it’s great that so many of them can take these skills straight back to their workplaces.

Join the discussion

Skip to toolbar