...when technology is inseparable from the human endeavour. This is the conversation that universities need to be having. The issue is not about prevention or detection. It is about prevention and adaptation, an argument made much more coherently than I can here. Where there is knowledge that we believe students must be able to demonstrate without the use of AI (such as threshold concepts: foundational knowledge that, once understood, changes the way students think about a topic), we need mechanisms that reliably prevent the use of AI. Invigilation is one obvious approach to securing assessment, but there are others, such as in-person activities, orals, practicals and placements. For everything else, we need to accept that all aspects of students' engagement with the topic will, to a greater or lesser extent, be mediated by technology and AI, and thus focus on helping them develop the ethical, critical and information technology skills to be expert and effective users of AI. If we do not, the worst case described in this paper from MIT (in short, over-reliance on AI will make you dumber) seems inevitable.

Some universities are further ahead with this. The two-lane approach from the University of Sydney is widely regarded as the most pragmatic solution for the current situation, and many universities around the world are moving towards similar approaches. Our own approach, the AI Use Framework, provides a first step towards that model as it gives us a common language to use with students. But, as one of the authors of the AIAS model on which it is based points out, it is not an assessment security instrument, and we are at risk of treating it as such. Trying to decide whether a student's use of AI exceeds the parameters of a particular category simply pushes us back into an even more complex process of detection, and it misses the point about the importance of working with students to develop their AI knowledge and skills.

I believe there are four critical steps for universities to take:

1. Build scaffolded AI knowledge and skills into all our qualifications.

2. Map where our assessments are secured across a qualification. There should be no programme where a student can graduate without having completed secured assessments at critical points that allow us to authenticate their knowledge.

3. Mark unsecured assessments on the assumption that AI has always been used, and focus on the quality of the assessment 'product' with that in mind. (Is the argument coherent, accurate, well structured, etc.?) Instead of looking for so-called AI indicators such as buzzwords, emotionless writing styles and superficial arguments as a detection mechanism, simply treat such issues as poor academic writing and grade the assessment accordingly. Despite being unsecured, these assessments will still have value, as they are the opportunities for students to develop the core knowledge they need to succeed in the secured assessments. Students who choose to freewheel through them using AI will do so at their own risk.

4. Treat un-paraphrased AI outputs, hallucinations and fabricated references as you would any other matter that falls along the poor-academic-practice-to-academic-misconduct continuum. The issue should not be the AI generation itself; rather, it is that students are submitting unchecked, inaccurate and falsified information. That is the conversation we need to be having with our students.

The pace at which these changes are occurring is challenging.
A comment was made at a conference I recently attended that 'the pace of change has never been faster and will never be this slow again'. Generative AI has been a feature of our daily interactions since 2023, but many universities are yet to address the structural adaptations of our course content and assessment that we spoke about in those early days. If we wish for university qualifications to retain their meaning and value, it is imperative that we do so.