The Evolving Landscape of Assessment in the Age of Generative AI: An Australian Perspective

Lauren Sayer, Director of Curriculum at Victorian Curriculum and Assessment Authority

The rapid advancement of generative AI has sparked widespread concern in the educational community, particularly regarding its impact on academic integrity. However, it's important to recognise that AI technologies, when used responsibly, can not only maintain but also enhance the learning experience. At the same time, as these technologies become more sophisticated, the potential for misuse in educational settings grows, raising critical questions about how we can maintain the fairness and rigour of student assessments.

The Australian Government's Australian Framework for Generative AI in Schools provides a robust foundation for guiding AI's responsible and ethical use in education. This framework ensures that our students benefit from these tools while maintaining academic integrity.

  The Role of Quality Assessment Practices

To navigate the complexities introduced by generative AI, educators must take the lead in drawing on a range of quality assessment practices. These practices help detect misconduct and foster a deeper, more authentic learning experience for students. The Australian Framework for Generative AI in Schools emphasises the importance of aligning assessment practices with the principles of fairness, transparency, and accountability, giving educators confidence in the integrity of the assessment process. Below are several evidence-based assessment strategies that can support this goal within the Australian educational landscape.

  Drafting Process and Continuous Feedback

Research underscores the importance of iterative learning processes in fostering student understanding and reducing opportunities for academic dishonesty. By implementing a structured drafting process, in which students submit multiple drafts and receive continuous feedback, educators can monitor the evolution of student work. This method allows teachers to identify inconsistencies or sudden leaps in quality that might suggest the use of AI tools, thereby maintaining academic integrity.

“Maintaining academic integrity requires diverse and thoughtful assessment practices, and while AI impacts education, it doesn’t change the core issue of misconduct”

According to Sommers (1980), drafting and revising help students develop the critical thinking and reflective skills essential for deep learning. Note the year of that reference: 1980. Technologies come and go, but good assessment practices stay true. The Australian Framework for Generative AI in Schools supports this approach by advocating for learning design that clearly outlines how generative AI tools should or should not be used, ensuring a clear and unbiased evaluation of student ability (Principle 1.5).

  Oral Assessments and Discussions

Oral assessments, or viva voces, have been a cornerstone of assessment in many educational systems for centuries. Viva voce (derived from Medieval Latin) is defined as “an examination conducted by speech or assessment in which a student’s response to the assessment task is verbal, in the sense of being expressed or conveyed by speech instead of writing” (Pearce & Lee, 2009). Oral assessments require students to articulate their knowledge verbally, defend their ideas, and engage in spontaneous discussion, all of which are difficult to fake using AI. This method aligns with the Framework’s principle of human responsibility, ensuring that teachers and school leaders retain control of decision-making and remain accountable for the outcomes (Principle 5.1).

A study by Dann (2002) highlights that oral assessments can effectively gauge students' understanding, as they must explain concepts in their own words and respond to follow-up questions. This method also allows educators to assess communication skills and ensure that students have internalised the material rather than merely regurgitated AI-generated content.

  Portfolio-Based Assessments

Portfolios offer a longitudinal view of a student's progress and accomplishments, making them a powerful tool against academic misconduct. By compiling work over a semester or academic year, students demonstrate their achievements and learning journey. The Australian Framework for Generative AI in Schools emphasises the importance of designing assessments that allow for a clear and unbiased evaluation of student ability, which portfolios naturally support (Principle 1.5).

Barrett's (2007) research indicates that portfolios support authentic assessment by providing a comprehensive picture of a student’s capabilities. They also encourage students to take ownership of their learning, as they must reflect on their work and justify their choices. This reflection process is inherently individualised, making it difficult for students to substitute AI-generated work without detection.

  Peer Assessment and Collaborative Learning

Peer assessment and collaborative learning environments can significantly reduce the likelihood of academic misconduct while promoting deeper engagement with the material. When students assess each other's work, they develop critical evaluation skills and a deeper understanding of the subject matter. Moreover, the collaborative nature of such tasks fosters accountability and a sense of community, both of which are crucial for maintaining the integrity of assessments.

Topping (1998) found that peer assessment enhances learning outcomes and promotes fairness and transparency in the evaluation process. The Australian Framework for Generative AI in Schools supports these principles by emphasising the importance of equity and inclusivity in using generative AI tools, ensuring that all students have access to fair and respectful assessment practices (Principle 4.1).

  Problem-Based Learning (PBL)

Problem-based learning (PBL) is an instructional method that challenges students to "learn by doing." PBL tasks are often open-ended and complex, requiring students to apply their knowledge to real-world scenarios. This approach inherently discourages academic misconduct, because generative AI cannot easily produce credible solutions to open-ended, context-specific problems.

Barrows and Tamblyn (1980) emphasise that PBL fosters critical thinking, creativity, and problem-solving skills. The Australian Framework for Generative AI in Schools supports using generative AI tools to enhance critical thinking and creativity rather than restrict human thought and experience (Principle 1.4). Because PBL tasks are typically unique to the learning context, it is difficult for students to pass off AI-generated content without detection by educators.

  Authentic Assessments

Authentic assessments involve tasks that mirror real-world challenges and require students to apply their knowledge and skills in practical, meaningful contexts. These assessments are more difficult for AI to replicate because they require a deep understanding of the subject matter and the ability to apply knowledge in novel situations.

Wiggins (1998) argues that authentic assessments provide a more accurate measure of student learning by focusing on the application of knowledge rather than rote memorisation. The Australian Framework for Generative AI in Schools encourages the use of generative AI tools to support and enhance teaching and learning outcomes, ensuring that assessments are relevant and aligned with real-world applications (Principle 1.1).

  Integrating AI Literacy in the Curriculum

One proactive approach to maintaining academic integrity in the face of generative AI is integrating AI literacy into the curriculum. Educating students about the ethical use of AI tools and the potential consequences of misuse can foster a culture of integrity. By understanding how AI works and the implications of its misuse, students are more likely to engage in honest academic practices.

The Australian Framework for Generative AI in Schools advocates for educating students about the potential limitations and biases of generative AI tools, and for deepening this learning as student usage increases (Principle 1.2). By embedding AI literacy into the curriculum, educators can ensure that students are prepared to navigate the ethical challenges of using AI in their academic work.

  Addressing Academic Misconduct with AI

Generative AI introduces new methods of cheating, but it also offers tools to combat academic misconduct, such as AI-powered detection of both traditional plagiarism and AI-generated content. However, technology alone isn't enough. Maintaining academic integrity requires diverse and thoughtful assessment practices. While AI impacts education, it doesn't change the core issue of misconduct. Educators should adopt varied methods, such as drafting processes, oral discussions, and authentic assessments, to foster integrity and provide meaningful learning. The Australian Framework for Generative AI in Schools supports ethical AI use, ensuring equity and accountability in education.
