This resource was developed by Jason Lodge as part of the ACSES Visiting Scholarship Program.
Authors: Jason M. Lodge (The University of Queensland), Matt Bower (Macquarie University), Kalervo Gulson (The University of Sydney), Michael Henderson (Monash University), Christine Slade (The University of Queensland), and Erica Southgate (University of Newcastle)
Statement of purpose
Since 2022, artificial intelligence (AI) has challenged academic integrity, assessment, and pedagogy in higher education (Jin et al., 2025), reflecting a fundamental shift in human-machine interaction.
The Australian Framework for Artificial Intelligence in Higher Education (henceforth, the Framework) provides guidance for the implementation of AI in the Australian higher education sector. AI technologies, including predictive algorithms and large language models (LLMs), can create new, synthetic content (generative AI) and can execute tasks autonomously (automated decision-making systems—sometimes referred to as “agentic AI” or “AI agents”).
The implementation of these technologies requires principles and practices that support human flourishing, honour diverse knowledge systems, promote equity, and support ethical innovation.
Any use of AI requires ongoing, critical discussion within institutions and across the sector regarding the environmental, moral, ethical, and intellectual property implications of these technologies. Despite the apparent opportunities that AI may bring to higher education, concerns persist about many aspects of its development, governance, and use. A case can be, and has been, made that AI (particularly generative AI) has no place in education (for example, see Bender, 2025), and there may be some validity to this argument. These technologies were not developed for educational purposes and, in many ways, conflict with the values and purpose of higher education. The appropriateness of these technologies for learning, teaching, research, and administration must therefore remain a primary and ongoing concern.
The Framework was developed to address the transformative potential and challenges of AI, aligning with the values and standards of Australian higher education.
The Framework has a central focus on equity to align with the Australian Universities Accord Final Report recommendations (Department of Education, 2024) and to avoid amplifying existing digital divides and social inequities (Birhane et al., 2022). It specifically supports students from equity-bearing groups as initially identified in the report, A Fair Chance for All (National Board of Employment, Education and Training, 1990). These groups are currently recognised as:
- people from socio-economically disadvantaged backgrounds
- Aboriginal and Torres Strait Islander people
- women in non-traditional areas of study
- people from non-English speaking backgrounds
- people with disabilities
- people from rural and isolated areas.
This understanding of equity is grounded in Fraser’s (2009) concept of social justice, which includes:
- Redistribution: The fair sharing of resources and opportunities.
- Recognition: Respecting non-dominant cultural ways of knowing, doing, and being.
- Representation: The right for diverse perspectives to participate in decision-making (see also Southgate, 2020).
The Framework also affirms Indigenous peoples’ right to maintain, control, protect, and develop their cultural heritage and knowledge, including its representation in AI systems and the associated practices of data sovereignty.
This Framework builds on the Australian Framework for Generative AI in Schools but recognises the unique context of higher education. It also recognises that the development and use of AI extends, and will continue to extend, beyond generative AI and automated decision-making systems, which is why we have opted for the generic term “artificial intelligence”.
The Framework aligns directly with key policy documents:
- The Australian Universities Accord Final Report (Department of Education, 2024).
- The Study Buddy or Influencer report (Parliament of Australia, 2024), which highlighted the need for frameworks addressing academic and research integrity, equitable access, staff training, data privacy, and consistent standards.
Notably, the Framework does not substantially address academic integrity and the need for assessment reform, instead directing readers to Assessment Reform for the Age of Artificial Intelligence (Lodge et al., 2023a) and Enacting Assessment Reform in a Time of Artificial Intelligence (Lodge et al., 2025), published by the Tertiary Education Quality and Standards Agency (TEQSA).