The Australian Centre for Student Equity and Success acknowledges Indigenous peoples as the Traditional Owners of the lands on which our campuses are situated. With a history spanning 60,000 years as the original educators, Indigenous peoples hold a unique place in Australia. We recognise the importance of their knowledge and culture, and reflect the principles of participation, equity, and cultural respect in our work. We pay our respects to Elders past, present, and future, and consider it an honour to learn from our Indigenous colleagues, partners, and friends.

Artificial intelligence, ethics, equity and higher education: A ‘beginning-of-the-discussion’ paper

Associate Professor Erica Southgate, University of Newcastle, Australia

NCSEHE 2016 Equity Fellow

Artificial intelligence (AI) can be defined as:

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy (OECD, 2019).

AI is all around us, infused in everyday computing applications and in the automation of organisational processes and systems. From search engines and smartphone assistants to systems that evaluate job and loan applications, recommend products online, and apply biometric facial recognition in social media and security settings, AI increasingly and invisibly powers our digital interactions and influences what we can do and know and, some would argue, who we can be.

The purpose of this discussion paper is three-fold:

  1. As AI-powered applications become more ubiquitous, it is incumbent upon educators, administrators and leaders in universities to develop a foundational understanding of what the technology is and how it works, so that we can ask critical questions about its design, implementation and implications for humans in educational systems. We do not want to be working and learning in institutions where decisions involving, and interactions with, AI feel like magic.
  2. Having a foundational understanding should prompt informed dialogue and democratic decision making about the ethical design, implementation and governance of AI in higher education. This includes leveraging existing legal and regulatory mechanisms and developing new robust governance frameworks to ensure fairness, transparency and accountability.
  3. It is important to raise awareness of the unique challenges that AI poses to equity in education and to commonly held views on discrimination.

The intention of the paper is to equip stakeholders with talking points to prompt informed and sustained dialogue on how AI might be used for good, what it is good for, and, importantly, where it should not be used at all. The paper provides an introduction to AI and its subfield, machine learning, and outlines some of their uses in education. It then offers a framework for thinking ethically about AI in education from a human rights perspective, before exploring issues of AI error, bias and discrimination. Resources to assist with the ethical governance of AI are highlighted. The paper concludes with suggestions to ensure that the introduction of automated and intelligent systems in higher education does not come at the expense of equity.

Read the full discussion paper.
