AI Bias in Australia: An Urgent Call for Regulatory Intervention
Introduction
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment (OECD.AI, 2024).
As AI is adopted across all domains of society, systems that make life-or-death decisions have become an urgent moral concern. In conflicts, the growing use of autonomous weapons poses real-time humanitarian, legal and ethical risks. Meanwhile, AI systems are making decisions that affect Australian lives right here at home. We are witnessing digital dehumanisation and the undermining of human rights when automated decision-making produces outcomes that negatively impact people and cause harm.
AI systems are quietly determining who gets jobs, housing, healthcare, and government services, while perpetuating systemic bias against already marginalised people. The statistics reveal the scale of AI use: 62 per cent of Australian organisations use AI extensively in hiring, yet only 41 per cent monitor for bias, according to the Responsible AI Index 2024. The Department of Industry, Science and Resources reports that 40 per cent of Australian small and medium businesses deploy AI systems without informing customers or employees. This means potentially discriminatory algorithms operate in commercial settings with little scrutiny or transparency.
Australia faces a critical choice: lead global standards for ethical AI governance or risk becoming a testing ground for discriminatory systems designed elsewhere. The window for preventive regulation is closing rapidly, but decisive action could position Australia as a world leader in the responsible use of AI.
The Current Landscape: Regulatory Backpedalling
Australia is backing away from tighter AI regulation precisely when intervention is most needed. Despite the country signing the Paris AI Action Summit Agreement and establishing regulatory “guardrails”, the Productivity Commission’s recent interim report recommends treating AI-specific regulation as a “last resort”, firmly prioritising economic growth over protection from discrimination (p. 1). Meanwhile, pressure mounts from bodies such as the Tech Council of Australia to accelerate AI adoption across government departments and recognise AI as “an economic priority”.
Yet Australian families are experiencing algorithmic discrimination in their daily lives. AI systems don’t discriminate by accident; they are designed in ways that benefit some people over others based on who they are, not what they can do. Research shows that bias is embedded at the design stage through choices about models, success metrics, and fairness programming (Soleimani et al., 2025). Every design decision becomes an opportunity to embed discrimination, reducing complex lives to risk scores and data points.
Everyday Dehumanisation and Discrimination
Employment: When Machines Judge Merit
The employment sector provides clear evidence of how AI bias translates into economic harm. In 2024, nine graduates from Sisterworks, a Melbourne social enterprise supporting migrant and refugee women, failed AI recruitment interviews because the systems couldn’t accommodate different English literacy levels and non-native speech patterns. In another study, women were significantly less likely to be shortlisted by AI hiring systems than male applicants with identical resumes.
Dr Marc Cheong and his team from the Centre for AI and Digital Ethics also found that subconscious gender bias could be replicated by simple algorithms “without taking into account the merits of a candidate”. Critically, AI systems don’t just replicate bias; they exaggerate it. Research also shows that recruitment algorithms exhibit gender discrimination, racism and ageism. Beyond race, gender and age, bias against subtler characteristics such as personality types extends discrimination past traditional categories (Soleimani et al., 2025). Often, the very people who are supposed to benefit from an AI system are excluded from its development process.
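To illustrate the mechanism described above, the following minimal sketch (written in Python, using entirely synthetic, hypothetical data) shows how a naive screening rule that simply learns from biased historical hiring decisions reproduces that bias, even though applicant skill is identical on average across groups.

# Minimal, illustrative sketch: a screening rule trained on biased historical
# hiring decisions reproduces the bias without ever assessing merit.
# All data and rates below are synthetic assumptions, not real figures.
import random

random.seed(0)

def make_history(n=1000):
    """Synthetic past hiring records in which equally skilled women
    were hired less often than men (the embedded historical bias)."""
    records = []
    for _ in range(n):
        gender = random.choice(["woman", "man"])
        skill = random.random()                       # 'merit', same distribution for all
        hire_rate = 0.6 if gender == "man" else 0.3   # biased past decisions
        records.append({"gender": gender, "skill": skill,
                        "hired": random.random() < hire_rate})
    return records

def learned_shortlist_rate(records, gender):
    """A naive 'algorithm' that learns each group's historical hire rate
    and shortlists future applicants at that same rate."""
    group = [r for r in records if r["gender"] == gender]
    return sum(r["hired"] for r in group) / len(group)

history = make_history()
for g in ["man", "woman"]:
    print(g, round(learned_shortlist_rate(history, g), 2))
# Prints roughly 0.6 for men and 0.3 for women: the model has learned the
# past bias and will keep reproducing it, despite identical average skill.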
Healthcare
Australian healthcare systems are failing those who need them most, and the use of AI is accelerating existing inequalities. For instance, digital triage systems trained predominantly on middle-class white populations struggle to recognise health indicators expressed by different cultural groups (Cross et al., 2024; Walter & Kukutai, 2019). Medical devices may also not function as effectively for certain populations: the pulse oximeter, which measures the level of oxygen in the blood, was found to be less effective at detecting oxygenation levels in people with darker skin. When medical decision-making is outsourced to biased algorithms, existing health inequities intensify, with potentially life-threatening consequences.
For Indigenous Australians, this creates particular vulnerabilities. Research shows that colonisation, racism and cultural barriers already lead to inequity in healthcare access for Aboriginal and Torres Strait Islander peoples (D’Costa et al., 2025; Li, 2017). The Australian Institute of Health and Welfare has documented healthcare providers’ implicit bias, which creates barriers to accessible healthcare. When AI healthcare systems trained on Western biomedical frameworks are applied to Indigenous populations, they fail to recognise culturally specific expressions of illness or accommodate traditional approaches to wellbeing. This could exacerbate existing health disparities.
Migration
AI systems increasingly control migration pathways without adequate consideration of the human cost of errors. One example of the potential for harm is chatbots that provide incorrect information to migrants during the visa application process. These error-prone systems give misleading advice about visa eligibility, telling some migrants they can’t apply and encouraging others to pay for visas with no chance of success. When people make life-changing decisions based on flawed algorithmic advice, they might sell their home, quit their job, or give up on their dream of migrating altogether. For families already dealing with language barriers and complex legal processes, these errors represent a form of digital discrimination that can impact financial security and family futures.
Once again, international examples reveal the discriminatory potential of these systems. In 2020, the UK discontinued a discriminatory visa-processing algorithm that categorised applicants by nationality and reinforced biases through feedback loops. At Australia’s borders, ‘smart’ airport gates read and evaluate the facial images and biometric data of people wanting to enter the country, determining who can enter and flagging those deemed security threats. Without careful regulation, these potentially life-changing decisions can be based on biased data that reflects historical prejudices rather than genuine security concerns. When such algorithmic decision-making operates without transparency and humans are removed from the process, people’s futures are reduced to automated outputs that appear objective but in fact perpetuate discrimination.
Migrants in detention face additional algorithmic assessments through tools such as Australia’s Security Risk Assessment Tool (SRAT) and the US’s COMPAS system. These create harmful feedback loops: biased assessments lead to punitive treatment, which worsens detainee behaviour through detention stress, which in turn entrenches further punitive measures (Kinchin, 2024). COMPAS, for example, now recommends detention more frequently because officers tend to override its recommendations and those overrides feed back into the system.
The problem extends beyond border security. Migration law is complicated and constantly changing, yet AI systems throughout the immigration process rely on outdated legal frameworks. With visa chatbots at the start of the process and automated systems based on obsolete regulations deciding outcomes at the border, migrant families bear the cost of compounding algorithmic errors, while the systems themselves are never held accountable for the impact they have on people’s lives.
Social Services
The same AI systems that discriminate in employment, healthcare and migration now control access to Australia’s social safety net, with concerning implications for equity. Centrelink and other government agencies are increasingly turning to AI to detect “fraud” and assess eligibility. Natural language processing is used to analyse communication patterns, and predictive modelling to identify families deemed “at risk”, yet these tools are trained on the same biased historical data that affects other sectors (Conrad, 2025). As a result, some recipients may have income support terminated unnecessarily due to algorithmic error. The Government’s Robodebt scheme exemplifies how systems intended to improve efficiency can have dire consequences for people already facing financial hardship (AIAsiapacific.org, 2020; University of Sydney Law School, 2023).
International evidence also demonstrates a concerning pattern. In the US, for instance, AI child welfare systems flag Black children for investigation 20 per cent more often than white children, while language analysis tools misinterpret African American English as “aggressive” up to 62 per cent more frequently than standard English (Conrad, 2025). The result is systemic over-surveillance of families of colour, with Black and Indigenous families reported, investigated and separated at disproportionately higher rates.
In Australia, this creates comprehensive digital exclusion. The same algorithmic discrimination affects employment, housing, welfare access, and child protection decisions. When these systems operate across all sectors simultaneously, they amount to systematic exclusion from Australian society, reducing families to risk scores while appearing objective and efficient.
Housing
Housing represents another sector where algorithmic bias devastates real lives. AI systems can create subtle but significant discriminatory effects by amplifying existing systemic inequalities, which disproportionately impact marginalised communities through factors like credit scores, employment history, and eviction records. Francesca Dias from Sydney couldn’t activate an Airbnb account because facial recognition software couldn’t match her photographs, forcing her to rely on her white male partner to make bookings instead (ABC, 2023).
The Massachusetts class action against the SafeRent tenant-screening algorithm demonstrates how such systems disproportionately impact Black, Hispanic, and low-income housing voucher holders, discriminatory patterns that are almost certainly replicated in Australian systems. These aren’t isolated glitches but systematic digital redlining that excludes entire communities from housing opportunities based on algorithmic prejudice rather than genuine tenancy qualifications.
Education
Australia severely lags behind in AI regulation for education. While a few policy frameworks acknowledge potential bias and discrimination in AI systems, enforcement tools remain absent. The Australian Framework for Generative Artificial Intelligence in Schools, released in 2023, merely “provides guidance” without accountability mechanisms or penalties for non-compliance. The framework covers only school-based settings and generative AI, excluding predictive AI applications entirely.
Though the Framework acknowledges risks, including “potential for errors and algorithmic bias in content”, and emphasises inclusion principles, schools and education-tech firms face no consequences for violations. For instance, if institutions deploy AI systems that falsely accuse non-native English speakers of cheating (a documented bias pattern), affected students have no recourse, and institutions face no requirement to change their practices (Myers, 2023). This voluntary approach allows discriminatory AI to become entrenched in educational settings while appearing to address the problem.
Digital Colonisation Concerns

AI systems affecting Indigenous Australians exemplify digital colonisation in real time. Government algorithms use Indigenous data to make decisions about families and communities without transparency or mechanisms for Indigenous input. This violates data sovereignty by processing cultural knowledge through Western frameworks that fundamentally misunderstand Indigenous concepts of family, community, and wellbeing.
Dr Terri Janke warns that excluding Indigenous peoples from AI development affecting them “deepens colonial wounds”. Tech companies compound this harm by misappropriating Indigenous cultural heritage through generative AI systems that scrape sacred imagery for commercial profit without consent; Adobe iStock’s mining of Indigenous languages for commercial applications is one example.
The physical world already shows this pattern. The 2022 Productivity Commission report found that 75 per cent of products featuring Indigenous-style elements were created by non-Indigenous people. In the digital realm, facial recognition technology has been deployed in Indigenous communities without consent. This represents surveillance and extraction that strips Indigenous peoples of agency over their own data, culture, and futures.
Regulatory Gaps and Path Forward
Australia’s current approach to AI governance reveals significant gaps. The National Framework for the Assurance of Artificial Intelligence in Government mentions the need for vigilance against algorithmic bias but lacks concrete bias prevention measures. Similarly, the Australian Government Department of Industry, Science and Resources’ Voluntary AI Safety Standard 2024, developed in partnership with the National Artificial Intelligence Centre and CSIRO, provides industry guidance, but, as the title suggests, compliance remains voluntary and there are no penalties for non-compliance.
This voluntary approach is not adequately protecting Australians from algorithmic harm.
The evidence is clear that AI bias is systematically affecting Australian lives across employment, healthcare, housing, education and social services. Indigenous Australians face particular risks through data sovereignty violations and cultural erasure. Current voluntary frameworks are failing to address these challenges.
Australia needs comprehensive regulatory intervention before discriminatory AI systems become irreversibly embedded in our institutions.
Mandatory Bias Auditing
All public sector AI systems should undergo regular bias testing, and the results should be published. High-bias systems should not be deployed in employment, health, housing, welfare or education until bias is eliminated, so that people are not dehumanised or treated unfairly based on algorithmic prejudice.
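As an illustration of what routine bias testing could involve, the sketch below (Python, with hypothetical group labels and counts) applies one widely used check: comparing selection rates across groups and flagging a disparate-impact ratio below the conventional four-fifths threshold. It is a simplified example under stated assumptions, not a prescribed audit methodology.

# Minimal sketch of a selection-rate bias audit using the 'four-fifths' rule.
# Group labels and counts are hypothetical, for illustration only.

def selection_rate(selected, total):
    """Share of applicants from a group who were selected by the system."""
    return selected / total

def disparate_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.
    A value below 0.8 is a conventional red flag for adverse impact."""
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: outcomes of an AI screening tool by group.
outcomes = {
    "group_a": {"selected": 120, "total": 200},   # 60% selected
    "group_b": {"selected": 45, "total": 150},    # 30% selected
}

rates = {g: selection_rate(v["selected"], v["total"]) for g, v in outcomes.items()}
ratio = disparate_impact_ratio(rates)

print("Selection rates:", {g: round(r, 2) for g, r in rates.items()})
print("Disparate impact ratio:", round(ratio, 2))   # 0.5 in this example
print("Flag for review:", ratio < 0.8)              # True: the system fails the check

A published audit would report these rates and ratios for every protected attribute the system touches, alongside the remediation taken before redeployment.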
Transparency
Full disclosure should be required wherever AI affects people’s lives, including training data sources, algorithm logic and decision-making criteria.
Indigenous Data Sovereignty
Indigenous Australians must have agency in determining how and where their data is used. They must be genuinely consulted and have the power of veto over AI development that affects their communities.
Independent Oversight
Australia needs an independent AI Safety Authority, rather than an advisory group, to keep organisations accountable. Bias testing needs to be mandatory, and real penalties must be put in place to keep discriminatory AI practices in check.
Australia’s Choice
Australia can lead the world in demonstrating that advances in technology and human dignity are not competing values. We can attract global investment and partnerships by proving that advanced AI and fairness are compatible. But we must act decisively and urgently.
If we allow discriminatory systems to become the norm while clinging to the hope that voluntary measures will be enough, we will be left to deal with the consequences of algorithmic discrimination designed by others.
The window for preventive action is closing. The time for voluntary approaches has passed. Real penalties are needed to mitigate the negative impacts felt by everyday Australians, including those already disadvantaged and marginalised groups. Decisive leadership can still ensure AI serves all Australians fairly, but only if we act now.



