As Artificial Intelligence (AI) continues to shape the modern workplace, it is paramount that employers and policymakers address AI issues proactively as they arise. Balancing technological advancement with the protection of employee rights is essential to ensure a fair and equitable work environment in the age of AI.

Ensuring that AI systems are designed and implemented to uphold principles of fairness, transparency, and accountability is essential to protect employers from potential claims by employees or prospective employees under the Fair Work Act 2009 (Cth).

Equal Treatment and AI’s Role

Employees are legally entitled to equal treatment, free from discrimination based on protected characteristics such as age, race and sex. These fundamental rights remain irrespective of whether decisions are made by human managers or AI systems. The use of AI in the workplace may complicate efforts to prove discrimination, potentially making it harder for employees to challenge biased outcomes where there is no human-readable explanation for the AI outcome or decision.

AI systems, which are trained on vast datasets, may preserve human biases. If these systems are trained on data reflecting prejudices, they may replicate or even exacerbate those prejudices. For instance, if AI tools are not carefully designed and monitored, they might disproportionately affect certain groups, raising significant concerns about whether an employee or prospective employee is being discriminated against for a prohibited reason.

Current Uses of AI in Employment

Employers are increasingly turning to AI for a range of tasks that have a direct impact on employees.[1] We note the following examples:

Electronic Tracking: Employers are utilising electronic tracking to monitor employees’ movements, system usage and productivity. While this can help in managing workflows and improving efficiency, it may also raise concerns about privacy and the potential for misuse of tracking information.

Camera Monitoring: Surveillance cameras are increasingly being used to monitor employee movement and behaviour and to ensure employees are complying with workplace policies and expectations. While surveillance can serve as a protection and safety measure, constant monitoring may also have a negative impact on employee morale.

Automated Hiring and Promotion: AI algorithms are now being used to screen resumes and make decisions about hiring and promotion. Although AI systems can process applications more quickly than a manager or recruiter, they may perpetuate biases embedded in their training data, which may in turn lead to unfair treatment.[2]

Keystroke Monitoring: Some employers use keystroke tracking to gauge employee performance by examining how often and how quickly an employee types. This practice may be considered intrusive and may not accurately reflect an employee’s productivity or effectiveness during work hours.[3]

While the above AI technologies can enhance efficiency, they also pose potential risks to employers if the rights of employees and prospective employees are not taken into account.

Case Study: A Company AI Recruitment Algorithm

An example of AI-related discrimination occurred in 2014 when a company developed a proprietary recruitment algorithm designed to streamline its hiring process. Using a decade’s worth of internal recruitment data, the AI system was intended to identify traits and qualifications valued by the company so that it could apply the criteria to potential candidates.  The algorithm, however, inadvertently developed a bias against female candidates.

The system was found to favour male candidates over female candidates, as it had learned to associate certain male-associated traits with higher job suitability. This bias was exacerbated by the AI system penalising resumes that included terms related to women’s achievements, such as “women’s chess club captain.”

This case highlights the real risk that AI systems may unintentionally discriminate on the basis of a prospective employee’s gender.[4]

The Law Council of Australia’s Stance on AI

In response to these challenges, the Law Council of Australia (LCA) has recently advocated for increased transparency and accountability in the use of AI. The LCA has conveyed the following:[5]

Disclosure: The LCA is calling for a requirement that individuals be informed when AI is being used in decisions affecting them. Transparency is crucial for ensuring that employees are aware of how AI might impact their work and rights.

Human-Readable Explanations: The LCA has also sought a mandate that automated decisions be accompanied by a clear, human-readable explanation, enabling employees to understand and, if necessary, contest those decisions, and ensuring that automated processes can be reviewed by commissions, tribunals or courts.

Moving Forward

As AI continues to evolve and become more integrated into workplace practices, it is crucial for both employers and policymakers to address these challenges proactively. Balancing the benefits of AI with the need to protect employee rights requires careful consideration.

So, what are the takeaways?

  • Consider your methods of recruitment and their accessibility for potential candidates, and review these methods regularly;
  • Specifically consider any matter which may be seen as discriminatory or biased; and
  • Review your contracts of employment to ensure they cover the organisation in relation to privacy and surveillance.

A sensible, measured approach to implementing AI in the workplace is best. Efficiencies can be achieved with smart use of AI while still ensuring your operations do not fall foul of the law.

Should you have any queries about AI in the workplace, please feel free to contact us.

By Henry Coventry & Nicole Dunn

Disclaimer: This publication has been provided for general guidance only and does not constitute professional legal advice. You should obtain professional legal advice before acting on information contained in this article.

____________________________

[1] Australian Government Department of Industry, Science and Resources, Safe and responsible AI in Australia, June 2023.

[2] M. Hoffman, L. B. Kahn, D. Li, Discretion in Hiring, The Quarterly Journal of Economics 133, 765-800 (2018).

[3] The Future of Work Report, Herbert Smith Freehills, 22 September 2021.

[4] Dastin, J., Insight – Amazon scraps secret AI recruiting tool that showed bias against women, Reuters, 11 October 2018.

[5] Law Council of Australia, Submission into Safe and responsible AI in Australia, 17 August 2023.