Fair Labour Alliance

Artificial Intelligence and the Ethics of Recruitment

AI is quickly becoming an integral part of the recruitment process. It is used to assess candidates faster and hire the best staff for the job, all at a much lower cost than a traditional recruitment agency. Over half of HR managers expect AI to be a regular part of their job within the next five years.

These AI systems can allow for greater efficiency and a deeper understanding of the hiring process. Data-driven decisions are, at first glance, objective, and should make recruiting faster and more consistent.

But what happens when those AI systems favour one group above another?


Problems With AI In Recruitment

Biases are often programmed into systems unconsciously, or can be learned as systems acquire more data. If the data from which the program learns contains biases, the machine will in turn learn those biases, and they become baked into the system as it refines its decision rules.

In 2014, Amazon was training a machine learning algorithm to review resumes and make decisions about whether to hire applicants for software developer roles. After a year of watching the program, those in charge of developing it realised it had a bias against female applicants.

The program had learned this bias because it had been fed a decade’s worth of job applications, the vast majority of which were from male applicants – an indication of a lack of diversity in the technology industry. It had been programmed to spot patterns, and anything outside that pattern was rejected.

It penalised any resume that mentioned ‘women’, even when the candidate would clearly have been qualified for the role had their CV been reviewed by a human recruiter. Amazon abandoned the project in 2017, but many other businesses are still developing AI for recruitment.
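To see how this can happen, consider the minimal sketch below. It is a hypothetical illustration written in Python with NumPy and scikit-learn, using made-up data rather than anything from Amazon’s system: a model trained only to imitate past, skewed hiring decisions ends up penalising a feature that says nothing about ability.

```python
# A hypothetical illustration with synthetic data, not Amazon's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# A genuinely job-relevant feature: a skill score.
skill = rng.normal(size=n)

# An irrelevant proxy feature: whether the CV contains a gendered term
# such as "women's" (e.g. "women's chess club captain").
proxy = rng.integers(0, 2, size=n)

# Historical hiring decisions were driven by skill, but past recruiters
# were less likely to hire candidates whose CVs carried the proxy term.
past_hired = (skill - 0.8 * proxy + rng.normal(scale=0.5, size=n)) > 0

# A model trained only to reproduce those decisions learns the bias too.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, past_hired)

print("skill coefficient:", round(model.coef_[0][0], 2))  # positive, as expected
print("proxy coefficient:", round(model.coef_[0][1], 2))  # negative: the bias is baked in
```

Nothing in the training step asks the model to discriminate; it simply reproduces the patterns in the historical decisions it was given.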

AI can be applied to both job applications and interviews. There are now AI systems that apply linguistic analysis to a candidate’s writing or speech to gain deeper insight into how that candidate’s mind works, and systems that analyse recorded interviews can even assess a candidate’s tone of voice, emotional state, and body language.

This of course raises many questions about privacy, and how much data a business really needs before deciding to hire someone.

There is some concern about whether facial analysis systems could have a negative impact on diversity or equal opportunity hiring, favouring certain candidates over others for reasons the candidates cannot control.


The History Of Ethics In Recruitment

Psychometric evaluations have been part of the recruitment process for over one hundred years, evaluating people based on intelligence, personality, and general mental wellbeing.

We’ve come a long way in making sure that certain groups are not marginalised in the process of recruitment. It is now illegal to discriminate against protected groups, and this has made the recruitment process better, fairer, and more ethical. But there are still improvements to be made, and some of the progress we have made so far on diversity could be undermined by AI.

Employers cannot ask intrusive questions about an applicant’s personal life where such questions have no bearing on their ability to do the job. However, if not programmed correctly, an AI might identify such irrelevant factors and reject an otherwise suitable candidate.

For example, it is now common for candidates to post huge amounts of personal data online, and it is not unusual for recruiters to search publicly available social media profiles for a better understanding of a candidate. However, a human recruiter will know where to draw the line in this process. An AI might not, and could infer sensitive matters such as political views or sexual orientation from such material. In a liberal country this may not have a great impact, but in a country where personal and political rights are restricted, it could be problematic.

The UK government is attempting to curb these issues before they become a reality. The UK AI Code stipulates that “artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities”.

The key question is whether this code will be sufficient to keep recruitment companies on the right track when AI systems promise unparalleled insight into candidates.


How To Keep Your Recruitment Process Ethical

The important thing to remember when recruiting, whether you’re using an AI system or not, is to consider the objective requirements of the job in question. Identify the skills an applicant would need to do the job, and focus on those during the hiring process.

Check that your process isn’t excluding anyone for any reason other than a lack of skills, qualifications, or experience. Keep detailed notes on why certain candidates make it through your interview process, to ensure that neither you nor your system is discriminating, directly or indirectly, consciously or subconsciously, against anyone.
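One lightweight way to keep such notes, sketched below in Python with entirely hypothetical field names and criteria, is to record every decision against the job-relevant requirements identified at the outset, so that any pass or rejection can later be audited.

```python
# A hypothetical sketch of a structured decision record; field names are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class HiringDecision:
    candidate_ref: str        # anonymised reference, never a name
    stage: str                # e.g. "CV screen", "first interview"
    criteria_scores: dict     # only the job-relevant skills identified up front
    decision: str             # "progress" or "reject"
    rationale: str            # must refer back to the scored criteria
    recorded_on: date = field(default_factory=date.today)

decision_log = [
    HiringDecision(
        candidate_ref="C-0172",
        stage="CV screen",
        criteria_scores={"python": 4, "sql": 3, "written communication": 4},
        decision="progress",
        rationale="Meets or exceeds every screening threshold",
    ),
]
```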

Measure your outcomes: what is your final rate of hires from your initial pool of talent? Is there any way it can be improved? Do you need a more rigid structure, or a more human approach to the interview? New technology can help us be more organised and efficient, but it can’t replace a genuine human connection.
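A simple starting point, sketched below with illustrative numbers, is to compare selection rates across groups of applicants and flag any large gaps for review.

```python
# Illustrative figures only; the 80% threshold is the common "four-fifths"
# rule of thumb for spotting adverse impact, not a legal test.
applicants = {"group_a": 200, "group_b": 150}
hires = {"group_a": 30, "group_b": 12}

rates = {group: hires[group] / applicants[group] for group in applicants}
for group, rate in rates.items():
    print(f"{group}: {rate:.1%} of applicants hired")

highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Review needed: {group} is hired at under 80% of the highest group's rate")
```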

Human oversight can also help keep the whole recruitment process ethical, applying a level of self-awareness that an AI system simply cannot provide.

