Why you shouldn’t be trying to persuade a robot that you should get the job

While much of the world is busy worrying about losing jobs to automation in the future (and this fear is overstated), what has slipped past largely unnoticed for over a decade is that automated systems (to most, Artificial Intelligence, or AI) already play a major role in whether, and how, we get the jobs we still have. A steadily growing number of Indian and multinational companies use these systems.

The current breed of algorithmically driven, automated decision-making systems warrants our attention and alarm.

All but ubiquitous, particularly in mass hiring, these systems are largely opaque as to their decision-making process, potentially making many unethical (or illegal) decisions, whilst dehumanising the hiring process in general.

Usually, the bigger the company, the more the applicants, the wider the automation, and the greater the potential of being “wronged” by a machine. Lured in by sales pitches peppered with promises of higher productivity, lower costs and lower attrition, who wouldn’t want to use them?

There are benefits, but they don’t outweigh the concerns yet. What often goes unseen is how the secrecy around the deployment of these systems (ironically, in part owing to the fear of them being gamed) combined with a lack of both awareness and adequate legal safeguards creates a deadly cocktail of misuse of personal data, automated discrimination, and abject uncertainty for hundreds of millions of job-seekers.

So what does an applicant drowning in rejection emails do? The Guardian, in a piece with the rather clickbait-y title “How to persuade a robot that you should get the job”, (inadvertently) offers up some solutions, seemingly suggesting that there are ways in which you and I can, in fact, fight back against the machine.

After highlighting the plight of the job-seeker in the age of AI, towards the end, where one expects a silver lining, the article turns to people discussing and offering various ways to stress-test and “game” the system.

One ludicrous solution comes up (attributed to “an HR employee for a major technology company”): slipping words such as “Oxford” and “Cambridge” into your CV in invisible text to pass automated screenings.

It may not be The Guardian’s own solution, but it is someone’s, offered to an audience of almost 25 million readers, without a red flag raised about its potential futility.
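For context, the trick presumes that the screening step is little more than a keyword match over the text extracted from a CV. Here is a minimal, purely hypothetical sketch of such a screen (the keyword list, threshold and function names below are made up for illustration, not taken from any real hiring product); because text extraction flattens formatting, words hidden in invisible text would indeed be counted just like visible ones.

```python
# Hypothetical sketch of a naive keyword-based CV screen, the kind of
# system the "invisible text" trick assumes. Keywords, threshold and
# function names are illustrative only.

KEYWORDS = {"oxford", "cambridge", "python", "leadership"}


def keyword_score(cv_text: str) -> int:
    """Count how many screening keywords appear in the extracted CV text.

    Extraction flattens formatting, so a word in white or zero-point
    font counts exactly like a visible one, which is the premise of
    the trick.
    """
    words = {word.strip(".,;:()").lower() for word in cv_text.split()}
    return len(KEYWORDS & words)


def passes_screen(cv_text: str, threshold: int = 2) -> bool:
    """Forward the CV to a human only if enough keywords were found."""
    return keyword_score(cv_text) >= threshold
```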

Complex machine learning systems that parse and analyse things like CVs to assess candidates are not yet the stuff of Elon Musk’s worst nightmare, but they’re not so gullible either. For starters, many of these systems use tonnes of other data points; social media posts, video essays, scroll speed, text entered (and then deleted) and much more are logged and analysed.

The systems learn quickly and constantly from this perennial supply of training data, using incredibly complex neural networks we cannot yet adequately comprehend, to draw correlations they themselves cannot justify. Just how many things can a candidate hope to adapt to, for a system they know nothing about?

Michael Veale, a prominent researcher in automated decision systems at University College London, adds that “While there are a lot of studies lately pointing to how individuals can ‘fool’ machine learning systems, they tend to require people to have access to the systems already.

That’s a cybersecurity problem more than anything.” And even where a candidate thinks he or she may succeed: “It is definitely possible to adapt systems if they are being gamed, but it’s not assured it will happen. More likely than not, attempts at gaming will just make systems useless.”

What would it take for a system like this to game the gamer? Not much at all. Would an application that would otherwise have been selected by such a system be binned because it attempted to use invisible ink? A raised eyebrow? A certain number of words per minute? We just don’t know. Systems also change from company to company and vary by narrow contexts; there are no catch-all solutions.

What we do know is that someone hoping to achieve a different (positive) outcome thanks to slapdash measures is likely to come out feeling far more dejected when rejected: for if even gaming didn’t work, they must really not be good enough. But that is not it; the problem (most often) is not with the applicant. First and foremost, this is the message that needs to go out.

What should you really be doing? There is no quick-fix solution, and certainly nothing as simple as invisible ink. The silver lining, however, is that the movement towards greater fairness, accountability and transparency is already well under way.

Long overdue updates to data protection regulations are also on the anvil, including in India. These will directly challenge many of the automated hiring solutions companies currently offer, severely restricting complete automation as well as what data they can gather, how they gather it and what they can do with it: a big departure from the current free-for-all of data (mis)use.

As interim measures, candidates should first work towards making themselves more aware of the level and scale of automation, and ask the right (and difficult) questions of their potential employers.

While polite emails may not get clear answers, in more severe instances where you feel you may have been unfairly discriminated against by an automated system, consider taking a slightly more formal, legal approach.

When it comes to hiring, there are already laws that require employers to account for equality, anti-discrimination and disparate impact — and these apply to an algorithm as much as they do to hiring manager John Doe.

While systems can (in some cases) be designed to ignore certain attributes in an attempt at fairness and equality, this is not a comprehensive solution (for technical reasons to do with machine learning that I won’t get into here).
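To give just a flavour of one of those reasons, here is a simplified, synthetic sketch (the features, numbers and names below are made up for illustration, not drawn from any real system): even when a protected attribute is never shown to a model, a correlated “proxy” feature can let it reconstruct, and perpetuate, the same historical bias.

```python
# Simplified, synthetic illustration (not any real vendor's system) of why
# dropping a protected attribute is not enough: a correlated proxy feature
# lets a model reconstruct the same disparity anyway.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

protected = rng.integers(0, 2, n)          # e.g. gender; never shown to the model
proxy = protected ^ (rng.random(n) < 0.1)  # postcode-like feature, ~90% aligned with it
skill = rng.normal(size=n)                 # genuinely job-relevant signal

# Historical hiring decisions were biased against the protected group.
hired = (skill + 1.5 * (1 - protected) + rng.normal(size=n)) > 1.0

# Train only on the "neutral" features: skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# Compare predicted selection rates across the protected groups.
preds = model.predict(X)
for group in (0, 1):
    rate = preds[protected == group].mean()
    print(f"group {group}: selection rate {rate:.2f}")
```

The model never sees the protected attribute, yet it can lean on the proxy to reproduce the bias baked into the historical decisions it was trained on, which is why simply deleting a column is no guarantee of fairness.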

When companies know they are unlikely to be able to prove that their automated hiring process was (legally) fair, a letter from your lawyer or professional association will have them breaking into a sweat.

There is also fantastic academic and data-activism research taking place in many parts of the world, applying various methodologies, old and new, to reveal the many issues with widely used automation systems and to hold their makers and users to account.

What researchers often lack is data; as outside actors, they are locked out of the companies they would like to audit. Use your data for good: contribute it meaningfully to sound, ethical research that needs it to effect change.

For the companies themselves, should growing awareness and potential legal challenges prompt a rethink, Veale suggests that rather than over-relying on automation and trying to remove the human element altogether, “They should be investing in ways to better help their employees sift through applications in computational ways.

Data visualisation tools and user interfaces also enable people to deal with great numbers of applications, and they’re often lacking and lagging behind right now. Use computers to help humans make decisions, not to help humans offload decision-making responsibilities.”

IANS