Deus ex machina

Since Plato, philosophers have invested countless hours and words in the investigation of ethics. What makes something right or wrong? What do we mean by acting morally or immorally – or indeed amorally? Are good and bad fixed and objective facts, or just opinions relative to your culture, your religion, your circumstances, your place in time and space? Can the ends justify the means – in fact, must the means always be justified by the ends?

Until now, excepting perhaps the profoundly religious, we’ve always had to contend with the problem of evidence. Even if you believe that moral values are fixed – inherent in the nature of things – it’s not been possible to prove it. For other supposed facts we can usually provide some level of objective[1] evidence. If I say that an object is red, I can measure its reflectance spectrum to show that it reflects longer-wavelength light. If I say it is hard, I can prove it by reference to the Mohs scale using universally accepted tools.

It’s a bit harder to demonstrate rightness. Very broadly, apart from reliance on your deity of choice, there have been three philosophical solutions to this problem over time: hidden objectivity (Plato & Kant), which proposes that moral values are objective but that we can’t perceive them directly unless we somehow perfect ourselves; teleology (Mill, Bentham), which suggests that moral choices are justified by their results, and then has some trouble measuring those results; and moral relativism (Rousseau, Locke, Mackie), which concludes in one form or another that morality is a social construct on which we agree – in some fashion – and which is therefore mutable.

So far, so moral philosophy 101. “What’s this doing in a privacy and infosec blog?” I hear you cry. Let me elucidate…

The problem with defining something as objectively right or wrong is that historically we’ve not been able to point to anywhere (other than a religious text) and say “look – there you go – so is it written”. Except, increasingly, now we can.

Why? Because machine learning. We are delegating moral decisions to machines. Sometimes a human coder has taken the decision and encoded it directly; other times it’s the product of some kind of training scheme – a neural network or similar construct and some training data. What’s interesting are the moral consequences.

Consider:

Self-driving cars

Sometimes a driver has to make a moral choice – run someone over, or swerve and potentially harm themselves and others. There’s a whole area of philosophy, colloquially known as trolleyology, dedicated to this problem. In a self-driving car, the machine must make the choice, and must do so consistently in order for any insurer to cover the vehicle. It is presently supposed that self-driving cars are likely to have a moral bias in favour of their occupants encoded into them, precisely to minimise liability. Tough on the kid who ran out into the street chasing a football, though; guess he was in the wrong.
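To make the point concrete, the encoded bias need not be anything exotic. Here is a minimal sketch of an occupant-weighted collision choice (everything in it – the names, the weights, the structure – is invented for illustration; no real vehicle stack is remotely this simple):

```python
# Hypothetical sketch of an occupant-biased collision choice.
# All names, weights and structures are invented for illustration.
from dataclasses import dataclass

@dataclass
class Outcome:
    occupant_risk: float    # estimated probability of serious harm to occupants
    pedestrian_risk: float  # estimated probability of serious harm to others

OCCUPANT_WEIGHT = 3.0  # the "moral bias" lives in this one constant

def choose_manoeuvre(options: dict[str, Outcome]) -> str:
    """Pick the option with the lowest weighted harm score."""
    def score(o: Outcome) -> float:
        return OCCUPANT_WEIGHT * o.occupant_risk + o.pedestrian_risk
    return min(options, key=lambda name: score(options[name]))

print(choose_manoeuvre({
    "brake_straight": Outcome(occupant_risk=0.05, pedestrian_risk=0.60),
    "swerve_to_wall": Outcome(occupant_risk=0.30, pedestrian_risk=0.05),
}))  # with OCCUPANT_WEIGHT = 3.0 the car brakes straight on
```

The entire moral judgement – the kid loses – lives in a single constant.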

Automatic parole decisions

Particularly in the US, the court system has taken to using automated systems to make parole decisions. Increasingly it appears that the users of these systems don’t know, or don’t understand, the algorithms that underpin them. Life-changing assessments of moral character are being made by machines based on a hidden encoded ethics, without effective human scrutiny. Since people are good at deriving moral principles by observation, at some point prisoners will begin to shape their behaviour to match the AI’s expectations in order to secure parole.
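The mechanics need not be sophisticated for the consequences to be life-changing. A toy risk score of the general kind such tools compute might look like this (purely illustrative – the features, weights and threshold are invented and bear no relation to any deployed system):

```python
# Toy parole risk score -- invented features and weights, for illustration only.
FEATURE_WEIGHTS = {
    "prior_convictions": 0.8,
    "age_at_first_offence": -0.05,   # younger first offence -> higher score
    "disciplinary_reports": 0.5,
}
RELEASE_THRESHOLD = 4.0  # the moral judgement is hidden in this number

def risk_score(record: dict) -> float:
    return sum(weight * record.get(feature, 0.0)
               for feature, weight in FEATURE_WEIGHTS.items())

def recommend_parole(record: dict) -> bool:
    return risk_score(record) < RELEASE_THRESHOLD

print(recommend_parole({
    "prior_convictions": 3,
    "age_at_first_offence": 19,
    "disciplinary_reports": 2,
}))
```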

AI courts

China has – yes, really – begun to virtualise its justice system. Using civil disputes in the digital domain as a prototype, the idea is eventually to automate much of the judiciary, and to tie it in to its rather Orwellian social credit system. Given the relative dumbness of machine intelligence, it seems obvious that smart defendants will learn how to present evidence primarily to convince the algorithm rather than prove the truth.

ML-driven CV screening

This one somewhat blew up in Amazon’s face. They trained a machine-learning model to select candidates for interview based on their CVs. Unfortunately, because the training data consisted mostly of CVs from men, the model learned to identify male CVs from non-obvious markers (the CVs had been redacted to remove gender identifiers, along with age, race and so on) and biased selection in their favour. To their credit Amazon noticed this, tried to code it out, failed, and scrapped the programme. But they won’t be the only ones trying to do this, and at some point it is inevitable that a good CV – and by implication a worthwhile career – will be whatever the AI adjudges it to be.
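The failure mode is easy to reproduce in miniature: train a classifier on historical outcomes and it will cheerfully learn proxies for the attribute you thought you had removed. A deliberately tiny sketch on synthetic data (scikit-learn assumed; nothing here reflects Amazon’s actual system):

```python
# Tiny demonstration of proxy learning on synthetic "CV" data.
# The protected attribute (gender) is never given to the model, but a
# correlated proxy feature is -- and the historical labels were biased,
# so the model learns to penalise the proxy. Everything here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                  # 0 = male, 1 = female (hidden from model)
proxy = (gender == 1) & (rng.random(n) < 0.3)   # e.g. a phrase that mostly appears on women's CVs
years_experience = rng.normal(5, 2, n)

# Biased historical labels: experience matters, but women were hired less often.
hired = (years_experience + rng.normal(0, 1, n) - 1.5 * gender) > 4

X = np.column_stack([years_experience, proxy])  # gender itself is excluded
model = LogisticRegression().fit(X, hired)

print("coefficient on proxy feature:", model.coef_[0][1])  # comes out negative
```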

Social media content moderation

Social media is drowning in crap. Racist crap, sexist crap, extremist crap, pornographic crap (sometimes literally). Presently we largely rely on horribly overworked human beings to try to stem the tide; they don’t usually last long before they go down with some kind of PTSD. So the demand for automation is obvious, but so are the flaws and risks. Now that social media has become perhaps the primary mechanism for self-expression, giving a machine an absolute right of censorship on the basis of arbitrary (and quite possibly entirely unreviewed) rules, with limited or no right of appeal, potentially has a frighteningly chilling effect on freedom of speech. More importantly, as people adapt their posting behaviour to stay within what they deduce to be the rules, they will be adopting a morality that has been determined by, and encoded in, a machine. After all, if bad posts are suppressed, and my post was permitted, then my post must be a good post, no?
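Even a crude sketch shows how much moral judgement gets frozen into a handful of parameters (hypothetical, of course – real moderation systems are vastly more complex, but the arbitrariness of the boundary is the same):

```python
# Toy moderation rule: the entire "morality" is a word list and a threshold,
# both invented here for illustration.
BANNED_TERMS = {"slur_a", "slur_b", "extremist_phrase"}
SUSPICION_THRESHOLD = 2  # posts at or above this score are suppressed

def moderation_score(post: str) -> int:
    words = post.lower().split()
    return sum(1 for w in words if w in BANNED_TERMS)

def is_permitted(post: str) -> bool:
    return moderation_score(post) < SUSPICION_THRESHOLD

# One banned term slips through; two do not. Whoever chose the word list
# and the threshold chose the boundary between "good" and "bad" speech.
print(is_permitted("this post contains slur_a once"))           # True
print(is_permitted("this post has slur_a and slur_b in it"))    # False
```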

Each of these cases contains a moral choice: whom to kill, whom to free, whom to convict, to whom to offer opportunity, whose speech to permit. In each of these cases the output for a given set of inputs is fixed – it is objective, and unarguable. And in each of these cases, given time, one could identify the specific lines of code that enshrine the moral judgement and so, if we accept the machine as a moral agent, provide a deontological proof of the relevant moral principle. Which, in the case of self-driving cars at least, we must do if we want car insurance to continue to be a thing.

In the absence of proof of the existence of gods, and accepting the difficulty of getting machines to make moral judgements in the fluid and contextual way that we do – or at least that one subset of philosophers believes we do – we’ve had to fall back on becoming god, and encoding morality into reality for our machines.

By and large it’s happening without organised oversight, and it’s being done by tech companies, not by legislatures or anyone else that we might recognise as a legitimate moral authority. It’s also profoundly Kantian, since most of us lack the critical faculties (i.e. coding skills) to perceive the objective truth and must simply trust the machines.

This disturbs me. Particularly in the context of social media; as we move our lives ever more online, I am concerned by the idea that a machine will decide what we may and may not say, and whether content we post – and by implication our behaviour – is morally acceptable. We cannot argue. We cannot collectively decide differently. We have no democratic say. We do not even have the inevitable flexibility of the physical world, where not all presently transgressive behaviour will be detected and so morality has some room to evolve. Online, the panopticon is total.

And this, my friends, is why we have Article 22 of the GDPR – the right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects.


[1] Pace Descartes
