Morals in the Machine

It’s easy to believe that machines are neutral, or that they can make better decisions than we can. But in the age of hyper-scaling models, super-sized neural networks, and automated decision-making systems, does machine learning actually lead to better outcomes? It depends on what you feed the machine.
Software engineer Jenny (phire) Zhang pens an enlightening essay exploring the mechanics of trust in an AI world, and the widespread belief that machines can make better decisions than we can. She drops the technical jargon in favour of asking good questions, brilliantly breaking down the concepts of machine learning and AI in general.

The first time I took on the role of a lead engineer, a few years ago, I had a really hard time learning how to prioritize and delegate work. For much of my early career, I had simply never needed any planning skills beyond “say yes to everything and work yourself into the ground”.

One of the best pieces of professional advice I’ve ever received came during this time, from a mentor who told me to delegate the things I was already good at. If I’m good at something, it means I’m actually equipped to evaluate whether my team is doing a good job. It also means I don’t need the practice as much, so delegating frees me up to improve other skills.

There’s an oft-repeated myth about artificial intelligence that says that since we all know that humans are prone to being racist and sexist, we should figure out how to create moral machines that will treat human beings more equitably than we could. You’ve seen this myth in action if you’ve ever heard someone claim that using automated systems to make sentencing decisions will lead to more fairness in the criminal legal system. But if we all know that humans are racist and sexist and we need the neutrality of machines to save us—in other words, if we should delegate morality to AI—how will we ever know if the machines are doing the job we need them to do? And how will we humans ever get better?

AI is a funny, multifaceted beast. What the media colloquially terms “AI” generally refers to a subtype called “machine learning”. The most common form of machine learning involves collecting large datasets illustrating the kinds of things you would like a machine to be able to do, feeding those datasets into a neural network that has layers upon layers of internal parameters for making decisions, and letting the model iteratively teach itself how to imitate the skill captured in the dataset. For example, if you want to build a speech recognition tool that lets you dictate text messages to your phone, you need to feed your machine learning model thousands of hours of people speaking, along with transcripts of what they said, so that it can learn which sounds correspond to which letter combinations.
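That "iteratively teach itself" loop can be sketched in a few lines. This is a toy illustration, not a real neural network: a single-parameter model, a made-up dataset, and an arbitrary learning rate, chosen only to show the shape of the process the paragraph describes.

```python
# Toy "dataset": inputs paired with the outputs we want the model to imitate.
# The hidden rule the data encodes is simply y = 3 * x.
dataset = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]

weight = 0.0  # the model's single internal parameter, initially uninformed

for epoch in range(200):            # repeated passes over the data
    for x, y_true in dataset:
        y_pred = weight * x         # the model's current guess
        error = y_pred - y_true     # how far off the guess was
        weight -= 0.01 * error * x  # nudge the parameter to shrink the error

print(round(weight, 2))  # converges close to 3.0, the rule hidden in the data
```

A real system does the same thing with millions of parameters and far messier data, which is exactly why the contents of the dataset matter so much: the loop can only imitate whatever skill (or bias) the data captures.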

Two of the most common strategies for building a model involve supervised and unsupervised learning. In supervised learning, individual data points in a collection are labelled with the “ground truth”, which is the empirical, factual description of that data. For example, an audio clip in a voice dataset either does or does not feature someone saying “the quick brown fox jumps over the lazy dog”. In unsupervised learning, the model is fed unlabelled data and makes best guesses about the outcome based on pattern-matching. In either case, engineers then evaluate the model with test cases that check how “correct” its assessments were, and adjust accordingly.
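The distinction can be made concrete with a toy sketch. The one-dimensional points, labels, and test cases below are invented for illustration; real datasets (like the audio clips above) are vastly larger and messier.

```python
# Supervised: each point carries a ground-truth label (0 or 1).
labelled = [(0.1, 0), (0.3, 0), (0.4, 0), (1.6, 1), (1.8, 1), (2.0, 1)]

# Learn a decision threshold midway between the two labelled groups.
zeros = [x for x, label in labelled if label == 0]
ones = [x for x, label in labelled if label == 1]
threshold = (max(zeros) + min(ones)) / 2

# Unsupervised: the same points, stripped of their labels. A simple
# two-group clustering pass sorts them purely by proximity, without
# ever seeing the "answers".
unlabelled = [x for x, _ in labelled]
centres = [min(unlabelled), max(unlabelled)]  # naive starting guesses
for _ in range(10):
    groups = ([], [])
    for x in unlabelled:
        nearest = min((0, 1), key=lambda i: abs(x - centres[i]))
        groups[nearest].append(x)
    centres = [sum(g) / len(g) for g in groups]

# Evaluation: held-out test cases check how "correct" the model is.
test_cases = [(0.2, 0), (1.9, 1)]
supervised_ok = all((x > threshold) == bool(label) for x, label in test_cases)
print(supervised_ok, round(threshold, 1), [round(c, 2) for c in centres])
```

Note that the unsupervised pass recovers two clusters but has no idea what they mean; attaching meaning, and deciding what counts as "correct" in the test cases, is still a human judgment call.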

This means that the strength of …

Jenny (phire) Zhang for UPPL
Jenny is a software engineer who mostly writes about the impact of technology on our lives, and also about feelings, books, and other sundry things.
