The rapid evolution of artificial intelligence has forced us to confront questions that were once the sole prerogative of science fiction or philosophy. Machines are no longer just tools that supplement human abilities; they are systems that can make decisions, learn, and in some cases shape human behavior. This has made one question urgent: are machines moral, and if not, how do humans build moral systems for them?
Morality, at first blush, appears to be a distinctly human phenomenon. It is inextricably linked with emotion, empathy, culture, and the messy experience that informs human judgment. Machines, by contrast, operate through algorithms, rules, and data sets. They feel no pity, guilt, or responsibility. Yet as artificial intelligence spreads into medicine, law, warfare, and daily life, machines are increasingly left to make decisions with profound moral consequences. An autonomous car deciding whether to protect its passengers or avoid striking pedestrians is making a moral decision, even though its reasoning is mathematical rather than emotional. The uncomfortable reality is that morality has been outsourced, at least in part, to machines.
One way to respond to this challenge is to treat morality as codifiable. Just as societies establish codes of law, engineers can encode ethical rules in algorithms. This is the foundation of ideas such as “ethical AI” or “machine ethics.” In medicine, for example, an AI system can be designed to prioritize saving human life over cost. In theory, this makes the machine embody human values. But the problem is this: human morality is neither universal nor absolute. Different cultures have different ethical priorities, and even within a single culture people disagree intensely about what is right. Which morality should be encoded in the machine, and who gets to decide?
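To make the idea of codified morality concrete, here is a minimal, purely illustrative Python sketch. The `Option` record and the lives-saved-over-cost ordering are my own assumptions, not any real medical system’s policy; the point is only that the ethical priority lives in a single, human-written line of code.

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One possible action the system could recommend."""
    name: str
    expected_lives_saved: float
    cost: float  # monetary cost in arbitrary units

def choose(options: list[Option]) -> Option:
    # The ethical rule is encoded as a sort order: lives saved first,
    # cost only as a tie-breaker. Whoever writes this line decides the morality.
    return max(options, key=lambda o: (o.expected_lives_saved, -o.cost))

if __name__ == "__main__":
    options = [
        Option("cheap treatment", expected_lives_saved=0.6, cost=1_000),
        Option("expensive treatment", expected_lives_saved=0.9, cost=50_000),
    ]
    print(choose(options).name)  # -> expensive treatment
```

Reordering the tuple inside `max()` so that cost comes first would produce a different, equally consistent “morality,” which is precisely why the question of who writes that line matters.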
The nature of machine learning adds another complication. Unlike programs built from explicit instructions, machine learning systems take shape through exposure to data. The result is behavior no one directly authored: the machine learns to “judge” by patterns, and if the data is biased, so are the judgments. Consider facial recognition software that performs worse for certain ethnic groups because it was trained on unrepresentative data. Machines therefore inherit not only the intelligence of their creators but also their moral failures. Rather than being objective, AI is a mirror that reflects, and often distorts, social biases.
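How skewed data hardens into skewed judgments can be simulated in a few lines. The sketch below is a deliberately toy example, not a model of any real recognition system: the two groups, their score distributions, and the 95/5 training split are invented for illustration. A single decision threshold is “learned” from data dominated by one group and then evaluated on both, and the underrepresented group ends up with the worse accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(group: str, n: int):
    """Toy data: a 1-D 'match score'; genuine matches score lower for group B."""
    mean = 2.0 if group == "A" else 1.0
    pos = rng.normal(mean, 1.0, n // 2)       # genuine matches
    neg = rng.normal(0.0, 1.0, n - n // 2)    # non-matches
    x = np.concatenate([pos, neg])
    y = np.concatenate([np.ones(n // 2), np.zeros(n - n // 2)])
    return x, y

# Skewed training set: 95% group A, 5% group B.
xa, ya = sample("A", 1900)
xb, yb = sample("B", 100)
x_train, y_train = np.concatenate([xa, xb]), np.concatenate([ya, yb])

# "Learning" here is just picking the threshold with the best training accuracy,
# which is effectively optimized for the majority group.
thresholds = np.linspace(-2, 4, 601)
accs = [((x_train > t) == y_train).mean() for t in thresholds]
t_best = thresholds[int(np.argmax(accs))]

# Balanced evaluation: the learned threshold serves group B noticeably worse.
for group in ("A", "B"):
    x_test, y_test = sample(group, 2000)
    acc = ((x_test > t_best) == y_test).mean()
    print(f"group {group}: accuracy {acc:.2f}")
```

No one told the model to treat the groups differently; the disparity falls out of what the training data over- and under-represents, which is the sense in which the machine inherits its creators’ blind spots.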
The philosophical side cuts deeper when we ask whether machines can ever be moral, rather than merely imitate morality. Human morality involves intentionality, consciousness, and the capacity to take responsibility. A human being can feel regret, learn from failure, and strive to become morally better. A machine cannot regret; it can only emit error messages. It cannot desire to be good; it can only be reprogrammed. In this sense, machines can never be moral agents in the way humans are. They are, at best, moral tools: extensions of human morality, not its origin. The moral duty therefore remains with the designers, users, and societies that build and deploy these systems.
There is, though, a counterargument. Human morality itself evolves and develops through social conditioning. Our values, biases, and instincts are not all chosen; they are shaped by nature and culture. If human morality is the product of complex patterns of input, why could machines not produce something equivalent through complex processes of learning? It would not be human morality, but perhaps a synthetic ethics, different in origin yet still capable of guiding decisions in significant ways. That possibility unsettles many because it suggests machines could one day act on a moral calculus without human input: AI not just thinking for itself but choosing values for itself.
Humanity’s ethical responsibility is therefore twofold. First, to ensure that the systems we build reflect our highest values and not our worst biases. This demands transparency, accountability, and cross-disciplinary collaboration among engineers, philosophers, ethicists, and legislators. Second, to remain mindful of the limits of control. Delegating moral decisions to machines does not shift responsibility away from humans. A country cannot plead innocence by blaming its weapons, and neither can a society blame its algorithms. The moral responsibility remains human, even when the decision is executed by a machine.
Ultimately, maybe the question “can machines have morals?” is not as important as “how do humans remain moral in a world of machines?” The danger is not that machines will be moral agents, but that people will abdicate their moral agency to them.
Technology is a projection of the morals of its inventors, and if we begin to treat it as if it had an independent morality, we risk forgetting that it is our choices, our prejudices, and our values that shape it. The promise and the danger of AI lie not in whether machines can be moral, but in whether human beings will take up the moral task of designing, directing, and governing them responsibly.