How a Secular View of Morality Creates Challenges for Ethical A.I.
Artificial Intelligence is increasingly making decisions that impact our lives.
From the algorithms that decide what you see on social media (and thus shape what you’re likely to believe) to artificial intelligence systems aiding medical diagnoses, these technologies all have ‘built-in’ moral and ethical values influencing how they work. (For more on this, see the book by Tim Challies, The Next Story: Life and Faith After The Digital Explosion.)
However, as we continue to develop and use these technologies, we must consider the ethical implications of A.I. systems making moral choices. Will these technologies make decisions that ultimately harm or help humanity? Will these technologies lead to ethical human flourishing, or will they dehumanise us?
While the ethics driving Western societies – and thus many A.I. designers – have been based on a loosely Christian framework, that framework is eroding and being replaced by a more secularised view of ethics.
And this secular view of ethics will cause increasing challenges in developing ethical A.I.
Here’s why:
1) The dominant secular view is that morality is made up and subjective
Historian Yuval Harari captures the widespread secular view of morality in his bestselling book Sapiens:
“Hammurabi and the American Founding Fathers alike imagined a reality governed by universal and immutable principles of justice, such as equality or hierarchy. Yet the only place where such universal principles exist is in the fertile imagination of Sapiens, and in the myths they invent and tell one another. These principles have no objective reality.”
(New York: HarperCollins, 2015, p. 108. Quoted in John Lennox, 2084 – Artificial Intelligence and the Future of Humanity (Grand Rapids: Zondervan, 2020), p. 147.)
In this view, there is no objective moral code that transcends time and culture. Morality is like any other arbitrary rule, such as road rules: we make them up to give order to our societies.
2) But if we make up morality, how can we say one view of morality is ‘better’ than any other?
This raises an urgent question for the future of A.I. ethics: if morality is purely subjective, how can we determine whether one view of morality is superior to another? In his book 2084 – Artificial Intelligence and the Future of Humanity (Grand Rapids: Zondervan, 2020, p. 147), Oxford mathematician and author John Lennox points out the disturbing implications of this view:
“However, if morality, if our ideas of right and wrong, are purely subjective, we should have to abandon any idea of moral progress (or regress), not only in the history of nations, but in the lifetime of each individual. The very concept of moral progress implies an external moral standard by which not only to measure that a present moral state is different from an earlier one, but also to pronounce that it is ‘better’ than the earlier one.”
He continues:
“Without such a standard, how could one say that the moral state of a culture in which cannibalism is regarded as an abhorrent crime is any ‘better’ than a society in which it is an acceptable culinary practice?”
And this view will impact A.I. design:
3) This will have real implications for ethical A.I.: What does ethical A.I. look like if ethics are purely subjective?
If morality is purely subjective, how do we decide what is right and wrong? Without an objective framework, what moral principles will A.I. have, whether autonomous weapons or autonomous cars?
Companies will likely make these decisions in the short term (hello, Facebook!). And sadly, such companies are often driven by the almighty dollar rather than by Almighty God. After all, if ethics is just made up, who’s to say that loving money is any worse than loving your neighbour?
However, as governments start regulating A.I., the question of morality becomes even more crucial. Regulation is expected to grow, and at that point, we citizens get a direct say in the ethics that guide the development of these technologies.
We must recognise the need for an objective moral framework to evaluate and compare different ethical perspectives. Doing so can pave the way for a future where A.I. systems are efficient, innovative, compassionate, and ethical.
___
Originally published at AkosBalogh.com. Photo by Kindel Media.