How a Secular View of Morality Creates Challenges for Ethical A.I.

11 July 2023

Artificial Intelligence is increasingly making decisions that impact our lives.

From the algorithms that decide what you see on social media (and thus shape what you’re likely to believe) to artificial intelligence systems aiding medical diagnoses, these technologies all have ‘built-in’ moral and ethical values influencing how they work. (For more on this, see the book by Tim Challies, The Next Story: Life and Faith After The Digital Explosion.)

However, as we continue to develop and use these technologies, we must consider the ethical implications of A.I. systems making moral choices. Will these technologies make decisions that ultimately harm or help humanity? Will they lead to human flourishing, or will they dehumanise us?

While the ethics driving Western societies – and thus many A.I. designers – have been based on a loosely Christian framework, this is eroding and being replaced with a more secularised view of ethics.

And this secular view of ethics will cause increasing challenges in developing ethical A.I.

Here’s why:

1) The dominant secular view is that morality is made up and subjective

Historian Yuval Harari captures the widespread secular view of morality in his bestselling book Sapiens:

“Hammurabi and the American Founding Fathers alike imagined a reality governed by universal and immutable principles of justice, such as equality or hierarchy. Yet the only place where such universal principles exist is in the fertile imagination of Sapiens, and in the myths they invent and tell one another. These principles have no objective reality.”

(New York: HarperCollins, 2015, p. 108. Quoted in John Lennox, 2084 – Artificial Intelligence and the Future of Humanity (Grand Rapids: Zondervan, 2020), p. 147.)

In this view, there is no objective moral code that transcends time and culture. Morality becomes like any other arbitrary set of rules, such as road rules: we make them up to give order to our societies.

2) But if we make up morality, how can we say one view of morality is ‘better’ than any other?

This raises an urgent question for the future of A.I. ethics: if morality is purely subjective, how can we determine whether one view of morality is superior to another? In his book 2084 – Artificial Intelligence and the Future of Humanity (Grand Rapids: Zondervan, 2020, p. 147), Oxford mathematician and author John Lennox points out the disturbing implications of this view:

“However, if morality, if our ideas of right and wrong, are purely subjective, we should have to abandon any idea of moral progress (or regress), not only in the history of nations, but in the lifetime of each individual. The very concept of moral progress implies an external moral standard by which not only to measure that a present moral state is different from an earlier one, but also to pronounce that it is ‘better’ than the earlier one.”

He continues:

“Without such a standard, how could one say that the moral state of a culture in which cannibalism is regarded as an abhorrent crime is any ‘better’ than a society in which it is an acceptable culinary practice?”

And this view will impact A.I. design:

3) This will have real implications for ethical A.I.: What does ethical A.I. look like if ethics are purely subjective?

If morality is purely subjective, how do we decide what is right and wrong? Without an objective framework, what moral principles will A.I. systems have, whether they power autonomous weapons or autonomous cars?

Companies will likely make these decisions in the short term (hello, Facebook!). And sadly, such companies are often driven by the almighty dollar rather than by Almighty God. After all, if ethics is just made up, who’s to say that loving money is any worse than loving your neighbour?

However, as governments start regulating A.I., the question of morality becomes even more crucial. Regulation is expected to grow, and at that point, we citizens will get a direct say in the ethics that guide the development of these technologies.

We must recognise the need for an objective moral framework to evaluate and compare different ethical perspectives. Doing so can pave the way for a future where A.I. systems are efficient, innovative, compassionate, and ethical.


