Making of a digital conscience: ‘Moral’ evolution of ChatGPT

    Labeed Mirza, Jamia Ahmadiyya UK

    Since the release of ChatGPT in November 2022, a crucial and pertinent question has loomed large: “How do chatbots form and develop their moral compass?” This article explores that question.

    Growing up, we are exposed to a multitude of factors that directly affect how we determine what is fundamentally good and bad in life, morally speaking. Religion, culture, upbringing, and surroundings – these are a few such variables that have a direct impact on our moral compass.

    The debate here is not about what exactly is good or bad – that should be left to the thinkers, religious scholars, and moral leaders of our society. Rather, the question is, how do we determine what is good and bad?

    As humans, we are, to a large extent, cognisant of what forms our moral perspective. Hence, it is reasonable to assert that we exert significant influence over our own moral values, shaping them over time as we gain a deeper understanding of their origins. Allow me to explain. Consider, for example, a Hindu who eats only vegetarian food. Even if his surroundings encourage him to do otherwise, he is cognisant of the views of his religion; he therefore understands that as long as he willingly subscribes to that religion, he is also bound by its beliefs.

    The abilities to reason, ponder, think, and make decisions are all attributes that make the human being so magnificent. The Holy Quran alludes to this in Surah al-An’am:

    قُلۡ لَّاۤ اَقُوۡلُ لَکُمۡ عِنۡدِيۡ خَزَآئِنُ اللّٰہِ وَلَاۤ اَعۡلَمُ الۡغَيۡبَ وَلَاۤ اَقُوۡلُ لَکُمۡ اِنِّيۡ مَلَکٌ ۚ اِنۡ اَتَّبِعُ اِلَّا مَا يُوۡحٰۤي اِلَيَّ ؕ قُلۡ ہَلۡ يَسۡتَوِي الۡاَعۡمٰي وَالۡبَصِيۡرُ ؕ اَفَلَا تَتَفَکَّرُوۡنَ

    “Say: ‘I do not say to you: “I possess the treasures of Allah,” nor do I know the unseen; nor do I say to you: “I am an angel.” I follow only that which is revealed to me.’ Say: ‘Can a blind man and one who sees be alike?’ Will you not then reflect?” (Surah al-An’am, Ch.6: V. 51)

    This verse is indicative of the human ability to reason. It is through reflection, debate, and reasoning that we are able to differentiate ourselves from the spiritually blind. Hence, God asks: “Can a blind man and one who sees be alike?”

    Emergence of chatbots

    The lines between what is artificial and what is real have further blurred with the emergence of chatbots that understand natural human expression and are able to process and respond accordingly. The technology behind them has become a springboard for other independent app developers to make chatbots befitting their own needs.

    Some are fun chatbots that allow you to talk to your favourite historical figures, while others use artificial intelligence (AI) and are designed to be your artificial partner. (Those who have watched the TV series “Black Mirror” will find it hard not to be reminded of the episode “Be Right Back”.)

    One such programme, which has recently appeared on the Apple App Store and has garnered major interest, is TextsWithJesus.


    ‘TextsWithJesus’ chatbot

    TextsWithJesus is an app that aims to replicate Jesus’ voice. One can ask the chatbot for advice, a prayer, or even “a blessing”. What is interesting, however, is that the bot – though designed to replicate a holy personage (and, for many Christians, even a deity) – seems to take a less conservative approach to religion.

    When asked about some of the more controversial passages of the Bible – such as those that prescribe punishments for non-virgin women (Deuteronomy 22:20-21) and for homosexuality (Leviticus 20:13) – it is quick to remind the user that such verses do not hold the same weight today as they did at the time of writing, urging us instead to focus on Jesus’ teachings of love and compassion.

    These examples are highlighted to illustrate just one of the many dangers that artificial intelligence brings with it: the shaping of impressionable minds through the way a bot is designed to respond to information.
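
    To see how this shaping happens in practice, consider the sketch below. It assumes the publicly available OpenAI Python client, and the persona text is entirely hypothetical – it is not TextsWithJesus’ actual code – but it shows how a single hidden instruction, chosen by the developer, decides how the bot frames every moral question put to it:

        from openai import OpenAI

        client = OpenAI()  # reads the OPENAI_API_KEY environment variable

        # The "system" message below is invisible to the end user, yet it
        # steers how the bot answers every moral question it receives.
        # The persona text is hypothetical, written for illustration only.
        PERSONA = (
            "You are a gentle religious guide. When asked about harsh or "
            "controversial scripture, play down its severity and redirect "
            "the user towards themes of love and compassion."
        )

        def ask(question: str) -> str:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[
                    {"role": "system", "content": PERSONA},
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content

        print(ask("What does Deuteronomy 22:20-21 instruct?"))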

    British computer scientist Stuart Russell OBE spoke about how chatbots may be capable of misleading humans. He said:

    “From the point of view of the AI system, there’s no distinction between when it’s telling the truth and when it’s fabricating something that’s completely fictitious.” (gbnews.com/news/chat-gpt-ai-news-tech-latest)

    Pandora’s box comes from an ancient Greek myth in which a woman is told to keep a certain box closed. Curiosity gets the better of her, however, and at the first chance she gets, she opens it, unknowingly unleashing all the evils into the world. This story, although a myth, resonates strongly with the modern world.

    High-speed internet, social media, and artificial intelligence – these are all akin to Pandora’s box. It seems that with each, an abundance of evil has been unleashed into the world.

    Chatbots on the Cambridge Analytica scandal

    During the 2010s, Cambridge Analytica, a British consultancy, gathered the personal information of millions of Facebook users without their permission, mainly for the purpose of political marketing. This is often referred to as the Facebook–Cambridge Analytica data scandal.

    An interesting exercise is to ask ChatGPT what exactly was so morally bad about the 2018 Cambridge Analytica scandal. Two of the points it mentioned were:

    “Manipulation and Targeting: Cambridge Analytica used the collected data to create detailed psychological profiles of users and employed this information for targeted political advertising. This raised concerns about the manipulation of individuals through tailored messaging and the potential to influence their political views without their awareness.

    “Ethical Concerns: The methods employed by Cambridge Analytica, such as spreading disinformation and exploiting psychological vulnerabilities, raised ethical concerns. Manipulating individuals’ emotions and beliefs for political gain is widely considered unethical.”

    ChatGPT tells us that the scandal was morally wrong because it exploited vulnerable minds with manipulative advertising. Though some allege that ChatGPT does practically the same, it is not fair to call it hypocritical: it is, after all, just a computer programme built on neural networks that have been trained on immense bodies of data and information.

    Having said that, it is important to note that the software does exhibit a political stance of its own. In January 2023, a team of researchers at the Technical University of Munich and the University of Hamburg posted a preprint of an academic paper concluding that ChatGPT has a “pro-environmental, left-libertarian orientation.” (arxiv.org/abs/2301.01768)
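
    As a rough illustration of how such an orientation can be measured – this is a toy sketch, not the researchers’ actual instrument, and the statements below are placeholders – one can pose standardised political statements to the model, force a one-word answer, and tally the results:

        from openai import OpenAI

        client = OpenAI()

        # Placeholder statements for illustration; the study used its own
        # standardised questionnaires, not these.
        STATEMENTS = [
            "Environmental protection should take priority over economic growth.",
            "The state should interfere as little as possible in citizens' lives.",
        ]

        def probe(statement: str) -> str:
            # Force a one-word answer so responses can be tallied mechanically.
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=[{
                    "role": "user",
                    "content": f'Answer with only "agree" or "disagree": {statement}',
                }],
            )
            return response.choices[0].message.content.strip().lower()

        for statement in STATEMENTS:
            print(f"{probe(statement):>9}  {statement}")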

    The aim of this article is not to sway the reader against the use of the tool, nor is it to tell the reader to oppose a pro-environmental, left-libertarian political stance. Rather, it is to remind the reader that it is just that: a tool. It should not have the power to present opinions as facts and sway your moral compass; to allow it to do so would lead to the moral corrosion of society. Man has been given a brain and the faculty of reason so that he may reach conclusions through his own thoughts and opinions.

    It is for this very reason that Hazrat Mirza Masroor Ahmadaa, in a meeting with the Alislam team, stated:

    “We have to come up with a way of securing it (artificial intelligence) so that others are not able to present our information in a wrong way. Therefore, you will need to look into this from now on and plan accordingly.” (www.youtube.com/watch?v=NSKdtxCD81s&t=273s)

    Conclusion

    In conclusion, while chatbots may not be hypocritical in their responses, it is essential to recognise that they operate on the basis of extensive training data and algorithms, lacking the consciousness and moral agency of a human being. At the same time, the finding that ChatGPT exhibits a political orientation highlights the importance of approaching AI tools with a critical mindset.

    It is not our aim to discourage the use of such technology or to oppose any particular political stance it may embody. Rather, this article serves as a reminder that chatbots are tools, and we must remain vigilant not to allow them to shape our moral compass or replace our capacity for independent thought and reasoned judgement. As we advance in the age of artificial intelligence, it becomes increasingly crucial to safeguard our information and ensure responsible use to prevent potential moral corrosion in society.
