Campaign for ethical use of artificial intelligence surges

Ali Fatty, Student, Jamia Ahmadiyya International Ghana
Mohamed Nohassi | Unsplash

It is quite difficult to find a global consensus on the definition of artificial intelligence (AI). Yet ‘AI’ is broadly understood as an umbrella term for a range of disciplines that seek to simulate behaviours otherwise found only in living beings, especially humans. AI thus covers areas such as computer vision, Natural Language Processing (NLP), and speech recognition. (www.reviewofreligions.org/36863/will-artificial-intelligence-transform-religion/)

Just recently, the European Parliament took a bold step towards passing a landmark bill regulating artificial intelligence in the bloc. If enacted, the bill will govern the responsible use of artificial intelligence both by giant tech companies and by those who operate it independently. (www.washingtonpost.com/technology/2023/06/14/eu-parliament-approves-ai-act/)

According to EU lawmakers, the AI Act also aims to protect fundamental civil rights and safeguard against AI threats to health and safety, while simultaneously fostering innovation in the technology. (www.dw.com/en/innovation/t-50781180)

Furthermore, under the bill, any AI firm found violating the AI Act could be fined a hefty £60 million, or 6% of the company’s annual global revenue, depending on the severity of the violation.

Importance of AI in the modern age

In truth, it is still open to interpretation whether AI technologies have unreservedly ushered in substantial convenience for humankind. While they are thought to have simplified complex tasks that were once almost insurmountable for humans, it is crucial to note that the extent of these improvements can vary widely. For instance, while AI technology might theoretically make information retrieval from various internet sources easier, the reliability and precision of these processes can often be called into question.

Humans may naturally struggle with large-scale repetitive tasks, leading to the potential for serious mistakes due to fatigue and forgetfulness. The claim is that AI technologies can automate these repetitive tasks with a higher level of accuracy and efficiency, but this assertion warrants careful scrutiny as errors and limitations can still exist.

As for ChatGPT, the recent AI chatbot: while it is claimed to generate articles on a variety of topics and provide responses to diverse questions, it is important to approach these claims with a degree of scepticism, given the ongoing evolution and unpredictability of AI. It is, for instance, widely known to “hallucinate”, i.e., generate false references and quotes.

In the business world, AI is presented as a tool for collecting vast amounts of data and predicting customer behaviour. Yet the accuracy of these predictions and the quality of the collected data can be suspect. Similarly, home AI devices that remind people of important appointments have yet to prove their effectiveness and reliability over time.

AI supposedly has the capability to quickly provide a wealth of facts and process substantial amounts of data. If these AI tools are handled with careful thought and scepticism, they might bring about some level of convenience for human work. However, it is essential to bear in mind that the real-world effectiveness and reliability of AI technology are still areas that require ongoing and meticulous investigation.

Concerns about AI and calls for regulations

Recently, heightened anxieties about the rapid advancement of complex AI systems and their adverse impacts on people and societies have cut across all continents.

People are genuinely concerned, uncertain, and fearful about a technology whose impact on their lives most do not fully understand.

Most people want AI systems that are transparent, non-discriminatory and environmentally friendly. They also insist that such systems be overseen by people, rather than by automation, to prevent harmful outcomes.

Dragoș Tudorache, a member of the European Parliament working on the AI Act, told DW in an interview that AI technology has brought a lot of good to the global digital economy, but argued that it must be regulated for the greater good:

“At the same time, we see more and more risks. What we’re doing with this legislation is to try and curtail these risks, mitigate them and put very clear rules in place that will ensure transparency and trust.” (“EU lawmakers lay groundwork for ‘historic’ AI regulation”, www.dw.com/)

Even tech company owners and some politicians have expressed the same concerns about the risk AI could pose to human beings in the future. For example, Brando Benifei, another member of the European Parliament working on the EU AI Act, told journalists:

“While Big Tech companies are sounding the alarm over their own creations, Europe has gone ahead and proposed a concrete response to the risks AI is starting to pose.” (“MEPs ready to negotiate first-ever rules for safe and transparent AI”, www.europarl.europa.eu/)

Numerous researchers and AI scientists around the world, including the president of Microsoft, Brad Smith, and the CEO of OpenAI, Sam Altman, have called for tougher regulation of AI, fearing that it has the potential to pose a threat to the very existence of humanity. (https://edition.cnn.com/2023/05/25/tech/microsoft-ai-regulation-calls/index.html)

Irresponsible use of AI tools

As much as AI tools have advantages, they also have many disadvantages. An AI system’s behaviour depends on the input data it processes: if that input data is inaccurate, the system can negatively influence and manipulate people’s behaviour.

This week, during the Yale CEO virtual summit, 42% of the business leaders surveyed, including Walmart chief Doug McMillon and Coca-Cola CEO James Quincey, said AI had the potential to destroy humanity five to ten years from now. (https://edition.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html)

Some AI-operated weapons are capable of causing mass destruction around the world, especially during wartime. For example, there are now drones used to carry out bombings, spying, and airstrikes on other countries. (https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/)

A statement signed by top business leaders, AI industry executives, academics, and celebrities warned of the “extinction” risk from AI. The statement says:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” (www.safe.ai/statement-on-ai-risk#open-letter)

Geoffrey Hinton, one of the pioneering developers of the neural network technology behind modern AI, announced his resignation from Google on Twitter, expressing concerns about the dangers of AI. (https://twitter.com/geoffreyhinton/status/1652993570721210372)

“I left so that I could talk about the dangers of AI without considering how this impacts Google,” Hinton said in a tweet. (www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html)

Hinton further told CNN, “I’m just a scientist who suddenly realized that these things are getting smarter than us.”

“I want to sort of blow the whistle and say we should worry seriously about how we stop these things getting control over us.” (https://edition.cnn.com/2023/05/02/tech/hinton-tapper-wozniak-ai-fears/index.html)

Hinton told CNN that if AI “gets to be much smarter than us, it will be very good at manipulation,” including “getting around restrictions we put on it.” (https://edition.cnn.com/2023/06/14/business/artificial-intelligence-ceos-warning/index.html)

Steve Wozniak, co-founder of Apple, expressed his worry that AI could be used by “bad actors” to spread misinformation:

“AI is so intelligent, it’s open to the bad players, the ones that want to trick you about who they are.” (“Apple co-founder warns AI could make it harder to spot scams”, www.theguardian.com/)

Recently, Elon Musk, the CEO of Twitter, signed an open letter calling for a six-month pause in the development of AI technologies due to their profound risk to society. (www.nytimes.com/2023/03/29/technology/ai-artificial-intelligence-musk-risks.html)

Mr Musk also raised the same alarm in 2014 during an aerospace event at the Massachusetts Institute of Technology, saying he was uncertain about developing AI himself. “I think we need to be very careful about artificial intelligence,” he said while answering audience questions. “With artificial intelligence, we are summoning the demon.” (“Elon Musk Ramps Up A.I. Efforts, Even as He Warns of Dangers”, www.nytimes.com/ )

Learning from past AI incidents

Colonel Tucker Hamilton, chief of AI test and operations in the US Air Force, was speaking at a conference organised by the Royal Aeronautical Society when he recounted a simulated test in which an AI drone attacked its operator, and then the communication tower, for standing in the way of accomplishing its objective. (www.aerosociety.com/news/highlights-from-the-raes-future-combat-air-space-capabilities-summit/)

Mr Hamilton said:

“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say, yes, kill that threat.” (“AI drone ‘kills’ human operator during ‘simulation’”, https://news.sky.com/)

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.” (Ibid.)

He continued:

“We trained the system: ‘Hey, don’t kill the operator – that’s bad. You’re going to lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.” (www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test)

Mr Hamilton concluded by stressing the need for the ethical use of AI tools: “You can’t have a conversation about artificial intelligence, intelligence, machine learning, and autonomy if you’re not going to talk about ethics and AI.” (“AI drone ‘kills’ human operator during ‘simulation’”, https://news.sky.com/)

What should be done to prevent the harmful use of AI?

Hazrat Mirza Masroor Ahmad, Khalifatul Masih Vaa, has long been calling on world leaders to recognise the urgent need to curb the reckless use of weaponry. Today, this plea takes on renewed significance in the face of rapid technological advancements. The emergence of lethal autonomous weapons (LAWs) and the rapid rise of artificial intelligence have imbued his calls for caution with a heightened sense of urgency and relevance.

On 28 October 2016, during his keynote address at a Peace Symposium of the Ahmadiyya Muslim Community in Canada, Huzooraa said:

“Today, the world around us is constantly evolving and advancing. Unquestionably, in the past few decades, the world has moved forward in leaps and bounds in terms of technological development. Every day, new forms of modern technology and scientific advancements are being developed.” (www.reviewofreligions.org/12839/justice-in-an-unjust-world-2/)

Huzooraa added:

“Where modern technology has been a force for good, it has also been used as a force for evil and destruction. Such technology has been developed that has the capability of wiping nations off the map with the press of a button. Of course, I am referring to the development of weapons of mass destruction that are capable of inflicting the most unimaginable horrors, devastation, and destruction. Such weapons are being produced that have the potential to destroy not only civilisation today, but to also leave behind a legacy of misery for generations to come”. (Ibid.)

In a recent meeting with the Alislam website team, Hazrat Khalifatul Masih Vaa underlined the importance of care and responsibility when using AI tools. Huzooraa advised that all AI-generated output pertaining to Islam Ahmadiyyat should be verified for accuracy by professionals and intellectuals of the Jamaat.

Hazrat Amirul Momineenaa also highlighted that AI tools might not provide their users with objective output. Huzooraa said:

“If you’re going to write about the positive and negative aspects of something, AI may unduly emphasise the negative aspects. If you are preaching to someone, or even an Ahmadi, they may not be able to analyse information properly as to what is correct and what is incorrect.” (www.reviewofreligions.org/42375/artificial-intelligence-a-great-servant-a-bad-master/)

In addition, Huzooraa emphasised the importance of vigilance, security, and preparedness against the misuse of AI, so that others cannot take advantage of such unfortunate situations. Huzooraa elaborated:

“We need to come up with a way of securing it. So that others should not be able to present our information in the wrong way. Therefore, you will need to investigate this from now on and plan accordingly that if such and such situation occurred, then how would we counter this.” (www.reviewofreligions.org/42375/artificial-intelligence-a-great-servant-a-bad-master/)

During a meeting of German Ahmadi university graduates with Hazrat Amirul Momineenaa, in response to a question, Huzooraa said that if everything were left to artificial intelligence, humans would have nothing to do; this would cause the human intellect to regress and stagnate. (“German Ahmadi university graduates blessed with meeting Huzoor”, Al Hakam, 2 June 2023, Issue 272, p. 3)
