
Artificial Intelligence is evolving, so how do we regulate its systems?

04 Jul, 2023

The latest technological developments in Artificial Intelligence (AI) systems have revolutionized many aspects of life, and education, economics, health, and security have all benefited from this progress. For example, with AI-powered chatbots such as ChatGPT, teachers and learners can hold human-like conversations, get help answering a wide range of questions, and seek assistance in writing emails and scientific articles. Health practitioners may also use AI to diagnose diseases, identify their causes, and assist in the surgical procedures needed to manage them. As a result, AI has contributed to a higher quality of life.

However, the use of AI systems also carries several risks. Many systems learn continuously and automatically, which allows them to carry out tasks derived from that self-learning, and they can also draw on large stores of pre-collected data. Moreover, developers may build systems that serve their own interests, leading to decisions that are biased against a particular group or that violate the privacy of users and their data.

AI systems can therefore be harmful if they are built on flawed, insufficient, or biased data. Furthermore, because these systems advance rapidly through self-learning, it will not always be clear how they have arrived at their conclusions. We may thus come to depend on systems beyond our control to make decisions that affect society.

As AI systems become a significant component of products and services, a number of countries and organizations are developing ethical rules to govern their use, known as "artificial intelligence usage policies". These policies combine legal and ethical doctrines that establish a general framework for compliance by developers of AI systems with principles directed at users of these systems, in order to mitigate potential risks. Examples include the artificial intelligence usage policy issued by the Ministry of Transport, Communications and Information Technology in the Sultanate of Oman in 2021, and the principles of artificial intelligence ethics issued by the Saudi Data and Artificial Intelligence Authority in August 2022.

Notably, in 2021, UNESCO member states adopted the world's first global agreement on the ethics of artificial intelligence, the Recommendation on the Ethics of Artificial Intelligence. Its purpose is to identify common values and principles that guide the development of AI and ensure its safety, through ethical guidelines addressed to everyone involved, whether natural or legal persons, at any stage of the life cycle of an AI system.

 

Dr. Saleh bin Hamad Al Barashdi

Dean of the Faculty of Law