Ethical Issues Associated with the Use of Smart and Intelligent Learning Environments

Boris ABERŠEK
22 May 2019

 

Introduction

 

The human brain is an endlessly powerful, energy-efficient, self-learning and self-repairing computer. If we understood and could imitate the ways in which the human brain works, we could spark a revolution in information technologies, medicine, and society. In order to create an artificial brain, we would need to unite all our existing knowledge and everything that we are still able to learn about the inner life of our brain's molecules, cells and circuits. "Artificial brain" is a commonly used term referring to research aimed at developing both software and hardware with cognitive abilities similar to those of the animal or, especially, the human brain. Research in the field of artificial brains has three important goals in science:

  1. to understand from the work of neuroscientists how the human brain works, which is known as cognitive neuroscience;
  2. to prove through thought experiments in the philosophy of artificial intelligence, that it is possible, at least in theory, to create a machine that has all the cognitive (and ethical?) capabilities of a human being;
  3. to create, through a series of long-term projects in the field of AI, a machine with universal or general intelligence, that is, to create universal (general) artificial intelligence. This idea was popularized by Kurzweil [4], who named it strong AI, meaning that the machine would have to be as intelligent as humans.

As part of the first objective, researchers are using biological cells to create neurospheres (small clusters of neurons) in order to develop new treatments for diseases such as Alzheimer's and Parkinson's disease. Research within the second objective is related to well-known arguments such as John Searle's Chinese room argument [5], Hubert Dreyfus' critique of AI [6] or Roger Penrose's argument in The Emperor's New Mind [7]. These critics argued that there are aspects of human consciousness or expertise that cannot be simulated by machines. One reply is that such claims do not hold: the biological processes inside the brain can, in principle, be simulated to any degree of accuracy, without any deviation from the natural processes. This reply is quite old: it was made as early as 1950 by Alan Turing [3] in his classic paper Computing Machinery and Intelligence.

The third objective is referred to as universal artificial intelligence [9], strong AI [4] or artificial general intelligence [10]. Research in this area focuses on the implementation of artificial brains in conventional (digital) computing machines and, on this basis, on the analysis of whole brain emulation. Kurzweil claims that this could be done by 2025; Henry Markram, director of the Blue Brain project, has made a similar claim. The question of whether digital computers are indeed suitable for simulating continuous brain processes is being asked increasingly often.

In the following chapter, we will focus mainly on the third objective and attempt to provide possible answers to research questions such as: what potential ethical issues could occur, and what kind of consequences could be caused by introducing universal AI and intelligent learning environments into education?

 

The social level, or the ethical issue of AI

In focusing our attention on the second, and especially the third objective of research in the field of developing artificial brains, we must ask ourselves two fundamental questions at the very outset, namely: Can MACHINES (artificial brains) be taught morality and ethics? And, in relation to this question and the idea of introducing AI systems into education: Can machines drastically affect the education system as well? The philosophical starting point for answering these questions could be as follows:

Before giving machines a sense of morality, humans first have to define morality in a way computers can process, or "understand". This "understanding" means that algorithms of morals and ethics have to be defined in such a way that they can be formalized, i.e., translated into the language of science, and coded into a language understood by machines, preferably into machine language.
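To make this idea of formalization more tangible, the following minimal sketch (our illustration, not drawn from any existing system) shows how a moral constraint might be expressed as a machine-checkable predicate over a proposed action; the attributes expected_harm and violates_privacy are hypothetical stand-ins for whatever a real system could actually measure.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """A proposed action, described by hypothetical attributes a system might track."""
    description: str
    expected_harm: float     # estimated harm to humans; 0.0 means none
    violates_privacy: bool   # whether carrying out the action exposes personal data

# A formalized ethical rule is a predicate: it returns True if the action is permitted.
EthicalRule = Callable[[Action], bool]

rules: List[EthicalRule] = [
    lambda a: a.expected_harm == 0.0,   # "do not harm humans", made machine-checkable
    lambda a: not a.violates_privacy,   # "respect privacy", made machine-checkable
]

def is_permitted(action: Action) -> bool:
    """An action is permitted only if every formalized rule allows it."""
    return all(rule(action) for rule in rules)

if __name__ == "__main__":
    proposal = Action("share learner data with a third party",
                      expected_harm=0.0, violates_privacy=True)
    print(is_permitted(proposal))  # False: the privacy rule blocks the action
```

The point of the sketch is not the particular rules but the translation step itself: a norm such as "do no harm" only becomes usable by a machine once it has been reduced to quantities and predicates the machine can evaluate.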

Whether this is a difficult but not impossible task is the crucial question that will have to be analysed and provided with appropriate solutions.

While the problems of introducing AI into production and service activities (that is, using intelligent machines in a context where we can detect "errors" relatively quickly) do not have a dramatic impact on the "cognitive part" of society, the introduction of AI into education processes, which are surely fundamental to human civilization, is extremely risky and requires careful consideration of what should be done and to what extent (COM 237 [11], COM 759 [12], Dignum [13]). The consequences of errors can be catastrophic and, above all, long-lasting, as the results of introducing these innovations will only be seen many years into the future. Here are some initial warnings. For years, alarmist views have warned against the unanticipated effects of general (universal) artificial intelligence (AI) on society. Ray Kurzweil predicts that by 2029 intelligent machines will be able to outsmart human beings [4]. Stephen Hawking argued that once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate, which may constitute a threat to the human race. Similarly, Elon Musk warns that AI may represent a fundamental risk to the existence of human civilization.

 

Ethics, morality and AI

 

In short, when we talk about AI, we must consider it especially from the perspective of two fundamental problems, adopted from Chalmers' distinction concerning the explanatory gap [14] and related to the development and use of AI:

 

  • the easy problem, the implementation and use of AI in "intelligent machines - robots", where the consequences of malfunction or failure can be quickly detected, and 
  • the hard problem, introducing AI into general society, which includes especially cognitive and affective computing.

Let us focus initially only on the easy problem. As early as 1942, Isaac Asimov [8] was the first to reflect on ethical and moral issues in relation to robots, and he defined the three basic laws of robotics:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Later on, in 1986, Asimov added the Zeroth Law:  

  • A robot may not harm humanity, or, through inaction, allow humanity to come to harm.
  • (Revised First Law) A robot may not harm a human, or, through inaction, allow a human to come to harm, unless this interferes with the Zeroth Law.

The Fourth and Fifth Laws of Robotics were added later, and different authors have modified and interpreted these laws in different ways. When we refer to the "easy" problem of implementing AI in the following section of the book, we will use the term robot. A sketch of how the priority ordering behind Asimov's laws might be expressed in code is given below.
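As an illustration (ours, not part of Asimov's texts or the sources cited here), the strict priority ordering implied by the Zeroth through Third Laws can be read as a lexicographic comparison over candidate actions; the boolean flags below are hypothetical stand-ins for judgments a real robot would have to derive from its perception of the situation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    """A candidate action, with hypothetical flags for which laws it would violate."""
    name: str
    harms_humanity: bool   # would violate the Zeroth Law
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law

def choose(candidates: List[Candidate]) -> Candidate:
    """Pick the action whose violations rank lowest in the Zeroth > First > Second > Third hierarchy.

    Comparing boolean tuples lexicographically means that a single violation of a
    higher law always outweighs any combination of violations of lower laws.
    """
    return min(candidates,
               key=lambda c: (c.harms_humanity, c.harms_human,
                              c.disobeys_order, c.endangers_self))

if __name__ == "__main__":
    options = [
        Candidate("obey an order that would injure a bystander", False, True, False, False),
        Candidate("refuse the order and stay idle",              False, False, True, True),
    ]
    print(choose(options).name)  # refusing wins: the First Law outranks the Second and Third
```

Under this reading, a single Zeroth Law violation outweighs any number of lower-law violations, which is exactly the kind of formalized ordering that would have to be specified before a machine could "obey" such laws at all.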

In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of Great Britain jointly published a set of five ethical principles for designers, builders and users of robots in the real world [15, 16, 17]:

  1. Robots should not be designed solely or primarily to kill or harm humans.
  2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.
  3. Robots should be designed in ways that assure their safety and security.
  4. Robots are artefacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.
  5. It should always be possible to find out who is legally responsible for a robot.

In 2016, Tony Prescott revised these principles to differentiate ethical from legal principles [18]. A number of authors have since formulated various new laws, one of them defending the existence of a "new breed": Mark W. Tilden defined the following three guiding principles, his three laws of robotics:

  1. A robot must protect its existence at all costs.
  2. A robot must obtain and maintain access to its own power source.
  3. A robot must continually search for better power sources.

The essential thing about these laws is that they are written as if they had been formulated autonomously by the AI itself, expressing above all a concern for the existence of a new "breed" of intelligent machines. Tilden is one of the leading experts in robotics, one of those who can breathe life into these intelligent machines – robots – so that they may come to abide by ethical norms of a completely different kind than those known and established among people today.

On the basis of the above, the question of the ethical norms of AI creators arises, since from the moment an AI enters its learning stage (a life of its own), humans have only minimal possibilities of steering the learning process of this new AI entity. Because all such systems will be interconnected (for example, in the Internet of Things (IoT)), they will learn not only from humans, but also from each other. A logical question therefore arises: whom will they believe more, humans or their equals, especially bearing in mind that this equal entity holds the opinion that it must primarily protect itself and its own existence?

Many authors are concerned with the ethical issues associated with AI, but in the end, these are all just individual opinions. We are still at the very beginning on this matter; we have not even really made the first step. For this reason, in 2018, the EU and the European Commission established the European AI Alliance to consider these issues (COM 237 [11], COM 759 [12]). The consultation on the draft Ethics Guidelines on Artificial Intelligence (AI) concluded on 1 February 2019 [19].

 

 

AI and education

 

What about ethics in education in relation to AI? When we speak about intelligent learning environments that will be guided by AI, we are enabling AI not only to teach itself and create some new, humanoid entity parallel to humans, but also to impact the development of the entire human society, to define its values, and to prescribe its ethical norms.

Conclusion

We believe that the following recommendations (COM 237 [11], COM 759 [12]):

  • explicitly defining ethical behaviour,
  • crowd-sourcing human morality, and 
  • making AI systems more transparent,

should be seen as a starting point for developing ethically aligned AI systems. If we fail to imbue ethics into AI systems, we may be placing ourselves in the dangerous situation of allowing algorithms to decide what is best for us. For example, in an unavoidable accident situation, a self-driving car will need to make some decision, for better or worse. But if the car's designers fail to specify a set of ethical values that could act as decision guides, the AI system may come up with a solution that causes more harm. This means that we cannot simply refuse to quantify our values. By walking away from this critical ethical discussion, we are making an implicit moral choice. And as machine intelligence becomes increasingly pervasive in society, the price of inaction could be enormous: it could negatively affect the lives of billions of people.
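As a toy sketch (ours; the manoeuvres, outcome estimates and weights are all hypothetical), the point about specifying ethical values explicitly can be made concrete with a weighted decision function: if designers do not set weights like those below deliberately, the system still ends up applying some implicit equivalent of them.

```python
from typing import Dict

# Hypothetical outcome estimates for each available manoeuvre in an
# unavoidable-accident scenario; negative numbers stand for harm and the
# magnitudes are purely illustrative.
outcomes: Dict[str, Dict[str, float]] = {
    "brake_in_lane": {"occupant_harm": -0.6, "pedestrian_harm":  0.0, "property_damage": -0.2},
    "swerve_right":  {"occupant_harm": -0.1, "pedestrian_harm": -0.8, "property_damage": -0.1},
}

# The designers' explicit ethical weights. Leaving these unspecified does not
# remove the choice; it only hides it inside whatever the system happens to optimize.
weights: Dict[str, float] = {"occupant_harm": 1.0, "pedestrian_harm": 1.0, "property_damage": 0.1}

def score(option: Dict[str, float]) -> float:
    """Weighted sum of the outcome estimates; less negative is preferred."""
    return sum(weights[key] * value for key, value in option.items())

best = max(outcomes, key=lambda name: score(outcomes[name]))
print(best)  # "brake_in_lane" under these illustrative weights
```

The numbers here carry no moral authority of their own; the sketch only shows that once the decision is automated, some set of weights is being applied, whether or not anyone wrote them down.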

Machines cannot be assumed to be inherently capable of behaving morally. Humans must teach them what morality is and how it can be measured and optimised. For AI engineers, this may seem like a daunting task. After all, defining moral values is a challenge mankind has struggled with throughout its history. If we cannot agree on what makes a moral human, how can we design moral robots? Nevertheless, the state of AI research and its applications in society require us to finally define morality and to quantify it in explicit terms. This is a difficult but not impossible task. Engineers cannot build a "Good Samaritan AI" as long as they lack a formula for the Good Samaritan human. However, as mentioned already, when introducing and implementing AI (especially into general society and education), it is necessary to distinguish between two important fields that are relevant to this process, namely:

  • the legal field, and
  • the ethical or moral field.

In doing so, we must abide by the legal and ethical norms both:

  • in the field of using AI, which is a relatively simple problem in this context, as deviations from norms and regulations will be directly detected in the wider society, and especially
  • in the field of creating AI, and creating such algorithms that will contain all the necessary internal elements for the prevention of activities that could potentially harm the individual or humanity as a whole, such as the most elemental version of the four laws of robotics as defined by Isaac Asimov [8].

In the spirit of today's world and the advancements made in this field, these Asimov laws (and similar ones) could be rewritten simply by replacing the word "robot" with the acronym "AI", to imply a more generalized meaning. In the future, moral and ethical laws in this form will still have to be defined and written accordingly, in a language understood by AI.

References

[1]   B. Aberšek, Problem-Based Learning and Proprioception. Newcastle upon Tyne: Cambridge Scholars Press, 2018.

[2]   B. Aberšek, B. Borstner, J. Bregant, Virtual Teacher: Cognitive Approach to e-Learning Material. Newcastle upon Tyne: Cambridge Scholars Press, 2014.

[3]   A. Turing, Computing Machinery and Intelligence. Mind, LIX(236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423, 1950.

[4]   R. Kurzweil, The Singularity Is Near. London: Duckworth Overlook, 2005.

[5]   W. Bechtel, A. Abrahamsen, Connectionism and the Mind. Oxford, UK: Blackwell Publishers, 2002.

[6]   H. Dreyfus, What Computers Can't Do. New York: MIT Press, 1979.

[7]   R. Penrose, The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford: Oxford University Press, 1989.

[8]   I. Asimov, I, Robot. New York: Gnome Press, 1950.

[9]   P. Voss, Essentials of General Intelligence: The Direct Path to Artificial General Intelligence. In: Goertzel, B., Pennachin, C. (Eds.), Artificial General Intelligence, Berlin: Springer, 131–159, 2006.

[10]  S. Baum, A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy. Global Catastrophic Risk Institute Working Paper 17-1, 2017.

[11]  COM 237, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Coordinated Plan on Artificial Intelligence. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM%3A2018%3A237%3AFIN, 2018.

[12]  COM 759, Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions: Coordinated Plan on Artificial Intelligence. Retrieved from: https://eur-lex.europa.eu/legal-content/EN/TXT/DOC/?uri=CELEX:52018DC0795&qid=1546111312071&from=EN, 2018.

[13]  V. Dignum, Ethics in artificial intelligence: introduction to the special issue. Ethics and Information Technology, 20, 1–3, 2018.

[14]  D. Chalmers, The Conscious Mind. Oxford: Oxford University Press, 1996.

[15]  J. Stewart, "Ready for the robot revolution?". BBC News, 3 October 2011. Retrieved 01.12.2018 from: https://www.bbc.com/news/technology-15146053.

[16]  "Principles of robotics: Regulating Robots in the Real World". Engineering and Physical Sciences Research Council. Retrieved 01.12.2018 from: https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/.

[17]  A. Winfield, "Five roboethical principles – for humans". New Scientist. Retrieved 01.12.2018 from: https://www.newscientist.com/article/mg21028111-100-five-roboethical-principles-for-humans/

[18]  V. C. Müller, "Legal vs. ethical obligations – a comment on the EPSRC's principles for robotics". Connection Science, doi:10.1080/09540091.2016.1276516, 2017.

[19]  HLEG, "Ethics Guidelines on Artificial Intelligence". Brussels: EC, 2018. https://ec.europa.eu/futurium/en/system/files/ged/ai_hleg_draft_ethics_guidelines_18_december.pdf