Does the Transfer of Decision-Making Responsibility to Artificial Intelligence Need Limits?

Artificial intelligence (AI) is the branch of computer science dealing with the reproduction of human-level intelligence, self-awareness, knowledge, conscience and thought in computer programs; with intelligence exhibited by an artificial (non-natural, man-made) entity; or with the essential quality of a machine that “thinks” in a manner similar to, or on the same general level as, a real human being.

The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects. The general problem of simulating (or creating) intelligence has been broken down into a number of specific sub-problems: particular traits or capabilities that researchers would like an intelligent system to display. These traits have received the most attention in research. The attempts are not geared towards “challenging” Allah (the God) but towards discovering the benefits of His permissible handiwork for the common good of mankind. To create artificial beings, perhaps, or thinking machines! Artificial intelligence was founded on the claim that a central property of humans, intelligence, can be so precisely described that it can be simulated by a machine. SubhaanAllah! This raises philosophical issues about the nature of the mind and the ethics of creating artificial beings that are “intelligent”, issues which have been addressed by myth, fiction and philosophy since antiquity. Let us move on to our contemporary topic.

Are there decisions that humans should not leave to machines, even if the latter are classed as being “intelligent”? To answer this question, a social debate is needed. Thus, it becomes a case of determining the limits of the rational reconstruction of moral beliefs. We are now used to the autopilot in aircraft, automated creditworthiness checks at banks, and algorithmic stock trading. Since 2012, predictive policing has been used regularly, mainly in the United States (and to a relatively insignificant extent in Germany), to prevent crime. The use of artificial intelligence to evaluate the performance of students at our universities and of workers is currently undergoing practical tests. In the meantime, AI software can identify skin cancer on photographs as effectively as human specialists… intelligent machines!

Many experts believe, however, that there is a limit beyond which it is ethically unacceptable to transfer decision-making responsibility to artificial intelligence. One example of this is the “killer robot” capable of eliminating enemy soldiers during a military conflict. The International Committee for Robot Arms Control, for example, has reached a consensus that “machines should not be delegated with the decision to kill or use violent force”.

In other fields, artificial intelligence could possibly help to avoid false judgements. Here is an example: a study in Israel showed that the decisions of criminal judges are influenced by the length of time until their next meal break. Judgements are clearly milder on a full stomach! An algorithm (an AI computer program) does not get hungry, so perhaps it would make fairer rulings. In practice, however, the exact opposite has been observed so far. For example, a 2016 study by ProPublica, the US news organisation, found that algorithms meant to estimate the risk of reoffending among convicted offenders falsely predicted future crimes twice as often for African Americans as for whites. To err is human; yes, this is a popular belief. However, if the intelligent agent (a non-human) errs, that error occurs only by coincidence, for the agent can hardly plan, predict and learn except within the limited strength of expert systems, which are based on the experience and knowledge of the expert.
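The disparity ProPublica reported can be made concrete with a toy calculation. The data below is invented purely for illustration (it is not the actual COMPAS data); it shows how one measures the false positive rate per group, where a “false positive” means someone was flagged as high risk but did not reoffend:

```python
def false_positive_rate(predictions, outcomes):
    """Share of non-reoffenders (outcome 0) who were wrongly flagged (prediction 1)."""
    flags_for_non_reoffenders = [p for p, o in zip(predictions, outcomes) if o == 0]
    if not flags_for_non_reoffenders:
        return 0.0
    return sum(flags_for_non_reoffenders) / len(flags_for_non_reoffenders)

# Invented toy data: 1 = flagged high risk / did reoffend, 0 = otherwise.
group_a_pred = [1, 1, 0, 1, 0, 0, 1, 0]
group_a_true = [0, 1, 0, 0, 0, 0, 1, 0]
group_b_pred = [1, 1, 0, 0, 0, 0, 1, 0]
group_b_true = [0, 1, 0, 0, 0, 0, 1, 0]

fpr_a = false_positive_rate(group_a_pred, group_a_true)  # 2 of 6 non-reoffenders flagged
fpr_b = false_positive_rate(group_b_pred, group_b_true)  # 1 of 6 non-reoffenders flagged
```

In this invented example group A's false positive rate is exactly twice group B's, mirroring the kind of two-to-one disparity the ProPublica study described.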

As background, you need to know that the advances made in artificial intelligence over recent years are primarily based on a group of methods that can be categorised under the heading “machine learning”. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. For a practical example, the computer program that triumphed over a series of world-class players of the board game Go was initially fed just the rules of play. Afterwards, the machine trained only by playing against itself. That worked perfectly for Go. In the social world, however, a mechanism like this leads to existing prejudices being reinforced rather than overcome. That is why IT researcher Kate Crawford writes that the disadvantages of AI systems disproportionately affect groups that are already disadvantaged, for example because of their skin colour, their gender or their socioeconomic background.
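The self-play idea can be sketched in miniature. The following is not the Go program's actual algorithm, only a minimal illustration of the same principle: the program is given nothing but the rules of a toy game (Nim: players alternately take 1-3 stones, and whoever takes the last stone wins) and improves purely by playing against itself and rewarding the moves of whichever side won:

```python
import random

random.seed(0)
value = {}  # learned estimate of how good each (stones_left, move) pair is

def choose_move(stones, explore=0.1):
    """Pick the best-valued legal move, with occasional random exploration."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < explore:
        return random.choice(moves)
    return max(moves, key=lambda m: value.get((stones, m), 0.0))

def self_play_game(start=10):
    """Play one game against itself and update the value table from the result."""
    history = []  # (player, stones_before_move, move)
    stones, player = start, 0
    while stones > 0:
        move = choose_move(stones)
        history.append((player, stones, move))
        stones -= move
        winner = player        # whoever moved last took the final stone
        player = 1 - player
    for p, s, m in history:    # reward the winner's moves, punish the loser's
        reward = 1.0 if p == winner else -1.0
        old = value.get((s, m), 0.0)
        value[(s, m)] = old + 0.1 * (reward - old)
    return winner

# Train purely by self-play: no strategy is ever programmed in.
for _ in range(5000):
    self_play_game()
```

After training, the table has learned, for instance, that with 3 stones left the winning move is to take all 3, without that rule ever having been written into the program. For Go the real system used far more sophisticated machinery, but the principle is the same.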

Machine learning is also responsible for a second worrying characteristic of many AI systems. An artificial intelligence that trains itself is a “black box”: a device with known input and output characteristics but an unknown method of operation. In other words, a black box is an opaque system whose decision-making processes are almost impossible to follow. That is why the AI Now Institute is demanding that public institutions responsible for criminal justice or health, for example, should not use black-box algorithms. At the very least, the programs should be made accessible to the public for tests and checks. That sounds like a rational demand.

By Alhaji Dr. Foday M. Kallon, Lecturer, Artificial Intelligence, Njala University

on 26 June 2018