Recently, the world has started talking about artificial intelligence (AI for short). The idea itself is old: the first papers discussing the use of computers to create intelligent machines appeared as early as the 1950s, including concepts such as machine learning and neural networks. However, the stone that ChatGPT threw into the world's markets did what generations of wise men had failed to do, and artificial intelligence became the topic of the day.
Today it is clear to almost everyone that generative artificial intelligence can improve our lives in almost every field. Computers can now read and analyze huge amounts of data, far beyond the digestive capacity of the biological brain, and extract insights "by themselves", without human guidance.
This may completely change many areas of our lives. Suddenly, new fields have opened up: cars without a human driver (autonomous vehicles); medicines tailored to the specific genetic and clinical history of an individual patient; or even the analysis of radio signals from outer space, enabling us to distinguish between "natural" signals and "artificial" ones that may have been sent by intelligent beings (if such beings exist).
Along with the promise there is also great fear: Does the new technology carry risks? And if so, can we control it and limit the development and release of products that could cause a disaster? Or, the greatest fear of all, are we approaching the moment when the machines become so much smarter than us that they decide they no longer need us, and get rid of us? Could it be that we have started a process that will bring an end to the human race? In short, are we plummeting toward the age of the machines?
Before we go any further, let me tell you that if this happens, it won’t be the first time in human history.
I imagine the reader who has come this far is now raising an eyebrow: since when were there intelligent machines on earth, smarter than the human beings of their time, that used their intellectual superiority to overcome them and eventually to destroy them?
To answer this question, let me dwell a little on the concept of a "machine". What is a machine? In the context of a digital computer, we are talking about something made up of many components, mainly transistors made of silicon and other materials. Each transistor represents one "bit": if current flows through it, we say the value of the bit is 1; if no current flows through it, the value is 0. Today there is a race to develop other computing technologies, primarily quantum technologies. In quantum computing, the central element is not a bit with a definite value of 0 or 1 (as in a transistor) but a superposition, a simultaneous mixture of 0 and 1. We call such an element a quantum bit (qubit for short).
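For readers who want a more concrete picture, the "mixture of 0 and 1" can be sketched in a few lines of code. This is my own illustration, not taken from the article: a qubit is described by two amplitudes, and only when it is measured does it collapse to a definite 0 or 1, with probabilities given by the squared amplitudes.

```python
import random
from math import sqrt

# A minimal sketch (illustrative, not from the article): a qubit's state is a
# pair of complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1.
# Measurement yields 0 with probability |alpha|^2, otherwise 1.
def measure(alpha: complex, beta: complex) -> int:
    p0 = abs(alpha) ** 2
    assert abs(p0 + abs(beta) ** 2 - 1.0) < 1e-9, "amplitudes must be normalized"
    return 0 if random.random() < p0 else 1

# An equal superposition: before measurement the qubit is "both" 0 and 1;
# each measurement forces one outcome, with a 50/50 split over many trials.
counts = [measure(1 / sqrt(2), 1 / sqrt(2)) for _ in range(10_000)]
print(sum(counts))  # roughly 5000 of the 10,000 measurements give 1
```

A classical bit, by contrast, would correspond to the two extreme cases only: amplitudes (1, 0) or (0, 1), which always measure as 0 or always as 1.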
As far as we know today, nature (including the human brain) is quantum, and the digital computation we perform today with bits (which have only two states) is primitive compared to what is done in nature. Either way, even if the human brain is an "advanced" quantum computer rather than an "inferior" digital one like those we build from transistors, it is still a machine. This human "machine" is not made of silicon; it is far more complex than the digital computers we have invented, and, for now, more intelligent. Still, it is a machine made of (biological) materials found in nature, and subject to the laws of nature.
Human-like creatures (hominids) began to walk the earth several million years ago. Only about a quarter of a million years ago, a mutation appeared somewhere in central Africa, producing a new species that we call Homo sapiens. One of its main advantages was a larger brain, which translated into higher intelligence. The evolutionary process resulted in the new "machine" (Homo sapiens) gradually taking over the world, while reducing the numbers, and eventually causing the disappearance, of the older "machines" (such as Homo erectus or the Neanderthals).
Although it is hard for us to accept, we too are "machines" in the sense described above. Moreover, we are learning machines. Our behavior is dictated not only by the internal software we were born with (our DNA), but also by the experience we gather throughout our lives, which changes our behavior. More than that: we also know how to produce such "new machines". We call them "kids".
Once we understand this, it becomes self-evident that an intelligent machine can also do unexpected things, some of which may have extremely negative consequences. "Machine learning" means that these machines can change their behavior based on their "life" experience, just as our kids do, no matter how much parental control we try to apply. In principle, it is impossible to guarantee that intelligent machines will never do bad things.
Imagine that in some future world it is forbidden to have children without a license from the government, and that the government grants such a license only to parents who prove, a priori, that the child they want to create will never do anything bad or wrong. It is clear from the argument above that this is basically impossible.
Many governments around the world are currently trying, alongside the accelerated promotion of artificial intelligence, to control its development through regulation meant to prevent future damage from these technologies. As I explained above, this is impossible. An intelligent machine is no longer "mechanical" in the old sense: its operation is fundamentally unpredictable.
Nevertheless, there is much that can be done in this area, just as we do with the intelligent learning "machines" we call kids. We need to find ways to "educate" the machines, teaching them not only what they must do but also what must not be done. We may also have to grant them power only gradually, after they prove to be "well educated". The ethics of artificial intelligence is therefore an integral part of the development of this new field.
Incidentally, this is why ethics and regulation were included among the main topics of the national plan for artificial intelligence that was submitted to the government several years ago, led by me and my colleague, Professor Matania. Unfortunately, the instability and deterioration of the Israeli political system in recent years resulted in only a very partial implementation of the plan. Although we have lost several years, it is still not too late to pick up the baton and start running.
This will be one of the main topics that will be discussed at the Cyber Week conference that will be held on the Tel Aviv University campus in the last week of this coming June.
Major Gen. (Ret.) Prof. Isaac Ben-Israel is the head of ICRC – Blavatnik Interdisciplinary Cyber Research Center, Tel Aviv University.