Interview Sequences with Leon Tsvasman powered by "Intelligent World"
Intelligent World: Dr. Tsvasman, do we need fundamental rules for strong AI in the future?
Tsvasman: As I indicated in our last conversation, AI primarily means learning systems. With a certain degree of autonomy, which is essential for learning, these will soon program themselves. Or upgrade themselves so that they can fulfill the human mission.
The question now is which “future” we are looking at. If we look at the “AI era” - which will probably last as long as we have to engage with the issue actively in order to intervene operationally - we will keep adjusting these rules. But if we look further ahead, to what I will call the “post-AI era” - analogous to today's post-industrial information society - this phase could, under favorable circumstances, arrive within the next few generations. And I believe that in this post-informational consciousness society, no AI presence would be recognizable, because AI would be invisibly involved in anything and everything - as an equalizing, managing and reality-generating force. The most important developments will then no longer be “technical” but will be determined by the human thirst for knowledge, the will to create and the power of vision.
Asimov’s robot laws are not suitable as ethical rules for AI
However, Asimov’s famous robot laws are not suitable as ethical rules of thumb for AI. Their author, Isaac Asimov, was a scientifically superbly educated polymath and the favorite author of my youth. But as a science fiction writer, like all belletrists, he was primarily concerned with human relationships. Otherwise he would not have been successful as a writer and would not have had the chance to help humanistic science fiction literature reach its peak. He developed his characters, including the robots, to hold up a mirror to his fellow human beings: we readers are meant to see ourselves through “alien” eyes. The older classics used fairytale or mythical beings such as the Arab jinn or the Jewish golem; science fiction liked to instrumentalize aliens or human-like robots with hearts and minds. Such a being could be given commandments, hence Asimov's First Law of Robotics: "A robot may not injure a human being or, through inaction, allow a human being to come to harm."
When cybernetic systems “think”, internal data processing takes place without concepts. It is more like the communication between our brain and our gut - without words, and not in numbers either.
So what does “inflicting harm” mean for the internal data processing of an AI system? It is as vague as “good” and “evil”. Most dictators, sect leaders and conspirators have always believed they were acting in the interests of humanity. This blurriness persists even when humans make decisions and take responsibility for other humans.
How can an informationally closed, self-referential system - which is what we all are from a cybernetic point of view - take responsibility for a different autopoietic system? And with the real, non-anthropomorphic and often invisible AI from the cloud, we share no common biological evolution and therefore no “structural coupling”. That is why our relationship with it is different. I will try to explain this briefly.
From the Kantian imperative to cybernetic ethical rules for artificial intelligence
I would like to introduce another author: Heinz von Foerster, praised by connoisseurs as the “Socrates of cybernetic thinking”. Building on Immanuel Kant, he established the ethical imperative of “second-order cybernetics”. It applies to people, but also to all self-regulating autonomous systems, and reads roughly: "Act always so as to increase the number of choices." For me, this ethical principle of action emphasizes personal responsibility and individuality and enhances the other's freedom of orientation within a community. Later, von Foerster added the aesthetic imperative: "If you desire to see, learn how to act." This is to be understood in terms of evolutionary biology, because what we see has more to do with our origins than with “objectivity”. This thought also informs the title of the volume of conversations "Truth Is the Invention of a Liar" by Heinz von Foerster and Bernhard Pörksen from 2003.
These two imperatives gave me the idea of grounding the ethical rules for AI on a cybernetic basis. Underlying this is an image of humanity in which everyone should discover and realize their potential. If “becoming a butterfly” is the caterpillar's potential, then a perfect “caterpillar paradise” is not really what the caterpillar wants.
Because of “structural coupling”, all people share a common potential. We do not know what it really is, but human culture provides indicative clues. Since we do not know, there are different images of humanity, but together we feel our way towards the truth. And if a global AI is to help us in the sense of our human potential, it will not cause harm, but support us. Drawing on my own system-theoretical considerations, I formulated the following rule for AI in our book “AI-Thinking”, quoted here in paraphrase: “Operate to enable human potential, orientation and integrity. Avoid irreversible, long-term distortions in the equilibrium of the human environment.”