#2 Why artificial intelligence is different from consciousness

Interview Sequences with Leon Tsvasman powered by "Intelligent World"*

Intelligent World: Wasn't the term "artificial intelligence" an unfortunate choice? After all, laypeople quickly associate it with artificial consciousness and "thinking" machines.


Tsvasman: AI is meant to automate the "intelligent behavior" of humans, behavior that above all requires the ability to learn. For this reason, computer scientists like to equate AI subfields such as "machine learning" or "deep learning" with AI as a whole. Besides this capacity for self-optimization, AI research covers other areas as well, such as neural networks, which attempt to imitate the human brain, and other ambitious projects.


What is automated is the expert's capacity for "intelligent behavior", not the whole person.


That is why AI systems are also referred to as "expert systems". As assisted intelligence ("weak AI"), such a system is used to automate highly focused tasks in order to carry them out more efficiently.


The higher and more complex form of AI (often called augmented intelligence) is meant to help us make better situation-related decisions. Automating assistance and consulting expertise with the help of huge amounts of data (big data) is certainly a remarkable data-processing achievement. But it is still not a mental achievement of a conscious being. To explain the difference, I have to make a brief philosophical digression.


As a conscious person you are an individual, unique and irreplaceable. In the role of an expert, a person solves job-related problems and can be replaced in that role by the holder of a comparable skill profile. Consciousness enables a human individual to live among other individuals in a society. A conscious individual remains largely autonomous as a whole, has free will, can judge, and is responsible for his or her actions.

From an evolutionary point of view, this autonomy is the most important prerequisite for consciousness, and a rule of thumb applies: the higher the consciousness, the greater the autonomy. "Lower" animals such as insects, for example, are less autonomous. They are massively governed by instincts and reflexes and, as individual organisms, can hardly overcome their behavioral patterns when environmental conditions change. Yet they are fascinating in their efficiency and often develop astonishing swarm intelligence.


Differences between machines and humans


In cybernetics we say that consciousness is "informationally closed". At the same time, it is structurally coupled to other subjects, because all human brains look back on the same evolution and each is located in an autonomous body. A subject is therefore fundamentally unable to make valid statements about its environment without constantly experimenting with it. From an evolutionary point of view, humans have achieved the highest possible degree of autonomy, with all its privileges and disadvantages. One of the privileges is thinking: the ability to weight experiences internally in order to act adequately in a changing environment. Typical disadvantages include dependency on languages or media.


Autonomous learning software, on the other hand, remains just an expert system. Although it can answer questions, it never asks about the meaning of the knowledge it processes. Its mode of data transmission makes such AI efficient and precise, but it is doomed to remain a tool. Such tools can solve job-related problems more efficiently than a person in the role of an expert. Yet its reliance on data transfer makes AI "informationally open": it remains a "trivial machine" without consciousness.
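The cybernetic distinction behind the term "trivial machine" (Heinz von Foerster's) can be sketched in a few lines. This is a hypothetical illustration, not from the interview: a trivial machine maps each input to a fixed output, while a non-trivial machine has an internal state that its own operation changes, so the same input can yield different answers over time.

```python
def trivial_machine(x: int) -> int:
    """Fixed input-output mapping: fully predictable, history-free."""
    return 2 * x

class NonTrivialMachine:
    """Output depends on an internal state that every input alters."""
    def __init__(self) -> None:
        self.state = 0

    def step(self, x: int) -> int:
        self.state += x          # the machine's own history reshapes it
        return 2 * x + self.state

# The trivial machine always answers the same way:
assert trivial_machine(3) == 6
assert trivial_machine(3) == 6

# The non-trivial machine gives different answers to the same input:
m = NonTrivialMachine()
print(m.step(3), m.step(3))  # 9 12
```

Here `state` stands in for the inner history that makes a system's responses unpredictable from its inputs alone; an observer of the trivial machine, by contrast, can read its behavior off its inputs completely.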


Technically, AI could achieve consciousness only under one condition: if the autonomy it does not need by its nature were simulated, and if media-mediated communication with structural coupling took the place of direct data access. That would then correspond to the idea of autonomous intelligence, or "strong AI".