A translation adapted from the German original; source: "AI-Thinking" (2019), p. 159
The interdisciplinary research briefly presented in the original edition of "AI Thinking" leads to principles that we have already announced in the book's title. Understood and implemented, they can spare decision makers and executives who introduce AI, as well as responsible developers, a great deal of trouble in the future.
Either way, it is coming: AI cannot be prevented. It is up to you: if you aim from the start at good AI, the enabling, simplifying, and liberating kind, and help develop great AI solutions, then you will get your AI off the ground faster and with fewer losses. Bad AI is merely biased toward the occasional interests of large interest groups, lobbies, or corporations, and should be viewed critically as such. In the longer term, however, it does not help to demonize AI as long as you are working on good AI solutions. Good AI is the kind that allows people more degrees of freedom, in the sense of Heinz von Foerster's ethical imperative; bad AI, in the same sense, is "preventing AI".
First of all, the following applies:
Start from the "good AI" and avoid the "bad AI"!
The human being should stand in the foreground of every technical development (Human-Centered AI). What is meant is not the deficient person with "AI in the head", but the emancipated, sovereign, wise, creative, and improvising human (Human Difference).
Because human potentiality is the highest value:
Prioritize human potentiality (over the technical or formal actuality of "facts")!
He who seeks, finds. Simply enabling human orientation out of potentiality gives AI its only legitimate purpose. It is not about some blanket clarity for everyone, but about the momentarily best possible clarity of a sovereign human subject: the certainty of orientation.
Enable human clarity!
AI is the tool of tools. It can only "unfold" as such, and only function in the human sense, if it is freed from its host, the human brain, and "cleanly", that is, appropriately, transferred to suitable networks. Though liberated, AI remains a tool of the human being, which is why:
Only entrust AI with facts, and leave meaning to humans!
Driven by a non-objective craving for validation, people like to create facts that force others into a reactive position. Whether in a human team or directly in an application, such an attitude does not transfer to AI itself, but it does cause resource-consuming distortions.
So: No power decisions!
Linus Pauling, Nobel Laureate in Chemistry in 1954 and Nobel Peace Prize winner in 1963, said: "The best way to come up with a good idea is to have lots of ideas." The more creative, authentic, and spontaneous impulses emanate from people, and the more data AI has, the better it can fulfill its role:
Data and steering impulses come from people!
The human being draws on his longings; AI draws on factual knowledge, that is, data.
Thus, it is clear:
Act intersubjectively, trust in the subject!
Posing as infallible by clinging to some supposedly solid stance is no sign of sound knowledge. Such an attitude serves only the exercise of power by people over other people, not an intelligent, efficient machine whose technical brain enables us to dynamically balance out the global distortions of our human reality equilibrium. Man should unfold in his upright spontaneity. For us humans, the world as a whole is not trivial, yet AI operates within its trivial segments. The only source of non-trivial information remains inherently coherent human input. Therefore, note:
Always orient yourself upright in the world!
This attitude correlates with the aesthetic imperative of Heinz von Foerster, who used to say: "If you want to see, learn to act."
All in all, different slogans apply to humans and AI:
Man should commit to the following values in the AI era:
Potentiality. Orientation. Longing.
AI, on the other hand, unfolds as an intelligent enabling infrastructure that has the following values to serve:
Topicality. Efficiency. Clarity.