The Three Ways of AI: Geopolitics vs. Human Leadership

In the book "AI Thinking", published in March 2019, we discussed three global AI scenarios: expansion, upgrade, or salvation. Geopolitically, these scenarios correspond to the three traditional roles of technology in its relationship to humans: supporter, educator, and savior. At present there are fundamental differences between the Asian and Western perspectives, particularly visible between the US, Japan, China, and Europe - the latter being strongly influenced by the efficiency-driven industrial history of the 20th century.


Insight into the practice of these countries has confirmed my guess. The first attitude increasingly refers to the Japanese vision of Society 5.0, considered Japan's "smart utopia"; the second to the Chinese principle of a benevolent social point system, rather in the sense of Confucius; and the third to the transcendent emancipation of individual human spirituality, which is the oldest, Vedic and therefore Indian, concept. The Indian potentiality-perspective is strictly unsystematic, the Japanese vision seems to prefer socio-cultural practice, and the Chinese the socio-systemic. The last bears a certain resemblance to the somewhat discrepant German reality, which seems to share Japanese values but prefers a more or less cautious shaping of socio-systemic practice. Its hypersensitivity to privacy issues results from a positivist image of man, putting the actuality of an acting person before their potentiality as a human subject.

However, this pathway, as the third way, could indeed be asserted in its positivist version of humanist ethics as Europe's geopolitical mission - once the appropriate infrastructural foundations have been laid.

Personally, I see no fundamental discrepancy between these visions, but only different phases: the Chinese way seems initially viable (AI era), the Indian way final (post-AI era), with the post-industrial Western positivist-ethical concepts in between (despite the rather forgotten visions of the Noosphere as a new evolutionary stage for Earth by the polymathic geologist Vladimir Vernadsky and the theologian Pierre Teilhard de Chardin). (*)


In the rational-positivist way of modern Europe, well represented in the German economy, science plays a system-consolidating role. To substantiate this role with regard to the AI discourse, however, the media-cybernetic nature of AI must be recognized, and at least its basic terms - intelligence, intersubjectivity, and mediality - must be conceptually delimited. A practical problem is the short-term thinking of decision makers, whether managers in business or executives of public organizations. The average Western leader only designs things when there is a chance of personally reaping the pragmatic, or at least aesthetic, fruits. The rest (sustainability, ethics, visions of the future) is rhetorically important but has little value on the personal level as a reason to invest individual energy in the current situation - which means a lack of motivation and inspiration to shape future spaces.

China and Japan, on the other hand, have a tradition of long-term orientation in all important decisions, as Geert Hofstede noted, among others, in his cybernetic intercultural studies.

Long-term thinking that is not pragmatically motivated but arises from the sapient potentiality of the human being is lacking in Western leadership cultures, or it is outsourced to institutional religions and, if present at all, only religiously motivated - which in the long run would not make sense.


(c) Leon Tsvasman


*A Note On Geopolitical Cybernetics And Human Leadership


Geopolitical models function by adapting complexity to enable viable ordering models. We used to speak descriptively about different cultures; Geert Hofstede later identified cultural dimensions - a cybernetic concept that made finer differentiation possible. Today we are experiencing a so-called infosomatic gap between differently scaled realities. In particular, the conceptual semantics of different actualities ensures geopolitical differentiation. The analysis can be valid everywhere; only the synthesis differs.

For example, one can assume that gender-specific differences are socially irrelevant (gender theory). At the level of reduced complexity, this conceptual reduction enables an equality that is good for common economic activity. From the epistemological perspective of evolution, however, one must be aware that sexual reproduction and the sexes are generally older, and therefore more determinative, than the species themselves. There are only two valid ways of coping with complexity that can be implemented in the long term from a civilizational perspective, and from which a geopolitical model can draw sustainable benefits.

I would like to call the first the path of subjective singularity. It is essentially effective because it draws on cognitive orientation. It thus focuses on the preservation of sapient potentiality, which can only arise spiritually - this is, at the least, the Indian model. Its appropriate attitudes are plurality and meta-subjective confidence construction through confidence. The second way proceeds through intersubjective totality. It is much more efficient in the incipient AI era. It moves toward controlled consumption and relies on the emphasis of actuality. Its appropriate attitudes are controlled homogeneity and trust construction via scalable over-subjective control. Both paths spring from the fundamental continuities of ancient Asian cultures. Both result in the same global potentiality of a valid infosomatic equilibrium, which, however, cannot be achieved as long as both pathways, at the level of biological drives, exploit different transcendental longings - freedom vs. security - and cultivate reverse-mirrored reactions. All historical compromise models increasingly show their structural weaknesses, which I would call infosomatic inconsistency.

Only a global, self-regulated AI driven by sapient impulses will be able to sustainably manage the desired civilizational balance in the sense of Human Leadership.

Who are we?

We are AI enthusiasts (a practicing thinker, a thinking practitioner, and an empathic muse of equilibrium) who believe that AI is a useful tool for improving our civilization in ways that are worth thinking ahead about. AI Thinking is a mindset that empowers humanist potential. AI-Thinking is a foundational work for seizing this opportunity. Our intention is to inspire, to unlock potential, and to point out the global risks, ranging from the irreparable to the fatal.


© 2019 by Leon Tsvasman & Florian Schild, Art Background by Rudolf Hürth, Graphical elements by Katharina Piriwe and Marina Skepner.