What Viruses, Media and Governments Have in Common
Unreliable governments, deadly viruses or misunderstood climate change could be dealt with in a gentle, mostly preventive manner. Most global problems are long-standing, self-sustaining distortions in the medialized complexity equilibrium of an underrepresented actuality; they require real-time equalization through global AI, driven by ethically founded intersubjective steering.
Whether in research or in administration, the problems of strategic and operational decision-making by humans are more or less obvious: overly bold rules, irrational factors in human employees, media-dependent communication, and so on. The modern automated administrative problem-solving routine, optimized for efficiency, is also vulnerable. It fails completely when faced with global problems, and not only because of insufficient interdisciplinary knowledge, more or less inadequate regulatory models or distorting mediality.
The problem also has fundamental epistemological roots.
Human-driven administrative structures can deal with the operationally accessible complexity of a very narrowly framed topicality only defensively or reactively. The preventive approach, however justified, leads to distortions in the structures of actuality and causes further problems, which are always deferred to a vaguely definable future, since access to the relevant complexity in human decisions is excessively restricted.
Human-made assumptions are half-truths resting on limited, barely representative or even randomly collected and almost always inadequate amounts of data; the resulting decision is therefore largely arbitrary, "contaminated" by the situation of the particular decision-makers, their group dynamics, lack of time, and so on.
AI-driven decisions would also rest on epistemic half-truths, but these would correlate better with differences assessed in real time and would therefore yield far more plausible forecasts. The preventive potential of an AI-controlled civilization administration would be enormous, and complexity problems such as climate, epidemics or nutrition could actually be addressed. Nor would this be a discrimination against people: their potential lies not in technocratic governance but in the creative steering of their world, not as a trivial machine but as a world of life.
This vision meets with particular incomprehension because people regarded as successful or influential decision-makers evidently identify too strongly with technical or operational intelligence, which is exercised according to more or less clear rules and leaves little room for interpretation, a preliminary stage of the basic algorithms of AI systems. Studies of the human brain, however, show that this type of intelligence is not really human-specific in terms of its natural potential.
The inner usefulness of the social balance is more distorted than that of the natural one. Take natural viruses, such as the current coronavirus, as an example. In nature they promote the genetic adaptation of higher organisms to a changing environment. They do not necessarily cause disease, because their inner rationality is to live in their hosts for as long as possible. Viruses become deadly through the anthropogenic distortion of the natural balance, among other factors.
The anthropogenic distortion of the environmental balance shows most clearly in the discrepancy between the potential and the actual contribution of the forces involved in the overall performance. A strong AI acting as the "technical brain of the world" would largely compensate for this distortion, and global problems could be solved, provided that human potential can unfold in parallel as an intersubjective, leading source of knowledge for the epistemic and ethical impulse-giving, or steering, of a human-driven civilization.
Several Notes on the Current Discussion
The following remarks result from discussions of the text above. They are aimed at less attentive readers, in order to anticipate the less constructive criticism.
My implicit vision is more or less the opposite of Orwell's and Huxley's anti-utopian warnings. Both criticize utopias governed by human rules, which resemble today's sociocracies far more than they resemble my vision of steering actuality through cybernetic relevance according to ethical principles. The central idea is to delegate the technical management of the medialized topicality to AI, so that people can emancipate their natural (creative and ethical) potential and bring it to bear in the first place.
"Technical governance through global AI" is only possible in connection with emancipated human creativity and otherwise makes no sense.
Cybernetic handling of complexity with real-time data does not mean merely complexity-reduced causality, since cybernetic systems work with relations, not with things, entities or phenomena. Likewise, the human economy cannot be reduced to a specialized division of labor carried out in offices.
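To make the idea of relation-based, real-time equalization a little more concrete, here is a minimal, purely illustrative sketch in Python. It is not part of the original proposal: the relation names, the reference balance and the gain value are invented for illustration, and the "equalization" is reduced to a simple proportional feedback loop that nudges each observed relation toward a reference, rather than any actual model of global AI.

    # Toy sketch (illustrative assumptions only): a negative-feedback loop that
    # continuously "equalizes" a set of relations toward a reference balance.
    from dataclasses import dataclass

    @dataclass
    class Relation:
        a: str          # one side of the relation
        b: str          # the other side
        weight: float   # currently observed strength of the relation

    def equalize(relations, reference, gain=0.2, steps=50):
        """Proportional feedback: nudge each relation toward its reference weight.

        relations : list[Relation]              observed state, updated in place
        reference : dict[(str, str), float]     desired balance for each pair
        gain      : correction strength applied per step (hypothetical value)
        """
        for _ in range(steps):
            for r in relations:
                error = reference[(r.a, r.b)] - r.weight  # deviation measured "in real time"
                r.weight += gain * error                  # small corrective impulse, no grand plan
        return relations

    if __name__ == "__main__":
        observed = [
            Relation("producers", "consumers", 0.9),   # hypothetical: over-weighted relation
            Relation("media", "actuality", 0.2),       # hypothetical: under-represented relation
        ]
        desired = {
            ("producers", "consumers"): 0.5,
            ("media", "actuality"): 0.5,
        }
        for r in equalize(observed, desired):
            print(f"{r.a} <-> {r.b}: {r.weight:.3f}")

The point of the sketch is only the structural one made above: the loop operates on relations and their measured deviations, not on fixed entities or predefined causal rules.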
Stopping travel and all the other restrictions imposed in a global pandemic, the truly bad reactive measures familiar from anti-utopian mainstream films, are not a viable way to solve problems. Such restrictive and mostly reactive measures, as described in anti-utopias, are preferred by people whose minds are overwhelmed by the efficiency constraints of technical thinking. In my definition, global AI is certainly not just a "supercomputer" but a tool to compensate for distorted mediality: the mediality of human communication, which demands efficiency in every joint economic activity of the persons involved and thereby prevents human subjects from being subjective, and thus truly effective, in their vision of reality.
My vision is not simply to free humans in their creativity so that they can play around in an infantile way (today's mainstream concept of human creativity), as some comments assume without knowing my whole concept.
Human creativity (or better, "sapient potential") is the ONLY prerequisite for the kind of conscious world that produces not only permanent know-how (the current status worldwide) but also reliable Know-WHY!
If people do things that global AI (still a tool, not an oracle) can do better, such as technical efficiency, the reproduction of know-how, doing things right, they are lost to the more valuable and necessary search for Know-WHY: doing the right things, not as a kind of "replacement service" for religions or the rather rudimentary insight of modern philosophy, but as reliable, profound knowledge of complexity. And when the "supercomputer" performs the technical management of the world, humans can concentrate on STEERING, their natural performance for which they are predestined, and which is the emergence of work, art, play and spirituality in one. Incidentally, the "technical brain of the world" would not only be a "supercomputer" of mainstream imagination, but a kind of limbic system of the world, and humans would steer its average cognitive performance.