
Domesticated Disasters? Scaling Complexity With AI


1 Disasters as an inevitable consequence of reducing complexity

2 AI takes care of the trivial things, humans take care of mutual enabling

3 Civil protection as an individual advisor for everyone

4 AI against global warming?

5 We don't know how the world really works

6 Everything depends on the right task

7 Complexity adjustment instead of complexity reduction


Disasters as an inevitable consequence of reducing complexity

IW*: Of course, AI has its place in weather models, in forecasting severe weather events, and in other prognoses relating to climate change. But is it realistic to expect AI to help protect against disasters?

Dr. Tsvasman: I don't want to get too theoretical at this point. Nevertheless, I must repeat a statement that has by now reached even uncompromising technocrats: it is not about building things as firmly and stably as possible. We know from experience that anything rigid and unchangeable has no long-term endurance. Far more important are dynamic qualities of adaptability, which go hand in hand with scalability and ergonomics. Sustainability is not possible without resilience - not only researchers know this, but also well-trained engineers, developers, and some politicians.

However, knowing something does not automatically mean that this knowledge will be implemented. Moreover, infrastructures built according to rigid concepts will shape our landscapes for a long time to come. Built this way, we cannot, as a rule, act sustainably and resiliently at the same time. It is not the weather that is our enemy, but our inability to live with its unpredictable effects. By that I do not mean simply accepting destruction and being ready to make sacrifices - but rather improving our resilience, for example by making living spaces more flexible.

When people, with their tendency to reduce complexity, turn their environment and their own nature inside out, they can only do so roughly - given the immense friction losses caused by media-mediated interpersonal communication and unjustified hierarchies. The result is highly selective and exemplary, anything but comprehensive, reliable and, above all, individual. The consequence can be catastrophes with many victims.

AI takes care of the trivial things, humans take care of each other

Dr. Tsvasman: This is exactly where I see a great opportunity for AI. As we have repeatedly worked out in the previous episodes of this conversation series and the 360-degree series, I favor reducing these friction losses through comprehensive, data-supported real-time management. Meaningful preventive and reactive measures would then flow into the recommendations of the human-AI intersubjectivity I have outlined.

As long as we had, without AI, almost no tools to guarantee the comprehensive and individual adaptability of our living spaces, we acted according to circumstance. But if we do not draw on AI - now feasible, or at least plannable - to make our infrastructures resilient and scalable, we are acting negligently. To take strong global AI into account in a timely and meaningful manner, however, we must rethink our entire civilization - that is the challenge.

To come back to the point: AI can make genuine human judgment possible only by freeing us from the need to compete with and overwhelm one another. Simply because we would then have better things to do: namely, to recognize the world and ourselves in our potentiality, and to implement this knowledge with stronger claims to truth instead of continuing to fight chimeras and burn witches and straw men. In my now comprehensive vision of the future, AI takes care of the trivial and trivializable things and infrastructures, while people dedicate themselves to the "non-trivial" - mutual enabling within the framework of their specifically human potential.

In connection with disaster control, this means that people can react: they can recognize dangers and limit them socially - that is, they can draw conclusions even when it is not they themselves but others who are affected. They can help each other. But without a globally learning auxiliary intelligence, we cannot achieve reliable prevention in domains of complexity such as weather or epidemiology. And we must now accept that almost all areas of life are becoming such domains of complexity.

Civil protection as an individual advisor for everyone

IW: What could such support from a global AI infrastructure look like, in concrete terms, in the context of disaster control?

Dr. Tsvasman: Comprehensive AI-supported real-time monitoring with the highest degree of customization could mean that every affected person continuously receives the information and rescue instructions intended specifically for them. Instead of blanket instructions such as "Everyone head in direction A", preventive or reactive telematic control in a catastrophe would give each person optimal advice that accounts for the particulars of their current position, dynamically and in real time. Prevention and response merge into comprehensive, highly individualized disaster control. Such a system could take all relevant factors into account in real time as conditions constantly change - provided, of course, that the data is available in real time and of high quality.

According to my research, most of the casualties in the recent tsunami-like floods in the Rhineland were due to incorrect assumptions or a lack of information. Many ran outside, for example, instead of climbing onto the roofs of houses that mostly remained intact for quite some time. But for some, the situation was exactly the opposite. And many could not be warned at all - they were caught in their sleep. Beyond that, the usual rules of conduct are often overtaken by unexpected or situation-specific factors and are almost always too general in the event of a catastrophe. Telematically supported real-time monitoring would instruct each person individually and dynamically and guide them along their own optimal escape route - far better than general rules of thumb such as: "In the event of a disaster, you should all do this or that."
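The idea of guiding each person along their own optimal escape route can be sketched, very roughly, as a shortest-path search over a street network whose edge costs combine travel time with a live hazard estimate. Everything below - the graph, the hazard levels, the function `safest_route` and its weighting - is a hypothetical illustration of the principle, not a description of any real civil-protection system.

```python
import heapq

def safest_route(graph, hazard, start, shelter, hazard_weight=10.0):
    """Dijkstra search where each step costs travel_time plus a penalty
    proportional to the destination node's current hazard level.

    graph:  {node: [(neighbor, travel_time), ...]}
    hazard: {node: level in [0, 1]}, updated as new sensor data arrives
    """
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        cost, node = heapq.heappop(queue)
        if node == shelter:
            break  # cheapest (safest) route to shelter found
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, travel_time in graph.get(node, []):
            step = travel_time + hazard_weight * hazard.get(nxt, 0.0)
            new_cost = cost + step
            if new_cost < dist.get(nxt, float("inf")):
                dist[nxt] = new_cost
                prev[nxt] = node
                heapq.heappush(queue, (new_cost, nxt))
    if shelter not in dist:
        return None  # no known safe route
    # Reconstruct the path from the predecessor map
    path, node = [shelter], shelter
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Illustrative network: two routes to the shelter, one short (via the
# square), one long (via the bridge).
graph = {
    "street": [("square", 2.0), ("bridge", 5.0)],
    "square": [("shelter", 1.0)],
    "bridge": [("shelter", 1.0)],
}
# The recommended route flips as soon as the square is reported flooded:
print(safest_route(graph, {}, "street", "shelter"))               # ['street', 'square', 'shelter']
print(safest_route(graph, {"square": 0.9}, "street", "shelter"))  # ['street', 'bridge', 'shelter']
```

The point of the sketch is only that the same query, re-run against updated hazard data, yields different advice for different positions and moments - the "individual and dynamic" guidance described above.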

Almost all those affected describe a feeling of powerlessness. I fear that without the support of what I call human-AI intersubjectivity, this feeling could increasingly become the leitmotif of our reality. We can react, but we cannot prevent - least of all where prevention would demand sacrifice. We do not understand complexity, and politics is no suitable tool for it either - you cannot negotiate with nature.

AI against global warming?

IW: The vast majority of scientists agree that the increase in extreme weather events is a consequence of climate change. Can AI help limit global warming?

Dr. Tsvasman: When we ask whether "man" or "nature" caused global warming, we separate man from nature - as if man were not the result of natural evolution. In science it is often more important to pose a question correctly than to answer it. If the question is wrongly posed, the answer will only appear to be useful. If we then hold such an answer to be "true", we can go very far, firmly convinced that a kind of spotlight of knowledge illuminates our path. Is our industrial-capitalist consumer civilization such a "way of the blind"?

What worries me most is the dictate of "realism" - the illusion of reliable knowledge that does not need to be constantly revised. What happens to a beetle that thinks the table is endless and runs over the edge? It will still have time to be very surprised as it falls. In our case, the unexpected realization that the world is different, or simply more complex, than we thought leads to unpredictable social upheavals and even planetary panic.

We are already seeing the beginnings of such a scenario today, and if we use artificial intelligence incorrectly, it can escalate. For some reason we believe that "the machine knows better". The problem is not the machine, but who programs it and how. If we leave AI to the usual groups of influence, who have already shown their "successes" in all their glory, AI will "reach the edge of the table" faster than humans - but that is about it. AI will only accelerate the exposure of our most likely false paradigm of being.

We don't know how the world really “works”

Dr. Tsvasman: The fact that the cycles of nature have always caused disasters - because they collided with a civilization built on abbreviated, linear concepts - suggests that these cycles really exist and determine the existence of ecosystems. However, it would be a mistake to stop at such a conclusion. All scientific achievements bear the stamp of one stage or another in the development of our society. We do not know how everything works - and yet we mask our concern about this with "scientific findings".

The problem was recognized as early as the 1920s. At that time, several thinkers anticipated the emergence of a global intelligence - in Russia, for example, in the form of Vladimir Vernadsky's "noosphere". This construct was already meant to compensate for the inevitable distortion that the cognitive process introduces into reality. If we do not observe something, this "something" simply does not exist - so, at least, some interpretations of quantum mechanics suggest. All of these are attempts to somehow survive in a world that increasingly eludes our understanding.

But what role do tools play in this? Did the calculator make everyone a great mathematician? Schoolchildren who hide a calculator under their desks do the math faster. But it is unlikely that even outstanding mathematicians today understand what a number as such is any better than Leonhard Euler understood it three hundred years ago.

Everything depends on the right task

Dr. Tsvasman: Used incorrectly, the analytical attitude of our thinking - with "outsourced" AI as its representative - continues to support constructs such as the "social system", with its attendant bureaucracy, or "rational action" in the context of an all-too-trivial economics. Used correctly, AI could become a serious tool for understanding "reality". AI is just a tool, like a hammer: with a hammer you can build houses or kill someone - it all depends on the correct use. You could also say: on the task at hand. That is why I embarked on this little philosophical excursion: to make clear how important it is to set the task for global AI correctly.

AI is an ideal tool for regulating the technical, rational side of life, i.e. the trivial or trivializable areas of our shared reality. Ideally, it should take over the self-regulation of these aspects in order to relieve us of them. The greatest danger on this path is the dictatorship of often self-generated facts, or of a false "relevance" that creates irreversibility. Then the potential of a more complex lifeworld is killed off by primitive incarnations of a simplified survival world. The future is always ambivalent.

In addition to the universal forces of self-regulation, potentiality, and actuality, discerning people who want to take control are at work on this planet. They therefore compulsively reduce the world's complexity and make concepts such as "linear time" their trivial guiding models.

Complexity adjustment instead of complexity reduction

Dr. Tsvasman: Today we still operate with the best concepts of the 19th and even earlier centuries. When we invented and produced plastic, for example to package food so that it lasted longer, that was a good thing. Then we threw the plastic waste into the sea, and now plastic is frowned upon. Such contradictions and reversals are typical of a world of linear dependencies. If we now remove plastic from our lives, there is a great risk that we will replace it with something that causes a new problem in the long term. To escape this cycle of self-generated problems, we have to understand that, and above all how, everything is connected.

Incidentally, I have not demonized complexity reduction as a practical tool for actively discerning people. But I do propose a scalable "complexity adjustment" as a civilizational upgrade. In my view, that can only be achieved with human-AI intersubjectivity: the more complexity we can manage with the help of AI, the less destructive our complexity reduction becomes. I once put my position aphoristically:

The good is never evident, but always possible; evil is what distracts from the possibility of good.

* IW - a German online journal for technology and research.
