
#5 Artificial intelligence as a future enabling infrastructure

Interview Sequences with Leon Tsvasman powered by "Intelligent World"


Intelligent World: What are the risks if such rules are not enacted or not adhered to?


Tsvasman: The greatest risk is still that universal tools can be used as weapons. This danger persists as long as individuals think it makes sense to threaten other people. The danger always comes from humans themselves.


I called AI a tool, an agent serving a purpose. But we humans live in complex communities, which means that tools eventually become infrastructures and techniques become technologies. The socio-technical infrastructures established in our civilization are therefore always also enabling infrastructures. They are meant to counteract earlier structural irregularities, that is, to equalize systemic imbalances.


The media industry has recognized that infrastructures revolutionize our models of value creation. In the pre-industrial agrarian and craft society, roads were built to enable the movement of goods. Later, in industrial society, power grids were built to transmit electricity. Then came the Internet to counteract the information deficits of the analog world, that is, to equalize knowledge. And now comes AI. It is like the new electricity: the equalizing "optimization current" of the information society.

Such enabling infrastructures must not be held in exclusive ownership. Like roads, electricity, or the Internet, they must be generally accessible.


Powerful tools can also be used in a maximally destructive manner


In our dialogue book "AI-Thinking", my co-author Florian Schild vividly outlined the devastating dangers of unregulated AI in exclusive private ownership. I share his view that AI used as a weapon could be destructive enough to end humanity. At this point, however, I do not want to dwell on further terrifying scenarios, which I prefer to leave to the imagination of readers or to the currently predominantly dystopian science fiction. But one aspect is still important to me:


More or less skilfully distorted statistics already have a massive impact on our economy and politics. The public is aware that the interests of individuals too often hide behind “objective” figures, under a “scientific” guise. And since AI is, at its core, programmed mathematics or statistics, it can be used for manipulative purposes. This is possible as long as the data available to it is incomplete or can be deliberately manipulated. It is then committed not to human potential but, at best, to the rationality of a company or an organization. Distortions caused by data manipulated in any way, however, would never be in the spirit of a democratic order.

Future challenges cannot be solved without AI


Despite all these dangers, however, we must not forget that we will not be able to cope with the coming global challenges without AI. Not least because all decisions are already based on statistics, which are at best incomplete, that is, rough, extrapolated, or distorted deliberately, methodically, or by the media. Mood disorders, for example, used to be treated with opium because empirical data was available that seemed to prove an improvement. Symptoms, factors, or indicators of higher complexity, such as long-term side effects, were not taken into account because data collection was simplified.


Without AI, with today's statistics, we will either use a sledgehammer to crack a nut or fail to notice the relevant dangers, opportunities, and risks at all. That is why I do not understand the attitude of many highly specialized experts who, in the current AI discourse, still emphasize only the technical or, at best, the economic side. Well, I suppose they cannot help it. To counteract this, a broader, more diverse education must be valued again.