There is a nuance to consider when it comes to AI rules.
AI – in fact not an “intelligence”, but an automation of mediality – differs from what we call natural intelligence* in at least two interrelated, fundamental ways.
As data-driven, self-referential software, AI applications are not necessarily physically bound, and, being globally networkable, they are not physically located. In certain applications they may be decoupled from the network for security reasons, although in a long-term perspective this makes little sense.
A human being, on the other hand, is a subject: humans communicate with one another, and human communication requires mediality – that is, media in the broadest sense, from languages to AI.
Moreover, we should not forget that technology, with its rational science, is an instrument of the intersubjective construction of reality by means of joint action. The fundamental function of AI is accordingly to serve as an automation medium of intersubjectivity, i.e. of human communication, which enables the functional overcoming of subjectivity: the enabling infrastructure, or the management of actuality.
The fundamental differences indicated above must be taken into account in the AI rules.
Formulas such as “You should…”, inspired by the biblical “Thou shalt…”, or “Be…” are the wrong way to steer self-regulating systems without being subjectively pre-emptive.
In addition, AI rules are used to “address” systems that, for the most part, do not yet exist but are supposed to be organized according to these rules (the discourses of AI-Emergence or of the Post-AI-Era). The rules therefore have to take the epistemological specifics of the other intelligence into account in advance if they are to be more than just PR for tech giants.
* In the currently dominant discourse, strongly influenced by the economic and technological interests of short-term management thinking, AI is defined not in terms of its epistemological relevance, but merely as an emerging, somehow necessary practice-tool. For example, according to the cognitive scientist and influential practitioner Marvin Minsky, AI is “the science of making machines do things that would require intelligence if done by men.” As a practice-tool it is also defined in the mainstream through its technological appearance: deep or machine learning, neural networks, etc. In both kinds of definition, the focus is on the actuality of things, not their potentiality, which – especially in relation to human intelligence – is a useful but not very consistent view.
According to a consistently thought-out perspective of cybernetic philosophy, to which I refer here, we should understand both “intelligences” in their potentiality, or as a dialectical unity (in the categories of Hegelian logic).