Intelligent Systems And Their Societies Walter Fritz

Ethics of the Artificial Intelligent System

 

Observing the inner workings of an artificial IS, we see how the senses bring information from the environment, how the IS forms concepts, how it builds a short list of response rules to choose from, and finally how it chooses one. Here we are not concerned with the response the IS actually chooses. Remember that it can choose any of them, by chance, but it chooses the better responses more frequently. What interests us is the first response rule of the short list: this, as far as the artificial IS knows, is the best response rule for the given situation.
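As a rough illustration, that selection step might look like the following Python sketch. The rule list, the numeric value estimates and the weighting scheme are assumptions made for the example, not a description of any particular implementation.

```python
import random

# Hypothetical short list of response rules for one situation; the
# "estimated_value" stands for how well each rule has worked in the past.
short_list = [
    {"action": "cooperate", "estimated_value": 0.9},
    {"action": "wait",      "estimated_value": 0.5},
    {"action": "withdraw",  "estimated_value": 0.2},
]

# Order the list by estimated value; the first entry is, as far as this IS
# knows, the best response rule for the situation.
short_list.sort(key=lambda rule: rule["estimated_value"], reverse=True)
best_rule = short_list[0]

# The IS may still pick any rule by chance, but the better rules more often:
# a value-weighted random choice.
chosen_rule = random.choices(
    short_list,
    weights=[rule["estimated_value"] for rule in short_list],
    k=1,
)[0]

print("best known rule:", best_rule["action"], "| chosen this time:", chosen_rule["action"])
```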

But now let's go to a really advanced artificial IS. We are fully aware that here we may be extrapolating, that we are talking about an IS that may not exist yet. An IS, according to our definition, is always acting to reach its objectives, and this is also true for any advanced IS.

So let's state a couple of definitions:

An ethical objective: a sub objective that, as far as the IS can know, serves best to reach its main objective.

An ethical action: an action that, as far as the IS can know, brings it nearer to its objectives, taking into account the reactions of the other IS's affected by it.

Why have we chosen these definitions? They are precise, they are based on previously defined concepts, and they seem to be useful. They are not "true" definitions. As with previous definitions in this book, we made no effort to capture the content that the majority of the world's people give to these concepts. That seems to be quite impossible; the concepts differ far too much. At most it would be possible to capture the content of these concepts for a certain subsociety, for instance Hindus or Moslems. But of what use is that for a science? A science has to be universal, applicable everywhere.

Normally an ethical action is supposed to be a "good" action. We have to realize that the artificial IS cannot have any knowledge of a transcendental, universal or absolute and eternal good or evil. For the IS, an objective is good if it helps to accomplish the main objective, and an action is good if it brings the IS nearer to its objectives. Any concept, and "good" is a concept, is the result of experience, as we have shown in previous chapters. In the artificial IS it is an electric or magnetic pattern stored in memory.

So with all this in mind, how does an advanced artificial IS choose an ethical objective and an ethical action? For the IS it is not always easy to find out which sub objective serves best to reach the main objective. Short-term advantages may carry long-term disadvantages, so the IS has to be careful in choosing sub objectives. Also, in considering whether a response is an ethical action, the IS should take into account not only the immediate results but also the reactions of other IS's and of its society to the action, as well as its long-term effects, as best the IS can determine them.

The question arises: "Is there a best action?" The artificial IS has limited information about its environment. It does not know all things and all aspects of things. It does not know all the affected IS's and their experiences. Its own response rules, its knowledge of what to do in a given situation, are incomplete. Also, the time available to calculate the best action is limited. So the artificial IS cannot find a best action, but normally it can find an adequate action, an ethical action.
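A minimal sketch of this "adequate rather than best" idea, assuming a hypothetical evaluate() function and a fixed time budget (both are illustrative additions, not part of the original text):

```python
import time

def choose_adequate_action(candidates, evaluate, budget_seconds=0.05):
    """Evaluate candidate actions until the time budget runs out, then
    return the best action found so far: adequate, not provably best."""
    deadline = time.monotonic() + budget_seconds
    best_action, best_score = None, float("-inf")
    for action in candidates:
        if time.monotonic() > deadline:
            break  # limited time: settle for the best found so far
        score = evaluate(action)  # based on incomplete knowledge, so only an estimate
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```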

How then should the artificial IS evaluate its response rules and choose one?

  1. If the IS is isolated, living in an environment devoid of other IS's, it has little need for a science of ethics. The choice of sub objectives and of the corresponding responses is straightforward, since there is no reaction from other IS's to consider.

  2. Many IS's live in an environment in contact with very many other IS's that do react to their actions. Since all these IS's react to each other, the combined reaction is complex.

    If the sum of these reactions brings more advantage than disadvantage to the artificial IS, then the action is ethical. An unfavorable reaction occurs when a certain IS suffers more disadvantage than advantage from the proposed action. But the reaction is often not in proportion to the disadvantage: it may be that five IS's are affected favorably and react positively but feebly, while only one IS is affected unfavorably but reacts violently. The safest way to assure that a proposed action is ethical is to arrange matters so that each affected IS has more advantage than disadvantage from it. We can say: an action is nearly certain to be ethical if it is of advantage to the originating IS and of more advantage than disadvantage to each of the IS's that it knows to be affected. (If the IS does not know whether a certain IS is affected, it cannot include it in its calculations.) If the response is of disadvantage to some of the affected IS's, the IS can possibly change the action so as to bring an additional advantage to those IS's and thus balance their advantage and disadvantage (example: the government pays a certain sum of money to those persons whose land will be flooded by the construction of a hydroelectric dam).

Let's try a formula for calculating the advantage of a proposed response:

E = (A1 - D1) + (A2 - D2) + (A3 - D3) + ...

If the value of E is always positive, the proposed response is sure to be ethical; otherwise it may be unethical. Here A = advantage, D = disadvantage, and 1, 2, 3 index the different affected IS's. A and D are measured by the importance of the affected objectives, and this importance in turn is measured in seconds per day that the IS is willing to spend on reaching the objective, as we have seen in the chapter on Intelligent Systems. Some types of actions seem to have a better chance of advantageous reactions from the affected IS's than others. Human experience has shown that cooperative actions normally result in cooperative actions from other IS's; this makes the attainment of our objectives much easier. Attacks normally result in counterattacks, making it much more difficult to reach our objectives.
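As an illustration, the formula can be computed directly. The three affected IS's and their advantage and disadvantage figures (in seconds per day) below are invented for the example:

```python
# Advantage (A) and disadvantage (D) for each affected IS, measured in seconds
# per day that the IS is willing to spend on the objectives touched by the action.
affected = {
    "IS_1": {"A": 300, "D": 60},
    "IS_2": {"A": 120, "D": 30},
    "IS_3": {"A": 45,  "D": 200},
}

# Net advantage per affected IS and the overall sum E = (A1-D1) + (A2-D2) + ...
net = {name: v["A"] - v["D"] for name, v in affected.items()}
E = sum(net.values())

# Nearly certain to be ethical only if every affected IS comes out ahead;
# a positive sum can still hide one IS that is harmed and may react strongly.
nearly_certain_ethical = all(n > 0 for n in net.values())

print("net per IS:", net)
print("E =", E, "| nearly certain to be ethical:", nearly_certain_ethical)
```

In this invented case E is positive, but IS_3 comes out behind; adding a compensating advantage for IS_3, as in the dam example above, would make every term positive and the action nearly certain to be ethical.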

 



Last Edited 15 May 2013 / Walter Fritz
Copyright © New Horizons Press