Intelligent Systems And Their Societies Walter Fritz

Selection of Responses

 

So far, we have explored an IS that has created concepts; with these, built a representation of the present situation; and created and stored a variety of response rules. Now, finally, it has to select the response to perform. To choose one response rather than another, the IS needs some reason or method for selecting it from among the stored response rules.

 

Selection Mechanism of a Natural IS
The brain of a biological IS stores all of its response rules in one or several neural fields. When the brain receives impulses from the body's sense organs, they are routed to appropriate points in one or more of these neural fields. These field(s) process this input. The output from these fields then becomes, in turn, the input that enters other neural fields. This continues a few times until the output from the last neural field is translated into the response to be performed and sent to the body's actuators. (Note that the incoming impulses, or input data, get increased or decreased and are sometimes inverted from excitatory to inhibitory when passing from the dendrites of the neuron to the axon. This gives a different weight, a different importance, to the various pieces of information that the neuron receives.)
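To make the idea of weighting concrete, here is a toy numerical sketch (not a model of any real brain, and not the author's program): inputs pass through two small "neural fields", each computed as a weighted sum, where positive weights play the role of excitatory connections and negative weights the role of inhibitory ones, so each piece of information carries a different importance.

    def field(inputs, weights):
        """One 'neural field' as a simple weighted sum per output point."""
        return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

    sense_input = [1.0, 0.5, 0.2]                    # impulses from the sense organs
    field_1 = [[0.8, -0.3, 0.1], [0.2, 0.9, -0.5]]   # weights of the first field
    field_2 = [[1.0, -0.7]]                          # weights of the second field

    intermediate = field(sense_input, field_1)       # output of field 1 feeds field 2
    response_signal = field(intermediate, field_2)   # signal sent to the actuators
    print(response_signal)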

 

Selection Mechanism of an Artificial IS
In an artificial IS, the brain first attempts to make a list of the response rules that are applicable to the present concrete situation. ("Applicable" here means that the response rule has some concepts in its situation part that also occur in the present situation.) If it does not find any applicable response rules, it expresses the present situation, where possible, with total concepts (each concept has a branch where the program has stored the total concepts of which the present concept is a part) and then looks again for applicable response rules. If that also fails, it expresses the present situation, where possible, with abstract concepts (each concept also has a branch where the program has stored all abstract concepts of which the present concept is a concrete example).
In this list, each rule is associated with a value that indicates how useful the rule was in the past and how closely the present situation coincides with the initial situation of the response rule. A sketch of this search is given below.
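The following Python sketch shows how such a list might be built, assuming that concepts are plain strings, that each concept has lookup tables giving its total and abstract concepts, and that a rule's listed value is its past usefulness multiplied by the overlap between the two situations; all names here are hypothetical illustrations, not the actual data structures of the author's program.

    class ResponseRule:
        def __init__(self, situation, response, usefulness):
            self.situation = set(situation)   # concepts of the rule's initial situation
            self.response = response
            self.usefulness = usefulness      # learned from past successes and failures

    def overlap(rule, situation):
        """Fraction of the rule's situation concepts found in the given situation."""
        return len(rule.situation & situation) / len(rule.situation)

    def applicable_rules(present, rules, totals, abstracts):
        """Return (rule, value) pairs; fall back to total, then abstract concepts."""
        views = [
            set(present),                              # the concrete situation
            {totals.get(c, c) for c in present},       # expressed with total concepts
            {abstracts.get(c, c) for c in present},    # expressed with abstract concepts
        ]
        for view in views:
            found = [(rule, rule.usefulness * overlap(rule, view))
                     for rule in rules
                     if rule.situation & view]         # shares at least one concept
            if found:
                return found
        return []                                      # no applicable rule at all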

Once it has finished building the list of applicable response rules, the IS selects a rule. One might expect it to pick the rule with the highest value, but that is not the case; investigators have found a better way. While the IS is still learning a game, the values of its response rules are based only on its experience so far, and these values keep changing as experience accumulates.
If the program always chose the rule with the (currently) highest value, its learning would stop at that level, since it would never use other rules. So it selects, from the list of rules, a rule with a higher value more often and a rule with a lower value less often, in proportion to the value. In this way it sometimes uses a rule that currently has a lower value but is really a good rule for the situation. Then, since its value is increased because it had success, that rule has a better chance of being selected the next time.
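One common way to realize "in proportion to the value" is roulette-wheel (fitness-proportionate) selection. The sketch below illustrates only that idea, under the assumption that values are non-negative; it is not the author's actual code.

    import random

    def select_rule(scored_rules):
        """Pick one rule with probability proportional to its value.

        Higher-valued rules are chosen more often, but lower-valued rules still
        get a chance, so the system keeps exploring and can discover that a
        currently low-valued rule is in fact a good one.
        """
        rules = [rule for rule, value in scored_rules]
        values = [max(value, 0.0) for rule, value in scored_rules]
        if sum(values) == 0:
            return random.choice(rules)               # no information: choose at random
        return random.choices(rules, weights=values, k=1)[0]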



Last Edited 24 July 2013 / Walter Fritz
Copyright © New Horizons Press