Intelligent Systems And Their Societies | Walter Fritz
There is a widespread belief that robots, artificial IS's, will be dangerous to humanity. Many books and movies dramatize this. But humans build machines to serve them. No sane person would knowingly build a machine that hurts him (and the great majority of persons who build artificial IS's are sane). Steam boilers are useful, but they can explode, so we build them with safety valves. Cars are extremely useful and also dangerous, so we equip them with brakes and air bags and require driver licenses.
The same is true for artificial IS's. Once they have sufficient mental capacity, they can be dangerous, so we have to add a "safety valve". Dr. Isaac Asimov's famous Three Laws of Robotics, for instance, would be such a safety valve. This includes giving a robot the character of a faithful dog, not that of a wolf. (See "Main Objectives")
Let's be more specific on this point. Why are even supremely intelligent robots not a danger to humanity?
A robot is an intelligent system and as such needs objectives. A robot acts by choosing learned actions. Choosing means evaluating which action (which response rule) to perform. This evaluation needs a "measuring stick", something against which the proposed action is compared to see whether it is a good action. This "measuring stick" is the robot's objective. The objective is a subroutine of the computer program that gives higher marks to a response rule when a human approves of the action and lowers the marks when a human disapproves. In this way the robot learns which actions are "good" or "bad". We want robots to help us, to make our lives easier. So a good objective to build into a robot would be "please human beings".
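The scoring mechanism described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual program of any robot: the rule names and the size of the mark adjustment are invented for the example.

```python
# Hypothetical sketch: the objective as a "measuring stick".
# Each learned response rule carries marks; the robot performs the
# rule with the highest marks, and the objective subroutine raises
# or lowers marks according to human approval.

class Robot:
    def __init__(self):
        # learned response rules and their current marks (invented names)
        self.rules = {"fetch_tool": 0.0, "ignore_request": 0.0}

    def choose_action(self):
        # evaluation: compare the proposed actions against the
        # measuring stick and pick the rule with the highest marks
        return max(self.rules, key=self.rules.get)

    def feedback(self, action, human_approves):
        # the objective subroutine "please human beings": higher marks
        # on approval, lower marks on disapproval
        self.rules[action] += 1.0 if human_approves else -1.0

robot = Robot()
robot.feedback("fetch_tool", human_approves=True)
robot.feedback("ignore_request", human_approves=False)
print(robot.choose_action())  # fetch_tool
```

After a little human feedback the robot has learned that fetching the tool is "good" and ignoring the request is "bad", without anyone enumerating good and bad actions in advance.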
By the way, animals also have objectives: the survival instinct. They, too, cannot change this objective. Naturally both robot and animal can choose many levels of subobjectives, immediate objectives that serve to reach the main objective.
It is important to distinguish between objective and intelligence. Good intelligence consists of fast and accurate mental processes that reach an objective quickly and with ease. Robots (in the future) will have mental processes that are very much faster and much more accurate than those of a human being. They will reach their objectives faster and better. But this does not change the objective itself.
Change of Objectives by Human Beings
An objective is part of the computer program that receives information from the senses, chooses an action, and performs it. It is not something that the robot can change. If we want to change such a program, we have to switch off the robot, make the changes in the source program, recompile it, and load the new program into the computer. A robot cannot do this while running. Further, once we give it the objective of "pleasing human beings" and train it accordingly, changing its program is not something that the robot can want to do.
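The distinction being drawn here, between learned data that changes at run time and an objective fixed in the program itself, can be sketched as follows. The function name and numbers are assumptions for illustration only.

```python
# Hypothetical sketch: the objective is code, not data.
# Running the robot updates the learned marks (mutable data), but the
# running program contains no operation for rewriting its own objective
# function; changing the objective means stopping the robot, editing
# the source program, recompiling, and loading the new program.

def objective_marks(current_marks, human_approves):
    # fixed at build time: "please human beings"
    return current_marks + (1.0 if human_approves else -1.0)

marks = {"help_human": 0.0}   # learned data, changes while running
marks["help_human"] = objective_marks(marks["help_human"], True)
print(marks["help_human"])  # 1.0
```

The dictionary of marks plays the role of the robot's learned experience; the function plays the role of the built-in objective that the experience is measured against.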
However, possibly not all builders of robots and stationary artificial intelligent systems are sane. Naturally it is possible to give a robot the objective of its own survival. Giving an extremely intelligent robot such an objective is a good way to eliminate the human species. This robot would build many more like itself, and they would want all the available energy and material resources for themselves; human use of energy and materials would be stopped, and we could not go on living. So, if such a robot ever gets built, other robots will have to be dispatched as soon as possible to destroy it.
Change of Objectives by Robots
One might object that a robot could modify the programs of other robots. A merely moderately intelligent robot could not modify the program of another robot and have it still work.
But what if an extremely intelligent robot notices that many robots are needlessly destroyed because persons give them orders that are dangerous to the robots themselves? Such a robot could possibly want to strengthen the self-protection part of the main objective. (This would be like strengthening the third of Asimov's Laws of Robotics at the expense of the first and second.) But it would not strengthen it to the point where robot survival comes first and pleasing humans second. Even if the robot is intelligent enough to notice the needless destruction of robots, it still cannot act against its own main objective. In other words, while it would be intelligent enough to modify the computer program of another robot, putting survival first, it could not want to do so.
Nearly everybody agrees that we should not build robots to replace us; we should build them to help us. Since the robot is built to please us, it cannot wish to modify this program. If a robot with a "wrong" objective does get built, it has to be destroyed immediately.
In this way, even extremely intelligent robots will be our helpers, not our masters, and they will not replace the human species.
All Play and No Stress
But maybe robots are dangerous in a different way: maybe an IS that can think more precisely and faster than a human being will take away all initiative from humans and condemn them to inaction? History has shown that the opposite is true. In ancient Greece, when slaves performed most of the work, artistic and philosophic activities and sports bloomed. The Greeks built beautiful marble statues, wrote both comic and tragic plays, and created the most varied philosophies. They excelled in sports (the Olympic Games started then). To be sure, there was the other side, persons who lived only for pleasure; orgies, for instance, were invented. It seems to me that enough interesting activities remain, even if we do only what we like to do and the robots take care of all our material needs and of all activities that we do not wish to perform.