Intelligent Systems And Their Societies Walter Fritz

Meaning and Symbol Grounding

 

Can an artificial intelligent system really think or is it just simulating thought? Do the concepts it uses have meanings?

When we use a computer to write a letter, we know that the computer does not understand the words we type. Now suppose we put many words into a computer and also, somehow, all the relationships between those words. Can it now understand these words?

Suppose you want to understand a Chinese word. You do not know Chinese, but you have a dictionary of Chinese words. Each word is explained by a few other words, also in Chinese. So you look up the word, but you do not understand the explanation. Naturally, you can look up each word of the explanation, but you just get more explanations in Chinese. If you know no Chinese words at all, then even after looking up all the words of all the explanations, you still cannot understand the first word you looked up.

Let's try it differently. Suppose an intelligent system needs to know what a zebra is. Well, a zebra is a horse with dark stripes. But what is a horse? A horse is an animal with four legs. But what is an animal? And so on. Unless the explanations eventually reach concepts the system already knows, it can never understand what a zebra is.
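To make this regress concrete, here is a minimal sketch in Python. The words and their "definitions" below are hypothetical; the point is only that, because no explanation ever reaches a concept the system already knows, the check can never succeed.

```python
# A dictionary in which every concept is explained only by other concepts,
# none of which the system already knows (all entries are hypothetical).
definitions = {
    "zebra":  ["horse", "dark", "stripes"],
    "horse":  ["animal", "legs"],
    "animal": ["living", "thing"],
    "living": ["thing", "that", "grows"],
    # ... every explaining word is itself just another unexplained word
}

known_concepts = set()   # the system starts out knowing nothing

def is_understandable(word, seen=None):
    """A word is understandable only if its explanation eventually
    reaches concepts the system already knows."""
    if seen is None:
        seen = set()
    if word in known_concepts:
        return True
    if word in seen or word not in definitions:
        return False          # circular or missing explanation: no grounding
    seen.add(word)
    return all(is_understandable(w, seen) for w in definitions[word])

print(is_understandable("zebra"))   # False: no chain of explanations ever grounds
```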

The first concepts a system learns are based on sense information. But once some concepts exist, you can explain further concepts by using words for the concepts that already exist. You can tell a person what a zebra is, using words for which he or she already knows the concepts.

Much of the above is from "The Symbol Grounding Problem" by Stevan Harnad.

How does our intelligent system get to an understanding of the concepts it uses? The intelligent system has senses. The first concepts it learns are based on simple sense inputs. Now we have to train the system. We have to give it slightly different experiences so that it can build up composite and abstract concepts from these simple sense-based concepts. It will then use these new, higher-level concepts and build upon them in turn. So it finally may arrive at the concept for "zebra".
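Continuing the sketch above, suppose the system starts with a few concepts formed directly from sense inputs (the names of these primitives are made up for illustration). Composite concepts can then be traced back to them, and every concept that is understood becomes available for explaining further concepts.

```python
# Concepts formed directly from simple sense inputs (hypothetical primitives).
known_concepts = {"dark", "light", "edge", "leg-shape", "motion"}

# Composite and abstract concepts, each explained by lower-level concepts.
definitions = {
    "stripe": ["dark", "light", "edge"],
    "leg":    ["leg-shape", "motion"],
    "animal": ["leg", "motion"],
    "horse":  ["animal", "leg"],
    "zebra":  ["horse", "stripe", "dark"],
}

def is_grounded(word, seen=None):
    """A concept is grounded if every concept in its explanation can be
    traced back, step by step, to concepts formed from sense inputs."""
    if seen is None:
        seen = set()
    if word in known_concepts:
        return True
    if word in seen or word not in definitions:
        return False
    seen.add(word)
    if all(is_grounded(w, seen) for w in definitions[word]):
        known_concepts.add(word)   # an understood concept can now explain others
        return True
    return False

print(is_grounded("zebra"))   # True: every explanation ends in sense-based concepts
```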

In conclusion, if all concepts are finally based on sense inputs (spatial relationships) and their change (temporal relationships), then they are related to something known, something that has meaning for the system. Having "meaning" means that the system knows how to use these concepts in building up the present situation, in building up response rules, and in using response rules to reach its objectives.
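As a very rough sketch of this last point, one could picture grounded concepts being used to describe the present situation and to state response rules. The structures and names below are hypothetical, chosen only for illustration, and are not the actual ones of any particular intelligent system.

```python
# Hypothetical illustration: grounded concepts describe the present situation,
# and response rules built from those concepts lead toward an objective.

present_situation = {"zebra", "nearby", "grass"}   # concepts recognized right now

# Each response rule: if these concepts are present, take this action
# in order to move toward the stated objective.
response_rules = [
    {"situation": {"zebra", "nearby"}, "action": "observe quietly",
     "objective": "learn more about zebras"},
    {"situation": {"grass", "hungry"}, "action": "graze",
     "objective": "reduce hunger"},
]

def choose_action(situation, rules):
    """Pick the first rule whose situation concepts all occur in the
    present situation."""
    for rule in rules:
        if rule["situation"] <= situation:   # subset test on sets of concepts
            return rule["action"], rule["objective"]
    return None, None

print(choose_action(present_situation, response_rules))
# ('observe quietly', 'learn more about zebras')
```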

 



Last Edited 25 April 2013 / Walter Fritz
Copyright © New Horizons Press