Figure 1.4. A schematic diagram of a simple production system.

Consider a very simple production system operating on lists of letters involving As and Bs (see Figure 1.4). The system has two rules:

1. IF a list in working memory has an A at the end THEN replace the A with AB.

2. IF a list in working memory has a B at the end THEN replace the B with an A.

If we give this system different inputs in the form of different lists of letters, then different things happen. If we give it CCC, this will be stored in working memory but will remain unchanged, because it does not match the IF-part of either rule. If we give it A, then it will be modified by the rules once the A is stored in working memory. This A is a list of one item and as such it matches rule 1. Rule 1 has the effect of replacing the A with AB, so that when the THEN-part is executed, working memory will contain AB. On the next cycle, AB does not match rule 1 but it does match rule 2. As a result, the B is replaced by an A, leaving AA in working memory. The system will next produce AAB, then AAA, then AAAB, and so on.
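To make the cycle of matching and firing concrete, here is a minimal Python sketch of the two-rule system just described; the function names (rule_1, rule_2, run) are illustrative rather than any standard production-system notation.

```python
# A minimal sketch of the two-rule production system described above.

def rule_1(wm):
    """IF the list in working memory ends in A, THEN replace that A with AB."""
    return wm[:-1] + "AB" if wm.endswith("A") else None

def rule_2(wm):
    """IF the list in working memory ends in B, THEN replace that B with A."""
    return wm[:-1] + "A" if wm.endswith("B") else None

def run(wm, cycles=5):
    """On each cycle, fire the first rule whose IF-part matches working memory."""
    trace = [wm]
    for _ in range(cycles):
        for rule in (rule_1, rule_2):
            new_wm = rule(wm)
            if new_wm is not None:      # this rule's IF-part matched
                wm = new_wm
                trace.append(wm)
                break
        else:
            break                       # no rule matched (e.g. CCC), so nothing changes
    return trace

print(run("A"))    # ['A', 'AB', 'AA', 'AAB', 'AAA', 'AAAB']
print(run("CCC"))  # ['CCC'] -- neither IF-part matches, so it stays unchanged
```

Running the sketch on A reproduces the trace in the text, and running it on CCC shows the system halting because no IF-part matches.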

Many aspects of cognition can be specified as sets of IF...THEN rules. For example, chess knowledge can readily be represented as a set of productions based on rules such as, "If the Queen is threatened, then move the Queen to a safe square". In this way, people's basic knowledge of chess can be modelled as a collection of productions, and gaps in this knowledge as the absence of certain productions. Newell and Simon (1972) first established the usefulness of production system models in characterising cognitive processes such as problem solving and reasoning (see Chapter 14). However, these models have a wider applicability: Anderson (1993) has modelled human learning using production systems (see Chapter 14), and others have used them to model reinforcement behaviour in rats and semantic memory (Holland et al., 1986).
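As a rough illustration of the chess example, the Queen rule can be encoded as a condition-action pair. The board representation and the helpers queen_threatened and move_queen_to_safe_square below are hypothetical stubs standing in for real chess logic, not part of any actual chess program.

```python
# An illustrative encoding of the Queen production as a condition-action pair.

def queen_threatened(state):
    # Stub: a real program would inspect the board position here.
    return state.get("queen_threatened", False)

def move_queen_to_safe_square(state):
    # Stub: a real action would compute and apply a safe queen move.
    return {**state, "queen_threatened": False, "last_move": "queen to safe square"}

chess_productions = [
    (queen_threatened, move_queen_to_safe_square),
    # Further condition-action pairs would encode other pieces of chess knowledge;
    # a gap in a player's knowledge corresponds to a missing production here.
]

def fire_first_match(productions, state):
    """Fire the first production whose IF-part (condition) matches the state."""
    for condition, action in productions:
        if condition(state):
            return action(state)
    return state  # no IF-part matched, so the state is left unchanged

print(fire_first_match(chess_productions, {"queen_threatened": True}))
```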

Connectionist networks

Connectionist networks, neural networks, or parallel distributed processing models, as they are variously called, are relative newcomers to the computational modelling scene. All previous techniques were marked by the need to program all aspects of the model explicitly, and by their use of explicit symbols to represent concepts. Connectionist networks, on the other hand, can to some extent program themselves, in that they can "learn" to produce specific outputs when certain inputs are given to them. Furthermore, connectionist modellers often reject the use of explicit rules and symbols in favour of distributed representations, in which concepts are characterised as patterns of activation in the network (see Chapter 9).

Early theoretical proposals about the feasibility of learning in neural-like networks were made by McCulloch and Pitts (1943) and by Hebb (1949). However, the first neural network models, called

A multi-layered connectionist network with a layer of input units, a layer of internal representation units or hidden units, and a layer of output units. Input patterns can be encoded, if there are enough hidden units, in a form that allows the appropriate output pattern to be generated from a given input pattern. Reproduced with permission from David E. Rumelhart & James L. McClelland, Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1), published by the MIT Press, © 1986, the Massachusetts Institute of Technology.
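The sketch below shows, in Python with NumPy, the kind of three-layer network the figure describes: input units feed hidden units, which feed output units, and the connection weights are adjusted until the network learns to produce the appropriate output pattern for each input pattern. The task (XOR), layer sizes, learning rate, and number of training cycles are illustrative assumptions, not details taken from Rumelhart and McClelland.

```python
import numpy as np

# A minimal sketch of a three-layer network: input units -> hidden units ->
# output units, trained with error back-propagation on an illustrative task.

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input patterns
y = np.array([[0], [1], [1], [0]], dtype=float)              # target output patterns

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(size=(2, 8))   # input-to-hidden connection weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden-to-output connection weights
b2 = np.zeros(1)
lr = 1.0                       # learning rate

for _ in range(10000):
    # Forward pass: activation spreads from the input layer to the output layer.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: the output error is propagated back to adjust the weights.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0)

# The trained outputs are typically close to the targets 0, 1, 1, 0; the
# hidden-unit activations are the network's learned internal representation.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

No explicit rule for the task is programmed in: the mapping emerges as a pattern of connection weights, and the hidden-unit activations constitute the distributed representation mentioned above.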
