Figure 1.5

A multi-layered connectionist network with a layer of input units, a layer of internal representation units or hidden units, and a layer of output units. Input patterns can be encoded, if there are enough hidden units, in a form that allows the appropriate output pattern to be generated from a given input pattern. Reproduced with permission from David E. Rumelhart & James L. McClelland, Parallel distributed processing: Explorations in the microstructure of cognition (Vol. 1), published by the MIT Press, © 1986, the Massachusetts Institute of Technology.

Perceptrons were shown to have several limitations (Minsky & Papert, 1988). By the late 1970s, hardware and software developments in computing offered the possibility of constructing more complex networks that overcame many of these original limitations (e.g., Rumelhart, McClelland, & the PDP Research Group, 1986; McClelland, Rumelhart, & the PDP Research Group, 1986). Connectionist networks typically have the following characteristics (see Figure 1.5):

• The network consists of elementary or neuron-like units or nodes connected together so that a single unit has many links to other units.

• Units affect other units by exciting or inhibiting them.

• A unit usually takes the weighted sum of all its inputs, and produces a single output to another unit if this weighted sum exceeds some threshold value.

• The network as a whole is characterised by the properties of the units that make it up, by the way they are connected together, and by the rules used to change the strength of connections among units.

• Networks can have different structures or layers; they can have a layer of input links, intermediate layers (of so-called "hidden units"), and a layer of output units.

• A representation of a concept can be stored in a distributed manner by a pattern of activation throughout the network.

• The same network can store many patterns without them necessarily interfering with each other if they are sufficiently distinct.

• An important learning rule used in networks is called backward propagation of errors (BackProp).

In order to understand connectionist networks fully, let us consider how individual units act when activation impinges on them. Any given unit can be connected to several other units (see Figure 1.6). Each of these other units can send an excitatory or an inhibitory signal to the first unit. This unit generally takes a weighted sum of all these inputs. If this sum exceeds some threshold, it produces an output. Figure 1.6 shows a simple diagram of just such a unit, which takes the inputs from a number of other units and sums them to produce an output if a certain threshold is exceeded.
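The behaviour of such a unit can be sketched in a few lines of code. The following is an illustrative Python sketch, not taken from the text; the particular weights and threshold value are assumptions, with the threshold of 1 and the +1/-1 responses chosen to match the unit-i example in Figure 1.6.

```python
def unit_output(inputs, weights, threshold=1.0):
    """Compute a unit's response: +1 if the weighted sum of its
    inputs exceeds the threshold, otherwise -1."""
    net_input = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net_input > threshold else -1

# Three units send activation to unit-i; positive weights excite it,
# negative weights inhibit it.
print(unit_output([1, 1, 1], [0.8, 0.9, -0.2]))  # net input 1.5 > 1, so +1
print(unit_output([1, 1, 1], [0.3, 0.4, -0.2]))  # net input 0.5 < 1, so -1
```

Note that the unit's response depends jointly on the activations it receives and on the weights of the links carrying them, which is why learning rules that change only the weights can change the network's behaviour.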

These networks can model cognitive behaviour without recourse to the kinds of explicit rules found in production systems. They do this by storing patterns of activation in the network that associate various inputs with certain outputs. The models typically make use of several layers to deal with complex behaviour. One layer consists of input units that encode a stimulus as a pattern of activation in those units. Another layer is an output layer, which produces some response as a pattern of activation. When the network has learned to produce a particular response at the output layer following the presentation of a particular stimulus at the input layer, it can exhibit behaviour that looks "as if" it had learned a rule of the form "IF such-and-such is the case THEN do so-and-so". However, no such rules exist explicitly in the model.

Networks learn the association between different inputs and outputs by modifying the weights on the links between units in the net. In Figure 1.6, we see that the weight on the links to a unit, as well as the activation of other units, plays a crucial role in computing the response of that unit. Various learning rules modify these weights in systematic ways. When we apply such learning rules to a network, the weights on the links are modified until the net produces the required output patterns given certain input patterns.
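To make the idea of a weight-modifying learning rule concrete, here is a minimal illustrative sketch (not from the text) of one simple rule, the delta rule, applied to a single linear unit: each weight is nudged in proportion to the error between the required and the actual output. The input patterns, learning rate, and number of passes are all assumed for illustration.

```python
def train_unit(patterns, lr=0.1, epochs=50):
    """Learn weights for one linear unit so that its output comes to
    match the required output for each input pattern."""
    n = len(patterns[0][0])
    weights = [0.0] * n  # start with no associations stored
    for _ in range(epochs):
        for inputs, target in patterns:
            output = sum(x * w for x, w in zip(inputs, weights))
            error = target - output
            # Strengthen or weaken each link in proportion to the error
            # and to the activation it carried.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    return weights

# Associate the pattern (1, 0) with output 1 and (0, 1) with output 0.
w = train_unit([([1, 0], 1.0), ([0, 1], 0.0)])
print([round(v, 2) for v in w])  # weights end up close to [1.0, 0.0]
```

This rule can only train a single layer of weights; the point of BackProp, described next, is that it extends this error-driven adjustment to networks with hidden units.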

One such learning rule is called "backward propagation of errors" or BackProp. BackProp allows a network to learn to associate a particular input pattern with a given output pattern. At the start of the learning period, the network is set up with random weights on the links among the units. During the early stages of learning, after the input pattern has been presented, the output units often produce the incorrect pattern or response. BackProp compares the imperfect pattern with the known required response, noting the errors that occur. It then propagates these errors backwards through the network so that the weights between the units are adjusted to produce the required pattern. This process is repeated with a particular stimulus pattern until the network produces the required response pattern. Thus, the model can be made to learn the behaviour with which the cognitive scientist is concerned, rather than being explicitly programmed to do so.
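The whole BackProp cycle can be sketched in miniature. The following illustrative Python sketch (an assumption for exposition, not the text's own example) trains a tiny network with two input units, two hidden units, and one output unit on the exclusive-or task; the architecture, learning rate, and number of repetitions are arbitrary choices, and a network this small may need more hidden units or more repetitions to master the task fully. What matters is the cycle it illustrates: start with random weights, compare actual with required output, propagate the errors backwards, and adjust the weights.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Start with random weights on the links (the last weight in each row
# acts as a bias term).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_output = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    """Propagate an input pattern forward through the network."""
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_hidden]
    o = sigmoid(w_output[0] * h[0] + w_output[1] * h[1] + w_output[2])
    return h, o

# Input patterns paired with their required responses (exclusive-or).
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in patterns)

err_start = total_error()
lr = 0.5
for _ in range(5000):
    for x, target in patterns:
        h, o = forward(x)
        # Compare the actual output with the required response...
        delta_o = (target - o) * o * (1 - o)
        # ...and propagate the error back to each hidden unit.
        delta_h = [delta_o * w_output[j] * h[j] * (1 - h[j])
                   for j in range(2)]
        # Adjust each weight in proportion to the error it contributed.
        for j in range(2):
            w_output[j] += lr * delta_o * h[j]
            for k in range(2):
                w_hidden[j][k] += lr * delta_h[j] * x[k]
            w_hidden[j][2] += lr * delta_h[j]
        w_output[2] += lr * delta_o

err_end = total_error()
print(f"error before training: {err_start:.3f}, after: {err_end:.3f}")
```

Nothing in the trained network resembles an explicit rule for exclusive-or; the "knowledge" consists entirely of the adjusted weights.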

Networks have been used to produce very interesting results. Several examples will be discussed throughout the text (see, for example, Chapters 2, 10, and 16), but one concrete example will be mentioned here. Sejnowski and Rosenberg (1987) produced a connectionist network called NETtalk, which takes an English text as its input and produces reasonable English speech output. Even though the network is trained on a limited set of words, it can pronounce the words from new text with about 90% accuracy. Thus, the network seems to have learned the "rules of English pronunciation", but it has done so without having explicit rules that combine and encode sounds.

Connectionist models such as NETtalk have great "Wow!" value, and are the subject of much research interest. Some researchers might object to our classification of connectionist networks as merely one among

Figure 1.6

Diagram showing how the inputs from a number of units are combined to determine the overall input to unit-i. Unit-i has a threshold of 1; so if its net input exceeds 1 then it will respond with +1, but if the net input is less than 1 then it will respond with -1.
