From Puzzles To Expertise

We have seen that the key issue posed by the Gestalt school of psychology was whether problem solving was productive or merely reproductive, as the associationists claimed. Problem-space accounts of puzzle research can be read as support for the productive claim, as they show us that people have general heuristics that they can apply to situations about which they have little prior knowledge. Hence, they are not merely recollecting solutions to problems but actively and dynamically constructing solutions by applying different heuristics.

However, there are also many ways to conceptualise reproductive problem solving that are much richer and more productive than the associationists ever imagined. More recent problem-solving research has shown that there are important reproductive components to problem solving. People can recall partial solutions and use prior knowledge to classify and define problems. Human problem solving seems to rely on a lot of specific knowledge about particular situations. Even though this is, strictly speaking, "reproduced knowledge", it is not "mere reproduced knowledge", because of the amazing variety of this knowledge, the complexity of the mechanisms used to acquire it, and the flexibility of the ways in which it is used.

In much of the remainder of this chapter we turn to a consideration of this research, as it is represented by studies of expertise in thinking. Puzzle solving is only one branch of a large tree of problem types. Indeed, one could argue that puzzles are a fairly marginal type of problem. Many jobs in everyday life are concerned with solving specific problems based on expertise in an area. Most of the puzzles we have met were well defined, in the sense that the initial states, goal states, and operators were well specified. However, real-world problems tend to be ill defined rather than well defined.

In the next few sections, we consider how experts solve ill-defined problems in specific domains like chess, physics, and computer programming. The keynote of this work is the importance of knowledge to the solution of ill-defined problems. Problem-solving expertise hinges on having considerable knowledge of the problem domain; by definition, expertise means being good at solving specific problems in a specific domain. In the domain of physics, an undergraduate student has less knowledge than a lecturer. Even though both of them may have equivalent intellectual abilities, the differences in their knowledge make one a novice and the other an expert problem solver. Many of the domains studied in expertise research have enormous practical significance and represent a major move in cognitive psychology away from laboratory-based puzzles and towards everyday, ecologically valid problems. We review chess, physics, and computer programming because they manifest several important theoretical and practical aspects of expertise research.

We have already seen the importance of problem representation in determining the difficulty of a problem. We will also see that in expertise a major source of difficulty is the representation and definition of problems. Expert problem solvers have the right sorts of knowledge to encode problems easily and represent them optimally, whereas novices often lack this knowledge.

The skill of chess masters

Differences in problem-solving expertise were first studied in the domain of chess. One view is that chess masters are masters because they have much specific knowledge about the game. Chess fits nicely into problem-space theory. The initial state of a game consists of all the pieces on the board in their starting positions, and the goal state is some specific checkmate against an opponent. Many alternative moves are possible from any state; from the initial state one can legally move any of the pawns or either of the knights. For each possible turn, a player can make one of a large number of replies, and an opponent can counter each of these replies with many more moves, and so on. In computational terms, one faces a "combinatorial explosion" of possibilities. The sheer number of possible paths is overwhelming; the problem space is truly vast. After 2 ply from the initial state (i.e., one move by each side), the 20 moves available to each of White and Black yield 400 possible positions. At only 6 ply from the opening position there are more than 9 million distinct board positions.
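The scale of this explosion is easy to reproduce. A minimal sketch, assuming an idealised constant branching factor of 20 moves per ply (in real chess the number of legal moves varies, and different move orders can transpose into the same position, so this counts move sequences, an upper bound on distinct positions):

```python
# Growth of the chess game tree under an idealised, constant
# branching factor. Counts move sequences, not distinct positions:
# transpositions make the number of distinct positions smaller.

def sequences_after(plies, branching=20):
    """Number of move sequences after `plies` half-moves."""
    return branching ** plies

print(sequences_after(2))  # 400: one move by each side
print(sequences_after(6))  # 64,000,000 sequences at 6 ply
```

The 400 figure matches the text exactly; at 6 ply the 64 million sequences collapse (through transpositions) to the more than 9 million distinct positions cited above.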

Most chess-playing computer programs search through a considerable number of alternatives and evaluate each alternative. For example, Newell and Simon (1972) reported a program called MANIAC, developed at Los Alamos in the 1950s, that explored nearly 1,000,000 moves at each turn. Even so, MANIAC only considered each alternative move to a depth of four turns (an initial move, an opponent's reply, a reply to this move, and the opponent's counter-move). Even with this brute-force computation, it did not play chess well and occasionally made serious mistakes. Current chess programs do almost unimaginable amounts of search. The current state of the art, Deep Blue, considers 90 billion moves at each turn, at a rate of 9 billion a second; using this amount of search it beat the World Chess Champion, Garry Kasparov, in May 1997. People do not appear to (want to) search this much, so something else seems to underlie the expertise of chess masters.
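The brute-force strategy these programs use can be illustrated with a generic depth-limited minimax search, the standard formulation of "look ahead a fixed number of turns and back up the values". The toy game below is a hypothetical stand-in, not chess; real programs differ enormously in their move generators and evaluation functions.

```python
# Depth-limited brute-force search (generic minimax): explore every
# line of play to a fixed depth, evaluate the leaves, and back the
# values up, maximising on our turns and minimising on the opponent's.

def minimax(state, depth, maximising, moves, apply_move, evaluate):
    """Exhaustively search to `depth` half-moves and return the backed-up value."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    values = [minimax(apply_move(state, m), depth - 1, not maximising,
                      moves, apply_move, evaluate)
              for m in options]
    return max(values) if maximising else min(values)

# Toy game: a state is a number; each "move" adds 1 or 2; the
# maximiser wants the final number high, the minimiser wants it low.
# Four half-moves deep, like MANIAC's four-turn lookahead.
best = minimax(0, 4, True,
               moves=lambda s: [1, 2],
               apply_move=lambda s, m: s + m,
               evaluate=lambda s: s)
print(best)  # alternating best play gives +2, +1, +2, +1 -> 6
```

Even in this trivial game the tree has 2^4 = 16 leaves; with chess's branching factor of roughly 20, the same four-turn lookahead already requires examining on the order of 20^8 lines, which is why the approach demands so much raw computation.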

DeGroot's chess studies

DeGroot (1965, 1966; DeGroot & Gobet, 1996) provided the first indication of what this "something else" might be. DeGroot compared the performance of five grand masters and five expert players on choosing a move from a particular board position. He asked his subjects to think aloud and then determined the number and type of different moves they had considered. He found that grand masters did not consider more alternative moves than less expert players and did not search any deeper than expert players, although they took slightly less time to make a move. However, independent raters judged the final moves made by the masters to be better than those of expert players.

In contrast to chess programs, the human players manifested a paradoxical mix of laziness and efficiency. They tended to consider only around 30 alternative moves in total, and only about four alternative first moves. At most, they searched to a depth of six turns, although frequently they searched far less (see Charness, 1981a; Saariluoma, 1990, 1994). Wagner and Scurrah (1971) examined this behaviour in further detail and found evidence that chess players used a progressive deepening strategy (proposed by DeGroot). Players check only a small number of alternative first moves; these moves are then returned to repeatedly and explored to a greater depth each time they are re-examined.
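The shape of progressive deepening can be sketched as follows. This is only an illustration of the control structure DeGroot described: a handful of candidate first moves is revisited on each pass, one ply deeper each time. The candidate moves and the depth-limited scoring function here are hypothetical stand-ins for whatever the player's actual evaluation would be.

```python
# Progressive deepening (illustrative control structure): re-examine a
# small, fixed set of candidate first moves, searching one ply deeper
# on each pass, and keep track of which candidate currently looks best.

def progressive_deepening(candidates, evaluate_at_depth, max_depth):
    """Return the preferred candidate plus a trace of each pass."""
    best_move, trace = None, []
    for depth in range(1, max_depth + 1):
        scored = [(evaluate_at_depth(move, depth), move) for move in candidates]
        best_score, best_move = max(scored)
        trace.append((depth, best_move, best_score))
    return best_move, trace

# Hypothetical scores that change as the search deepens, so the
# preferred move can flip between passes, as it does in human protocols.
scores = {("e4", 1): 0.3, ("d4", 1): 0.5,
          ("e4", 2): 0.6, ("d4", 2): 0.4,
          ("e4", 3): 0.7, ("d4", 3): 0.2}
move, trace = progressive_deepening(["e4", "d4"],
                                    lambda m, d: scores[(m, d)], 3)
print(move)  # "e4": favoured once the deeper passes are considered
```

Note how little work this does compared with brute-force search: only two candidate moves are ever examined, matching the observation that players consider around four first moves and a few dozen positions in total.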

So, where do the essential differences lie between grand masters and experts, and between human players and computer players? DeGroot proposed that experts and masters differed in their knowledge of different board positions. Chess players study previous games and can recall their own games in detail. Therefore, good chess players recognise previous board positions and remember good moves to make from these positions. This use of prior knowledge excludes the need to entertain irrelevant moves and a host of alternatives. DeGroot argued that if chess players had stored previous board positions in some schematic fashion (see Chapter 9) then this knowledge should be reflected in tasks that measure memory.

Therefore, DeGroot gave subjects brief presentations of board positions from actual games (ranging from 2 to 15 seconds) and, after taking the board away, asked them to reconstruct the positions. The main finding was that chess masters could recall the positions very accurately (91% correct), whereas less expert players made many more errors (41% correct). Thus, chess masters were better at recognising and encoding the various configurations of pieces than less expert players. Researchers working with DeGroot also found that when the pieces were randomly arranged on the board (i.e., were not arranged in a familiar configuration), both groups of players did equally badly. Neither group had the knowledge available to encode the unfamiliar configurations, although recent evidence has called this particular finding into question (Gobet & Simon, 1996a, b).

Chunking in chess

Simon and his associates extended DeGroot's findings (see Figure 14.13; Chase & Simon, 1973a, b; Simon & Barenfeld, 1969; Simon & Gilmartin, 1973; but see Vicente & Brewer, 1993, on mistakes surrounding the uptake of DeGroot's work). Chase and Simon proposed that players "chunked" the board (see Miller, 1956; and Chapter 6); that is, they memorised board positions by breaking them down into seven or so familiar units in short-term memory. The essential difference between chess masters and expert players lay in the size of the chunks they could encode: the seven chunks in a master's short-term memory contained more information than the seven chunks in a poorer player's memory.
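The arithmetic behind the chunking hypothesis is simple, and worth making explicit: recall capacity is the number of chunks times the chunk size, and only the latter differs between players. The chunk sizes below are hypothetical figures for illustration, not Chase and Simon's measurements.

```python
# Chunking arithmetic: both players hold about seven chunks in
# short-term memory, but a master's chunks are larger, so the same
# number of chunks covers more pieces on the board.

def pieces_recalled(n_chunks, pieces_per_chunk):
    """Total pieces covered by short-term memory under the chunking account."""
    return n_chunks * pieces_per_chunk

print(pieces_recalled(7, 1))  # beginner: small chunks -> 7 pieces
print(pieces_recalled(7, 3))  # master: larger chunks -> 21 pieces
```

On this account the master's advantage disappears for random boards, because no stored configuration matches and every piece must be encoded as its own chunk.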

Chase and Simon tested this hypothesis using a modified version of DeGroot's task. Chase and Simon asked their three subjects (a master, a class A player, and a beginner) to reconstruct a board position on a

[Figure 14.13: recall performance for Beginner (normal), Beginner (randomised), and Master (randomised) conditions.]
