designed in 1821. Although this and Babbage's later designs were never completed during his lifetime (his machinist eventually quit, and the British government tired of supporting his increasingly expensive project), Babbage's machines are widely considered the first examples of serial-processing computers, the computers on which most contemporary machine-vision strategies are implemented. Because their mechanisms are visible, Babbage's machines (some of which are on display at the Science Museum in London) give a more vivid impression of what logical computation entails than an integrated circuit does. (Courtesy of Eric Foxley)
Wherever Marr might have taken these ideas had he lived, it seems unlikely that the brain operates like the serial-processing computers we are all familiar with today—machines that use a series of specified, logical steps to solve a problem (so-called universal Turing machines). Of course, algorithmic computations based on physical information acquired by photometers, laser range scanners, or other devices that directly measure aspects of the world can solve some of the problems that confront biological vision. And the "visually guided" behavior of robotic vehicles today is impressive. But these and other automata are "seeing" in a fundamentally different way than we do. The limitation of machine vision in this form is its inability to meet the challenge that has evidently driven the evolution of biological vision: the unknowability of the physical world by any direct operation on images (the inverse problem). Devices such as photometers and laser range finders accurately determine some physical property of the world (such as luminance or distance) by direct measurement. But as should be apparent from previous chapters, this is not an option for biological vision, nor for machine vision if it is ever to attain the sort of visual competence that we enjoy. Only by evolving circuitry that reflects the outcome of trial-and-error experience with all the variables that affect successful behavior is a machine likely to generate "perceptions" and "visually guided" behavior that work well in real-world circumstances.
Given this caveat, it is important to recognize that computers can solve complex problems in another way, an alternative that gives cause for some optimism about the future of machine vision and, ultimately, about understanding what the complex connectivity of the brain is accomplishing. In a paper published in 1943, neurophysiologist Warren McCulloch and logician Walter Pitts pointed out that instead of depending on a series of predetermined steps that dictate each sequential operation of a computer in logical terms, problems can also be solved by devices comprising a network of units (neurons, in their biologically inspired terminology) whose interconnections change progressively according to feedback arising from the network's success (or failure) in dealing with the problem at hand (Figure 13.2). The key attribute of such systems—which quickly came to be called artificial neural networks (or just neural nets)—is the ability to solve a problem without previous knowledge of the answer, the steps needed to reach it, or the designer's conception of how the problem might be solved in rational terms. In effect, neural nets reach solutions by trial and error, gradually generating more useful responses by retaining the connectivity that led to improved behavior. As a result, the architecture of the trained network—analogous to evolved brain circuitry—is entirely a result of the network's experience.
Figure 13.2 Diagram of a simple neural network comprising an input layer, an output layer, and a hidden layer. The common denominator of this and more complex artificial neural networks is a richly interconnected system of nodes, or neurons. The strengths of the initially random connections between the nodes are progressively changed according to the relative success of trial-and-error responses, which is fed back to the network during training. The result is that the connectivity of the system gradually changes as the network deals ever more effectively with whatever problem it has been given. (After Purves, Brannon, et al., 2008)
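The scheme just described—initially random connection strengths, adjusted by trial and error and retained only when behavior improves—can be sketched in a few lines of code. In this purely illustrative example, a small three-layer network learns the XOR problem (a task a single unit cannot solve but a hidden layer makes solvable) by randomly perturbing its connections and keeping a change only when the feedback signal, total error, decreases. The task, layer sizes, and the random-perturbation learning rule are assumptions chosen for illustration; they are not taken from McCulloch and Pitts.

```python
import math
import random

random.seed(2)

# Illustrative task: XOR. Each case is (input pair, desired response).
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
HIDDEN = 4  # number of hidden-layer nodes (an arbitrary choice)

def new_weights():
    # Initially random connections: input -> hidden (2 weights + bias
    # per hidden node) and hidden -> output (one weight each + bias).
    return ([[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)],
            [random.uniform(-1, 1) for _ in range(HIDDEN + 1)])

def respond(weights, x):
    # Forward pass through the hidden layer to a single output in (0, 1).
    w_ih, w_ho = weights
    hidden = [math.tanh(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_ih]
    out = sum(w * h for w, h in zip(w_ho, hidden)) + w_ho[-1]
    return 1.0 / (1.0 + math.exp(-max(min(out, 60.0), -60.0)))

def error(weights):
    # Total squared error over all four cases: the "feedback" signal.
    return sum((respond(weights, x) - t) ** 2 for x, t in CASES)

def perturb(weights, scale=0.3):
    # A random trial change to every connection strength.
    w_ih, w_ho = weights
    return ([[w + random.gauss(0, scale) for w in row] for row in w_ih],
            [w + random.gauss(0, scale) for w in w_ho])

# Trial and error: keep a random change only if behavior improves.
best = new_weights()
best_err = init_err = error(best)
for _ in range(5000):
    trial = perturb(best)
    trial_err = error(trial)
    if trial_err < best_err:
        best, best_err = trial, trial_err

predictions = [round(respond(best, x)) for x, _ in CASES]
print("error:", round(best_err, 4), "responses:", predictions)
```

Note that nothing in the loop encodes how to solve XOR: the final connectivity is entirely a product of which random changes happened to improve performance, in loose analogy to the trial-and-error shaping of neural circuitry described in the text.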