A collaboratively editable version of Robert Horn’s brilliant and pioneering debate map Can Computers Think?—exploring 50 years of philosophical argument about the possibility of computer thought.
The magnificent set of seven 3′ x 4′ maps created by Robert Horn and his team of researchers focuses on the philosophical debate around Alan Turing’s assertion in 1950 that:
“by the end of the century… one will be able to speak of machines thinking without expecting to be contradicted”
i.e. that a computational system can possess all important elements of human thinking or understanding.
Each map explores an overarching question in deep and concise detail:
- Map 1: Can computers think?
- Map 2: Can the Turing test determine whether computers can think?
- Map 3: Can physical symbol systems think?
- Map 4: Can Chinese Rooms think?
- Map 5a: Can connectionist networks think?
- Map 5b: Can computers think in images?
- Map 6: Do computers have to be conscious to think?
- Map 7: Are thinking computers mathematically possible?
…with more than 800 arguments advanced by over 300 protagonists in the debate.
At the time of the publication of the maps, Robert Horn described the background to this Herculean endeavour as follows:
Karl Popper, the century’s great philosopher of science, suggested that science and philosophy, indeed all of human thinking, progresses from conjecture to refutation, that is, from claim to rebuttal to counter-rebuttal or new claim. In the combat of ideas, some survive and others fall. But the clash of ideas always results in new thinking: new distinctions, new concepts, new frameworks, new ways of viewing the world.
Argumentation has always been crucial to the development of ideas, from Socrates in the Athenian marketplace, through the debates of the new universities of the Middle Ages, to the 20th century’s proliferating scientific journals. Today, however, the increasing complexity of information and specialization has relegated most scientific and philosophical argument to the seminar rooms of the academy. Debates are still carried on worldwide by hundreds of participants in different disciplines, who often don’t read each other’s literature.
Complexity and specialization are not only the dilemma of the modern student, however; they are a misfortune for any reader who hopes to stay informed of humanity’s greatest questions. Nowadays it is almost impossible to answer the question: What is the current status of any one of these great debates? What arguments have been answered and rebutted? What points still stand at the moment? Traditionally, textbooks were expected to answer such questions, but they have become simultaneously bloated and selective. Interdisciplinary studies are supposed to solve many of these problems, but they usually fall victim to departmental struggles over students, budgets, publications, and promotions.
The modern study of argumentation began in 1957 with the philosopher Stephen Toulmin’s recognition that most real-life arguments did not resemble the formal argumentation schemes in place since Aristotle. Toulmin, a student of Wittgenstein, showed how much of modern philosophy, especially that associated with formal logic and the so-called analytic schools, had become a technical study drifting apart from the way thinking was actually done in many other fields: scientific, technological, legal, medical, and practical.
The Mapping Great Debates Project
Horn started the Great Debates project in 1987. For 25 years he had been CEO of Information Mapping®, Inc., an international consulting company he founded that is now a world leader in developing documentation and training for industry and technology. Its approach is based on a methodology for analyzing complex subject matter that he had developed in the mid-1960s while doing research and teaching graduate courses at Columbia’s Institute for Educational Technology.
He had written a book in the late 1980s, Mapping Hypertext: Analysis, Linkage, and Display of Knowledge for the Next Generation of On-line Text and Graphics, that foreshadowed the development of the World Wide Web. With this book Horn addressed many of the problems of structuring and organizing information in hypertext. He also began to devise graphic approaches to Stephen Toulmin’s innovations in argumentation analysis.
The Wider Vision
Horn imagined a seminar room of the future with wall-size electronic screens enabling students and faculty to keep the big picture in context while exploring the settled and disputed regions of an inquiry. He imagined that we would be able to ask: “What is the current status of the great philosophical and scientific debates that humanity has struggled with over the centuries?” And he imagined clicking on the debates with a TV zapper and moving back and forth easily between the big picture and the details in the papers and chapters written by the protagonists. That way any interested reader could navigate the critical debates, philosophical and practical, that fascinate us today but are beyond our reach because of information overload and time constraints.
If the great debates were to be displayed on a wall, what would they look like? That question launched Horn into the argumentation mapping project. His initial idea was to use some kind of diagramming approach for a major debate, and as the project evolved the mapping metaphor seemed most appropriate. He then had to select a great question. He did not want to work on “toy problems” (problems too small to test his developing methodology). Rather, he picked an “industrial strength” argument, the Turing Debate: Can computers think (or will they ever be able to)?
Can Computers Think?
Horn’s choice of the debate about whether computers can think was especially apt, focusing on the crucial area of our species’ awareness of its own uniqueness. The Can Computers Think? debate raises fundamental questions about human identity, as the human species has often been defined as the rational animal. But if actual reasoning can be incorporated into silicon chips, then we humans are no longer as unique as we thought. The Can Computers Think? debate surfaces regularly in the news: when a Mars robot explores the red planet on TV while hundreds of millions watch, when IBM’s Deep Blue computer beats the top human Grandmaster at chess, or when Hans Moravec, Director of the Mobile Robot Lab at Carnegie Mellon University, proclaims that we will be eclipsed by robots in the next 100 years:

“Today, our machines are still simple creations, requiring the paternal care and hovering attention of any newborn, hardly worthy of the word ‘intelligent.’ But within the next century they will mature into entities as complex as ourselves and eventually into something transcending everything we know, in whom we can take pride when they refer to themselves as our descendants. Unleashed from the plodding pace of biological evolution, the children of our minds will be free to grow to confront immense and fundamental challenges in the larger universe. We humans will benefit for a time from their labors, but sooner or later, like natural children, they will seek their own fortunes while we, their aged parents, silently fade away. Very little need be lost in this passing of the torch; it will be in our artificial offspring’s power, and to their benefit, to remember almost everything about us, even, perhaps, the detailed workings of individual human minds.” (Moravec, Hans. Mind Children: The Future of Robot and Human Intelligence. Cambridge, MA: Harvard University Press, 1988.)
In 1950 the eminent British mathematician and computing pioneer Alan Turing wrote in the journal Mind: “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.” Since then, the debate has engaged literally thousands of scholars, philosophers, cognitive scientists, mathematicians, physicists, neurobiologists, and researchers from other fields. And it has engaged some of the best minds of all time: the great mathematicians Gödel and von Neumann; pioneer cognitive scientists Philip Johnson-Laird, Allen Newell, and Nobel laureate Herbert Simon; the founders of the field of artificial intelligence, John McCarthy and Marvin Minsky; the physicist Roger Penrose; and, of course, philosophers from Leibniz and Descartes to such contemporaries as Hubert Dreyfus, John Searle, Daniel Dennett, Douglas Hofstadter, and Paul and Patricia Churchland, among many others.
Horn initially mapped the parts of the debate that flowed out of one of Searle’s moves in the argument, the Chinese Room thought experiment. (Map 4 contains over 100 arguments that resulted from this argument.) By 1994 Horn had begun to assemble his team of graduate students to help research and write the claims and link up the numerous, intricate threads of the arguments. Over one thousand books, articles, and papers were digested and considered for possible inclusion. Criteria for ultimate selection and summary included being the earliest or first to make the argument, non-triviality of the claim, formal publication of the argument, and falling within the scope of the map. By late 1997, the project had grown from 50 initial claims to seven maps with over 800 arguments represented.
*** Please note that the material posted to the Artificial Intelligence debate map prior to the adoption of the Creative Commons License on 28 February 2008 was not posted under the terms of the Creative Commons License. Please confirm the current status of this material with Thoughtgraph prior to any reuse beyond Debategraph.***