Barr, A., & Feigenbaum, E. A. (1981). The handbook of artificial intelligence (Vol. 1). William Kaufmann. p. 11.
Physicists ask what kind of place this universe is and seek to characterize its behavior systematically. Biologists ask what it means for a physical system to be living. We in AI wonder what kind of information processing system can ask such questions.
Each chapter in this book will include one or more “cognitive self-experiments.” These are small experiments that can be done by any reader who is a human.
The point of these experiments is to give you a chance to get direct, first-hand experience with interesting ideas that turn out to be fundamentally important in AI.
A word of advice: You will learn a lot more by actually doing these experiments than by just passively reading through them. Nothing can replace the valuable learning that happens when your brain is actively wrestling with a problem or activity.
Moreover, many of these experiments have “spoilers” at the end, and so if you just read through them without doing them, you may never be able to go back and recreate the experience.
We are going to kick off our adventures in intelligence by considering a very famous puzzle called the nine-dot puzzle.
Exercise 1.1 The nine-dot puzzle
What you will need: A piece of paper and a writing utensil.
First, draw nine dots in a three-by-three grid on your piece of paper, as shown above.
Now, your task is to draw a series of straight lines that will connect all nine dots. The catch is that you can use no more than FOUR lines, and they must all be connected, i.e., you have to draw them WITHOUT picking up your pencil from the paper.
Ready? Go!
Give yourself at least 5 minutes to work on this.
…
…
…
…
…
In American culture, there is no more iconic “thinking sound” than the music from the Jeopardy game show. (If you are like me, there is also, ironically, no sound more guaranteed to completely disrupt your thinking processes.)
(Jeopardy music plays in background)
…
…
…
…
…
If you have never seen it before, the nine-dot puzzle is quite difficult, and can seem downright impossible. Four straight lines? WITHOUT picking up your pencil? Inconceivable!
Chronicle, E. P., Ormerod, T. C., & MacGregor, J. N. (2001). When insight just won’t come: The failure of visual cues in the nine-dot problem. The Quarterly Journal of Experimental Psychology: Section A, 54(3), 903–919.
If you find yourself struggling…don’t worry, you’re in good company. In most research studies with “naive” adult participants (i.e., people who have never seen this puzzle before), a whopping zero percent of participants are able to solve it. (The difficulty of this puzzle is the reason that it’s such a classic.)
MacGregor, J. N., Ormerod, T. C., & Chronicle, E. P. (2001). Information processing and insight: A process model of performance on the nine-dot and related problems. Journal of Experimental Psychology: Learning, Memory, and Cognition, 27(1), 176.
For example, in one study, not a single one of 27 undergraduate students was able to solve this puzzle, even after ten separate attempts.
Click here for a basic HINT.
Click here for a more detailed HINT.
Click here to reveal the solution to the nine-dot puzzle.
Once you have found it or seen it, the solution to the nine-dot puzzle can seem blindingly obvious. But, getting there usually requires a leap of intuition regarding where you believe your line segments can start and end.
…which is why solving this puzzle is often wryly described as “thinking outside the box.”
In particular, most people attempting the puzzle for the first time will assume that each of their line segments must start and end on a dot. It takes significant creativity and cognitive flexibility to realize that line segments can stretch beyond the invisible “box” defined by the grid of dots.
The nine-dot puzzle is an example of what cognitive scientists call insight problem solving, as opposed to routine problem solving. Insight problem solving is characterized by some requirement that the problem solver has to make a “leap of intuition” and/or fundamentally rethink some aspect of the problem in order to solve it. Routine problem solving, on the other hand, only requires the problem solver to systematically carry out some sequence of steps. For example, doing long division would be a type of routine problem solving, because if you know the steps, then it’s just a matter of applying them (correctly) until you are done.
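To make the contrast concrete, here is routine problem solving in miniature: a small sketch of digit-by-digit long division as a fixed sequence of steps. (The particular numbers are arbitrary, chosen just for illustration.)

```python
# Routine problem solving: long division is a fixed procedure applied
# step by step until done -- no leap of intuition required.
def long_divide(dividend, divisor):
    """Digit-by-digit integer long division, the way it's done on paper."""
    quotient_digits = []
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)   # "bring down" the next digit
        quotient_digits.append(remainder // divisor)
        remainder = remainder % divisor
    quotient = int("".join(map(str, quotient_digits)))
    return quotient, remainder

print(long_divide(7489, 6))   # (1248, 1), since 7489 = 6 * 1248 + 1
```

If you know the steps, solving any such problem is just a matter of carrying them out correctly; there is nothing to rethink along the way.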
“This is all well and good,” you might be thinking, “but what does the nine-dot puzzle have to do with AI?”
WELL. I am glad you asked.
It turns out that the nine-dot puzzle is actually quite an effective metaphor for what AI is all about. To understand this metaphor, let us first describe the nine-dot puzzle using more precise terminology.
One way to describe the nine-dot puzzle is as a search problem:
Given all possible straight line segments, find a contiguous set of four segments that connects all nine dots.
In your early attempts to solve the nine-dot puzzle, you might have tried out some different solutions like these:
In making these attempts, you were searching through a space of possible collections of line segments. Using this terminology of search and search spaces, we can describe what makes the problem so difficult:
If possible line segments are restricted to those that start and end on a dot (call this Search Space A), then the correct answer is impossible to find, i.e., the correct answer is not contained within the space of possibilities being searched.
A quick back-of-the-envelope calculation can tell us how many possibilities we are searching through, even using the (incorrectly) constrained Search Space A:
— Assume you start the first line segment at any one dot (out of 9); then your first segment ends at any other dot (out of 8 remaining); second segment ends at any other dot (out of 7); third segment ends at any other dot (out of 6); and finally the fourth segment ends at any other dot (out of 5).
— So, \(9 \times 8 \times 7 \times 6 \times 5 = 15,120\) possible sets of four connected line segments, give or take. This is a slight overestimate, as it includes line segments that might overlap or be continuations of the same line.
— However, this estimate should be sufficient to convince ourselves that this is indeed a sizable search space. (This kind of rough estimate is common in AI, as we shall see throughout this book!)
Even if the search space is defined in this limited (and, it turns out, incorrect) way, the search space is still quite large. In other words, there are many, many possible sets of line segments that fit within this incorrect definition of the problem, which is probably why people can spend such a long time trying all kinds of (inevitably incorrect) solutions, like the ones shown in the above figure. In fact, this search space is small enough that a computer can check every possibility, as the sketch below shows.
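Here is a minimal sketch that enumerates all of Search Space A by brute force, assuming the dots sit at integer coordinates on a three-by-three grid, and using the same model as our estimate above (four connected segments whose five endpoints are distinct dots):

```python
from itertools import permutations

# Dots at integer coordinates on a 3x3 grid.
DOTS = [(x, y) for x in range(3) for y in range(3)]

def dots_on_segment(a, b):
    """Return the set of grid dots lying on the segment from a to b,
    endpoints included (exact integer collinearity + bounding-box test)."""
    hits = set()
    for d in DOTS:
        cross = (b[0] - a[0]) * (d[1] - a[1]) - (b[1] - a[1]) * (d[0] - a[0])
        in_box = (min(a[0], b[0]) <= d[0] <= max(a[0], b[0])
                  and min(a[1], b[1]) <= d[1] <= max(a[1], b[1]))
        if cross == 0 and in_box:
            hits.add(d)
    return hits

# Search Space A: connected paths of four segments whose five endpoints
# are all distinct dots (matching the 9 * 8 * 7 * 6 * 5 estimate above).
candidates = 0
solutions = 0
for path in permutations(DOTS, 5):
    candidates += 1
    covered = set()
    for a, b in zip(path, path[1:]):
        covered |= dots_on_segment(a, b)
    if len(covered) == 9:
        solutions += 1

print(candidates, "candidate paths;", solutions, "of them solve the puzzle")
# Prints: 15120 candidate paths; 0 of them solve the puzzle
```

Running this confirms both the size of the search space and the key point: the correct answer simply is not in Search Space A, no matter how cleverly you search it.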
We can also use this terminology of search and search spaces to describe the magical leap of intuition that will allow finding the correct answer:
In order to find the correct solution, the space of possible line segments must include those that connect two dots as well as those that start on a dot and end somewhere in the empty space outside the square defined by the nine dots (call this Search Space B).
We can observe that Search Space B is much bigger than Search Space A. (In fact, A is a strict subset of B. Moreover, A is finite, whereas B is infinite!)
Therefore, even if you somehow knew that you should be using Search Space B, you might find yourself lollygagging around for quite a while before finding the solution—and there are no guarantees that you ever would find the solution. In particular, even if you have the correct search space:
— You will probably still have to search iteratively, trying one candidate set of line segments after another, in a trial-and-error fashion.
— You will have to remember your previous attempts, so that you don’t keep retrying candidates that have already failed.
— You may try to use some additional cleverness or insight to guide your search toward more promising candidates.
It turns out these search properties—using an iterative, trial-and-error type of process; having to remember previous attempts; and trying to use some additional cleverness or insight to guide the search—are common to many AI techniques. We will come back to these properties many times throughout this book.
These three points are characteristics of the process that you use to search within a given search space. We call this your search algorithm, where “algorithm” is just our fancy computer science term for a sequence of steps that you take.
So, putting it all together, we might describe someone’s nine-dot puzzle-solving experience as follows:
1. Begin searching within Search Space A (line segments that start and end on dots).
2. After many failed attempts, decide to give up on Search Space A.
3. Select a new search space to try, namely Search Space B (line segments that may extend beyond the dots).
4. Search within Search Space B until the correct solution is found.
All four of these steps require intelligent effort, and these brief descriptions gloss over a lot of important details. For example, when/why would someone decide to finally give up on Search Space A? How might someone decide to try Search Space B (as opposed to other possible search spaces)? Etc.
More generally, we can say that finding the correct solution to the nine-dot puzzle requires landing on the right search space AND having an effective search algorithm.
If you don’t have the right search space, it doesn’t matter how good your search algorithm is. You will never find the correct answer, because the correct answer isn’t even contained anywhere in your space of possibilities.
If you don’t have a good search algorithm, it doesn’t matter if you are in the right search space. You are not likely to find the correct answer, because you might just end up blundering about the search space forever. (Though there’s a chance you might get lucky and find the answer.)
It turns out that these basic principles about search spaces and search algorithms are EXACTLY what AI is all about.
Interestingly, finding the right search space and the right search algorithm for a given problem is still mostly the province of AI researchers. In other words, it is mostly through human insight that we’ve managed to make AI techniques work for so many interesting problems. We are still quite a long way from having AI systems that can really, truly look at a problem from scratch and make up their own search spaces and search algorithms to solve it.
Virtually all of AI is about trying to solve complex problems using various kinds of search algorithms. However, as with the nine-dot puzzle, it is more of an art than a science to (1) figure out what the right search space is and then, for a given search space, (2) come up with the right kind of search algorithm.
We will come back to these ideas about search later in this chapter. But first, we’ll take a slight detour to start building the definition of AI that we will use throughout this book.
Ah, the million-dollar question, a.k.a., what are we all doing here in this AI textbook?
It is easy to get bogged down in philosophical discussions when talking about what AI is, or isn’t, or might someday be. However, an accurate and pragmatic definition begins with observing that AI is, first and foremost, a scientific discipline. You take courses in AI, there are AI textbooks and AI professors, and you can say, “I will use AI techniques to solve this particular problem.”
Thus, you can think of AI in the same way that you think about other disciplines like chemistry, physics, math, or mechanical engineering. AI is a particular field of human knowledge that has to do with studying certain kinds of problems and certain kinds of solutions to those problems.
The chemistry analogy is a useful one, as people have usually had a bit more exposure to chemistry in high school and college than to AI. Several useful parallels, as well as a few differences, follow from this analogy.
One difference between AI and chemistry comes in the values that people often attach to the term AI (especially, but not only, in the news media or popular writings), for instance assuming that every AI system is somehow sentient or eerily similar to humans in some magical way.
Although, just as in chemistry, we can do some pretty cool things with AI techniques and systems that might seem pretty magical to lay observers.
However, AI is not magic, any more than chemistry is. AI systems are just computers (or robots) running computer programs. The programs can be pretty sophisticated, including having elements that change or adapt in response to ongoing conditions, or behaviors that aren’t totally comprehensible or predictable to the original human designers of the system. But, they are still just programs running on computing machines.
And, while there has always been a lot of hype surrounding AI and its “human-like” or “human-level” capabilities, the gaps between even our most advanced AI systems and humans are still extremely large. To continue with the chemistry analogy, I would say that we are still at the level of studying combustion reactions in the lab (AI systems) to learn about wildfires (biological intelligence). While there is certainly a lot we can learn about wildfires by doing our lab research on combustion (including doing some strange and interesting things in the lab that don’t occur in nature), the two are still very, very different kinds of phenomena, with completely different scales of ingredients and complexity.
So, given that we can view AI primarily as a scientific discipline…what is AI the study of, exactly? Earlier, we said that AI is a particular field of human knowledge that has to do with studying certain kinds of problems and certain kinds of solutions to those problems. More specifically, this book defines AI as follows:
Definition 1 Artificial intelligence (AI) is the scientific study of computational systems that solve complex problems using knowledge representations and search algorithms.
Notice that we did NOT use the word “intelligence” anywhere in this definition. There are a lot of circular definitions of AI floating around that say something like, “AI is the study of computational systems that emulate human intelligence”—not the most helpful sort of definition!
There are four key elements of this definition that bear further scrutiny: (1) problems; (2) complex problems; (3) search algorithms; and (4) knowledge representations.
Definition 1.1 A problem is any kind of situation that requires a particular response, i.e., an input-output pairing where only certain outputs are considered to be valid or correct.
Under this very broad definition, the nine-dot puzzle is a problem, and so is walking down the street. The question, “What is 2+2?” is a problem, and so is behaving properly at a job interview. Proving Fermat’s last theorem is a problem, and so is writing a novel.
Number of solutions. For some problems, like the nine-dot puzzle, there may be a single, clearly defined correct solution. Other problems, like writing a novel, admit many (possibly infinitely many) solutions that might be equally valid but that differ in their details. Still other problems may not have any solution—or we may never know for sure if there is a solution or not.
Degrees of goodness. For many problems, solutions are not just right or wrong but can also have degrees of goodness associated with them, e.g., what is the fastest way to propel oneself down the street, or what is the best way to behave at a job interview, or what is the most efficient schedule for a bunch of airplanes to land at the airport, etc. Solutions might also have multiple dimensions of goodness, e.g., an airport scheduling solution might be the best in terms of time but the most expensive in terms of monetary cost.
Definition 1.2 A complex problem is a problem for which there are many, many possibilities to consider along the way to finding a correct response.
Note that this AI definition of complexity is a bit different from the notion of complexity coming from algorithms and complexity theory. In the study of algorithms, a problem is complex if it requires lots of resources to solve (i.e., time and/or memory). If a problem is complex under our AI definition, it will also be complex under the algorithms definition, but not always vice versa.
For example, trying to find a needle in a haystack is a complex problem under both definitions: there are many possibilities to consider, AND it will take a long time to go through each piece of hay.
However, suppose we are just trying to count how many pieces of hay there are in the haystack. This problem is still complex under the algorithms definition (in particular, it would take \(O(n)\) time steps to count \(n\) pieces of hay, assuming we can count one piece of hay at a time). However, this problem of counting is not complex under our AI definition, because there aren’t really multiple possibilities of anything to consider along the way.
How counting emerged as an intelligent capability in humans, on the other hand, IS very much an interesting and complex AI problem, because counting is not just “implemented” in humans as some simple function. When we study counting in humans, we are asking: in the soup of social, cultural, developmental, and biological processes that led to our current cognitive capabilities, how and when did we come up with the ability to conceive of numerical mental representations and assign them to external objects—from among all of the possible kinds of mental representations we could have come up with?
Under this AI definition of complexity, we can observe that the nine-dot puzzle is indeed a complex problem, because there are many possible sets of line segments (and indeed many possible search spaces) that can be considered along the way to finding the correct solution.
Many problems involve considering an infinite number of possibilities. For example, if I said to you, “I’m thinking of an integer, see if you can guess it,” there are an infinite number of integers for you to choose from. However, just as mathematics gives us different levels of “infinite” (e.g., integers versus irrational numbers), the same is true in AI, where there might be different levels of “infinite” to describe different problems or search spaces.
The complexity of a problem can often be quantified by simply counting up how many possibilities there are to consider. For example, in the nine-dot puzzle, in our initial (incorrect) Search Space A, we estimated that there would be at most 15,120 possible sets of four contiguous line segments to choose from.
However, this example also tells us something very important about the complexity of problems in AI: The complexity of a problem often depends on the choice of a search space. And, as with the nine-dot puzzle, choosing the search space is often part of solving the problem in the first place! So complexity is partially a property of the problem itself, but also partially a property of the methods being used to solve it.
Being able to quickly and roughly estimate the complexity of a problem, in quantifiable terms, is a valuable skill to have as an AI person. Often, these kinds of estimates can suggest how difficult a problem might be, and what kinds of solution methods might work best. Later on, we’ll talk more about estimating problem complexity, including things like using combinatorics and reasoning about upper and lower bounds.
Definition 1.3 A search algorithm is a sequence of steps for sifting through a large number of things to find one or more specific things.
When you are looking for your keys in the morning, you are running a search algorithm: you are carrying out a sequence of steps to check multiple possible locations in your home until you find your keys. When you are writing a term paper, you are also running a search algorithm: you are carrying out a sequence of steps in your mind to select, at each moment in time while you are writing, which word, sentence, or idea to communicate next.
There are many, many different kinds of search algorithms that can exist, including simple ones that we can write down explicitly as computer programs as well as more complicated and messy ones that we might be implicitly “running” in our own minds.
What all of these search algorithms have in common is that they strive to be at least somewhat more efficient than just searching exhaustively through every single possibility. In AI, we have special names for that baseline, try-everything approach: brute-force search or exhaustive search.
Brute-force search can be great if the search space is not too big. For example, if I said to you, “I’m thinking of a number between 1 and 10, you need to guess it, and you can have as many guesses as you want,” instead of trying to psychoanalyze me to try to figure out what number I am thinking of, you may as well just run through the numbers 1 to 10. It’ll take you about 15 seconds, and then boom, you’ve solved the problem!
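As code, this brute-force strategy is about as simple as search gets (the secret number below is hypothetical, of course):

```python
# Brute-force (exhaustive) search over a tiny search space: just try everything.
secret = 7   # hypothetical: the number I'm thinking of

for guess in range(1, 11):   # the entire search space, in order
    if guess == secret:
        print("Found it:", guess)
        break
```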
However, what drives pretty much the entire field of AI is that many problems are too complex to be solved using brute-force search. For example, in the game of chess, there are more possible games than there are atoms in the known universe! So, trying to choose the best chess move using brute-force search through possible game outcomes is not going to work.
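To see why, we can redo the back-of-the-envelope estimate behind that claim, using Claude Shannon’s classic figures of roughly 35 legal moves per position and roughly 80 plies (half-moves) in a typical game:

```python
import math

# Shannon's classic chess estimate: ~35 legal moves per position,
# ~80 plies (half-moves) in a typical game.
branching_factor = 35
plies = 80

log10_games = plies * math.log10(branching_factor)
print(f"possible chess games: roughly 10^{log10_games:.0f}")   # ~10^123
print("atoms in the known universe: roughly 10^80")
```

Even if we could evaluate a trillion (\(10^{12}\)) games per second, we would not make a dent in a number like \(10^{123}\) within the lifetime of the universe.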
Therefore, a lot of work in AI is about coming up with new search algorithms that offer some kind of efficiency advantages over brute-force search. Often, these search algorithms are customized to work well for particular types of problems or search spaces.
Inventing good search algorithms in AI often uses the same kinds of commonsense improvements that would apply to searching for your car keys in the morning, such as:
— checking the most likely places first;
— keeping track of where you have already looked, so you don’t waste time re-searching the same spots; and
— using extra knowledge or clues (like remembering where you last had your keys) to narrow down where to look next.
While these intuitions can be straightforward to understand, the tricky part often lies in (1) figuring out how to implement them as concrete algorithms, and also in (2) figuring out which intuitions to apply to which problems.
Both of these tasks are examples of the kind of work that AI people do. Notice that both tasks require not just knowing how to write code and think about algorithms but also applying creativity to come up with new ways of looking at problems and solutions.
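As a small first step toward task (1), here is a minimal sketch of those key-hunting intuitions turned into code. Every location name, and where the keys actually are, is made up for illustration; a real search problem would also propose new candidates dynamically rather than working from a fixed list:

```python
# Commonsense key-hunting intuitions, sketched as a search algorithm.
key_location = "coat pocket"   # hypothetical ground truth

# Intuition 1: order candidates so the most likely places come first.
# (Intuition 3, using extra clues, would amount to re-ranking this list.)
locations_by_likelihood = [
    "hook by the door", "coat pocket", "kitchen counter",
    "couch cushions", "yesterday's bag", "refrigerator",
]

checked = set()   # Intuition 2: remember where you have already looked.

for spot in locations_by_likelihood:
    if spot in checked:      # in richer searches, candidates can repeat;
        continue             # this memory keeps us from re-searching them
    checked.add(spot)
    if spot == key_location:
        print("Found the keys:", spot)
        break
```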
Definition 1.4 A knowledge representation is a structured collection of symbols and rules that serves as a model for something.
If you have some familiarity with different sub-areas within AI, you might be thinking, “Wait a minute! I thought knowledge representations are only used in knowledge-based AI or expert systems, but other parts of AI, like machine learning, are all about making AI systems that don’t NEED any knowledge.”
Not quite!
It is true that certain specialized areas of AI focus on using certain types of hand-engineered knowledge bases. However, it is not true that other parts of AI don’t use knowledge representations at all! The knowledge representations are just of a different sort. For example, a neural network is a distributed, connectionist representation of a high-dimensional nonlinear function.
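For instance, here is a sketch of a tiny neural network, written to emphasize that it is itself a knowledge representation: a structured collection of symbols (the weight matrices) plus rules (matrix multiplication and a nonlinearity) that together model some function. The architecture and numbers below are arbitrary, chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# The "symbols": two layers of weights and biases. In a trained network,
# everything the system "knows" is encoded in these numbers.
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

# The "rules": how the symbols combine to compute the modeled function.
def network(x):
    hidden = np.maximum(0.0, W1 @ x + b1)   # linear map + ReLU nonlinearity
    return W2 @ hidden + b2                 # linear readout

print(network(np.array([0.5, -1.0, 2.0, 0.0])))  # one point of the function
```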
By analogy, consider how molecular biology is one specific sub-area of biology, but molecules play a critical role in all kinds of biology. All biologists know about molecules, and could probably tell you how and why molecules are important in their particular field of study. Molecular biology just happens to focus on certain kinds of biological problems and experimental methods at the molecular level.
Thus, just as molecules are ubiquitous in biology, knowledge representations are ubiquitous in AI. And, the specific subfield of knowledge-based AI is like the subfield of molecular biology: it happens to have “knowledge” in its name but that does not mean that it is the only part of AI that uses knowledge representations.
Knowledge representations are the substrate in which a problem gets described to an intelligent system trying to solve it, and also the substrate in which the system stores and executes its search algorithm.
There are usually multiple levels in the knowledge representation for a particular problem. For example, in the nine-dot puzzle:
— the raw marks on the page (ink dots and pencil lines);
— a mental representation of those marks as idealized geometric objects (points and line segments);
— a representation of the rules of the puzzle (four segments, connected, covering all nine dots); and
— a representation of the search space itself (which candidate segments you even consider possible).
You may not be conscious of all of these levels of knowledge representation while solving the nine-dot puzzle. (Part of why human intelligence is so fascinating is how fast and automatic much of it tends to be.)
There is a lot of talk in the AI subfield of machine learning these days about AI systems that can “learn their own representations,” e.g., using deep learning. What this means is that one level of representation that previously would have been human-created is now generated by the system. However, there are still many other levels that remain human-created. Getting more and more of these levels to be machine-created remains a significant challenge for AI.
In AI systems, the knowledge representations (like the search algorithms) used for a given problem are nearly always human-created, and this activity is also a significant part of what AI people do. Sometimes, an AI system might have some built-in flexibility in being able to manipulate its own representations of a problem in very specific ways, but these are usually fairly limited in scope compared to the diversity of representations that a human would be able to imagine.
The vast majority of AI problems and solution techniques can be broken down in terms of the four elements defined above: (1) the problem, (2) the complexity of the problem, (3) the knowledge representation (including the choice of search space), and (4) the search algorithm.
Here are some examples.
(Under construction.)
Some chapters of this book include optional sections that delve a little more deeply into various AI things. These will be labeled, helpfully, as “optional.”
There is a fifth term in our AI definition that we haven’t yet defined, and that is the “computational system” part. As a shortcut, and in most practical applications of AI, you can just think of this as “computers.”
However, if you really want to grapple with AI at a more fundamental level, or if you want to engage with more theoretical and/or philosophical discussions about intelligence (including human and animal intelligence), then it is worth thinking more carefully about what computational systems actually are.
These definitions draw from: — Newell, A., & Simon, H. A. (1975). Computer science as empirical inquiry: Symbols and search. In ACM Turing Award Lectures. — Piccinini, G., & Maley, C. (2021). Computation in physical systems. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/archives/sum2021/entries/computation-physicalsystems/.
Definition 1.5 A computational system is a physically realized entity that performs computations.
Definition 1.6 A computation is the execution of an abstract function over a set of abstract symbols, where “abstract” means that the function and symbols can be described in a medium-independent way.
The idea of computation is a profound one. Essentially, what we are saying is that:
— we can define symbols, which are just tokens that can be made to stand for things;
— we can define a function, which is a set of rules for manipulating those symbols; and
— any physical system that executes that function over those symbols is performing a computation.
And, the really, really profound part of all this is that the symbols and function are abstract, which means that we can implement them in any physical medium and get the same result (modulo logistics of the time/space/physics of the medium).
For computer science students, a concrete example is close at hand: consider any program that you have written (in any programming language). You have defined a function that fulfills some purpose. The function manipulates symbols, i.e., variables and data structures, and produces some new variables and/or data structures as the result.
Symbols are powerful because of their referents. Just on their own, the variables and data structures have no value. They become valuable when we make them stand for something else in the world. For example, we can write a sorting program and give it some inputs and get outputs, but ultimately, it doesn’t really matter unless we are using the program to sort something that matters. Maybe we pass in a data structure of strings that stands for book titles in a library, or a data structure of floats that stand for stock prices.
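Here is that sorting example as a concrete sketch. The abstract function is one and the same; only the referents of its input symbols change. (The titles and prices below are made up.)

```python
# One abstract function over abstract symbols...
def sort_ascending(items):
    return sorted(items)

# ...given meaning only by what its symbols stand for (hypothetical examples):
book_titles = ["Moby-Dick", "Dracula", "Beloved"]   # strings standing for library books
stock_prices = [101.5, 87.25, 99.0]                 # floats standing for prices

print(sort_ascending(book_titles))   # ['Beloved', 'Dracula', 'Moby-Dick']
print(sort_ascending(stock_prices))  # [87.25, 99.0, 101.5]
```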
Under construction: Ada Lovelace quote.
Under construction: something about how using the same algorithm on symbols that stand for different problems is where the power of AI comes from.
Under construction: Feynman story about using people in the computer room. Under construction: building redstone computers in Minecraft.
Computation is powerful because of its abstract nature. And, we all know that part of what makes programs so powerful is that you can run the same program on any (working) computer and get the same result. In fact, we can run a program on ANYTHING and get the same result, though it might take a lot more time, resources, etc. We could assemble a line of hobbits carrying buckets of mushrooms that are empty or full, and we could use THAT as a computer. (Though it would take a long time to run a program, and we’d need a lot of hobbits and mushrooms!)
Under construction: Does this part really matter? Counter-examples: a digestive system? A washing machine?
Under construction: intelligence, cognition, computation, physical systems, etc.
Under construction: the computational perspective; humans and other animals are an existence proof of what is possible.
An ecological view of intelligence. Compare two approaches: (1) take a simple robot and add some cool capability, versus (2) take some cool capability in humans and simplify it enough to express it computationally. The second approach forces you to think explicitly about the gaps between an AI technique and human intelligence.
There is a sixth term in our AI definition that we haven’t yet defined, and that is the “scientific study” part. What does it mean to study something scientifically?
The scientific study of science itself (!) falls under the fascinating discipline of philosophy of science.
One of the most amazing bits of luck I had in my education was having Nancy Nersessian, a world-renowned philosopher of science, working two doors down from my PhD advisor’s office in graduate school. Nancy’s research centers on understanding the cognitive processes by which scientists make new discoveries, which, if you think about it, is an incredible demonstration of our species’ intelligent powers. Her classes and writings completely changed the way I think about science, creativity, and AI.
This book is an excellent primer: Nersessian, N. J. (2008). Creating scientific concepts. MIT Press.
Getting AI systems to demonstrate this level of creative intelligence remains an aspirational goal, though there have been attempts! For more, see the chapter on Creativity.
You may be familiar with definitions of science that revolve around using the scientific method: coming up with hypotheses, making predictions, and then doing experiments to test those predictions. While this rather strict definition does describe some scientific doings, real-world science has always been, and continues to be, much more diverse in how it progresses. Real science involves storytelling, accidents, influences of contemporary politics and culture and economics and technology, collaboration, competition, thought experiments, drawings, field trips, mistakes, arguments, and all sorts of other messy and interesting human activities.
While all sciences (including AI!) share these kinds of activities in common, there are certain “flavors” of science that lend themselves more to certain systematic methods of study. Three of the most common flavors are:
— natural sciences (like physics, chemistry, and biology), which study phenomena in the natural world through observation and experiment;
— formal sciences (like mathematics and logic), which study abstract systems through definitions and proofs; and
— design sciences, or “sciences of the artificial” (like the various engineering fields), which study how to build artifacts that serve particular purposes.
AI (and really computer science more broadly) is a very interesting sort of science because it combines aspects of three distinct types of scientific fields.
Under construction: where does knowledge come from; Newell and Simon; etc.
Each chapter of this textbook will include one or more “extra tidbits” towards the end that go off on various interesting tangents from the chapter material.
If you found the nine-dot puzzle interesting, here are a couple more “insight problems” for you to solve, taken from cognitive science research on insight problem solving.
But, as a disclaimer…I believe you may now have a bit of an advantage on these problems compared to a truly naive research participant. Why is that?
Well, we have just discussed search spaces at length, and how insight problems like the nine-dot puzzle often require changing the search space and having a good search algorithm. These observations may end up serving as pretty powerful hints for solving future insight problems, especially if you know (as you do right now) that that’s the kind of problem you are about to solve.
It would be interesting to do an experiment on this! We could have one group of students read this chapter and then attempt the following two problems, while another group of students only sees the nine-dot puzzle and its solution, without any of the accompanying discussion about search spaces. What do you think would happen?
Under construction.
Nine-dot puzzle variant 1.
Nine-dot puzzle variant 2.
Nine-dot puzzle variant 3.
Matchstick problem.
Eight-coin problem.
Chi, R. P., & Snyder, A. W. (2012). Brain stimulation enables the solution of an inherently difficult problem. Neuroscience Letters, 515(2), 121–124.
Under construction.
Please cite as: Kunda, M. (2022). Triangle AI Book. https://www.triangleaibook.org
This work is licensed under the Creative Commons BY-NC-ND 4.0 License. This means you are welcome to redistribute material from this book but only: (1) WITH attribution, (2) for NON-commercial purposes, and (3) WITHOUT modifications, in order to preserve the intellectual integrity of this work.