To supervise or not to supervise in AI?

If you look carefully at how humans learn, you see surprisingly little unsupervised learning.

By Mike Loukides
July 11, 2016
"First Steps, after Millet," Vincent van Gogh, 1890. "First Steps, after Millet," Vincent van Gogh, 1890. (source: Metropolitan Museum of Art on Wikimedia Commons)

One of the truisms of modern AI is that the next big step is to move from supervised to unsupervised learning. In the last few years, we’ve made tremendous progress in supervised learning: photo classification, speech recognition, even playing Go (which represents a partial, but only partial, transition to unsupervised learning). Unsupervised learning is still an unsolved problem. As Yann LeCun says, “We need to solve the unsupervised learning problem before we can even think of getting to true AI.”

I only partially agree. Although AI and human intelligence aren’t the same, LeCun appears to be assuming that unsupervised learning is central to human learning. I don’t think that’s true, or at least, it isn’t true in the superficial sense. Unsupervised learning is critical to us, at a few very important stages in our lives. But if you look carefully at how humans learn, you see surprisingly little unsupervised learning.

It’s possible that the first few steps in language learning are unsupervised, though it would be hard to argue that point rigorously. It’s clear, though, that once a baby has made the first few steps—once it’s uttered its first ma-ma-ma and da-da-da—the learning process takes place in the context of constant support from parents, from siblings, even from other babies. There’s constant feedback: praise for new words, attempts to communicate, and even preschool teachers saying, “Use your words.” Our folktales recognize the same process. There are many stories about humans raised by wolves or other animals. In none of those stories can the human, upon re-entering civilization, converse with other humans. This suggests that unsupervised learning may get the process started, but once it’s underway, learning is heavily supervised.

Unsupervised learning may be involved in object permanence, but endless games of “peekaboo” should certainly be considered training. I can imagine a toddler learning some rudiments of counting and addition on his or her own, but I can’t imagine a child developing any sort of higher mathematics without a teacher.

If we look at games like chess and Go, no one achieves expertise without long hours of practice and training. Lee Sedol and Garry Kasparov didn’t become experts on their own: it takes a tremendous investment in training, lessons, and directed study to become a contender even in a local tournament. Even at the highest professional levels, champions have coaches and advisors to direct their learning.

If unsupervised learning is a prerequisite for general intelligence, but not its essence, what should we be looking for? Here are some suggestions.

Humans are good at thinking by analogy and relationship. We learn something, then apply that knowledge in a completely different area. In AI, that’s called “transfer learning”; I haven’t seen many examples of it, but I suspect it’s extremely important. What does picture classification tell us about natural language processing? What does fluid dynamics tell us about electronics? Taking an idea from one domain and applying it to another is perhaps the most powerful way by which humans learn.
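To make that concrete, here is a minimal sketch of transfer learning in its most common current form: reusing features learned on one task as the starting point for another. It assumes PyTorch and torchvision are installed; the five-class target task and the train_loader it would be trained on are hypothetical placeholders, not anything from this article.

```python
# A minimal transfer-learning sketch (assumes PyTorch and torchvision).
# Idea: reuse features learned on one problem (ImageNet classification)
# and retrain only the final layer for a different problem.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.resnet18(pretrained=True)   # features learned on ImageNet

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace the classifier head for a new task (say, 5 classes).
model.fc = nn.Linear(model.fc.in_features, 5)

optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

# Training loop over the new task's labeled data (train_loader is hypothetical):
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

This isn’t the cross-domain analogy-making described above, but it is the narrow form of knowledge transfer that today’s systems can actually do.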

I haven’t seen much AI research on narrative, aside from projects to create simple news stories from event logs. I suspect that researchers undervalue the importance of narrative, possibly because our ability to create narrative has led to many false conclusions. But if we’re anything, we’re not the “rational animal” but the “storytelling animal,” and our most important ability is pulling disparate facts together into a coherent narrative. It’s certainly true that our narratives are frequently wrong when they are based on a small number of events: a quintessentially human example of “overfitting.” But that doesn’t diminish their importance as a key tool for comprehending our world.

Humans are good at learning from small numbers of examples. As one redditor says, “you don’t show a kid 10,000 pictures of cars and houses for him or her to recognize them.” But it’s a mistake to think that tagging and supervision aren’t happening. A toddler may learn the difference between cars and houses with a half dozen or so examples, but only with an adult saying, “that’s a car and that’s a house” (perhaps while reading a picture book). The difference is that humans do the tagging without noticing it, and the toddler shifts context from a 2D picture book to the real world without straining. Again, learning from a small number of examples is both a strength and a weakness: we’re plagued by overfitting and “truths” that are no more than prejudices. But the ability itself matters. Lee Sedol has probably played tens of thousands of Go games, but he certainly hasn’t played millions.
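To underline how much work those tags are doing, here is a toy sketch with scikit-learn. The feature vectors and labels are invented for illustration; the point is only that a model can classify a new object after six examples because each example arrived with a name attached.

```python
# Learning from a handful of *labeled* examples (assumes scikit-learn).
# The six examples and their [height, width] features are made up.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Six hand-tagged examples: the adult saying "that's a car, that's a house."
X_train = np.array([[1.5, 4.2], [1.4, 4.5], [1.6, 4.0],    # cars
                    [6.0, 9.0], [7.5, 8.0], [5.5, 10.0]])  # houses
y_train = ["car", "car", "car", "house", "house", "house"]

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A new, unseen object: tall and wide, so the nearest example is a house.
print(clf.predict([[6.5, 9.5]]))   # -> ['house']

# The same six points *without* labels could at best be clustered:
# the model might group similar objects, but couldn't name either group.
```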

I’m not arguing that unsupervised learning is unimportant. We may discover that it is an important prerequisite to other forms of learning, that it starts the learning process. It may be a necessary step in evolving from narrow AI to general AI. By sorting inputs into unlabeled categories, unsupervised learning might help to reduce the need for labeled data and greatly speed the learning process. But the biggest project facing AI isn’t making the learning process faster and more efficient. It’s moving from machines that solve one problem very well (such as playing Go or generating imitation Rembrandts) to machines that are flexible and can solve many unrelated problems well, even problems they’ve never seen before. If we really want general intelligence, we need to think more about transferring ideas from one domain to another, working with analogies and relationships, creating narratives, and discovering the implicit tagging and training that humans engage in constantly.
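As a coda to the point about sorting inputs into unlabeled categories, here is a hedged sketch, on synthetic scikit-learn data, of one way that can stretch a tiny labeling budget: cluster the raw inputs first, then ask for a single human label per cluster and propagate it to that cluster's members.

```python
# Unsupervised grouping first, then a few labels (assumes scikit-learn;
# the data and the "human" labels are synthetic, for illustration only).
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, true_labels = make_blobs(n_samples=300, centers=3, random_state=0)

# Step 1: unsupervised -- group the unlabeled inputs into 3 categories.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: cheap supervision -- label only the point nearest each cluster center,
# then propagate that one label to everything in the cluster.
propagated = np.empty(len(X), dtype=int)
for cluster_id in range(3):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    center = kmeans.cluster_centers_[cluster_id]
    representative = members[np.argmin(np.linalg.norm(X[members] - center, axis=1))]
    propagated[members] = true_labels[representative]   # one human label per cluster

print("accuracy with only 3 labels:", np.mean(propagated == true_labels))
```

On well-separated toy data, three labels go a long way; on real data the categories are far messier, which is part of why unsupervised learning remains unsolved.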
