Briana Brownell

Uncanny Learning


What is uncanny learning?

My first interaction with ELIZA was late at night in the university’s Physics Building while I was working as a research assistant at the Subatomic Physics Institute. I was trying to match mountains of experimental data to theoretical predictions about how quarks should have been behaving, and my code took a while to run each scenario. This left me ample opportunity to chat with ELIZA, one of the first AI chatterbots, during my off time.

These were my first conversations with an artificial intelligence. At that time, I wanted to be a theoretical physicist, but Douglas Hofstadter’s column in Scientific American, Metamagical Themas, had begun to spark some ideas about creativity and mathematics, and how computers could do some of those creative tasks. ELIZA wasn’t particularly creative, but she would ask you questions and repeat back some of what you said in a mostly sensical way. I remember the feeling of unease deep in my stomach when ELIZA would say something that seemed particularly poignant. I began to imagine: what if there really was an intelligence on the other end?

I found this both exhilarating and terrifying. And if I’m being honest, I still do. Why do some things seem so fundamentally human that we are put off by the very idea of technology replacing them, while for other tasks, machines to help us can’t come fast enough?

Who are the Uncanny Learners?

Each decade brings a new slate of technologies, some of which we embrace enthusiastically, and others which cause trepidation, unease, or resentment. As attributes that we think of as being genuinely human are increasingly taken on by this technology, we change our expectations of what it could and should be capable of. Will technology advance to the point where all human idiosyncrasies are imitated by smarter and smarter machines? If that happens, what are the risks to our human world?

These Uncanny Learners might do things that feel too close for comfort to our own minds. They may show a striking depth of character in conversation, create a moving piece of music, or display the kind of insight an expert trains a whole lifetime to build. In light of the rapid advancement of AI in many areas of our day-to-day life, re-examining what we believe makes us human becomes an ever more urgent imperative. As we subject our humanity to a level of scientific scrutiny never before reached, these advancements, rather than limiting or dismissing us, give us the power to understand ourselves in a much more profound and worthwhile way. Or so we hope. Without understanding these very personal reactions, we risk losing out on helpful technological advancements on one side, or human relevance on the other.

But the concept expands beyond this sphere to include us as potential uncanny learners too. Humans can learn about themselves through working with artificial minds. Throughout my career I have seen this firsthand, when an artificial intelligence I’ve built has shed light on my own human experience.

It is my hope that my work about uncanny learning can help us through this challenging time, where artificial minds give us new ways to think about ourselves. There are many compelling themes, each of which is related to many others.

All Life on Earth

Josephine Salmons knew immediately that the fossilized skull on her friend’s mantel belonged to an ape, but since she wasn’t able to identify the species, she had a hunch it might be an extinct one. The skull had been found during the blasting of a limestone cliff at Taung, South Africa, and was taken home as a curiosity by her friend’s father, a manager at Rand Mines Limited. It was common for fossils to be found in the mined limestone tufa in South Africa, but none found so far were quite like this one.

Her chance dinner with a friend precipitated one of the most significant discoveries in anthropology. Salmons was an anatomy student demonstrator at the University of the Witwatersrand in the mid-1920s, and so she brought the fossil to Raymond Dart, one of her anatomy professors. Dart also recognized its uniqueness, eventually proclaiming it a new species of hominid, which he named Australopithecus africanus.

As a result of the geographic location where it had been found, Dart suggested that Africa rather than Asia was the Cradle of Humankind. After many decades of debate in the scientific community, mounting evidence convinced the initial skeptics.

Discoveries such as this one have provided some insight into the minds and capabilities of our extinct relatives, but many questions still remain: When we think of intelligence, what do we include? Are plants intelligent? How about viruses? What separates humans from any other living thing?


Philosophers have debated our place in the world of living things for millennia, and scientists have added a wealth of knowledge to the conversation. The capabilities of other life forms we discover keep astounding us. They might be very different from us, like the nearly immortal tardigrade that can survive desiccation, extreme heat and cold, radiation, pressure, and starvation, or octopuses, whose complex nervous systems are only partially confined to the brain and who can “see” by touch.

However, humans share some of our more cherished traits with other species on Earth, suggesting we have at least a partially shared experience with our evolutionary relatives. We sometimes share surprising similarities, like bees’ ability to understand zero, or plants that respond to anesthesia. But do we consider these traits fundamental to our humanity? We sometimes find the parallels uncomfortable: chimpanzees coming up with swear words close in meaning to ours, birds using their common language to lie to others, zebras murdering their kin, and elephants grieving dead loved ones.

In some cases we find things that are uniquely human. Theories about why this is so are oftentimes incomplete or controversial. Where do mysteries still remain?

From early in history, humans have wanted to build machines that share some of these traits. Sometimes the machines we build mimic our abilities, like the washing machine, which frees up time for other tasks. But what about machines that mimic human cognition and attempt to create stories, poetry, and art?

Machine Learning is a new design paradigm that allows technology to improve based on data - learning and becoming better over time. How will this affect some of the new kinds of machines we want to build, like self-driving cars, AI assistants and concierges, and chess opponents?
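To make “improving based on data” concrete, here is a minimal, purely illustrative sketch (the data, learning rate, and target rule are invented for this example, not taken from any real system): the program is never told the rule y = 2x, yet it recovers it from examples.

```python
# A minimal sketch of learning from data: the program is not told the rule
# y = 2x; it recovers it by repeatedly nudging a weight to reduce its error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, desired output) examples

w = 0.0  # the machine's initial "guess" at the rule
for _ in range(200):  # each pass over the examples improves the guess
    for x, y in data:
        prediction = w * x
        w -= 0.05 * (prediction - y) * x  # gradient step on squared error

print(round(w, 2))  # close to 2.0: the rule was learned, not programmed
```

The same nudge-toward-less-error loop, scaled up enormously, is what lets a system steer a car or pick a chess move without anyone writing the rules by hand.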

Next: Feast and Famine


Feast and Famine

Imagine the smell of your favorite food. Maybe it’s the earthy scent of freshly baked bread coming out of the oven. The spice of a steak on the BBQ. The familiar aroma of your dad’s signature curry. Chocolate cupcakes your spouse makes from an old family recipe. More than likely, this evoked memories of some occasion in your past. Perhaps you remember who you were with, or where you were.


Eating, of course, is ubiquitous among living things. While plants “eat” light, many other animals eat much the same things we do - with one important difference. We are the only species to have begun to cook food.


Cooking food made an enormous impact on the way we live. Our relationship with food has shaped many cultures and can be found at the heart of religious and social traditions. Food needed to be prepared: a fire started, raw ingredients processed, and a procedure applied. Instead of eating lightly as food was found, as when foraging was the main means of nourishment, groups of humans began to eat at the same time. This new ritual was important in the ancient world and remains so to this day: many social events prominently feature sharing food with friends and family, and shared meals and feasts have long been common in religious practices across many different traditions.

Biologically, cooking food was important because it allowed us to eat nutritious foods that we were unable to digest without first processing or grinding them, such as rice, wild wheat, oats, and millet, as well as starchy potatoes. It also allowed us to eat less while gaining the same amount of energy, since cooking made the food’s energy easier to access. As we began planning our food intake, our society shifted radically.


Agriculture undeniably shaped human culture. Food surplus made ever larger populations possible. Both cooking and agriculture are fundamental to the development of human society as we know it.

Humans have even made machines that eat. Why was the digesting duck, a mechanical duck that ate, digested, and excreted grain, heralded as “the glory of France” by Voltaire? How did it shape the way we think about creating useful machines?

At an abstract level, any artificial intelligence needs some kind of power source, which we can think of as a form of eating (perhaps closer to a plant’s than to ours). But will machines ever share a meal? And is the shared social bond around eating, which was so important for the development of communities in our history, a uniquely human trait?

Next: The Roots of Art

The Roots of Art

Three million years ago, one of your and my hominid ancestors was living in the northern part of what is now South Africa. This individual picked up a brownish-red stone and carried it over 32 kilometers to their home - at least a six-and-a-half-hour hike. For almost 50 years after its discovery in 1924, this stone remained a mystery: why had our ancestor picked it up and carried it such a distance?


The Pebble of Many Faces


The key to solving the mystery was to understand a brain that was similar to, yet still different from, ours. On one side of the stone there was a “large-brained” face that was obvious to the human paleoanthropologists who studied it. It’s hard to miss: two circular “eyes” pitted deep into the stone, about the size of a human iris; a rounded “forehead” complete with a line marking what could be a “hairline”; an open, gaping “mouth”; and a rounded “chin”.

But it wasn’t a human that picked it up; it was an australopithecine. The stone holds two other faces: a young, laughing, smiling face with a wide jaw and shorter forehead, and, when inverted, an old, wrinkled individual with a serious expression. The individual who picked it up likely recognized the faces, possibly as a friend or family member, but certainly as a likeness of themselves.



This is the earliest known example of pareidolia, the phenomenon in which we perceive a familiar pattern where none may really exist. There are many examples, like “the man in the moon”: some see a rabbit pounding herbs, others a woman carrying a bundle of sticks. These images are woven into folklore and customs.

Even if we might not know it, this artifact is extremely important to human culture: it is considered to represent the roots of art.

Machine Vision and Bias

If our brains are prone to see these apparitions, and we are creating artificial intelligence to see as we see, what is AI prone to see?

In machine vision, artificial intelligence has its own pareidolia, one we are often unable to see. Usually we think of bias in machine learning systems as an error to be avoided, a mistake to be fixed by better algorithms, more advanced technology, or larger data sets. And most of the time we are right to think of it this way. But treating it only as a deficit means treating our own biases and flaws as nothing more than deficits too - when our own pareidolia, a bias in our cognitive system, is something we consider full of meaning.


The man in the moon is the source of folklore, and these shared stories connect us with one another. They have shaped human culture for millennia.

Instead of dismissing the errors that AI makes as problems to be overcome, what would happen if we respected them as meaningful, as a window into another, artificial mind?

When the AI we are building has an epiphany like our ancestor had 3 million years ago, how might we recognize it? How might we realize it is important, rather than simply an error in the technology we are building?

Next: The Hero with a Thousand Cases

The Hero with a Thousand Cases

Stories have brought humans together (and sometimes divided us) for thousands of years. They can help us understand our world in a deeper way, providing insight into the experiences of others, teaching us morals, giving warnings, and helping us remember our history. Shared myths have been a major part of every human culture and can strengthen community ties. Even in the present day, how much of our lives is spent either hearing or telling a story?



The Epic of Gilgamesh, the oldest written story we have found, was recorded about 4,000 years ago, although storytelling has a long oral tradition that more than likely predates this text. What is the biological reason we’re all so good at telling stories, and why do we love to hear them?

Even though humans have a special knack for remembering through stories, artificial intelligence does not yet share this capability. Why does AI struggle so much with comprehension, and how can it become better at telling stories humans like to hear?



But the stories humans tell are not all grand myths - we also love to tell stories about each other. Most of the stories we hear are about people: those we know - family, friends, acquaintances - and those outside our social circle, like celebrities, athletes, and artists.

Many of us have heard the saying “Great minds discuss ideas. Average minds discuss events. Small minds discuss people.” But is this really so? Gossiping about other members of one’s social circle had a surprisingly positive effect on early human society: by passing along second-hand information about other individuals, we were able to live in far larger social groups. Even today, gossip still performs an important function in our society. Will the day come when our AI assistants gossip to each other about us?


Our Minds

Humans see stories everywhere. Why do our minds love a narrative? Scientists have shown that we create meaning and intention out of random events and inanimate objects. Since we are creating many prediction machines to try to understand the human world, should the AI we’re building have the same biases? In the machine learning world, our propensity for apophenia runs counter to most AI design. Yet, this is an important part of understanding human behavior and decision-making. How can this be resolved?

Next: Lies, Noble and Otherwise

Lies, Noble and Otherwise

What was the first lie you remember telling? I remember mine. One day when I was in kindergarten, I noticed a tulip blooming in front of the school. I was completely enamored with it, so during recess, I snuck out to the front of the school and picked it. When I got caught, I denied taking it from the front, claiming I had found it elsewhere in the schoolyard. The lie was obvious to the teacher supervising the class as we played outside. At five, how could I have known that tulips are not flowers that commonly bloom wild? I was caught in the lie and had to spend the last ten minutes of recess inside - and what’s worse, I lost possession of that cheerful tulip I had so badly wanted.


Learning to Lie

Many parents feel that when their children begin to lie, it is a sign of the loss of the innocence of childhood. But should we really bemoan this new behavior? Under the surface, lying is evidence of a very important new ability: the ability to understand that others have minds, and that these minds have their own knowledge, beliefs, and intentions just like ours, separate from us. Children learn to lie when they understand two things: that another person can have different beliefs than they have, and that their words can influence the beliefs of others. Typically, learning that words can convey falsehood happens before children realize that other minds can hold different beliefs, but language is still one of the most important faculties that allows us to lie.


Although it’s common to consider this kind of deception a uniquely (and detrimentally) human trait, lying is common in some other species, including birds that frighten competitors away with warnings about a fake predator so they can have the first share of food, and primates who share many of our prevarication abilities.

However, this type of lying is not the only deception in the animal kingdom: there is a spectrum of lies. There are butterflies, moths, fish, birds, and even lizards that have evolved distinctive physical patterns resembling large eyes, patterns that frighten potential predators and keep their bearers safe. Some animals have evolved to mimic the likeness of another plant or animal, either to blend in or to cause predators to mistake them for something dangerous.

These kinds of “lies” are not the result of intention, but of optimization. There are many examples in machine learning where this kind of result can come about. Is evolution the only mechanism that can generate this? Or are there other learning methods that could produce a similar result? Where can we draw the line between intentional lies and lies that come about by accident?

Noble Lies

Ever since Plato, the social purpose of lying has been an important part of the human way of life. Language gives us the ability to share our knowledge and experiences with others, and a way to communicate about things that happened without others present. But it also lets us lie and influence the perceptions of others. Malicious gossip, whether true or not, is one of the easiest ways to exert power over someone else in the same social circle.


Although stories and gossip can be positive forces for a social community, malicious gossip and divisive stories can be a destructive force.

Ethically, however, the noble lie might be a worrying possibility in artificial intelligence, and it is a theme frequently found in science fiction. In Kubrick and Clarke’s 2001: A Space Odyssey, HAL’s reaction to lying gives us an insight into how important it might prove in AI: “He was only aware of the conflict that was slowly destroying his integrity - the conflict between truth, and concealment of truth.”

Lies come in many forms aside from self-preservation deception. What kinds of lies have been identified by scientists? Can artificial intelligence exhibit this fibbing behavior?



Truth and falsehood give us some interesting paradoxes in logic. How do they influence how we think about, and ultimately build, artificial intelligence? Self-reference is essential for these paradoxes, and also essential for creating an understanding of the self. So far, paradoxes have been an especially challenging construct to bring into machine learning.

Even some simple logical decision rules brought challenges to early learning machines. The Perceptron, the first learning machine, was not able to learn the “exclusive or” operation - the one that lets us know you might have a hamburger or pizza for dinner, but are quite unlikely to have both. We encounter these types of decisions often in everyday life. Although this problem has since been solved by better machine learning structures, we can explore whether there are other types of decisions that remain impossible with the learning structures we now use, especially the challenging paradoxes and uncertainties.
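The Perceptron’s limit can be seen in a few lines of code. The sketch below is an illustrative reconstruction (the learning rate and epoch count are arbitrary choices, not from the original hardware): it trains Rosenblatt’s single-layer rule on AND and on XOR. AND converges to zero mistakes because a single straight line can separate its outputs; XOR never does, because no such line exists.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Rosenblatt's perceptron rule: adjust weights only on mistakes."""
    w = [0.0, 0.0]
    b = 0.0
    mistakes = 0
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), target in samples:
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - prediction
            if error:
                w[0] += lr * error * x1
                w[1] += lr * error * x2
                b += lr * error
                mistakes += 1
        if mistakes == 0:  # a separating line was found; learning is done
            break
    return mistakes  # mistakes made in the final training epoch

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
and_mistakes = train_perceptron(list(zip(inputs, [0, 0, 0, 1])))
xor_mistakes = train_perceptron(list(zip(inputs, [0, 1, 1, 0])))

print("AND:", and_mistakes)  # 0 - AND is linearly separable
print("XOR:", xor_mistakes)  # > 0 - no single line separates XOR
```

The later fix was structural rather than clever: stacking a second layer of these units lets the network bend the decision boundary, which is exactly what XOR requires.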

Paradoxes bring many problems of their own: undecidable statements, ambiguity, and uncertainty. How might we approach them?

Next: Poise


Poise

Earlier this fall, I attended a party which featured a robot petting zoo. Intrigued, I found myself surrounded by mechanical creatures: robotic arms gingerly picking up balloons, a snuggly robot pillow, a soft tube that could explore its way through mazes, a sentinel guarding the doorway. But two life-like robots had the ability to dance. And dance they did. This was the first time I had experienced the sudden, inexplicable fear and unease of interacting with technology - the so-called Uncanny Valley. I had no words to explain why the child-sized, shiny, human-ish piece of electronics had caused me to recoil so dramatically. It was involuntary, and decidedly memorable.

The uniqueness of human movement spans evolutionary milestones like walking upright and grasping tools, and cultural ones like dance. These actions have had a significant effect on human culture and customs. In robotics, replicating human movement has been a common pursuit throughout history, appearing in many societies and in works of fiction.


Grasping tools with opposable thumbs allowed us to invent new and better means to survive, and walking upright is a trait we have shared with our evolutionary cousins for about six million years. The ability to grasp is innate: babies are born with it, although it takes a few months of practice before they can reliably hold an object. Culturally, holding hands can signal closeness and affection between individuals, and shaking hands is a common business greeting in Western culture.


Walking on two feet instead of four was a major evolutionary change in our ancestral family tree. At about a year old, toddlers begin to take their first steps on two legs, joining the ranks of “upstanding” citizens, “standing tall” in the human world. More than a psychological metaphor, the act of standing has an important place in human culture: it can show respect or appreciation, as in a standing ovation, or be an act of defiance or insubordination, as in a political demonstration.

Transcendence through dance is one of the most storied parts of human culture, and it remains relevant to the present day. Pop icon Michael Jackson famously noted, “On many an occasion when I am dancing, I have felt touched by something sacred. In those moments, I felt my spirit soar and become one with everything that exists.” Human babies are also predisposed to dancing - to moving to the beat of music. But dance not only exists in many human cultures; it is also common among animals. What are the differences between these phenomena?


But we have a very strong adverse reaction to machines whose movement mimics ours. This is one facet of the uncanny valley, first discussed by Japanese professor Masahiro Mori, who described the challenges of designing robots with an increasingly human-like appearance. And yet attempts to replicate the human form and its movement have a deep history: some of the very first attempts at replicating humanity took the form of human-looking automatons.

Why is dancing such a common publicity stunt for robots, sometimes amusing us, but just as often unsettling us? Is there an evolutionary reason for this? Where do mysteries still remain?

Back to Intro

More coming soon!