by Henry Wilkin
figures by Rebecca Clements

The original dream of research in artificial intelligence was to understand what it is that makes us who we are. Because of this, artificial intelligence has always been close to cognitive science in spirit, even if the two fields have remained somewhat far apart in practice.
Functional AIs have tended to do best at quickly finding ‘good-enough’ approaches to problems that are easy to state but whose solutions are difficult or tedious to describe explicitly. A more modest definition of artificial intelligence might read as ‘computer programs that can learn how to perform tasks rather than require specific hardwired instructions.’ It turns out this encompasses a lot—think language processing in Amazon’s Alexa, or Google’s AlphaGo—and AI has recently even been able to produce art. At least until this point, the ‘art’ of computer science has been more in how the answers are reached than in what the answers turn out to be. As research in AI advances, it has become possible to glimpse parallels between certain features of AI and human cognitive functions, including in some cases a sort of primitive capacity to dream.

Most AIs that dream, however, have very limited control over what they can dream about. Currently, there seem to be essentially three ways in which computers dream. First, a computer can ‘dream’ by accident (these are sometimes called computer hallucinations). Second, a computer can dream through programs like Google’s DeepDream, which gives a window into the inner workings of an AI and which I’ll describe in more detail below. Third, a computer can dream through a process called experience replay or one of its offshoots. This process can improve the rate at which AIs learn and arguably bears the closest resemblance to actual dreaming. These different types of ‘computer dreams’ seem to arise naturally from balancing sensitivity to new experience against the robustness and usefulness of old memories.

To learn, an AI tries out several behaviors and chooses the one that seems to work best. The problem is, the AI can’t prove that the behavior it settles on is ‘best,’ or even that the behavior will always produce sensible answers. A ‘computer hallucination’ occurs when an AI gives a nonsensical answer to a reasonable question, or a seemingly reasonable answer to nonsense. For example, an AI that has learned to interpret speech accurately may also attribute meaning to gibberish. Training an AI is a bit like making a good map of the world: the map will inevitably be distorted and might even suggest the existence of sea monsters, but it can still be useful. Just as there are many possible maps of the world, each with its own advantages and disadvantages, there are often many possible ‘best’ behaviors of an AI, each with its own advantages and disadvantages. The behavior of most AIs that dream is determined by a kind of artificial neural network, which is essentially the AI’s brain.
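
To make the idea of a ‘computer hallucination’ concrete, here is a minimal sketch (my own toy illustration, not any particular system) of a classifier with made-up random weights. The point is simply that a network maps any input, even pure noise, to some answer, often with high confidence:

```python
import numpy as np

# Toy "classifier" with random weights standing in for a trained network.
# The network always produces an answer for any input it is given.
rng = np.random.default_rng(0)
n_inputs, n_classes = 100, 10
W = rng.normal(size=(n_classes, n_inputs))   # hypothetical learned weights

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

gibberish = rng.normal(size=n_inputs)        # pure noise, no real structure
probs = softmax(W @ gibberish)
print(f"class {probs.argmax()} with confidence {probs.max():.2f}")
# The network happily assigns a label (often with high confidence) to noise --
# a crude analogue of a 'computer hallucination'.
```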

Figure 1: A convolutional neural network used by MIT’s AI laboratory for scene recognition. The strongest outgoing (upward) connections are highlighted, starting from a single site on the lower right. (More images can be made here: http://people.csail.mit.edu/torralba/research/drawCNN/dra)

DeepDream as a window into neural networks

One challenge of using artificial neural networks is that it is nearly impossible to understand exactly what goes on inside such a network. To address this, researchers at Google devised a way to probe the inner workings of an artificial neural network, which they call DeepDream. DeepDream is most relevant for programs that recognize structure in images, often using a type of artificial neural network known as a deep convolutional neural network. The idea is to relieve the tension between what the AI is given as input and what it might ‘want’ to receive as input. That is, an image is distorted slightly toward one that would better match the AI’s original interpretation of the image. While this sounds innocent enough, it can lead to some pretty bizarre images. This is mainly because an artificial neural network can often function well enough without complete confidence in its own answers, or even without really knowing what it is looking for in a given image. Real images always look at least somewhat strange or ambiguous to an AI, and distorting the image to forcibly reduce that uncertainty from the AI’s point of view makes it look strange to us. The images produced by DeepDream are a way of probing the uncertainty or tension in an artificial neural network, which is otherwise hidden (especially when the network can only give binary yes or no answers).
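
A rough sketch of the DeepDream idea looks something like the following. This is not Google’s actual code; it uses a tiny untrained stand-in network and an arbitrary step size (in practice one would use a pretrained image classifier), but the loop is the essential trick: repeatedly nudge the image so that whatever the network already sees in it becomes stronger.

```python
import torch
import torch.nn as nn

# Small untrained stand-in for a deep convolutional network.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
)

image = torch.rand(1, 3, 64, 64, requires_grad=True)  # start from any image

for step in range(50):
    activations = model(image)
    # "Whatever you see there, show me more of it": make the image produce
    # stronger activations in the chosen layer.
    loss = activations.norm()
    loss.backward()
    with torch.no_grad():
        image += 0.01 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.clamp_(0, 1)                   # keep valid pixel values
    image.grad.zero_()
```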

Figure 2: DeepDream applied to a picture of the sky. Here, the neural network is trained to recognize locations, and is most familiar with furniture and buildings. It has never seen the sky before, and so initially it tries to make sense of it in terms of familiar shapes. DeepDream does the rest.

In a paper published in the journal Schizophrenia Research this past winter, Matcheri Keshavan of Harvard Medical School and Mukund Sudarshan of Cornell proposed a connection between Google’s DeepDream program and the hallucinations caused by psychedelic drugs or by conditions like schizophrenia. While DeepDream always creates strange images, the most interesting ones arise when the AI has made a mistake. For example, if an artificial neural network happens to mistake a cat’s ear for a butterfly wing, DeepDream will distort the original image and impose something that resembles a butterfly wing where the ear should be.

Keshavan and Sudarshan note a possible connection in the fact that all people carry an internal representation of their environment, and that this representation is distorted from its true state by an amount that depends on how well the regulatory components of the brain can counter bias, correct for random errors, and compensate for the limited granularity of memory. As a memory is repeatedly summoned, the level of distortion may either grow or stay fixed, depending on the amount of uncontrolled error during recall and on the brain’s ability to ‘connect the dots’ and regulate with context. Keshavan and Sudarshan suggest that this feedback mechanism, which could explain the disconnect between reality and hallucinations, can be modeled effectively by repeatedly applying programs similar to DeepDream to an image. The sequence of images obtained this way occasionally morphs into something totally different from the initial state, mimicking psychosis, or else converges onto a fixed representation close to the initial image. By varying the distortion caused by DeepDream, it may be possible to find ‘simple’ models for various kinds of psychosis.
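
As a toy illustration of the kind of feedback loop Keshavan and Sudarshan describe (this is my own simplified numerical model, not their code), one can repeatedly ‘recall’ a memory, add a little uncontrolled noise at each recall, and let the system pull the result toward whichever stored interpretation it currently resembles most. With small errors the memory stays near its original interpretation; with large errors it can drift onto a completely different one:

```python
import numpy as np

rng = np.random.default_rng(1)
prototypes = rng.normal(size=(5, 50))               # hypothetical stored interpretations
memory = prototypes[0] + 0.1 * rng.normal(size=50)  # starts near interpretation 0

def recall(mem, pull=0.3, noise=0.05):
    # Pull the recalled memory toward the nearest stored interpretation,
    # plus some uncontrolled error.
    nearest = prototypes[np.argmin(((prototypes - mem) ** 2).sum(axis=1))]
    return mem + pull * (nearest - mem) + noise * rng.normal(size=mem.shape)

for _ in range(100):
    memory = recall(memory)        # try noise=1.0 to watch it 'morph'

drift = np.linalg.norm(memory - prototypes[0])
print(f"distance from the original interpretation after 100 recalls: {drift:.2f}")
```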

Figure 3: (Cartoon of program used by Keshavan & Sudarshan 2017). An image is given to neural network 1 and recognized by the network as a ‘happy’ expression. The image is then distorted by DeepDream, and the resulting image is fed back into the neural network. After repeating this many times, the resulting expression is distorted but still resembles a happy face. When the same image is given to another neural network that is trained on different inputs, it may accidentally see what could be a sad face, perhaps because of background noise. DeepDream then distorts the image to strengthen the false impression of a sad face.

Learning faster with experience replay

Experience replay was introduced in 1991 by Long-Ji Lin, then a Ph.D. student at Carnegie Mellon. It is a way of helping an AI learn tasks in which meaningful feedback comes rarely or at a significant cost. The AI is programmed to reflect on sequences of past experiences in order to reinforce whatever significant impressions those events may make on its behavior. In its original form, experience replay can be viewed as an ‘unregulated’ policy of encouraging an AI to approach nearby ‘feasible’ solutions and reject poor behaviors more rapidly. The idea is that significant events will naturally reinforce each other and make large impressions on the network, while the impressions of individual incoherent events tend to cancel out. As long as the replay memory of the neural network is large enough, experiences of arbitrarily high significance can make appropriately large impressions on the state of the neural network.
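
In its simplest form, an experience-replay memory is just a buffer of past transitions that the AI samples from when it updates its network. The following sketch shows the general technique (not Lin’s original implementation); names like ReplayBuffer and update_network are illustrative:

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)   # oldest memories are forgotten first

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Replay a random mix of old and new experiences.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))

# Typical usage during training (names are illustrative):
#   buffer.add(s, a, r, s_next, done)   # after every step in the environment
#   batch = buffer.sample()             # then learn from replayed experiences
#   update_network(batch)
```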

Last year, researchers from Google’s DeepMind group developed an AI that uses a variant of experience replay to play the video game ‘Labyrinth.’ In the game, the player must traverse a maze in search of tasty food (apples and melons) while avoiding unpleasant food (in this case, lemons). To expedite learning, the AI is encouraged to recall events associated with immediate, large-magnitude rewards or punishments more often than relatively insignificant events. This helps ensure that important memories make appropriate impressions on the AI before they are forgotten, i.e., replaced by more ordinary events. The AI was also given additional goals that encouraged it both to explore its environment and to preferentially use more of the ‘neurons’ in its network. Combined with the modified form of experience replay, the performance of the AI improved significantly.
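
A simplified way to bias replay toward significant events, in the spirit of the prioritized replay described above (the DeepMind agent’s actual scheme is more involved), is to weight each stored memory by the size of its reward or punishment when sampling:

```python
import random
from collections import deque

class PrioritizedReplay:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, transition, reward):
        # Transitions tied to large rewards or punishments get higher priority.
        priority = abs(reward) + 0.01          # small constant so nothing is ignored
        self.buffer.append((priority, transition))

    def sample(self, batch_size=32):
        items = list(self.buffer)
        priorities = [p for p, _ in items]
        picks = random.choices(items, weights=priorities,
                               k=min(batch_size, len(items)))
        return [t for _, t in picks]
```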

Figure 4: A screenshot from ‘Labyrinth.’

There are many other variations of experience replay, each with its own way of combining multiple memories to help an AI learn effectively. The relationship between memory and dreams has been acknowledged for many years, but the precise role of memory in dreams is still an active area of research in psychology. Although most people can learn from consciously replaying memories while awake, a similar process may be happening naturally during sleep. In a paper published in Behavioral and Brain Sciences, Sue Llewellyn of the University of Manchester proposed that the surreal images of dreams may be an unconventional but efficient way of linking individual memories into something more meaningful. The idea is that our brain may recognize unusual associations between events that our conscious mind does not, much as creative and sometimes bizarre imagery can improve memory by linking items together with emotional or logical salience. Perhaps variants of artificial neural networks will provide pathways toward testing some of the current hypotheses about dreams.

Although the nature of dreams is a mystery and probably always will be, artificial intelligence may play an important role in unraveling parts of it.

Henry Wilkin is a 4th year physics student studying self-assembly.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Hindsight Experience Replay: https://arxiv.org/abs/1707.01495
Prioritized Experience Replay: https://arxiv.org/abs/1511.05952
DeepDream: https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html
Computer generated art: https://www.technologyreview.com/s/608195/machine-creativity-beats-some-modern-art/
Computer hallucinations: https://www.americanscientist.org/article/computer-vision-and-computer-hallucinations
Dreams and memory: https://www.psychologytoday.com/blog/dream-catcher/201505/new-evidence-dreams-and-memory
https://www.psychologytoday.com/blog/dream-catcher/201312/dreams-and-memory
http://www.sciencemag.org/news/2010/04/dreams-linked-better-memories
