Uli Grasemann and Risto Miikkulainen aren’t the first computer scientists to use neural network systems to model what might be going on inside a schizophrenic brain. They’ve had an advantage, however, that others have lacked. Their neural network system, DISCERN, can understand and produce natural language.
Working with Ralph Hoffman, a psychiatrist at Yale, Grasemann and Miikkulainen have also been able to pair their neural network results with a study of human schizophrenics, and the similarities have been striking.
“In other models of schizophrenia, there’s no direct link to the symptoms,” says Grasemann, a doctoral student in Miikkulainen’s lab. “That’s the big difference. We actually have language.”
As a result, when they found a way to model the excessive release of dopamine in the brain, they got a neural network that recalled memories in a distinctly schizophrenic-like fashion.
“The hypothesis is that dopamine encodes the importance—the salience—of experience,” says Grasemann. “When there’s too much dopamine, it leads to this exaggerated salience, and the brain ends up learning from things that it shouldn’t be learning from.”
According to this “hyperlearning” hypothesis, what happens to people suffering from schizophrenic psychosis is that their brains lose the ability to forget or ignore as much as they normally would. Without such forgetting, they lose the ability to extract what’s meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren’t real, or drowning in a sea of so many connections that they lose the ability to stitch together any kind of coherent story at all.
“The way I understand it,” says Grasemann, “is that, for instance, you eat lunch somewhere and you see 100 faces around you and suddenly they’re all intensely meaningful, and your brain is telling you to make sense of that. You see a face somewhere and it looks like a co-worker, and you say to yourself, ‘Wow, you know, that guy looks just like that other guy at work, so that guy at work must be following me.’”
In order to model this process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN’s memory in much the way the human brain stores information—not as distinct units, but as statistical relationships among words, sentences, scripts, and stories.
When asked to recall a specific memory, DISCERN doesn’t retrieve it, like a person getting a book from the library stacks. Instead, it assembles it from the traces and signs and scripts left behind, basing its assembly on the statistical relationships that experience has encoded.
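That reconstructive style of recall is easy to demonstrate in miniature. The sketch below is a classic Hopfield network in Python, a textbook model of content-addressable memory and only an analogy here, not DISCERN’s actual architecture: several patterns leave overlapping traces in one shared weight matrix, and a corrupted cue is iteratively cleaned up into the stored memory rather than looked up whole.

```python
import numpy as np

# Toy content-addressable memory (a classic Hopfield network). This is an
# illustration of reconstructive recall in general, NOT DISCERN's
# architecture: a memory is not fetched intact but rebuilt from traces
# spread across one shared weight matrix.

n = 16
p1 = np.array([1] * 8 + [-1] * 8)  # "memory" 1
p2 = np.tile([1, -1], 8)           # "memory" 2 (orthogonal to p1)

# Hebbian storage: each memory adds its trace to the same matrix.
W = (np.outer(p1, p1) + np.outer(p2, p2)) / n
np.fill_diagonal(W, 0)

def recall(cue, steps=5):
    """Reassemble a stored pattern from a noisy or partial cue."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1      # break ties deterministically
    return state

cue = p1.copy()
cue[[0, 3, 7]] *= -1                    # corrupt three bits of memory 1
assert np.array_equal(recall(cue), p1)  # the full memory is reassembled
```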
“With neural networks, you basically train them by showing them examples, over and over and over again,” says Grasemann. “Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned.”
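That loop looks something like the generic supervised-training sketch below (a minimal Python illustration, not DISCERN’s code): a small linear network is shown the same input/output pairs thousands of times, and each presentation nudges its weights slightly toward the desired output.

```python
import numpy as np

# A minimal sketch of train-by-repetition: generic supervised learning
# on input/output pairs, not DISCERN itself.

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 2))   # the mapping the network should learn
X = rng.normal(size=(8, 3))        # eight example inputs
Y = X @ W_true                     # the output each input should produce

W = np.zeros((3, 2))               # the network's adjustable weights
learning_rate = 0.05               # how big each adjustment is

for epoch in range(2000):          # "again and again, thousands of times"
    for x, y in zip(X, Y):
        prediction = x @ W                       # "if this is the input..."
        error = y - prediction                   # "...this should be your output"
        W += learning_rate * np.outer(x, error)  # "it adjusts a little bit"

print(np.abs(X @ W - Y).max())     # close to zero: the network has learned
```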
DISCERN learned its stories well; it could recall them with almost 100 percent accuracy.
In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but this time with one key parameter altered. They simulated an excessive release of dopamine by increasing the system’s learning rate—essentially telling it to stop forgetting so much.
“It’s an important mechanism to be able to ignore things,” says Grasemann. “What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia.”
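The intervention amounts to turning a single knob. The sketch below revisits the toy supervised learner from above, again an analogy rather than DISCERN itself: everything is identical except the learning rate, and past a certain point the same system can no longer settle on its “memories” at all. (Instability is just one generic way an overdriven learning rate breaks a learner; DISCERN’s abnormalities were richer, showing up in its language.)

```python
import numpy as np

# The hyperlearning intervention as a one-parameter change, using the
# same toy supervised learner as above (an analogy only, not DISCERN):
# everything is identical except the learning rate.

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 2))
X = rng.normal(size=(8, 3))
Y = X @ W_true                      # the "stories" the network must recall

def train(learning_rate, epochs=500):
    W = np.zeros((3, 2))
    for _ in range(epochs):
        for x, y in zip(X, Y):
            W += learning_rate * np.outer(x, y - x @ W)
    return np.abs(X @ W - Y).max()  # worst-case recall error

print(train(0.05))                  # modest rate: recall error near zero
with np.errstate(over="ignore", invalid="ignore"):  # divergence overflows
    print(train(0.80))              # cranked-up rate: the system never settles
```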
After being retrained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall (in one instance, DISCERN claimed responsibility for a terrorist bombing). When the higher learning rate was applied in a different way, DISCERN began showing evidence of “derailment”: replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions, and constant leaps from the first person to the third and back again.
“The idea is that information processing in neural networks tends to be like information processing in the human brain in many ways,” says Grasemann. “So the hope was that it would also break down in similar ways. And it did.”
The parallel between their modified neural network and human schizophrenia isn’t absolute proof, says Grasemann, that the hyperlearning hypothesis is correct. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.
“We have so much more control over neural networks than we could ever have over human subjects,” he says. “The hope is that this kind of modeling will help clinical research.”