Scientists Afflict Computers with Schizophrenia to Better Understand the Human Brain

May 5, 2011

AUSTIN, Texas — Computer networks that can't forget fast enough can show symptoms of a kind of virtual schizophrenia, giving further clues to the inner workings of schizophrenic brains, researchers at The University of Texas at Austin and Yale University have found.

The researchers used a virtual computer model, or "neural network," to simulate the excessive release of dopamine in the brain. They found that the network recalled memories in a distinctly schizophrenic-like fashion.

Their results were published in April in Biological Psychiatry.

"The hypothesis is that dopamine encodes the importance — the salience — of experience," says Uli Grasemann, a graduate student in the Department of Computer Science at The University of Texas at Austin. "When there's too much dopamine, it leads to exaggerated salience, and the brain ends up learning from things that it shouldn't be learning from."

The results bolster a hypothesis known in schizophrenia circles as the hyperlearning hypothesis, which posits that people suffering from schizophrenia have brains that lose the ability to forget or ignore as much as they normally would. Without forgetting, they lose the ability to extract what's meaningful out of the immensity of stimuli the brain encounters. They start making connections that aren't real, or drowning in a sea of so many connections they lose the ability to stitch together any kind of coherent story.

The neural network used by Grasemann and his adviser, Professor Risto Miikkulainen, is called DISCERN. Designed by Miikkulainen, DISCERN is able to learn natural language. In this study it was used to simulate what happens to language as the result of eight different types of neurological dysfunction. The results of the simulations were compared by Ralph Hoffman, professor of psychiatry at the Yale School of Medicine, to what he saw when studying human schizophrenics.

In order to model the process, Grasemann and Miikkulainen began by teaching a series of simple stories to DISCERN. The stories were assimilated into DISCERN's memory in much the way the human brain stores information: not as distinct units, but as statistical relationships among words, sentences, scripts and stories.

"With neural networks, you basically train them by showing them examples, over and over and over again," says Grasemann. "Every time you show it an example, you say, if this is the input, then this should be your output, and if this is the input, then that should be your output. You do it again and again thousands of times, and every time it adjusts a little bit more towards doing what you want. In the end, if you do it enough, the network has learned."

In order to model hyperlearning, Grasemann and Miikkulainen ran the system through its paces again, but with one key parameter altered. They simulated an excessive release of dopamine by increasing the system's learning rate — essentially telling it to stop forgetting so much.

"It's an important mechanism to be able to ignore things," says Grasemann. "What we found is that if you crank up the learning rate in DISCERN high enough, it produces language abnormalities that suggest schizophrenia."

After being re-trained with the elevated learning rate, DISCERN began putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing.

In another instance, DISCERN began showing evidence of "derailment" — replying to requests for a specific memory with a jumble of dissociated sentences, abrupt digressions and constant leaps from the first- to the third-person and back again.

"Information processing in neural networks tends to be like information processing in the human brain in many ways," says Grasemann. "So the hope was that it would also break down in similar ways. And it did."

The parallel between their modified neural network and human schizophrenia isn't absolute proof the hyperlearning hypothesis is correct, says Grasemann. It is, however, support for the hypothesis, and also evidence of how useful neural networks can be in understanding the human brain.

"We have so much more control over neural networks than we could ever have over human subjects," he says. "The hope is that this kind of modeling will help clinical research."

For more information, contact: Daniel Oppenheimer, Hogg Foundation, 512 745 3353.

14 Comments to "Scientists Afflict Computers with Schizophrenia to Better Understand the Human Brain"

1.  Fred Herman said on May 7, 2011

Hang on: How is a neural network that understands and communicates in natural language *not* a sentient entity? And if it is one, or has reached the point of being one, how can this sort of experimentation on it be ethical?

2.  Alan said on May 9, 2011

Possibly. Yet this article and the research start with and then incorporate many assumptions, then proceed to arrive at a conclusion that may really be inaccurate conjecture.
Did the test really "simulate an excessive release of dopamine by increasing the system's learning rate"?
Aren't the researchers placing their own biases on what is important for people to remember and what "should" be discarded?

Will the DSM of the future claim that recalling more than average and learning faster than most are precursor traits for a future brain "disorder"?
While the researchers may believe they have good intentions and that their machine model reflects the actual workings of a human mind, I remain skeptical.

Great quote from link provided above though: "We don't even know if our brain acts as a connectionist network at all, above the cellular level. Some cognitive scientists think it is, but others think that those guys are talking out of an orifice connected to their mouth, but not their mouth."

3.  Fluck said on May 9, 2011

Fred Herman: "How is a neural network...*not* a sentient entity?"

Fred, that's a good question, but there are a lot of significant differences between this and any sentient entity. Firstly, the program can ONLY detect one type of input: text. It cannot feel heat or cold, it can't see bright or dull, it can't touch or smell or taste. It can only read the words it's fed, and it can't get bored and want to read more, because it only 'lives' in the moments it's given a story to read. Additionally, any 'negative' modification that a sentient being would consider an injury or damage can be reversed easily by the creators: there would be no risk to it were it sentient.

Finally, and most importantly, it was created by people who would not have given it the capacity to 'feel emotion'. (Un)fortunately it will be quite a few years before any simulation can accurately model the subjectiveness of sentience and it will require a system with much more diverse ability to perceive its world than this.

4.  Olaf Tomalka said on May 10, 2011

The difference between this neural network and a human is that networks can only do as much as we allow them to: they can't go out of bounds and, for example, start singing. It can't ask itself "What am I?" for the same reason it can't create more neurons. Also, neural networks use weights on graph edges, while our brains' connections are only binary (flow of information or no flow at all).

5.  Decker said on May 11, 2011

To Fluck:

So you consider people with a sensory impairment not to be sentient?

Just kidding, I understand what you meant about it only 'living in the moment'. Interesting concept, but still doesn't necessarily disprove that the program had sentience.

The Oxford English Dictionary (2008) defines sentience as "able to perceive or feel things". If that definition is correct, then yes, the computer is sentient. That definition itself should be changed, of course, because with it a corneal scanner could be perceived as sentient - but that's an argument for another day.

If we approach sentience traditionally, and view it as the ability to perceive oneself (as humans and some apes do), then yes, the program is (or was?) sentient. It was even so aware of itself that DISCERN started "putting itself at the center of fantastical, delusional stories that incorporated elements from other stories it had been told to recall. In one answer, for instance, DISCERN claimed responsibility for a terrorist bombing." That it was able to involve itself in a story (and in fact, that it could create a unique story at all) shows that it had at least rudimentary sentience.

To Olaf Tomalka

Your point is valid, but it could be argued that this is only due to the supervision the program underwent. One could argue (if, of course, you leaned more toward the "nurture" side of the debate) that you could raise a person to never act spontaneously (or possibly even to never develop sentience? Despite the moral qualms, it would be interesting to see the results of a human child raised in sensory deprivation).

I will, of course, cede to you that the ways an organic neural network and an inorganic neural network work are different, and that that difference could pollute the validity of this experiment in its relation to the understanding of schizophrenia. But how long will it take to create a perfect artificial copy of a human brain, I wonder? I suppose we'll have to wait and see.

6.  frank said on May 11, 2011

The answer to your question is simple, Fred. There is no such thing as sentient entities.

7.  tlmorris91 said on May 11, 2011

This is absolutely fascinating! But as a nerd I have to point out that you have taken the first steps to building GLaDOS. I hope you're happy. You monsters. *clap.... clap.... clap* Oh good, it looks like my sarcastic slow clap generator is still working.

8.  Joseph Allen Kozuh, Ph.D. said on May 12, 2011

INFORMATION is a Non-Physical Reality ... and can range from a simple "YES, NO" to the complex multi-dimensional relationship between a Husband and Wife. INFORMATION is coded in physical bits on a computer; however, the physical coding of INFORMATION in the Brain is ... a complete mystery ... !!!!

9.  Leety said on May 15, 2011

Frank, totally agree. Glad someone is saying it.
Fred - I think it's a great, valid point to raise regardless of whether or not it applies to this specific example.

I think one of the most important frontiers in AI is going to be an ethical one, and I really hope it pans out that way although I fear it will not.

10.  David said on May 19, 2011

Leety, Frank, are you guys kidding? The whole point of using neural networks and computers to simulate this kind of thing is that we DON'T have to harm humans. We don't have to induce schizophrenia in a person to study its effects, because we have a simulation. I don't care about "Watson" getting his feelings hurt because he lost Jeopardy. We are on top of the food chain, and it's time we start acting like it. We can't expect science to treat AI as sentient beings, and no matter how "real" we make them, they never will be. So do us a favor and scratch the "Robotic Rights of America" campaign.

11.  Aris said on May 24, 2011

I would like to draw attention to a certain aspect, and also invite discussion on its margin.
The "schizophrenia" that was displayed by the neural net is a symptom, an effect of having ran the system through its paces again, right?
That, or an equivalent, is not usually why the set of symptoms are displayed by "schizophrenic" humans.
Therefore, isn't the relevancy of this experiment a bit overhyped?
If a robot is programmed to make loud noises as it exhales, the relevancy for studying bronchitis in humans is quite limited, no? Do you think this analogy wouldn't apply here?

12.  Mark Woodward said on Oct. 6, 2011

Fred, I think you said it best. There really are no sentient entities. Brilliant.

13.  emilruthann said on Oct. 13, 2011

To Alan: "Did the test really 'simulate an excessive release of dopamine by increasing the system's learning rate '?"
It seems to me that this is what they were trying to test. That is, since we know that excess dopamine is released in subjects who exhibit schizophrenic behavior, what variable can we change in the network that will produce this behavior?
Taking this as the essential question: if changing the "learning rate" variable resulted in a schizophrenic computer, then it is implied that dopamine has the same functionality in humans as this variable has in the program.
This implication then fits with other theories about schizophrenia, such as the possibility schizophrenics have trouble discerning important learned information from minutia.

(There is of course still the problem of defining what "schizophrenic behavior" is, as the data is qualitative in responses from humans or computers.)

14.  Ben said on May 8, 2012

The main thing I get out of this is that they recreated schizophrenia. Even though the computer randomized the information, it still did it without being directly told. They put something in and something different came out. It seems like this computer passed the Turing test. Whether this actually helps them with the human disorder or its effects is yet TBD. Going along with the sentience bit... even if a man dresses up like a woman and gets you to take him home, that doesn't make him a woman. This machine has the intelligence of a parakeet, which is roughly the same as a 3-year-old child... but that doesn't make this machine a 3-year-old child. Please don't compare the existence of competent organic material (humans) with a combination of minerals (rocks). You shouldn't compare yourself to a rock.