Talk - John Hale (Cornell): Entropy Reduction and Asian Languages
Thu, May 17, 2012 • 11:00 AM - 12:30 PM • PAR 101
This talk presents a particular conceptualization of human language understanding as information processing. From this viewpoint, understanding a sentence word-by-word is a kind of incomplete perception problem in which comprehenders become more certain over time about the linguistic structure of the utterance they are trying to understand. The Entropy Reduction hypothesis holds that the magnitude of these increases in certainty reflects psychological effort. This claim revives the application of information theory to psycholinguistics, which had languished since the 1950s. But in contrast to that earlier work, modern applications of information theory to language understanding use generative grammars to specify the relevant structures and their probabilities. This representation makes it possible to apply standard techniques from computational linguistics to work out weighted "expectations" about as-yet-unheard words. The talk exemplifies the general theory using examples from Korean and Chinese. The prenominal character of relative clauses in these languages is an important test case for any general cognitive theory of sentence processing.
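The core quantity behind the hypothesis can be illustrated with a toy sketch. Hale's actual models use probabilistic generative grammars; here, purely as an assumed simplification, the "grammar" is a small hand-picked distribution over complete sentences, and entropy reduction is computed as the (nonnegative) drop in Shannon entropy after hearing one more word:

```python
import math

# Toy "grammar": a distribution over complete sentences. This is an
# illustrative assumption; the talk's models derive such probabilities
# from generative grammars rather than listing sentences directly.
sentences = {
    "the dog barked": 0.4,
    "the dog slept": 0.3,
    "the cat slept": 0.2,
    "the cat purred": 0.1,
}

def entropy(dist):
    """Shannon entropy (in bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def conditioned(dist, prefix):
    """Renormalized distribution over sentences consistent with a heard prefix."""
    kept = {s: p for s, p in dist.items() if s.startswith(prefix)}
    total = sum(kept.values())
    return {s: p / total for s, p in kept.items()}

def entropy_reduction(dist, prefix_before, prefix_after):
    """Drop in uncertainty about the full sentence after one more word."""
    h_before = entropy(conditioned(dist, prefix_before))
    h_after = entropy(conditioned(dist, prefix_after))
    return max(0.0, h_before - h_after)

# Hearing "dog" after "the" rules out the cat-continuations,
# so uncertainty about the rest of the sentence falls.
print(entropy_reduction(sentences, "the", "the dog"))
```

On the hypothesis, a word that prunes many structural possibilities (a large entropy reduction) should be a word that costs comprehenders more effort.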
Bio: John Hale's research focuses on cognitive models of human language. He is particularly interested in combining ideas from AI, (computational) linguistics, and psychology to address questions about human sentence comprehension. He has been an Associate Professor at Cornell University since 2008. His PhD is from Johns Hopkins University (2003).