Language is a sea of uncertainty. Over the past decade, computational linguistics has been learning to navigate this sea by means of probabilities. Given a newspaper sentence that has hundreds of possible parses, for example, recent systems have set their course for the most probable parse -- as defined by a probabilistic ("soft") grammar.
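The idea of steering toward the most probable parse can be sketched in a few lines. This is a toy illustration, not the speaker's system: the rule probabilities and parses below are hypothetical, and each parse is scored as the product of the probabilities of the grammar rules it uses.

```python
# A minimal sketch of scoring candidate parses under a toy probabilistic
# ("soft") grammar: a parse's log-probability is the sum of the log
# probabilities of its rules, and we select the argmax.

from math import log

# Hypothetical rule probabilities for a tiny PCFG (made up for illustration).
rule_logprob = {
    ("S", ("NP", "VP")): log(1.0),
    ("NP", ("Det", "N")): log(0.6),
    ("NP", ("N",)): log(0.4),
    ("VP", ("V", "NP")): log(0.7),
    ("VP", ("V",)): log(0.3),
}

def parse_score(rules):
    """Log-probability of a parse, represented as the list of rules it uses."""
    return sum(rule_logprob[r] for r in rules)

# Two candidate parses of the same sentence, as bags of rules.
parse_a = [("S", ("NP", "VP")), ("NP", ("Det", "N")),
           ("VP", ("V", "NP")), ("NP", ("N",))]
parse_b = [("S", ("NP", "VP")), ("NP", ("Det", "N")),
           ("VP", ("V",))]

best = max([parse_a, parse_b], key=parse_score)
```

A real parser searches the full space of parses with dynamic programming rather than enumerating candidates, but the selection criterion is the same.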
But how could one LEARN such a grammar? This is a higher-level navigation problem -- through the sea of possible soft grammars. I will present a clean Bayesian probability model that steers toward the most probable grammar. It is guided by (1) a prior belief that natural-language grammars tend toward internal consistency, and (2) some evidence in the form of sample parses from the language in question.
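The Bayesian recipe just described can be sketched as follows. The numbers and grammar names here are hypothetical, not the speaker's actual model: each candidate soft grammar is scored by its log prior (which rewards internal consistency) plus the log likelihood of the observed sample parses, and the maximum a posteriori (MAP) grammar wins.

```python
# A minimal sketch of MAP grammar selection: posterior score is
# log P(grammar) + log P(sample parses | grammar), maximized over candidates.

from math import log

# Hypothetical candidates: grammar -> (prior, likelihood of sample parses).
candidates = {
    "grammar_consistent": (0.7, 0.010),  # prior favors internal consistency
    "grammar_irregular":  (0.3, 0.012),  # fits the evidence slightly better
}

def posterior_score(name):
    prior, likelihood = candidates[name]
    return log(prior) + log(likelihood)

map_grammar = max(candidates, key=posterior_score)
```

Note how the consistency-favoring prior can outweigh a small likelihood advantage, which is exactly the tension between items (1) and (2) above.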
Optimizing this model naturally discovers common syntactic transformations that operate in the target language. The model can use these transformations to generalize, and therefore needs only half as much training data to match the performance of the best previously published methods.