Friday Jun 8th   15:30   Greenberg Room

Carla Hudson Kam

UC Berkeley

Getting it right by getting it wrong: Why learners change languages and what learning failures can tell us about the mechanisms involved in acquisition

One of the most notable features of first language acquisition is the almost universal success that children have; children worldwide seem to easily acquire a language that looks very much like the language spoken by the people who provided their input. Occasionally, however, this is not the case. Instead, learners with completely normal cognitive capacities change the language as they learn it. These "failures to learn" are in apparent conflict with a great deal of work showing the incredible sensitivity of human learners to the statistics present in the input, and as such, are often pointed to as evidence supporting the operation of innate, domain-specific learning mechanisms. In my work I focus on a particular kind of change that is often seen when children learn from non-native speakers — regularization — and ask about the nature of the mechanisms that might be responsible for such changes. I will present results from a series of artificial language experiments showing that children and adults appear to differ as to what they learn and what they regularize: Data from a production task show that adults are relatively good at learning variation present in their input, whereas children are very likely to regularize probabilistic variation, suggesting that children are doing something different from adults when learning from input containing inconsistent patterns. However, data from judgment tasks show that children (and adults who regularize) are sensitive to the underlying probabilities, despite the fact that their productions do not reflect the statistics. But why, then, are their productions regular? I propose that regularization actually emerges from the processes involved in language production, rather than directly reflecting learning differences. Thus, contrary to appearances, regularization may reflect the operation of a learning mechanism that is sensitive to statistics, in interaction with a domain-specific production system that is less so.