ILS Colloquium

Agenda

18 March 2021
15:30 - 17:00
Online in Microsoft Teams

Steven Piantadosi
University of California, Berkeley

One model for the learning of language

Join MS Teams meeting here

A major target of linguistics and cognitive science is to understand what class of learning systems can acquire the key structures of natural language. Until recently, the computational requirements of language have been used to argue that learning is impossible without a highly constrained hypothesis space. Here, we describe a learning system that is maximally unconstrained, operating over the space of all computations, and is able to acquire many of the key structures present in natural language from positive evidence alone. We demonstrate this by providing the same learning model with data from 70 distinct formal languages that have been argued to capture key features of language, have been studied in experimental work, or come from an interesting complexity class. The model successfully induces the latent system generating the observed strings from positive evidence in all cases, including regular, context-free, and context-sensitive formal languages, as well as languages studied in artificial language learning experiments. These results show that relatively small amounts of positive evidence can support learning of rich classes of generative computations over structures.
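
To make the abstract's core idea concrete, the Python sketch below is a minimal, hypothetical illustration of induction from positive evidence: a few candidate generative hypotheses are scored by a simplicity prior and a likelihood over the observed strings, so the hypothesis that generates the data compactly comes out on top. The hypothesis space, prior, and scoring here are toy placeholders for exposition and are not the model presented in the talk.

import math

# Illustrative sketch only (not the model from the talk): a toy Bayesian
# learner that, like the approach described in the abstract, scores candidate
# generative hypotheses against positive strings alone.

def lang_anbn(max_n):
    """The context-free language {a^n b^n}."""
    return {"a" * n + "b" * n for n in range(1, max_n + 1)}

def lang_ab_star(max_n):
    """The regular language {(ab)^n}."""
    return {"ab" * n for n in range(1, max_n + 1)}

def lang_all(max_n):
    """All nonempty strings over {a, b} up to length max_n."""
    strings, frontier = set(), {""}
    for _ in range(max_n):
        frontier = {s + c for s in frontier for c in "ab"}
        strings |= frontier
    return strings

# hypothesis name -> (set of generated strings, description length in "rules")
HYPOTHESES = {
    "a^n b^n": (lang_anbn(10), 2),
    "(ab)^n":  (lang_ab_star(10), 2),
    "{a,b}+":  (lang_all(10), 1),
}

def log_posterior(strings, size, data):
    """Simplicity prior plus likelihood of the positive evidence."""
    log_prior = -size * math.log(2)           # shorter descriptions preferred
    log_like = 0.0
    for s in data:
        if s not in strings:
            return float("-inf")              # hypothesis cannot generate s
        log_like -= math.log(len(strings))    # uniform over generated strings
    return log_prior + log_like

data = ["ab", "aabb", "aaabbb"]               # positive evidence only
for name, (strings, size) in HYPOTHESES.items():
    print(f"{name:10s} {log_posterior(strings, size, data):.2f}")
# {a^n b^n} scores highest: it generates all the data compactly, while the
# overly broad {a,b}+ is penalized by the likelihood and (ab)^n is ruled out.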