Presenter:

Tejas Kulkarni, Google DeepMind [PUBS], [SLIDES], [VIDEOS]

Title: Computational Models of Human-like Perception and Control

Abstract: Humans distill their experiences into abstractions such as objects, relations, agents, numbers, and geometry. These building blocks later become key to solving other sensorimotor problems with better combinatorial generalization and sample efficiency. In this class, we will discuss algorithms that perceive and act using such abstractions. In particular, we will discuss models that integrate neural networks and program synthesis as the representational structures for acquiring these abstractions. In the case of perception, this amounts to systems that learn about objects and 3D geometry to explain visual scenes with program-like representations. In the case of control, it amounts to constructing and exploring temporal abstractions in the space of objects and relations, leading to better sample efficiency. Finally, we will discuss how these building blocks could fit into a broader agent architecture for higher-order cognitive functions.
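The hierarchical control idea in Kulkarni et al. [2] — a meta-controller that sets subgoals, and a controller that is rewarded intrinsically for reaching them — can be sketched with tabular Q-learning on a toy task. The chain environment, goal set, and hyperparameters below are illustrative choices for this sketch, not details from the paper:

```python
import random
from collections import defaultdict

random.seed(0)

# Toy chain environment: states 0..N-1, actions {-1, +1}.
# Extrinsic reward is sparse: the agent must visit state N-1
# and then return to state 0, in the spirit of the h-DQN setting.
N = 7

def step(state, action):
    return max(0, min(N - 1, state + action))

GOALS = [0, N - 1]           # candidate subgoals for the meta-controller
ACTIONS = [-1, 1]
q_meta = defaultdict(float)  # Q(state, goal)
q_ctrl = defaultdict(float)  # Q((state, goal), action)
EPS, ALPHA, GAMMA = 0.1, 0.5, 0.9

def eps_greedy(q, keys):
    """Pick a random key with prob EPS, else the argmax under q."""
    if random.random() < EPS:
        return random.choice(keys)
    return max(keys, key=lambda k: q[k])

def run_episode():
    state, visited_top, extrinsic = 2, False, 0.0
    for _ in range(4):                           # a few meta-decisions per episode
        goal = eps_greedy(q_meta, [(state, g) for g in GOALS])[1]
        start, meta_r = state, 0.0
        for _ in range(2 * N):                   # controller acts until goal or timeout
            a = eps_greedy(q_ctrl, [((state, goal), b) for b in ACTIONS])[1]
            nxt = step(state, a)
            intrinsic = 1.0 if nxt == goal else 0.0   # internal critic: subgoal reached?
            if nxt == N - 1:
                visited_top = True
            r_ext = 10.0 if (visited_top and nxt == 0) else 0.0
            meta_r += r_ext
            extrinsic += r_ext
            # Controller learns from the intrinsic reward only.
            best = max(q_ctrl[((nxt, goal), b)] for b in ACTIONS)
            q_ctrl[((state, goal), a)] += ALPHA * (
                intrinsic + GAMMA * best - q_ctrl[((state, goal), a)])
            state = nxt
            if intrinsic:
                break
        # Meta-controller learns from the accumulated extrinsic reward.
        best_meta = max(q_meta[(state, g)] for g in GOALS)
        q_meta[(start, goal)] += ALPHA * (
            meta_r + GAMMA * best_meta - q_meta[(start, goal)])
    return extrinsic

returns = [run_episode() for _ in range(500)]
print(sum(returns[-100:]) / 100)  # average extrinsic return late in training
```

The two-level structure is the point: the controller's Q-function is conditioned on the current subgoal and trained on intrinsic reward alone, while the meta-controller is trained on extrinsic reward at the slower timescale of subgoal completions.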

Readings:

Primary: Spelke and Kinzler [4] (PDF)

Secondary: Ganin et al. [1] (PDF), Kulkarni et al. [2] (PDF), Lake et al. [3] (PDF)

References:

[1]   Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, S. M. Ali Eslami, and Oriol Vinyals. Synthesizing programs for images using reinforced adversarial learning. CoRR, arXiv:1804.01118, 2018.

[2]   Tejas D. Kulkarni, Karthik Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. Hierarchical deep reinforcement learning: Integrating temporal abstraction and intrinsic motivation. CoRR, arXiv:1604.06057, 2016.

[3]   Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. Building machines that learn and think like people. Behavioral and Brain Sciences, 40:e253, 2017.

[4]   E. S. Spelke and K. D. Kinzler. Core knowledge. Developmental Science, 10:89-96, 2007.