people
director
Jay McClelland (website)
Over the years, my research has addressed a broad range of cognitive neuroscience issues in learning, memory, language and cognitive development. I view cognitive functions as emerging from the parallel, distributed processing activity of neural populations, with learning occurring through the adaptation of connections among participating neurons, as discussed in Parallel Distributed Processing (Rumelhart, McClelland, and the PDP Research Group, 1986). Research in the lab revolves around efforts to develop explicit computational models based on these ideas; to test, refine, and extend the principles embodied in the models; and then to apply the models to substantive research questions through behavioral experiment, computer simulation, functional brain imaging, and mathematical analysis.
Recently I have begun to focus on mathematical cognition, and the lab is actively recruiting students and collaborators on this topic. Please see the statement about this on my website.
visiting scholar
Stephen (Steve) K. Reed (website) Visiting Scholar, Center for the Study of Language and Information
I am interested in creating taxonomies and ontologies to organize research and theory in cognitive psychology. I am also interested in mathematical cognition, particularly problem solving and visual reasoning.
post-doctoral fellows
Frank J. Kanayet
I am interested in understanding how ideas are formed and transformed by the brain, and how the neural representations of those ideas are influenced by contexts and goals. My work at the PDP lab focuses on using multivoxel pattern analyses of fMRI data to examine the similarity structure of mathematical and geometrical concepts such as fractions or congruence.
Kanayet, F. J., Opfer, J. E. and Cunningham, W. A. (2014). The value of numbers in economic rewards. Psychological Science, 25(8), 1534-1545.
students
Arianna X. Yuan
I’m interested in a broad range of topics in mathematical cognition, including the representation of numerosity, the comprehension of mathematical proof, the differences between mathematical prodigies and ordinary children, and the mechanisms underlying impaired mathematical skills. My current research focuses on the representation of negative numbers. By combining neural network models with experimental data, I’m trying to explain the mixed findings of previous studies and gain a deeper understanding of how people represent and manipulate negative numbers.
Andrew Saxe (website)
I'm interested in the theory of deep learning and its applications to phenomena in neuroscience and psychology. I study the impact of the brain's deep, layered structure on phenomena like human semantic development, visual perceptual learning, and experience-dependent plasticity in primary sensory cortices. While complex "black-box" deep learning methods have recently had widespread success in engineering disciplines, I focus on minimal models that permit exact analytic solutions. Surprisingly, these simple models accurately capture many aspects of their more complicated brethren, replacing the problematic black box with conceptual clarity. They also provide hard qualitative and quantitative predictions suitable for the empirical brain sciences. (A toy sketch of such a minimal model appears after the references below.)
Saxe, A. M., McClelland, J. L., & Ganguli, S. (2014). Exact solutions to the nonlinear dynamics of learning in deep linear neural networks. In Y. Bengio & Y. LeCun (Eds.), International Conference on Learning Representations. Banff, Canada. pdf
Saxe, A.M., McClelland, J.L., and Ganguli, S. (2013). Learning hierarchical category structure in deep networks. In M. Knauff, M. Pauen, N. Sebanz, & I. Wachsmuth (Eds.), Proceedings of the 35th Annual Meeting of the Cognitive Science Society (pp. 1271-1276). Austin, TX: Cognitive Science Society. pdf
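For readers who haven't met the term, the toy sketch below shows what a "deep linear network" is: two weight matrices composed with no intervening nonlinearity, trained by full-batch gradient descent. The dataset, layer sizes, and learning rate are all illustrative assumptions, not code or data from the papers above.

# Toy "deep linear network": y = W2 @ W1 @ x, trained by gradient descent.
# All choices below (data, sizes, learning rate) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

X = np.eye(4)                                  # four items as one-hot inputs
Y = np.array([[1, 1, 0, 0],                    # made-up item-by-feature matrix
              [1, 1, 0, 1],
              [1, 0, 1, 0],
              [1, 0, 1, 1]], dtype=float).T    # features x items

W1 = rng.normal(scale=1e-3, size=(8, 4))       # input-to-hidden weights (small random init)
W2 = rng.normal(scale=1e-3, size=(4, 8))       # hidden-to-output weights
lr = 0.1

for step in range(2001):
    Yhat = W2 @ W1 @ X                         # forward pass is purely linear
    err = Yhat - Y
    gW2 = err @ (W1 @ X).T                     # gradients of 0.5 * ||Yhat - Y||^2
    gW1 = W2.T @ err @ X.T
    W2 -= lr * gW2
    W1 -= lr * gW1
    if step % 500 == 0:
        print(step, 0.5 * np.sum(err ** 2))    # squared-error loss falls over training

Although the overall map is just the linear product W2 @ W1, gradient descent from small random weights acquires the strongest input-output structure in the data earliest; the papers above derive exact analytic descriptions of such learning trajectories.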
Rachel Lee (website)
I am an undergraduate expecting my B.S. with Honors in Mathematical and Computational Science (Applied Math) in 2015. My main long-term work focuses on modeling perceptual learning with deep networks.
Lee, R., Saxe, A., & McClelland, J. L. (2014, July). Modeling perceptual learning with deep networks. Poster at CogSci 2014, Quebec City, Canada. pdf
Steven Hansen
Kevin Mickey (website)
Cameron McKenzie
alumni
Will Zou
I obtained my B.A. and M.Eng. from the University of Cambridge and defended my Ph.D. thesis at Stanford. My research has focused on deep learning with applications to computer vision and cognitive science. Specifically, I have worked on unsupervised learning of representations for action recognition, and on object detection with deep convolutional networks. Thematically, I look at facilitating "invariance" by means of weight-tying, unsupervised learning, and temporal coherence. Most recently, I have worked on unsupervised learning of number representations to understand the cognitive processes underlying the Approximate Number Sense.
Will Y. Zou, Xiaoyu Wang, Miao Sun, Yuanqing Lin (2014). Generic Object Detection with Dense Neural Patterns and Regionlets. British Machine Vision Conference (BMVC).
Will Y. Zou, Kai Yu, Shenghuo Zhu, Andrew Y. Ng (2012). Deep Learning of Invariant Features via Simulated Fixations in Video. Neural Information Processing Systems (NIPS).
Pavlos Kollias (website)
I'm interested in computational models of high-level cognitive functions and their neural basis. My work in the lab involved the development of a neural network model of verbal analogical reasoning. The model captured the development and deterioration of A:B::C:D analogical reasoning in young children and FTLD patients, respectively. My current work extends this approach into a biologically inspired architecture. I am also interested in the formation of efficient cortical representations. Specifically, I am interested in how mutual-information structure in task sets can lead to efficient coding in the prefrontal cortex.
Kollias, P. and McClelland, J. (2013). Context, cortex, and associations: a connectionist developmental approach to verbal analogies. Frontiers in Psychology, 4:857.
Cynthia Henderson
I'm interested in models and experimental approaches related to the visual perception of multiple objects. The model I've been developing with Dr. McClelland explores potential complementary contributions of the dorsal and ventral visual pathways to perception when more than one object is present.
Juan Gao
Ph.D., Mechanical and Aerospace Engineering, Princeton University, 2007. As a graduate student, Juan Gao studied a wide range of topics in dynamics and their applications in neuroscience. Her current interests focus on the dynamics of decision-making, especially the effects of reward conditions, viewing time, and previous trial sequences. Together with Prof. McClelland, Juan is responsible for the theoretical and computational aspects of the decision-making research in the lab.
J. Gao, K. Wong, P. Holmes and J. D. Cohen. Integrated models for sequential effects in two-choice forced-choice tasks and serial reaction-time tasks (under review).
J. Gao and P. Holmes. On Dynamics of Electrically-coupled Neurons with Inhibitory Synapses. J. Comp. Neurosci., 22 (1):39-61, 2007.
Katia Dilkina
I am interested in conceptual (aka semantic) knowledge. In my view, semantic representations are influenced not only by what we sense and interact with in our experience but also by language. My work with Jay McClelland has focused on developing neural network models of the deficits seen in semantic dementia, as well as on cross-linguistic behavioral data from normal individuals. Our approach emphasizes learning/experience and individual differences as major factors in explaining the variability among individuals and between groups.
Dilkina, K., McClelland, J. L., & Plaut, D. C. (2008). A single-system account of semantic and lexical deficits in five semantic dementia patients. Cognitive Neuropsychology, 25(2), 136-164.
Dilkina, K., McClelland, J. L., & Boroditsky, L. (2007). How language affects thought in a connectionist model. 29th Annual Meeting of the Cognitive Science Society, 215-220.
Jeremy Glick
I'm interested in statistical learning and computational models. Right now, I'm working on behavioral experiments in word segmentation, trying to distinguish between some competing models. I'm also interested in connectionist and probabilistic models more broadly, and I have a few side projects applying these to other domains, including memory for counterintuitive concepts.
Brenden Lake
I’m expecting my B.S. and M.S. in Symbolic Systems in 2009. My research in the lab has focused on perceptual category learning. For my Master's thesis, I am investigating how people learn categories from a combination of labeled and unlabeled stimuli.
Lake, B. M., Vallabha, G. K., and McClelland, J. L. (2008). Modeling unsupervised perceptual category learning. In Proceedings of the 7th International Conference on Development and Learning.
Lake, B. M. and Cottrell, G. W. (2005). Age of acquisition in facial identification: A connectionist approach. In Proceedings of the 27th Annual Cognitive Science Conference. Mahwah, NJ: Lawrence Erlbaum.
Sharareh Noorbaloochi
Sharareh holds B.Sc. and M.Sc. degrees from the Electrical Engineering (EE) department of the University of Minnesota. In 2007, she joined Prof. McClelland's lab, where she works on understanding the neural basis of decision making. She is currently a graduate student in the EE department at Stanford University.
Sharareh Noorbaloochi, Jose F. Barbe, Ahmed H. Tewfik: Probabilistic Modeling of Multi-level Genetic Regulatory Logic, Workshop on Genomic Signal Processing and Statistics (GENSIPS), 2006.
Anna Schapiro (website)
I did research as an undergrad in the PDP lab and am now a graduate student at Princeton. I study how object representations change as a function of recently experienced semantics and temporal statistics, during both waking and sleep. I use a combination of fMRI, behavioral experiments, and PDP modeling.
Dahlia Sharon
Ph.D., Neurobiology, Weizmann Institute of Science. M.Sc., Physiology - Faculty of Medicine, Tel Aviv University. B.Sc., Biology - Faculty of Life Sciences, Tel Aviv University. I’m interested in visual perception and in the neural population dynamics that give rise to it. I have studied spatiotemporal neural response patterns in both cats, using voltage-sensitive dye imaging, and humans, using combined MEG/EEG/MRI. My current work in humans focuses on feature-based attention and perceptual decision-making.
Sharon, D. & Grinvald, A. (2002). Dynamics and Constancy in Cortical Spatiotemporal Patterns of Orientation Processing. Science 295: 512-515.
Sharon D, Hämäläinen MS, Halgren E, Tootell RBH, Belliveau JW (2007). The advantage of combining MEG and EEG: Comparison to fMRI in focally-stimulated visual cortex. NeuroImage 36 (4): 1225-1235.
Daniel Sternberg
I'm interested in how people make predictions and inferences about the causes and outcomes of events, and how to ground this type of reasoning in the framework of domain-general learning mechanisms. Currently, I am focusing on a series of experiments that investigate how causal framing and the learning task (e.g., prediction vs. reaction) affect participants' inferences about simple cue-outcome relationships. I am also interested in how to relate this work to computational models of learning and memory.
Sternberg, D.A., & McClelland, J.L. (2009). When should we expect indirect effects in human contingency learning? In N.A. Taatgen & H. van Rijn (Eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society, 206-211.
Sternberg, D.A., & McClelland, J.L. (2009). How do we get from propositions to behavior? Commentary on a target article by Mitchell, De Houwer and Lovibond. Behavioral and Brain Sciences, 32, 226-227.