Work in Yale’s Cognitive and Neural Computation Lab bridges neuroscience and AI to reveal how the brain flexibly interprets complex environments, with high-performance computing powering its large-scale modeling.
A new study from the lab reveals how the primate brain transforms flat, two-dimensional images into rich, three-dimensional mental models, a finding with implications for both neuroscience and artificial intelligence. Led by Ilker Yildirim and Ph.D. candidate Hakan Yilmaz, the team developed a computational model called the Body Inference Network (BIN), which mimics how the brain interprets visual input to infer 3D structure.
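The article does not include implementation details, but the core idea of a network that maps a 2D image to an explicit 3D description can be sketched. The PyTorch snippet below is a minimal illustration under stated assumptions, not the published BIN architecture: the layer sizes, the 17-joint output, and the name InverseGraphicsNet are all hypothetical.

```python
# A minimal sketch of the inverse-graphics idea: a feedforward network
# that maps a 2D image to an explicit 3D scene description (here, 3D
# joint coordinates of a body). All sizes and names are illustrative,
# not the published BIN architecture.
import torch
import torch.nn as nn

N_JOINTS = 17  # hypothetical number of joints in the 3D body model


class InverseGraphicsNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Successive stages, loosely analogous to stages of visual processing
        self.stages = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Readout: predict (x, y, z) for each joint of the 3D body model
        self.readout = nn.Linear(32, N_JOINTS * 3)

    def forward(self, image):
        features = self.stages(image)
        return self.readout(features).view(-1, N_JOINTS, 3)


net = InverseGraphicsNet()
image = torch.randn(1, 3, 64, 64)  # dummy 2D input image
joints_3d = net(image)             # inferred 3D structure
print(joints_3d.shape)             # torch.Size([1, 17, 3])
```

Training such a sketch on labeled data would minimize the distance between predicted and ground-truth 3D joints, e.g. nn.MSELoss()(joints_3d, target_joints).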
The researchers trained BIN to reconstruct 3D representations of human and monkey bodies from labeled 2D images. When the team compared BIN’s internal stages with neural activity recorded in macaques, the model’s successive processing stages closely mirrored responses in brain regions responsible for body-shape recognition. This alignment offers compelling evidence that biological and artificial systems share a computational strategy, one the researchers describe as “inverse graphics”: inverting the process by which 3D scenes produce 2D images in order to recover scene structure from an image.
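Model-to-brain comparisons of this kind are commonly made with representational similarity analysis (RSA); the article does not specify the team’s method, so the sketch below shows a generic RSA computation under that assumption, with random arrays standing in for real model activations and neural recordings.

```python
# Generic representational similarity analysis (RSA): build a
# representational dissimilarity matrix (RDM) for a model layer and
# for a neural population, then correlate the two. The data here are
# random placeholders, not real activations or recordings.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40  # e.g. 40 body images shown to both model and monkey

layer_acts = rng.standard_normal((n_stimuli, 512))   # model layer activations
neural_resp = rng.standard_normal((n_stimuli, 120))  # responses per neuron

# Condensed RDMs: pairwise correlation distance between stimulus patterns
model_rdm = pdist(layer_acts, metric="correlation")
brain_rdm = pdist(neural_resp, metric="correlation")

# Spearman correlation between the two RDMs measures how closely the
# model stage mirrors the neural representation.
rho, pval = spearmanr(model_rdm, brain_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f} (p={pval:.3f})")
```

Repeating this comparison for each model stage against each recorded brain region yields the kind of stage-by-stage alignment the study reports.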
This research was powered by Yale high-performance computing resources, housed at the Massachusetts Green High Performance Computing Center (MGHPCC). The computational demands of training and validating BIN, as well as analyzing neural data, required scalable infrastructure and advanced modeling capabilities. The study exemplifies how research computing accelerates discovery at the intersection of cognitive science, machine learning, and neuroscience.