AI pareidolia: Can machines spot faces in inanimate objects?

October 2, 2024

New dataset of “illusory” faces reveals differences between human and algorithmic face detection, links to animal face recognition, and a formula predicting where people most often perceive faces.

The MIT SuperCloud and the Lincoln Laboratory Supercomputing Center provided the HPC resources for this research.

Read this story at MIT News

Story image: The “Faces in Things” dataset is a comprehensive, human-labeled collection of over 5,000 pareidolic images. The research team trained face-detection algorithms to see faces in these pictures, offering insight into how humans came to recognize faces in their surroundings.
Credits: Image: Alex Shipps/MIT CSAIL

Related Publication:

Mark Hamilton, Simon Stent, Vasha DuTell, Anne Harrington, Jennifer Corbett, Ruth Rosenholtz, and William T. Freeman (2024), Seeing Faces in Things: A Model and Dataset for Pareidolia, arXiv:2409.16143 [cs.CV]
