Quantifying How We Intuit the Physical World

Aided by Moore-Sloan funding, researchers reveal insights about how humans perceive mass and force

How do we perceive properties of objects in the physical world? What strategies help us determine if a suitcase is too heavy to carry or how hard we should hit a tennis ball? While such physical intuition can’t easily be quantified in the real world, it can be modeled and measured in computer-simulated environments. In new research, Neil R. Bramley, Moore-Sloan Postdoctoral Researcher; Tobias Gerstenberg and Joshua B. Tenenbaum of MIT; and Todd M. Gureckis, CDS Affiliated Associate Professor of Psychology (Cognition & Perception) at NYU, tackle these fundamental questions about how humans interact with the physical world.

To determine whether people’s actions effectively reduce uncertainty about objects in their environment, and to categorize those actions, the researchers used the JavaScript Box2D physics engine to build 2D digital microworlds that reflected latent properties of the physical world. The microworlds resembled billiards or air-hockey tables: bounded, continuously dynamic, two-dimensional settings. Each microworld had varying surface friction and a varying global (gravity-like) force, and each contained four pucks: two that acted on each other with varying local (magnet-like) forces, and two that served as distractors.
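
The study itself used the JavaScript Box2D engine; the stripped-down TypeScript sketch below only illustrates the ingredients described above (a bounded table, surface friction, a global gravity-like force, and a local magnet-like force between two of the four pucks). All names, parameter values, and the integration scheme are illustrative assumptions, not the researchers’ actual configuration.

```typescript
// Illustrative sketch: a drastically simplified stand-in for the Box2D microworlds.
// Every constant and name here is an assumption for illustration only.

interface Puck {
  name: string;
  mass: number;            // latent property participants had to infer
  x: number; y: number;    // position
  vx: number; vy: number;  // velocity
}

const WIDTH = 600, HEIGHT = 400;          // bounded, billiards-like table
const FRICTION = 0.05;                    // surface friction coefficient (assumed)
const GLOBAL_FORCE = { fx: 0, fy: 0.2 };  // gravity-like force on every puck (assumed)
const LOCAL_FORCE = 3.0;                  // magnet-like force between A and B (assumed; negative would repel)
const DT = 1 / 60;                        // simulation time step

const pucks: Puck[] = [
  { name: "A", mass: 1, x: 150, y: 200, vx: 2, vy: 0 },
  { name: "B", mass: 2, x: 450, y: 200, vx: -1, vy: 0 },
  { name: "distractor1", mass: 1, x: 100, y: 100, vx: 0, vy: 1 },
  { name: "distractor2", mass: 1, x: 500, y: 300, vx: 1, vy: -1 },
];

function step(pucks: Puck[]): void {
  const [a, b] = pucks; // only A and B exert the local force on each other
  const dx = b.x - a.x, dy = b.y - a.y;
  const dist = Math.max(Math.hypot(dx, dy), 1e-6);
  const f = LOCAL_FORCE / (dist * dist); // inverse-square, magnet-like

  for (const p of pucks) {
    let fx = GLOBAL_FORCE.fx, fy = GLOBAL_FORCE.fy;
    if (p === a) { fx += f * dx / dist; fy += f * dy / dist; }
    if (p === b) { fx -= f * dx / dist; fy -= f * dy / dist; }

    // surface friction opposes the current velocity
    fx -= FRICTION * p.vx;
    fy -= FRICTION * p.vy;

    // heavier pucks accelerate less under the same force: a = F / m
    p.vx += (fx / p.mass) * DT;
    p.vy += (fy / p.mass) * DT;
    p.x += p.vx * DT;
    p.y += p.vy * DT;

    // bounce off the table walls
    if (p.x < 0 || p.x > WIDTH) p.vx = -p.vx;
    if (p.y < 0 || p.y > HEIGHT) p.vy = -p.vy;
  }
}

// run a short passive "recording" of the world
for (let t = 0; t < 600; t++) step(pucks);
console.log(pucks.map(p => `${p.name}: (${p.x.toFixed(1)}, ${p.y.toFixed(1)})`).join("\n"));
```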

The researchers paid Amazon Mechanical Turk workers to participate in two experiments using the microworlds. The first experiment involved sixty-four participants, each sorted into one of three conditions: passive, active, or yoked. All participants were asked to judge whether puck A or B was heavier (mass) and whether A and B attracted, repelled, or did not act on each other (force). Passive participants watched a recording of pucks A and B moving in the microworld; active participants could interact directly with pucks A and B using the mouse; yoked participants watched a recording of an active participant’s interventions. Active participants outperformed passive and yoked ones, and participants fared worse at inferring mass than force.

The second experiment was similar to the first, except it involved 120 participants separated into two blocks — one block was asked to focus on mass and the other on force. Participants were sorted into active, yoked-match (viewed an active participant’s interventions and asked to answer the same question), or yoked-mismatch (viewed an active participant’s interventions and asked to answer the opposite question).

Again, participants were better at identifying force than mass, and active participants outperformed yoked ones on mass, while on force the two groups performed equally well. This suggests that first-hand evidence matters more for determining mass than for determining force. Interestingly, an Ideal Observer analysis (a model that performs as well as possible given the available evidence) showed that the evidence actually contained more information about mass, even though participants were better at identifying force.
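
The article does not specify how the Ideal Observer was implemented; one common way to think about such a model is as simulation-based hypothesis comparison, in which the world is simulated under each candidate hypothesis about mass and force and each simulation is scored by how well it explains the observed trajectories. The TypeScript sketch below illustrates that idea with a toy one-dimensional simulator; the hypothesis set, simulator, and scoring rule are assumptions for illustration, not the paper's model.

```typescript
// Illustrative sketch: an "ideal observer" framed as simulation-based hypothesis
// comparison. Everything here is an assumed toy setup, not the paper's analysis.

type Hypothesis = { massRatio: number; force: -1 | 0 | 1 }; // A:B mass ratio; 1 = attract, -1 = repel, 0 = none

// trivial 1D two-puck simulator: returns positions of A and B over time
function simulate(h: Hypothesis, steps = 200): number[][] {
  let xA = 0, xB = 10, vA = 1, vB = -1;
  const massA = h.massRatio, massB = 1;
  const traj: number[][] = [];
  for (let t = 0; t < steps; t++) {
    const d = xB - xA;
    const f = h.force * 5 / Math.max(d * d, 1); // magnet-like pairwise force
    vA += (f / massA) * 0.01;
    vB += (-f / massB) * 0.01;
    xA += vA * 0.01; xB += vB * 0.01;
    traj.push([xA, xB]);
  }
  return traj;
}

// squared-error score: smaller means the hypothesis explains the data better
function score(observed: number[][], predicted: number[][]): number {
  return observed.reduce((s, [oa, ob], t) =>
    s + (oa - predicted[t][0]) ** 2 + (ob - predicted[t][1]) ** 2, 0);
}

// pretend this is the trajectory a participant actually saw
const observed = simulate({ massRatio: 2, force: 1 });

// the ideal observer scores every candidate hypothesis against the evidence
const hypotheses: Hypothesis[] = [];
for (const massRatio of [0.5, 1, 2])
  for (const force of [-1, 0, 1] as const)
    hypotheses.push({ massRatio, force });

const best = hypotheses.reduce((a, b) =>
  score(observed, simulate(a)) < score(observed, simulate(b)) ? a : b);
console.log("best-fitting hypothesis:", best);
```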

With these experiments, Bramley and collaborators approach fundamental questions of how we intuit latent properties of the physical world. The researchers’ work has implications for augmented reality and AI — augmented reality hardware might be able to offer even deeper insights into how we interact with objects around us, and AI systems that aim to mimic humans could benefit from training on this type of data.

By Paul Oliver