BendID @ CDesign Lab, Purdue University.

How to Design Devices That People Can Use.

Deriving usability from design and affordances of objects.

While ‘affordance’ has various meanings, my definition here is the “physical attribute” of a tangible object that hints to the user how to interact with it, as championed by Don Norman. A flat surface invites you to rest your palm on it, touch it with your finger(s), or drag your finger(s) across it. A door knob has a peculiar round shape that enables you to grasp it: the shape of the knob matches the negative shape of your palm, hinting that you should hold it. These are the obvious ways you interact with such objects. It is as if they are trying to tell you how to touch them, grab them, and interact with them.

Now if we look at design and affordances through the lens of embedded electronics, we can come up with innovative products that create the user experience of the future. These electronics can leverage the affordances of the object or provide newer methods of interacting with the product. ‘Tangible interactions’ thereby require the clever marriage of design and electronics, which forms natural and intuitive user interactions.

“Design ‘N’ Affordances — DNA of Tangible Interactions”

Leveraging the Affordances

Our current generation of touchscreen smartphones is a great example of this category. In physical shape and size, a phone is a cuboidal object that fits perfectly in one’s hand. The surfaces of this cuboid are flat, housing the keypad or screen (front) or providing a good gripping area. By putting an array of capacitive sensors on one surface, the designer leverages the affordance of the design (object) to make it a touch surface. By further superimposing this touch surface on a display, one can design an entire user interface that intuitively navigates the user through tasks and options. IBM did this in 1994 when it launched the world’s first touchscreen phone, the Simon. Later the concept of the touch smartphone came into existence, with Apple’s iPhone becoming a popular option.

Left to Right: Nokia 232 (1994), IBM Simon (1994) and iPhone (2007)
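To make the idea concrete, here is a minimal sketch of how a grid of capacitive readings can be reduced to a single touch point. The 4x4 grid, the threshold, and the weighted-centroid method are illustrative assumptions; real touchscreens use far denser mutual-capacitance grids and more sophisticated interpolation.

```python
def touch_point(readings, threshold=0.5):
    """Return the (x, y) centroid of activated cells, or None if no touch.

    `readings` is a 2D list of per-cell capacitance values; cells above
    `threshold` count as touched, weighted by their reading.
    """
    active = [(x, y, v)
              for y, row in enumerate(readings)
              for x, v in enumerate(row)
              if v > threshold]
    if not active:
        return None
    total = sum(v for _, _, v in active)
    cx = sum(x * v for x, _, v in active) / total
    cy = sum(y * v for _, y, v in active) / total
    return cx, cy

# A finger pressing near the middle of an assumed 4x4 sensor grid:
grid = [
    [0.0, 0.1, 0.0, 0.0],
    [0.1, 0.9, 0.8, 0.0],
    [0.0, 0.8, 0.7, 0.0],
    [0.0, 0.0, 0.1, 0.0],
]
print(touch_point(grid))  # a point between cells (1,1) and (2,2)
```

Superimposing this on a display is then a matter of scaling the grid coordinates to screen pixels.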


A spherical shape is likely to be grasped, a flat surface on a door is likely to be pushed, and a cuboid is nice to hold in the hand. These geometries have affordances coded into them. The designer pays attention to which design offers the affordances that best suit the need.

A typical example is the Norman-door problem. Some doors push open, some pull. How does one know whether to push or pull? Many a time people are confused about what to do, so they try both. Some get frustrated, while some manage to pass through. Many doors carry a sign stating either “Push” or “Pull”. It is like inserting a USB drive into a port: the intended orientation is invisible to the user, so they iterate through the interaction.

The Push vs Pull Dilemma of the Door.

Some solutions to this problem are pretty intuitive. It is easier to “pull” a door that has rods sticking out (image on the right), whereas it is easier to “push” a door that has a flat, collapsible rectangular bar. The design indicates that a flat surface should be pushed rather than pulled. Thus certain doors (especially at universities and emergency exits) have push bars that disengage the door on pushing, and the user gets out.

Designing the right shape to leverage its affordance is the key in intuitive tangible interactions.

The masthead picture is of a project executed at the C Design Lab at Purdue University, where we explore new attributes of a certain shape and develop intelligent interactions by embedding it with electronics. We developed a square puck made from polydimethylsiloxane (PDMS). Because of the material used, it is flexible and bendable. We embedded sensors and electronics in the PDMS puck and attached a machine-learning algorithm (an SVM) to understand the user’s input intention. We then explored how a square puck could be used as an input device through the actions of bending and twisting. The name of the project was BendID.

The implementation was pretty straightforward. We embedded an array of localized capacitive sensors to capture the graph of touch and pressure data during the actions of bending and twisting. By attaching a machine-learning algorithm, we classified the actions. This implementation was clean and required less data-model development.
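The classification pipeline can be sketched as follows. This is a minimal illustration, not BendID’s actual code: the 4x4 sensor layout, the synthetic “bend”/“twist” pressure patterns, and the scikit-learn SVC settings are all assumptions made for the example.

```python
# Sketch: classifying bend vs. twist gestures from an array of capacitive
# sensor readings with an SVM. The sensor grid and the fake pressure
# patterns below are illustrative assumptions, not BendID's real data.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
GRID = 4  # assumed 4x4 capacitive grid, flattened to 16 features

def synthetic_sample(gesture):
    """Fake a sensor frame: bending loads opposite edges,
    twisting loads opposite corners (assumed patterns)."""
    frame = rng.normal(0.1, 0.02, (GRID, GRID))
    if gesture == "bend":
        frame[:, 0] += 0.8
        frame[:, -1] += 0.8
    else:  # "twist"
        frame[0, 0] += 0.8
        frame[-1, -1] += 0.8
    return frame.ravel()

# Build a small labeled training set and fit the classifier.
labels = ["bend", "twist"] * 50
X = np.array([synthetic_sample(g) for g in labels])
clf = SVC(kernel="rbf").fit(X, labels)

# Classify a new, unseen sensor frame.
print(clf.predict([synthetic_sample("bend")])[0])
```

In the real device the features would come from the measured touch/pressure graphs rather than synthetic frames, but the train-then-predict structure is the same.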

The bigger question was: “Is it intuitive for people to bend and twist a square puck as an input device in a gaming application?” Turns out the answer is yes and no. While some felt it was more intuitive in games like car racing or controlling a plane, it became increasingly difficult when they were asked to perform multiple commands on the virtual object (like shooting lasers and shielding yourself while navigating the car among obstacles).

BendID tries to break away from the philosophy of leveraging perceived affordances from the shape of the object and to explore new interactions. While it was easy to work with counterintuitive affordances from the device for simple tasks, it became increasingly difficult as the scope of the commands increased. More information can be found in the paper on the lab’s website — BendID.

There is a reason why mobile phones are not spherical… a design reason.