Research

Input and output vocabulary

This post continues our look at research that can help us define the input and output vocabulary for our interactions.

Data Semantics

What types of data can our intervention tokenize? (A rough sketch of these token types as data structures follows the list.)

  • Demographics 
    Information like gender, age, race/ethnicity, location, etc.
  • Sentiment 
    Facial expressions (smile, brow furrow, inner brow raise, brow raise, teary eyes, lip depression, etc.), speech expressions (tone, pitch, tempo, changes in paralinguistics, etc.), and the valence of keywords
  • Intent/Meaning
    The main purpose of an utterance
  • Utterance
    Phrases carrying an intent
  • Entities
    Information like addresses, locations, times, names, etc.
  • Response
    Output utterance
Identifying the data tokens from an exchange in our conversation transcript
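
To make this vocabulary concrete, here is a minimal sketch of these token types as TypeScript data structures. The names and fields are illustrative assumptions for discussion, not a finalized schema.

```typescript
// Illustrative sketch of the token vocabulary as TypeScript types.
// All names and fields are assumptions, not a final schema.

interface Demographics {
  gender?: string;
  age?: number;
  ethnicity?: string;
  location?: string;
}

interface Sentiment {
  facialExpressions: string[]; // e.g. "smile", "brow furrow", "teary eyes"
  speechExpressions: string[]; // e.g. "tone", "pitch", "tempo"
  keywordValence: number;      // e.g. -1 (negative) to +1 (positive)
}

interface Entity {
  type: "address" | "location" | "time" | "name";
  value: string;
}

// A single exchange in the transcript, decomposed into tokens.
interface ExchangeTokens {
  utterance: string;   // the phrase carrying the intent
  intent: string;      // the main purpose of the utterance
  entities: Entity[];
  sentiment?: Sentiment;
  demographics?: Demographics;
  response: string;    // the output utterance
}
```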

Ideasthesia

In 2001, Vilayanur S. Ramachandran and Edward Hubbard modified Wolfgang Köhler’s 1929 experiment, using the words “kiki” and “bouba”, and asked American college undergraduates and Tamil speakers in India, “Which of these shapes is bouba and which is kiki?” In both groups, 95% to 98% selected the curvy shape as “bouba” and the jagged one as “kiki”, suggesting that the human brain somehow attaches abstract meanings to shapes and sounds in a consistent way. The kiki/bouba effect is a non-arbitrary mapping between speech sounds and the visual shape of objects.

Which one’s kiki and which is bouba?
Later studies built on top of Kiki Bouba

While this effect has been linked with synesthesia (the ‘bleeding of senses’, or one sense getting stimulated by another), newer research refers to it as ideasthesia, i.e., the idea that some of our sensory perceptions are shaped by our conceptual understanding of the sensory stimuli. Unlike synesthesia, which affects only some people, ideasthesia is common to all of us.

This has interesting potential for application in our visualization: how can we design visuals that evoke certain universal perceptions, and hence responses?
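
As one hedged sketch of what this could look like (assuming we score an utterance’s valence on a -1 to +1 scale and render tokens as blob-like shapes; the names and numbers below are illustrative, not a finalized design), a kiki/bouba-inspired mapping could drive the spikiness of a shape from sentiment:

```typescript
// Minimal sketch: map sentiment valence (-1..+1) to a shape's "spikiness",
// leaning on the kiki/bouba intuition that jagged forms read as harsh
// and rounded forms read as soft. All names here are illustrative.

interface ShapeParams {
  spikiness: number; // 0 = fully rounded (bouba), 1 = fully jagged (kiki)
  points: number;    // number of vertices on the blob outline
}

function shapeForValence(valence: number): ShapeParams {
  // Clamp valence to the expected range, then invert it:
  // negative valence -> spiky, positive valence -> round.
  const v = Math.max(-1, Math.min(1, valence));
  const spikiness = (1 - v) / 2;
  return {
    spikiness,
    points: Math.round(6 + spikiness * 10), // spikier shapes get more vertices
  };
}

// Example: an angry utterance (valence -0.8) renders as a jagged, kiki-like form.
console.log(shapeForValence(-0.8)); // -> spikiness 0.9, points 15
```

The idea is that negative valence pushes the form toward a jagged, kiki-like silhouette, while positive valence rounds it toward bouba, so the visualization taps a perception most viewers already share.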


The Rorschach Test

“The Rorschach test is a psychological test in which subjects’ perceptions of inkblots are recorded and then analyzed using psychological interpretation, complex algorithms, or both. Some psychologists use this test to examine a person’s personality characteristics and emotional functioning”. 
(From Wikipedia)

What if we used Rorschach-test-like mechanisms in our visualization that not only support the interaction, but also help inform the psychological profile the tool creates of the person?