We have become used to tracking daily activities like exercise and sleep as a matter of course. Our online behavior is tracked in great detail by almost every website we use. Some people also track — or have tracked for them — online work indicators such as email response time. Proponents tout the benefits of this trend, while critics raise concerns about privacy (I think both are correct). But there is another, ultimately more impactful, form of quantification on the horizon: measurement and tracking of team interaction.
Team performance is a subject of great interest in every human endeavor. Businesses, the military, and schools have long taught and rewarded team behavior such as agility and collaboration. Team performance is traditionally measured by results with respect to objectives, e.g., creating a product on time and on budget. This is certainly the ultimate measure, but it is too coarse and often too distant in time to enable specific feedback to organizations and team members about how to improve their group and individual behaviors.
At SRI we have been working on team behavior analysis for more than a decade. Part of our DARPA-sponsored Cognitive Assistant that Learns and Organizes (CALO) project involved understanding meetings by analyzing spoken interaction along with relevant email and slides. The goal was to improve meeting understanding over time by observing a series of meetings and learning better and better models of meeting topics and action items.
Collaboration is an essential skill
More recently we have turned our attention to measuring and tracking the collaborative behavior of teams in action. The focus is on evaluating how individuals work together as a group, measured by how they interact with each other. Knowing how often team members contribute to a conversation, where their gaze is directed, details of their posture, and so on forms a basis for assessing collaborative behavior. Low-cost cameras and microphones have created an opportunity to gather real-time behavioral data in many environments, and rapid improvement in data-driven machine learning has created an opportunity to abstract meaningful information from those data. We have explored this approach in the workplace but are currently focusing on educational environments.
Measuring collaboration in the classroom
A key venue for assessing and teaching collaboration skills is the classroom. National STEM standards and 21st-century employment guidelines include collaboration as an essential skill, but our ability to assess collaborative behavior is very limited. Some existing methods require trained human observers to be present, which makes collaboration assessment too expensive for many classroom budgets and too infrequent for those that can afford it. Human observers may also miss subtle behavioral indicators that are valuable for accurate assessment. Other approaches use automated analysis, but only in artificial contexts (e.g., online games) that take time away from instruction. SRI researchers are currently working to create tools that enable collaboration assessment in real classrooms during authentic learning tasks, without interrupting learning and with minimal cost.
The SRI project is a joint effort of the Education and the Information and Computing Sciences divisions, led by Nonye Alozie, Amir Tamrakar, and Svati Dhamija. Classroom cameras and microphones gather data from students working together on class assignments. Machine learning is used to automatically analyze these data in order to relate directly observable low-level behavior (e.g., gaze direction, head nods, posture) to higher-level actions that are important for assessing collaboration (e.g., paying attention to the person who is speaking). Natural language understanding and activity recognition can then be used to explore the meaning of these interactions, determining whether participants are building on each other's ideas and sharing tools and work products, or arguing and withdrawing.
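To make the low-level-to-high-level mapping concrete, here is a minimal sketch of one such relation: inferring "paying attention to the speaker" from per-frame gaze and speaker detections. All names, data structures, and thresholds are illustrative assumptions for this post, not SRI's actual pipeline.

```python
# Illustrative sketch: relating low-level detections (who is speaking,
# where each student is looking) to a higher-level collaboration
# indicator ("paying attention to the speaker"). Hypothetical types.

from dataclasses import dataclass

@dataclass
class Frame:
    """One time step of detections from the audio/video analysis."""
    speaker_id: str               # who the audio pipeline says is talking
    gaze_targets: dict[str, str]  # student id -> id of person they look at

def attention_to_speaker(frames: list[Frame], student: str) -> float:
    """Fraction of frames in which `student` gazes at the current
    speaker, counting only frames where someone else is speaking."""
    relevant = [f for f in frames if f.speaker_id != student]
    if not relevant:
        return 0.0
    attending = sum(
        1 for f in relevant if f.gaze_targets.get(student) == f.speaker_id
    )
    return attending / len(relevant)

frames = [
    Frame("ana", {"ben": "ana", "cho": "ana"}),
    Frame("ana", {"ben": "cho", "cho": "ana"}),
    Frame("ben", {"ben": "ana", "cho": "ben"}),
]
print(attention_to_speaker(frames, "ben"))  # 0.5
```

A real system would derive the frame-level detections from vision and speech models and track many indicators of this kind in parallel; the point here is only the shape of the mapping from observable signals to an interpretable score.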
Opening the black box
In the SRI approach, collaboration is systematically described in terms of a rubric of detectable, human-intelligible actions and states, based on existing collaboration research. The machine vision system populates this rubric using video captured from the classroom as input. Unlike black-box, end-to-end deep learning systems, the SRI approach describes collaboration in terms of its human-intelligible component actions and states, which forms the basis of actionable advice to stakeholders for improving collaboration in the classroom. This framework also tests the empirical question of what the components of collaboration are: it works simultaneously as an experimental platform for investigating the components of collaboration and as an automatic tool for fine-grained assessment of collaboration in the classroom.
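The contrast with a black-box score can be sketched as follows: instead of one opaque number, the rubric keeps each component visible, so the output can point at what to improve. The component names and weights below are invented for illustration and are not SRI's rubric.

```python
# Illustrative sketch of an interpretable rubric: an overall score is
# computed from named, human-intelligible components, and the breakdown
# is kept in the result. Components and weights are hypothetical.

RUBRIC_WEIGHTS = {
    "attends_to_speaker": 0.4,
    "builds_on_ideas": 0.35,
    "shares_materials": 0.25,
}

def assess(component_scores: dict[str, float]) -> dict:
    """Weighted overall score plus the per-component breakdown that
    makes the assessment explainable to teachers and students."""
    overall = sum(
        RUBRIC_WEIGHTS[name] * component_scores.get(name, 0.0)
        for name in RUBRIC_WEIGHTS
    )
    # The weakest component doubles as actionable feedback.
    weakest = min(RUBRIC_WEIGHTS, key=lambda n: component_scores.get(n, 0.0))
    return {"overall": round(overall, 2),
            "components": component_scores,
            "suggest_focus": weakest}

report = assess({"attends_to_speaker": 0.8,
                 "builds_on_ideas": 0.5,
                 "shares_materials": 0.9})
print(report["overall"], report["suggest_focus"])  # 0.72 builds_on_ideas
```

An end-to-end model would emit only something like the 0.72; keeping the components lets the system say "work on building on each other's ideas," which is the kind of specific feedback the post argues for.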
We are currently experimenting with the data and honing the machine learning pipeline. Once a continuous collaboration assessment system is in place, participants and teachers can be given feedback both in real time and later in summary form. Participant progress can be monitored and measured with respect to objective standards of desired team behavior.
A science of team behavior
We believe that the collaboration assessment methodology and findings from classroom experiments can be generalized to the workplace and other settings. Teams in every environment will have access to objective, quantified evidence of what they are doing well and what they need to improve. Combined with actual data on how the team eventually performed with respect to objectives, this will give us the ability to go beyond generic advice to create a new science of effective team behavior.
Written by Bill Mark, President, Information and Computer Sciences (ICS) at SRI International.