Why Are Divergent Thinking Tests Not Widely Used?

SparcIt
Published in SparcIt Blog
5 min read · Sep 6, 2016

Today’s global companies are looking for employees and managers who can move and operate in different countries, make decisions in unfamiliar contexts, lead diverse teams, and deal with uncertainty. Even a small company like SparcIt currently operates in six different countries. The only problem is finding the creative talent.

The only problem is finding the creative talent for the global economy.

Ever since J.P. Guilford and Paul Torrance pioneered these studies in the 1950s, one’s creative potential has been associated with Divergent Thinking abilities — the process of generating many unique ideas in order to solve a problem or open-ended task. Where convergent thinking is systematic and logical, divergent thinking is spontaneous and free-flowing.

As discussed in a previous post, divergent thinkers can think outside the box and find unconventional solutions.

But why are Divergent Thinking tests not used as part of recruiting or employee training processes?

Traditional assessments focus on one-right-answer questions. Most personality assessments are designed in Likert or multiple-choice formats for ease of grading and automation. Such strict formats defeat the purpose of the divergent thinking approach.

Quantifying one’s Divergent Thinking abilities is extremely tricky, simply because the traditional assessment styles are not applicable. In order to accurately measure one’s Divergent Thinking abilities, one must be given a set of open-ended stimuli, scenarios and exercises.

Traditional restrictions on assessments defeat the purpose of open-ended assessments such as Divergent Thinking tests.

One main reason divergent thinking assessments are not well-received is the inability to grade and scale them efficiently. They require trained graders, which tends to be costly and not scalable.

Professional graders and consultants: Since Divergent Thinking exercises are open-ended, graders and consultants must be trained thoroughly. But regardless of how much training is involved, there is always bias when human graders are involved. No matter how hard you try, there is a good chance you are grading some participants more harshly than they deserve and giving others more credit than they deserve. This type of bias is not acceptable in high-stakes testing, where one’s promotion or job is on the line. Hence, most organizations avoid such assessments in high-stakes situations. Such assessments can only be used when all participants are graded fairly and identically, with no bias.

Fatigue and frustration: There are quite a few problems with subjectivity in grading open-ended responses. These have been studied, and simple fatigue and frustration are part of it. As a trained grader, your temperament and disposition change over the hours you spend grading. In fact, your frame of mind can change in moments for any number of reasons: five weak responses in a row can put you in a foul mood; fatigue can set in; a baby crying in the background, or a too-cold or too-noisy room, can set your nerves on edge. Rubrics help, but even with rubrics, there is probably no way to be completely consistent.

Scalability & high-volume hiring assessment: A client of ours is a talent management firm that focuses on recruiting for low- to mid-level management positions. As expected, a mid-level management position can draw upwards of 100 applicants. All applicants go through a set of assessments, including personality and skills-based assessments. Because humans have been involved in grading Divergent Thinking exercises, they cannot be administered in large volumes. Hence, such assessments cannot be used in recruiting processes where hundreds, if not thousands, of applicants apply. Recruiters, trainers and teachers shy away from such assessments because they involve a high degree of manual effort and cannot easily be reported at scale.

Costly & time-consuming: It takes about 40 hours to grade an open-ended Divergent Thinking assessment for a group of 100 participants. As they say, time is money. On average, it takes as long as five minutes to grade each response in such an assessment. Considering that you might have about 100 applicants, and there could be as many as five exercises each, you are looking at roughly 40 hours of grading for just one position. Most recruiters, trainers and talent-management consultants do not have the luxury of that time. Hence, although Divergent Thinking assessments provide a different point of view, they are not widely used.
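The back-of-the-envelope arithmetic above can be checked in a few lines. The figures are the ones stated in the paragraph (five minutes per response, 100 applicants, five exercises each), not actual SparcIt data:

```python
# Rough estimate of manual grading time for one open position,
# using the numbers quoted above.
applicants = 100
exercises_per_applicant = 5
minutes_per_response = 5

total_minutes = applicants * exercises_per_applicant * minutes_per_response
total_hours = total_minutes / 60

print(f"{total_hours:.1f} hours of grading per position")
# prints: 41.7 hours of grading per position
```

That is more than a full work week of pure grading for a single opening, which is why the “about 40 hours” figure is so prohibitive in practice.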

So far, Divergent Thinking tests are not widely used because they require trained graders, are prone to grader fatigue, cannot be scaled, and are costly and time-consuming.

To accurately measure one’s Divergent Thinking abilities, one must be given a set of open-ended stimuli, scenarios and exercises. Furthermore, the responses must be graded efficiently for a large group of participants. One well-researched and well-known assessment is SparcIt’s Creative Thinking assessment. Unlike traditional assessments, SparcIt’s unique feature is the use of open-ended exercises with automated scoring. Using its Watson-like, patent-pending engine, SparcIt accurately and efficiently grades participants’ responses and provides a detailed report to both the participants and the test administrators. Hence, it eliminates the major reasons such assessments go unused.

SparcIt’s Creative Thinking assessment is fun, fast, automated, affordable and scalable.


SparcIt

Technology company with a focus on developing the future of the workforce