The Experience Enabler Scorecard

Why your program may just not be up to creating products with an outstanding user experience. 🤔

Martin Gassner
Still Day One
12 min read · Jan 18, 2022

--

Why are great experiences so crucial?

These days user experience is everything.

Providing a world-class (user) experience has become the key ingredient and differentiator for leading brands, products and services alike. After twenty-five years of digitalisation, the global domination of the smartphone, ubiquitous high-speed internet access and Silicon Valley’s iconic digital brands with their benchmark user experiences have fuelled this rapid transformation. And now the Covid-19 pandemic has resulted in even more acceleration — how we buy, how we work, how we communicate and interact — with cutting-edge experiences that are almost always digital.

But what makes a great user experience?

A really great experience starts by putting humans at the centre, beginning with the experience, and working backwards through technology and processes. Never the other way around.

Companies that have systematically adopted a human-centric mindset, state-of-the-art experience design capabilities, appropriate analytical leadership and iterative development skills are creating breakthrough digital products that deliver an amazing user experience. Recent studies by McKinsey and Accenture show that successful companies today focus heavily on user experience.

Six levels of Experience Design maturity towards the Business of Experience

Do we need a reality check?

If you look at the diverse landscape of digital products and services today, you will find many companies are still a long way from offering an outstanding digital experience. Somehow their customers muddle through product configuration, complete the online purchase or find the information they need. Looking at the UX pyramid, most experiences end up mediocre or worse, along the lines of “Works somehow, but…”. It’s rarely fun and you’ll be glad when you’ve done it. We could call this the “standard of minimum expected user experience”.

Maslow’s pyramid of UX
Source: growthengineering.co.uk

This is surprising, because the necessary methods, skills and processes for developing digital products and services are widely accessible today and taught at numerous educational institutions. The resulting extensive pool of well-trained experts and service providers has built up an entire industry of digital transformation. Design thinking, user-centricity, purpose, agile working and the like are today the buzzwords of every business meeting, interview or keynote speech. So, if the knowledge of how to craft excellent experiences is out there, why are so many digital experiences still mediocre?

Crafting a world-class user experience isn’t easy.

A variety of disciplines and roles, such as business stakeholders, researchers, strategists, designers, software developers, data scientists and others, must work seamlessly together, develop a common understanding of the goal and align their interests. On wide-reaching digital transformation programs and in large organizations, this quickly results in a very complex system. Therefore, before we start developing products for user experience excellence, we need to ask ourselves whether our existing system is capable of delivering them. And we should review this at regular intervals.

What helps us create great experiences?

After 25 years of experience in developing successful and sometimes not so successful digital products and ecosystems, we recognize recurring patterns in the complex systems and set-ups in which digital transformation programs are developed. These patterns show up in different dimensions that either support or prevent the development of an excellent user experience.

The following enabling dimensions should be considered:

  1. Research & User Testing
    This first enabler is about gaining continuous insights, discovery and validation. Are we doing the right thing, do we understand the real problem and context? It is about knowing what the users do, think, use and feel. What are their pain points and motivators, what do they want to solve? Have we derived the right insights, do our concepts work, do we test our riskiest assumptions with the user? Are the right learning loops in place? This is a continually repeating process that never ends.
  2. Digital Experience Strategy
    This enabler is all about making sure there is a digital strategy process in place that defines a clear purpose for the experience. What is the value proposition and are guiding principles defined? Does a north star for all participants (client stakeholders and supplier teams) exist? Does everybody know, understand and follow the strategy?
  3. Innovation Imperative
    This enabler ensures that an innovation process and methods are systematically applied to continuously extend the space of the possible and pave the way for better experiences. It is also about being aware of changes and new challenges, risks and opportunities for the business model. It feeds back into the strategy to maintain agility and the capability to adapt.
  4. Team set-up and way of working
    This enabler is about the best organisational set-up and culture, the right attitude and mindset, working methods and communication rituals. It's also about psychological safety, enablement and creating a safe space that embraces failure, so that people can do their best work. It is about reducing friction in getting the work done.
  5. Design (Eco-)System
    This enabler covers the way of working, work environment and enablers for designers, supporting a smooth design and build process for the customer-facing part of a service or product. It is also about craft, and how a UX-driven mindset and understanding of design are a core capability for a successful business of experience. A design system as a single source of truth for all parties involved is one of the core building blocks.
  6. Frontend (Eco-)System
    This enabler is about the way of working, work environment and enablers for developers concerning a smooth build process of the technical part of a service or product. It is also about craft, architecture, test automation and developer routines. A corresponding code library of the design system as a single source of truth for all involved actors is one of the building blocks.
  7. Content (Eco-)System
    This section is about creating and delivering the right content, at the right time and place for user needs, from content strategy and contextualisation to content development and production. It is about the required craft and skills, workflows and distribution processes.

What if we score the maturity of the seven experience enablers?

To evaluate the seven enabling dimensions described above, get an impression of the respective maturity level and a holistic view of the overall system, we have developed the Experience Enabler Scorecard. With this scorecard, an assessment of a digital transformation program can be swiftly performed to investigate to what extent the program is capable of creating digital products with an outstanding user experience.

Graph of all seven experience enabler dimensions

Hypothesis:

“The higher the maturity of the enablers, the higher the chance that a great experience can be developed.”

Disclaimer: This does not mean that a high level of maturity will automatically result in an outstanding user experience. But it is much more likely to happen than with a low level of maturity, where even the best people will not be able to create outstanding results!

What does the scorecard look like and how is it used?

The scorecard includes the seven experience enabler dimensions. Each dimension is broken down into tangible building blocks which get rated and commented on in assessment interviews with people from the program.

The rating scale has three grades:

  • not in place (value=0)
  • somehow started (value=1)
  • fully implemented (value=2)

The interviewees should take two views into account when they give their ratings:

  1. How do they see the maturity of the building block for their own role and closer team environment?
  2. How do they see it in the broader context of the program or the organisation?

For example:

If the building block is “Research Mindset”, the interviewee might state that they and their team have a strong research mindset. That would mean a grade of “fully implemented” (value=2). But looking at the broader context of the program, they may perceive that their stakeholders de-prioritise research because there is usually no time or budget for it, or claim it is not needed because all insights are supposedly already known (“I know what my customer needs”…). That would result in a “somehow started” (value=1) or “not in place” (value=0) grade for this building block. The broader view weakens the strong rating from the team view (value=2) and leads to a reduced maturity level of 1.5 or 1, as sketched below.
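The scorecard itself does not prescribe a formula for combining the two views, but the numbers above suggest a simple average. Here is a minimal sketch of that assumption in Python; the names and values are illustrative only:

```python
# Minimal sketch: rating one building block from the two interview views.
# Assumption (not spelled out in the article): the own-team rating and the
# broader-context rating are simply averaged, which reproduces the 1.5 and 1.0
# in the "Research Mindset" example above.

RATING_SCALE = {
    "not in place": 0,
    "somehow started": 1,
    "fully implemented": 2,
}

def building_block_score(own_team: int, broader_context: int) -> float:
    """Combine the two perspectives on one building block (each rated 0..2)."""
    return (own_team + broader_context) / 2

# Strong research mindset in the own team, de-prioritised in the wider program:
print(building_block_score(RATING_SCALE["fully implemented"],
                           RATING_SCALE["somehow started"]))  # -> 1.5
print(building_block_score(RATING_SCALE["fully implemented"],
                           RATING_SCALE["not in place"]))     # -> 1.0
```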

Experience Enabler Scorecard, partial view

This exercise is done for each building block. When finished, we get a result for each of the seven dimensions and an overall score, which is the average of the seven dimensions. In the diagram below we get a holistic view of the distribution of the seven dimension scores and can easily identify the strengths and weaknesses of the system.
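How the ratings roll up into dimension and overall scores is described only in prose; the sketch below assumes plain averages at each level and converts the 0 to 2 scale into a percentage, as in the maturity figures quoted in the example further down. The building-block values are invented for illustration.

```python
from statistics import mean

# Hypothetical interview result: each of the seven dimensions maps to the
# combined scores of its building blocks (0..2, averaged over the two views).
# Dimension names are from the scorecard; the values are invented.
interview = {
    "Research & User Testing":        [1.5, 1.0, 1.0],
    "Digital Experience Strategy":    [1.0, 1.5],
    "Innovation Imperative":          [0.5, 1.0],
    "Team set-up and way of working": [2.0, 2.0, 1.5],
    "Design (Eco-)System":            [2.0, 1.5],
    "Frontend (Eco-)System":          [2.0, 2.0],
    "Content (Eco-)System":           [1.0, 1.5],
}

# One score per dimension: the average of its building blocks.
dimension_scores = {dim: mean(blocks) for dim, blocks in interview.items()}

# Overall score: the average of the seven dimension scores, expressed as a
# percentage of the maximum possible rating (2).
overall_percent = mean(dimension_scores.values()) / 2 * 100

for dim, score in dimension_scores.items():
    print(f"{dim}: {score:.2f} / 2")
print(f"Overall maturity: {overall_percent:.0f}%")
```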

Overview of the maturity score from one participant over all seven dimensions

It’s wise to make this assessment with a varied group: people working in the product teams in different roles like product managers, upper management and other stakeholders on the program. It should be a balanced mix of people from supplier teams and client teams. Conducted assessments in the past have shown that you attain most of the valuable insights with a mixed sample group of 10 to 15 people.

While it is certainly interesting to see what the overall maturity score across the seven dimensions looks like, the real insights come from the detailed data collected and from comparing the results across the seven dimensions. We now take a short look at some results of an assessment of a digital program at a big global company.

Example of Experience Enabler Maturity Assessment

First insights derived from a scorecard assessment of a big global company and its suppliers, as part of a program for developing customer-facing digital services.

Setup:

  • 30–60 minute one-to-one video interviews
  • 16 international participants
  • 8 female, 8 male
  • 8 company employees with different roles and career levels
  • 8 agency partners and freelancers with different roles and career levels

The average score after interviewing 16 people shows a maturity level of 66%.

At first glance, the average score looks quite good, especially considering the high values in some dimensions that are essential for the program. However, dimensions with a lack of maturity can also be identified right away.

An overlay of all 16 interview scores shows a more differentiated picture and delivers some interesting insights.

Overlay of all 16 maturity scores shows more insights

We see very clear clusters in certain dimensions with high scores:

  • Team set-up and way of working
  • Design (Eco-)System
  • Frontend (Eco-)System

This underlines that most of the participants perceive a high maturity level in the areas that enable and motivate them to do things right.

The picture is less clear for dimensions with medium scores:

  • Research & Testing
  • Content (Eco-)System

The score for “Research & Testing” looks especially critical. On the one hand, it should have a higher maturity: if there are no good insights, how do the product teams know whether they are doing the right things? On the other hand, we see a relatively wide spread in the perceived maturity level (from 100% down to 25%). This could mean that research and testing is implemented but disconnected from the product teams who need these insights to develop the right experience. One participant explained that they get the research insights after the product has launched!

The third pattern we can recognize is a very wide spread (100% to 0%) in the dimensions of:

  • Digital Experience Strategy
  • Innovation Imperative

This could also be seen as critical. It is a sign of a lack of transparency in the program.

The digital strategy doesn't seem to be known by parts of the team, and they lack the big picture. Innovation seems to be siloed in an ivory tower: it is not part of the product development process, and experiments simply do not happen.

Very wide spread of the perceived maturity score among the participants

Wide spread of the perceived experience enabler maturity on the program

This is not an unusual pattern (roughly a normal distribution, or Gaussian bell curve) when you do this kind of assessment. At first glance, it seems that some people see the maturity more positively than it is and others a bit too negatively. Nevertheless, it is another sign that people should meet soon: there are points they need to discuss in a timely fashion and actions they need to define to improve. There are also clearly some unmet expectations and a lack of transparency that hinder motivation, flow and alignment.

A closer look at the results shows four archetypes in the perception of the experience enabler maturity

Four archetypes among the interviewed participants

The above patterns or archetypes could be described as follows:

  1. The proud informed
    This archetype includes 4 of the 16 people, who gave high to very high scores in all dimensions. It seems that they have more insight into the company's organisation and strategy. At times they viewed the maturity of the program less critically. They are mostly employees of this global company. They are proud of the maturity level they have already achieved with the program compared to the past and to other programs in the company.
  2. The aligned gang
    The biggest archetype, with 6 of 16 people. They gave high to very high scores in three dimensions, a very low score in the innovation dimension and a wide spread in the strategy dimension. They are mostly from external partners, working closely in the product and implementation streams. They see good team bonding, a good culture and an enabling working environment (tools, processes, attitude). But they also feel a big deficit when it comes to innovation and experimentation, which will demotivate them in the long run.
  3. The critical stakeholders
    With 2 of 16 people, this is the smallest group. They gave very high scores in two dimensions and medium to high scores in the others. They are employees of this global company in management positions and tend to take a critical helicopter view of the program. They expect more outcomes from the program than it currently delivers. Overall, they seem to realise that they still have a long way to go to reach a higher level of maturity in the program.
  4. The more expectant
    Another 4 of 16 people display this archetype. Like the other groups, they also rank the same three dimensions high to very high. They are mostly employees of this global company and have a more critical view. It could also be that they are a bit less familiar with the organisation than the “proud informed” archetype, because some of them are new to the company. We called them “The more expectant” because they are mostly subject matter experts (researchers, innovators) who see the acceptance and relevance of their expertise in the program as undervalued and are not enabled to bring the full value they could provide to the product development.

Summary

A great user experience with digital products and services is crucial for success today and helps companies get ahead of the competition. While the knowledge, methods and expertise to create these outstanding experiences are all in place, the system in which they are applied often lacks the maturity to enable the creation of outstanding experiences. This is no wonder, since these systems are complex, and keeping complex systems working as expected is a challenging task.

The “Experience Enabler Scorecard” is a simple yet powerful way to identify your program's strengths and weaknesses with a justifiable amount of effort, and to determine whether it is capable of creating digital products with an outstanding experience. You get data points and insights well beyond the usual KPIs. It helps you identify what improvements are needed, initiate relevant reflections and discussions, and determine the actions required for change. It can also be used to regularly monitor increases or decreases in maturity over time, identify best practices to learn from one another, pinpoint communication gaps in the program or convince stakeholders to change something. Importantly, you get a holistic understanding of your program as a living system.

Previous assessments of big digital programs in different industries have shown a strong correlation between high, balanced experience enabler maturity scores and high experience quality of the resulting products and services. Such programs often lead their industry in experience quality and in independent experience benchmark rankings. Vice versa, low maturity scores are a warning that a program is experiencing significant problems: ineffective product development, low product quality, a lack of motivation in the teams and the loss of good people.

If you are interested in an assessment or if you have further questions, please get in touch with me.

martin.gassner@accenture.com


Martin Gassner
Still Day One

I've been working as a designer and manager on digital stuff since the mid-nineties.