A Mixed-Methods Summer.

Anirudh Kundu
11 min read · Oct 24, 2023


My Internship Experience at J.D. Power, Summer 2023

Areas Covered: CX Research · User Research · Mixed-Methods Data Analysis
Duration: 1 May 2023 – 28 August 2023
Team: CX, Digital Solutions

J.D. Power, Troy, MI

Introduction

What is J.D. Power?

J.D. Power is a global leader in consumer insights, advisory services and data and analytics. A pioneer in the use of big data, artificial intelligence (AI) and algorithmic modelling capabilities to understand consumer behaviour, J.D. Power has been delivering incisive industry intelligence on customer interactions with brands and products for more than 50 years. The world’s leading businesses across major industries rely on J.D. Power to guide their customer-facing strategies.

Study Types

One of the first things that I acquainted myself with during the internship was the difference between syndicated and proprietary studies:

Syndicated studies are research efforts conducted at J.D. Power's own volition to understand and analyse data across industries. These studies are crucial for generating a better understanding of industry trends and also provide competitor analysis to companies. These reports are a goldmine of information for organisations to reflect on and enhance their user experience while keeping sharp tabs on their competitors.

Proprietary studies, on the other hand, are studies that clients ask for. J.D. Power conducts these studies and releases insights. Similar to consultancy work, proprietary research is client-facing and carried out on demand. The major industries I dabbled in were automotive, banking, and insurance, covering a plethora of both syndicated and proprietary client types.

Roles and Responsibilities

As part of the Digital Solutions team, my work revolved around evaluating and analysing data pertaining to the digital services and products of clients (mostly GUIs). Diving deeper into the tasks users performed on client apps, websites, and other interfaces was a perfect confluence of UX and CX research.

With a background in UX research and design, I was inducted into the team to strengthen the CX research workforce from a multifaceted perspective, i.e. both qualitative and quantitative. In light of my expertise in UX design, I was also given the responsibility of serving as a design advisor for consumer utility studies in addition to being a CX researcher.

Note: Due to the sensitive nature of the information dealt with at J.D. Power, all of my work stands under a strict NDA. This article is a report on my experience and learnings at the organisation, and does not pertain to client data.

Exhibit A

SPSS Regressions and Sig Testing

The first thing I did at JDP was familiarise myself with the tools and techniques required to analyse large quantitative data sets. The first tool I learnt was IBM SPSS, a statistical analysis tool used to conduct quantitative regressions. Specifically, we used SPSS while conducting advanced analytics: while computing custom variables, we used SPSS regressions to create specialised heuristics. Furthermore, we capitalised on the repeatability of these data runs by applying the same complex regressions to other product domains too.

An important concept I learned while parsing through data was significance relative to the overall data set. Conducting 'sig tests' was crucial to analyse whether the performance of a particular client was significantly different from the rest of the market, and by how much.

The structure of a sig test was simple and mirrored any standard statistical study: compare the results of a particular element in the data set with the industry's average performance and determine whether the variance was statistically significant.

Screenshot of an SPSS regression (pre custom table build) | The data has been omitted due to NDA.
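To make the idea concrete, here is a minimal sketch of what a brand-versus-industry sig test could look like in code. This is purely illustrative and not J.D. Power's actual tooling; the column names (brand, osat) and the 0.05 cut-off are assumptions on my part.

```python
# Minimal, illustrative sketch of a brand-vs-industry "sig test".
# Not J.D. Power tooling; column names and the alpha cut-off are assumed.
import pandas as pd
from scipy import stats

def sig_test(responses: pd.DataFrame, brand: str,
             metric: str = "osat", alpha: float = 0.05) -> dict:
    """Compare one brand's scores against the rest of the industry."""
    brand_scores = responses.loc[responses["brand"] == brand, metric]
    rest_scores = responses.loc[responses["brand"] != brand, metric]

    # Welch's t-test: is the brand's mean significantly different
    # from the industry average?
    _, p_value = stats.ttest_ind(brand_scores, rest_scores, equal_var=False)

    return {
        "brand_mean": brand_scores.mean(),
        "industry_mean": rest_scores.mean(),
        "difference": brand_scores.mean() - rest_scores.mean(),
        "p_value": p_value,
        "significant": p_value < alpha,
    }
```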

The next important thing to keep in mind while conducting regressions was the sample size of the study. J.D. Power holds a certain threshold for each industry: every company must have at least a certain number of users participating in the study to be eligible for the analysis. Smaller competitors were only included under special circumstances, for example when a company did not have many customers but still managed to upset the market in a major way. For such clients, a "small sample marker" was attached during visualisations to call out the limited sample size.
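As a rough illustration of that idea, flagging small samples boils down to counting respondents per brand and comparing against a per-industry cut-off. The thresholds below are made-up numbers, not J.D. Power's actual eligibility criteria.

```python
# Illustrative only: attach a "small sample" marker to brands that fall
# below a per-industry respondent threshold (thresholds are hypothetical).
import pandas as pd

MIN_SAMPLE = {"banking": 100, "automotive": 150, "insurance": 100}

def flag_small_samples(responses: pd.DataFrame, industry: str) -> pd.DataFrame:
    summary = responses.groupby("brand").size().rename("n").to_frame()
    summary["small_sample"] = summary["n"] < MIN_SAMPLE[industry]
    return summary  # brands marked True get a small-sample callout in charts
```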

Sample Weighting

The third pertinent concept, which was new to me, was 'weighting' the data. Two kinds of weighting were used, namely sample weighting and index weighting.

Sample weighting was applied to the user sample so that it matched the population of the industry being measured.

Index weighting was used to identify the impact of each measured attribute on overall satisfaction, derived through the regressions.
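To show the difference between the two side by side, here is a rough sketch in Python. The segment shares, column names, and attribute list are assumptions I have made for the example, not real study parameters.

```python
# Illustrative sketch of both weighting ideas; the population shares and
# column names are assumptions, not figures from any actual study.
import pandas as pd
import statsmodels.api as sm

# Sample weighting: rescale each segment so the sample mix matches an
# assumed population mix for the industry being measured.
POPULATION_SHARE = {"credit_only": 0.30, "debit_only": 0.45, "both": 0.25}

def add_sample_weights(responses: pd.DataFrame) -> pd.DataFrame:
    sample_share = responses["segment"].value_counts(normalize=True)
    out = responses.copy()
    out["weight"] = out["segment"].map(lambda s: POPULATION_SHARE[s] / sample_share[s])
    return out

# Index weighting: regress overall satisfaction on attribute scores; the
# coefficients indicate each attribute's impact on OSAT.
def index_weights(responses: pd.DataFrame, attributes: list) -> pd.Series:
    X = sm.add_constant(responses[attributes])
    model = sm.WLS(responses["osat"], X, weights=responses["weight"]).fit()
    return model.params.drop("const")
```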

Stratified User Sampling and Surveys

With a background in qualitative research, I was very familiar with creating remote surveys. An important aspect of surveys, however, was to categorise who was filling out the survey and to customise the survey questions according to the user category.

Each industry had different product metrics, based on which we wanted to collect different kinds of user feedback.

For example: digital banking systems could have three kinds of user segments at the sampling level, namely credit-card-only users, debit-card-only users, and users with both credit and debit cards.

To elaborate, while looking deeper into the credit card transaction history screens and their performance, we only wanted behavioural data from users who were active credit card users of the product. It's possible that the screens for a particular task might be very similar for both credit and debit users; however, it is important to distinguish the user motivations behind completing those tasks. Moreover, this level of pre-survey segregation helped keep the data 'noise-free' rather than speculative, giving the responses a robust foundation.

Sample screenshots of 'Transaction Details' pages for Chase debit users (left) and Capital One users (right). Sensitive details of both images have been omitted due to privacy and NDA reasons.

Lastly, different industries and different survey criticalities were used to define the incentives given to the selected users. The general sample size ranged from 1,000 to 10,000 responses, depending on the size of the market segment.
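Putting the segmentation and sample-size ideas together, a stratified draw from a respondent panel might look roughly like the sketch below. The segment labels and quotas are hypothetical and only meant to illustrate the mechanics.

```python
# Hypothetical stratified draw by card-ownership segment; the segment
# labels and quota numbers are made up for illustration.
import pandas as pd

QUOTAS = {"credit_only": 400, "debit_only": 400, "both": 200}

def stratified_sample(panel: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    """Draw a fixed number of respondents from each pre-defined segment."""
    draws = [
        panel[panel["segment"] == seg].sample(n=n, random_state=seed)
        for seg, n in QUOTAS.items()
    ]
    return pd.concat(draws, ignore_index=True)

# Downstream, segment membership can route survey modules, e.g. only
# "credit_only" and "both" respondents see the credit-card task questions.
```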

Exhibit B

Cross Table Regressions

Once the fielding was complete, the results were brought back to the CX team for analysis. This was one of the most crucial phases of work from a J.D. Power perspective.

Apart from IBM SPSS, which was used primarily for complex data regressions, we also used PowerSource, an in-house data analysis tool that my work relied on heavily. It was a cloud repository with the capacity to do cross-table analysis and also create visualisations of the data runs.

Sample screenshot of a PowerSource regression for a manufacturer website evaluation study.

To paint a picture of the data runs themselves: we used a top-down filtration structure to access the exact data set, starting from study details like the dates of survey fielding, the study wave (summer 2023, winter 2023, fall 2022, etc.) and the targeted industry (automotive, banking, etc.), followed by the brand name (often a part of the study heuristic itself).

After narrowing down to the data set itself on the Y-axis, or column set, we started adding qualitative heuristics on the row set, or X-axis, for a comparative regression. Once all the variables had been set, we decided on the calculation methods. These ranged from sig tests (described above) to verbatim analysis; the latter was done using methods like the Analytical Hierarchy Process (AHP) and heuristic mapping.
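For readers who think in code rather than in PowerSource screens, a weighted cross-table of heuristics (rows) by brand (columns) can be sketched with pandas as below. The field names are assumptions for the sake of illustration.

```python
# Illustrative weighted cross-table: heuristics on the rows, brands on the
# columns, weighted mean scores in the cells. Field names are assumed.
import pandas as pd

def cross_table(responses: pd.DataFrame, heuristics: list) -> pd.DataFrame:
    long = responses.melt(
        id_vars=["brand", "wave", "weight"],
        value_vars=heuristics,
        var_name="heuristic",
        value_name="score",
    )
    long["weighted_score"] = long["score"] * long["weight"]
    table = long.pivot_table(
        index="heuristic",
        columns="brand",
        values=["weighted_score", "weight"],
        aggfunc="sum",
    )
    # Weighted mean = sum(score * weight) / sum(weight) per cell.
    return table["weighted_score"] / table["weight"]
```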

Exhibit C

Qualitative Data Analysis

To further corroborate our findings from the quantitative data, we also placed a great deal of weight on qualitative findings at J.D. Power. For this, we conducted our own set of usability audits and generated actionable heuristics. The trick was to do UX research in a CX manner, using screenshots and 'verbatims' to put attributes to the numbers. This metadata was the crux of our analysis.

Metadata Analysis

A method of 'screenshot analysis' was used for this. A detailed repository of screenshots for every client was kept and updated regularly. During the analysis stage, I would go back to those screenshots and conduct static usability tests to find opportunity areas or pain points that may or may not have been uncovered by the quant studies.

This process was tedious and time-consuming. However, it was highly important, as it helped immensely with the verbatim analysis and the recommendations phase of the study. Especially when client share-outs were approaching, screenshot analysis became almost a necessity, since the screenshots were the visual references we used to communicate our findings to the clients themselves.

Mixed Methods: Qual + Quant

I found that it was very hard to convince clients they were doing something poorly if we merely showed them a series of charts and graphs with nothing but numbers on them. Moreover, while communicating their shortcomings to the industry team, it was also our responsibility to share possible recommendations on how to improve. Here, having visual and verbatim markers to support the quantitative data was of the utmost importance.

Things like:

Verbatim: “The Ford website has too many things going on, I don’t know where to look”

Visual: “The Ford navigation bar at the top of the page has more than 10 elements, which does not follow a good UX pattern. More specifically, Miller’s law of psychology says….”

Data: The Ford OSAT (Overall Satisfaction) score for navigation is significantly lower, as it ranks №12 in the industry.

With all three pieces of information combined, we were able to make a strong case for our claims and comments. This was followed by recommendations from visual, usability, and monetary perspectives. The relevance of these studies was highlighted further through 'wave-over-wave' studies: as the name suggests, by comparing a client's performance in one wave against the next, we were able to look closely into the changes and updates.
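In practice, every claim we made bundled those three evidence types together. Purely as an illustration, a single 'finding' could be structured like the record below; the field names are my own and not an internal J.D. Power format.

```python
# Illustrative structure pairing a verbatim and a visual observation with
# the supporting data point; field names are my own, not an internal format.
from dataclasses import dataclass

@dataclass
class Finding:
    brand: str
    verbatim: str        # user quote from the survey
    visual_note: str     # observation from the screenshot/usability audit
    osat_rank: int       # the brand's rank on the related KPI
    significant: bool    # result of the sig test vs. the industry average
    recommendation: str  # suggested improvement shared with the client

example = Finding(
    brand="Ford",
    verbatim="The website has too many things going on, I don't know where to look",
    visual_note="Top navigation bar holds more than 10 elements",
    osat_rank=12,
    significant=True,
    recommendation="Consolidate the navigation into fewer, grouped entries",
)
```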

Exhibit D

Client Types [Industry Oriented Heuristics]

My day-to-day activities also covered some more exciting tasks apart from sifting through large sets of data. The biggest boon of working at J.D. Power was the variety of industries and clients I was presented with. As mentioned earlier, these clients were stalwarts of their respective industries. Studying the performance of the biggest (literally) companies in the world across the automotive, banking, and insurance sectors was highly exciting in its own way.

Initially, the inertia of making recommendations to such large companies felt overwhelming. I was inhibited by the age-old imposter syndrome.

To take an example, thoughts like "Well, the data shows a clear opportunity for Mercedes-Benz to better their loading experience, so that they can climb up the charts in the 'premium' automotive sector. But I am just an intern, what do I know?" kept creeping in.

It took me a good four to five weeks of just bouncing ideas and presenting recommendations within my team before I actually built up enough confidence to put my thoughts forward clearly, without the fear of being reprimanded or brushed aside. My manager, Eric, was instrumental in bringing out the best in me. I remember him saying, "If the data shows it, it's your responsibility to communicate it. You're not speaking from personal preferences; the data says it, man."

UX Design Audits: Modern Design Cues

Outside of hardcore UX and CX research, this internship gave me the opportunity to test and keep in touch with my design skills too. With a background in interaction design, backed by several internships and a Bachelor's degree in the field, I was also involved as a design advisor for a large consultancy study in the insurance sector at J.D. Power.

My role was to conduct audits of digital systems and provide a more granular understanding of the heuristics. One of the usability heuristics used in J.D. Power studies was "Modern Design Cues": any design practice that an organisation used to keep up with the latest visual design and/or accessibility trends was considered an 'MDQ'. My role was to provide a definitive framework for these, breaking down what made designs 'modern' and which cues companies were actually implementing.

I conducted this particular study in tandem with my other work, and it took 2.5 months to come up with a robust framework. I had share-outs with the Senior Managing Director of the Digital Solutions team, Mr. Amit Aggarwal, alongside multiple review sessions with the rest of the CX team throughout my tenure. I was glad to learn that my findings were valid, and they were even implemented as part of the new heuristics charter for J.D. Power usability studies.

CX Research Process: Summed up

  1. Collection (estimated completion time 2–3 months)
    - User Sampling
    - Surveys
  2. Analysis Phase-1 (4–5 weeks)
    - Regressions
    - KPI Analysis
    - Industry Trends
  3. Production Phase-1 (2–3 weeks)
    - Quality Checks
    - Base Decks
  4. Analysis Phase-2 (2–3 weeks)
    - Client Specifics
    - Metadata Analysis
  5. Production Phase-2 (1–2 weeks)
    - Builder Decks
    - Quality Checks
    - Client Curation
My Huddle at J.D. Power, Troy, MI

Looking Back

Summing up my entire journey at J.D. Power, I can firmly say it was one of the most enriching internship experiences I've had. It's very rare to find a company that's willing to invest in an intern as much as J.D. Power did. The Digital Solutions team spent almost a month just teaching me techniques and tools I was unfamiliar with. They took me at my potential and not at what I had already achieved, which I believe is one of the most heartening things for any student trying to make it into the industry.

With an office in Troy, MI, I had a hybrid work setup which required me to be in person 2–3 times a week. During the first two weeks I felt the pain of driving 1.5 hours to work from my house in Ann Arbor, MI. Surprisingly, though, the long, vivid roads and the early morning breeze slowly grew on me, so much so that I began to enjoy the drives to and from work, with the sun either rising or setting on the horizon.

Initially, my work meetings were mostly me scribbling down all the 'acronyms' I heard (a lot of them) with a puzzled expression. I've never been a particularly shy person, so I asked a ton of questions whenever given the opportunity. To make things better, my team was super kind and never got irritated by my incessant questioning. Soon enough, I was fluent in the acronyms and was able to "QC an MWES deck" successfully; sentences like that, which made no sense to me earlier, were now part of my daily verbatim.

Overall, this summer was a key marker in my pursuit of a career in mixed-methods research, especially in the UX and CX industry. With a highly valued internship behind me, I believe I have the chops to take on more pressing challenges in the UX research world.

Acknowledgements

This story cannot be complete without a shout-out to one of the best teams I've worked with. I'd like to thank everyone for their patience, kindness, and most of all the teachings that I'll be keeping with me for a very long time. Thank you Eric, Jon, Amit, Chelsea, Kristen, Sarah, and Omi.

A shoutout to this wonderful team.

Currently, I am practicing everything I learned over the summer at the University of Michigan. I've taken up new teaching responsibilities at the School of Information and am furthering my expertise in quantitative and qualitative research, in pursuit of a career in UX Research after graduation.

