I’m now 18 months into my role at SEEK (wow!) and I’ve been reflecting upon the things I’ve learned, where I’ve come from, and where I’ve got to.
Often, we’re so focused on pumping out the next bit of research/design/Keynote presentation that we forget to stop and smell the lightbulb-moment-roses, so to speak.
The role I undertook at SEEK was a step up from my previous one. This was great, because I was after a challenge — but I’ve also learned some lessons along the way about what it means to build and lead a research practice.
With that in mind, I wanted to take the time to document where I started, where we’re at now, and what I’ll do going forward. I hope this helps those of you out there stepping into UX roles with a leadership focus, or building a UX Research Practice.
In the beginning…
When I began at SEEK in June 2018, my super-powered predecessor Mimi Turner had gone to great lengths to establish the UX Research Practice. I was delighted to find many processes and how-to guides had been documented and a number of tools were already in use. (For those wondering, she’s back now. Praise be.)
At that time, the UX and product management teams were each around 25 people strong. A number of the UXers had undergone UX Research training with Mimi before I started, and were comfortably conducting their own research for their particular product, expecting guidance from the UX Research Practice (i.e. me) when required.
The organisation was shifting to an environment where product managers also contributed to research activities focused on their product — as part of a new way of working called Continuous Discovery.
In a conversation with our Chief Product Officer, I encouraged the expectation that “PM/UX pairs” would conduct research together (yes, I’m cheeky).
All of this left me with two questions:
- In this new world, what purpose did the UX Research Practice at SEEK serve beyond operational and coaching support?
- How would I ensure research quality in a world of decentralised research?
Read on for the answers I developed — but before I got stuck into tackling those questions, I first needed to prove myself.
Lesson 1: Establish your turf and get some runs on the board
Who is this newbie trying to tell me what to do? They’re like, 12
When I started the role at SEEK, one of the thoughts bouncin’ around my noggin was something like “What if people think I’m too inexperienced for this?”
I also suffer from the unfortunate condition of BabyFace(TM), which didn’t help. (But hey, I’ll look great when I’m 60…I hope).
Related to these thoughts was the desire as well as the need to prove myself. I remember speaking about this with my manager at the time and he agreed that I needed to get some runs on the board before the teams would start to trust that a) I ~usually~ knew what I was doing and b) they would, y’know, remember who I was.
My first research project at SEEK was a fairly straightforward contextual research piece with underserved job seeker segments across Victoria. This was great because I got to understand the very real struggles some of our users face when looking for work — and display my findings in a 2-day expo, building empathy for our job seekers. That research now sits in a mental model of the job seeker experience, representing this group of users alongside the ‘typical’ job seeker.
After that, the requests for research advice (and projects!) came in. I could start poking my nose into other research happening around the business.
I think this victory was really evident for me when, in October last year, one of the Heads of Product came to ask for my advice and help on a small research project his team needed to conduct. That wouldn’t have happened 18 months ago. I nearly fell out of my chair from shock, but the fact he came to me, wanting to get the research done right (to ultimately make better decisions), is a big step in the right direction. (Cheers Jay!)
Lesson 2: Get a seat at the table, and own the space you are in
Why is this newbie at this meeting? What are they, like, 12?
So, my first research project at SEEK was complete. Thus began the nose-poking (nose-pokery?). I was now getting up to speed with what research was going on, and starting to lend a hand or lead projects where I could — again, getting more runs on the board.
Compared to my previous role, SEEK is a very collaborative environment and decisions are usually made via consensus. Looking back, one thing I wish I’d done more of was building relationships with the various Heads Of and Product Managers, and getting to know their spaces a little better. I’ve also focused a lot more in recent months on strengthening relationships outside the Product space — with Marketing, Sales, and Strategy to name a few. This enables me to better find supplementary research data or complementary initiatives — increasing the sharing of user insight. Everyone wins!
It’s taken me a while to adjust my approach from ‘research it and people will come’ to working out how to get research insights into the right heads — and determining what research needs to happen in the first place.
Related to this was the first question I mentioned earlier. Let’s look at it again:
In this new world, what purpose did the UX Research Practice at SEEK serve beyond operational and coaching support?
Firstly, I felt we needed a common language/vocabulary to talk about all of our research. Creating a shared understanding of research process and practice ultimately gets things done faster — work is universally understandable and thus repeatable. (Repeatable, recordable, and rigorous — the three ‘r’s of good UX research).
So, I set about developing a framework to address this. It would be something around which to anchor our conversations and help us identify what kinds of research we were and weren’t doing. It needed to incorporate some product management thinking (i.e. align with Continuous Discovery and Marty Cagan’s Four Risks, to name some examples), as well as UX terminology. It also needed to reflect a space for UX Researchers to bounce around in. This is what I came up with (thanks to Steph Moss for tightening the visuals):
So — the UX Research Practice would lend coaching and operational support to Product teams (i.e. those PM/UX pairs I mentioned earlier) conducting exploratory, generative, or evaluative research for their product. While collaboration between product teams did exist, research at SEEK had typically focused on individual products, and still did when I showed up.
This left me with some gaps to play with:
- Exploratory research looking into certain customer segments
- Evaluative research assessing the site-wide experience
- Exploratory/generative research into new product spaces (well OK, I didn’t come up with this point myself but we’ve started this recently… that’s another article for another time).
This meant the UX Research Practice would be responsible for informing multiple product teams at a time, enabling them to make better decisions and eventually helping to set the direction of products. Space = owned.
Real-world example: I recently conducted two research projects — one looking into the experience of our corporate clients, and another looking into the experience of white-collar professional type folks. Both of these projects have informed decisions made in the business. However, this style of research is very new to SEEK, and we’re still hammering out the best way to implement it.
Following on from this, I’ve been running quarterly prioritisation sessions with the Heads of Product and UX Leads in order to determine what research is most needed. I’ve divided these projects into “Tracks” of research — all of these projects sit above individual product team work, and are led by the UX Research Practice with Product team support. I don’t have a pretty diagram for this yet, but it’s something like this:
Track 1: Streams & Segments
This research focuses on the 3 or so user streams we have at SEEK, and user segments within those streams. For example, a “stream” would be our job seekers who use our site, and a user segment within that might be parents returning to full-time work. It can also focus on how our users engage with competitors.
Track 2: UXR Experiments
This research focuses on as-yet unsolved problems for our users, and the business. The problems are sourced from the business, or from various leadership forums.
Track 3: Site Hygiene
As the name suggests, this track focuses on those site-wide evaluative projects I mentioned earlier — assessing the whole experience against a number of metrics (as opposed to the experience for one product).
It’s taken some time to figure out just how we prioritise these forms of research — but hey, that’s what iteration is for! (And now I have an idea for another article…)
Lesson 3: Take 3 months to observe what’s happening and what’s not happening
We’re doing a lot of this, but not so much of that…so let’s do less of this and more of that
Jumping into a new, more senior role is not easy — and changing everything the moment you get in is a bit silly. You don’t necessarily know the reasons why things are done a certain way, and you need to earn people’s trust. Take time to listen, watch, and think. Understand what people are frustrated with and understand what’s working well.
As I mentioned earlier, there was a lot of documentation on research process and tools, but fewer documents describing how research is conducted at SEEK, and what kinds of methods you can use.
I worked with our Principal Designer Cheryl Paulsen on documenting some common methods used at SEEK (she’s also made a very cool tool for this that you can find here) and created a few more guidelines here and there to meet this need. Everyone loves a good how-to doc!
It became apparent that finding old research was a problem. This is definitely not unique to SEEK, but something I wanted to mitigate so that all our research was in one spot and re-usable.
I ran a workshop in which we assessed a number of different UX research tools. We came up with a shortlist of tools that we would continue to use, and one that I implemented to help us better synthesise research data and keep it all in one place. While we’re still getting all the teams familiar and comfortable with the tool, we now have a budding repository of raw research data, tags, and insights — meaning a juicy verbatim is more findable than ever.
Another initiative I began early on was the creation of SEEK’s UX Research Ethics Principles, which enable product teams to conduct user research that’s safe, respectful of our participants, ethical (duh), and looks after participant data. This clears up a lot of the ins and outs around what tools to use, getting consent (and storing it), what to do when things go pear-shaped, and how to store participant data. Massive thanks to Danya Azzopardi, Leah Connolly, and Mimi Turner for continuing to drive and deliver this piece with me (another lesson — you can’t work in a silo!).
Lastly, with direction from our Chief Product Officer, I created a series of training workshops for 5 “guinea pig” Product Managers. These covered the product lifecycle and research, facilitation, ethics, synthesis, and a quick look at different methods. We (Mimi and I) are now working out how we can turn this into a repeatable training module for product teams.
You may have worked out by now that these initiatives happily address my second question from earlier — How would I ensure research quality in a world of decentralised research practices?
It’s taken me time to get these things up and running, but I was grateful I allowed myself the time to absorb before jumping in.
Lesson 4: Impostor syndrome is real, and that’s OK
*arms flail, screams into void*
It’s amazing the range of skills people have. I’ve worked with PhD candidates, people *with* PhDs, data scientists, and a whole bunch of other people at SEEK who are probably much smarter than me.
There are times where I have felt like a literal potato (specifically, a chat) in comparison (and that’s the impostor syndrome talking). I frequently had to remind myself — and still have to remind myself — that I am not a potato. I may have skills and experiences other people don’t have, just as other people have skills and experiences I don’t have. At the end of the day, we’re all here to make the world a better place (you’d hope) — so it makes sense to share skills and learn from each other.
When you’re starting a new role, focus on what you’re good at, and take the time to learn new things from the people who have the skills you don’t (books and articles also come in handy here). Once you’ve got those runs on the board we talked about, you can then start pushing yourself out of your comfort zone.
I felt very keenly that my quantitative research skills — namely surveys — were a gap for me (not having had the means to conduct large surveys online previously). So I leapt on a survey writing course, and set about practicing my survey skills. Writing my first big fat survey wasn’t a piece of cake, but I’m super glad I pushed myself to do it, and I’m grateful for the support I had in doing so.
Another thing to remember is to keep checking back on, and documenting, the work you have done. When I reached my one-year anniversary at SEEK, people said to me “You’ve done a lot in a short amount of time”. At first I was like “Have I really?” but when I looked back at what I’d documented it wasn’t too shabby. Always continue learning, but don’t forget to look back on what you have done.
This reflection also helped me answer Question Number 1 — the navel-gazing, what is my, like, purpose maaaan? question.
Lesson 5: There are so many things to do! Stop trying to do all of them
What do you mean, that can wait?! Everyone needs all the things NOW!
In my last role, I sat within a product or platform team, conducting the nittier and grittier research pieces like usability testing, concept testing, and some exploratory work for the product/platform.
As mentioned above, at SEEK we’ve moved to a world in which UXers and PMs conduct the evaluative or generative research for their product, which freed me up to tackle exploratory pieces of research into the holistic experience for a segment/user type, and future-focused pieces looking into unsolved problems.
I was very excited by all of this, but it was hard for me not being in a product team, doing the nitty gritty research that has a direct impact on the product. I needed to let go of what I was familiar with and start tackling the strategic problems.
Another facet of this was the operational side — not a day went by where I wasn’t dealing with a question related to ops, process, general how-do-I-do-this’s, or…complaints!
Don’t take it personally if people aren’t happy about things — you as an individual can try to improve as much as you can, but you’ll never please everyone. Aim for better, not perfect! It can be helpful here to document process and practice as much as you can so the team can help themselves (and make it clear what things they can and should be doing themselves), as well as to limit yourself to working on the projects (research or research ops) that will give the most bang for your buck. For me, this meant taking a good chunk of my downtime in December 2018 to write a bunch of how-to guides in our company intranet.
When making trade-offs on pieces of work, I’d always prioritise actual research projects and larger-scale operational pieces for the benefit of the team over the smaller things. It meant that some things were left by the wayside (sorry, OneDrive folder structure…) but it meant that I could focus on the macro — like where I wanted the UX Research Practice to go.
Lesson 6: Communicate that you are available
I may need to do some calendar gymnastics, but yes, I have 30mins for that interview guide review
Your team will come at you with last-minute requests to check things, workshop things, be in meetings, or just to bounce ideas around. I learned to work at 80% capacity so I had time to deal with these curveballs. Being available for your team means they feel supported — that you care. (And to my UX and Product colleagues — I care about you all very much. Mimi and I will always make time to help you if you need it.)
I’d operated under the assumption that people would come to me if they needed coaching — but I found that sometimes it was better if I reached out and asked, as a sort of reminder that I was there to help.
I also learned to ask lots of questions to get a better understanding of what people wanted to do, and how. For example, I remember chatting with one of our UX Designers in the Mobile Apps team about research they intended to conduct with our mobile app users. After I’d asked some pertinent questions about what they were hoping to learn, we ended up reframing the research as a semi-observational, semi-usability-testing exercise.
Chewing over the product-y problems keeps your brain fresh too. Sometimes it’s a lot for me to take in. But I usually have a rough idea of what each of the product teams is doing — which helps me connect various initiatives for the teams and gather information easily.
Lesson 7: Stop talking. Stop it
Well perhaps you should…nah wait. What do you think?
In UXR coaching conversations with my team and with Product folks, I tried to ask questions, not only to understand the problem, but also to enable the team member in question to come up with the answers themselves.
This was incredibly important for my Associate UX Researcher to grow her confidence in preparing for and conducting user interviews, among other methods. Now, she’s been able to start coaching team members herself.
Another benefit of this is that you’re giving space and time for the individual in question to practice how they explain things, and really allowing them to own their space and their work. I like to think of this as giving someone a leg up so they can climb over a fence — you’re not visible, but you’re making them more visible (rather than say, clambering over them).
Funnily enough, this reflects the behaviours of a good UX researcher — ask the ‘participant’ what they think.
Understand before you can be understood, and try to let folks speak for themselves.
Lesson 8: Be flexible — don’t be afraid to bend the rules to get the job done
You *can* take that metric…just don’t report it like that
Not gonna lie, when Continuous Discovery came along I was hesitant. However, I’m glad that I was able to work the process into UX Research methodologies and ultimately get the two to play together nicely in the sandpit, so to speak. It ultimately benefited me because I was free to explore chewier chunks of research (many UX Research folk get a bit over usability testing for months on end…).
Anyone working in a corporate environment needs to be adaptable and flexible, while (in a UX Research context at least) maintaining rigour of process and practice. You can achieve this by ensuring processes are well documented, and being very clear on what is and isn’t UX Research.
For example, the Product Managers I work with at the moment know very clearly the difference between what I do when I conduct research with our clients, and the conversations they have with the same clients. The first is research. The second is anecdotal or useful context. And they’re cool with that.
A colleague of mine was curious to learn about a variety of UX metrics, and wanted to use them in some usability testing he was conducting.
I’d previously referred to a study in which I ‘baselined’ our user experience using these metrics — and typically you’d need a high-ish number of participants to ensure your metrics are solid.
My colleague didn’t have enough participants, but, for the purposes of his learning, and to gain an indicative understanding, we took the metrics anyway. We then reported the resulting numbers as a range (luckily there were no huge outliers), and stressed that this was an indication of usability we could keep an eye on over time.
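For what it’s worth, “report it as a range” can be as simple as pairing the average with the observed spread, rather than presenting a single number as the truth. Here’s a minimal sketch in Python — the scores, sample size, and metric are all made up for illustration, not SEEK data:

```python
import statistics

# Hypothetical task-completion times (seconds) from a small usability
# test -- too few participants for a solid point estimate.
scores = [42, 51, 38, 60, 45, 49]

mean = statistics.mean(scores)
low, high = min(scores), max(scores)

# Report an indicative range rather than a single "true" number.
print(f"Indicative completion time: {mean:.0f}s (range {low}-{high}s)")
```

Framing it this way keeps stakeholders honest about the uncertainty, while still giving you a baseline to keep an eye on over time.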
In short, pull an Elsa from Frozen — and let it go.
Lesson 9: Things will never be perfect, things will never be right
Look, the 34 participants I recruited don’t EXACTLY reflect our user base but it’s darn near pretty close
The second example above is a good reflection of this. My colleague’s use of metrics wasn’t necessarily by the book, but it was good enough for them to learn what to do, and gave an indication of usability. We were careful with the wording we used when reporting it so our stakeholders didn’t get the wrong idea.
The research I conducted with our clients recently (as the italics allude to) was another example. Our clients are notoriously hard to recruit, and I’d spent a good deal of time working out the perfect recruitment spec that reflected our user base according to a number of variables — like spend, location, role type, industry, software used…and so on. (I’m ashamed to say I had fun doing it).
BUT — with these sort of recruits, sometimes you get what you’re given. I think our resulting recruit was pretty good — I’d have liked a few more of this and that, but ultimately I had to accept who I could get.
Starting with perfection gave me something to work towards, but I quickly learned that to get the work done, tweaks needed to be made.
Similarly, getting the team to use a new synthesis tool, updating our documents, and encouraging re-use of research are all slow processes — however, better is still better! Striving for perfection every time will mean you get burnt out pretty quickly.
Lesson 10: Do a thing. Get feedback on the thing. Iterate the thing. Present the thing. Working group the thing. Keep banging on about the thing
Remember this? Yeah? Good. Let’s talk about it again
I think this lesson wraps up all the other lessons quite nicely. Maybe it’s a super-lesson.
Let’s revisit those two questions I posed at the start of the article…am I meeting my objectives?
1. In this new world, what purpose did the UX Research Practice at SEEK serve beyond operational and coaching support?
- I led the charge to shift the UX Research Practice to exploratory or broad research foci, through developing and communicating a framework
- I and my Associate UX Researcher actually *did* research in this space, and we’re working through different ways to implement it
- I developed a number of ‘tracks’ of research to better communicate what kinds of research the practice can do, and how that benefits the business
2. How would I ensure research quality in a world of decentralised research practices?
- We developed the SEEK UX Research Ethics Principles
- We implemented a new synthesis tool
- I updated our documented processes (this is ongoing really)
- I made myself available for UXR coaching
- I developed a training program, which we’re now iterating
I know I’ve used the word “I” here, but all of these things I did not do alone. While each one was I guess spearheaded by me, they all had constant feedback from my peers and stakeholders, and they were iterated on accordingly. The Framework? Folks from the UX team had input into that. The tracks? I ran that past a Head of Product and the Head of UX before circulating it more widely. The training program? That had input from both product managers and UXers.
I continue to reference these initiatives whenever the opportunity arises (to the point that I’m one of ‘those people’ that’s usually got something to say in meetings).
It is much, much easier to get stakeholder buy-in when you’ve taken your stakeholders on the journey with you, heard their thoughts, and incorporated them.
That doesn’t mean you need 50 people involved in each project — just a mix of folks who you know will provide thoughtful, nuanced feedback. Take your time to find those people, and cherish them.
Most importantly — don’t give up. Change takes time (a lot of time!), and it takes people power. UX Research Practices are not built in a day!
With thanks to Timo Hilhorst, Kayla Heffernan, Mimi Turner, and the SEEK Product team for putting up with me banging on about UX Research. You’re stuck with me now! Mwahaha!