Every company has a process for how it does things, how it runs things. Whatever the core business model might be, there’s an efficient, cost-effective way to run it that gives your customers the best service possible. Eventually your process needs to evolve (for us, that means integrating design into our core processes), but it’s so easy to fall back on tried-and-true methods instead of pressing ahead towards something new. This post serves as a check-in point for myself, to explain my thinking and progress, and hopefully gives others ideas for how they might achieve the same. Or, at the very least, helps them feel a little less alone on their journey.
What I struggled with in the beginning at tsf.tech was trying to do everything at once. Pick up the core design skills. Set a strategy for which design activities are appropriate, for us and for each client. Integrate into the existing way of delivering software. Facilitate client thinking onto a (hopefully) more successful path than had been taken before.
The reason I wanted to shift into a design role was that the software I had been involved in delivering was, by and large, dictated to us. That’s okay, and absolutely necessary when you’re part of a larger, international team. We were still empowered to problem-solve when developing solutions, and could conceive a solution in whatever way we believed fit the requirements. I worked on the assumption that they had done their research, but there were some decisions that never sat right with me.
A great example is a piece of software one of my colleagues had to use in a previous life. At her previous workplace she serviced customers in a call centre of sorts, using a tool that was borderline legacy. When the system was rebuilt and modernised, none of the call staff were involved until the new system was unveiled and put in place. From her recollection, everyone despised the new tool even though it was, on paper, more efficient than the old one. It didn’t work according to the way they actually carried out their work, and I suspect it ultimately led to a few people leaving their jobs.
Setting the Scene Correctly
There are a couple of techniques I remember from my Master’s in Human-Computer Interaction that help you identify opportunities for technology to make a difference. One of the most difficult yet fascinating modules I took was Design Ethnography. For the uninitiated, ethnography is the process of observing people carrying out work in their typical place of work and capturing, on paper, a mental model of how that work is done. It sounds harder than it is, trust me.
What I loved about this was that it creates a sort of abstracted workflow of how things are done. Where some people make a “sensible guess” at workflows and jump straight into developing software, the aim of ethnography is to observe how people truly work and capture that first, before making that sensible guess. This helps avoid the situation my colleague was in, and ultimately de-risks the design aspect of software development.
Another brilliant resource I came across that backs up the value in this way of doing and thinking is a podcast episode of the Product Breakfast Club with Mona Patel. I always rave about this podcast because it confirmed a lot of the learning I did the hard way to become the designer I am today, and accelerated many other aspects. Mona describes how she always kicks off a project with a small piece of work similar to a Contextual Inquiry. She takes the time to recruit, interview and observe people in their natural environments, talking about their real issues and frustrations. It helps ensure we’re tackling a real problem and not an imaginary one.
One of the worst assumptions I’ve ever made is having blind faith that the clients and people we work with have correctly identified a problem for their customers that can be solved with tech. They may know very well how to construct a successful business as-is, but not necessarily their customers’ pain points or how a tech solution would help. Every time I conduct research with a client they’re blown away by how much they didn’t actually know. You don’t know what you don’t know, and part of my job is to uncover enough for us to make informed decisions for our projects.
Bringing it Into the Build
One way of thinking about design is Dual-Track Design: there’s the Discovery track and the Delivery track. There are many pieces on the internet you can read about it; below is just one example I thought articulated it well, to give you some context and help you understand my thought process for bringing design value into our build process.
Dual Track Agile: Level Up Enterprise Product Design with UX Part 3
Design is messy. Clean it up with an agile system that allows for exploration while increasing productivity.
In the past I had focused on the delivery aspect of design; by that I mean creating wireframes, constructing style guides, and making design decisions about existing features or features we were creating from scratch, with some design thinking mixed in. What I struggled with was keeping up with the pace of delivery, essentially becoming a feeding tube for development. I mitigated this by empowering everyone to take more ownership of design decisions, but I knew there was more I could do to amplify the benefits of my work.
This is where I turned my attention to the discovery aspect of design. It isn’t possible to scale myself making these micro-level influences, so I hypothesised that bringing evidence to the table at a macro level, to influence decision making across everybody, would have a greater impact on our projects and business success. Cue me furiously using all of the Googling skills acquired from my days as a developer to investigate the tools and techniques of a UX researcher.
What I learned and knew I needed was a way to conduct research in a way that was:
- Quick and lean; software startups need to move very quickly
- Easy to archive and search; we can’t keep making the same bad decisions
- Manageable; teams in startup land (us included) have multiple priorities to juggle
- Scalable; my team and clients need to somewhat self-serve where possible, so I don’t become a blocker
- Shareable; we eventually need to hand over the intelligence gathered, as our model is to transition into a more hands-off advisory role. As well as debrief findings appropriately, of course.
Most critically, I knew that JIRA was not the place to store this intelligence; searching through completed tickets was a battle I’d fought for years. Trello was no different, and couldn’t store information in a flexible enough format. Eventually I found someone who shared their UX Research System, stored within a tool called AirTable. Its existing format was too granular for what we needed, but I saw potential to model a workflow methodology that suited us and our clients better.
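To make the “easy to archive and search” requirement concrete, here’s a minimal sketch of what a searchable research repository could look like if modelled in code. It’s purely illustrative: the record fields, the example notes and the `search` helper are hypothetical names of my own, not the actual schema of the AirTable system mentioned above.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model of one research-repository record.
# Field names are illustrative, not a real AirTable schema.
@dataclass
class ResearchNugget:
    observation: str                # what we saw or heard
    source: str                     # e.g. "contextual inquiry", "design sprint"
    project: str                    # which client/project it came from
    tags: list = field(default_factory=list)

def search(nuggets, keyword):
    """Naive full-text search across observations and tags."""
    keyword = keyword.lower()
    return [n for n in nuggets
            if keyword in n.observation.lower()
            or any(keyword in t.lower() for t in n.tags)]

# Two made-up records to show the idea of tagging for later retrieval.
notes = [
    ResearchNugget("Call staff re-key customer details twice",
                   "contextual inquiry", "client-a", ["onboarding", "duplication"]),
    ResearchNugget("Users abandon sign-up at the address step",
                   "design sprint", "client-b", ["onboarding"]),
]

print([n.project for n in search(notes, "onboarding")])
```

The point isn’t the code itself, but the shape: small, tagged observations with a source and a project, so past findings stay findable long after the ticket that prompted them is closed.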
Augmenting our Projects
Where I had previously thought of design as creating the physical blueprints of how a feature should manifest itself, I shifted towards a model of supplying intelligence that lets the team guide itself to how that should be done. Instead of a concrete, directional approach, I hoped a supportive, guiding approach would be more effective. Hypothetically, letting the team come to its own conclusions and see the proof for itself helps move us towards a more self-sustaining, evidence-driven approach.
My favourite tool for bringing intelligence into projects and sharing it with the team is a Design Sprint. Blending design thinking with a concise, tight test of the concept with customers always reveals something we didn’t know before. In its entirety, though, it’s a large piece of work and isn’t suitable for smaller problems or assumptions, so the next challenge is curating a suite of research approaches to feed us the intelligence we need to make smart decisions.
In the diagrams below I’ve tried to model the process as it was, how it has developed and is thought of today, and what it might evolve into in future. It’s kept at a high level, aimed at highlighting how we could subtly but effectively sprinkle some design and fact-finding magic into the tsf.tech formula. I’m still in the process of plugging these processes into existing projects and reflecting on how effective they are, so only time will tell if we’ve hit a formula that allows us to work efficiently.
A note about Retrospective Design
I think everyone is guilty of doing this at some point, especially in the early days. In teams or organisations where design is not yet mature, design is thought of as a retrospective layer applied only after something has been decided and implemented in its basic form: design is treated as how it looks, not how it works. The slightly more mature model depicted below shows a naive but reasonable first-stab approach that treats design as how it works, and doesn’t necessarily involve designers at all.
This last-minute cycle is inevitable because you can’t uncover and foresee every single problem you might encounter, and it isn’t until you actually have people use something that you get to truly iterate. We do this all the time for smaller, less risky parts of products that are reasonably easy to implement and change. Problems arise when this is the only approach; the larger and more complex the work you do before sharing and showing it to anyone, the more intensive the rework is. To the point where you have to do more than pivot: you end up doing a complete 180º.
Crafting the Backlog
I had what I felt could solve things for me, at least at a conceptual level for now, but this is where my brain struggles to process and plug the next biggest gap. The process of how we go from inception to a backlog in JIRA is somewhat opaque to me. Part of this is me needing to step in, observe and get involved, but explaining how I think it works might help.
We agree on the concept with the client, understand who needs to use the system and what they need to do with it, and then perform an inception. The inception involves sitting with the client to map out a logical system flow of how users are on-boarded to the system and the closed loop they would take. From that logical user flow, we list the features and create a set of user stories.
Now that I’ve written that out, I’m a little more sure that what we could really do with is a conceptual user flow: how people accomplish those tasks currently (N.B. Design Ethnography). It’s not very often we’ll want to develop software for a completely new task or goal; people will have an existing method of doing existing tasks. Understanding the context around that, and sharing the knowledge, should help create a better logical user flow.
To ensure I’m on the right path, what I could do to get over this hurdle is to perform the “Bootstrap with contextual knowledge capture” step I proposed for our projects, but on ourselves: mapping out, with a little more fidelity, how we kick off projects and arrive at that initial backlog. Afterwards I should have enough intelligence to craft a more detailed step-by-step flowchart, with links to techniques or examples, explaining how to turn a concept and vision into a deliverable backlog with adequate detail.
Leads and Opportunities
Here are some tools and opportunities for moving forward that I’ve experimented with, to try to make more sense of things and bring options to the table.
We have a resource plan that depicts a Gantt chart of the features we aim to develop and deliver in a project, but once it’s signed off it’s by and large a piece of documentation history. I still think there’s a place not just to capture the original plan, but to update it as we progress and learn, and to attach initiatives and goals to those pieces. Personally, I think we need to close the loop by stating exactly what success is; it’s seldom good enough to say “the client was happy so we succeeded”. We need metrics to aim for and a series of milestones to act as checkpoints along the way to the overarching purpose and goal.
Part of closing this loop, and not letting ourselves set KPIs aimlessly and forget them, is defining what I’m calling a “test flow”. Essentially it’s a high-level flow chart showing us who’s involved, what’s handled by the system, what’s handled manually, and what outcome we’re trying to observe. The test flow chart creates a digestible narrative to aim for, and I’d like to create one for each milestone or checkpoint. Hopefully it helps bring this higher-level thinking together on the roadmap, so we can see everything holistically and get an indication not just of progress on delivery, but of progress on a working business model.
Perhaps this is why I’m insistent on trying to merge user journeys with the method of creating tickets or stories in the backlog. This technique was developed as a way to look at the process of achieving a goal or task holistically, so that gaps could be identified, and also as a way to filter and prioritise how deeply to develop each step of that journey across release stages or Sprints. I think it’ll take a few practice rounds to get right, but in theory it should help us park further thinking for later and remain focused on the now.
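As a rough illustration of that slicing idea (a sketch of the concept, not a prescription of any particular tool), a story map can be thought of as journey steps across the top with candidate stories underneath, each assigned to a release; a release backlog is then a horizontal slice across every step. The step names, stories and release numbers below are entirely made up:

```python
# Hypothetical story map: each key is a journey step, each value is a
# list of (story, release) pairs. All names and numbers are illustrative.
story_map = {
    "Sign up":   [("Register with email", 1), ("Social login", 2)],
    "Find item": [("Keyword search", 1), ("Saved filters", 3)],
    "Check out": [("Card payment", 1), ("Gift codes", 2)],
}

def release_backlog(story_map, release):
    """Slice the map horizontally: every story at or below a release number."""
    return [story for stories in story_map.values()
            for story, r in stories if r <= release]

print(release_backlog(story_map, 1))
```

The useful property is that release 1 still touches every step of the journey, just at its shallowest depth, while the deeper stories stay visible in the map instead of being lost in a flat backlog.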
This is an online tool that brings together these two elements of roadmapping and story mapping, giving you a single place to go from brain-dumping ideas all the way to a granular delivery plan and initial documentation. It’s taken a little time to get used to the way of thinking it’s built around, but assessing it purely from a design point of view, I think they did a fantastic job of being concise and focused, delivering nothing more and nothing less than what’s needed to solve a problem.
It’s been quite a long road to get to where I am now as a designer, and to integrate that into the DNA of tsf.tech. There’s still plenty more road to travel, and we’re at crunch time to evolve our process with the team we have. Delivering the nitty-gritty design artefacts and decisions is still essential, but I’m of the opinion that I can add multitudes more value by performing discovery activities: acquiring the intelligence to make better decisions for the good of the startup, the product and the business. There are a few tools and methods I’m exploring to help string this all together.
- Capture any outstanding gaps in communicating the thinking to date
- Share with the wider team for critique and feedback
- Focus on breaking down selected features into comprehensive tickets of design and development
- Record and re-use a research base (such as AirTable) to guide us
- Capture and spin-up the bootstrap method of contextual inquiry to set future projects up on the right foot
- Once that’s going, assemble a practical roadmap for product development
- Explore options to extend the product roadmap to become a business roadmap