10 Ways To Improve Your Agile Process

Product Manager Club
15 min read · May 19, 2015

10 Shortcomings in Your Agile Delivery Process

The way you deliver is as important as what you deliver. The product is the sum total of a cross-functional team of individuals and the process they follow — want to deliver more often? More cost effectively? More smoothly? It’s time to start decomposing your agile delivery process and asking some lateral questions about the way you deliver. Better still, have an outsider do it. I wish I walked into product teams that said they practiced agile and actually practiced agile. Sadly that’s rare, given how new agile is to many working organizations, and more so in organizations with a rich waterfall heritage, which saw project managers drown in documentation and process.

1. Testing Inefficiencies

WTF Minutes — A Practical Way to Measure Code Quality

It may seem strange to some that, as a product manager, I let myself get caught up in debates around testing, but the truth is the way most agile teams test is a bugbear of mine. Where do I begin? Testing is perhaps the most common giveaway that a team who thinks they’re ‘doing agile’ is in fact just doing iterative waterfall (or ‘wagile’).

Firstly, let’s think about a simple waterfall project. The developer does some unit testing on a class or method, eventually we integrate lots of these methods and units of code and test the component as part of integration testing, and finally we merge all of the new code into the existing codebase and regression test that our new code hasn’t broken anything or resurfaced old defects or bugs. Along comes the ‘business partner’ or client, they perform some user acceptance testing, you release and everything’s perfect (yeah right).

Let’s think of a typical agile team (i.e. one that’s really ‘wagile’). The developer does some unit testing on a class or method, eventually we integrate lots of these methods and units of code and we test this component as part of integration testing, all during the iteration. We find some bugs but we don’t have time to fix them, because we didn’t estimate our velocity well or anticipate defects. Plus the product owner only tried to accept things on the last day of the iteration, and it didn’t help that we didn’t have test accounts. We do some more of the above in the next iteration and after 4 ‘development’ iterations we merge our branch back to the production code or trunk. From here we do some regression testing for half an iteration and then we have the end user do some final UAT — they spot a bug, which is awkward as we’ve already stopped doing development sprints. I feel like I’ve lived this a hundred times over and made far too much noise about how silly it all is — what is agile about the above? Nothing.

What does agile look like?

  • Test Driven Development: Test driven development is a development methodology in which a developer performs short repetitions of a development cycle. The developer first writes an initially failing automated test case (even before coding) that defines the new function or improvement (think of a test case as a statement you can reference to verify something is working as expected). It’s important the code initially fails the test case so that you can rule out false positives. The developer then writes the minimum amount of code to pass that test and finally they refactor the code to ensure it’s clean and efficient. The test cases the developer is writing are what’s known as unit tests, the most granular type of testing that will take place on your product. It involves testing specific sections of the code, such as a class or a method, to check that the functionality works as expected.
  • Continuous Integration: In addition to unit testing, a developer or test engineer will aim to perform integration testing, which in its most basic form is an extension of unit testing, i.e. two units have been integrated into a component (a collection of units) and the interface between the two is now being tested. The test looks to expose defects that arise as a result of code being integrated. Often, in order to perform integration and regression testing, the developers need to merge or commit their individual branches back to the trunk or mainline. The idea of continuous integration is quite literally to prevent integration problems: it’s the practice of frequently merging the many little changes being made by many developers, to prevent code conflicts or breakages that result from one developer’s code inadvertently breaking another’s. Ideally a developer runs automated unit tests to confirm their code is defect free and then integrates many times throughout the day; this combination ensures early capture of defects and reduces them.
  • Automated Testing: Automated testing is quite simply test cases built to run without manual intervention. It’s typically associated with regression testing, which aims to uncover new defects, issues and bugs created in other parts of the application or system after a change has been made, i.e. you have a working website, you introduce a patch to fix a security gap and unexpectedly discover you’ve broken the form submit button as a result. You can automate unit testing and you can also automate checks on how the front end manifests itself, i.e. does the right text display etc., using tools like Selenium, which opens a browser and performs actions as if it were a user using your website — note, I’m reluctant to call this user acceptance testing.
  • User Acceptance Testing: As the product manager it’s your responsibility to ensure that you’re constantly reviewing completed user stories that are ready for acceptance or ‘show and tell’; the earlier and more frequently you do this, the sooner you can provide feedback if something doesn’t meet acceptance criteria, and if something doesn’t work as you expected and can be improved it gives you time to write a user story for the next iteration. The product owner has the final say on whether a user story meets acceptance criteria, however you also need to be cognisant that it’s not all about things literally meeting pre-determined criteria — it’s about the user stories resulting in things that make sense from a user perspective and provide a solid and logical user experience. UAT doesn’t need to be fully in the hands of the product manager, but ultimately the ‘sign off’ or final call should fall to the product manager. Sometimes support in acceptance testing is essential, if you want to verify the user experience across a few different break points and browsers for instance, or maybe you can rope in some people from around your office to perform some guerilla testing before releasing to your users.
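
The red-green-refactor cycle described in the first bullet can be sketched in a few lines. Here’s a minimal illustration using Python’s built-in unittest module; `slugify` is a hypothetical function standing in for the ‘new function’ being driven out by the tests (in a real TDD session the test class would be written, and seen to fail, before the function body exists):

```python
import unittest

def slugify(title):
    # Green step: the minimum code that makes the tests below pass.
    # Refactor step: with the tests green, this body can be safely
    # cleaned up or optimised, rerunning the suite after every change.
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Red step: in true TDD these cases are written first and watched
    # failing, ruling out false positives before any code exists.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("My First Post"), "my-first-post")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello World "), "hello-world")

# Run the suite programmatically so the cycle can be repeated on demand.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The same tests double as the automated regression suite the later bullets describe: every commit re-runs them, so a change that breaks old behaviour fails immediately.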

2. Dropping everything for 3rd Party Requests & ‘Production Issues’

Everything’s a defect or issue, some are just more important.

So here’s my deal. Everything that exists in production that is making something an inconvenience for the user is a ‘production issue’, the only difference is that some issues take a higher priority in the backlog than others. One of the craziest things I see when product managers launch a new product is that they spend the next 5 iterations working on all of the minor defects, bugs and user feedback and trying to enhance a product that is great for 90% of its users — they respond to the vocal minority. I’m sorry, but there will always be some users who think something can be better.

A more fruitful way of spending your time is to work on the features and enhancements that are the next highest priority. These could be things that are completely different to what you’ve just launched or they could be reacting to the validated learning you just elicited from the launch of your MVP. What I’m trying to say is that everything in your backlog has a relative priority. You can spend your whole life reacting to defects and the vocal minority and get so caught up in doing so that your backlog becomes saturated and you end up not working on any of your high priority items or the features and capabilities that will keep you ahead of your competition.

The reason you need to ensure your team isn’t a reactive group is that you have goals and outcomes you are driving, and reacting to everything that’s labelled a ‘production issue’, or treating every 3rd party waterfall project’s ‘delivery date’ as final and rigid, isn’t how agile delivery works and isn’t in the best interest of your product or users. Only you as the product owner have the most holistic view of your end users’ needs and an overall view of the different priorities, 3rd parties etc. feeding into your product backlog.

3. Illogical Team Composition

A functional looking scrum team

I’ve worked in teams with 2 scrum masters and multiple product owners (with conflicting visions and objectives) and everything in between. It’s not easy to move people around to create logical teams, but sometimes you have to do things that suck for some people to make the product your organization is delivering a winner. Some of the worst products are those that reflect the silos of the organization, i.e. one team owns the acquisition journey and another owns the e-commerce journey. Sometimes it has to be done, because some products are so large in size and complexity, but if you need to set up shop in this way then the respective teams need a common forum and transparency into each other’s backlogs (this is where scalable agile practices help). Things like pattern libraries, common/global assets or a shared design team help too. I digress.

To be successful you need a logically set up ‘hit team’. A scrum master, quality engineers, developers, systems analyst and a product owner. You need a creative designer and a UX designer too, yes there’s a difference. I mean you don’t need all of these things but you sure do need a balance of complementary skill sets and roles.

4. Large Team Size

My worst. A team that is set up so that there’s 20 developers to 1 designer, 1 product owner and 1 quality/test engineer. It’s a lose/lose. You end up with one of two scenarios on your hands.

You either have the bottlenecks in the team put under massive strain to keep the team working at its velocity until they eventually burn out. They will bust a gut to deliver 100pts of user stories each iteration to keep the developers ticking over, but in doing so you will lose out on quality and they will end up sucking at the 20 other things that make up their day jobs.

Alternatively your bottlenecks will say I’m not killing myself for a jumbo sized team and you will end up with a lot of developers twiddling their thumbs and being under utilized.

Ideally in a scrum team each player is interchangeable so if you can’t shrink your team size then you should definitely be pushing for those developers to help play another role in the team i.e. assisting the product owner in user story writing, assisting the test engineer in the testing of the product or even working on research spikes to see if something further down the pipeline is technically feasible for instance.

5. Volume over Value

Smart product owners make sure that they have their team working on the highest value work items as priority and they ensure the team always knows what user stories are priority in both the product and sprint backlogs. It’s important that the team focuses on value over velocity and we can borrow some learnings from highway planners.

Transportation planners and mathematicians have done lots of research on keeping highways flowing as efficiently as possible, and the findings are interesting. The short summary is that there’s an optimum vehicle velocity, which is about 70mph — faster isn’t better. At higher velocities there’s a lower tolerance for variability; lane changes and reactions to brake lights increase, causing cascading effects which slow down the throughput.

This is so true of an agile team. Often teams strive to continually grow their velocity over time as they find their groove and iron out inefficiencies, however there’s a limit to how much a team can deliver. I always stress with my team that volume isn’t as important as value. We need to leave some room for soaking up what I call ‘scrum entropy’: the lack of order or predictability in a sprint and the work the team is delivering. It’s inevitable that there will be surprises during development that require some thought time, or a developer will stumble across some code that poses a security risk and needs refactoring. This is OK, as long as you don’t push your velocity past the point where your throughput (in terms of value) declines because you can no longer absorb the variations and unknowns of the sprint. Don’t overestimate and under deliver. Scrum entropy is a term I’d love to copyright!
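
As a toy illustration of that trade-off (the capacity figure and the rework rate here are invented, not empirical): treat every story point committed beyond the team’s sustainable capacity as generating rework that eats into what actually gets accepted.

```python
def effective_throughput(committed, sustainable, rework_rate=0.5):
    """Toy model of accepted story points per sprint.

    Points committed beyond sustainable capacity are assumed to
    generate rework (defects, context switching, corner cutting)
    that cancels out part of what was delivered.
    """
    overcommit = max(0, committed - sustainable)
    delivered = min(committed, sustainable)
    return max(0.0, delivered - rework_rate * overcommit)

# Committing past the sustainable 40 points makes accepted output *fall*.
for committed in (30, 40, 50, 60):
    print(committed, effective_throughput(committed, sustainable=40))
# 30 -> 30, 40 -> 40, 50 -> 35.0, 60 -> 30.0
```

Like the highway, throughput peaks at the sustainable velocity and then declines; the headroom below that peak is what absorbs the scrum entropy.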

Finally, I can’t stress enough that your team’s measure of success should be value driven vs. volume driven. Move the needle on outcomes related to customer satisfaction and engagement vs. the number of story points delivered (it’s also a great way to prevent teams inflating story sizings!). You will be affording your team the chance to take their time ensuring what they deliver is both high value and high quality; they can be more meticulous about implementing their solution, they know they have some time to fix defects if any are found, and they can tweak things that don’t work first time. Ensure the goals of your engineers/scrum masters are value based and make it a key part of your product vision.

6. Lack of process uniformity

Uniformity allows for efficiency. Some examples of inefficiency:

  • Scrum Ceremonies: Is your scrum master planning backlog grooming, retrospectives and sprint planning etc on a regular cadence and in advance? Is the format of these ceremonies well defined and consistent so people go into them prepared and knowing what to expect?
  • Stakeholder Engagement: Is the way people and teams engage with you and vice versa the way you engage with them straightforward? Are stakeholders for different products, approvals processes and reviews easily identifiable?
  • Tools: Are some people using Excel and others Jira to reference your sprint plan? Are half of your developers running BlueJ and the other half running Eclipse IDE? Is access to development environments well documented and easy to set up for new hires?

7. Poor Analytics & Reporting

Data Mashup

If there’s one thing I can safely say, it’s that the next big thing you should be concerning yourself with is data science; in fact it already is the thing. Big data, statistical analysis, data science…whatever you call it, you need to be living it. Your product needs to be driven by it.

Data underpins everything I do as a product owner and it’s why whenever I take over a new product the first thing I do is take a crash course in the data sources available to me that allow me to make informed decisions.

Given its importance, you can imagine the implications of poor, inaccurate data. I’ve had every analytics and data problem you can think of…double counting conversion start events, tags not firing, user survey forms not submitting. This is problematic, and you need to prioritize it in your backlog and devote the time required to get the most from your data.

The second issue bad data and analytics cause is when you have multiple sources of data providing different results and insights. Imagine you’re looking at the bounce rate in one report suite and your engineers are looking at the same in another suite. Problem: you both have different bounce rates. Issue: you’re applying different segmentation to your traffic. Solution: establish a single point of insight for all of your different analytics needs and, where possible, consolidate them into a single report…automate it and distribute it. Ensure your leaders and team are always referencing and being fed the same data as you.
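
The segmentation trap is easy to demonstrate. Here’s a sketch with made-up session records, where one team filters out internal traffic and the other doesn’t, so the ‘same’ bounce rate comes out differently:

```python
# Hypothetical session records: pages viewed, and whether the visit
# came from inside the company network.
sessions = [
    {"page_views": 1, "internal": False},
    {"page_views": 1, "internal": True},
    {"page_views": 3, "internal": False},
    {"page_views": 1, "internal": False},
    {"page_views": 5, "internal": True},
]

def bounce_rate(sessions):
    """Share of sessions that viewed exactly one page."""
    bounced = sum(1 for s in sessions if s["page_views"] == 1)
    return bounced / len(sessions)

all_traffic = bounce_rate(sessions)                                 # 0.6
external = bounce_rate([s for s in sessions if not s["internal"]])  # ~0.667
```

Same underlying data, two ‘correct’ numbers. The fix is organisational rather than technical: agree one segmentation, bake it into the single automated report, and have everyone read that.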

Finally, react to metrics that matter. As I’ve said before, don’t react to the vocal minority: things a handful of your users are angry about, or obscure things your qualitative free-form surveys are telling you. React to metrics that are statistically significant, and to insight derived from both qualitative and quantitative data.

Don’t get me wrong. This isn’t gospel; much of the product role has an element of acting on instinct and prior knowledge. It’s one of the qualities that will set you apart — your ability and willingness to make informed and sometimes quick decisions. That said, good data from a single source will make your life easier and give you the confidence you need to bring others on the journey with you.

8. Black Boxes

If your product team is referring to you as a ‘client’ or as ‘the business’ then you have a problem, you don’t have a ‘one team, one dream’ product mentality, which will damage the quality of the product you deliver. The problem I find with this kind of team dynamic and way of thinking is that you begin to find the processes of your developers and testers are somewhat of a black box for you — they obscure your view into their world through complexity or secretiveness. A few examples of common black boxes in agile:

Backlog: You will likely use a tool such as Rally or Jira to manage your backlog and agile activities; it’s a collective way for the team to see what’s going on and what’s coming up. However, if members of your team aren’t using the tool properly then you begin to affect the whole team. For instance you will encounter people who don’t update task hours on user stories…which means your burndown chart won’t, well, burn down. This will raise alarm bells for you, but it turns out it’s just someone not pulling their weight with updating tasks. These small things create unnecessary problems: to you, the burndown suggests your team overestimated their velocity or that they’ve encountered problems. It’s frustrating, and it needs to be managed by strong scrum masters.
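
A flat burndown can even be flagged mechanically. A small sketch (the day-by-day hours are invented) that picks out the days on which total remaining task-hours failed to drop, which is exactly the pattern an un-updated board produces:

```python
def flat_days(remaining_by_day):
    """remaining_by_day[d] = total task-hours left at end of day d.

    Returns the days where the burndown line failed to drop. A run
    of flat days mid-sprint often means task hours aren't being
    updated, not that the team has genuinely stalled.
    """
    return [
        day for day in range(1, len(remaining_by_day))
        if remaining_by_day[day] >= remaining_by_day[day - 1]
    ]

# Days 0..4 of a sprint; hours stop dropping on days 2 and 3.
print(flat_days([100, 80, 80, 80, 40]))  # -> [2, 3]
```

A scrum master who checks this daily can ask “stuck, or just not updated?” on day 2 instead of discovering the gap at sprint review.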

Defects: Some teams manage user stories in a tool like Rally or Jira and defects in another tool or software application, and it just doesn’t make sense. Why log in to two different tools to see what you can often see under one roof? Worse, some teams obscure defects from you because they don’t want to be penalised by their bosses for poor code quality etc. It’s crazy. If you have defects under one roof you have transparency, and if you have transparency the team can swarm around the individual who needs help or the user story presenting the most problems and share expertise and learnings. The worst thing you can do is hide problems, because otherwise how can you learn from them? What happens if the first time others realise there are defects is when the product is in the hands of the end user?

Logic and Technology: Engineers will sometimes shelter you from the technical workings of your product, or talk about it in a really complex way to discourage you from asking questions, in case you identify there’s a better way of doing things. I recently got speaking to my engineers and asked them, “Why do we keep completing test automation user stories but keep forecasting the same amount of time for testing?” The question weirdly got my scrum master’s back up, but I was persistent. It turns out we were indeed automating testing, but the automated testing software (which could easily be shared via a JAR file) was only installed on the test manager’s laptop…because err, he’s the test manager and he should do the testing. No. If we’re investing money in automation but reaping no reward, that’s stupid. Sharing a simple JAR file means better code quality, which means fewer defects, which means more free time for the test manager. Never be afraid to challenge practices like this by asking simple lateral questions.

9. Inadequate Training

If you’re a product manager in an agile team and you’re working with individuals from an old school waterfall background, you will probably spend half your time educating them, annoying them and inadvertently patronising your colleagues. You will spend the other half being frustrated, knowing how things should work but being at the mercy of other team members’ understanding of agile. Your organisation needs to ensure that it’s providing you and your team constant training. Even if it’s just as simple as having ‘agile champions’ sit in on key ceremonies, offer advice and set you and your team challenges for improving the way you work, it’s imperative that you are constantly learning.

10. Play it safe kinda people

I heard a term the other day that sums these kinds of people up beautifully: they’re your ‘have problems for your solutions’ kind of people. I’ve always said go big or go home. When you’re a product owner, if you’re not crazy about changing the world and the way people interact with the things they use and you own, then you need a career change. I love being a product manager — I’m forever pushing the boundaries and trying to better the way my team and I work, however in doing so you’ll create friction and step on toes. Always be graceful but never stop pushing boundaries.

Enjoy this? Join the community at www.productmanagerclub.com or follow us on Twitter at https://twitter.com/


Product Manager Club

The Product Manager Club is a group of super passionate digital product managers that share best practices and their experiences (www.productmanagerclub.com).