The Customer Development Life Cycle

Bruce Bookman
Adventures in iOS mobile app development
Jul 30, 2018 · 26 min read

Increasing profits and reducing costs at each stage of the software development life cycle through focus on the customer

Your most unhappy customers are your greatest source of learning. ~ Bill Gates

"A customer is the most important visitor on our premises. He is not dependent on us. We are dependent on him. He is not an interruption in our work. He is the purpose of it. He is not an outsider in our business. He is part of it. We are not doing him a favor by serving him. He is doing us a favor by giving us an opportunity to do so." – Mahatma Gandhi

The dollars that go into your pocket from your job came from a customer. Businesses live and die on monetizing the value they produce. In the business of creating, selling, and maintaining computer software or services, also known as the software development life cycle (SDLC), customer focus is a key strategic advantage.

Too often, however, there is a Great Wall, a Grand Canyon of a gap, between those who define, code, test, deliver, sell, or support a software product or service and the people who purchase and use it.

With over 20 years in the computer software and hardware industry, and having worked for teams and companies focused on customer value, I have seen ways to infuse end user focus into the entire product life cycle without complete disruption of standard industry methodologies. These practices can fit into any development approach be it traditional waterfall or agile or something else.

Product Management Gaps

Role Confusion

The product manager is supposed to be the customer representative in defining what is to be produced. They take disparate inputs from the market, from customers, from competitors, and from other sources, and roll those into a description of what is to be created.

Often, product managers become glorified sales engineers. Unfortunately, since PMs are in constant contact with customers, they get sucked into the sales cycle. A PM also tends to have a solid understanding of the product and its application and to be a good communicator. All of those skills are valuable in the sales cycle. In my experience, many PMs are simply sales engineers with a different title, and that means they are not focused on creating the product blueprints.

Worse, in order to win sales, PMs at times promise to deliver new functionality, and even commit to a time frame, without consulting those who will deliver on those promises.

I have also seen the opposite problem: the PM who rarely travels and isn't talking to customers or potential customers. I'm at a loss to explain how they manage a product without doing so.

Another disease I have witnessed is treating Product Marketing as equal to Product Management. These are two very different things. Product Marketing is the act of marketing: creating collateral and other activities to gain the attention of potential customers.

Effective product managers stand at the intersection of customer engagement, market analysis, and technology trends, and lay all of that out in well-defined product requirements.

Expertise

As good as a product manager may be at summarizing the future product course, they are not usually trained or oriented to thinking about the conditions under which the product may fail.

As a Quality Assurance Engineer or Technical Support Engineer, I typically had a large number of “what is the product supposed to do if X bad thing happens” type questions about the PRD.

How a great idea can go bad in the market of real life

Gaps and Cost

There is clearly a cost to incomplete product requirements. Bad requirements mean spending money building an incomplete thing, the wrong thing, or a thing that does not meet market demands.

The purpose of a business is to create a customer who creates customers. ~ Shiv Singh

Plugging the Hole

First and foremost, the job of a product manager needs to be creating the blueprint, and doing so with great detail and attention. If your product managers are doing anything that looks like sales, stop that.

Bring the experts to the table. If a product manager is listening to customers and potential customers, that is the meat of an incomplete sandwich.

To complete the sandwich by adding the bread and condiments, technical support engineers and quality assurance test engineers must be involved in the product definition process.

The support engineers talk to customers every day. They have a front-row seat to the pains customers go through getting the product to meet expectations. A good support manager understands intuitively that the most cost-effective way to deliver technical support is to not have to deliver it at all. And in order to avoid delivering support, the product has to be defined up front with a decent shot at being bulletproof.

Support engineers also know how customers use the product and what confuses the customer. I have seen first hand how minor tweaks to a software product dropped call rates by a massive amount — saving the company money and improving customer satisfaction.

By this same token, the quality assurance test team knows where the bodies are buried. They are trained to think in extremes. They are trained to think about usability. The whole reason for the existence of a test organization is to protect the customer from mistakes and miscalculations.

Put into Practice

No matter your SDLC model, there is opportunity to plug in these internal representatives. In practice, what I have seen work very well is to have customer-focused experts from support and QA at the table during the product requirements definition phase.

This means both support and QA have sign off authority for the PRD. In other words, the defined product can’t move forward without the blessing of these two customer focused groups.

This is implemented with a simple set of review cycles. PM drafts requirements and those are reviewed by QA and support representatives. These representatives provide feedback and ask for clarification and the cycle continues until the parties are mostly satisfied (nothing is ever perfect).

Note that I have seen both formal gating and informal sign-off from support and QA work for an organization. If a large number of customer problems, or a few very severe ones, are leaking out, a hard gating approach may be best.

Giving customer facing technical people input at this stage results in a product that is more satisfying to customers and causes fewer problems — all leading to cost savings and increased sales.

“Setting customer expectations at a level that is aligned with consistently deliverable levels of customer service requires that your whole staff, from product development to marketing, works in harmony with your brand image.” – Richard Branson

Management Benefits

What does not work is defining the product in a vacuum. QA is going to have to test the product and support is going to have to deliver support for it. So these departments are stakeholders.

In a worst-case scenario, QA learns very late in the game about what is already built or soon will be. Part of running QA means planning for costs such as computer hardware as well as personnel and training. If QA is not privy to the product definition process, testing efforts will be slow to ramp up, most likely leading to either delays in delivery of the product or cuts in quality.

Customer technical support suffers from surprise here as well. A new or improved product typically requires new skill sets, changes in workflow, and other adjustments. Costs are incurred and delays happen when support is not in the know about the attributes of upcoming releases.

Increase sales and decrease costs by tasking QA and support with review and sign-off of the PRD. Sales engineers may also be included, both for their high level of customer expertise and to help prepare their organizations for future product offerings. They also have a solid sense of which new features will sell and which will fall flat.

Software Engineering Specification Gaps

It may be technically deep, but it still impacts the end user

Once the product management team defines what to build, the software folks flesh out the deep technical details of how to make the product. This can include screen mock-ups, architecture (how the pieces will all work together), technically detailed database changes, base technology choices, and much more.

As in the product definition phase, if the specification phase is done in a customer vacuum, costs rise and profits suffer.

Here again, giving the stakeholders a seat at the table will help drive product quality.

Put into Practice

Technical support and QA staff sign off on the software engineering specifications. This is conducted in a draft and review cycle just as in the PRD phase described previously.

One concern that may be raised is that the QA or support teams do not have the technical skills to fully grasp the software specifications. This is really an opportunity disguised as a problem.

Communication between customer focused groups and the software teams can only help deepen knowledge for all. Coders get a better handle on real world application of the work they do, and the customer teams will get technical details that they can leverage in their everyday work. This practice builds a common language and set of expectations and trust that can only enhance the overall product development process as well as the delivery of customer support.

Coding and Testing Gaps

I once worked for a software company whose coders chose a specific software licensing scheme. The licensing system was sold by a separate company and could be easily incorporated into other software products.

From a technical perspective, choosing this licensing component made perfect sense, yet it became a source of immense cost to the company because it led to mass customer confusion, delays in customer implementation of the product, and slower proof-of-concept roll outs.

In a moderately sized company, this kind of problem — one that plagues customers and customer support — could exist completely unnoticed by the software or product management staff. None of these people ever have to buy the software and license it, so how would they ever discover that doing so was so painful?

Here customer support needs to be able to speak on behalf of the customer. The support organization needs to communicate problems encountered in the field back into the product development and maintenance effort.

This goes beyond the traditional filing of bugs. It also means understanding the severity, extent and business impact of the problems.

Technical Support knows where the bodies are buried

Put into Practice

To increase QA and software engineering connection to customer reality, there are a few options.

Walk in My Shoes

One option is the ride-along. QA and coders should sit with customer support team members and listen in on customer interactions. Since these staff are typically not trained for customer service, it is best that they listen and not try to help out. The goal is for them to notice design problems or testing gaps rather than to solve the customer's immediate problem.

I suggest this should take place once per quarter for at least one full day. Any less than that would not be enough exposure to variety and any more than that is costly and distracts from the primary missions of the groups.

I have firsthand experience with this strategy and have seen it work very well.

From the Horse’s Mouth

Another option is to bring your customers to the coders. Hold get-togethers with customer representatives, designed specifically to help customers explain how they experience the product. Well-choreographed and moderated Q&A can be conducted so that the software engineers and others (QA, support, PM) get a good sense of the day-to-day of the technology they create.

Good Q&A starters:

  • Why did your company purchase our software?
  • Describe what you do in a typical day with our software
  • How many users are there, and do they have different roles?
  • What challenges do you face with our software?
  • What advice would you give us for future product?

I ran a program for a short time dedicated to this exact kind of effort and the software engineers were thrilled to get this kind of face to face account. And the customers felt very “special” and gained increased product loyalty.

Customer Support and QA Collaboration — Testing Sign-off

The testing department creates written test plans and step-by-step guides describing what they will do. As with the other sign-off suggestions above, an option is to give customer support the opportunity to review the tests and approve the test plan.

A secondary benefit of all of the sign-off steps I have suggested is that collaboration between the separate teams increases. This builds trust, respect, and a deeper appreciation of each function. It also increases everyone's skin in the game: if I'm tasked to review and sign off, I'm putting my reputation on the line.

Customer support staff will gain deeper knowledge of the technology and QA will get a better understanding of real world usage.

Bugs are a Gold Mine

Software testers find and report bugs and coders fix them. If that is the extent to which the bug repository is leveraged, your software company is missing big learning opportunities.

Defects can be categorized so that trends can be detected. Certain aspects of the software may be particularly problematic, and spotting this may only happen by using a heat map to show clusters of bugs in specific areas.

One QA rule of thumb is that where there are bugs, there are more bugs. Intelligence on the areas of the product that are buggy can help test planning and test case creation and increase test coverage for problem children.

Often bugs are filed with no test case executed. These can come from informal testing, customer support, or directly from customers. Where they come from is not as important as knowing they exist and harvesting their value.

If there are defects without test cases, that is clearly a gap. QA must monitor the bug database and leverage bugs that do not have test cases and turn those into tests. My experience is that many software companies ignore this concept and pay for it in the hidden cost of product defects that continue to worm their way out the door or fester unattended.
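As a rough sketch of how mining the bug database might look, the snippet below assumes a hypothetical bug-tracker export with component and test_case_id fields; it counts defects per component and flags bugs with no linked test case:

```
from collections import Counter

# Hypothetical export from a bug tracker: one dict per defect.
bugs = [
    {"id": "BUG-101", "component": "installer", "test_case_id": None},
    {"id": "BUG-102", "component": "installer", "test_case_id": "TC-12"},
    {"id": "BUG-103", "component": "reporting", "test_case_id": None},
]

# The simplest form of a "heat map": defect counts per component.
by_component = Counter(bug["component"] for bug in bugs)
for component, count in by_component.most_common():
    print(f"{component}: {count} bugs")

# Bugs with no linked test case are candidates for new tests.
untested = [bug["id"] for bug in bugs if not bug["test_case_id"]]
print("Bugs needing test cases:", untested)
```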

Tester Professionalism Gaps

I have worked for many a software company where the testing staff is made up of a majority of deeply technical test engineers who are proficient at writing code for automated testing. However, they are not trained software testers.

Software testing is a discipline. Strong sets of practices exist and are each specialties in themselves. Concepts such as boundary testing, test matrix analysis, requirements mapping, code coverage, branch coverage, unit testing, stress and load testing, error injection, usability, use case coverage and others are all part of the breadth of professional testing knowledge.
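As a small illustration of just one of these techniques, the sketch below shows classic boundary testing against a hypothetical validate_quantity rule that accepts values from 1 to 100 (the function and its limits are assumptions for the example):

```
import unittest

def validate_quantity(qty: int) -> bool:
    """Hypothetical rule under test: quantities of 1 through 100 are valid."""
    return 1 <= qty <= 100

class BoundaryTests(unittest.TestCase):
    def test_boundaries(self):
        # Classic boundary analysis: just below, at, and just above each edge.
        self.assertFalse(validate_quantity(0))
        self.assertTrue(validate_quantity(1))
        self.assertTrue(validate_quantity(100))
        self.assertFalse(validate_quantity(101))

if __name__ == "__main__":
    unittest.main()
```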

Too often I have seen an emphasis on creation of reusable automated tests and lack of deployment of professional test concepts that could yield higher end quality.

Staffing your test team with a majority of talented coders isn't wrong, but it is short-sighted. The worst testers of code are coders. They think in small, bite-sized, technical chunks of isolation and abstraction. Most end users do not use software in small, bite-sized, abstract chunks.

Coders are not usually attuned to the end-user experience. Those writing code usually have little to no experience in the field of the target audience. The software engineers writing code for a banking system, or writing code to test it, most likely have never worked in the financial industry and are not trained to think deeply about the needs of its users.

Trained testers spend much time thinking about the end user. Therefore your test staff must be made up of professional quality assurance engineers steeped in the techniques of testing. Automation is important, and at times critical, yet too much focus on that can result in lower quality.

The Crazy Anti-test Testing Technique

Software testing is partly a logical, scientific discipline. In order to isolate a bug's root cause, tests are crafted to control all variables and change one at a time.

In addition, I highly recommend tests that do the exact opposite. It is too easy in the test effort to miss certain ways in which something will go wrong and the software will fail. Therefore I suggest creating tests where the effort is to create situations where everything is wrong. Break as much as possible. Make a complete mess. Do nothing that is "supposed" to be done and everything that is not. This is different from targeted, controlled negative testing; it is error injection in the extreme.

For a typical software product, with a front-end user interface, web server, middleware, and back-end database, shut down the web server, middleware, and database and see how the user interface behaves. Does it give the user any helpful indication that something is wrong behind the scenes?

How about disconnecting the networking between components? Or filling the database with "unexpected" data: special characters, null values, maximum values, passwords that do not follow the requirements, and other bad inputs.

Execute this by creating layers of error. As the software fails, good testers can usually come up with a hypothesis as to the root cause and create further tests to prove out that root cause.
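A rough sketch of what one layer of this might look like, assuming a hypothetical stop_service helper and front-end URL (how you actually stop a component depends entirely on your stack):

```
import requests  # assumes the requests library is available

FRONT_END_URL = "http://localhost:8080/"  # hypothetical address of the UI tier

def stop_service(name: str) -> None:
    """Stub: stop a backing service. In a real environment this might run
    'docker compose stop <name>' or call your orchestration API."""
    ...

def test_everything_broken_at_once():
    # Layer the failures instead of isolating one variable at a time.
    for service in ("database", "middleware", "web-server"):
        stop_service(service)

    # The front end should still respond, and should tell the user something
    # useful rather than hanging or exposing a raw stack trace.
    response = requests.get(FRONT_END_URL, timeout=5)
    assert response.status_code in (200, 503)
    assert "stack trace" not in response.text.lower()
```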

My experience with this kind of extreme, worst-case testing is that it surfaces bugs that would not normally be found. It also does a great job of finding usability and maintainability defects.

Filling Gaps by leveraging Customer Experience Intelligence

There are many things that can be done to prevent defects from reaching customers, but some bugs will always get through. How can a software firm keep tabs on what is happening and respond?

Phone Home

Internet connected software can be built to automatically report back to the creators. This can be done in real time or on a periodic basis. If designed and staffed carefully, this capability could be an add-on service that customers pay for in return for proactive speedy support response.

Clearly there are security and privacy issues with this approach and those can be mitigated through implementing features such as opt-in/out, scrubbing of data prior to reaching the provider, legal agreements, and other methods.
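A minimal sketch of what such a phone-home call with opt-in and data scrubbing might look like; the endpoint URL and field names here are hypothetical:

```
import json
import re
import urllib.request

TELEMETRY_URL = "https://telemetry.example.com/v1/report"  # hypothetical endpoint

def scrub(record: dict) -> dict:
    """Drop or mask fields that could identify a customer before anything leaves the site."""
    cleaned = dict(record)
    cleaned.pop("username", None)
    cleaned.pop("customer_id", None)
    # Mask anything that looks like an email address in free-text fields.
    if "message" in cleaned:
        cleaned["message"] = re.sub(r"\S+@\S+", "[redacted]", cleaned["message"])
    return cleaned

def phone_home(record: dict, opted_in: bool) -> None:
    if not opted_in:  # honor the customer's opt-out
        return
    payload = json.dumps(scrub(record)).encode()
    request = urllib.request.Request(
        TELEMETRY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(request, timeout=10)
```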

In my experience this can work very well or fail badly. At more than one company I have seen this phone-home capability built and then ignored. The company collects giant amounts of valuable data, but that data is never analyzed. And this is tragic. Not only is that a waste of resources in creating and maintaining the feature, it also means a ton of bottom-line-impacting learning goes unlearned.

I have also seen this data being leveraged well, but narrowly. Product Managers dive into the information and can glean usage patterns and other information that can guide future design. Knowledge is power, and this trove of insight can be leveraged by others.

Technical Support can learn about usage patterns that can influence staffing levels and skills training. Quality Assurance can learn real world scenarios and plug holes in testing. The software development staff can gain insight into defects that are difficult to reproduce in-house.

Log Dumps

The majority of software products create what are known as logs. These are files that contain technically detailed information about everything that goes on with the software and can contain debug level information that is of great value in identifying and fixing problems.

During a customer support engagement, support staff will request these files. The result is internal server hard drives chock full of logs. Applying analytics to this treasure chest can yield big returns.
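As one illustration, a short script along these lines (the log location and error format are assumptions) can count which error signatures appear most often across all collected customer logs:

```
import glob
import re
from collections import Counter

# Hypothetical layout: support saves customer log bundles under /support/logs/.
ERROR_PATTERN = re.compile(r"ERROR\s+(\S+)")  # e.g. "ERROR LicenseCheckFailed ..."

signatures = Counter()
for path in glob.glob("/support/logs/**/*.log", recursive=True):
    with open(path, errors="ignore") as handle:
        for line in handle:
            match = ERROR_PATTERN.search(line)
            if match:
                signatures[match.group(1)] += 1

# The most frequent error signatures across all customers are the ones
# most worth feeding back to development and QA.
for signature, count in signatures.most_common(10):
    print(f"{count:6}  {signature}")
```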

Incident Reports

Professional software support organizations keep detailed records of each interaction with a customer. There are a few ways these documents can be leveraged.

As with phone home or logs, the raw stream of information contained in these can be of value. More value can be extracted through sampling of a subset of records and applying technical analysis. A trained professional can extract from a set of cases such things as required documentation updates, as yet undetected defects, design issues and more.

Further, the support tickets can be tagged to categorize them into buckets. These may include tags such as Documentation Issue, Customer Error, Bug, Third-Party Software Issue, Unmet Prerequisites, Networking Problem, Storage Problem, and so on.

The value in this grouping is that it can help paint a broad picture of where the gaps in the software development process lie.
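A minimal sketch of how tagged tickets can be rolled up into such a picture (the tickets and tags here are purely illustrative):

```
from collections import Counter

# Hypothetical support tickets, each tagged during or after the engagement.
tickets = [
    {"id": 1, "tags": ["Documentation Issue"]},
    {"id": 2, "tags": ["Bug", "Unmet Prerequisites"]},
    {"id": 3, "tags": ["Customer Error"]},
    {"id": 4, "tags": ["Bug"]},
]

tag_counts = Counter(tag for ticket in tickets for tag in ticket["tags"])

total = len(tickets)
for tag, count in tag_counts.most_common():
    print(f"{tag}: {count} tickets ({count / total:.0%} of engagements)")
```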

I have applied this technique and successfully identified a design issue that was generating over 60% of customer support engagements. After some solid technical analysis, the feedback I provided to software development led to a redesign. Within a year of implementing the changes, the source issue dropped from the top of the heap of customer engagements to second to last.

Knowledge Base as Bug Database

Software companies often create a searchable knowledge base that contains solutions to common problems.

Notice the wording above: "common problems."

A problem is a defect or a bug. We just disguise it as a knowledge base article. Industry practice is often to choose not to fix bugs but rather document the bugs and work-arounds.

A costly gap can open if the number of customers running into the problem gets high. If a knowledge base article documents a corner case that few customers ever see, then fixing that problem is probably more costly than beneficial. However, if many customers are going to the knowledge base for the same problem, a high impact bug can be hiding in plain sight. Customers are not reporting the problem so you can’t know it is happening to them in great numbers.

Or can you?

Gross measures can be taken to determine if specific cases that are documented by knowledge articles are visited frequently. This can be as simple as a web page hit count over time. The knowledge base system could also have other ways of capturing usage and feedback that can assist in gleaning hidden gems worthy of fix consideration.

During the course of a customer support incident, technical representatives often identify knowledge articles that will resolve the customer problem. This can be documented along with the other notes of the engagement and rolled up. With the web page hits plus the customer support incident hits on knowledge articles managers can get a picture of problem areas that need to be addressed.
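One rough way to combine the two signals, with made-up numbers and a hypothetical weighting, might look like this:

```
from collections import Counter

# Hypothetical inputs: page-view counts from web analytics, and the knowledge
# base articles cited in support tickets over the same period.
page_views = {"KB-1001": 4200, "KB-1017": 310, "KB-1042": 2900}
ticket_citations = ["KB-1001", "KB-1001", "KB-1042", "KB-1017", "KB-1001"]

citation_counts = Counter(ticket_citations)

# A crude score: every self-service view, plus a heavier weight each time
# support had to walk a customer through the same article.
scores = {
    article: page_views.get(article, 0) + 50 * citation_counts.get(article, 0)
    for article in set(page_views) | set(citation_counts)
}

for article, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{article}: score {score}  (fix candidate if consistently high)")
```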

Consider every customer contact, whether it comes directly through a customer support incident or indirectly through the use of the knowledge base or other self-help systems, as an opportunity to learn. In essence, every customer support contact is caused by a bug. The bug may be an error in code, in documentation, in training, in the sales cycle, or somewhere else. A mistake was made somewhere, and every business can learn from mistakes and improve.

Planning to Freeze

It is difficult and costly to test software that is in flux. A feature might be tested and then dropped, creating sunk costs. Changes in flow or design can necessitate rewriting test cases.

Documenting software that changes over the course of development is also problematic. If the software has an interface, step-by-step explanations for users may need an overhaul and screenshots may need to be retaken.

Training and other preparation in the support and sales and marketing organizations becomes more difficult if future product is unpredictable.

A “freeze” simply means a portion of the software being built will undergo only minor changes, if any. There is a risk in doing this because changes in market conditions or technologies won’t be taken into account. Organizations must weigh the cost that flux imposes on documentation and testing against the value of being able to pivot based on market demands.

I have seen teams that can produce highly market relevant software while at the same time committing to freezing of features and user interface.

To allow for flexibility, a thaw can occur and changes implemented. The key to success is to bring stakeholders in on the planning so that no part of the wider production effort is caught off guard with the changes and everyone understands the cost trade-offs.

Conversely, I have seen teams that simply never thought of putting freeze milestones in place. This is frustrating for many dependent efforts and can increase costs or delays dramatically.

Documentation Gaps

Documentation Review

To reduce the risk of inaccurate, unclear, or simply incorrect product documentation, include the technical documentation team in review processes.

If given the opportunity to be included in the PRD and engineering specification reviews, the documentation effort benefits from a deeper and more timely picture. Not only will this yield higher quality guides, the end user focus of documentation staff can help head off design problems and software defects.

Not only should documentation be intimately connected to the software development effort, but other experts are also valuable in the documentation production process itself.

Product Managers, software engineers, test engineers, sales engineers and customer support staff have valuable perspectives that should be tapped. Involve them in the documentation review cycle.

Documentation can be considered as much a part of the product as any software. So if it makes sense to allocate resources to high quality software it also makes sense to put resources into documentation. Plugging customer focus into documentation cycles produces savings and quality.

Documentation Testing

When I was a QA manager I had a special assignment for new members of the team. I asked them to take the current user-facing documentation and to "do everything with the product that the documentation says can be done."

This exercise had three main objectives:

  1. Train the new staff
  2. Find new bugs in software
  3. Find new bugs in the documentation

Consider the end user documentation as an extension of the product requirements document. The PRD describes what the software is supposed to do, and so do the docs. They are really the same thing in different forms.

I believe that all testing cycles should include testing the documentation. Teams can regression test large swaths of code by doing so.

Training Materials

For brevity I am lumping documentation and training material production together. They both require review cycles from other customer-focused teams, and they both can be tested as part of regression efforts. The training creators need code freeze milestones and need to review the PRD and software specifications just like the documentation team does.

What is Product?

Earlier I talked about the idea that end user documentation should be considered as much a part of the product as any software code. Quality and accuracy increase when writers are tasked with reviewing requirements, engineering specifications, and test cases. Planned code freezes cut documentation costs.

There are other realms to consider when thinking about what makes up the end product.

I know a software company that sells a highly complex and costly product. As part of the delivery of the software, technical staff from the vendor engage in initial setup for the customer. They start by obtaining customer specific technical details so that the system can be configured appropriately. These technicians then type the details into a text file that is used by specialized installation software. The installation software does the setup of the complex system based upon the information in the text file.

At first blush this all sounds fairly standard and not controversial. However take a step back and consider customer experience.

The image above is courtesy of a fantastic piece in Forbes “The Longest Lasting Emotions in Customer Experience”

Complexity

The software system is so complex that specialists from the software vendor are required to conduct initial setup.

As a shopper looking for a solution, you may think twice about the costs of complexity not just in the install phase, but over the life of the solution.

While larger enterprises may see such costs as part of doing business, small to medium enterprises can’t afford the burden and will seek out competitors who offer simplicity.

Reduction in complexity may not win more customers, but it could reduce costs in the given scenario by reducing the training requirements and skill level of the set up folks. It can reduce the hours needed for system setup thereby increasing the availability of installers. This yields a higher rate of installs over time and more opportunity for profit and customer satisfaction.

The Hidden End Users

In my example, editing a text file is done by vendor technical staff as part of install services. It would be easy to think that internal employees are not end users and therefore have no need for a good end user experience.

This is where thinking about all of the parts of the software release as product can add benefit.

That text file is part of the product. The procedure and execution of install is part of the product.

The software company I’m writing about made the call to create a customer facing user interface to replace the practice of text file editing. While there would be no change in the delivery of software through staff consultants, the entire text editing scheme was overhauled.

The result was increased trust in the entire software package. Perception of the product had not been good, and the unprofessional, clunky text editing only deepened that feeling. Worse, although the tool was meant for internal staff, it was often used by end customers. So the first experience many customers had with the overall solution was editing a text file. This, of course, does not engender a sense of quality and high standards.

Treating this text editing portion as part of the product allowed the company to add branding and a consistent look and feel. It elevated the first-touch experience by making it truly customer friendly.

Further, now that the text editing was reworked as software, it underwent stringent testing and review. More controls were applied so that users could not make as many errors. This led to a reduction in delayed or failed installs.

Apple Inc. is well known as a company that provides a well crafted ecosystem and user experience. Part of that is done by thinking of every touch point as important. The physical product, the software, the website, the buying experience, the support experience, the usage experience, and more are all given the same level of attention. Customers are highly satisfied and Apple can charge a premium and reap solid profits by these practices.

Think about all of the things that go into creating, buying, and using the software created by your company and you may be able to move the needle of profitability by uncovering hidden users or ham-handed items that live in the corners of your product.

Beta Testing

Many software companies send out incomplete versions of product for customers to try out. The goal is to gather feedback and potentially incorporate that prior to wider release.

Although this part of the SDLC is plainly designed with customer focus in mind, there are gaps that could use attention.

Beta is not the Time for a Sale

There can be a very blurry line between beta programs and Proof of Concept (PoC) efforts. PoCs are designed to sell the finished product by allowing the customer to test drive, while beta is poking at a product under development.

I have often seen beta testing hijacked by the desire to sell the latest and greatest. This can cause all manner of conflict and loss of valuable customer feedback for development.

Another reason to keep a hard line between beta projects and sales is that the value of the engagement is quite different. A customer taking a test drive with the express understanding that they are providing feedback is not nearly as valuable as a prospect trying the software out for potential purchase. That value imbalance can be costly if unfinished software is positioned more as a PoC. Partly this is due to customer perception of quality but more impact comes from customer support interaction.

Since prospects with cash on the table have more value, PoC support activities have a different quality than those applied to a beta. A company may bend over backwards for a sale while not batting an eyelash when a beta participant can't get the darn software to install. It does a huge disservice to the customer to put them in a position of receiving low-touch, low-effort service when that is not the expectation.

Put strong measures in place to ensure the beta efforts have clear outcomes and those outcomes are supported by all stakeholders and those who will execute. Make it clear that a beta test is not a sales engagement and any blurring of the line risks permanent damage to a potential customer acquisition.

Don’t put a Stick in a Hornet’s Nest

Never, ever, ever ever… no…no…no. Do not put beta software into a customer production environment. Do not install beta software that customers intend to use to run their business.

The risks here are huge. Beta software is by nature unfinished and under-tested. Imagine a bank using unproven software to track your deposits and you get a clear understanding of the high stakes at play. We are talking about potential billion-dollar lawsuits.

If beta software ends up in production, costs at least double. There are now compounding problems. The agreement is that beta is low man on the totem pole in terms of delivering customer support, and now you have half-baked software running potentially mission-critical business applications. This is the perfect storm of pain for both vendor and customer.

Will customers put beta software into production? You bet they will. Other than doing everything to help the customer understand the rules of engagement, there is nothing preventing the customer from violating those rules. It is of paramount importance that the beta test be one whose character is understood deeply by all parties involved and blanketed with legal protections.

Eating Dog Food

The practice of asking internal employees to try out the product while it is under development can yield very helpful feedback. Even if the software or hardware does not have an internal business application, there are still creative ways to engage colleagues in daily usage and feedback.

This is a fairly standard industry practice known as dogfooding, or "eating your own dog food." The effort tends to cost little and yet yield interesting user input. Before your product sees the real world, let it see a semblance of a vaguely mirrored, somewhat real world.

Filling Holes in Retrospect

A technique I have seen work very well is a meeting where stakeholders gather and each individual writes on sticky notes. They write a few words about “things that went well” during the cycle and “things that could be improved”. A single horizontal line is drawn on a white board and the notes about things that went wrong go below the line and those that went well go above the line.

Once each note is placed, each stakeholder has a minute or so to explain their notes to the group. Then a quick effort is made to place duplicate or similar notes in the same location.

After everyone has an opportunity to get an understanding of the items, they are asked to place a mark on the notes that represent something they agree with (a vote). Each person is limited to voting for maybe 3 items.

The leader tallies the marks and reveals the winners. These are discussed in more detail. The objective is to identify actions that could be made to enhance the good outcomes and minimize or otherwise address the bad stuff.

The attendees are led to decide which items to take up for future work. The number of items is limited to two so that the items chosen for further work have focus and a chance of succeeding.

Conducting iterative retrospectives builds team muscle memory and habit around continuous process improvement, and it aids buy-in and trust for future projects.

Embedding Customer Focus


The further into the software development effort, the more costly become defects. Fundamental design problems may require complete re-work. Costs are not only incurred in the coding, but in testing and documentation and training and other efforts as well.

To reduce cost and increase quality, embed customer-focused teams in the product requirements, software specification, test specification, training, documentation, beta, release, and maintenance phases of the software development life cycle. Reimagine the SDLC as the CDLC: the Customer Development Life Cycle.

Gather deep technical intelligence from phone home systems or the store of log bundles and apply the learning to improvements. Do the same with customer incident reports and the knowledge base.

Hidden in the business are dozens of opportunities to convert oddball practices into well crafted customer delighting products and services. This includes the opportunity to wow your internal customers and win deeper commitment and boosterism.

Go extreme and use outside the box testing where everything that can go wrong is purposely made to go wrong. Demand customer focused and trained quality assurance professionals that think user first rather than code first.

Think about all of the departments in the business and what customer insights the staff or data tools might provide. Fall in love with the user. The customer should be front and center at every point.

The customer may not always be right, but the customer is the one who puts dollars in your pockets.
