A Unique Twist on Agile Development

Katherine Rosenkranz
Florence Development
Jul 23, 2019

We are past the days of people asking, “Are you agile?” Agile development (releasing new features faster in smaller, iterative chunks) is now synonymous with how startups build software, largely thanks to agile’s embrace of continuous improvement and its flexibility in handling change.

While most software development teams would say they are agile, processes still look different at every company. Some teams religiously hold agile “planning poker” meetings — and some do not. Some teams are rigid with user story templates when writing and organizing requirements, and others take a more informal approach. For us, the unique twist on the agile development process comes from the clinical trial industry.

As a software developer, I knew I wouldn’t personally be curing cancer. However, being part of the journey to find a cure by working on software that powers clinical trial research has been an eye-opening experience. It has also been enlightening to see how the scientific method influences our own software development lifecycle.

Starting with a Hypothesis

While a lot of startups are learning to embrace the “test your hypothesis and scale” approach, our love of experimentation is more inspired by the industry we serve than it is by any agile thought-leaders.

If you’ve forgotten what a good experiment looks like, let me help you remember. It starts with a hypothesis: a testable, applicable idea formulated as a clear prediction. Next, a null hypothesis is formed as the converse, stating that the change has no effect. Lastly, a control group is identified for comparison; this group represents the outcome if no changes were made.
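
To make that structure concrete, here is a minimal sketch of how a team-process experiment could be evaluated in code. Everything in it is a hypothetical illustration: the metric (pull-request cycle time), the sample numbers, and the 0.05 significance threshold are assumptions for the example, not a description of our actual tooling.

# Hypothesis: a new review checklist shortens pull-request cycle time (in hours).
# Null hypothesis: the checklist makes no difference.
from statistics import mean
from scipy import stats  # two-sample t-test

control_hours = [30, 28, 35, 31, 29, 33, 27, 32]    # sprints without the change (control group)
treatment_hours = [24, 26, 22, 27, 25, 23, 28, 26]  # sprints with the change

t_stat, p_value = stats.ttest_ind(treatment_hours, control_hours)

print(f"control mean:   {mean(control_hours):.1f} h")
print(f"treatment mean: {mean(treatment_hours):.1f} h")
if p_value < 0.05:
    print("Reject the null hypothesis: adopt the practice.")
else:
    print("Keep the null hypothesis: drop or rework the experiment.")

In practice the “data” is often qualitative retrospective feedback rather than numbers, but the decision rule is the same: compare against the control, then adopt or discard.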

At Florence, we believe a good idea can come from anywhere, so anyone who has attended one of our planning summits, sprint planning meetings or regular retrospectives has heard the phrase “let’s run an experiment.”

One example is our bimonthly Developer Patterns and Standards meeting. We review our technical backlog and developer style guides, then decide on a hypothesis to test as a team for a couple of sprints. We collectively pick an idea aimed at improving our developer experience, and we also evaluate the results of the previous experiment. If the results are positive, we adopt the practice as a regular process.

The experiment can be as broad as a company-wide change, or as personal as blocking your calendar for heads-down working time. As long as the approach has a testable hypothesis, Florence encourages the experiment to move forward with a celebratory “go forth and conquer.”

Keep Your Eye on the Patient

The goal of a clinical trial is to alleviate a patient’s symptoms by finding a cure. For us, that translates to curing our users’ woes through our product experience.

Just as a clinical trial cannot gather useful data without participant visits and feedback, our product roadmap would not be as reliable without user feedback. Before working on a new feature, we conduct user interviews to determine priorities and designs. After implementation, we take feedback on how to improve or expand the platform just as seriously.

Sure, agile development encourages user stories to capture functional business requirements, but we emphasize direct stories from users over those of any other stakeholder. After all, a drug’s efficacy isn’t determined internally by the pharma company, so why would the efficacy of our product be any different?

Before wrapping up our user-centric approach, it is important to cover what happens when feedback does not match our hypothesis. In clinical research it’s common that a drug or intervention does not produce the expected results. When this happens, participants, families, clinical researchers, social workers, and data scientists work together to determine the best path forward. Trial-and-error is a staple of the scientific method.

When undesirable outcomes happen during development, we pivot our product feature roadmap, even if work has already started, whenever our destination no longer aligns with users’ highest priorities. Because we value a positive patient outcome, we adapt our plans for what to work on next.

Measure, Monitor and Audit

In agile development, testing starts after implementation. Quality assurance (QA) is a massive part of ensuring software is ready for production. Our development team is diligent about writing unit tests before our QA team starts further validation, and we run additional tests for security and performance. While I cannot claim these practices are unique to our product delivery methodology, compliance testing is.
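
For a concrete picture of what “unit tests before QA” looks like, here is a small hypothetical example in Python with pytest. The visit_window helper and its expected values are invented for illustration; they are not taken from our codebase.

# test_visit_window.py -- an illustrative unit test written before a feature goes to QA.
from datetime import date, timedelta

def visit_window(baseline: date, offset_days: int, tolerance_days: int):
    """Return the earliest and latest allowed dates for a scheduled visit."""
    target = baseline + timedelta(days=offset_days)
    return target - timedelta(days=tolerance_days), target + timedelta(days=tolerance_days)

def test_visit_window_includes_tolerance():
    earliest, latest = visit_window(date(2019, 7, 1), offset_days=14, tolerance_days=2)
    assert earliest == date(2019, 7, 13)
    assert latest == date(2019, 7, 17)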

We verify that we are compliant with FDA regulations internally with each deployment, as well as through external audits. A third-party vendor combs through our product, processes, and documentation to decide whether we meet quality standards. (By the way, I’m happy to report we pass with flying colors.) The whole concept of an “audit” is near and dear to the clinical trial process. Our customers personally experience FDA auditing to ensure quality processes at their trial sites, as well.

We take the monitoring and auditing philosophy a step further and apply it to the development of employees as people, too. Everyone at Florence sets quarterly goals, checks in on those goals regularly, and meets with their supervisor to make sure they are on track for growth.

The willingness to experiment, the focus on user satisfaction as the ultimate outcome, and the commitment to monitoring results make the Florence approach a unique flavor of agile development, and it wouldn’t be the same without the clinical trial industry to guide the way.
