The DORA Lab EP #03 | Ben Parisot — Engineering Manager at Planet Argon

Published in The Typo Diaries · 23 min read · Jul 19, 2024


This blog was originally published on the Typo blog.

In the third episode of ‘The DORA Lab’, an exclusive podcast by groCTO, host Kovid Batra has an engaging conversation with Ben Parisot, Software Engineering Manager at Planet Argon, who brings over 10 years of experience in engineering and engineering management.

The episode starts with Ben offering a glimpse into his personal life. Following that, he delves into engineering metrics, specifically DORA & the SPACE framework. He highlights their significance in improving the overall efficiency of development processes, ultimately benefiting customers & dev teams alike. He discusses the specific metrics he monitors for team satisfaction and the crucial areas that affect engineering efficiency, underscoring the importance of code quality & longevity. Ben also discusses the challenges faced when implementing these metrics, providing effective strategies to tackle them.

Towards the end, Ben provides parting advice for engineering managers leading small teams, emphasizing the importance of identifying & utilizing metrics tailored to their specific needs.

Timestamps

  • 00:09 — Ben’s Introduction
  • 03:05 — Understanding DORA & Engineering Metrics
  • 07:51 — Code Quality, Collaboration & Roadmap Contribution
  • 11:34 — Team Satisfaction & DevEx
  • 16:52 — Focus Areas of Engineering Efficiency
  • 24:39 — Implementing Metrics Challenges
  • 32:11 — Ben’s Parting Advice


Episode Transcript

Kovid Batra: Hi, everyone. This is Kovid, back with another episode of Beyond the Code by Typo, and today’s episode is a bit special. This is part of The DORA Lab series, and this episode becomes even more special with our guest who comes with over 10 years of amazing experience in engineering and engineering management. He’s currently working as an engineering manager with Planet Argon. We are grateful to have you here, Ben. Welcome to the show.

Ben Parisot: Thank you, Kovid. It’s really great to be here.

Kovid Batra: Cool, Ben. So today, I think when we talk about The DORA Lab, which is our exclusive series, we talk only about DORA, engineering metrics beyond DORA, and things related to the implementation of these metrics and their impact on engineering teams. This is going to be a big topic where we will deep dive into the nitty-gritties that you have experienced with this framework. But before that, we would love to know about you. Something interesting, uh, about your life, your hobbies and your role at your company. So, please go ahead and let us know.

Ben Parisot: Sure. Um, well, my name is Ben Parisot, uh, as you said, and I am the engineering manager at Planet Argon. We are a Ruby on Rails agency. Uh, we are headquartered in Portland, Oregon in the US, but we have a distributed team across the US and, uh, many different countries around the world. We specifically work with, uh, companies that have legacy Rails applications that are becoming difficult to maintain, um, either because of outdated versions, um, or just complicated legacy code. We all know that the older an application gets, the more complex and, um, difficult it can be to work within that code. So we try to come in, uh, help people pull back from the brink of having to do a big rewrite, and modernize and update their applications.

Um, for myself, I am an Engineering Manager. I’m a writer, a part-time, uh, very, very non-professional musician. Um, I like to read, I really like comic books. I’m currently working on a mural for my son, uh, he’s turning 11 in about a week, and he requested a giant Godzilla mural painted on his bedroom wall. This is the first time I’ve ever done a giant mural, so we’ll see how it goes. So far, so good. Uh, but he did tell me, he said, “Dad, even if it’s bad, it’s just paint.” So, I think that… Uh, I’m still trying to make it look good, but, um, he’s got the right mindset about it, I think.

Kovid Batra: Yeah, I mean, I have to appreciate you for that, and honestly, great courage and initiative from your end to take this up for your kid. I am sure you will do a great job there. All the best, man. And thanks a lot for this quick, interesting intro about yourself.

Let’s get going for The DORA Lab. So I think before we deep dive into, uh, what these metrics are all about and what you do, let’s have a quick definition of DORA from you: what is DORA and why is it important? And maybe not just DORA, but other metrics, engineering metrics, why are they important?

Ben Parisot: Sure. So my understanding of DORA is sort of the classical one, like it’s the DevOps Research and Assessment group. It was a think tank type of group, I can’t remember the company that they started with, but it was essentially to improve productivity specifically around deployments, I believe, and like smoothing out some deployment, uh, and more DevOps-related processes, I think. But, uh, it’s essentially evolved to be more about engineering metrics in a broader sense, still pretty focused on deployment. So specifically, like how, how fast can teams deploy code, the frequency of those deployments and changes, uh, to the codebase. Um, and then also metrics around failures and response to failures, and how fast people, uh, or engineering teams can respond to incidents.

Beyond DORA, there’s of course the SPACE framework, which is a little bit broader and looks at some of the more day-to-day processes involved in software engineering, um, and also developer experience. We at Planet Argon, we do a little bit of DORA. We focus mainly on more SPACE-related metrics, um, although there’s a bunch of crossover. For us, metrics are very important both for, you know, evaluating the performance of our team, so that we can, you know, show value to our clients and prove, you know, “Hey, this is the value that we are providing beyond just the deliverable.” Sometimes because of the nature of our work, we do a lot of work on, like, the backend, or improvements that are not necessarily super-apparent to an end user or even, you know, the stakeholder within the project. So having metrics that we can show to our clients to say, “Hey, this is, um, you know, this is improving our processes and our team’s efficiency and therefore, that’s getting more value for your budget because we’re able to move faster and accomplish more.” That’s a good thing. Also, it’s just very helpful to, you know, keep up good team morale and for longevity’s sake. Like, engineers on our team really like to know where they stand. They like to know how they’re doing. Um, they like to have benchmarks on which they can, uh, measure their own growth and know where in sort of the role advancement phase they are based on some, you know, quantifiable metric that is not just, you know, feedback from their coworkers or from clients.
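(For readers who want to see what the four DORA measurements Ben describes look like in practice, here is a minimal, hypothetical sketch that computes deployment frequency, lead time for changes, change failure rate, and mean time to restore from simple deployment and incident records. The record fields and the seven-day window are illustrative assumptions, not anything Ben or Planet Argon specify.)

```python
from datetime import datetime
from statistics import mean

# Hypothetical records: each deployment carries the commit timestamp it shipped
# and whether it caused a failure; each incident carries open/resolve times.
deployments = [
    {"deployed_at": datetime(2024, 6, 3, 14, 0), "committed_at": datetime(2024, 6, 2, 10, 0), "caused_failure": False},
    {"deployed_at": datetime(2024, 6, 5, 9, 30), "committed_at": datetime(2024, 6, 4, 16, 0), "caused_failure": True},
    {"deployed_at": datetime(2024, 6, 7, 11, 0), "committed_at": datetime(2024, 6, 6, 13, 0), "caused_failure": False},
]
incidents = [
    {"opened_at": datetime(2024, 6, 5, 10, 0), "resolved_at": datetime(2024, 6, 5, 12, 30)},
]

period_days = 7  # measurement window (assumed)

# Deployment frequency: deploys per day over the window.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: average commit-to-deploy duration, in hours.
lead_time = mean(
    (d["deployed_at"] - d["committed_at"]).total_seconds() / 3600 for d in deployments
)

# Change failure rate: share of deployments that caused a failure.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

# Mean time to restore: average incident open-to-resolve duration, in hours.
mttr = mean(
    (i["resolved_at"] - i["opened_at"]).total_seconds() / 3600 for i in incidents
)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time for changes: {lead_time:.1f} h")
print(f"Change failure rate: {change_failure_rate:.0%}")
print(f"Mean time to restore: {mttr:.1f} h")
```

In a real setup, the same arithmetic would simply run over deploy events pulled from CI and incidents pulled from an on-call or ticketing tool.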

Kovid Batra: Yeah, I think that’s totally making sense to me, and while you were talking about the purpose of bringing these metrics in place and going beyond DORA also, that totally relates to modern software development processes, because you just don’t want to restrict yourself to a certain part of engineering efficiency when you measure it. You just don’t want to look only at the lead time for changes, or only at the deployment frequency. There are things beyond these which are also very important and become, uh, areas of inefficiency or bottlenecks in the team’s overall delivery. So, just for example, I mean, this is a question also: whether there is good collaboration within the team or not, right? If there is no good code collaboration, that is a big bottleneck, right? Getting reviews done in a proper way where the quality of the codebase stays intact, that really, really matters. Similarly, if you talk about things like delivery, when you’re delivering the software from the planning phase to the actual feature rollout and users using it, cycle time in DORA will probably cover that, but going beyond that to understand the project management piece and how much time in total goes into it is again an aspect. Then, there are areas where you would want to understand your team satisfaction and how much teams are contributing towards the roadmap, because that’s also how you determine whether you have accumulated too much technical debt or there are too many bugs coming in and the team is involved right over there.

And an interesting one which I recently came across was someone measuring, when new engineers are getting onboarded, uh, how much time it takes them to get to their first commit, right? So, these small metrics really matter in defining what the overall efficiency of the engineering or development process looks like. So, I just wanted to understand from you, just for example, as I mentioned, how do you look at code collaboration, or how do you look at, uh, roadmap contribution, or how do you look at initial code quality and deliverability, uh, when it comes to your team? And I understand, like, you are a software agency, so maybe the roadmap contribution thing might not be very relevant. So, maybe we can skip that. But otherwise, I think everything else would definitely be relevant to your context.

Ben Parisot: Sure. Yeah, being an agency, we work with multiple different clients, um, different repos in different locations even, some of them in GitHub, Bitbucket, um, GitLab, like we’ve got clients with code everywhere. Um, so having consistent metrics across all of, like, the DORA or SPACE framework is pretty difficult. So we’ve been able to piecemeal together metrics that make sense for our team. And as you said, a lot of the metrics, they’re for productivity and efficiency’s sake for sure, but if you dig one level deeper, there is a developer experience metric below that. Um, so for instance, PR review, you know, you mentioned, um, turnaround time on PRs, how quickly are people that are being assigned to review getting to it, how quickly are changes being implemented after a review has been submitted; um, those are, on the surface level, very productivity-driven metrics. We want our team to be moving quickly and reviewing things in a timely manner. But as you mentioned, a slow PR turnaround time can be a symptom of bad communication, and that can lead to a lot of frustration, um, and even, like, disagreement amongst team members. So that’s really a developer satisfaction metric as well, um, because we want to make sure no one’s frustrated with any of their coworkers, uh, or bottlenecked and just stuck not knowing what to do because they have a PR that hasn’t been touched.

We use a couple of different tools. We’re luckily a pretty small team, so my job as a manager of collecting all this data for the metrics is doable for now, not necessarily scalable, but doable with the size of our team. I do a lot of manual data collection, and then we also have various third-party integrations and sort of marketplace plugins. So, we work a lot in GitHub, and we use some plugins in GitHub to help give us some insight into, for instance, like you said, commit time, or the number of commits within a PR and the size of those commits. You know, we have an engineering handbook that has a lot of our, you know, agreed-upon best practices, and those are generally in place so that our developers can be more efficient and happy in their work. So, you know, it can feel a little nitpicky to be like, “Oh, you only had two commits in this giant PR.” Like, if the work’s getting done, the work’s getting done. However, you know, good commit best practices matter. We try to practice atomic commits here at Planet Argon. That is going to, you know, not only create easier rollbacks if necessary; there are just a lot of reasons for our best practices. So the metrics try to enforce the best practices that we have in mind already, or that we have in place already. And then, yeah, uh, you asked what other tools, or?
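(Ben mentions plugins that surface the number and size of commits within a PR. As a rough illustration only, the sketch below inspects a local branch with `git log --numstat` and flags commits whose total churn suggests they are not very atomic; the base branch name and the 400-line threshold are assumptions, not Planet Argon’s actual tooling.)

```python
import subprocess
from collections import defaultdict

# Assumes this runs inside a checkout with the PR branch checked out and
# "main" as the base branch; both names are illustrative.
BASE_BRANCH = "main"
MAX_LINES_PER_COMMIT = 400  # rough, assumed threshold for "probably not atomic"

log = subprocess.run(
    ["git", "log", f"{BASE_BRANCH}..HEAD", "--numstat", "--format=COMMIT %h %s"],
    capture_output=True, text=True, check=True,
).stdout

lines_changed = defaultdict(int)
current = None
for line in log.splitlines():
    if line.startswith("COMMIT "):
        current = line[len("COMMIT "):]          # short hash + subject
    elif line.strip() and current:
        added, deleted, _path = line.split("\t", 2)
        if added.isdigit() and deleted.isdigit():  # binary files show "-"
            lines_changed[current] += int(added) + int(deleted)

print(f"{len(lines_changed)} commits on this branch vs {BASE_BRANCH}")
for commit, total in lines_changed.items():
    flag = "  <- consider splitting" if total > MAX_LINES_PER_COMMIT else ""
    print(f"{total:>6} lines  {commit}{flag}")
```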

Kovid Batra: So, yeah, I mean, talking specifically about the team satisfaction piece, I think that’s very critical. Like, that’s one of the fundamental things that should be there in the team so that you make sure the team is actually productive, right? From what you have explained, uh, the kind of best practices you’re following and the kind of things that you are doing within the team reflect that you are concerned about that. So, are there any specific metrics there which you track? Can you name a few of them for us?

Ben Parisot: Sure, sure. Um, so for team satisfaction, we track the following metrics: build processes, code review, deep work, documentation, ease of release, local development, local environment setup, managing technical debt, review turnaround, uh, roadmap and priorities, and test coverage and test efficiency. So these are all sentiment metrics. I use them from a management perspective to not only get a feeling of how the team is doing in terms of where their frustrations lie, but I also use them to direct my work. If I see that some of these metrics or some of these areas of focus are receiving consistently low sentiment scores, then I can brainstorm with the team, bring it to an all-hands meeting and be like, “Here are some ideas that I have for improving these. What is your input? What does a reasonable timeline look like?” And then, show them that, you know, their continued participation in these, um, these developer experience surveys is leading to results that are improving their work experience.

Kovid Batra: Makes sense. Out of all these metrics that you mentioned, which are those top three or four, maybe? Because it’s very difficult to, uh, look at 10, 12 metrics every time, right? So..

Ben Parisot: Yes.

Kovid Batra: There is a go-to metric or there are a few go-to metrics that quickly tell you, okay, what’s going on, right? So for me, sometimes what I basically do is, like, if I want to see whether the initial code quality is coming out good or not, I’m mostly looking at how many commits are happening after the PRs are raised for review and how many comments were there. So when I look at these two, I quickly understand, okay, there is too much to and fro happening and the initial quality is not coming out well. But in the case of team satisfaction, of course, it’s a very feelings-driven, qualitative, uh, piece we are talking about. But still, if you have to objectify it with, let’s say, three or four metrics, what would be those three or four important metrics that you think impact the developer’s experience or developer’s satisfaction in your team?

Ben Parisot: Sure. So we actually have like 4 KPIs that we track in addition to those sentiment metrics, and they are also sort of sentiment metrics as well, but they’re a little higher level. Um, we track weekly time loss, ease of delivery, engagement, uh, and perceived productivity. So we feel like those touch pretty much all of the different aspects of the software development life cycle or the developer’s day-to-day experience. So, ease of delivery: how, you know, how easy is it for you to be, uh, deploying your code? Um, that touches on any bottlenecks in the deployment pipelines, any issues with PRs, PR reviews, that sort of thing. Um, engagement speaks to how excited or interested people are about the work that they’re doing. So that’s the meat of the software development work. Um, perceived productivity is how, you know, how well you think you are being productive or how productive you feel like you are being. Um, and that’s really important because sometimes the hard metrics of productivity and the perceived productivity can be very different, and not only in the sense of, “Oh, you think you’re being very productive, but you’re not on paper.” Um, oftentimes, it’s the reverse, where someone feels like they aren’t being productive at all, and I know that from their sentiment score. Um, but then I can go and pull up PRs that they’ve submitted or work that they’ve been doing in JIRA and just show them a whole list of work that they’ve done. I feel like sometimes developers are very in the weeds of the work and they don’t have a chance to step back and look at all that they’ve accomplished. So that’s an important metric to make sure that people are recognizing and appreciating all of their work and their contributions to a project, and not feeling like, “Oh, this one ticket, I haven’t been very productive on. So, therefore, I’m not a very good developer.” Uh, and then finally, weekly time loss is a big one. This one is more about everything outside of the development work. So this also focuses on, like, how often are you in your flow? Are you having too many meetings? Do you feel like, you know, the asynchronous communication that is just the nature of our distributed team, is that blocking you? And are you being, you know, held up too much by waiting around for a response from someone? So that’s an important metric that we look at as well.
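(To make the survey side of these KPIs concrete, here is a small, hypothetical sketch of how weekly responses for ease of delivery, engagement, perceived productivity, and weekly time loss might be rolled up into team-level scores and flagged when they dip. The 1–5 scale, the threshold, and the field names are assumptions for illustration, not Planet Argon’s actual survey tooling.)

```python
from statistics import mean

# Hypothetical weekly survey responses, one dict per developer.
# Sentiment KPIs are scored 1 (poor) to 5 (great); weekly_time_loss is hours lost.
responses = [
    {"ease_of_delivery": 4, "engagement": 5, "perceived_productivity": 3, "weekly_time_loss": 4.0},
    {"ease_of_delivery": 2, "engagement": 4, "perceived_productivity": 2, "weekly_time_loss": 7.5},
    {"ease_of_delivery": 3, "engagement": 4, "perceived_productivity": 4, "weekly_time_loss": 3.0},
]

SENTIMENT_KPIS = ["ease_of_delivery", "engagement", "perceived_productivity"]
LOW_SCORE_THRESHOLD = 3.0  # flag KPIs averaging below this for discussion

team_scores = {kpi: mean(r[kpi] for r in responses) for kpi in SENTIMENT_KPIS}
avg_time_loss = mean(r["weekly_time_loss"] for r in responses)

for kpi, score in team_scores.items():
    flag = "  <- raise at the next all-hands" if score < LOW_SCORE_THRESHOLD else ""
    print(f"{kpi}: {score:.1f}/5{flag}")
print(f"average weekly time loss: {avg_time_loss:.1f} h")
```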

Kovid Batra: Makes sense. Thanks. Thanks for that detailed overview. I think team satisfaction is, of course, something that I also really, really care about. Beyond that, what do you think are the important areas of engineering efficiency that really impact the broader piece of efficiency? So, uh, just to give you an example, are you focusing mostly in your teams on deliverability, or are you focusing more on, uh, the quality of the work, or is it something related to maybe sprints? I’m really just throwing out ideas here to understand from you how you, uh, look at which are those important areas of engineering efficiency other than developer satisfaction.

Ben Parisot: Yeah. I think, right. I think, um, for our company, we’re a little bit different even than other agencies. Companies don’t come to us often for large new feature development. You know, as I mentioned at the top of the recording, we inherit really old applications. We inherit applications that, you know, the developers have just given up on. So a lot of our job is modernizing and improving the quality of the code. So, you know, we try to keep our deployment metrics, you know, looking nice and having all of the metrics around deployment and, uh, post-deployment, obviously. Um, but from my standpoint, I really focus on the quality of the code and sort of the longevity of the code that the team is writing. So, you know, we look at coding practices at Planet Argon; we measure, you know, quality in a lot of different ways. Some of them are, like I said earlier, practicing atomic commits, the size of PRs. Uh, because we have multiple projects that people are working on, we have different levels of understanding of those projects. So there’s, like, you know, some people that have a very high domain knowledge of an application and some people that don’t. So when we are submitting PRs, we try to have more than one person look at a PR, and one person is often coming with higher domain knowledge and reviewing that code asking, uh, does it satisfy the requirements? Is it high-quality code within the sort of ecosystem of that existing application? And then, another person is looking at more of the best practices and the coding standards side of it, and reviewing it from a little more objective viewpoint, not necessarily as it relates to that specific application.

Let’s see, I’m trying to find some specific metrics around code quality. Um, commits after a PR submission is one. You know, if we are finding that our team is often submitting a PR and then having to go back and work a lot more on it and change a lot more things, that means that those PRs are probably not ready or they’re being submitted a little too early. Sometimes that’s a reflection of the developer’s understanding of the task or of the code. Sometimes it’s a reflection of the clarity of the issue that they’ve been assigned or the requirements. You know, if the client hasn’t very clearly defined what they want, then we submit a PR and they’re like, “Oh, that’s not what I meant.” So that’s an important one that we look at. And then, PR approval time, I would say, is another one. Again, that one is both for our clients, because we want to be moving as quickly with them as we can, even though we don’t often work against hard deadlines. We still like to keep a nice pace and show that our team is active on their projects. And then, it’s also important for our team, because nobody likes to be waiting around for days and days for their PR to be reviewed.
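(The two code-quality signals Ben names here, commits pushed after a PR was opened and PR approval time, can be pulled from the GitHub REST API; the sketch below is one illustrative way to do it for a single pull request. The repository name, PR number, and token handling are placeholders, and teams with repos on Bitbucket or GitLab would need the equivalent endpoints there.)

```python
import os
from datetime import datetime

import requests

# Placeholders: point these at a real repo/PR and a token with read access.
OWNER, REPO, PR_NUMBER = "example-org", "example-app", 123
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
BASE = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}"

def ts(value: str) -> datetime:
    # GitHub timestamps look like "2024-06-05T10:00:00Z".
    return datetime.strptime(value, "%Y-%m-%dT%H:%M:%SZ")

pr = requests.get(BASE, headers=HEADERS).json()
opened_at = ts(pr["created_at"])

# Commits pushed after the PR was opened (rework following review feedback).
commits = requests.get(f"{BASE}/commits", headers=HEADERS, params={"per_page": 100}).json()
rework_commits = sum(ts(c["commit"]["committer"]["date"]) > opened_at for c in commits)

# PR approval time: from opening until the first review in the APPROVED state.
reviews = requests.get(f"{BASE}/reviews", headers=HEADERS, params={"per_page": 100}).json()
approvals = sorted(ts(r["submitted_at"]) for r in reviews if r["state"] == "APPROVED")

print(f"Commits pushed after the PR was opened: {rework_commits}")
if approvals:
    hours = (approvals[0] - opened_at).total_seconds() / 3600
    print(f"Time to first approval: {hours:.1f} h")
else:
    print("No approving review yet")
```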

Kovid Batra: Makes sense. I think, yeah, uh, these are some critical areas, and almost every engineering team struggles with them in terms of efficiency. And what I have felt also is that it’s not just organizations, right, but individual teams that have different challenges, and for each team, you could be looking at different metrics to solve their problems. So one team would be having a low deployment frequency because of maybe not enough tooling in place and a lot of manual intervention being there, right? That’s when their deployments are not coming out well or breaking most of the time. Or for another team, the same low deployment frequency could be because the developers are actually not raising enough PRs in a defined period of time. So there is a velocity challenge there, basically. That’s why the deployment frequency is low. So most of the time, I think, for each team the challenge would be different and the metrics that you pick would be different. So in your case, as you mentioned, how you do it for your clients and for your teams is a different method. Cool. I think with that, I… Yeah, you were saying something.

Ben Parisot: Oh, I was, yeah. I was gonna say, I think that, uh, also, you know, we have sort of across-the-board company best practices or benchmarks that we try to reach for a lot of different things. For instance, like test coverage or code coverage, technical debt, and because we inherit codebases in various levels of, um, quality, the metric itself is not necessarily good or bad. The progress towards a goal is where we look. So we have a code coverage metric, uh, or goal across the company of like 80, 85%, um, test coverage, code coverage. And we’ve inherited apps, big applications, live applications that have zero test coverage. And so, you know, when I’m looking at metrics for tests, uh, you know, it’s not necessarily, “Hey, is this application’s test coverage meeting our standards?” It’s, “Is it moving towards our standards?” And then it also gets into the individual developers. Like, “Are you writing the tests for the new code that you’re writing? And then also, is there any time carved out of the work that you have on that project to backfill tests?” And similarly, with, uh, technical debt, you know, we use a technical debt tagging tool, and oftentimes, like every three months or so, we have a group session where we spend like an hour, hour and a half with our cameras off on Zoom, just going into codebases that we’re currently working on and finding as much technical debt as we can. Um, and that’s not necessarily like, oh, we’re trying to, you know, find who’s not writing the best code, or, uh, you know, trying to find all the problems that previous developers caused. But it’s more of a, you know, are there other areas for, like, you know, improvements? Right. And also, um, are there any, like, potential risks in this codebase that we’ve overlooked just by going through the day-to-day? And so, the goal is not, “Hey, we need to have no technical debt ever.” It’s, “Are we reducing the backlog of tech debt that we’re currently working within?”
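(Ben’s point that progress toward the goal matters more than the absolute number can be encoded in a very small check. The sketch below is a hypothetical CI-style gate that records a coverage baseline and fails only when coverage regresses, rather than failing an inherited codebase for being under the roughly 80–85% company goal; the file path, tolerance, and threshold are illustrative assumptions.)

```python
import json
import sys
from pathlib import Path

# Illustrative values; in a real pipeline current_coverage would come from the
# test run's coverage report and the baseline from a committed metrics file.
COMPANY_GOAL = 0.85
BASELINE_FILE = Path("coverage_baseline.json")

current_coverage = 0.42  # e.g. an inherited app that started at zero coverage

baseline = (
    json.loads(BASELINE_FILE.read_text())["coverage"] if BASELINE_FILE.exists() else 0.0
)

if current_coverage + 0.005 < baseline:  # small tolerance for flaky line counts
    print(f"Coverage regressed: {current_coverage:.1%} (was {baseline:.1%})")
    sys.exit(1)  # block the merge: we moved away from the goal

# Record progress; being under the company goal is fine as long as we're climbing.
BASELINE_FILE.write_text(json.dumps({"coverage": current_coverage}))
gap = COMPANY_GOAL - current_coverage
print(f"Coverage {current_coverage:.1%}; {max(gap, 0):.1%} to go to the {COMPANY_GOAL:.0%} goal")
```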

Kovid Batra: Totally, totally. And I think this again brings up that point that for every team, the need for a metric would be very different. In your case, the kind of projects you are getting have so much technical debt by default; that’s why they’re coming to you. People are not managing it and then the project is handed over to you. So, having that test coverage as a goal or a metric makes more sense for your team. So, agreed. I think I am a hundred percent in line with that. But one thing is for sure: there must be some level of, uh, implementation challenges there, right? Uh, it’s not straightforward, like you are coming in with a project and you say, “Okay, these are the metrics we’ll be tracking to make sure the efficiency is in place or not.” There are always implementation challenges that come with that. So, I mean, from your examples or your experience, uh, what do you think most teams struggle with while implementing these metrics? And I would be more than happy to hear about some successes or maybe failures also related to your implementation experiences.

Ben Parisot: Yeah. So I would just say the very first challenge that we face is always, um, I don’t want to say team morale, but, um, the somewhat overwhelming nature of it, depending on the state of the codebase. Like if you inherit a codebase that’s really large and there are no tests, that’s, you know, overwhelming to think about, having to go and write all those tests, but it’s also overwhelming and scary to think, “Oh, what if something breaks?” Like, a test is a really good indicator of where things might be breaking, and there’s none of that, so the guardrails are off. Um, and that’s scary. So helping people get used to, especially newer developers who have just joined the team, helping them get used to working within a codebase that might not be as nice and clean as previous ones that they’ve worked with is a big challenge. In terms of actual implementation, uh, we face a number of challenges being an agency. Like I mentioned earlier, some codebases are in, um, different places like GitHub or Bitbucket. You know, obviously those tools have generally the same features and generally the same, you know, sort of dashboard type of things. Um, but if we are using any sort of integrated tool to measure metrics around those things, and we get, um, a repo that’s not on the platform where we have that integration happening, then we don’t get the metrics on that, or we have to spin up a new integration.

Kovid Batra: Yeah.

Ben Parisot: Um, for some of our clients, we have access to some of their repos and not others, and so, like, we are working in an app ecosystem where the application that we are responsible for is communicating and integrated with another application that we can’t see; and so that’s very difficult at times. That can be a challenge for implementing certain metrics, especially performance metrics for the application, because something might be happening on this hidden application that we don’t have any control over or visibility into.

Um, and then what else? I would say another challenge that we face is that, um, most of our developers are working on two to three applications at a time, and depending on the length of the engagement, um, sometimes people will switch on and off. So it can be difficult to track metrics for just a single project when developers are working on it for maybe a few weeks or a few months and then leaving. Sometimes we have, like, a dedicated developer who’s the lead and then have a support developer come in when necessary. And so that can be challenging if we’re trying to parse out why there might’ve been a shift in the metrics, or like a spike in one metric or another, or a drop, and be like, “Okay, well, let’s contextualize that around who was working on this project and try to determine, okay, is this telling us something important about the project itself, or is it just data that reflects the adding or removing of different developers on the project?” So that can be a challenge.

Specifically, I can mention an actual sort of case study that we had. Uh, we were using Code Climate, which is a tool that we still use. We use the Quality tool for, like, audits and stuff. Um, but when I first started at Planet Argon, I wanted to implement its Velocity tool, which is like the sister tool to Quality, and it is very heavily focused around cycle time. Um, and it was all great, I was very excited about it. Went and signed up, um, went and connected our GitHub accounts, or I guess I did the Bitbucket account at the time ’cause most of our repos were in Bitbucket. Um, I didn’t realize at the time, at least, that you could only integrate with one platform. And so, even though we had, uh, we had accounts and we had clients with applications on GitHub, we were only able to integrate with Bitbucket. So some engineers’ work was not being caught in that tool at all because they were working primarily in GitHub applications. And again, like I said, sometimes developers would then go to one of the applications in Bitbucket, help out and then drop off. So it was just causing a lot of fluctuations in data and also not giving us metrics for the entire team consistently. So we eventually had to drop it because it was just not a very valuable tool, um, in that it was not capturing all of the activities of all of our engineers everywhere they were working. Um, I wish that it was, but it’s the nature of agency work and also, you know, having people that are, um…

Kovid Batra: Yeah, I totally agree on that point, and the challenges that you’re facing are actually very common. But at the same time, uh, having said that, I believe the tooling around metrics observation and metrics monitoring has come way ahead of what you have been using with Code Climate. So, the challenge still remains that most teams try to gather metrics manually, which is time-consuming, or, in your case where agencies are working on different projects or different codebases, it’s very difficult to gather the right metrics for individual developers there also. So, I think the challenges are very valid, but now the tooling that is available in the market is able to cater to all those challenges. So maybe you want to give it a try and see, uh, your metrics implementation getting in place. But yeah, I think, thanks for highlighting these pointers, and I think a lot of people, a lot of engineering managers and engineering leaders, struggle with the same challenges while implementing those. So, uh, bringing these challenges in front of the audience and talking about them would bring some level of awareness to handle these challenges as well.

Great. Great, Ben. I think with this, uh, we would like to bring an end to today’s episode. It was really amazing to understand how Planet Argon is working, how you are dealing with those challenges of implementing metrics, and how well you are actually doing, even though the right tooling or right things are not in place. The important part is that you realize the purpose. You don’t go ahead and do it just for the sake of doing it. You’re actually doing it where you have a purpose, and you know that this can impact the overall productivity of the team and also bring credibility with your clientele: yes, we are doing something and we have something to show in numbers as well. So, I really appreciate that.

And, like, before we say goodbye, is there any parting advice or something that you would like to share with the audience? Please go ahead.

Ben Parisot: Oh, wow! Um, yeah, sure. So I think your point about, like, understanding the purpose of the metrics is important. You know, my team, uh, I am the manager of a small team at a small company. I wear a lot of hats and I do a lot of different things for my team. They show me a lot of grace, I suppose, when I have, you know, incomplete data for them. Like you said, there are a lot of tools out there that can provide a more holistic, uh, look. Um, and I think that if you are an agency, uh, if you’re a manager on a small team and you sort of struggle to keep up with all of the metrics that you have even promised for your team, or that you know that you should be doing, uh, if you really focus on the ones that are impacting their day-to-day experience as well as the value that they’re providing for either, you know, your company’s internal stakeholders or external clients, you’re going to quickly see the metrics that are most important, and your team is going to appreciate that you’re focusing on those. And then, the rest of it is going to fall into place when it does. And when it doesn’t, um, you know, your team’s not going to really be too upset, because they know, they see you focusing on the stuff that matters most to them.

Kovid Batra: Great. Thanks a lot, Ben. Thank you so much for such great, insightful experiences that you have shared with us. And, uh, we wish you all the best, uh, and your kid a very happy birthday in advance.

Ben Parisot: Thank you.

Kovid Batra: All right, Ben. Thank you so much for your time. Have a great day.

Ben Parisot: Yes. Thanks.

The Typo Diaries

Improve velocity, quality & throughput of your dev teams by correlating SDLC metrics, work distribution & well-being. Explore further at www.typoapp.io