I’m a Software Testing Noob

Norman Schultz
12 min read · Jul 29, 2019

Three months into my developer job. Here’s what I’ve learned about real-world testing so far.

Being a QA dev is a lot like being a detective. I love it!

I got my first developer job in May of this year after completing the back end development program at the Turing School of Software and Design. I’ve learned a ton since then.

I actually learned a lot just from the coding challenge for my new job and the subsequent interview. It involved working with a movie database API, verifying results returned from a title search. I tested whether the results matched the search made and whether the entries returned were legit movies. During the interview we discussed my testing approach and expanded testing beyond the original ask. They liked the tests I wrote and, more importantly, how I thought. On top of that, I got along well with the people interviewing me and was very open about being genuinely interested in what the company was up to. I got the job.
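
To give a flavor of the approach (this isn't the actual challenge code; the host, endpoint, and response fields are all made up), here's the kind of check I wrote, assuming the search endpoint returns JSON shaped like { "results": [{ "title": ..., "media_type": ... }] }:

```ruby
require "minitest/autorun"
require "net/http"
require "json"

class TitleSearchTest < Minitest::Test
  BASE_URL = "https://api.example-movies.com" # hypothetical host

  # Hit the (hypothetical) title-search endpoint and parse the JSON body.
  def search(title)
    uri = URI("#{BASE_URL}/search?query=#{URI.encode_www_form_component(title)}")
    JSON.parse(Net::HTTP.get(uri))
  end

  def test_results_match_the_search_term
    results = search("star wars").fetch("results")
    refute_empty results, "expected at least one result for a common title"
    results.each do |entry|
      assert_match(/star wars/i, entry["title"], "result title should contain the search term")
    end
  end

  def test_results_are_actually_movies
    search("star wars").fetch("results").each do |entry|
      assert_equal "movie", entry["media_type"], "search should only return movies"
    end
  end
end
```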

Once I got past the onboarding phase I started working closely with the developer who interviewed me. He is not only a sharp programmer but also a good mentor. Early on he regularly gave me challenges to overcome and asked about my thinking and ideas. My learning skyrocketed.

Here are some highlights of important things I’ve learned so far about real-world development testing:

1. You might not have control over your test data.

When you're learning to code you end up working on small projects. You might be working alone or with a few other people. Maybe you're given data to work with in the form of existing CSV or JSON files used in database seeding. Maybe the data set you work with is even pretty large.

None of this compares to the data “situation” you find at a large organization.

We have an entire team dedicated only to data. That team is located in a different city. They have their own Scrum master and devs, stories and tickets. I've seen just a few of their database schemas — they are massive. I have access to our actual database using a client app (I use Postbird) — it's about 70 tables. This is just for the part of the business my team is involved in.

Some of the data we work with represents publicly available information. So for the related programming, instead of using fake data (first_name: joe, last_name: schmo), we use real data. Using real data ensures that realistic data cases are covered. Because of this, the data team creates and populates all versions of our database — production, development, and QA (our pre-deploy environment). While we are able to request specific data cases, developer access to the database is read-only.

The data changes over time because it reflects the real data of our business. An entry you see one month might not be the same the next — or might not be there at all. As you can probably imagine, this presents some interesting challenges. Are you not getting anything returned because your query isn't written properly, or is there just nothing there to be returned? Is that test failing because the function is broken, or did it just not find anything?
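
Here's a minimal sketch of one way to cope with that ambiguity. This isn't our real suite, and the table, column, and helper names are hypothetical; the idea is to check (read-only) whether qualifying data even exists before asserting anything about what the code under test returns:

```ruby
require "minitest/autorun"
require "pg"

class ActiveListingsTest < Minitest::Test
  def db
    # Read-only connection to the QA database (hypothetical env var).
    @db ||= PG.connect(ENV.fetch("QA_DATABASE_URL"))
  end

  def test_endpoint_returns_every_active_listing
    expected = db.exec("SELECT id FROM listings WHERE status = 'active'")
                 .map { |row| row["id"] }

    # The real data drifts month to month. If nothing qualifies today, skip
    # loudly instead of letting "no rows" masquerade as a pass or a failure.
    skip "no active listings in the QA data right now" if expected.empty?

    actual = fetch_active_listing_ids_from_api # hypothetical API helper
    assert_equal expected.sort, actual.sort
  end
end
```

Skipping isn't free (a skipped test protects nothing), but at least a red build now means the code is actually suspect rather than the data simply being absent.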

I have actually come to like using real data. I've always thought there's something fishy about a process in which you create your own test data. There's a time and place for it, but it often ends up being an echo chamber: "I'll put this thing here and what's this! Lo and behold, I can get it back! Yay for me!"

The concept of a double-blind experiment comes to mind. If you know in advance what you want to get out of something, you're (consciously or not) probably going to set things up to get it. So if you really want to trust your results and their implications, you have to make sure the people doing the experiment don't know what you're going for (or better: they don't even know what the experiment is about). For us in software, the database represents an unbiased entity where few if any assumptions are made about what we're going to do with it or how things will get done. You might have to work harder in both testing and development, and deal with some frustration when you get surprising data returned to you, but at least when you DO get the data you're looking for you know it's not just an empty exercise.

2. You have to constantly advocate for quality.

If you don’t bring up quality issues there’s a good chance no one will.

No one at the company wants endpoint downtime, erroneous data, performance delays, or bugs. Such problems can mean losing real dollars. (In our business, faulty software can even raise compliance and legal issues.) But if you're the one who cares about quality, you really have to make sure it's considered at every step and with enough rigor. Other people on your team have plenty on their hands. You have the quality football and are expected to run with it!

With this in mind, I think being a dev requires being fairly outspoken. The pressure to get defects fixed and new features shipped is often very real. Ensuring quality means slowing things down, because it's essentially saying, "OK, wait a second. I can see that this works in this circumstance — but how can we be sure it's going to work across the platform, for every kind of user, in every location?"

Waiting until all the development work is done to be the one to break the bad news that it doesn't work as expected is an antipattern. It results in stories not being completed on time and forces developers to shift their context from the new work they've taken up back to what they thought they'd already finished. Quality really must be included at the earliest moment in the software development cycle.

Something to think about: let's say you run a site that ranks baseball players using a multi-factor algorithm. Making an API call to obtain the latest rankings will give you a list of players with some details, but it will probably not return the data by which the players were ranked in the first place. So from an integration standpoint, how would you test that you're getting the right results? The data you need to prove or disprove that the ranking function works isn't there. At the very least your test is more difficult to write, and it might even be impossible to automate. It would have been better if quality had been thought of from the very beginning, when the decisions about what data the endpoint returns were being made. A good developer would point this out and get everyone on the same page from the start.
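
To make the thought experiment concrete, here's a toy sketch with canned responses standing in for the API and a made-up composite_score formula standing in for the ranking algorithm:

```ruby
require "minitest/autorun"

class RankingTest < Minitest::Test
  # What the endpoint returns today: ranked players, but not the factor data.
  RESPONSE_WITHOUT_FACTORS = [
    { "rank" => 1, "name" => "Player A" },
    { "rank" => 2, "name" => "Player B" },
  ]

  # What a more testable endpoint could return: the inputs to the ranking.
  RESPONSE_WITH_FACTORS = [
    { "rank" => 1, "name" => "Player A", "factors" => { "batting" => 0.9, "fielding" => 0.7 } },
    { "rank" => 2, "name" => "Player B", "factors" => { "batting" => 0.6, "fielding" => 0.8 } },
  ]

  # Stand-in for the real multi-factor algorithm (weights are invented here).
  def composite_score(factors)
    0.7 * factors["batting"] + 0.3 * factors["fielding"]
  end

  def test_shape_is_all_the_current_endpoint_allows_you_to_check
    ranks = RESPONSE_WITHOUT_FACTORS.map { |p| p["rank"] }
    assert_equal((1..ranks.size).to_a, ranks)
  end

  def test_ranking_correctness_is_only_checkable_if_factors_are_exposed
    scores = RESPONSE_WITH_FACTORS.map { |p| composite_score(p["factors"]) }
    assert_equal scores.sort.reverse, scores, "players should be in descending score order"
  end
end
```

With the endpoint as designed, only the first test is possible. The second one, the one that actually proves the ranking works, depends on a data decision that had to be made long before any test was written.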

3. Newsflash: Things really do break!

We've had outages and high-priority defects break in and take over a sprint. It's pretty stressful! We once had one of our critical services down for several hours. These things happen to the best of organizations. If the software has a defect or straight-up fails in production, and it was your job to write the integration tests, there's a good chance the bug is on you!

Let me step back a bit and ask a question: in software what is the purpose of a test? The most common answer people will give you is “It’s to see if code does what it’s supposed to do.” But this is actually (and perhaps surprisingly) a poor answer. A much better purpose is to find problems in the code. Code might do what it’s supposed to do, but in so doing break something else. Perhaps the concept of “what it’s supposed to do” wasn’t very clear to begin with, or didn’t reflect long-term thinking. Especially for integration and end-to-end testing — it’s not just about does the thing do the thing, it’s also about whether that’s a thing that should be done at all and whether the system works as a whole with this new thing added.

If developers are doing their job, bugs or potential break points are known in advance. We're not usually responsible for fixing them — but they shouldn't be a surprise. And while it's possible an organization might play fast and loose with known issues, it's not likely. People publish code to production because they think it works properly.

It’s not practical (or perhaps even possible) to detect all code problems in advance. But the organization is relying on developers to catch problems before customers do. So our testing has to be wide enough to catch issues before code goes live.

Test stability is also critical. Tests have to be stable enough that when you do see a failure, you know it's really a problem with the code, not merely a problem with the test. Tests that "cry wolf" will soon be ignored, and at that point they're no longer doing their job of protecting the software.
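
As a toy illustration (canned data stands in for a live endpoint), here's the difference between an assertion that will eventually cry wolf and assertions that stay stable as the data shifts:

```ruby
require "minitest/autorun"

class StabilityTest < Minitest::Test
  def active_listings
    # Stand-in for a live API call against real, shifting data.
    [
      { "id" => 1, "status" => "active" },
      { "id" => 2, "status" => "active" },
    ]
  end

  # Brittle: an exact count drifts as the real data changes, so this test will
  # eventually fail even though the endpoint is working perfectly.
  def test_exact_count_cries_wolf
    assert_equal 2, active_listings.size
  end

  # Stabler: assert invariants that hold no matter what the data looks like today.
  def test_invariants_hold
    listings = active_listings
    assert(listings.all? { |l| l["status"] == "active" }, "endpoint must filter by status")
    assert_equal listings.map { |l| l["id"] }.uniq.size, listings.size, "ids must be unique"
  end
end
```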

All this requires that you advocate not only for testing being considered in developer decisions but also for the time and resources needed to have a complete and well-organized testing suite.

4. Testing has an image problem

At some organizations, integration testing is associated with QA work. Toward the end of the Turing program the curriculum pivoted somewhat away from strictly learning programming toward coaching students on getting their first developer jobs. As such, we learned there are many paths a developer can take. You can focus strictly on back-end work, swing the other way to the front end, or be "full stack" and do both. Or you could deviate from the pure developer title and do work in product or QA. I distinctly remember that, unlike the other path options, QA was portrayed with a bit of disdain. While not stated explicitly, it seemed implied that being a developer was more creative and intellectually stimulating than being in QA. (I've talked with students since then and they confirmed that this is still the prevailing thinking.)

At the time I didn't think much of it, but it came up again when I got on the job market and noticed some QA positions. In those early coffee meetings (SOOO many coffee meetings) I learned that some junior developers enter the field via QA only to use it as a stepping stone. Some wouldn't even consider applying for a QA position.

I actually told one of the QA people I had coffee with about the way QA was portrayed to me, and he was well aware of it. He told me it's a fairly common image, but at the time I didn't understand why it existed. From my standpoint, testing code seems every bit as critical as writing it (in some respects, it might be more critical — bad code can do harm!). Since then, though, I've been able to figure out the reasons for the QA image problem.

QA as a field is in a period of transition. In the past most QA testing was done manually. People sat at a computer (or an iPad or phone!) and used the software from the user interface, trying to see if they could “break it” by producing errors or bugs. No one questioned the value of doing this. But in such a world QA people were not programmers. They didn’t know how to code and it wasn’t part of their job.

Things are (very) different now.

While there is still some manual testing going on (and some cases where there is just no substitute), most testing is automated. A QA automation developer codes tests that run many checks at a time. Tests don't have to be in the same language as the programs they are testing — but in order to be automated they have to be coded. So many modern QA people are coders. But the image of a QA dev not having the coding chops of a "true" developer still exists.

At the same time, more and more developers are writing their own tests at every level. Strict QA roles are, it would seem, gradually going the way of the dodo by being squeezed on all sides. I don’t know for sure this is a good thing. Having someone who 100% specializes in breaking things can have a lot of value. But development work is expensive, and it’s hard enough to find developers as it is. Dividing up the roles seems too costly in both time and money, especially in an industry that thrives on a fast-moving SDLC.

Regardless of who is doing the work, the programming done in integration testing might not be as diverse or complicated as straight development, but it shouldn't be underestimated. Our integration testing suite was written over several years, so it's a brownfield project of its own, but it's actually pretty slick. It uses numerous object types and helper modules, reads and writes JSON and YAML, creates Docker containers and Rake tasks, and connects test results to external analysis tools. It lives in its own (sizeable) repo with a Git workflow nearly identical to the dev team's. It employs while loops, error and failure handling, regex, and just about every assertion type in the book. Several test-oriented Ruby gems created for integration testing are used by the entire organization. This isn't "Programming Lite." Sometimes it's much harder to test a new feature or fix than it is to write the functional code! So I'm not convinced the coding work in integration testing is simpler than any other development work. But the stigma remains.
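
This isn't our actual suite, but here's a tiny sketch of the kind of structure I mean, with a YAML config, a shared helper module, and ordinary assertions. Every file, module, and endpoint name here is made up:

```ruby
require "minitest/autorun"
require "yaml"
require "net/http"
require "json"

module ApiHelpers
  def config
    # Hypothetical config file, e.g. { "base_url" => "https://qa.example.com" }
    @config ||= YAML.load_file("config/endpoints.yml")
  end

  # Shared helper: GET a path, fail loudly on non-2xx, return parsed JSON.
  def get_json(path)
    uri = URI.join(config.fetch("base_url"), path)
    response = Net::HTTP.get_response(uri)
    raise "unexpected #{response.code} from #{uri}" unless response.is_a?(Net::HTTPSuccess)
    JSON.parse(response.body)
  end
end

class HealthCheckTest < Minitest::Test
  include ApiHelpers

  def test_service_reports_healthy
    assert_equal "ok", get_json("/health")["status"]
  end
end
```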

But there's a second reason for the testing image problem. A developer writes or revises code for a new feature or fix. Having the testing hat on means your mindset is not the same as the developer's. The developer's motivation is "Let's get this done" or "I have to figure out how to fix this." The testing mindset is more skeptical. It's more like, "OK — code has been written that proposes to do X and is thought not to break Y or Z. Prove it!" These two perspectives are rather at odds. Yes, everyone is on the same team and ultimately wants the same things, but the dynamics are those of opposing forces.

This opposition gets real when you write a test that really does show that something doesn't work or that it broke something else. The clock is ticking on getting the story done. You can be seen as giving other devs more work. If a dev had a bad week or made a mistake, you're the bearer of bad news.

There's a reason why, even in an Agile environment, developers might not test their own code. It's called confirmation bias: the tendency to notice and accept what we already believe (or want to believe) and to discount or ignore whatever contradicts us. Every human being has this tendency. Developers will naturally want to believe their code is good, especially after it passes unit testing. I've seen developers "fix" or even skip tests that were failing without taking the time to think about why the test failed and what could be learned from it. You really need someone else to take a fresh and independent look at the code, someone without a motivation for it to pass tests and be promoted to production.

But all this means developers might think of integration testing, over which they may have less control than unit testing, as an annoyance or even a “necessary evil.”

If you do find yourself writing integration tests for code you haven't written, you walk a fine line in this regard. Being a well-liked part of the culture while simultaneously (and frequently) sending other developers back to the drawing board is, well, problematic! There are significant "soft" skills involved. Too aggressive? Others might start avoiding having you write their tests. Too easy-going? Your tests probably won't be rigorous enough, or you'll cave to pressure. Neither is good for the software or the process. Finding the balance is an art in itself.

Closing Thoughts

I love the testing part of development work and think integration testing is fascinating. If you have a questioning personality, a critical viewpoint, and an eye for detail, you probably feel the same way. There's a very real industry need for developers for whom testing is a strength, and with it a ton of potential for the reward of a job well done. Get testing right and you'll make everyone's life easier and have a seriously positive impact on your users and the company's image. For me, a career doesn't need much more than that!

Norman Schultz

A back-end test automation developer and graduate of the Turing School of Software and Design. Enjoying development work and learning every day!