<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Alan Richardson — EvilTester.com on Medium]]></title>
        <description><![CDATA[Stories by Alan Richardson — EvilTester.com on Medium]]></description>
        <link>https://medium.com/@eviltester?source=rss-1884120cfdf5------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/0*ZxUzLO-ZBxQQDA0m.jpg</url>
            <title>Stories by Alan Richardson — EvilTester.com on Medium</title>
            <link>https://medium.com/@eviltester?source=rss-1884120cfdf5------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 19:16:12 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@eviltester/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[A Web Testing, Automating and Tooling Masterclass]]></title>
            <link>https://medium.com/@eviltester/a-web-testing-automating-and-tooling-masterclass-6758d2707253?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/6758d2707253</guid>
            <category><![CDATA[test-automation]]></category>
            <category><![CDATA[software-testing]]></category>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Fri, 06 Feb 2026 11:58:33 GMT</pubDate>
            <atom:updated>2026-02-06T12:05:53.854Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*5MeRViylGooLUKf5.png" /></figure><p><em>TLDR; We can only test to the level supported by our Ability, and the degree to which we are supported by tooling to Observe, Interrogate and Manipulate the System.</em></p><h3>Video</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FbSfgADkdQug%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DbSfgADkdQug&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FbSfgADkdQug%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/270f2425fce0a32dba375b542c874a58/href">https://medium.com/media/270f2425fce0a32dba375b542c874a58/href</a></iframe><p><a href="https://www.youtube.com/watch?v=bSfgADkdQug">Watch on YouTube</a></p><ul><li>00:00 Introduction</li><li>00:26 Application Under Test</li><li>02:51 Live Testing</li><li>03:41 Testing vs Automating</li><li>06:03 Testability vs Automatability</li><li>09:25 Workarounds</li><li>10:55 Technical Knowledge</li><li>15:44 Test Approach — JavaScript</li><li>17:02 Exercise — Test Approach</li><li>17:24 Automated Execution Approach</li><li>18:34 Exercise — Code Review</li><li>20:35 Exercise — Your Project</li><li>20:59 Server Side</li><li>24:20 Exercise — Test Approach — API</li><li>24:52 API Tooling</li><li>29:14 Exercise — API Testing</li><li>29:34 Automating the API</li><li>31:54 API Interacting with UI</li><li>37:09 End Notes</li></ul><h3>Introduction</h3><p>A customer has reported that one of your pages doesn’t work. But your automated execution coverage tells you that it does. 
What went wrong?</p><p>That’s what we’re going to explore now.</p><ul><li>the difference between testing and automating,</li><li>testability vs automatability,</li><li>test coverage from a technology perspective,</li><li>and some tools that can help you test.</li></ul><p>Here’s the Application Under Test.</p><p>Infinite Scroll Challenge on the TestPages</p><p>Many sites have infinite scroll. You scroll to the bottom and it loads new items.</p><p>That functionality works. We have automated coverage. It passes.</p><p>What’s the issue?</p><blockquote><em>Oh yeah, sorry, I forgot to mention. I’m going to encourage you to think. Ideally we would do this as a workshop, as hands-on training, but it’s a video and text workbook. So all I can do is prompt you into thinking.</em></blockquote><blockquote><a href="https://www.eviltester.com/page/contact/"><em>Contact me</em></a><em> if you are interested in live training for your team or organization.</em></blockquote><p>So what’s the issue?</p><p>Well, I can’t click the button. The page refreshes and scrolls too quickly, and the button isn’t enabled for long enough when it is visible.</p><p>This happens with a lot of infinite scroll sites: there are footers that can’t be clicked, stuff at the bottom of the screen that you can’t see.</p><p>This did work at some point, because we tested it, but I guess someone changed the timeout value and the automated execution assertions didn’t flag this as an issue.</p><p>So let’s start thinking about this from a Testing perspective.</p><h3>Live</h3><p>If this was a live project we could just stop here and raise a defect.</p><p>Obviously the timeout is wrong. Raise the defect. Move on to the next issue.</p><p>But wait.</p><p>We have automated coverage, and it didn’t highlight this issue.</p><p>We should think about the difference between Testing and Automating.</p><h3>Testing vs Automating</h3><p>Testing is what we do.
As humans, we interact with the software and we observe, we build models, we learn, we compare the actual interaction with the models. We report information derived from the difference between our models and the observations.</p><p>When we investigate an issue and expand our models, we interrogate the system more deeply to learn what’s going on.</p><ul><li>Testing is the human activity of interacting, observing, thinking, learning, experimenting.</li><li>Automating is the human activity of making the interaction and observation of the system an automated process. That includes comparing the observed results with the expected results.</li></ul><p>Automating is a human activity which results in the output of an automated execution process. The only human involvement after the activity has been automated is investigating the failure reports and maintaining the automation when it fails.</p><p>So both Automating and Testing are human processes. But testing leads to more human processes; automating leads to an automated process that only involves humans when it fails.</p><blockquote><strong><em>NOTE:</em></strong><em> AI might change how I view the process of automating. Certainly I’ll need more distinctions in how I describe nuances around Automating, but for now, all the automating I do is Human Initiated, or Directed, and results in some automated execution process.</em></blockquote><h3>Testability vs Automatability</h3><p>And let’s just quickly look at Testability and Automatability, because these words are used badly when describing software.</p><p>Is this system testable?</p><p>Yes. I can access it in the browser, I can see it, I can interact with it.
The button toggles too quickly, but I can test the application.</p><p>But when we talk about testability we often talk about adding ids to the elements, making it more observable, etc.</p><p>Well, the ids are really for automatability, not testability.</p><p>In the browser, the browser tooling makes it observable because of the technology used, not the system.</p><p>Most of the time when we are talking about testability we are really talking about automatability.</p><p>This application has been built with automatability in mind.</p><p>It has ids, there are classes, the JavaScript source is visible, it is easy to change the state and configuration variables: this thing is so easy to automate and observe. But none of that was required to help me test it.</p><p><strong>Testability is not the same as automatability.</strong> Keep that in mind as we continue through this process, particularly when we move to the server side interaction.</p><p>With the JavaScript Infinite Scroll system, the main thing that impacts my Testability is the bug preventing me from testing the button functionality.</p><h3>Workarounds</h3><p>Let’s quickly consider workarounds.</p><p><strong>Your ability to find workarounds will impact your ability to test the system.</strong></p><p>So your ability impacts the Testability.</p><p><strong>Testability is as much about your ability to test the application as it is the application supporting you in testing it.</strong></p><p>Depending on how deep you want to go, you can only test the application to the limit that you can observe, manipulate and interrogate the application.</p><p>Your ability to test the application is often impacted by the usability of the application, and that is true here.
The application is not usable for one main function, so it is hard to test that function.</p><p>But we can work around that, for this application, with more Technical Knowledge.</p><h3>Technical Knowledge</h3><p>When working with the web, we need to understand our tool capabilities.</p><p>My tool at this point is the browser.</p><p>What can I do with it?</p><p>I can view the page source.</p><p>I need to be able to understand HTML. Some CSS. Some JavaScript.</p><p><strong>Having the tooling ability to look at the source doesn’t help me unless I understand what I’m looking at.</strong></p><p>So if you’re testing web applications, you probably want to understand:</p><ul><li>How a browser works,</li><li>How HTML works,</li><li>What CSS is and how we use it,</li><li>What JavaScript is and how it works.</li></ul><p>At the very least, aim for a reading ability with these technology artifacts.</p><p>So at this level, at the source, with the technical knowledge I have… I can see some variables in the JavaScript, and I can see they are amendable.</p><p>But I don’t have the tooling to amend them yet.</p><p>So, what else do I have available?</p><p>I can look at the dev tools.</p><p>I can see the DOM view shows me much the same as the source. But… it can be different.</p><ul><li>The source is what we gave the browser to work with.</li><li>The DOM is what the browser created after interpreting and executing the source.</li></ul><p>In the DOM I can see the JavaScript as well.</p><p>I also now have the ability to interact with and manipulate the JavaScript variables that I observed in the source.</p><p>There are a few timeout and millisecond variables there.</p><p>Perhaps the bug is the scrollAfterMillis variable?</p><p>Let me change that to 3000.</p><p>And… now it just takes longer to refresh, but I still can’t click the button.</p><p>Let me try the preLoadTimeout:</p><p>Now, the button stays active for 2 seconds before it loads the next set of data.
That’s my workaround to make this application testable.</p><p>It also means that I can go a little deeper in the bug report and say that the root cause is the preLoadTimeout being too small a value, so the user doesn&#39;t have time to click the button.</p><blockquote><em>You might be interested in this </em><a href="https://www.eviltester.com/blog/eviltester/technical_testing/2022-09-22-chrome-dev-tools-overview/"><em>video and blog post covering more details of Chrome Dev Tools for Testing</em></a><em>.</em></blockquote><h3>Test Approach</h3><p>So what is my test approach for this?</p><ul><li>load the page</li><li>scroll down</li><li>see that it refreshes and adds the data I expect</li><li>click the button</li><li>see that it doesn’t refresh when I scroll</li></ul><p>Is that it?</p><ul><li>Do I need to reload the page and check that it starts auto-refreshing again?</li><li>How long do I need to wait after stopping it, and trying to scroll, to make sure it doesn’t start auto-refreshing again?</li><li>What else?</li></ul><h3>Exercise — What is your test approach?</h3><p>After this video I encourage you to think through what your Test Approach for this application would be.</p><h3>Test Approach vs Automated Execution Approach</h3><p>Once you’ve figured out how to test it, how would you automate it?</p><p>I’ll show you what we have here.</p><p>And this test passes.</p><pre>@Test  <br>public void scrollToStopLoadingAndClick(){  <br>    new WebDriverWait(driver, Duration.ofSeconds(10)).until(  <br>        ExpectedConditions.elementToBeClickable(page.getStopLoadingButton())  <br>    );  <br>  <br>    page.getStopLoadingButton().click();  <br>  <br>    new WebDriverWait(driver, Duration.ofSeconds(10)).until(  <br>        ExpectedConditions.textToBePresentInElement(  <br>            driver.findElement(By.id(&quot;statusMessage&quot;)),&quot;Clicked&quot;)  <br>    );  <br>}</pre><p>Is that good enough?</p><p>It works.
It passes.</p><p>But it didn’t highlight the fact that the halting functionality is unusable for a human.</p><p>This is one of the issues when we automate something.</p><p>We automate the functionality.</p><p>We don’t automate the experience.</p><blockquote><em>If you are interested in more details about WebDriver with Java then have a look at this </em><a href="https://testpages.eviltester.com/reference/automating/webdriver/webdriver-java/"><em>video Masterclass on the basics of WebDriver with Java</em></a></blockquote><h3>Exercise — Critique the code</h3><p>So pause the video, either now or after watching it through.</p><p>Critique this code:</p><ul><li>Understand what it does</li><li>What does it not do?</li><li>I can see it doesn’t try to re-scroll the page to make sure that clicking the button stopped the auto-scroll.</li><li>What else does it not do?</li><li>What conditions does it not check?</li><li>Should it match the user experience?</li><li>How could it?</li><li>What would you change to match the user experience?</li></ul><p>Much automated execution coverage does not match the user experience. When it doesn’t, there is a risk that the automated execution passes, but the user experience doesn’t.</p><p>And there is a risk we don’t notice.</p><p>Testing will hopefully identify those issues.
But automated execution often does not.</p><h3>Exercise — Consider Your Project</h3><p>Is there a risk on the projects you work with that the automated execution covers the functionality, but not the user experience of that functionality?</p><h3>Server Side</h3><p>OK, so that was the JavaScript version of the Infinite Scroll.</p><p>It offered us some scope for thinking like the tester.</p><p>But let’s push this a little further.</p><p>We also have the server side infinite scroll.</p><p><a href="https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/">https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/</a></p><p>It looks the same. It has the same bug: the button is too fast. And my automated execution passes.</p><p>But this page, when it needs new items to display, calls the server to get the information back.</p><p>Do you trust that statement?</p><p>How do you know it calls the server?</p><p>Visibly, in the browser, when I test it and observe the output, it looks the same.</p><p>We need to be able to interact with the application from a technical perspective.</p><p>What if I just think it is connecting to the server, but I accidentally release the wrong version? What if I just renamed the file?</p><p>So we need to check.</p><p>We need to observe the application at multiple technical levels.</p><p>So let’s look in the dev tools again. And this time we’ll look in the network tab.</p><p>Filter by Fetch/XHR to see the requests made by JavaScript, and I can observe a call to moreitems.</p><p>Let’s interrogate that request.</p><p>I can see a JSON response:</p><pre>[<br>    {<br>        &quot;id&quot;: 1,<br>        &quot;text&quot;: &quot;This is content item number 1.
Scroll down<br>         to automatically load more content when the<br>          \&quot;Click to Stop\&quot; button becomes visible.&quot;<br>    },<br>    {<br>        &quot;id&quot;: 2,<br>        &quot;text&quot;: &quot;This is content item number 2. Scroll down<br>         to automatically load more content when the<br>          \&quot;Click to Stop\&quot; button becomes visible.&quot;<br>    },<br>    ...<br>]</pre><p>Great.</p><p>So I have tooling to increase my ability to test this.</p><p>But I need to know about:</p><ul><li>HTTP Requests</li><li>Fetch and XMLHttpRequest</li><li>JSON</li></ul><p>My ability to test this application will be limited if I do not have that technical knowledge and if I do not know how to use the dev tools network tab.</p><p>But at least I know now that it is making server requests.</p><p>I didn’t have to trust anyone. I could verify that this was true.</p><p>So now, our testing scope just expanded.</p><h3>Exercise — Server Side Test Approach</h3><p>Now, what do you have to test?</p><p>Is it good enough to just test the front end now, by scrolling up and down and clicking the button?</p><p>After watching this video, take some time to think through what you want to test to cover the Server Side infinite scroll.</p><ul><li>Do we also have to test that HTTP call?</li><li>How much do we have to test it?</li></ul><h3>Server Side Test Tooling</h3><p>And… do you know how to do that?</p><p>How can you amend the HTTP requests?</p><p>We can do some of that from Chrome.</p><p>I could:</p><ul><li>copy as cURL</li><li>copy as fetch</li></ul><p>With cURL I can use the command line.</p><p>Or, with fetch, I can amend it in the console.</p><p>I could take the cURL and paste it into an API tool.</p><p>I can do that in Bruno by creating a new request from cURL.</p><p>Or in Postman I can paste the cURL command into a new request.</p><p>I can experiment with it in these tools.</p><p>It is important to test the API on its own.</p><p>For example, when I
built this, it was only when I was automating and testing the API that I realised that I really need a limit on the count. If I let people make a request asking for 6 million items back, that could easily bring down my server.</p><blockquote><em>NOTE: you can find </em><a href="https://apichallenges.eviltester.com/tools/clients"><em>a list of API Tools on the API Challenges site</em></a><em>.</em></blockquote><h3>Exercise — API Testing</h3><p>As an exercise, think through what conditions you would want to test on the API, then you can use any of the tooling approaches mentioned to experiment.</p><h3>Automating The API</h3><p>I would probably want to automate the API as well.</p><p>This does remove us from the user experience, because the responses from this API are normally handled by the UI. Just because we see something working in the API, we can’t assume that it works when the UI interacts with the API.</p><p>I used RestAssured to automate the API.</p><p>The coverage runs at the same time as the web UI coverage.</p><p>And I have a lot more coverage at the API level than I do at the UI level.</p><p>Think about what coverage you would add for the API.</p><p>Then try to automate it. I used Java, with RestAssured, but you could use any library or programming language you want.</p><p>Or you could even have a set of canned requests in Postman or Bruno or any of the other API tools.</p><p>For production API automating, I primarily use code and HTTP or API libraries. I don’t tend to use tools like Bruno or Postman.</p><p>But for production API Testing, I do use Bruno and other tools, because I like the flexibility and the ability to see the requests being made.</p><p>Testing and Automating often sound like the same thing.
But they have different aims and are supported by different tooling.</p><h3>API Interacting with UI</h3><p>And now I’m in the position where I’m testing the API in isolation from the UI.</p><p>Is that a risk?</p><p>For example, I don’t know how the front end handles a 500 error response. I can see it is the same JSON format but it is a different status code. Does that make a difference?</p><pre>[<br>    {<br>        &quot;id&quot;: 0,<br>        &quot;text&quot;: &quot;For input string: &#39;-1.02&#39;&quot;<br>    }<br>]</pre><p>Would you test that?</p><p>Do you know how to test that?</p><p>One way to do that is to use a Proxy.</p><p>Intercept the request, amend it to be one that triggers an error, and play it through to the front end.</p><p>And then I can see if the system handles error responses or not.</p><p>I would use either ZAP or Burp Suite.</p><p>For this exercise I would use ZAP.</p><ul><li>open a session using Edge</li><li>create a new context for testpages.eviltester.com</li><li>filter the history to “Show only URLs in Scope”</li><li>set the breakpoint on all requests and responses</li><li>trigger a scroll through the UI</li><li>amend the request</li><li>see the result in the UI</li></ul><p>I can use tooling to increase my ability to observe the system and manipulate the system at different technology levels.</p><blockquote><em>NOTE: A </em><a href="https://apichallenges.eviltester.com/tools/proxies"><em>list of recommended HTTP Proxy Tools is available on the API Challenges site</em></a></blockquote><h3>End Notes</h3><p>It is often surprising how much depth we have to test and automate when testing even simple pieces of functionality.</p><p>The more that we extend our technical ability to cover the multiple levels of the application, and learn how to use tooling to help us observe, interrogate and manipulate at those different technology levels, the more we can expand the coverage of our testing.</p><p>Automating is not the same as Testing.
Both are human processes, but the output of automating is not the same as the output of testing.</p><p>But yes… we can be testing as we are automating; we may well learn things during the process of automating the application. But we should not confuse the continued execution of the output from automating with testing.</p><p>Testing can miss things because we are human, or we may not have covered the conditions, or we may not have gone deep enough into the system.</p><p>Automated execution can miss things because we humans forgot to add the conditions, or we didn’t assert enough. But automated execution can also miss the human user experience and tell us things are working when they clearly are not.</p><p>I hope that you do now go off and do the exercises yourself. You may not have all the skills to do this yet, and you may not have tried all the tools. That just means you can revisit this exercise and application multiple times as you grow your knowledge and skill set. And repeat it when you want to evaluate or learn new tools.</p><p>That’s why I created the Test Pages, and that’s also why this is a fairly high-level overview.</p><p>Remember to work through the exercises.</p><h3>Exercises</h3><h3>The Applications Under Test</h3><ul><li><a href="https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/">https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/</a></li><li><a href="https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/">https://testpages.eviltester.com/challenges/synchronization/infinite-scroll/server-side/</a></li></ul><h3>Exercise — What is your test approach for JavaScript Infinite Scroll?</h3><ul><li>Think through what your Test Approach for the JavaScript Infinite Scroll application would be.</li><li>Do you need to reload the page and check that it starts auto-refreshing again?</li><li>How long do you need to wait after stopping it, and trying to scroll, to make sure it
doesn’t start auto-refreshing again?</li><li>What else?</li><li>What conditions would you cover?</li><li>How would you approach the testing?</li></ul><h3>Exercise — Critique the code</h3><ul><li>Understand what it does</li><li>What does it not do?</li><li>I can see it doesn’t try to re-scroll the page to make sure that clicking the button stopped the auto-scroll.</li><li>What else does it not do?</li><li>What conditions does it not check?</li><li>Should it match the user experience?</li><li>How could it?</li><li>What would you change to match the user experience?</li></ul><p>Critique this code:</p><pre>public class InfiniteScrollTest {  <br>  <br>    static WebDriver driver;  <br>    static InfiniteScrollPage page;  <br>  <br>    @BeforeAll  <br>    static void setupWebDriver(){  <br>        driver = DriverFactory.getNew();  <br>        page = new InfiniteScrollPage(driver);  <br>    }  <br>  <br>    @BeforeEach  <br>    public void reload(){  <br>        page.open();  <br>    }  <br>  <br>    @Test  <br>    public void scrollToStopLoadingAndClick(){  <br>        new WebDriverWait(driver, Duration.ofSeconds(10)).until(  <br>            ExpectedConditions.elementToBeClickable(page.getStopLoadingButton())  <br>        );  <br>  <br>        page.getStopLoadingButton().click();  <br>  <br>        new WebDriverWait(driver, Duration.ofSeconds(10)).until(  <br>            ExpectedConditions.textToBePresentInElement(  <br>                driver.findElement(By.id(&quot;statusMessage&quot;)),&quot;Clicked&quot;)  <br>        );  <br>    }  <br>  <br>    @AfterAll  <br>    public static void closeDriver(){  <br>        driver.quit();  <br>    }  <br>  <br>}</pre><p>Supporting Abstractions:</p><pre>public class DriverFactory {  <br>    public static WebDriver getNew() {  <br>  <br>        WebDriver driver;  <br>        ChromeOptions options = new ChromeOptions();  <br>        options.addArguments(&quot;--disable-smooth-scrolling&quot;);  <br>        driver = new ChromeDriver(options);  <br>        return driver;  <br>    }  <br>}<br><br>public class InfiniteScrollPage {  <br>
<br>    private final WebDriver driver;  <br>  <br>    public InfiniteScrollPage(WebDriver driver) {  <br>        this.driver = driver;  <br>    }  <br>  <br>    public void open() {  <br>        String url = SiteConfig.SITE_DOMAIN +<br>         &quot;/challenges/synchronization/infinite-scroll/&quot;;  <br>        driver.get(url);  <br>    }  <br>  <br>    public WebElement getStopLoadingButton() {  <br>        return driver.findElement(By.id(&quot;loadMoreBtn&quot;));  <br>    }  <br>}<br><br>public class SiteConfig {  <br>  <br>    public static final String SITE_DOMAIN = &quot;https://testpages.eviltester.com&quot;;  <br>   <br>}</pre><h3>Exercise — Consider Your Project</h3><ul><li>Is there a risk on the projects you work with that the automated execution covers the functionality, but not the user experience of that functionality?</li></ul><p>Because if that’s a risk, you might want to revisit your test approach and your automated execution approach.</p><ul><li>How might your approach need to change?</li></ul><h3>Exercise — Server Side Test Approach</h3><p>What do you have to test now that the Server Side calls are involved?</p><ul><li>Is it good enough to just test the front end now, by scrolling up and down and clicking the button?</li><li>Do we also have to test that HTTP call?</li><li>How much do we have to test it?</li></ul><h3>Exercise — API Testing</h3><p>As an exercise, think through what conditions you would want to test on the API, then you can use any of the tooling approaches mentioned to experiment.</p><p>Suggested Tools:</p><ul><li>Browser Dev Tools — fetch requests in console</li><li>Browser Dev Tools — generate cURL and use from CLI</li><li>Bruno</li><li>Postman</li></ul><p><a href="https://apichallenges.eviltester.com/tools/clients">A list of API Tools is available on the API Challenges site</a></p><h3>Exercise — UI and API Interactive Testing</h3><ul><li>Use a Proxy to allow you to observe and interrogate the traffic from the Web Site to the
Internal API backend.</li><li>Amend the Request to trigger an error response and see how the front end handles it.</li><li>Try testing the API from within the proxy.</li></ul><p>A <a href="https://apichallenges.eviltester.com/tools/proxies">list of recommended HTTP Proxy Tools is available on the API Challenges site</a></p><p><a href="https://www.patreon.com/c/eviltester"><strong>Join our Patreon</strong></a><strong> from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.</strong></p><p><em>Originally published at </em><a href="https://www.eviltester.com/blog/eviltester/technical_testing/2026-02-5-web-testing-automating-tooling-masterclass/"><em>https://eviltester.com</em></a><em> on February 6, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6758d2707253" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Test Interaction with HTML form fields]]></title>
            <link>https://medium.com/@eviltester/test-interaction-with-html-form-fields-e10aa9ab1349?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/e10aa9ab1349</guid>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Fri, 16 Jan 2026 10:34:40 GMT</pubDate>
            <atom:updated>2026-02-06T12:06:23.640Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*iFcLRsgXjWTm73IG.png" /></figure><p><em>TLDR; When testing web apps we need to test the interaction between the browser-implemented controls and our system. We don’t test the browser.</em></p><h3>Testing HTML Form Fields</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F_MmW3M_r1Ls%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D_MmW3M_r1Ls&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F_MmW3M_r1Ls%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/5a840057d5bab7075f04970f60b1d6bf/href">https://medium.com/media/5a840057d5bab7075f04970f60b1d6bf/href</a></iframe><p><a href="https://www.youtube.com/watch?v=_MmW3M_r1Ls">Watch on YouTube</a></p><h3>Testing Web Technology</h3><p>I like to use the term Technical Testing.</p><p>Many people don’t.
I think other people view the term as implying that some people are technical and other people are not.</p><p>I use the term to cover “Testing which is informed by Technical knowledge”.</p><p>So if you’re testing a web application, you need to understand web technology; you need a technical understanding of how browsers and the web work, otherwise you can’t test it effectively.</p><p>In this video I try to explore some of the reasons why this is important.</p><p>Suppose I’m testing a web application and it has a form, and in the form is a date field with a calendar pop-up control which lets me set a date.</p><figure><img alt="date drop down" src="https://cdn-images-1.medium.com/max/284/1*NaF1cXBqnadLel8IVoJErw.png" /></figure><h3>What should I test?</h3><p>Should I test that…</p><ul><li>the calendar popup works on mobile and every web browser?</li><li>the calendar control works with the keyboard?</li><li>the calendar control correctly switches years and months when I click the control buttons?</li></ul><p>Well, no.
Not if the HTML looks like this:</p><pre>&lt;input id=&quot;datetime-local-input&quot; type=&quot;datetime-local&quot;<br> name=&quot;datetime-local&quot;&gt;</pre><p>All of the complex UI interaction is implemented by the browser.</p><p>All our application does is use an off-the-shelf browser-provided control.</p><h3>Do I not test the functionality?</h3><p>When we are using off-the-shelf controls like this we have to test the interaction between the control and our application.</p><p>If we added JavaScript event listeners to the control to perform extra validation or functionality then… yes, we have to test that.</p><p>If we use the date entered in the control by the user, then we have to test that our application can handle the data supplied by the value of that control.</p><p>If we are not using the date then we should question why we have the control on our application UI.</p><h3>Learn to spot technology risks</h3><p>I can see from the HTML above that there might be an accessibility risk, because I don’t see any ARIA attributes associated with that control.</p><p>So we might not have configured the control in the page properly to implement all the accessibility requirements we might want.</p><p>If we had created a custom control, or it is a specialised React control, then we do want to test it, because we just introduced a technology risk that might impact mobile and different browsers. We might also have functional bugs.</p><p>But if we are using built-in browser controls we don’t need to test as much functionality related to the control.
We need to test the functional interaction with our application and custom code.</p><h3>Test the Domain Configuration</h3><p>We shouldn’t spend a long time testing validation messages from custom controls, but if a control has been configured then we need to test that the configuration is correct.</p><pre>&lt;input id=&quot;password-input&quot; type=&quot;password&quot; name=&quot;password&quot;<br> pattern=&quot;^[A-Za-z0-9_]{4,12}$&quot; maxlength=&quot;14&quot; required=&quot;true&quot;&gt;</pre><p>I don’t need to test that the password control above shows dots instead of letters and hides the input. But I do need to check that the configuration we supplied for the validation pattern and length is correct. And I need to make sure that the password is actually required (because we configured that it is).</p><blockquote><em>NOTE: eagle-eyed readers who have a technical understanding of how the configuration works will probably have spotted a bug in the above configuration. Yes. We should find that bug in our testing. 
And we know to look for it because we understand that the control has been configured with our application domain rules, and that’s what we are testing.</em></blockquote><h3>Don’t Trust the Client</h3><p>The greater the technical understanding we have of the web technology, the less we will trust any data that the server receives.</p><p>We can edit the DOM to bypass domain configuration.</p><p>We can use JavaScript to enter data that bypasses browser validation.</p><p>We can bypass the browser altogether and just communicate directly with the server.</p><p>A greater technical understanding allows us to assess the risks of the technology more effectively.</p><h3>Testing Web Apps</h3><p>All of the examples above are from:</p><ul><li><a href="https://testpages.eviltester.com/">testpages.eviltester.com</a></li></ul><p>I also have a guide to testing HTML elements to describe some of the risks and issues that we might have when using or configuring off-the-shelf HTML fields and elements.</p><ul><li><a href="https://testpages.eviltester.com/reference/input-elements/">Testing Input Elements — Notes and Exercises</a></li></ul><p>I had to learn web technologies when testing web applications.</p><p>I have seen people functionally test the browser-implemented controls because they didn’t learn web technologies.</p><p>This means they wasted time because they didn’t understand the technology they were testing.</p><p><a href="https://www.patreon.com/c/eviltester"><strong>Join our Patreon</strong></a><strong> from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.</strong></p><p><em>Originally published at </em><a href="https://www.eviltester.com/blog/eviltester/news/html-form-field-testing/"><em>https://eviltester.com</em></a><em> on January 16, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e10aa9ab1349" width="1" height="1" 
alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[New Version of CounterString Extension]]></title>
            <link>https://medium.com/@eviltester/new-version-of-counterstring-extension-0936f1e2a8e9?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/0936f1e2a8e9</guid>
            <category><![CDATA[software-testing]]></category>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Tue, 13 Jan 2026 09:56:50 GMT</pubDate>
            <atom:updated>2026-01-13T10:01:50.930Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*TXJkdYiAobB5Hpc1.png" /></figure><p>I released a new version of my CounterString generation Chrome Extension with new features and a new UI.</p><h3>CounterString Extension</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FEXytN-CR7nM%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DEXytN-CR7nM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FEXytN-CR7nM%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/914402fb9e41f41bf027b316b44ed89c/href">https://medium.com/media/914402fb9e41f41bf027b316b44ed89c/href</a></iframe><p><a href="https://www.youtube.com/watch?v=EXytN-CR7nM">Watch on YouTube</a></p><h3>Source and Download Links</h3><p>I have a tool page for the CounterString extension with more details.</p><ul><li><a href="https://www.eviltester.com/page/tools/counterstringjs/">https://www.eviltester.com/page/tools/counterstringjs/</a></li></ul><p>But you can find the tool:</p><ul><li><a href="https://github.com/eviltester/counterstringjs">source</a></li><li><a href="https://chromewebstore.google.com/detail/counterstring/keklpkmokeicakpclclkdmclhgkklmbd">on the chrome store</a></li></ul><h3>Overview</h3><p>I first released the CounterString Extension in 2019 (I think), and at that point it was a simple input field that created a string and added it as a value to the input field.</p><p>Now it can:</p><ul><li>Generate CounterStrings</li><li>Configurable length and delimiters</li><li>Generate Random Data From Regex</li><li>Generate Character Ranges</li><li>Repeat text, characters and regex</li><li>Add generated data as value, with input event, to an input field</li><li>Trigger Key events to ‘type’ the data with configurable speed</li><li>Binary Chop Calculator for field length 
exploration</li></ul><p>Some of this was a result of migrating code from my Java Test Tool Hub (which I think I worked on about 12 years ago):</p><ul><li><a href="https://github.com/eviltester/testtoolhub">https://github.com/eviltester/testtoolhub</a></li></ul><p>From the Test Tool Hub I took:</p><ul><li>Robot — I had to find a different way of triggering events, but this allows ‘typing’ the data into the field rather than just amending the value.</li><li>Increased customization of the CounterStrings</li><li>Ranges of Data</li><li>Repeated Data</li><li>Binary Chop Range Calculation</li></ul><p>And the Data from Regex feature repurposes some code from my <a href="https://anywaydata.com">AnyWayData.com</a> test data generator, which uses the same JavaScript library to generate data from a Regex — <a href="http://fent.github.io/randexp.js/">RandExp</a>.</p><h3>CounterStrings</h3><p>I wrote about <a href="https://www.eviltester.com/2018/05/counterstring-algorithms">CounterString Algorithms</a> in an earlier blog post.</p><p>This extension uses both reverse generation and forward generation.</p><p>I initially learned about CounterStrings from <a href="https://satisfice.com">James Bach</a>. James has a tool called <a href="https://www.satisfice.com/download/perlclip">PerlClip</a> which is a command-line tool to generate data directly into the clipboard. This uses a reverse algorithm where the full string is generated and then reversed to paste into a field.</p><p>A forward algorithm allows ‘typing’ the String character by character without generating the full string first.</p><p>While I don’t generate the full String first, I do precompute a Schema which I then follow to generate the String, so it takes about the same processing time, but uses less memory and allows streaming of the result.</p><p>I created a different algorithm this time, which is much simpler to understand, but probably takes longer to execute. 
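</p><p><em>To give a flavour of the idea (an illustrative sketch only, not the extension’s actual implementation): a counterstring embeds position markers, so each ‘*’ sits at the string position given by the digits immediately before it. A minimal reverse-generation version in Python:</em></p>

```python
def counterstring(n: int) -> str:
    """Build a counterstring of exactly n characters,
    e.g. counterstring(9) -> '*3*5*7*9*'."""
    parts = []
    pos = n
    while pos > 0:
        token = f"{pos}*"  # this '*' will land at position pos
        if len(token) > pos:
            token = token[len(token) - pos:]  # truncate to fit at the start
        parts.append(token)
        pos -= len(token)
    # tokens were collected back-to-front, so reverse before joining
    return "".join(reversed(parts))
```

<p>Pasting such a string into a field and reading the last visible marker tells you how many characters the field actually accepted.</p><p>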
I’ll need to update the algorithm page at some point, but you can see the algorithm implemented in the source code.</p><ul><li><a href="https://github.com/eviltester/counterstringjs/blob/master/extension/js/generateSchema.js">github.com/eviltester/counterstringjs/blob/master/extension/js/generateSchema.js</a></li></ul><h3>Other CounterString Tools</h3><p>A quick hunt around the web revealed a few more CounterString tools, some of which I don’t think I had found before. So these might be interesting for anyone thinking of creating their own implementation.</p><ul><li>PERL: <a href="https://www.satisfice.com/download/perlclip">https://www.satisfice.com/download/perlclip</a></li><li>TYPESCRIPT: <a href="https://github.com/j19sch/counterstring/">https://github.com/j19sch/counterstring/</a></li><li>RUBY: <a href="https://github.com/jamesmartin/counterstring">https://github.com/jamesmartin/counterstring</a></li><li>JAVA: <a href="https://github.com/eviltester/testtoolhub">https://github.com/eviltester/testtoolhub</a></li><li>PYTHON: <a href="https://github.com/deefex/pyclip">https://github.com/deefex/pyclip</a></li><li>RUST: <a href="https://github.com/thomaschaplin/rust-counter-strings">https://github.com/thomaschaplin/rust-counter-strings</a></li></ul><p>Another Chrome Extension:</p><ul><li><a href="https://github.com/Pawel-Albert/utilities-for-testing-extension">https://github.com/Pawel-Albert/utilities-for-testing-extension</a></li><li>and some associated Test Pages that I haven’t seen before</li><li><a href="https://pawel-albert.github.io/utilities-for-testing-extension/">https://pawel-albert.github.io/utilities-for-testing-extension/</a></li></ul><p><a href="https://www.patreon.com/c/eviltester"><strong>Join our Patreon</strong></a><strong> from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.</strong></p><p><em>Originally published at </em><a 
href="https://www.eviltester.com/blog/eviltester/news/counterstring-new-version-dec-2025/"><em>https://eviltester.com</em></a><em> on January 13, 2026.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0936f1e2a8e9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Software Testing Podcast — Episode 026 — Portfolio Projects — The Evil Tester Show]]></title>
            <link>https://medium.com/@eviltester/software-testing-podcast-episode-026-portfolio-projects-the-evil-tester-show-d027e38c58b9?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/d027e38c58b9</guid>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Wed, 24 Dec 2025 12:53:02 GMT</pubDate>
            <atom:updated>2025-12-24T12:54:43.200Z</atom:updated>
            <content:encoded><![CDATA[<h3>Software Testing Podcast — Episode 026 — Portfolio Projects — The Evil Tester Show</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*drCiMjV8Na2i7Ids.png" /></figure><p>Should you have an online portfolio showcasing your skills and abilities to help get a job?</p><p>It really depends on the recruitment process. But… if I’m recruiting and you have a profile, then I will have looked at it. So it better be good.</p><p>Welcome to The Evil Tester Show! Covering the common question of “Should we have portfolio projects?”, Alan shares his experience as both a creator and reviewer of portfolios, covering everything from the dos and don’ts of GitHub repos, to which skills actually benefit from a showcase, and how a well-crafted README can make a difference. Whether you’re considering building a portfolio, looking to upgrade your current one, or wondering where to focus your efforts, this episode has practical advice to help you stand out as a top 1% applicant and avoid creating a portfolio that could hurt more than help.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FpqqYTmLFj5Y%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DpqqYTmLFj5Y&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FpqqYTmLFj5Y%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/98e6b6ae0bdf7727385f9bf64ee86f92/href">https://medium.com/media/98e6b6ae0bdf7727385f9bf64ee86f92/href</a></iframe><p><a href="https://www.eviltester.com/show/026-portfolio-projects/">audio and subscribe links here</a></p><h3>Episode Summary</h3><p>A practical look at online portfolios for developers and testers. Covering both trying to find your next role and also looking to switch careers. 
Most people don’t have public portfolios so that means you can really stand out if you build a good one.</p><p>We explore what actually matters in a portfolio: the difference between learning repos, personal projects, and true showcase projects designed for job hunting. We break down the content that impresses, and the stuff that doesn’t. We emphasize the need for simplicity and quality. Expect to find tips about managing forks on GitHub, writing an effective README, documenting your learning process, and promoting your work across LinkedIn and blogs to maximize visibility.</p><p>Tailor your projects to communicate exactly what you want and make your portfolio a focused sales tool rather than a catch-all for everything you’ve ever coded. Plus, advice on how to keep your repo clean, how to handle learning projects when switching roles or languages, and why you should never be afraid to show that you’re still learning.</p><p>Most recruiters won’t look at your portfolio, but when they do, you’ll want to be in the top 1%. If you do decide to build a portfolio, make sure it represents your best work.</p><h3>Outline</h3><p>00:00 Portfolio Value</p><p>The episode starts by discussing the real-world value of online portfolios. Most recruiters don’t look at them, but if the right person does, yours better be impressive. The host shares personal experiences with portfolios and offers perspective on how useful these tools can really be.</p><p>02:59 Stand Out Skills</p><p>We dig into what actually makes you, and your portfolio, stand out. Typical skills tested during interviews like defect writing, test cases, or strategies do not need to be shown off in a repo. Instead, focus on things that make you unique, such as personal opinions, in-depth case studies, or examples of thought leadership. 
This is often best done through blogs or articles, linked to your GitHub on your LinkedIn profile.</p><p>09:19 Project Types</p><p>A breakdown of project types:</p><ul><li>Learning Projects: Messy, experimental, used to learn new tools, languages, or skills. It’s okay if they’re not polished, but document what you learned.</li><li>Personal Projects: Hobbies or interests that show personality and initiative. These don’t need to be perfect but should reflect genuine enthusiasm and curiosity.</li><li>Portfolio Projects (Showcase Projects): Your best, most complete work. These should be polished, well-documented, and focused on the skills you’re trying to sell.</li></ul><p>Making these distinctions helps recruiters judge your work appropriately. Always clarify the purpose in your README files.</p><p>12:27 Showcase Projects</p><p>Showcase projects are your sales pitch: they need to be as close to perfect as possible. The host shares advice on what to include in a GitHub repo:</p><ul><li>Use your profile README to introduce yourself professionally.</li><li>Only include original repos, not forks, to keep things clean.</li><li>Don’t commit IDE or compiled files; use .gitignore.</li><li>Use static analysis tools and add unit tests to show you care about quality.</li><li>README documentation should cover project intent, installation instructions, known limitations, and choices made during development.</li><li>Whenever possible, leverage GitHub Actions to show your code works.</li></ul><p>Recency and maintenance matter when looking for work. Keep your portfolio projects updated and relevant.</p><p>19:39 Promoting Yourself</p><p>Once you’ve created a portfolio project, get the word out. Add it to your LinkedIn profile, feature it as a project, and share articles about your work. 
Updating and promoting your portfolio regularly keeps it visible and demonstrates your ongoing growth.</p><p>21:44 Final Advice</p><p>Most professionals don’t bother with portfolios, a missed opportunity for many, but if you go the extra mile to create one you can stand out. Make sure it’s good; you don’t want to stand out because it’s bad. Even if recruiters don’t look at it most of the time, the right one will, and that’s when it really helps. Sharing your learning and even your mistakes is crucial for personal development, especially as you gain more experience.</p><h3>Key Takeaways</h3><ul><li>Most developers and testers don’t have a portfolio, so building one sets you apart when it’s high quality.</li><li>A strong online portfolio acts as a sales tool, focusing on your best, most relevant skills for the job you want.</li><li>Differentiate between learning projects, personal hobby repos, and polished, focused showcase repositories.</li><li>Keeping documentation clear and up to date is just as important as good code in your portfolio.</li><li>Promoting your portfolio on platforms like LinkedIn and your blog can drive the right attention.</li></ul><h3>Quotes</h3><blockquote><em>“Should you have an online portfolio showcasing your skills and abilities and will that help you get a job? Well, I got an online portfolio, I’ve had one for years. But I also know that people don’t look at it.”</em></blockquote><blockquote><em>“If you’ve got a portfolio and it’s bad, it might put interviewers off. 
If you’ve got a learning project and you’ve been copying someone on YouTube… When people look at that, they’re gonna go, ‘This person doesn’t know what they’re doing.’”</em></blockquote><blockquote><em>“What you want to showcase is something that makes you different, a reason why someone would pick you up as opposed to someone else.”</em></blockquote><blockquote><em>“Portfolio project says, ‘I know how to do this, therefore judge me on the work.’ A learning project is, ‘I learned from these other people.’”</em></blockquote><p><a href="https://www.patreon.com/c/eviltester"><strong>Join our Patreon</strong></a><strong> from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.</strong></p><p><em>Originally published at </em><a href="https://www.eviltester.com/show/026-portfolio-projects/"><em>https://eviltester.com</em></a><em> on December 24, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=d027e38c58b9" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Episode 023 — The Software Testing Job Market in 2025 with Jack Cole — The Evil Tester Show]]></title>
            <link>https://medium.com/@eviltester/episode-023-the-software-testing-job-market-in-2025-with-jack-cole-the-evil-tester-show-11fbca3a557c?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/11fbca3a557c</guid>
            <category><![CDATA[software-testing]]></category>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Tue, 23 Dec 2025 15:02:20 GMT</pubDate>
            <atom:updated>2025-12-23T15:04:52.667Z</atom:updated>
            <content:encoded><![CDATA[<h3>Episode 023 — The Software Testing Job Market in 2025 with Jack Cole — The Evil Tester Show</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*3pdDq3xcrqA1bspw.png" /></figure><p>Welcome to Episode 23 of The Evil Tester Show. This episode dives deep into the realities of tech recruitment, job search strategies, and career planning for Software Developers and Testers — with expert recruitment consultant Jack Cole from WEDOTech.uk. Whether you’re an experienced Test manager, expert Tester or just starting out, Jack’s decades of industry know-how will give you the tips and tricks you need to understand what works in today’s competitive market.</p><p>Are you trying to figure out how to break into the software testing job market or make your next big move? In this packed hour-long conversation, we cover everything from market trends, LinkedIn networking, and the recruitment pipeline, to building a career roadmap and even the AI hype machine. 
Grab your notebook, settle in, and get ready for real insights you can use — plus a few stories from the trenches and actionable tips for every step of your job hunt.</p><p>We keep things down-to-earth and practical — no stuffy jargon, just the honest, practical advice that will help you stand out, get noticed, and map your next steps with confidence.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FZV2QOG7-0b0%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DZV2QOG7-0b0&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FZV2QOG7-0b0%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/ab3f5fcfa63b36d28b07a42caf15cfe1/href">https://medium.com/media/ab3f5fcfa63b36d28b07a42caf15cfe1/href</a></iframe><p><a href="https://www.eviltester.com/show/023-job-market-jack-cole/">audio and subscribe links are here</a></p><h3>Special Guest: Jack Cole</h3><p>Find Jack @ <a href="https://www.wedotech.uk/">wedotech.uk</a></p><p>Follow Jack on <a href="https://www.linkedin.com/in/jack-cole-354bb953/">LinkedIn</a> for jobs and job hunting tricks.</p><ul><li>Over 14 years of experience in software testing and development recruitment</li><li>Organizer of industry leadership forums and community events</li><li>Well-known on LinkedIn for candid recruitment advice and market insights</li><li>Recently working on new career resources and a community project called “build your edge”</li></ul><h3>Key Takeaways</h3><ul><li>Job market for testers is saturated and highly competitive; you need to stand out.</li><li>Networking beats blind applications. 
Real conversations and tailored outreach are what make the difference.</li><li>Treat your job search like a sales process: research prospects, follow up, and show clear business value.</li><li>Testimonials and recommendations on LinkedIn really do help — don’t be shy about asking!</li><li>Plan your career with a long-term view: the next job is only one stepping stone; know where you want to go in two or three moves.</li></ul><h3>Current State of the Software Testing Recruitment Market</h3><ul><li>Market Imbalance and Challenges: The job market for software testers is highly saturated, with a significant disparity between job openings and candidates. Testers are often first impacted by redundancies during downturns and experience slower recovery in hiring.</li><li>Competition and Differentiation: With many skilled professionals seeking roles, standing out requires candidates to showcase unique, niche, or highly technical skills, as well as experience solving specific business problems.</li></ul><h3>Application and Recruitment Process Insights</h3><ul><li>Volume of Applicants and Recruiter Constraints: Recruiters and talent managers face overwhelming application volumes, leading to reliance on trusted networks and referrals rather than job board applicants. Initial and final applicants often get more attention than those in the middle.</li><li>Being Proactive and Strategic: Candidates must do more than submit applications. 
They should connect directly with hiring stakeholders, use tailored messaging, and highlight relevant achievements and problem-solving experiences.</p><h3>Effective Networking and LinkedIn Strategies</h3><ul><li>Optimizing LinkedIn Presence: Candidates are encouraged to actively engage on LinkedIn, use the ‘Open to Work’ banner strategically, request testimonials, and maintain consistency between their CV and online profiles.</li><li>Building Relationships: Networking is not just about asking for jobs but consistently reminding people of your expertise and availability. Following target companies and key individuals, and using features like LinkedIn’s notification bell, helps candidates stay informed and responsive.</li></ul><h3>Treating the Job Search as a Sales and Marketing Process</h3><ul><li>Sales Mindset for Job Seekers: Successful candidates adopt sales strategies: prospecting potential employers, following leads, and crafting messages that focus on business value and solved challenges rather than just technical skills.</li><li>Nurturing Connections: Building and leveraging past relationships with colleagues and recruiters can provide priority access to opportunities, similar to VIP entry compared to standing in the general job queue.</li></ul><h3>Working with Recruiters vs. Direct Applications</h3><ul><li>Recruiter Partnerships: Engaged recruiters advocate for their candidates and can streamline processes, coach applicants, and accelerate decision-making if the fit is strong.</li><li>Direct Company Outreach: When recruiter communication is lacking or unclear, candidates may consider bypassing recruiters, but if a recruiter is actively involved, direct outreach can sometimes be counterproductive.</li></ul><h3>Salary Trends and In-Demand Skills</h3><ul><li>Market Salary Ranges: Salaries for Tester roles in the UK range from £55k-£140k depending on level and specialization, with London showing higher averages. 
Technical, niche, and SRE/DevOps-adjacent skills command a premium.</li><li>Comparison with Development Roles: Tester salaries are relatively aligned with developer roles, especially as more technical and AI-adjacent skills become increasingly sought after.</li></ul><h3>Career Planning and Long-Term Strategy</h3><ul><li>Forward Planning: Candidates are advised to plan two to three steps ahead in their careers, considering potential transitions into product management, leadership, SRE, security, or other specialized fields.</li><li>Leveraging Transferable Skills: Test professionals possess broad exposure to software lifecycles, equipping them with unique skills highly valued in adjacent roles, particularly in product and leadership paths.</li></ul><h3>Interview Preparation and Process Trends</h3><ul><li>Interview Structure: Interviews are progressively structured, starting with informal conversations to assess skills and chemistry, moving to technical/architecture tasks, and finally focusing on cultural fit and conflict management.</li><li>Recruitment Influence: Recruiters can sometimes influence hiring processes, helping candidates by providing detailed context, interview preparation resources, and feedback to improve chances of success.</li></ul><h3>Impact of AI on QA Roles and Future Skills</h3><ul><li>AI-Driven Demand: There is emerging demand for testers with experience in AI and data science, particularly for use cases such as chatbots and data-driven testing functions.</li><li>Adapting Skills: The future job market requires a blend of traditional Test expertise with familiarity in AI, infrastructure-as-code, monitoring, and DevSecOps, depending on candidates’ long-term goals.</li></ul><h3>Community, Events, and Personal Branding</h3><ul><li>Building Credibility: Engaging in community events, seeking and giving testimonials, and maintaining visibility help candidates bolster their professional brand and trustworthiness.</li></ul><h3>Final 
Recommendations</h3><ul><li>Candidates should think beyond immediate job needs, network proactively, invest in their brand, and adopt a persistent, sales-oriented approach.</li></ul><h3>Episode Summary</h3><p>In this episode, Alan sits down with Jack Cole — a recruitment expert who knows the software testing world inside and out — for a practical, straight-talking look at the realities of getting hired today. Jack breaks down the state of the UK (and wider) hiring market for testers and Software Development professionals: he’s open about just how crowded and tough things have gotten, especially with recent layoffs, market volatility, and a shift in what employers are looking for.</p><p>Jack doesn’t sugarcoat it: landing a job takes more than just hitting “apply” on LinkedIn. He walks us through why networking is more important than ever, and how to make sure your online presence works for you — including tips on using the “Open to Work” banner effectively, following companies and managers, and writing those all-important tailored messages.</p><p>A big part of the discussion focuses on treating your job search like a sales process — prospecting for leads (jobs and companies), pitching yourself with clear value (“here’s the business problem I solve”), and using testimonials for social proof. 
Jack also digs into how recruiters actually operate behind the scenes: how they filter candidates, the difference between recruiter and direct applications, and why getting in early or late can change your odds.</p><p>We also dig into salary expectations at different levels (with London/non-London comparisons), the increasing demand for niche and technical skill sets (like infrastructure as code, and AI), and how to think long-term about your career — planning two or three moves ahead, not just the next job.</p><h3>Notable Quotes and Examples</h3><blockquote><em>“To stand out in today’s market, you’ve got to have something unique or special about you — something niche or super technical. Clients want people who’ve solved real problems in the world, not just ticked skills boxes.”</em></blockquote><blockquote><em>“I would actually say be one of the first or be one of the last to apply — otherwise you might just get lost in the middle. But even more, you have to do more now to get recognized and noticed.”</em></blockquote><blockquote><em>“Treat your job search as a sales process. You’re not just applying; you’re prospecting, researching, following up, showcasing value, and backing it with social proof.”</em></blockquote><blockquote><em>“If you just apply for jobs, you’re basically standing in a queue to get into a nightclub — and it’s probably already full. Make friends with the manager or the owner, and you get right in. That’s the power of networking.”</em></blockquote><blockquote><em>“Don’t just think about your next role — think about the next two or three roles you want, and plan your skills and connections that way.”</em></blockquote><p>Take Action:</p><ul><li>“Standing out means more than having the skills. It’s about solving real business problems and showing you can do it.”</li><li>“If you’re just applying for jobs, you’re in a long queue. 
It’s the relationships that get you in the door.”</li><li>“Think beyond just the next gig — where do you want to be two, three moves from now?”</li><li>“Don’t be shy — reach out for testimonials and make your value visible on LinkedIn.”</li><li>“Most interviewers want to see outcomes, not just tech buzzwords. Prep your stories!”</li></ul><h3>Detailed Episode Breakdown</h3><h3>00:00 — Welcome &amp; Intro</h3><p>Alan introduces Jack Cole, giving listeners a quick summary of Jack’s long history in tech recruitment, community events, and active presence on LinkedIn.</p><h3>02:03 — Market Overview</h3><p>Jack immediately lays out the current reality: the QA/testing job market is crowded, with way more candidates than positions, especially at the leadership level. He shares hard data from LinkedIn Recruiter:</p><ul><li>~14,000 active QA management profiles in the UK, 400k open to work</li><li>Only 1,400 jobs posted for those roles. This creates a real “buyer’s market” where employers can pick and choose. Redundancies have hit hard, and experienced people are often the first out and slowest to get rehired.</li></ul><h3>10:33 — Networking Secrets</h3><p>Forget just applying to jobs — Jack emphasizes the importance of maintaining visibility and relationships. 
He covers:</p><ul><li>Effectiveness of LinkedIn “Open to Work” banner</li><li>How recruiters get lost in a flood of CVs and rely on their trusted network</li><li>The importance of being top-of-mind when recruiters get a relevant position</li><li>Why internal recruiters are swamped and why timing your applications matters</li></ul><h3>16:00 — Sales Approach</h3><p>Jack provides actionable advice:</p><ul><li>Treat each job application like a sales pitch:</li><li>Research the company, identify managers, and connect thoughtfully</li><li>Message those contacts with specific examples of relevant problems you’ve solved</li><li>Always focus on business outcomes, not just tech skills</li><li>Use LinkedIn testimonials as “social proof”</li><li>Don’t be afraid to ask directly for recommendations</li><li>Keep “outreach” messages to under 120 words for best response</li></ul><h3>32:01 — Salary Insights</h3><p>Jack breaks down salary expectations for QAs:</p><ul><li>UK average Tester: £55k (maxing out ~£70k for mid-level, up to £80k for senior, £85-£140k for highly specialized leads)</li><li>London rates are higher: up to £100k for seniors, niche skills up to £140k</li><li>Niche skills in demand: automation, infrastructure as code, operational acceptance, non-functional testing. He also compares Test with Dev salaries and explains how recruiters educate clients on setting realistic budgets.</li></ul><h3>38:49 — Career Progression</h3><p>Jack encourages listeners to plan beyond the immediate job hunt. 
Advice includes:</p><ul><li>Map out 2–3 moves ahead: do you want to become a product manager, security engineer, or Test leader?</li><li>Connect with people who have already made those transitions; ask them what worked</li><li>Capitalize on breadth as a Test professional — more options open than you think</li></ul><p>He notes a trend of Testers moving into product roles, thanks to their cross-team communication and customer focus.</p><h3>41:20 — Interview Strategies</h3><p>When it comes to interviews:</p><ul><li>Always treat them as a two-way conversation — it’s not just you being evaluated</li><li>Be ready to talk not just about skills, but about specific outcomes and problem-solving</li><li>For technical roles, expect more system architecture, real-life scenario questions, code reviews, or pairing exercises, not just coding tests</li><li>Do research on interviewers and company context</li></ul><p>Jack emphasizes: preparedness and rapport-building really matter.</p><h3>47:47 — AI Impact</h3><p>AI is an emerging focus area:</p><ul><li>AI Test roles are already appearing (e.g., chatbot testing, LLM understanding)</li><li>Companies are investing, but the field is new; few people have direct experience</li><li>Jack suggests: if you want to ride the AI wave, start experimenting and learning — but the choice should fit your desired career direction (AI, SRE, DevSecOps, etc.)</li></ul><h3>52:06 — Jack’s Resources</h3><p>Jack teases new resources:</p><ul><li>“Build Your Edge” — a new community for learning, events, career coaching, and CV support</li><li>More actionable resources and maybe a whitepaper on career transitions in QA coming soon</li><li>Advice: Keep following for updates, and use company/leader “bell” notifications on LinkedIn</li></ul><p><em>Originally published at </em><a href="https://www.eviltester.com/show/023-job-market-jack-cole/"><em>https://</em>eviltester.com</a><em> on December 23, 2025.</em></p><img 
src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=11fbca3a557c" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Software Testing Podcast — AI Optimism or Pessimism — The Evil Tester Show Episode 028]]></title>
            <link>https://medium.com/@eviltester/software-testing-podcast-ai-optimism-or-pessimism-the-evil-tester-show-episode-028-0bd018642b48?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/0bd018642b48</guid>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Mon, 22 Dec 2025 16:45:50 GMT</pubDate>
            <atom:updated>2025-12-22T16:49:23.409Z</atom:updated>
            <content:encoded><![CDATA[<h3>Software Testing Podcast — AI Optimism or Pessimism — The Evil Tester Show Episode 028</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*qoc5pAEPXVf7l-dLmou31g.jpeg" /><figcaption>AI is changing our roles — for the better?</figcaption></figure><p>Rather than a retrospective on the year in general, this podcast looks at how I’ve been approaching learning AI, where the industry has gone wrong, and what to look forward to with AI.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FGSweziWVfDE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DGSweziWVfDE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FGSweziWVfDE%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/67fe416f7ed769fa67b61cec0082a70d/href">https://medium.com/media/67fe416f7ed769fa67b61cec0082a70d/href</a></iframe><p><a href="https://www.eviltester.com/show/029-ai-optimism-or-pessimism/">audio links and subscribe here</a></p><h3>AI Optimism or Pessimism</h3><p>I sat down to think about what I am optimistic about, and what I am pessimistic about, in the Software Development industry’s use of AI.</p><p>I see a lot of positive benefits for AI use in the Software Development Process.</p><p>There are many risks involved with adding AI into products, and we should rightly be concerned about those and try to identify how to test AI-enabled applications properly.</p><p>Many of the statements concerning how Software Testing has to evolve concern me, e.g. more technical skills, more focus on risk, pointing out flaws in process, identifying architecture risk. I thought Software Testing already did this. 
So perhaps one good thing from AI is an evaluation of what part Software Testing plays in the Development Process, and what fundamental skills and knowledge we need in order to test software effectively.</p><h3>AI Productivity</h3><p>I’m incredibly optimistic about what I can personally do with AI. It’s massively increased what I can get done, especially in programming, content creation, and marketing.</p><p>Companies are cutting jobs and blaming AI. But companies have always looked for scapegoats when trimming staff; AI is just the latest excuse.</p><p>If leadership truly saw their people as the reason for their growth, they’d use AI to amplify productivity and expand the team’s capabilities, not use it as a reason to shrink down.</p><blockquote><em>“If companies viewed the staff as a core engine for growth, they wouldn’t want to get rid of people when AI comes in. They’d want to use AI to make the staff more effective and grow the company. Clearly they don’t.”</em></blockquote><p>Many management teams see staff as a cost center, not as an engine for growth. Sales gets viewed as the engine of growth and is often paid on commission, so it seems cheap. The product is a ‘thing to be sold’. And management… are always essential.</p><p>Some organizations don’t think “how do we help our people do more?” They take the position: “how can we do the same (or less) with fewer people?”</p><p>AI is the scapegoat. Not the cause.</p><h3>Short-Term Pessimism</h3><p>AI is overhyped. Leadership decisions are being made based on hope, not the technology’s actual capabilities.</p><p>In the short term, I’m a bit pessimistic, not about AI itself, but about how people (management) are choosing to react to it. Too many seem to want to use AI to restrict and constrain, instead of to grow.</p><p>But our work is already changing. New tools exist. We can’t afford to be complacent. 
We need to experiment and figure out where AI makes sense.</p><h3>How Should We Use AI?</h3><p>Not every product needs AI enablement and features. There is too much risk, particularly if you do not know how to test the results.</p><ul><li><strong>I do NOT use AI for direct human-to-human communication.</strong></li><li><em>If you get an email from me, I wrote it. If I get one from you, I read it, not an AI.</em></li><li><strong>I would not use AI for company procedures or performance reviews.</strong></li><li><em>Adding AI to these human, recognition-based workflows is a mistake.</em></li><li><strong>I won’t use AI as a buffer between people.</strong></li><li><em>Don’t use AI to hide. Engage.</em></li></ul><p>I do use AI to summarize public podcasts, blog posts, etc.: stuff I choose to consume, but not personal, direct communication.</p><p>Management is about people, and putting an AI buffer between you and your team leads to bad communication and alienation.</p><blockquote><em>“Your boss should already know what you’re doing. They should be helping you explain the value that you’ve added. This is a human process of recognizing value. It’s not something that we add AI into the middle of.”</em></blockquote><h3>How I Learned AI</h3><p>When I started playing with AI, I created a podcast summarizer. My process for learning anything new is to find a project and use the new technology to build something.</p><p>I wanted to consume more podcasts but couldn’t keep up. So, I wrote a tool using:</p><ul><li>Hugging Face libraries</li><li>Whisper AI for transcription</li><li>Different LLM models for summarization</li><li>Prompt engineering</li><li>Ollama for running models locally</li><li>OpenRouter for connecting to cloud-based models</li></ul><p>I did everything locally as much as possible, partly to learn the limits before relying on cloud or paid models. 
Only after learning the limits did I see real value in some of the paid services.</p><p>Don’t believe in ‘magic’ from big models until you know what smaller, free models can actually do.</p><h3>Step By Step Adoption</h3><h3>Step 1: Chat Interfaces</h3><p>At first, I used AI the same way as most people: from a chat interface, like ChatGPT or Claude. It worked but had limits. I wanted something embedded into my IDE.</p><h3>Step 2: Embedding into the IDE</h3><p>Now, I use a plugin called <a href="https://continue.dev/">Continue</a>, which works in both IntelliJ and Visual Studio Code.</p><ul><li>Initially connected to Ollama (local code models)</li><li>Later I switched to OpenRouter for bigger cloud models</li></ul><p>So now, I can:</p><ul><li>Jump into a chat interface via my IDE</li><li>Still use chatbots like Claude/ChatGPT for generic things</li><li>Never look at Stack Overflow anymore; everything’s in the IDE chat</li></ul><h3>Step 3: Agentic Coding</h3><p>I started using agentic tools, like OpenCode, on the command line. They can scan my whole codebase, not just what’s open in the IDE.</p><p>I use it to:</p><ul><li>Create a page object model for a URL</li><li>Write a test for that model</li><li>Generate K6 scripts</li><li>Create Swagger files</li><li>Write scripts</li><li>Write API abstractions</li><li>… so much</li></ul><p>Now instead of:</p><ol><li>Writing the test</li><li>Building abstractions and models manually</li><li>Iteratively improving</li></ol><p>I let the AI generate code, then I come in and change or adapt what I need.</p><blockquote><em>“I’m using it as a tool to help me create things faster. 
Then I come in and use my knowledge and experience to review the code or expand it.”</em></blockquote><h3>Step 4: OpenSpec for Better Requirements</h3><p><a href="https://github.com/Fission-AI/OpenSpec">OpenSpec</a> helps generate and maintain evolving, up-to-date requirements documentation.</p><ul><li>Specs are continuously updated as requirements change</li><li>OpenCode has documentation to work from, in addition to my prompts</li></ul><p>When I use OpenSpec, I’m letting the Agentic Coding Assistant write even more code than when I use it iteratively at the CLI.</p><p>This has helped me:</p><ul><li>Write a new application faster</li><li>Keep requirements and documentation in sync</li><li>Automate coverage incrementally</li></ul><p>I’m not sure how well it would scale to a full team, but it works well for a solo dev.</p><p>But…</p><ul><li>AI sometimes makes strange design choices</li><li>If you don’t review and refactor, your codebase can fall apart fast</li><li>Human review is still necessary</li></ul><p>AI saves time, but still requires fundamental core skills to evaluate its output and fix it when it goes wrong.</p><p>I use AI for programming all the time, but not for actual testing.</p><h3>Agentic AI in Testing</h3><p>I’m beginning to look into Agentic AI tools specifically designed for testing.</p><p>I’m currently experimenting with AQE Fleet from <a href="https://www.linkedin.com/in/dragan-spiridonov/">Dragan Spiridonov</a>:</p><ul><li><a href="https://github.com/proffesor-for-testing/agentic-qe">https://github.com/proffesor-for-testing/agentic-qe</a></li><li><a href="https://forge-quality.dev/">https://forge-quality.dev/</a></li></ul><h3>Pros and Cons</h3><p>Right now, I am optimistic about AI empowering individual development team members. It makes us faster, and the tools are getting better.</p><p>But I am pessimistic about the way companies choose to adapt to AI. 
They focus on headcount reduction instead of increased effectiveness and growth.</p><h3>What is Testing?</h3><blockquote><em>“I keep seeing posts that testers need to evolve into quality engineers and risk analysts and customer experience advocates because we’ll be reviewing the systems for risk more than we’re testing them… but I thought we already did those things.”</em></blockquote><p>People keep saying testers now need to be risk analysts, customer experience advocates, technical experts, etc. But that’s what good testers have always done. It almost feels like AI is forcing companies to rediscover what software testing really is, and always has been.</p><p><a href="https://www.patreon.com/c/eviltester"><strong>Join our Patreon</strong></a><strong> from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.</strong></p><p><em>Originally published at </em><a href="https://dev.to/eviltester/software-testing-podcast-ai-optimism-or-pessimism-the-evil-tester-show-episode-028-4jln"><em>https://dev.to</em></a><em> on December 22, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=0bd018642b48" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Convert Chrome Dev Tools Recording to WebDriver Java Code Using Free OpenCode AI Agent]]></title>
            <link>https://medium.com/@eviltester/convert-chrome-dev-tools-recording-to-webdriver-java-code-using-free-opencode-ai-agent-7d737019f91d?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/7d737019f91d</guid>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Fri, 19 Dec 2025 11:51:32 GMT</pubDate>
            <atom:updated>2025-12-22T16:51:18.990Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*MJheIsaHuz0mvMwb.png" /></figure><p><em>TLDR; Record and Playback is a notoriously bad way to automate. But what if we record, then have AI write code from the recording? Answer: it is a little better.</em></p><p>I performed an experiment to try and demonstrate how to replicate the AI-powered test automation features commonly found in commercial SaaS testing tools using open source solutions. The process involved recording user interactions in an application with Chrome DevTools Recorder, exporting the recording as JSON, and then using an AI tool to convert the JSON into JUnit tests with WebDriver and page objects.</p><h3>Overview Video</h3><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F9P_-9AvucwE%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D9P_-9AvucwE&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F9P_-9AvucwE%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/9749e0121df4fc0a4cac5ecd389af9e0/href">https://medium.com/media/9749e0121df4fc0a4cac5ecd389af9e0/href</a></iframe><p><a href="https://www.youtube.com/watch?v=9P_-9AvucwE">Watch on YouTube</a></p><p>In this video I:</p><ul><li>record some web actions using Chrome Dev Tools recorder</li><li>fail to play them back because record and playback is notoriously bad</li><li>feed the recorded JSON file into OpenCode with a prompt</li><li>check the results of the AI process writing Automated Execution using WebDriver and Java</li></ul><p>We do get working automated coverage out the back. 
I would need to edit it a little, and if I wanted to do this longer term I would need to amend the prompt to meet my coding preferences.</p><h3>Process</h3><p>I used the triangle application on Test Pages:</p><ul><li><a href="https://testpages.eviltester.com/apps/triangle/">https://testpages.eviltester.com/apps/triangle/</a></li></ul><p>Test Pages has a deliberately hard-to-automate sidebar. The easy automation is through the top level menu.</p><p>But I recorded the Triangle application by:</p><ul><li>clicking on the side menu bar to open the Triangle application and the instructions</li><li>reading the instructions</li><li>clicking back on the triangle application</li><li>interacting with the app to generate invalid and valid triangles</li></ul><p>When I tried to play back the recording in the Chrome Dev Tools, it failed.</p><p>Most of us experience failure with record and playback tooling.</p><h3>AI Tooling</h3><p>I have OpenCode, configured to use Chrome Dev Tools MCP, and working with Kat Coder Pro Free in OpenRouter, but could just as easily have used any of the free coding LLMs on OpenCode Zen or with Ollama locally.</p><ul><li><a href="https://opencode.ai/">https://opencode.ai/</a></li><li><a href="https://github.com/ChromeDevTools/chrome-devtools-mcp/">https://github.com/ChromeDevTools/chrome-devtools-mcp/</a></li><li><a href="https://openrouter.ai/">https://openrouter.ai/</a></li><li><a href="https://openrouter.ai/kwaipilot/kat-coder-pro:free">https://openrouter.ai/kwaipilot/kat-coder-pro:free</a></li></ul><p>I used a basic prompt:</p><p>“given the json attached, convert the json into a set of junit tests using webdriver and page objects to recreate the flows recorded by the user. 
The json file is a recording made in Chrome Dev Tools recorder”.</p><p>The process took about 5 minutes to run.</p><ul><li>It opened the web page a few times</li><li>Created some Page Objects</li><li>Found that the menu interaction was hard</li><li>Decided to jump straight to the triangle page and automate that way</li><li>It did replicate the functional interaction included in the recording</li><li>It did generate assertions for the results</li><li>I could have used the generated test code with some small amendments</li></ul><h3>Lessons Learned</h3><p>SaaS tools that take this approach will use Agents, which are essentially longer prompts. I should have added more conditions into the prompt:</p><p>“given the json attached, convert the json into a set of junit tests using webdriver and page objects to recreate the flows recorded by the user. The json file is a recording made in Chrome Dev Tools recorder. In the test code, do not write any findElement code, always abstract the location and interaction into a page object. Use existing page objects where possible and try to re-use existing methods. Make sure that any page object methods you use are used in a test so we have coverage of the interactions.”</p><p>Over time, if I wanted to continue to automate using this process, I would refine the prompt further until I got the results I wanted, and possibly encode this prompt as an agent.</p><h3>Would I do this?</h3><p>It is unlikely that I would do this in the real world.</p><p>This seems like the replication of a record and playback approach.</p><p>I do create Page Objects and have created initial execution coverage using AI.</p><p>I have some empathy for AI tooling scanning applications and building flows through the application to automatically interact and generate a wide variety of data, to increase the scope of coverage across a few simple flows. I can see potential in augmenting a team to allow the team to go off and do more detailed stuff. 
Provided the AI tooling doesn’t get noisy and keep distracting the team.</p><p>For any functionality as simple as I experimented with here, I hope that when people evaluate SaaS tools, they also compare the paid tools to the capabilities of Open Source tools.</p><p>Nothing wrong with paid tools. They are incredibly valuable when they make hard things simple, and add value when they make hard things easier.</p><p>But, when the activity is simple and the functionality differences between Open Source and paid are small, I hope companies consider “what if we upskilled our staff a little such that we could use the Open Source tools?”</p><p><a href="https://www.patreon.com/c/eviltester"><strong>Join our Patreon</strong></a><strong> from as little as $1 a month for early access to videos, ad-free videos, free e-books and courses, and lots of exclusive content.</strong></p><p><em>Originally published at </em><a href="https://www.eviltester.com/blog/eviltester/ai/chrome-dev-tools-recording-ai-writes-code/"><em>https://</em>eviltester.com</a><em> on December 19, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=7d737019f91d" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Episode 022 — Practicing Testing with James Lyndsay — The Evil Tester Show]]></title>
            <link>https://medium.com/@eviltester/episode-022-practicing-testing-with-james-lyndsay-the-evil-tester-show-845297701d02?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/845297701d02</guid>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Thu, 18 Dec 2025 11:11:39 GMT</pubDate>
            <atom:updated>2025-12-18T11:14:36.808Z</atom:updated>
            <content:encoded><![CDATA[<h3>Episode 022 — Practicing Testing with James Lyndsay — The Evil Tester Show</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*WxfALmYDwghitIOV.png" /></figure><p>Welcome to Episode 22 of The Evil Tester Show, where we’re diving into the fascinating world of practice with the renowned James Lyndsay. In this conversation, your host Alan Richardson chats with James about the essence of practice in software testing, exploring how exercises and real-world scenarios can enrich our skills. James shares insights on his weekly online practice sessions and the interactive Test Lab concept, offering a dynamic playground for testers.</p><p>This was released to <a href="https://patreon.com/eviltester">Patreon supporters</a> early, with transcript and no ads.</p><p>Discover how practice blends with rehearsal and learning, and delve into the intriguing intersection of testing and development. With firsthand experiences in software experiments, fencing, and scientific investigation, James and Alan discuss the art of modeling and exploring software systems. Whether you’re refining your testing techniques or embracing new perspectives with AI, this episode offers a wealth of wisdom for testers at all levels.</p><p>Join us as we learn, laugh, and leap into the realm of testing practice. Tune in, engage with new ideas, and maybe even find inspiration for your own practice sessions. 
Don’t forget to check out James’s resources at workroom-productions.com for more testing challenges and exercises.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fu8LeqNex5I8%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Du8LeqNex5I8&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fu8LeqNex5I8%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/ecde64cf4b2cedb04e863b71d7831764/href">https://medium.com/media/ecde64cf4b2cedb04e863b71d7831764/href</a></iframe><h3>Special Guest: James Lyndsay</h3><p>Find James @ <a href="https://workroom-productions.com/">workroom-productions.com</a></p><p>Follow James on <a href="https://www.linkedin.com/in/jameslyndsay">LinkedIn</a> to learn about upcoming practice sessions.</p><ul><li><a href="https://www.workroom-productions.com/about/">About James</a></li><li><a href="https://www.workroom-productions.com/tag/articles/">Blog Posts and Articles</a></li><li><a href="https://exercises.workroomprds.com/">exercises</a></li><li><a href="https://blackboxpuzzles.workroomprds.com/">puzzles</a></li></ul><h3>Key Takeaways</h3><ul><li>The Importance of Practice: The episode focuses on the theme of practicing, emphasizing how James Lyndsay uses practice exercises, tools, and software to improve testing skills.</li><li>Different Testing Styles: Alan and James discuss how different testing styles can lead to varied approaches in testing exercises, improving both technique and understanding.</li><li>Online Practice Sessions: James Lyndsay mentions his online practice sessions, which are focused on interactive exercises rather than lectures, aiming to foster learning through play and discussion.</li><li>Test Lab Concept: The Test Lab is introduced as a space at conferences where people can test real software with freedom and creativity, highlighting the 
benefits of hands-on collaborative learning.</li><li>Solo Practice vs. Group Rehearsal: James distinguishes between solo practice, which is more about experimentation, and group rehearsal, which focuses on coordination and collective practice.</li><li>Exploratory Testing: The episode highlights the value of exploratory testing, modeling, and understanding how systems work through testing rather than following predefined rules or scripts.</li><li>Role of AI in Practice: James describes an experience where an AI helped him understand a programming issue, suggesting that AI could change how we approach practice and learning.</li><li>Developing Models: Both speakers emphasize the need to build models while testing, to direct further exploration and identify gaps in understanding.</li><li>Interactive Learning: They also discuss how learning with others in a test lab can be more beneficial than isolated practice, as it offers diverse perspectives and shared learning experiences.</li><li>Mindset in Testing: The episode reinforces the importance of an open mindset that welcomes failure as a learning tool, and of not being constrained by the need to know what software is ‘for’ before engaging with it.</li></ul><p><a href="https://www.patreon.com/c/eviltester">Support our content. Join our Patreon for only $1</a></p><p><em>Originally published at </em><a href="https://dev.to/eviltester/episode-022-practicing-testing-with-james-lyndsay-the-evil-tester-show-2gd5"><em>https://dev.to</em></a><em> on December 18, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=845297701d02" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Episode 021 — Context In Context Driven Testing — The Evil Tester Show]]></title>
            <link>https://medium.com/@eviltester/episode-021-context-in-context-driven-testing-the-evil-tester-show-be46367cf452?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/be46367cf452</guid>
            <category><![CDATA[software-testing]]></category>
            <category><![CDATA[podcast]]></category>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Tue, 16 Dec 2025 09:43:39 GMT</pubDate>
            <atom:updated>2025-12-16T09:49:51.439Z</atom:updated>
            <content:encoded><![CDATA[<h3>Episode 021 — Context In Context Driven Testing — The Evil Tester Show</h3><p>This episode explores how to navigate context in testing environments, adapt our approaches, and effectively challenge and evolve systems. Discover the importance of context-driven testing in software development, exploring models, adaptability, and useful practices.</p><p>This was released to <a href="https://patreon.com/eviltester">Patreon supporters</a> early, with full transcript and no ads.</p><h3>Episode</h3><p>Watch:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F2XpPkxzNkuk%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D2XpPkxzNkuk&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F2XpPkxzNkuk%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/7b122d675cc728886ed5c4d6cd3aac99/href">https://medium.com/media/7b122d675cc728886ed5c4d6cd3aac99/href</a></iframe><p>Listen &amp; Subscribe:</p><p><a href="https://www.eviltester.com/show/021-context-in-context-driven-testing/">Show notes page with listen links</a></p><h3>Notes</h3><p>These were the notes I used to create the podcast from. 
They are not a transcript.</p><h3>What is Context?</h3><p>There is no such thing as Context Driven Testing, but it is a useful phrase when thinking about our approach to testing, so in that sense you could call your approach Contextually Driven.</p><p>There is no such ‘thing’ as context, it is a model.</p><p>Context is a relationship between our model of the real world and our mental model of what is going on.</p><p>Context changes over time.</p><p>Our being part of the context changes the context.</p><h3>What is context driven testing?</h3><p>Context-Driven-Testing.com</p><p>In the olden days, the world of testing was trying to show that not all testing is the same, not all testing can be standardised, and that standardised testing holds people and the craft of testing back.</p><p><a href="https://context-driven-testing.com/">https://context-driven-testing.com/</a></p><p>The website lists the concept as initially developed by:</p><p>James Bach, Brian Marick, Bret Pettichord, and Cem Kaner.</p><p>I used to think about this in terms of Systems within Systems — System of Business, inside is a System of Development, inside a System of Programming and Testing and Analysis and Design, and then projects and programs and individuals have their own System.</p><p>Context Driven Testing seemed like a fine set of words to describe this.</p><p>I didn’t sign up to the school or manifesto. I didn’t like the wording of the 7 basic principles, I didn’t think it needed principles, and why are there only 7?</p><p>This is an old site and an old list, and everyone who was involved initially has moved on and created their own ways of talking about it.</p><p>But the reason for mentioning it is… what site currently ranks top for the phrase “Context Driven Testing”? 
Yes, context-driven-testing.com</p><ul><li>What are we doing?</li><li>Is it helping more than hindering?</li><li>What does better look like?</li><li>How can we do it better?</li><li>Span systems — People systems, Role systems, Project Systems, computer systems etc.</li><li>I do not impose methodologies or processes and call it testing.</li></ul><p>Context Driven Testing takes into account: stakeholders, process, communication, individuals, aims, beliefs, mandates, etc.</p><h3>Ideology</h3><ul><li>context, avoiding ideology</li><li>explore everything can work</li></ul><p>If your beliefs about something can be predicted then you might be in the grip of an ideology.</p><p>Context requires beliefs that open up options rather than shutting them down.</p><p>As an exercise, you can use ChatGPT to provide answers about a topic. If you find yourself agreeing with it all, then you’re really agreeing with an average position, so think of a situation that breaks the advice or description in ChatGPT. Identify a context which exposes the ‘wisdom of the crowds’ average as an ideology.</p><p>We want every individual to be unique.</p><p>Context is the same, applied to environments, projects, companies. 
Context is about allowing environments to be unique and to change and to grow.</p><p>Introducing ideology into an environment can restrict the context.</p><p>Ideologies:</p><ul><li>tool X is better than tool Y</li><li>we need more automated execution at the Unit level than the UI level</li><li>Unit tests don’t interact with anything else</li><li>In a Unit test every interaction is mocked</li><li>Page Objects are bad, use Screenplay</li><li>Screenplay is bad, use Page Objects</li><li>etc.</li></ul><p>It is important to note that ‘everything can work’.</p><h3>Agile Testing</h3><p>There is no such thing as Agile Testing; there is testing within an Agile Context, where the Agile Context might be different for each team and project within the company.</p><h3>Could, Should, Would</h3><p>Really what we can look at is:</p><ul><li>capability</li><li>context</li><li>contextual fit</li></ul><p>Capability comes down to “could” you do it, or “can” you do it.</p><p>Context covers “should” you do it. What’s the impact?</p><p>Contextual fit covers “would” you do it. Will you commit to doing it? How will you do it? What changes do you have to make?</p><p>e.g. Forcing/Teaching testers to code so that automating our systems speeds up our release process</p><p>I think people read the statement in terms of capability, i.e. is it possible for people to learn to program if they are testers? If they do learn to program, can they become good at it? The statement suggests… probably not. 
But the answer to both of the questions in my experience is Yes.</p><p>But it depends on the context and contextual fit.</p><p>If they are being ‘forced’ then you need to have really good training, really good experiential training, really safe ways to learn, and really good ‘this is what good looks like’.</p><p>People need to be motivated to learn, and if they are being ‘forced’ then they might not.</p><p>If the context that is forcing people doesn’t know what ‘good’ is then they might stop at the point where they have given people the most basic skills and that can translate into ‘bad’.</p><p>Sometimes the ‘forced’ might be: if I don’t learn this then I won’t get a job. So the context is survival. This can be quite motivating. But… if you don’t have the right training, and don’t have a safe place or time to learn then you might not get over the initial learning hump and might not take the time to get good.</p><p>Contextual fit also covers: is the change happening for the right reasons? 
Because people might be trying to achieve unrealistic goals with the wrong approach.</p><h3>Adaptation</h3><p>I’d rather adopt a more scientific approach of:</p><ul><li>Observing what we do,</li><li>Modeling (rather than standardising) to understand,</li><li>Investigating our inefficiencies (often caused by lack of understanding, or forgetting because we don’t do the same thing the same way multiple times every day)</li><li>Experimenting (to improve against our existing model and observed approaches)</li><li>Letting an experiment settle so we are evaluating it after developing competency in it, and not trying to perform too many experiments at once</li><li>Repeating to evolve based on the needs of the people and the environment</li></ul><p>We have to be prepared to challenge the approaches and even interpretation of results.</p><p>If someone claims something is a success, then that can often prevent learning from all the mistakes that happened to create that success.</p><h3>Contextual error patterns</h3><p>Contextually, looking at errors that have happened on applications in your own environment, generalising the error, then looking for that error in other applications in your environment, is often an easier way to find issues that are likely in your own environment.</p><p>From an observation, work back through the application and discover the design issue that triggered the problem.</p><p>This now becomes a higher likelihood error pattern in my model of the legacy apps and is something I look for more, or treat as a higher risk.</p><h3>Follow on</h3><p>Have a look at James Bach’s site for the Context Driven Methodology post.</p><p><a href="https://www.satisfice.com/blog/archives/74">https://www.satisfice.com/blog/archives/74</a></p><p>There is no such ‘thing’ as context; it is a model.</p><p>Context is a relationship between our model of the real world and our mental model of what is going on.</p><p>Context changes over time.</p><p>Our being part of the context, 
changes the context.</p><p>There is no one true way. We make decisions and we need to be able to justify those decisions.</p><h3>Episode Summary</h3><p>In Episode 21, we explore the boundaries of “context-driven testing,” considering it less as a fixed methodology and more as a model to understand dynamic testing environments. Our journey begins with a look at the historical context of the term and how it was coined. We unravel the layered systems approach, examining how interlocking systems within a business shape the context and how testers must adapt to these systems.</p><p>Moving into the heart of testing, the discussion opens up on changing contexts and the resulting dynamic processes. As contexts within testing environments are ever-evolving, testers are encouraged to think adaptively and engage with the environment in a way that embraces its complexity and unpredictability.</p><p>We also consider the importance of capabilities: what can be done, what should be done, and how context influences these decisions. We stress the importance of the ability to adapt, and of how we challenge assumptions and interactions within the project. 
Additionally, we discuss the importance of deliberate error detection and close with thoughts on models and reality, stressing the necessity of adaptability and openness to change.</p><h3>Key Takeaways</h3><ul><li>Context-driven testing isn’t fixed; it’s an adaptable approach to evolving testing environments.</li><li>Understanding the interplay between systems and context is essential for effective testing.</li><li>Testing isn’t about following best practices but evaluating what’s right for the current context.</li><li>Constant adaptation to change, both personal and environmental, is crucial.</li><li>Identifying and learning from error patterns in context-specific situations enhances problem detection.</li><li>Testing is about decision making and personal responsibility.</li><li>We are part of the context.</li><li>Our actions shape the context.</li><li>The context can push back.</li></ul><h3>Quotes and Examples</h3><blockquote><em>“There’s no such thing as context; we create models that interact with real-world situations.” </em>[01:10]</blockquote><blockquote><em>“Projects unfold unpredictably, so our testing approach must comfortably handle these variations.” </em>[17:11]</blockquote><blockquote><em>“It’s not just the practice; it’s how people view the practice and how the results are perceived.” </em>[07:04]</blockquote><blockquote><em>“Our beliefs should expand options and facilitate experimentation, not limit them.” </em>[14:39]</blockquote><blockquote><em>“Deliberate error detection in context can uncover unique issues on your project.” </em>[22:48]</blockquote><ul><li>“Testing isn’t about best practices; it’s about what’s suitable for the context.”</li><li>“Adaptation is key as the environment and our understanding continuously evolve.”</li><li>“Context is an ongoing relationship between personal and external models.”</li><li>“Error patterns particular to a context can illuminate unseen issues.”</li><li>“Never let ideologies limit your testing explorations and 
experiments.”</li></ul><h3>Discussion Questions and Exercises</h3><ul><li>Defining Context: How do you define ‘context’ in relation to testing?</li><li>Context Changes: How does being part of a project team influence the context? Can you provide an example from your own experience where team dynamics changed the context of a project?</li><li>How have you deliberately tried to change the context?</li><li>How has your presence on a project changed the context?</li><li>Context-Driven Testing Principles: Have you read the context-driven principles? Do you agree with them? Do you agree with the wording of them? How would you state them in your own words?</li><li>Systems within Systems: The podcast considers testing in terms of “systems within systems.” Do you view systems like this? How could this perspective influence your approach to testing?</li><li>Adaptive Approaches: What does an “adaptive and evolving approach” mean with respect to testing? How does this differ from traditional or process-driven methods?</li><li>Avoiding Ideology: Is avoiding rigid ideology in testing important? How can one ensure their beliefs about testing remain flexible and context-aware?</li><li>Role of Experimentation: The podcast describes the necessity of experimentation in context-driven testing. Can you think of a situation where experimenting led to a better understanding or improvement in your testing approach?</li><li>Agile and Context Driven: How does context-driven testing align with or differ from Agile testing? Have you encountered any challenges in melding these approaches?</li><li>Capability, Context, and Fit: What are the differences between capability, context, and contextual fit as described by Alan? Have you analysed your approach in these terms?</li><li>Contextual Error Patterns: How can identifying contextual error patterns improve your testing process? 
Can you identify any recurring errors in your current or past projects that might benefit from this approach?</li><li>Have you tried the ChatGPT exercise? Ask ChatGPT to describe part of your testing approach or testing definitions. Do you agree with it? Find the differences and gaps between your description and the ChatGPT description.</li></ul><p><em>Originally published at </em><a href="https://www.eviltester.com/show/021-context-in-context-driven-testing/"><em>https://eviltester.com/show</em></a> <em>on Jan 4th, 2025.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Software Testing Podcast — BrowserStack Community QnA — The Evil Tester Show Episode 028]]></title>
            <link>https://medium.com/@eviltester/software-testing-podcast-browserstack-community-qna-the-evil-tester-show-episode-028-d1e583ce50ab?source=rss-1884120cfdf5------2</link>
            <guid isPermaLink="false">https://medium.com/p/d1e583ce50ab</guid>
            <category><![CDATA[software-testing]]></category>
            <dc:creator><![CDATA[Alan Richardson — EvilTester.com]]></dc:creator>
            <pubDate>Mon, 15 Dec 2025 17:01:26 GMT</pubDate>
            <atom:updated>2025-12-15T17:01:26.789Z</atom:updated>
<content:encoded><![CDATA[<h3>Software Testing Podcast — BrowserStack Community QnA — The Evil Tester Show Episode 028</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*pXrC0HSckQDzJUxf.jpg" /></figure><p>These are the answers given during an AMA session held on Discord on the 11th of December 2025, following a live LinkedIn video stream. The session focused on “Mastering Automatability for Test Automation”. The main theme is the concept of Automatability, which I view as <strong>the ability to automate</strong>; this personal skill is more critical than reliance on specific tools. The discussion covers various topics, including how to separate automation problems from application design issues, dealing with slow UIs and non-automation-friendly third-party widgets, evaluating automation readiness, and addressing common architectural failings related to large-scale UI automation.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2FOmYp1YCPZSU%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DOmYp1YCPZSU&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2FOmYp1YCPZSU%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/924f51c6556e3c4b2dd4418d108f6502/href">https://medium.com/media/924f51c6556e3c4b2dd4418d108f6502/href</a></iframe><p><a href="https://www.eviltester.com/show/028-browserstack-qna/">Audio versions of the above video can be found here.</a></p><p>This was a BrowserStack-hosted event. The initial Q&amp;A session started on LinkedIn with a conversation between Alan Richardson and David Burns.</p><p>A recording of the initial session can be found here:</p><ul><li><a href="https://www.linkedin.com/events/7404135559247343616/">LinkedIn Event</a></li></ul><p>The session then moved to Discord. 
The BrowserStack Discord has many AMAs and interviews, so it is worth signing up to have a look.</p><ul><li><a href="https://discord.com/channels/1290239387031961602/1389791860116951100/threads/1447958914791637012">the Browserstack Knowledge-hub forum on Discord</a></li><li><a href="https://discord.com/channels/1290239387031961602/1447958914791637012">Alan Richardson AMA</a></li></ul><p>Other AMA sessions include:</p><ul><li><a href="https://discord.com/channels/1290239387031961602/1431252937102987274">Jason Huggins</a></li><li><a href="https://discord.com/channels/1290239387031961602/1426181624038690857">Gil Zilberfeld</a></li><li><a href="https://discord.com/channels/1290239387031961602/1415293533090222163">Testers’ Day Panel with Vikas Mittal, Jenna Charlton and Manish Saini</a></li></ul><p>Join the BrowserStack Community on Discord and discover many more sessions, videos and conversations.</p><ul><li><a href="https://www.browserstack.com/community">https://www.browserstack.com/community</a></li></ul><h3>Q&amp;A Session Questions and Summaries</h3><p>I’ve listed the questions and summary answers. Full answers can be found in the podcast, audio or video, or on the <a href="https://discord.com/channels/1290239387031961602/1447958914791637012">Discord AMA chat</a>.</p><h3>Q&amp;A 1: Understanding Automatability for a First Automation Framework</h3><p><strong>Question:</strong> If I’m building my first test automation framework, what’s the one thing about automatability I should understand?</p><p><strong>Summary of Answer:</strong></p><p>The most important thing to understand is that automatability refers to <strong>your ability to automate</strong>. By having a strong ability to automate, you become less dependent on specific tools, making it easier to create workarounds and choose from multiple tools. 
Developing experience in how to automate allows you to succeed more often, and means you are not reliant on a single tool interacting with your system, a reliance that makes workarounds harder. Automating is fundamentally about your understanding of <em>what</em> and <em>how</em> to automate, and practicing the application of that ability.</p><h3>Q&amp;A 2: Separating Automation Problems from Application Design Problems</h3><p><strong>Question:</strong> How do you separate automation problems from application design problems?</p><p><strong>Summary of Answer:</strong></p><p>If an issue causes problems when you are automating, I would call it an <strong>automation problem</strong>. While this problem might be <em>triggered</em> by an application design problem (such as a state-based system that is hard to track, or features that are harder to automate), the issue itself remains distinct. If the team cannot change the application design, they must figure out how to automate the application as it is. This might involve absorbing the issue, figuring out how to automate it at a different level (not end-to-end), or handling it through testing processes using observability tools like DataDog.</p><h3>Q&amp;A 3: Slow UIs and Testability/Automatability</h3><p><strong>Question:</strong> When dealing with slow UIs, is the slowness a testability issue, an automatability issue, or both?</p><p><strong>Summary of Answer:</strong></p><p>Slowness is likely <strong>both, and more</strong>, because it is also a usability/user experience issue. If the slow UI impacts the user experience, it is more likely to be addressed than if it only impacts testing or automation. In cybernetics terms, testers or automators must possess the “requisite variety” to handle the variety (slowness) in the system being tested, which means knowing how to synchronize or potentially cleaning the environment to improve speed. 
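Absorbing slowness largely comes down to synchronization. As a minimal, tool-agnostic sketch (plain Python; the `condition` callable is a stand-in for whatever your automation tool can observe, so names here are illustrative, not from the episode), waiting can be expressed as polling with a timeout rather than a fixed sleep:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.

    Returns the truthy value on success; raises TimeoutError otherwise.
    A helper like this absorbs variation in response times, where a fixed
    sleep fails whenever the system is slower than the sleep assumed.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```

Most automation tools ship an equivalent (explicit waits, auto-retrying assertions); the value in sketching one is seeing that timeouts should be tuned to slowness you have measured, not guessed.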
The focus would be on the <em>impact</em> of the slowness, rather than the slowness itself, and whether the team or its tools can absorb the variation in response times.</p><h3>Q&amp;A 4: Handling Third-Party Widgets That Are Not Automation Friendly</h3><p><strong>Question:</strong> How do you handle third party widgets like payment gateways that are inherently not automation friendly?</p><p><strong>Summary of Answer:</strong></p><p>If a third-party widget is “not automation friendly” for one tactic (e.g., UI automation), it might become easier to automate by adopting different tactics, such as issuing HTTP requests using cookies collected from the UI. Teams may not need to automate the full flow of the widget, but instead focus on ensuring the widget is <strong>wired up correctly</strong> within their own application. This can involve only testing partway through the flow, or using a mock or stub in the environment so that the full widget flow doesn’t need to be tested constantly.</p><h3>Q&amp;A 5: Evaluating Automation Readiness and Consultancy Frameworks</h3><p><strong>Question:</strong> How do you evaluate an application’s automation readiness during consulting? Do you follow a framework?</p><p><strong>Summary of Answer:</strong></p><p>I do <strong>not</strong> use a formal consulting framework. The closest methodology used is the meta model from Neuro-Linguistic Programming, which involves asking questions to build a model of the client’s environment and processes, comparing it to the reality they face.</p><p>Regarding automation readiness, an application is considered ready to automate “as soon as someone wants to automate it”. 
Readiness is judged by whether the client is prepared to <strong>commit to whatever it takes</strong> to automate the application at that specific point in time to achieve their desired outcomes, regardless of the application’s current state.</p><h3>Q&amp;A 6: Architectural Patterns Failing in Large-Scale UI Automation</h3><p><strong>Question:</strong> What architectural patterns do you see repeatedly failing when it comes to large scale UI automation?</p><p><strong>Summary of Answer:</strong></p><p>Recurring issues often stem from the team lacking the <em>ability to automate</em> and consequently blaming the tool for problems, rather than creating necessary workarounds. Common process anti-patterns, not strictly architectural patterns, include deploying differently into test and production environments (not using the same install process).</p><p>A major failure point is <strong>test data maintenance</strong>, especially trying to use production data or any data that the team does not control. Automating against specific data conditions without control over that data causes random test failures. This can be worked around by hardcoding tests against data <em>conditions</em> instead of specific data, and dynamically selecting the required data during execution.</p><h3>Q&amp;A 7: Prioritizing Testability and Automation in Sprint Planning</h3><p><strong>Question:</strong> If testability improves debugging and automation improves scale, how do we prioritize them during sprint planning?</p><p><strong>Summary of Answer:</strong></p><p>Prioritization can be based on <strong>what the team wants to achieve</strong> (the outcomes) by the end of the sprint, specifically focusing on the expected coverage from testing and automation. It is beneficial to plan for features that need extensive testing to be delivered early in the sprint. 
Ideally, testing and automating occur in parallel, and teams automate at lower levels (like unit level) to reduce the necessary coverage at the higher UI level. Issues often arise when teams are divided into isolated roles, creating process problems that hinder effective interaction and prioritization.</p><h3>Q&amp;A 8: Playwright and the Illusion of Reduced Automatability Design Needs</h3><p><strong>Question:</strong> Do modern frameworks like Playwright reduce the need for high automatability design, or is that an illusion?</p><p><strong>Summary of Answer:</strong></p><p>It is an <strong>illusion</strong>. Frameworks like Playwright are designed to absorb application variability through features like retry mechanisms (for synchronization) and locator strategies (like visible text), which reduces the need for constant notification when minor changes occur. This absorption capability makes Playwright effective for agent-based automation where the goal is checking an end-to-end path and the final result.</p><p>However, this absorption can hide issues that a team might want exposed. Even when using Playwright, developers must still understand how to automate and structure their code using abstraction layers (like page objects, domain objects) to ensure long-term maintainability and efficiency.</p><h3>Q&amp;A 9: Explaining Automatability as an Investment to Leadership</h3><p><strong>Question:</strong> How do I explain to leadership that improving automatability is an investment, not a delay?</p><p><strong>Summary of Answer:</strong></p><p>The explanation depends on what is being improved. If improving automatability means increasing the team’s ability to automate, it can be presented as an <strong>investment in staff training</strong>. If it involves adding technical aids (like IDs in the UI), leadership might perceive it as a delay because they may not value UI execution coverage or may already be confident in unit-level automation. 
To convince leadership, the team could demonstrate the return on investment by <strong>showing the alternative world</strong>. This involves comparing the current reality to a scenario where improved automatability allows the team to do beneficial things they otherwise couldn’t, thereby highlighting the value gained.</p><h3>Q&amp;A 10: AI Agents Dealing with Dynamic Elements</h3><p><strong>Question:</strong> We are exploring AI agents for our teams and I want to know how does the AI agent deal with dynamic elements like rotating banners, third-party widgets or A/B tests?</p><p><strong>Summary of Answer:</strong></p><p>How the agent deals with dynamic elements depends on how it works (e.g., building high-level BDD scripts or generating code). Agents often operate on <strong>first principles</strong>. If an agent uses a BDD approach, it works from a runtime specification and handles dynamic elements because it works from scratch for each execution, constantly aiming to fulfill the objective. For example, if an unexpected pop-up appears, the agent clears it and continues.</p><p>If the agent writes code, it uses what is often called “autohealing”. This process automatically amends the script based on the current application state, prioritizing the achievement of the objective regardless of whether the change is “right”.</p><h3>Q&amp;A 11: Early Signals of Flaky Features</h3><p><strong>Question:</strong> What early signals tell you that a feature will become flaky once automated?</p><p><strong>Summary of Answer:</strong></p><p>Early signals involve understanding the <strong>synchronization points</strong> of the page. A feature is likely to be flaky if the application is populated or amended over time by JavaScript and the automation tool is not synchronizing properly on the DOM buildup. 
If a page is being constantly updated in the background without clear visual indicators (like spinners, which are easy to synchronize on), flakiness is more likely.</p><p>The signal is the update process itself, particularly when it is non-deterministic (e.g., how totals are updated in a shopping cart). The core question is whether synchronization is required to prevent flakiness, and if it is difficult to synchronize, that is a strong signal. If necessary, automatability might be enhanced by adding an extra flag to the DOM to signal when the update is complete.</p><h3>Q&amp;A 12: Layers to Focus on in Microservices for Automatability</h3><p><strong>Question:</strong> In microservices setups, which layers should teams focus on first to increase overall automatability?</p><p><strong>Summary of Answer:</strong></p><p>The foundational layer to focus on is the <strong>human understanding</strong> of the architecture and the requirements for automation. In microservices specifically, teams would typically focus on the <strong>interface layer</strong> and their ability to automate it while keeping the interface standard.</p><p>If microservices are communicating via HTTP messages compliant with a version standard, automation can be relatively easy. If interfaces are internal and change randomly, issues may arise, requiring attention to managing event-based queues if applicable. Strategies include using versioned interfaces or having a process to update automated coverage and abstraction layers when microservice interfaces change. 
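The versioned-interface strategy above can be sketched in a few lines (a sketch only; the field names and version map are invented for illustration): keep the expected fields of each interface version as plain data, and diff an actual payload against them so added or removed fields surface explicitly:

```python
# Expected fields per interface version, kept as data rather than as
# replicated payload classes (field names here are hypothetical).
ORDER_CONTRACTS = {
    "v1": {"id", "status", "total"},
    "v2": {"id", "status", "total", "currency"},
}

def diff_against_contract(payload, version, contracts=ORDER_CONTRACTS):
    """Return (missing, unexpected) field-name sets for a payload dict."""
    expected = contracts[version]
    actual = set(payload)
    return expected - actual, actual - expected

missing, unexpected = diff_against_contract(
    {"id": 1, "status": "paid", "total": 9.99, "discount": 0.5}, "v1")
# `missing` is empty; `unexpected` contains "discount".
```

Because the expectation is data rather than a deserialised object, a new field in the real interface shows up as a difference instead of being silently dropped.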
It is crucial to avoid replicating interface objects (like payload objects) directly into test code, as this can prevent tests from spotting issues when fields are added to or removed from the application interface.</p><h3>Q&amp;A 13: Budget-Limited Automatability Fixes for Fastest ROI</h3><p><strong>Question:</strong> For a team with a limited budget, which problems around automatability should we fix first to get the fastest return on investment?</p><p><strong>Summary of Answer:</strong></p><p>The fastest return on investment comes from enhancing the team’s <strong>ability to automate</strong>. This improvement allows teams to develop workarounds, find alternative solutions, and identify when to use different techniques. It is not about purchasing multiple or expensive tools. Instead, investment could be placed in training, practicing, exploring the capabilities of existing tools, and eliminating fundamental issues like test flakiness by fixing the root causes.</p><h3>Q&amp;A 14: Collaborating Earlier to Avoid Automatability Rework</h3><p><strong>Question:</strong> How can developers and testers collaborate earlier to avoid expensive rework on automatability issues?</p><p><strong>Summary of Answer:</strong></p><p>Collaboration is achieved by <strong>removing the barriers</strong> that cause people to be isolated into silos, such as separate programming, testing, or test automation teams. The core issue is often that a “development team” is defined only as a programming team, instead of encompassing responsibility for design, programming, product suitability, testing, and production.</p><p>Practical steps include involving the programming team in the automated execution maintenance. When programmers contribute, they often ensure data IDs are present, which removes many hard-to-automate issues typically found during end-to-end testing. 
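The presence of data IDs can even be checked mechanically. A minimal sketch using only the Python standard library (the <code>data-testid</code> attribute name is an assumption; substitute whatever convention your team uses) that scans an HTML fragment and reports interactive elements lacking a test ID:

```python
from html.parser import HTMLParser

# Tags a user can interact with, and so ones automation will need to locate.
INTERACTIVE = {"button", "input", "select", "textarea", "a"}

class TestIdAudit(HTMLParser):
    """Collect interactive elements that lack a data-testid attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag in INTERACTIVE and "data-testid" not in dict(attrs):
            self.missing.append(tag)

auditor = TestIdAudit()
auditor.feed('<button data-testid="save">Save</button><input name="q">')
# auditor.missing is ["input"]: the button has an ID, the input does not.
```

A check like this could run in a build and give programmers fast feedback, which is cheaper than discovering the missing IDs while writing end-to-end coverage.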
Sharing the responsibility for maintenance ensures people understand and resolve related issues earlier.</p><h3>Q&amp;A 15: Automatability in Continuous Delivery and Trunk-Based Development</h3><p><strong>Question:</strong> How should teams think about automatability when shifting towards continuous delivery and trunk-based development?</p><p><strong>Summary of Answer:</strong></p><p>This environment requires high automated execution coverage that runs quickly, often with features being merged multiple times a day. Automatability is achieved by ensuring the person or pair responsible for creating a feature is <strong>also responsible for adding automated execution coverage</strong> (unit tests). These tests demonstrate that the feature has been tested and provide future checks against accidental changes impacting the functionality.</p><p>Teams could focus on structuring unit tests at the <strong>domain level</strong> (e.g., focusing on users or orders) rather than strictly class level. This approach results in a degree of internal end-to-end flow tests without needing extensive external system testing. Furthermore, application architecture can be designed so that interfaces (like HTTP interfaces) can be tested primarily at the domain level, reducing the need for numerous actual HTTP calls.</p><p><em>Originally published at </em><a href="https://www.eviltester.com/show/028-browserstack-qna/"><em>https://www.eviltester.com</em></a><em>.</em></p>]]></content:encoded>
        </item>
    </channel>
</rss>