Planet Test Automation: First Steps, Part 2. Automating Desktop UI Testing

Alexey Grinevich
Oct 30, 2019 · 11 min read

This is Part 2 of the Planet Test Automation: First Steps webinar series.

Today we will talk about desktop testing.

Demo Application

Now a few words about today’s application under test. We don’t want to use well-known applications or our internal demo applications in this webinar.

We do live demos of Rapise and see various applications every day. We know that sometimes an application is well supported and Rapise has seamless or well-documented ways of automating it (such as Dynamics or Salesforce). Sometimes an application is a pure picture or remote snapshot, and we can neither spy its contents nor read any text information. But that is just another corner case.

In the vast majority of real-life cases the application is supported to some extent, but it still contains dark areas: widgets that lack automation support. So how friendly is each particular application to automation? Roughly, we may show the expected level of automation maturity like this:

If there is a development team available, we provide guidelines on how to drastically improve the situation with minimal effort, so an application may move from ‘No Chances’ to ‘Supported’. But this may take time to happen, or may never happen, so we need to start working with the application in its current state.

As we discussed earlier, widget library vendors provide such apps to demonstrate the capabilities of their controls, so we are going to pick one of these. Usually these apps are intended to showcase the widget library and are not expected to do 100% of what they declare, because it is a demo. And this is a good model of what we expect to see in real life: the application is under development, some parts are mature, some are new, some are at the POC stage.

I was looking for an app that is:

  1. A Windows desktop app (today’s topic)
  2. Publicly available, so you may try it
  3. Non-trivial (something that looks like a business application, with data and more than one screen)
  4. Shipped with demo data.

It is a demo CRM client from Telerik:

It is a Customer Relationship Management panel. It looks impressive and could actually be the panel of a real-life CRM. So let’s play with it today.

Master Plan

We suggest a certain order of steps for doing automation. Today we are going to follow it to demonstrate the purpose of each step.

Here is our master plan for implementing automated tests. Four steps: Analyze, Try, Plan, Implement. Or, in more detail:

  1. Analyze AUT — technology, vendor, entry points, APIs.
  2. Try widgets and screens. See what we deal with and what is supported.
  3. Plan testing: decide what to test first, what next and what then.
  4. Implement: Record, Learn and write test.

This application is small and we are limited by the scope of the article, so I’ll touch on some of these topics only briefly, but will try to follow the whole master plan.

Step 1: Analyze Application

The analysis is an overview of the application itself. We want to figure out its structure and operation and understand what parts may cause us problems in the future.

Briefly, here is what we want to figure out:


  1. Check file system
  2. Check entry points
  3. Data Loading

Visible Controls

  1. Login
  2. Synchronization Primitives
  3. Navigation: Menu, Toolbars, Command Line
  4. Keyboard
  5. Special controls: Charts, Graphs, Maps
  6. Data Views: Grids, Tables, Trees
  7. Calendars


We discussed technology analysis in more detail in the scope of the Test Automation Demystified webinar series. Here I can refer you to the following links:

For this application we know the technology (WPF) and the widget vendor (Telerik). We may also check the application binary folder for the presence of additional executables.

Note: this application is installed from the Windows Store. We may find its location and look at the files using the Task Manager as follows:

Then we may investigate the folder contents and make sure there are no other executables that may be interesting to us.

In this app there is a single entry point, and no additional configuration files, APIs, or command-line clients. So we will limit our efforts to GUI testing.

Visible Controls

The list of key widgets is:

  • Menu
  • Grid in Table mode
  • Grid in Group Mode
  • Tabs
  • Calendar

At this point we have an overall impression of the application implementation and are ready to proceed to the next stage. It’s time to try these controls.

Step 2: Try

Try Menu

An attempt to Record main Menu gives strange results:

Menu items are not recognized by name, and the menu itself looks like a radio button set. Checking it with the Spy, we see that this is true:

Normally, for well-known apps (such as MS Dynamics), Rapise has special logic that helps with such controls. For this one, we may find a suitable hack in KB362:

function TestPrepare()
{
	if (g_recording)
	{
		// ... (the full body of the workaround is in KB362)
	}
}

After pasting this into Main.js we get improved recording for the menu:

Tables / Lists

Grids/tables are the most crucial part of the application from the automation point of view. Here we see the same table widget in different view modes.

Grid / List

  • With Grouping
  • Without Grouping

With/Without Filtering

  • With
  • Without

With/Without Expander

First we try to record each grid and see how it is recognized. There is one important thing to notice: when some item is selected in the COMPANIES module, we get an error message:

It is also good to note this when planning further automation efforts. Errors are something we need to avoid or bypass.

Let’s see what we have when we try to record actions for the table. There are a couple of things that we may note:

  1. Recording is low level. We can see it from the recording activity action:

It has the action ‘Click’ and the object is recognized as a UIALabel. With good support it would be recorded as ‘ClickCell’, which means the whole table is understood and the cell is detected by the recorder.

2. If the group containing the contact is collapsed, then the object is not found. This control is dynamic and only contains the actually visible items. We can see this in the Spy:

As we see in the Spy tree, the data row is the last one available. After it we can only see groups without row contents. Overall this makes it hard, for example, to get the total number of rows from the grid.

It is also a problem if we need to pick an item that is not initially on the screen: we need to know which group to expand and where to scroll to reach it. Supporting this kind of grid as a whole is possible with additional effort, but we don’t have it at the moment.

3. Luckily there is a ‘Filter’, and we may find a workaround. If we filter, we will get a single row containing the data we need. So if we make sure that we put unique values into some filterable columns, we may always select the corresponding items via the filter, i.e.:

Also note that filtering expands the corresponding group and makes the found item first. So if we manage to filter, we know how to select a specific row in the grid.
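
The workaround can be modeled in plain JavaScript (the function and the row data here are illustrative, not part of the Rapise API):

```javascript
// Model of the filter workaround: if the filtered column holds unique
// values, filtering leaves exactly one visible row to act on.
function selectByFilter(rows, column, value) {
    const matches = rows.filter(function (r) { return r[column] === value; });
    if (matches.length !== 1) {
        throw new Error("Expected exactly 1 row, got " + matches.length);
    }
    return matches[0]; // after filtering, this is the first (and only) row
}
```

The key point is the uniqueness check: if a filter value can match several rows, the “first row after filtering” trick silently picks the wrong one, so it is better to fail loudly.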

Step 3: Plan

At this stage we are going to consider the following factors:

  1. Modules
  2. Module maturity (which are ready for UI automation and which are better to do manually)
  3. Controls Support
  4. Scenario Priority

We use the information gathered in the previous steps and will try to produce a meaningful test plan that takes it into account. We already have an idea about the application’s parts. Our current goal is to add a testing roadmap.

First, we may have a preliminary one, looking like this:

Well, it looks like we still need to refine it a bit. While the list of modules is OK, the operations need to be updated (at least for the Dashboard module). So we look at the possible features and get a refined list like this:

This is still an intermediate plan. Many steps here may be refined further (e.g. the Dashboard step ‘View Stats and Charts’ should be elaborated in deeper detail).

In many cases people have a plan like this from the beginning, and what they try to do is simply start from the first cell and record a test. In our case the beginning is ‘Dashboard’. But we know that it contains graphical information and two lists that have no filtering, so we may find better modules to begin with. We have a list of notes about our findings from the analysis stage:

It helps us pick the scenarios that look like low-hanging fruit. It is worth starting implementation from these, to get automation running and quickly obtain first results:

You may see that we marked the whole ‘COMPANIES’ module in red: the error message we noticed is a problem we need to solve first. And we may postpone it while we have better pieces on our plate. I marked such items in green. Let’s start with these to get quick results.

Step 4: Create Test(s)

In this session we are going to implement only one scenario: ‘Add Contact’. We know how it may be done:

  1. Go to ‘CONTACTS’ using the main menu
  2. Click ‘Add’ and fill in the contact data. Give it a unique name.
  3. ‘Save’
  4. Go to the list and filter by name.
  5. Check that it was found and that only one contact like it exists.
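
Step 2 depends on the contact name being unique, otherwise the filter check in step 5 breaks. A tiny generator (our own helper, not part of the recorded script) can produce such names:

```javascript
// Hypothetical test-data helper: a timestamp plus an in-process counter
// guarantees that no two calls within the same run collide.
let nameCounter = 0;
function uniqueName(prefix) {
    nameCounter += 1;
    return prefix + "_" + Date.now() + "_" + nameCounter;
}
```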

One thing we notice while recording the test: there is a ‘Loading…’ message displayed after we press the ‘Add’ or ‘Save’ buttons.

Also, you may see that the contact editor fields are already displayed behind it. So if we just record a test and try to play it back, it will likely fail if a control is reached while the whole form is still loading.

Luckily this message is displayed long enough to locate it in the Spy and see how to wait for it to disappear.

So we may learn it and then check it in the Spy several times. Normally we would just check object presence, but in this case the object is always there: it is found by its locator. So we need to find a property that signals whether the ‘Loading…’ message is actively displayed or not. After some experiments, this is what we find in the Spy. Check this:

against this:

We see that the property ‘IsOffscreenProperty’ changes its value, so we can use it. A simple function like this one implements the synchronization:

function WaitAll()
{
	// Poll until the 'Loading...' overlay reports that it is off-screen
	while (SeS('Loading___')._DoGetWidgetProperty("IsOffscreenProperty") == false)
	{
		Global.DoSleep(500);
	}
}
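
The same polling idea can be expressed as a generic helper with a deadline, so that a stuck ‘Loading…’ overlay cannot hang the test forever. This is a plain-JavaScript sketch; in a real Rapise script the condition would wrap the IsOffscreenProperty check, and the pause would be Global.DoSleep:

```javascript
// Poll a condition until it returns true or the deadline expires;
// returns false on timeout instead of hanging.
function waitUntil(condition, timeoutMs, intervalMs) {
    const deadline = Date.now() + timeoutMs;
    while (Date.now() < deadline) {
        if (condition()) {
            return true;
        }
        // crude synchronous pause; a Rapise script would call Global.DoSleep
        const pauseEnd = Date.now() + intervalMs;
        while (Date.now() < pauseEnd) { /* spin */ }
    }
    return false;
}
```

Returning a boolean lets the caller decide whether a timeout is a test failure or just a reason to retry.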

The recorded script may look like this:

We used renaming to give objects reasonable captions during recording.

Many other things may be changed. For example, the way we enter the “company” dropdown: we may show the dropdown and learn a company name. That is quick and easy to record, but it is not flexible. We prefer all variable data to stay in the right-hand column of the RVL.

We may launch and close the application for each test script. Running Windows Store applications can be a bit tricky; we will need to polish this.

We may make the script state-tolerant, so that whether the contacts table is already expanded or not, the script still works (i.e. it expands the table if needed).
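
The “expand if needed” idea is just a check-before-act guard. Sketched in plain JavaScript (the group object here is a stand-in for a real Rapise grid group, not actual API):

```javascript
// State-tolerant action: only toggle when the current state is wrong,
// so the script works no matter how a previous test left the screen.
function ensureExpanded(group) {
    if (!group.expanded) {
        group.toggle();
    }
    return group.expanded;
}
```

Applying this pattern to every state-dependent step (expansion, selection, active tab) is what makes a recorded script safe to re-run from any starting state.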

It is worth doing this kind of polishing before recording more test cases, so the test framework will be flexible and robust.


We have tried to demonstrate all the stages, from the first look at the application, through analysis and planning, up to the creation of the first automated test case.

This article is coupled with a webinar where we do a live demonstration of creating the test case. You are welcome to watch the preparation of the test set and the recording of the first scenario.

There are many little things that may be improved in the recorded test; we demonstrate them in the live webinar.


1. Application for the demo

2. Rapise

3. Recording of the Webinar

4. Recap of the Webinar

5. Webinar Series: Planet Test Automation — First Steps
