Measuring Data in the App with Marketing Event Tests 🔴🟡🟢

Feyza Dayan
Published in Trendyol Tech · 5 min read · Jul 26, 2021

“You may not know the user, but you can find out what the user wants.”

How can we monitor user behavior in the application, and how can we learn what users want?

  • At application launch,
  • When buttons are clicked or seen,
  • When texts are clicked or displayed,
  • When pages are viewed or scrolled,
  • When switching between pages, etc. We can count all of these.

We may analyze users’ behavior as we wish, including but not limited to the above.

Marketing event tests allow you to specifically track clicks on buttons, texts, pages, and views so that you can better understand and analyze the behavior of your users.

In the Trendyol application, we run A/B tests to better analyze the behavior of our users and provide the best experience, and we take action based on the results of these tests.

We develop most new application features as A/B tests, and we run event tests for every feature we A/B test. We also run marketing event tests not only for A/B issues but for every issue we want to measure.


🔸How much time did the user spend with this new feature in the app?

🔸If the improvement we made affected purchasing, how much did it change the purchase metrics?

🔸How often did they use this feature?

We look at these metrics one by one, and finally we roll out the winning version to 100% of our users.

So how do we test these events, and how do we check them in the application?

Performing Checks on All Dashboards

We measure our marketing event tests with more than one tool, such as Firebase, Google Analytics, Adjust, and our in-house analytics platform.

How do we test?

  • We define access permissions for the dashboards. (Separate permissions are defined for each dashboard.) ✅
  • We determine which environment we want to run the tests in: Stage or Prod. ✅
  • We determine where the events will be sent: to Firebase, Adjust, or somewhere else. This is decided by the product owners. (A small configuration sketch follows these steps.) ✅

After all these are determined;

  • We build the packages from the relevant branch and install them on the relevant device. ✅
  • In the app we have built, we perform the actions defined in the issue. Then we go to the relevant dashboard and check whether these events are reflected. ✅
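As a rough sketch of how that environment-and-destination choice could be represented in code (the type and value names below are made up for illustration, not Trendyol's actual setup):

```kotlin
// Illustrative sketch only: names are hypothetical, not Trendyol's actual configuration.
enum class Environment { STAGE, PROD }
enum class Destination { FIREBASE, ADJUST, GOOGLE_ANALYTICS, IN_HOUSE }

data class AnalyticsTestConfig(
    val environment: Environment,        // Stage or Prod, decided before the test run
    val destinations: Set<Destination>   // where events should be sent, decided by the product owners
)

// Example: a stage build that sends events to Firebase only.
val stageFirebaseOnly = AnalyticsTestConfig(
    environment = Environment.STAGE,
    destinations = setOf(Destination.FIREBASE)
)
```

In a real app this kind of configuration would typically vary by build type, so stage builds never pollute production dashboards.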

For example, for the notification icon, a “Click” event is sent when the user taps it, and a “Seen” event is sent when the user sees it. (These events are requested to be sent to Firebase.)

In the content of our Jira issue, the following fields are written by the Product Owner:

  • Event Category
  • Event Action
  • Event Label
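As a minimal illustration, the sketch below shows how such an event could be sent to Firebase Analytics from an Android app. The event name and parameter values are assumptions mirroring the Category/Action/Label fields above, not the actual Trendyol event schema.

```kotlin
import android.os.Bundle
import com.google.firebase.analytics.FirebaseAnalytics

// Hypothetical sketch: the event and parameter names mirror the Jira fields above,
// but the real values are defined by the Product Owner per issue.
class NotificationIconEvents(private val analytics: FirebaseAnalytics) {

    // Sent when the notification icon becomes visible to the user.
    fun sendSeen() = send(action = "Seen")

    // Sent when the user taps the notification icon.
    fun sendClick() = send(action = "Click")

    private fun send(action: String) {
        val params = Bundle().apply {
            putString("event_category", "Notification")   // Event Category (assumed value)
            putString("event_action", action)             // Event Action: "Seen" or "Click"
            putString("event_label", "notification_icon") // Event Label (assumed value)
        }
        analytics.logEvent("notification_icon", params)
    }
}
```

During a test run, performing the actions from the issue should make rows with these Category/Action/Label values appear on the Firebase dashboard, which is what we check.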

  • On the dashboard, we select the device on which we performed our test. ✅

After selecting the device, we can see the actions (events) on the dashboard. 💯

For example, we check which events are sent from the Homepage and with which parameters.

Thanks to the events that are sent, we can find answers to questions such as:
❓ On which device,
❓ By which user,
❓ In which time and date range,
❓ On which platform
the improvement was used. In this way, we can take action accordingly.
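For context, the Firebase SDK attaches the device, platform, and timestamp to every event automatically; the user dimension can be added explicitly, for example with a user ID or user property, as in this small sketch (property names here are made up):

```kotlin
import com.google.firebase.analytics.FirebaseAnalytics

// Sketch: device, platform and time come with each event automatically via the SDK;
// the user dimension is added explicitly. Property names below are illustrative.
fun attachUserContext(analytics: FirebaseAnalytics, memberId: String, segment: String) {
    analytics.setUserId(memberId)                       // enables per-user breakdowns on the dashboard
    analytics.setUserProperty("user_segment", segment)  // custom user property (assumed name)
}
```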

We can also perform event tests via the console.

We enter the relevant device, the environment, and where the event will be sent; then we perform the actions in the application and check the results.
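The console tooling itself is in-house, so as a generic stand-in, here is one common way (an assumption, not necessarily how Trendyol does it) to check that performing an action really sends the expected event: route events through an interface and swap in a recording fake while testing.

```kotlin
// Generic illustration, not the in-house console: events go through an interface,
// so a recording fake can capture what was sent while the actions are performed.
interface EventSink {
    fun send(name: String, params: Map<String, String>)
}

class RecordingEventSink : EventSink {
    val sent = mutableListOf<Pair<String, Map<String, String>>>()
    override fun send(name: String, params: Map<String, String>) {
        sent += name to params
    }
}

fun main() {
    val sink = RecordingEventSink()
    // ... perform the action in the app; the app would call sink.send(...) internally ...
    sink.send("notification_icon", mapOf("event_action" to "Click"))

    // Check that the expected event was captured.
    check(sink.sent.any { (name, params) -> name == "notification_icon" && params["event_action"] == "Click" }) {
        "Click event was not sent"
    }
    println("Click event verified ✅")
}
```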

Issues that include marketing event tests are also an effective factor when scoring our test effort.

For more information about Estimating Test Effort, you can refer to my previous article below.

Based on Which Metrics Are the Tests Finalized?


We look at metrics such as Session, Revenue, and CR (conversion rate).

What do these metrics mean? Roughly, Session is the number of visits users make to the application, Revenue is the purchase amount generated, and CR is the share of sessions that end with a purchase.

We look at the daily breakdown rates for versions A and B, then take their deltas and comment on them. If the delta is not significantly negative, we usually close the test with the new version.

An example of breakdown ratios in version A and B
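As a toy illustration of that delta calculation (the numbers below are invented, not real test data):

```kotlin
// Toy example with invented numbers: relative daily deltas of version B vs version A.
fun dailyDeltas(a: List<Double>, b: List<Double>): List<Double> =
    a.zip(b) { crA, crB -> (crB - crA) / crA * 100.0 }  // percentage change of B relative to A

fun main() {
    val versionA = listOf(2.10, 2.05, 2.20)  // daily CR of the as-is version, in %
    val versionB = listOf(2.08, 2.07, 2.18)  // daily CR of the new version, in %
    val deltas = dailyDeltas(versionA, versionB)

    println("Daily deltas: " + deltas.joinToString { "%.2f%%".format(it) })
    println("Average delta: %.2f%%".format(deltas.average()))
}
```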

Finally, we look at all these metrics and close the A/B test with the appropriate version (as-is or new version).

I can't say that we finalize the data after a fixed number of days; every test has its own metrics and conditions to measure and finalize, so the finalization time varies.

After looking at the data, below is an example of the message we share about which version won or lost and why. (We share this information with each other via Slack.)

Good morning, we finalized a few of our A/B tests on iOS and Android on Friday. I'm sharing the details below, have a good week 🙋🏻‍♂️📱📱
- iOS PUDO — Ability to hide the City/Town/Neighborhood field
We added a CTA so that the filter area can be hidden on the PUDO map screen. We aimed to provide a larger map screen experience. However, we closed it with the as-is version as it caused a -0.08% loss in CR and a -1.55% loss in R/S.
- Android Cart — Removing Single Size information
We ran a test to remove the Single Size field from the product card in the cart. Since we did not see an extra drop in CR, we closed it with the new version.

How Do We Manage All These Events?

We keep the data for all metrics and events in Excel (separately for each domain).

We also have a regression set for marketing event tests. While performing our regression tests, we recheck all events and then give our approval.

Event tests are very effective for measuring user behavior in applications and for offering users the most suitable features.

As in every test, we try to work systematically when performing a marketing event test. If you want to analyze and measure user behavior in your application, following the above methods will help.

If you have any questions, please do not hesitate to reach out.


Sr. Developer in Test at Trendyol International @Berlin, MBA, BSc. Computer Engineering https://www.linkedin.com/in/feyzadayan/