How Unwanted Ads Harm Apps & Users and What We Should Do As Publishers

This review discusses the types of threats hidden in ads, from illegal content to malicious scripts, as well as feedback management, analytics, and complaint processing. We also share our own experience counteracting malicious ads.

FunCorp
13 min read · Mar 9, 2021

Ads have gone far beyond being a mere monetization tool; they are now, in many ways, an integral part of app content. Facebook, TikTok, Twitter, and many other media offer native advertising, which is integrated directly into the main feed, and that's why any poorly designed or fraudulent creatives not only provoke a negative response from users but also have a direct impact on product metrics. Publishers need to approach this issue with the utmost diligence, which is why we have implemented a stand-alone flow for reporting ads directly from FunCorp apps.

The main issue is that, despite all the restrictions and tools for moderating ad creatives that advertising platforms may use, ill-intentioned actors still manage to get their ads published. Our task is to track down every instance of malicious activity as quickly as possible, investigate it, and take appropriate measures.

We’d like to address some of the main issues associated with advertisements and go through our company’s solutions: feedback management, report post-processing, and a set-up for an in-house analytics and reporting system.

How advertising can disrupt app usage

Let’s start by listing several particularly glaring disruptions and then take a closer look at each of them:

  1. Inappropriate content
  2. Auto-redirect
  3. Media playback in the banner
  4. Fraud

Disruption №1. Inappropriate content

Advertising platform policies tend to explicitly mention the ad categories that cannot be hosted or promoted. Notably, these restrictions apply to both advertisers and publishers alike.

Here are some excerpts from the MoPub Policies for Publisher Partners, valid as of January 2021:

You can find the full texts of the MoPub Policies for Publisher Partners here.

Inappropriate content also includes low-quality creatives: broken, flashing, or distracting ads, or ads that can frighten the user. Kids’ apps have their own set of restrictions: for example, no ads for fireworks, tattoos, or piercings.

The publisher may also define their own additional list of ad categories that are prohibited from being displayed. For example, we at FunCorp do not allow casino, betting, tobacco, cigarette, or alcohol advertisements within our apps. To ensure these rules can be enforced, each ad carries a set of tags added by the advertiser and, automatically, by the platform itself.

Furthermore, each advertising platform uses its own procedures for moderating ads before they are published and enter the bidding process; such methods are usually based on a combination of automated and manual tools. But even after passing these vetting stages, inappropriate ad creatives still sometimes reach the publisher.

This can happen for a variety of reasons. For example, malicious advertisers may sneak past the admins by designing ad creatives with dynamically-loaded content. In this case, the banner code contains a link to a JS script, so during moderator review, the advertising platform’s team sees one type of content (compliant with the rules), while the users see something entirely different.

We have all come across something like this at least once. Best not to click.

As a result, such inappropriate content inconveniences the users and makes the advertising platform question the publisher’s choices. In some cases, such content’s intrusion into the workplace or personal space can be counterproductive or even dangerous.

To summarize, the publisher cannot ensure complete protection from inappropriate content in ads, so our mission is to monitor ad quality continuously and promptly report any issues to our partners.

Disruption №2. Auto-redirect

Banners are among the most popular advertising formats in mobile apps. At the same time, they are deliberately positioned within the app to prevent accidental clicks by the user: banners may appear at the top or bottom of the screen, or inside a content block, but usually away from the elements the user interacts with.

  • First and foremost, it’s done to prevent accidental clicks from impacting the ad and its placement performance metrics. Plus, sudden context switching tends to provoke a negative response, thus reducing the time users spend in the app.
  • And on top of that, the rules of using advertising platforms prohibit placing ads in a way that encourages accidental taps.

But today, banners are so much more than just a static or animated image with a target URL, which can raise a whole host of other issues.

We now deal with MRAID (Mobile Rich Media Ad Interface Definitions) banners, which can include video, audio, mini-game scripts, and more. A link written into the banner's JS code can be opened automatically by the WebView (the component used to display banners in mobile apps), either accidentally or deliberately. In the most primitive case, it looks like this:
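The original snippet is not reproduced here, so below is a minimal sketch of what such an auto-redirecting creative might contain; the target URL is a made-up example, not a real campaign link.

```javascript
// Minimal sketch of an auto-redirecting creative (illustrative only; the
// URL is a made-up example). The assignment runs as soon as the WebView
// evaluates the banner's script -- the user never taps anything.
var target = "https://apps.example.com/install?campaign=123";
if (typeof window !== "undefined") {
  window.location.href = target; // fires immediately on load
}
```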

As a result, as soon as the banner appears on the screen, the deep link will open automatically, and the user will be redirected to an app store, another app, a third-party website, etc.

This isn’t good for the user and the publisher.

Disruption №3. Media playback in the banner

The use of video and audio in banners does not violate advertising platform policies in and of itself, but it can still be a pain for the publisher for several reasons:

  • Unwanted noises in the app. Sounds from the banner create another media playback channel that can disrupt user interactions with the app’s main content. Not to mention that sudden noises can be quite startling.
  • Resource consumption. Banner advertising features include auto-refreshes and rotation. In other words, as soon as one loading process ends, another begins, and the uncontrolled creation of media players within banners can lead to a memory drain. Plus, as the device loads the media, the load on its processor increases, power consumption and download traffic increase, and this causes the device to heat up.
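One way for a publisher to keep this under control is to tear down any media elements a creative created before the next refresh replaces it. A minimal sketch, assuming a DOM-like interface inside the banner WebView (the function name is our own illustration, not an actual SDK API):

```javascript
// Sketch: before a banner refresh/rotation replaces the WebView content,
// stop and release any media elements the outgoing creative created, so
// players don't accumulate across rotations.
function releaseBannerMedia(doc) {
  for (const el of doc.querySelectorAll("audio, video")) {
    el.pause();                // stop playback (and its audio channel)
    el.removeAttribute("src"); // drop the reference to the media source
    el.load();                 // force the element to release its buffers
  }
}
```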

Disruption №4. Fraud

As a user watches an advertisement, its executable code can also infect their device with malicious scripts. In the best-case scenario, such scripts pose no direct danger to the user and merely boost impressions, clicks, and installations for other publishers artificially.

However, such activity still causes issues with app performance at the very least. The publisher, in turn, risks a direct loss of income: for example, if ad impressions are attributed to a third-party app rather than the publisher's app that the user actually has installed.

Even worse is when malicious parties use fraudulent ads to force subscriptions to various services upon users without their knowledge or to spread viruses.

The Publisher’s Response

In order to address an issue, we must first be aware of it.

One of the very first steps is to create a feedback channel. There are several potential feedback channels, each with its upsides and downsides:

  1. Store reviews
  2. Emails to Support
  3. Feedback from beta testers
  4. In-app tickets

Now let’s discuss what the publisher needs to account for in each case. Then, we’ll share our dedicated flow for processing ad reports, which we have integrated into the FunCorp apps. With that done, we’ll move on to post-processing and the internal analytics system.

Feedback type №1. Store reviews

Users are mostly accustomed to reporting issues via store reviews. But this channel is poorly suited for identifying advertising-related issues because reviews lack metadata. This means that:

  • We can’t be certain that an ad caused the problem
  • There are no additional artifacts (attachments), such as screenshots or videos
  • It’s hard to determine the date and time when the issue first appeared
  • It’s hard to identify the user
  • Reviews are isolated from the app-integrated analytics
  • If there is more than one advertising SDK integrated into the app, it is impossible to know in which network the issue occurred

On the other hand, store reviews are the most commonly used reporting channel, so application owners should still pay attention to review content and patterns and then carry out their investigations to identify issues.

Feedback type №2. Emails to Support

The service policy or the company website should mention the Support email, at the very least. In iFunny’s case, it’s support@ifunny.co. The user sends the support team an email in free form; it is then forwarded to the mail aggregator (or another system for addressing user queries), where the Support team processes the user’s complaint.

This path differs from store reviews in several ways:

  • The user has to take more steps
  • The user can add attachments
  • It’s easier to identify the user, for instance, by their email address.

Other than that, the efficiency of emails to the support team as a feedback channel depends on the quality of the user’s initial description of the issue and the user’s willingness to keep up the dialog with the Support team. The same applies to investigating advertising issues.

Feedback type №3. Feedback from beta testers

As a rule, beta testers (if any) are provided with an additional communication channel where they can get in touch directly with the Support team. This allows developers to address complaints as early as possible before the release reaches production.

The amount and quality of beta tester tickets depend on the number of users in the beta program and the audience’s characteristics, while the channel’s capabilities depend on the tools used. For example, if the beta version is spread via TestFlight, users submit their tickets directly through the TestFlight client.

Ticket submission scenario in TestFlight

But our experience shows that beta testers don’t usually complain about ads all that much. The one exception is when ads significantly alter the user experience, for example, when new ad space is added, or an unfamiliar format is introduced. We at FunCorp introduce such changes through A/B experiments and see some preliminary results during beta testing before starting the main experiment.

Feedback type №4. In-app tickets

Design approaches may vary, depending on the app and the purpose of collecting feedback.

Our users can leave feedback via the Support screen:

Accessing the feedback screen in the iFunny app

Upon confirmation, the interface creates a request and sends it to the backend. The request contains the full name of the app version, the device’s model and OS, and the user’s email:

Example request
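The original screenshot of the request is not reproduced here; a sketch of what such a payload might contain, based on the fields listed above (the field names are our illustration, not the actual API):

```json
{
  "text": "Ads cover the bottom of the feed after the last update",
  "email": "user@example.com",
  "app_version": "7.12.1 (12345)",
  "device_model": "iPhone 12 Pro",
  "os_version": "iOS 14.4"
}
```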

Some of the parameters are based on what the user fills in; others are copied automatically from the app. The parameters used here are suitable for other apps as well.

The request headers also contain useful information. For example, even if the user entered an invalid email address, they can still be identified thanks to the authorization token.

This leads us to the conclusion that in-app feedback is more suitable for investigating ad-related issues than the other channels.

Feedback submitted from within the app initiates a request addressed to Support via email, making it easier for the team to gather all the information in one place and sort through it. The Support team then forwards all issues related to advertising to the Advertising QA team.

Dedicated Report Flow for Ads in FunCorp Apps

To counteract low-quality ads more effectively, we have created a separate scenario for reporting ad issues. This procedure is seamlessly integrated into the user flow and provides the Advertising QA team with everything they might need for an investigation.

Let’s take a look at some examples.

Mostly, users encounter two ad formats in our apps: native ads and banners. The reporting procedure differs somewhat between them by design. This is how it looks in iFunny:

Reporting native ads

It is possible to access the banner reporting feature from the sharing menu, but the best way to report a banner is to swipe over it and tap on the screen that appears:

Reporting banner ads

As you can see, the report screen is reused, but the category list is different. The feedback type is always “advertisement.” The Inappropriate Content category is selected by default since this applies to most feedback.

The back end request looks like this:

A request for an ad complaint is sent with a MoPub native ad creative used as a test example

As the user is redirected to the report screen, the app prepares data on the ad creative that the user was seeing on their device at the time. The data includes:

  • Ad type: banner or native
  • Tier name
  • Ad creative ID, if available (the test MoPub native creative doesn't have one)
  • Screenshot
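Put together, a report payload along these lines might look like the following sketch (the field names and values are our illustration, not the actual API):

```json
{
  "type": "advertisement",
  "category": "inappropriate_content",
  "ad": {
    "ad_type": "banner",
    "tier": "mopub_banner",
    "creative_id": null,
    "screenshot": "<base64-encoded PNG>"
  }
}
```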

Plus, if the user’s creative ad is supposed to see the next part has already loaded, at this point, it will add any data on it.

If the ad has already changed by the time the report screen has loaded (for example, due to banner rotation), the report will include all the data relevant to the previous ad.

To avoid situations where the user mistakenly associates a native ad issue with a banner ad, the bottom of the screen also shows data on the previously displayed preloaded native advertisements. And vice versa: when the user reports a native ad, their report will also include data on the preloaded 320x50 banners that they recently saw.
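One way to implement this is to keep a small rolling buffer of recently shown creatives per format and attach its contents to every report. A sketch (the class name and buffer size are our own illustration, not FunCorp's actual implementation):

```javascript
// Sketch: keep the last few ad creatives shown per format, so a report can
// include both the reported ad and what was displayed around it.
class RecentAds {
  constructor(limit = 3) {
    this.limit = limit;
    this.items = [];
  }
  push(ad) {
    this.items.push(ad);
    if (this.items.length > this.limit) this.items.shift(); // drop oldest
  }
  snapshot() {
    return this.items.slice(); // copy to attach to a report payload
  }
}
```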

On the back end, reports are added to a database, where the Advertising QA team then processes them. Copies of the ad reports are also emailed to Support. This can be useful if a rapid response is required (for example, over the weekend).

The benefits of this method for advertising-related investigations are quite evident. Keep in mind, though, that for full-screen advertising (interstitial, rewarded video), such a reporting flow is difficult to design, and you'll still have to rely on other feedback channels.

Report Aggregation and Post-Processing

For an even more comprehensive investigation and analysis, we employ additional tools, in particular aggregation. This is used for drawing graphs and tables, which, in turn, help us spot any deviations from the average amount of feedback:

A sample graph, showing the amount of feedback through time

You should keep in mind, though, that a peak in such a graph does not necessarily correspond to a harmful disruption. Sometimes we deal with false positives: for instance, when a user is upset by the very fact that the app has ads in it and starts reporting every ad block they see.

Tables allow us to make a preliminary judgment of what caused the issue. Sometimes a description is sufficient, but if users don’t provide that, we first check the screenshots and then do some “live” testing by launching the app.

Example of aggregating ad app reports

Screenshots help us find inappropriate content and detect layout issues affecting native ads. They can also help us find the corresponding ad creatives in the partner's admin console, if one is available.

In complicated cases, we get Support involved and, at the same time, request additional information from the partner. For example, we can ask the partner for the code of the ad creative that users are complaining about and then run that code in the app.

As soon as we figure out the issue, we notify the partner. From this point on, we can go down several paths:

  • If we manage to identify the problematic ad creative (maybe it contains inappropriate content, redirects somewhere automatically, etc.), we block it. The partner usually provides a dedicated interface for that.
  • If it’s impossible to identify a specific ad creative or domain, we consider completely blocking the partner’s traffic until the issue is resolved.
  • If there is a bug in the advertising SDK, we let the partner know and proceed with app integration after the bug is fixed. At the same time, we do our best to minimize the negative impacts on the user. For example, if an SDK update caused the issue, we roll it back to the previous version.
  • And if there is a technical malfunction in our application, we forward the feedback to our tech team to address it.

Analytics and Alerting

So far, we have reviewed advertising issues from the perspective of user feedback. But another part of the Advertising QA team's job involves detecting potential ad problems by studying analytics: ad analytics, technical analytics, and product analytics.

We use this approach because not every user is ready to spend time typing out a report, while analytics help us track hundreds of thousands of incoming events.

Unfortunately, analytics don’t let you see if an ad network has started sharing inappropriate content, but you can, for example, notice auto-redirects. A possible “symptom” of the latter is a spike in CTR for a specific network:

An example of an analytical anomaly that may point to auto-redirecting
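As an illustration of the kind of check involved, here is a sketch that computes CTR per network from raw impression and click events; the event shape is our assumption, not FunCorp's actual analytics schema:

```javascript
// Sketch: compute CTR per ad network from raw analytics events, so that a
// sudden per-network spike (a possible auto-redirect symptom) stands out.
function ctrByNetwork(events) {
  const stats = {};
  for (const { network, type } of events) {
    if (!stats[network]) stats[network] = { impressions: 0, clicks: 0 };
    if (type === "impression") stats[network].impressions += 1;
    else if (type === "click") stats[network].clicks += 1;
  }
  const ctr = {};
  for (const net of Object.keys(stats)) {
    const s = stats[net];
    ctr[net] = s.impressions > 0 ? s.clicks / s.impressions : 0;
  }
  return ctr;
}
```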

If such incidents in the publisher's analytics are supplemented with ad creative IDs, you can summarize them in tables and pinpoint the source of the problem, which you can then report to your partner. But for this to happen, the ad SDK must support passing creative IDs.

Sample table with an ad creative summary

Advertising analytics can also help discover other anomalies. They include timeouts for receiving ads, drops in fill rates, and more. So much more that it’s a story for another article.

In the case of technical analytics, crashes are the first thing you ought to look into. They can be caused by an update or rollback of the advertising SDK, a new format on the partner’s side, and other factors.

You should track all of your analytics closely whenever a new product version comes out, including beta releases and phased rollouts. Fully rolling out a new version is only safe once all advertising, technical, and product analytics are in order.

The most important metrics should also have alerts set up. Product and advertising alerts are triggered automatically when the metrics deviate from the predefined confidence interval. Technical alerts are triggered when new crash groups are detected or when crash rates change.
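The deviation check itself can be as simple as a band around the metric's baseline. A sketch, assuming a precomputed baseline mean and standard deviation; the Slack wiring is shown only as a comment, with a placeholder webhook URL:

```javascript
// Sketch: trigger an alert when a metric leaves its confidence band.
// The baseline mean/stddev are assumed to be precomputed from history.
function outOfBand(value, mean, stddev, k = 3) {
  return Math.abs(value - mean) > k * stddev;
}

// Example wiring (illustrative): post to a Slack incoming webhook.
// if (outOfBand(todayCtr, baselineMean, baselineStd)) {
//   fetch(SLACK_WEBHOOK_URL, {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify({ text: "CTR deviation detected" }),
//   });
// }
```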

Slack notification about CTR deviation

The Bottom Line

Low-quality advertising angers users and directly affects technical performance and product metrics.

Unfortunately, even after passing all of the advertising platform's and publisher's vetting stages, low-quality ads still manage to get published. That's why we are conscientious about detecting anomalies, to the point of designing a dedicated in-app ad report flow. This gives us more tools for collecting the information we need to investigate an issue further (compared to what store reviews or support tickets provide), so we can act with maximum efficiency.

Such issue detection requires constant cooperation and interaction with partners, but it does pay off in the long run for everyone — publishers and users alike.


FunCorp

Since 2004, we’ve been doing our part in changing the entertainment & tech space. Learn more at http://bit.ly/3XWJemV