The Truth about Mobile Web

Karan Peri
Trenches of Consumer Product Management
14 min read · Jan 26, 2019


and why most companies have it wrong.

Open Web is one of the largest, most evergreen channels of distribution and growth for consumer Internet products. It’s the one true, persistent and democratic platform that makes billions of products and services accessible to people instantly regardless of their device or location (well almost!).

Percentage of all global web pages served to mobile phones from 2009 to 2018

However, I've seen a pattern emerge in how companies build for mobile web: most companies see mobile web as a platform with low engagement and poor ROI, and so invest the bare minimum of time and resources in it compared to other platforms such as mobile apps, desktop web, voice etc. (ceteris paribus).

Below is an infographic that represents this common perception well.

Mobile Web’s perception as compared to other platforms

At its core, this view is formed by a like-for-like comparison of metrics between mobile web and other platforms. At face value, the numbers do seem irrefutable. However, in the past few years I've had the chance to build consumer mobile products (both web and native) at scale with world-class teams in the toughest markets, and found this perception of mobile web to be often misplaced. It leads companies to make erroneous investment decisions that make the growth of their products harder and costlier.

Building Flipkart Lite, the world's first large-scale commercial progressive web app, as an upgrade to Flipkart's legacy mobile website confirmed this belief in mobile web's strength and its ability to sustainably boost growth in ways that few other channels can.

In this article, I will share a few of the top factors that show how mobile web is structurally different and needs to be analysed with a different product-thinking lens to understand its nature and its contribution to a product's success. Mastering each of these factors is a journey on its own, but my hope is that recognising these first principles and the related product-thinking pitfalls will open up a new realm of possibilities as you carve out your product's own growth path.

So let’s get straight into it.

To make the narrative more concrete, we'll compare mobile web with mobile apps as a platform, in a transactional industry (products or services sold over the Internet in exchange for money) where Conversion Rate is usually a key metric, e.g. Travel or E-commerce.

For the sake of this article, let's baseline the definition of Conversion Rate as: Total Purchases / Unique Visitors in a specific period of time.
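To keep that baseline unambiguous, here is a minimal sketch of the metric in TypeScript; the field names and figures are illustrative only, not taken from any real dataset.

```typescript
// Minimal sketch of the Conversion Rate baseline used throughout this article.
interface PeriodMetrics {
  purchases: number;       // total purchases in the period
  uniqueVisitors: number;  // unique visitors in the same period
}

function conversionRate({ purchases, uniqueVisitors }: PeriodMetrics): number {
  return purchases / uniqueVisitors;
}

// e.g. 3,000 purchases from 120,000 unique visitors → 2.5%
console.log(conversionRate({ purchases: 3_000, uniqueVisitors: 120_000 })); // 0.025
```

Most of the factors below come down to what ends up in that denominator and how comparable it really is across platforms.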

Factor 1— Different visitor characteristics

We assume that our user base is similar on all the platforms we offer our products or services on. This might be true at face value, but it rarely is, in one specific way: channels that require users to explicitly sacrifice their attention, time, mobile network data and storage space (e.g. native apps) in order to get access to the product attract a high-intent subset of the user base, i.e. users more likely to spend the effort required to explore your product. Mobile web, on the other hand, is exposed to all types of users across the spectrum of intent, without any bias.

Users who finally reach the 'Use' stage are the most likely to engage, as they have gone through hoops that test their intent.

To understand this better, think of mobile web as an open market where anyone can walk in, and native apps as an exclusive club with strict admission criteria. Given this difference, it's no surprise that the average customer in the club spends more than one visiting the open market. If you only let the best in, they will engage the way 'best' customers do, on any common metric.

Mobile web is akin to an open market where all are welcome
Channels like native apps require a considered download before users are allowed to experience the product, akin to a club with restrictions and a set of rules.

Taking this one level deeper, a user who uses a product on the web might later visit the native app already primed, under different circumstances and with a club-like mindset, exhibiting the premium engagement we see in apps. This effect can clearly be seen in an analysis of the cross-device usage of logged-in users of a consumer internet product operating at scale. However, such depth of analysis is uncommon, leading to incorrect conclusions.

Data from comScore suggests that more users interact with brands across multiple platforms than on a single platform, and that this trend is only gaining momentum. Also, 46% of mobile shopping sessions include at least one transition between a mobile site and an app (Source: Google/Verto, U.S. "The Mobile Shopping Journey," Oct. 2017). Branch estimates that up to 30% of conversions are misattributed due to the limitations of legacy attribution models (Source: Branch.io).

Impact on Conversion Rate: Mobile web conversion isn't really that low. It's just that the denominator contains a much wider spread of customer intent, and that variability manifests in the end result.

Key takeaway: Realising this structural difference between platforms helps attribute differences in metrics to structural fundamentals rather than to the experience of the channel itself. Use this understanding to avoid a like-for-like comparison between platforms and go deeper into the holistic role each plays in user acquisition and engagement overall.

Factor 2— Role of Discoverability (a.k.a distribution)

A great hook doesn't have to achieve anything concrete. The simpler, the better. It's an experience designed to influence a user to interact with your platform often enough to form a habit. (Nir Eyal)

Being in front of customers increases a product's visibility. Visibility acts as a reminder of the product's existence. In this attention-starved world, reminders increase the chances of users re-trying a product. Re-trying a product builds repetition. Repetition feeds into engagement, which in turn increases a product's discoverability. This flywheel illustrates one of the central concepts that make Targeted Advertising, Push Notifications and Voice Assistants so effective. Reminding users of an existing intent is one of the most potent of all engagement triggers. This is also why getting users to install an app is considered such a high-value action (in the case of voice-forward devices such as Amazon Echo or Google Home, the visible device itself acts as a physical reminder to users).

Reminding the user of an existing intent is one of the most potent of all engagement triggers.

In our context, a mobile phone's homescreen is a highway that people cross a hundred times a day. A native app with an icon on this highway acts as a billboard (or reminder) for a product which users might have tried once but could use a nudge to try again. Mobile web, on the other hand, has a break in this flywheel: people have to remember to visit the site and type a URL into the mobile browser, or chance upon it again via search, an ad or some other means. In the case of mobile web, there is no reminder waiting on the user's natural path.

Here’s what the discovery flywheel looks like for Mobile web and Native Apps:

Impact on Conversion Rate: Repeat traffic almost always has better purchase rates than first visits. This is essentially because repeat users are familiar with the app and require minimal education to get started. Moreover, the fact that they have retained the app on their devices means this set of users is pre-sold on the app's value to some extent.

Key takeaway: Knowing the importance of discoverability, you know that one of the main reasons for lower mobile web conversion is not the experience itself but a structural advantage that other platforms enjoy. This aspect can now be tackled via 'Add to homescreen' campaigns, browser visibility partnerships etc. to leverage the organic user base ending up on mobile sites, rather than giving them a poor experience and making it harder to reacquire these users later. Lost user trust is hard to regain and directly reflects in Customer Acquisition Costs (CAC).

Factor 3 — Traffic channels and their varying intent

Leading on from the first two factors, the third factor is the role that the sheer variability of incoming traffic channels plays in how metrics are calculated. To help explain this, here is an indicative split of incoming traffic for native apps and mobile web:

A high level and indicative traffic split for Native App vs Mobile web

Mobile web is inherently open in nature and has a well-developed ecosystem of traffic channels to acquire users (e.g. organic, search engines, affiliates, paid ads, social networks and messengers, links in transactional/order-related SMSes etc.). Native apps, however, see most of their traffic from direct organic visits (mostly owing to better visibility) and push notifications (which also increase visibility by acting as reminders). As discussed earlier, these channels mostly consist of repeat traffic and inherently have relatively higher engagement rates.

Impact on Conversion Rate: On mobile web, a larger share of traffic comes from relatively lower-converting channels, increasing the denominator in the Conversion Rate metric. The end effect is a seemingly lower conversion rate, which in reality has little to do with the product experience itself.

Key Takeaway: The better metrics of mobile apps on this factor have little to do with the experience itself; they are a result of the variability of intent across the many incoming traffic channels at play. The lower-converting (yet important) traffic from some sources simply never reaches native apps to dilute the impact of the stronger ones. On mobile web, meanwhile, a segment of direct and repeat traffic would show significantly higher metrics, making this more a game of numbers than any weakness of the web as a platform.
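One practical way to act on this takeaway is to compute conversion per traffic channel before comparing platforms, so that web's direct/repeat segment can sit next to app traffic on a like-for-like basis. A rough sketch in TypeScript; the Visit shape and channel labels are assumptions for illustration, not a real analytics schema.

```typescript
// Sketch: segment Conversion Rate by traffic channel so that, for example, the
// "direct" web segment can be compared with app traffic on a like-for-like basis.
type Channel = "direct" | "push" | "search" | "social" | "affiliate" | "sms";

interface Visit {
  visitorId: string;
  channel: Channel;
  purchased: boolean;
}

function conversionByChannel(visits: Visit[]): Map<Channel, number> {
  const uniques = new Map<Channel, Set<string>>();
  const purchases = new Map<Channel, number>();

  for (const v of visits) {
    if (!uniques.has(v.channel)) uniques.set(v.channel, new Set());
    uniques.get(v.channel)!.add(v.visitorId);
    if (v.purchased) purchases.set(v.channel, (purchases.get(v.channel) ?? 0) + 1);
  }

  const result = new Map<Channel, number>();
  for (const [channel, ids] of uniques) {
    result.set(channel, (purchases.get(channel) ?? 0) / ids.size);
  }
  return result;
}
```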

Factor 4— All cookies aren’t sweet.

~30% of Internet users disable or clear cookies every month. This number could go up to almost 50% in emerging markets such as India, which are dominated by relatively cheap, low-powered devices. The reasons range from privacy concerns to managing on-device storage. Regardless, these stats matter to us primarily because of how analytics tools work, and the subtle yet significant way in which this impacts web metrics.

Put very simply, most popular web analytics tools rely on cookies for two key pieces of information: how many times was a site visited, and how many unique visitors made those visits? The information these tools rely on to calculate metrics such as Conversion Rate and Active Users gets polluted when users clear cookies: a single user is treated as a new user on their subsequent visit, which is clearly incorrect since it's the same person. This phenomenon inflates Unique Visitors, and any positive engagement that happens (purchases, adding to a wishlist, adding a payment method, signing in, CTRs etc.) is now assumed to have been done by a much larger set of users than in reality.

Impact on Conversion Rate: A Conversion Rate calculated as 'Purchases/Unique Visitors' is computed on an inflated denominator, depressing the result. To put this in context, a 20% rise in the denominator lowers the measured Conversion Rate by roughly 17% in relative terms: a true 3% conversion rate would read as 2.5%.

Key Takeaway: This issue does not manifest on other platforms (native apps, voice etc.) since they don't rely on cookies to track users. This makes any Unique Visitor based web metric look lower than it really is, and blind trust in such metrics can lead to incorrect conclusions about the contribution of web to the overall business. It's nearly impossible to figure out the exact number of cookie clearings, but adjusting web metrics with a conservative estimate of visits incorrectly identified as 'new' is a good step towards a metric cleanup. An additional adjustment to increase the accuracy of this estimate is to exclude visits from users who are logged in (since they are not affected by cookie clearance).
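Here is a rough sketch of that adjustment in TypeScript. The 20% clearance estimate and the numbers in the example are illustrative assumptions, not measured figures; the point is only to show how the correction is applied.

```typescript
// Sketch: discount a conservative estimate of "new" cookies that actually belong
// to returning users, while leaving logged-in visitors (identified by account,
// not cookie) untouched.
interface WebPeriod {
  purchases: number;
  uniqueLoggedInVisitors: number;   // tracked by account id, unaffected by cookie resets
  uniqueAnonymousVisitors: number;  // tracked by cookie, inflated by cookie clearing
}

function adjustedConversionRate(p: WebPeriod, estimatedClearanceRate = 0.2): number {
  // Assume a fraction of anonymous "uniques" are returning users counted twice.
  const adjustedAnonymous = p.uniqueAnonymousVisitors * (1 - estimatedClearanceRate);
  return p.purchases / (p.uniqueLoggedInVisitors + adjustedAnonymous);
}

// Raw:      3,000 / (20,000 + 100,000) = 2.5%
// Adjusted: 3,000 / (20,000 +  80,000) = 3.0%
console.log(adjustedConversionRate({
  purchases: 3_000,
  uniqueLoggedInVisitors: 20_000,
  uniqueAnonymousVisitors: 100_000,
}));
```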

Factor 5— Intentless Traffic

Web traffic is increasingly prone to bot or non-human traffic that visits a site for reasons other than engaging with the services the site offers. I call this 'Intentless Traffic', as it has a near-0% chance of engaging in any meaningful way, such as purchasing something or reading an article.

While good bots help in delivering your product or service, bad bots have a negative impact on your business and are deployed by competitors and fraudsters with mal-intent. Bots come in several kinds: search engine crawlers, feed fetchers, backlink checkers (such as Ahrefs), monitoring bots, scrapers for competitive data mining (to feed competitor price engines), inventory estimation and hoarding, application DDoS (most fail but remain analytically relevant), API abuse, carding bots, ad fraud bots etc. Studies have estimated this type of traffic at anywhere between 20% and 40% of all traffic in 2018, and growing. A good heuristic: the larger the reach and relevance of a service to end users (E-commerce, Travel, Video), the larger the bot traffic problem the business will have. (Note: specifics from The Big, Bad Bot Problem report by ShieldSquare)

Impact on Conversion Rate: Similar to the cookie clearing issue, Intentless traffic inflates the denominator resulting in a lower Conversion Rate metric.

Key takeaway: 'Intentless traffic' is not going to engage with a site in any way that's meaningful for a business to track. It inflates analytics and impedes our decision-making by portraying a diluted view of the business. Bot traffic is much lower for native apps owing to the effort it takes to overcome their structural advantage of requiring installs and the general unavailability of content via public URLs (although the landscape is slowly changing as bots learn to take advantage of native app APIs). According to a study by ShieldSquare, nearly 91% of worldwide bad bot traffic was on the web (mobile and desktop) vs only 8.71% on mobile apps.

Several popular analytics tools now provide basic bot filtering, which can be applied before calculating metrics. Other efforts range from adjusting metrics with an estimate of such traffic to setting up dedicated bot and fraud detection teams (for those who can afford to).
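As a starting point, a first-pass filter over raw visit logs can strip obvious bots before any metric is computed. The user-agent patterns below are a tiny illustrative subset; production filtering typically combines UA lists, known bot IP ranges and behavioural signals.

```typescript
// Sketch: crude bot filter applied to raw visit logs before metrics are computed.
interface LoggedVisit {
  visitorId: string;
  userAgent: string;
  purchased: boolean;
}

const KNOWN_BOT_PATTERNS = [/bot/i, /crawler/i, /spider/i, /headless/i];

function isLikelyBot(userAgent: string): boolean {
  return KNOWN_BOT_PATTERNS.some((pattern) => pattern.test(userAgent));
}

function conversionRateExcludingBots(visits: LoggedVisit[]): number {
  const human = visits.filter((v) => !isLikelyBot(v.userAgent));
  const uniques = new Set(human.map((v) => v.visitorId)).size;
  const purchases = human.filter((v) => v.purchased).length;
  return uniques === 0 ? 0 : purchases / uniques;
}
```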

Factor 6— Product Experience

I've kept this one for last for a reason. The visible part of product UX is the easiest to observe, which is probably why it's also cited as the main reason for the alleged poor performance of web as a mover of metrics. By now we know that there's more to it than meets the eye, and visible UX is just one of several influencing factors.

Having said that, and keeping all product features equal between platforms, experience is one aspect where native apps do genuinely shine, specifically in one main area: performance. Although performance in itself deserves several blogs' worth of deep dives, let's quickly see the difference it makes before coming to the key takeaway on experience:

Real Performance: Mobile apps have inherent performance advantages that allow them to respond to users faster. Native apps provide a reliable, immersive experience that loads instantly. This is possible because an app's minimum UI is stored locally when it is downloaded the very first time, with content pulled in dynamically via APIs. These pre-loaded 'shells' pay performance dividends by loading instantly every time a user opens the app, providing an instant experience irrespective of connection flakiness. Most websites, on the other hand, are loaded anew on every visit, leading to users seeing white pages when visiting a URL or while transitioning between pages on a site. Dealing with this requires front-end wizardry and does not come as an inherent advantage. The problem gets worse when baggage from the desktop world of tethered, reliable connections (bloated scripts, third-party integrations etc.) is carried over directly to mobile sites, which face flaky, broken network conditions on underpowered devices. The result is slow-rendering mobile sites with uneven, janky experiences and journeys.
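The same app-shell idea is available to the web through service workers, which is how progressive web apps such as Flipkart Lite achieve instant repeat loads. Below is a minimal sketch of the pattern; the cache name and asset paths are placeholders, and the event types are deliberately loosened for brevity.

```typescript
// Minimal sketch of the app-shell pattern on the web via a service worker:
// cache the shell once at install time, serve it instantly on later visits,
// and let dynamic content come from the network.
const SHELL_CACHE = "app-shell-v1";
const SHELL_ASSETS = ["/", "/shell.css", "/shell.js", "/logo.svg"];

self.addEventListener("install", (event: any) => {
  // Pre-cache the minimum UI the moment the service worker is installed.
  event.waitUntil(
    caches.open(SHELL_CACHE).then((cache) => cache.addAll(SHELL_ASSETS))
  );
});

self.addEventListener("fetch", (event: any) => {
  // Shell assets are served from the cache (instant, network-independent);
  // everything else falls through to the network.
  event.respondWith(
    caches.match(event.request).then((cached) => cached ?? fetch(event.request))
  );
});
```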

Perceived Performance: Another aspect of performance is the user's perception of it. There are technical limits to raw performance improvements, which is why we also need to optimise the perception of performance so progress feels faster than it really is. Some examples: activity and progress indicators, shimmers on button taps to give immediate feedback on customer action, loading above-the-fold content first, predicting user actions and pre-loading content, showing small blurred images while the actual images are fetched, caching and reusing data already fetched etc. These tactics collectively lead to an experience that feels 'native' or 'app-like'.
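To make one of those tactics concrete, here is a small sketch of the blurred-placeholder technique; the function name, element handling and URLs are illustrative, not taken from any particular library.

```typescript
// Sketch: paint a tiny blurred placeholder immediately, then swap in the full
// image once it has finished loading in the background.
function loadWithBlurredPlaceholder(
  img: HTMLImageElement,
  placeholderUrl: string, // tiny, fast-loading stand-in
  fullUrl: string         // the real, heavier asset
): void {
  // Show something useful instantly.
  img.src = placeholderUrl;
  img.style.filter = "blur(8px)";

  // Fetch the real image off-screen; swap it in only when it's ready.
  const full = new Image();
  full.onload = () => {
    img.src = fullUrl;
    img.style.filter = "none";
  };
  full.src = fullUrl;
}
```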

But isn't all this possible on the web these days? Absolutely! Modern web tech allows us to do this and more, which brings us to the main point: improving key web performance metrics such as First Paint, Time to Interactive and First Meaningful Paint requires work such as optimising and splitting JavaScript and CSS bundles, tuning caching strategies, balancing server-side and client-side rendering, mitigating the impact of client-side rendering on web crawling/SEO etc., all of which demands front-end development expertise built through deliberate investment in time and resources.
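Route-level code splitting, one of the improvements listed above, roughly looks like the sketch below; the module path and element ids are placeholders.

```typescript
// Sketch: the checkout bundle is only downloaded and parsed when the user
// actually navigates to checkout, keeping the initial JavaScript payload
// (and Time to Interactive) small.
async function openCheckout(): Promise<void> {
  // Dynamic import() tells the bundler to emit this module as a separate chunk.
  const { renderCheckout } = await import("./checkout");
  renderCheckout(document.getElementById("root")!);
}

document.getElementById("checkout-button")?.addEventListener("click", () => {
  void openCheckout();
});
```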

Achieving native-like performance on mobile web requires deep focus on both real and perceived performance improvements, which means building world-class JavaScript/CSS expertise, consistent prioritisation of performance work by product teams and, of course, leadership intent.

Companies that do invest see significant, direct and long-term improvements in business results across their most meaningful metrics (such as Time on Site, Ad Clickthroughs, Conversion, Revenue, DAU, MAU). Flipkart, Pinterest, Twitter, MakeMyTrip, AliExpress are all examples of companies that have gone above and beyond to provide a consistent experience for their users regardless of their platform of access.

Key Takeaway: The perception of poor metrics drives a reduction in investment, and reduced investment kills performance improvements because of the deep expertise required to achieve them. This downward spiral is a self-fulfilling prophecy leading to ever lower mobile web metrics. It can be dealt with by diving deep into the metrics, understanding the impact of improved performance, and building the required expertise in-house. Although this expertise can also be hired, in my opinion it doesn't take much for product experience and performance to regress if left unattended (and it usually does). The key word in 'consistent improvement' is 'consistent'.

The factors discussed above are just a few of the many reasons why web metrics appear poorer than they really are. There will be several more specific to your product and industry, which can be uncovered only by questioning first impressions and digging into your data.

No two platforms are made equal, and knowing the merits and demerits of each is a good first step towards providing the best possible experience to users where they choose to meet us, not the other way around. With growing customer acquisition costs, high uninstall rates, rising barriers to creating new channels and ever-higher proprietary walls around them, the Open Web remains the one evergreen platform that will go to work for us, if we let it!

Share some of your own thoughts below👇🏼 or send me a tweet at @karanperi

Found it helpful? Hold down the 👏 and help others find this article.
