The Rise of Vernacular-Language Social Network Platforms and Why It Is Important to Learn from the West

Bharath Shanker
8 min read · Nov 21, 2018

--

Vernacular-language social apps like Sharechat, Helo, Roposo etc. have given voice, entertainment and an entry into a digital world that looks and sounds familiar to the people of India 2 and India 3, collectively called 'Bharat'. The next billion internet users are expected to come from here, and the ability to use a vernacular language is a brilliant enabler that accelerates this entry. These local-language start-ups have identified a big space and are racking up users, and with them funding and valuations. Sharechat, which recently raised $100 Mn, is valued at $460 Mn and has 8 Mn daily active users (DAU). It covers 14 regional languages with only 50 employees. Roposo, with $21 Mn in funding, says it has 11 Mn signed-up users. Helo, launched by China's Toutiao, has over 10 Mn installs.

The graphs below show the remarkable rise of Sharechat and Helo, two of the most popular local-language social apps, in the Google Play Store rankings over the last 3 months (data from App Annie). Virality and network effects have kicked in, and the apps are amassing users on a daily basis.

Sharechat app store ranking (Aug-Oct 2018); Source: App Annie
Helo app store ranking (Aug-Oct 2018); Source: App Annie

The similarity to the West

The troubling part of this rapid rise, and the red flag I wish to raise, is this: these Indian-language social networks are following the same path that social networks like Facebook, Instagram and Twitter have taken in dealing with platform issues like abuse, fake news, misinformation campaigns, hoaxes, etc.

Facebook's and Twitter's troubles in handling hate speech and platform abuse have been repeatedly exposed by Russian election meddling, fake news, the Cambridge Analytica scandal, etc. More recently, the failings of social media again came to the fore after the Pittsburgh synagogue shooting, where Gab, a far-right social media platform, had allowed anti-Semitic posts by the alleged shooter. Journalists like Kara Swisher have been making the point that in all these cases of social media abuse, the platforms were used almost exactly as they were designed to be used. The core reason is that the founders and builders of such social networks do not spend enough time thinking about how their platforms could be misused and abused. The same reach and connections that social networks espouse can, in the wrong or irresponsible hands, accelerate the spread of fake news, hoaxes, abuse, hatred against a community, character shaming, etc.

The laser-focused aim of scaling up through network effects to become the de facto social network often gets in the way of founders thinking about anything negative that might come out of their product. In some ways, the struggles they go through to build the product and the company amidst competition, fundraising, etc. make it understandable that they do not think pessimistically about the effects of their product. But unfortunately, this beast called the social network is very difficult to contain once bad actors get in and start misusing it. This is playing out at Facebook and Twitter, where every week some negative event erupts on their platforms.

A similar story is unfolding in the vernacular-language social network space in the country. These apps (Sharechat, Roposo, Helo), whose very soul is the ability to let people socialize, create and find content in their native language, do not even provide their terms and conditions in the language people choose to use the network in. The landing page in all these apps prompts users to choose their preferred language before moving ahead. Once we choose a language, we land on the next page, where users are asked to create their account. Here, when we try to understand the terms and conditions of using the platform, there is a small link in the local language pointing to them. Once we click it, lo and behold, a new page loads in English, detailing the usual list of T&Cs: how the platform is not responsible for the content posted, what kind of content can be posted, points about copyrighted material, etc.

Sharechat landing page in Tamil; T&C in English

Imagine a native-language user, who most probably is not strong in English, creating an account on one of these apps. She does not get an opportunity to understand what she is getting into and what her duties are in using the platform responsibly. I might sound like an alarmist for bringing this up, since most of us don't read the terms and conditions before signing up for a service. But this oversight is an indicator of how these startups are thinking, or in this case not thinking, about platform misuse and abuse.

Content moderation — Holy grail of social platforms

This is an important issue because the prevention, detection and handling of misuse looks more complex for local-language apps than for English-dominant platforms. One way English-majority platforms like Facebook and Twitter monitor content is with AI; they have used it to remove spam, porn and fake accounts. But I doubt we have enough training data in local languages for algorithms to learn context, hate speech, incendiary topics, etc. and flag toxic and sensitive content. I agree that images are language-agnostic, and the companies might be able to parse images better than text. But even here, I doubt certain cultural elements would be understood right away.
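To make the training-data point concrete, here is a minimal, purely illustrative sketch of the kind of text classifier such a platform might start from. The four labelled phrases stand in for the thousands of labelled posts a real system would need per language; the data, labels and scoring are all invented for illustration:

```python
from collections import Counter

# Toy labelled corpus. In reality a platform would need a large corpus
# of labelled posts *per language* -- exactly the data that, as argued
# above, mostly does not exist yet for Indian local languages.
TRAINING_DATA = [
    ("this community is wonderful and kind", "ok"),
    ("lovely photo of the festival today", "ok"),
    ("those people are vermin drive them out", "toxic"),
    ("spread this they deserve to be attacked", "toxic"),
]

def train(corpus):
    """Count word frequencies per label (a minimal bag-of-words model)."""
    counts = {"ok": Counter(), "toxic": Counter()}
    for text, label in corpus:
        counts[label].update(text.lower().split())
    return counts

def classify(model, text):
    """Label a post by which class's vocabulary it overlaps with more."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING_DATA)
print(classify(model, "they are vermin"))     # toxic
print(classify(model, "wonderful festival"))  # ok
```

Note what happens to a toxic post written in a language the model has never seen: every word scores zero and the classifier is effectively blind, which is the crux of the local-language moderation problem.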

India's cultural landscape is so diverse and complex that what is acceptable to one community or state might offend people in another. Facebook's hate-speech-detecting AI could not parse Burmese, which was partly why hateful messages spread unchecked and contributed to the genocide in Myanmar.

The other way of doing this is to have content moderators who sift through tons of images and posts to classify them. But this task needs hundreds, if not thousands, of operators, which these startups would not be able to afford. Even Twitter, a large company, has admitted that it doesn't have the resources to do this itself, and Jack Dorsey, its CEO, has asked users and the media to help weed toxic people out of the community. Sharechat is reported to have 50 employees; contrast that with the 7,500+ people Facebook has for content moderation alone.

For a paranoid person, it is very easy to see how the misuse of these platforms could bring massive problems to the communities they are used in. For example, in Tamil Nadu, clever bad actors could cause political instability, spur caste fights, encourage secessionist sentiments, etc. There have already been incidents of lynching in the state because of fake WhatsApp rumours.

Need for user education

Content moderation of social networks is becoming one of those holy-grail hard problems, and I don't think the startup ecosystem in our country, be it startups, VCs, journalists or users, is giving this challenge its due. To be fair, the startups are using AI to monitor content and giving users the ability to flag content and block users. But I believe this alone won't suffice.

There is a need for proactive education and timely reminders to users about the responsibility they hold in keeping their social network safe for all. Every new user who signs up must be made aware of what is and is not acceptable on the platform, and educated about the options to report and block. This education is extremely crucial for new vernacular-language users, who might be more vulnerable to platform abuse than the first users of Facebook, Twitter or Instagram. This need for 'prevention' rather than 'cure' comes from the challenges platforms face in moderating content. User education matters because it is not the evil, incendiary or abusive post alone that causes damage, but the shares, likes and comments from naïve, unprepared users that amplify it.

As Caterina Fake, the co-founder of Flickr, strongly argues, on online platforms the first set of users sets the community standards, and the founders must make it really clear what is and is not acceptable on their platform.

“You are the framer. You are the framer of the Constitution in this world that you are building. You are the Abraham in the series of begats” — Caterina Fake on the Masters of Scale podcast

These are hard decisions that founders must embrace without taking the easy way out, which is to hide behind free speech, "we are just a platform", etc. It is a very fine line, but it is important for founders to have strong opinions on these issues.

Responsibility Quotient for Founders

Another way founders and growth teams in startups could tackle this issue is to introspect on how much time they spend thinking about the effects of their social media platforms and how certain elements could misuse them. One way of doing this is to track a metric like a "Responsibility Quotient":

Responsibility quotient (RQ) = Time spent thinking about misuse & abuse / Time building the product
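As a purely illustrative sketch of the formula above (the function name, the hours and the interpretation are all hypothetical), the RQ could be tracked per sprint like this:

```python
def responsibility_quotient(hours_on_misuse_review: float,
                            hours_building_product: float) -> float:
    """Ratio of time spent anticipating misuse/abuse to time spent building.

    A higher RQ means the team is giving more thought to how the
    platform could be abused. Inputs are self-reported hours.
    """
    if hours_building_product <= 0:
        raise ValueError("hours_building_product must be positive")
    return hours_on_misuse_review / hours_building_product

# Hypothetical sprint: 4 hours on abuse scenarios vs 80 hours building.
rq = responsibility_quotient(4, 80)
print(f"RQ = {rq:.2f}")  # RQ = 0.05, i.e. one misuse-review hour per 20 build hours
```

What a "healthy" RQ looks like is for each team to decide; the point is simply to make the time spent on misuse thinking visible at all.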

Just as tech companies run hackathons and offer bug bounties, they could run experiments in a sandboxed environment (with selected, consenting users) to subject their platforms to soft attacks.

Ecosystem responsibilities

Most journalistic pieces on today's startups are predominantly about the product, funding, and success or failure stories. Very few articles question the first- and second-order effects of such services on our society. In the case of social platforms, it would be great to have statistics on how many users use the report/block option and how many posts were taken down in a period, alongside the DAU and MAU figures usually highlighted in news and review pieces. Tech journalists have a vantage point both as users and as observers. It is necessary that tech journalism plays a role in educating users as well as flagging issues that platform owners miss.
