Safety @ Koo (updated)

Rajneesh Jaswal
Published in Koo App
Jan 13, 2023

Koo’s mission is to unite the world and create a friendly space for healthy discussions. Koo prides itself on being the friendliest social network.

While running a platform flawlessly at scale and being fair to everyone is a mighty challenge, we strive to keep creating and improving processes to deal with evils like hate speech, fake news and impersonation.

Koo works hard to provide its users a safe and trusted microblog in a language of their choice. To give users a wholesome community and meaningful engagement, we take a number of steps to keep the platform safe.

Koo has identified five areas which have a high impact on user safety:

  1. Child Sexual Abuse Materials & Nudity
  2. Toxic comments and hate speech
  3. Misinformation and Disinformation
  4. Privacy Rights
  5. Impersonation

Our framework for handling these issues:

  1. Proactively identify and deal with harmful content
  2. Enable the community to highlight it
  3. Work with experts and enablers like NGOs to identify and tackle it
  4. Work with intelligence and state authorities to keep improving

Content moderation is a journey, and platforms need to constantly evolve their methods and processes to stay abreast of the latest attacks from chaos creators and protect citizens at large. Our endeavor is to work with the best authorities and agencies to achieve near perfection, while acknowledging that 100% accuracy is a north star.

The intent is to keep our internet and public social space safe.

Child Sexual Abuse Materials and Nudity

Child sexual abuse is one of the most heinous crimes and Koo condemns it in the harshest manner. There is zero tolerance on Koo for child sexual abuse materials. Anyone posting, exchanging, requesting, following, reacting to or engaging with such materials has no place on Koo.

Koo is a thoughts and opinions platform where a large majority of users freely express themselves in a language of their choice and on a topic of interest, including politics, comedy, poetry, culture, sports and art, among thousands of other topics. To give these users the safest and most engaging experience, any content exhibiting private parts, live sex, depraved or obscene behavior, or nudity is not allowed on Koo.

Koo’s policy on child sexual abuse materials and nude or sexual content is implemented in the following manner:

  1. Koo’s in-house ‘No Nudity Algorithm’ proactively and instantaneously detects and blocks any attempt by a user to upload a picture or video containing child sexual abuse materials or nude or sexual content. Detection and blocking take less than 10 seconds (a simplified sketch of such an upload check follows this list).
  2. Immediately on detection, persons posting such pictures or videos are restricted in their use of the Koo platform. They are immediately blocked from (i) posting content; (ii) being discovered by other users; (iii) being featured in trending posts; or (iv) being able to engage with other users in any manner.
  3. On the platform itself, our official handle @KooForGood is the lightning rod for reporting CSAM.
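
Purely for illustration, the sketch below shows what an upload-time check and the resulting account restrictions could look like. The classifier call, threshold and all names here are assumptions made for the example, not Koo’s actual implementation.

```python
# Hypothetical sketch of an upload-time moderation hook. The classifier,
# threshold and names are illustrative placeholders, not Koo's code.
from dataclasses import dataclass
from enum import Flag, auto


class Restriction(Flag):
    NONE = 0
    NO_POSTING = auto()
    NOT_DISCOVERABLE = auto()
    NO_TRENDING = auto()
    NO_ENGAGEMENT = auto()
    ALL = NO_POSTING | NOT_DISCOVERABLE | NO_TRENDING | NO_ENGAGEMENT


@dataclass
class Account:
    user_id: str
    restrictions: Restriction = Restriction.NONE


BLOCK_THRESHOLD = 0.85  # assumed confidence cut-off


def nudity_score(image_bytes: bytes) -> float:
    """Placeholder for an in-house image classifier returning P(nudity/CSAM)."""
    # A real implementation would run a trained model; this stub always returns 0.0.
    return 0.0


def handle_upload(account: Account, image_bytes: bytes) -> bool:
    """Return True if the upload is allowed, False if it is blocked."""
    if nudity_score(image_bytes) >= BLOCK_THRESHOLD:
        # Block the upload and restrict the account, mirroring steps 1-2 above.
        account.restrictions = Restriction.ALL
        return False
    return True
```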

Toxic Comments and Hate Speech

In line with the UN Strategy and Plan of Action on Hate Speech, Koo defines toxicity and hate speech to mean any kind of communication, whether in speech, writing or behavior, that attacks or incites attacks against, insinuates possible or obvious harm to, or uses pejorative, contemptuous or discriminatory language about a person or a group on the basis of their identity, for example religion, ethnicity, nationality, race, color, descent, gender or other identity factors.

We recognise that the longer toxic comments and hate speech remain available for public viewing, the more harm they can cause. Koo therefore actively detects and blocks toxic comments and hate speech (instead of the reactive approach taken by other platforms).

This detection and blocking is practically implemented in the following manner:

  1. Koo has collated dictionaries of hateful words in many languages, including Portuguese. The dictionaries are sourced from the open web, academic institutions, Government bodies and our own machine learning patterns.
  2. These dictionaries are combined with inputs from class-leading external non-partisan sources (e.g. the Perspective API toxicity detector) to create an in-house ‘No Toxicity Algorithm’ (a simplified sketch of this combination appears below).
  3. The Koo ‘No Toxicity Algorithm’ is a unique feature that actively detects and removes or hides any content containing toxic or hateful words. Detection and removal occur within 10 seconds of a user posting toxic or hateful content. The algorithm currently detects text only and is being fine-tuned to detect toxic or hateful words in pictures.
  4. Persons engaging in repeat acts of posting hateful and toxic content are restricted in their use of the Koo platform and are blocked from the ability to (i) post content; (ii) be discovered by other users; (iii) be featured in trending posts; or (iv) be able to engage with other users in any manner.
  5. In the rare event of an incorrect removal, users are given an opportunity to file an appeal using our Grievance Redressal mechanism, and incorrect decisions are reversed within 72 hours or less.

We will work with local authorities to strengthen these dictionaries to incorporate local words and contexts that we may be unaware of.
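
As a rough illustration of how a word-list check can be combined with an external toxicity score, here is a minimal sketch. The Perspective API endpoint and response fields follow its public documentation; the dictionaries, threshold and function names are placeholders, not Koo’s production code.

```python
# Minimal sketch: dictionary lookup combined with a Perspective API score.
import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
TOXICITY_THRESHOLD = 0.8  # assumed cut-off, not Koo's actual value

# Placeholder dictionaries keyed by language code.
HATEFUL_WORDS = {"pt": set(), "en": set()}


def dictionary_hit(text: str, language: str) -> bool:
    """Check the post against the collated hateful-word dictionary."""
    words = {w.lower().strip(".,!?") for w in text.split()}
    return bool(words & HATEFUL_WORDS.get(language, set()))


def perspective_toxicity(text: str, language: str, api_key: str) -> float:
    """Ask the Perspective API for a TOXICITY probability in [0, 1]."""
    body = {
        "comment": {"text": text},
        "languages": [language],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": api_key}, json=body, timeout=5)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def is_toxic(text: str, language: str, api_key: str) -> bool:
    # Either a dictionary match or a high external score triggers removal/hiding.
    return dictionary_hit(text, language) or \
        perspective_toxicity(text, language, api_key) >= TOXICITY_THRESHOLD
```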

Misinformation and Disinformation

Misinformation and disinformation have been identified as among the most dangerous harms on social media. Given the extremely dangerous nature of this topic, Koo takes a proactive approach to detecting and labeling misinformation and disinformation. This approach differs from that of other platforms, which only “react” to reports or sometimes actively support such content.

Koo’s in-house ‘Misinfo & Disinfo Algorithm’ actively detects and labels misinformation and disinformation in the following manner:

  1. Koo has created a database of malicious links, URLs and websites which are well known for spreading misinformation and disinformation. This database is actively updated with information from various trusted sources.
  2. Koo has also created a database of news, articles and links already labeled as fake news by reputed news and media houses. This in-house database is updated with real time information provided from class leading non-partisan sources.
  3. The Koo ‘Misinfo & Disinfo Algorithm’ constantly, and in real time, scans all viral and reported posts against these databases (a simplified sketch of this link-matching step follows this list).
  4. In the event of a match, the post is immediately (i) flagged for a deeper check by an IFCN-certified fact checker; (ii) displayed with a caution to users; (iii) removed from trending and active feeds; and (iv) not allowed to be shared.
  5. Once a professional fact checker confirms that a post is unverified or partially verified, the post is removed from the platform and all users who interacted with the post are proactively informed of the fact checking results.
  6. Persons engaging in repeat acts are restricted in their use of the Koo platform and are blocked from (i) posting content; (ii) being discovered by other users; (iii) being featured in trending posts, or (iv) being able to engage with other users in any manner.
  7. In the rare event of an incorrect removal, users are given an opportunity to file an appeal using our Grievance Redressal mechanism and incorrect decisions are reversed within 72 hours or less.
  8. Koo has started discussions with several fact checking agencies to bring them on board as trusted fact checkers in addition to engaging with NGOs to ensure a non-partisan approach.
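
The link-matching step referenced above could, in simplified form, look like the sketch below. The databases, normalization rules and names are illustrative assumptions, not Koo’s actual databases or code.

```python
# Illustrative sketch of matching links in a post against misinformation databases.
import re
from urllib.parse import urlparse

MALICIOUS_DOMAINS = {"example-fake-news.test"}            # database 1 (placeholder)
DEBUNKED_URLS = {"https://example-fake-news.test/story"}  # database 2 (placeholder)

URL_PATTERN = re.compile(r"https?://\S+")


def normalize(url: str) -> tuple[str, str]:
    """Return (domain, canonical URL) with trailing punctuation and 'www.' removed."""
    parsed = urlparse(url.rstrip(".,)"))
    domain = parsed.netloc.lower().removeprefix("www.")
    return domain, f"{parsed.scheme}://{domain}{parsed.path}"


def flag_post(text: str) -> bool:
    """Return True if the post should be routed to a fact-checker review."""
    for url in URL_PATTERN.findall(text):
        domain, canonical = normalize(url)
        if domain in MALICIOUS_DOMAINS or canonical in DEBUNKED_URLS:
            return True
    return False
```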

In order to reduce the effect of bots in spreading misinformation and disinformation, Koo has implemented a system of Voluntary Self Verification, which is already live on Koo. The process relies on Google’s reCAPTCHA and Fingerprint’s bot detector to identify accounts that are human (and not bots) and place a Green Identification Tick on their profile. In our experience, users who voluntarily verify themselves are less likely to engage in misinformation and disinformation. Voluntary Self Verification is provided free of charge to all users.
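
For readers curious about the mechanics, below is a hedged sketch of a server-side reCAPTCHA check that could back such a self-verification flow. The siteverify endpoint and response fields follow Google’s public documentation; the score cut-off is an assumption.

```python
# Sketch of a server-side reCAPTCHA verification step.
import requests

RECAPTCHA_VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"


def verify_human(recaptcha_token: str, secret_key: str) -> bool:
    """Return True if the token passed to the client-side widget checks out."""
    resp = requests.post(
        RECAPTCHA_VERIFY_URL,
        data={"secret": secret_key, "response": recaptcha_token},
        timeout=5,
    )
    resp.raise_for_status()
    result = resp.json()
    # reCAPTCHA v3 also returns a score in [0, 1]; 0.5 is an assumed cut-off.
    return result.get("success", False) and result.get("score", 1.0) >= 0.5
```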

Apart from that, we have systems that look for bot activity and neutralize its effect on our systems, our algorithms and the visibility of such accounts.

Privacy Rights

Koo believes in transparency, fairness and user control as far as user data is concerned. We are transparent about what data is stored and used, fair in not doing anything without consent, and we give users control over their data.

Koo is ISO 27001:2013 certified and implements the requirements of the standard while collecting, storing, sorting and handling data. User data is encrypted at rest and in transit (an illustrative encryption example appears after the list below).

Specifically:

  1. Data of users for a particular part of the world is encrypted and stored in-region, if not in-country.
  2. Koo has a global, consistent approach to privacy which aligns with GDPR, LGPD and other high-threshold privacy laws around the world. No personal data or sensitive data is collected without consent. On download of the Koo App, users are informed that proceeding with use of the Koo App is subject to the Koo Privacy Policy.
  3. The Privacy Policy is accessible on Koo’s website and as a separate section in the App Settings.
  4. The Privacy Policy, the Reporting and Redressal section and the Contact section of our website and App have embedded forms and detailed contact information through which users can reach out to us with requests regarding the processing of their private information.
  5. In order to comply with legal requirements and assure our users of complete transparency and responsiveness in handling their data requests, Koo is appointing Data Protection Officers across the world. Details can be found on the Privacy Policy page.
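
As a generic illustration of the encryption-at-rest idea mentioned above, the sketch below uses the open-source cryptography library’s Fernet recipe. It only demonstrates the concept; Koo’s actual ciphers and key management are not described in this post.

```python
# Minimal illustration of encrypting a user record at rest with Fernet
# (AES-128-CBC with HMAC-SHA256). Generic example only, not Koo's scheme.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, loaded from a KMS/HSM, never hard-coded
cipher = Fernet(key)

record = {"user_id": "u123", "email": "user@example.com"}
token = cipher.encrypt(json.dumps(record).encode("utf-8"))  # ciphertext to store
restored = json.loads(cipher.decrypt(token))                # decrypt when needed
assert restored == record
```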

We believe that the above measures are strong and resilient and will help in creating a better social media experience for our users.

Impersonation

Bad actors are known to create profiles which impersonate eminent users in order to spread misinformation, disinformation or just cause disruption to user experience. Apart from the harm and disruption to users, impersonation also creates a reputational impact for eminent personalities / organizations.

In order to avoid such risks, Koo has created an in-house ‘MisRep Algorithm’. The MisRep Algorithm constantly scans the platform for profiles that use the content, photos, videos or descriptions of well-known personalities, and it is constantly updated with pictures and content of eminent personalities / organizations (a simplified illustration of the photo-matching idea follows below).

On detection:

(i) the pictures and videos of well-known personalities are immediately removed from the posts,

(ii) any such accounts are immediately flagged for future bad behavior, and

(iii) the posts are deprioritized in circulation.

All these actions occur within 10 seconds. Fan pages and other accounts that follow well-known personalities for genuine purposes are encouraged to use “Fan Page” in their description.
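
As a simplified illustration of the photo-matching idea, the sketch below compares a profile photo against known photos of eminent accounts using perceptual hashes from the open-source imagehash library. The data, threshold and names are hypothetical; Koo’s actual MisRep Algorithm is not public.

```python
# Hypothetical profile-photo matching with perceptual hashes (imagehash library).
from PIL import Image
import imagehash

# Perceptual hashes of verified eminent accounts' photos (placeholder data).
EMINENT_PHOTO_HASHES = {
    "eminent_user_1": imagehash.hex_to_hash("d1d1b1b1e1e1c0c0"),
}
MAX_DISTANCE = 6  # assumed Hamming-distance threshold for "same image"


def looks_like_impersonation(profile_photo_path: str) -> str | None:
    """Return the matched eminent handle if the photo is a near-duplicate, else None."""
    candidate = imagehash.phash(Image.open(profile_photo_path))
    for handle, known_hash in EMINENT_PHOTO_HASHES.items():
        if candidate - known_hash <= MAX_DISTANCE:  # bit-difference between hashes
            return handle
    return None
```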

The MisRep Algorithm, in conjunction with our Eminence Yellow Tick feature, ensures that the risk of impersonation is minimized and bad actors are unable to influence regular users.

Request for Demo / Proof

If you would like to set up a demonstration of any of the above features or require any further information, please do not hesitate to contact our General Counsel, Rajneesh Jaswal, at corpcomm@kooapp.com or publicpolicy@kooapp.com.
