The Promise of AI Content Filtering Systems for Digital Marketplaces

Ahmed Medien · Published in Checkpoint · 4 min read · May 11, 2021
[Image sources: Statista and other sources]

The past decade has seen the rise of the social e-commerce platform. C2C platforms such as Wish, Vinted, and Wallapop have raised new funding rounds of $300 million, $141 million, and $191 million, respectively, in the past two years alone. They withstood the COVID-19 pandemic, which devastated traditional brick-and-mortar retail. Other emerging social platforms, such as TikTok, are seeking closer collaboration with e-commerce companies like Shopify to integrate shopping features into their apps and build better social traffic pipelines for e-commerce. The trend is irreversible: in a 2020 survey, more than 37% of global e-commerce companies said they plan to sell directly on social media.

What is a C2C platform?

C2C, or customer-to-customer, platforms are e-commerce platforms that bring together buyers and sellers to trade goods and services. C2C platforms built on the traditional newspaper classifieds model and now lead on the mobile-first front. They provide not only shopping features but also social features that let customers comment, “like”, review, and generally exchange information. Mobile-first platforms such as Instagram, Snapchat, and Pinterest have continually experimented with shopping features such as buy buttons and product pages on their platforms.

C2C platforms in the US garnered more than 125 million clicks between Q4 2019 and Q4 2020. In China, where Tencent-owned WeChat is ubiquitous, C2C e-commerce represented 41% of the country’s total online retail sales in 2020.

The ecosystem of fraud and content abuse

Before products become available on C2C apps, users must create seller profiles and upload their products to the platform. Platform moderators, whether human or automated, review new content before it goes public. Because these marketplaces rely on user-generated content, whether for media consumption or for commerce, bad actors can infiltrate them at exactly this stage.
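As a rough illustration of that review step, the sketch below shows how a pre-publication gate might triage a new listing into publish, hold-for-review, or reject decisions. The `Listing` type, the banned-terms list, and the account-age threshold are hypothetical assumptions for the example, not any platform’s actual API or policy.

```python
# Illustrative sketch of a pre-publication review gate for new listings.
# All names (Listing, BANNED_TERMS, review_listing) and thresholds are
# hypothetical, not a reference to any specific platform's system.
from dataclasses import dataclass

BANNED_TERMS = {"counterfeit", "replica", "stolen"}  # example policy terms

@dataclass
class Listing:
    seller_id: str
    title: str
    description: str
    seller_account_age_days: int

def review_listing(listing: Listing) -> str:
    """Return 'publish', 'hold_for_human_review', or 'reject'."""
    text = f"{listing.title} {listing.description}".lower()

    # Hard policy violations are rejected outright.
    if any(term in text for term in BANNED_TERMS):
        return "reject"

    # Very new seller accounts are a common fraud signal, so their first
    # listings go to a human moderator instead of straight to the storefront.
    if listing.seller_account_age_days < 7:
        return "hold_for_human_review"

    return "publish"

if __name__ == "__main__":
    new_listing = Listing("seller_42", "Vintage denim jacket",
                          "Gently used, ships worldwide",
                          seller_account_age_days=3)
    print(review_listing(new_listing))  # -> hold_for_human_review
```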

Among the many types of content abuse and fraud, malign manipulation campaigns can take the form of “bombing,” where accounts upload content that violates the platform’s content policies, or “brigading,” where they coordinate to target specific users. Fraud, on the other hand, takes many shapes: product “hijacking,” fake reviews bought and sold, campaigns to delegitimize competitors, elaborate identity-theft schemes, credit card fraud, and more. As we outlined in a previous post on the threats facing digital marketplaces, it is important to understand content abuse and fraud as a system with nodes and moving pieces.

[Image: a screenshot of the content abuse and fraud cycle, by Sift]
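To make the “brigading” pattern concrete, here is a minimal, hypothetical sketch of how a platform might flag a target that suddenly receives attention from an unusually large number of distinct accounts. The class name, window size, and threshold are illustrative assumptions, not a description of any real detector.

```python
# Hypothetical sketch of a brigading detector: flag a target that receives
# interactions (reports, comments, downvotes) from an unusually large number
# of distinct accounts within a short time window. Thresholds are examples.
from collections import defaultdict, deque

WINDOW_SECONDS = 3600           # look at the last hour of activity
DISTINCT_ACTOR_THRESHOLD = 25   # arbitrary example threshold

class BrigadingDetector:
    def __init__(self):
        # target_id -> deque of (timestamp, actor_id) events
        self.events = defaultdict(deque)

    def record(self, target_id: str, actor_id: str, ts: float) -> bool:
        """Record one interaction and return True if the target looks brigaded."""
        window = self.events[target_id]
        window.append((ts, actor_id))

        # Drop events that have fallen out of the time window.
        while window and ts - window[0][0] > WINDOW_SECONDS:
            window.popleft()

        distinct_actors = {actor for _, actor in window}
        return len(distinct_actors) >= DISTINCT_ACTOR_THRESHOLD

if __name__ == "__main__":
    detector = BrigadingDetector()
    # Simulate 30 different accounts piling onto the same seller within a minute.
    flagged = False
    for i in range(30):
        flagged = detector.record("seller_42", f"account_{i}", ts=float(i))
    print(flagged)  # -> True
```

A real system would combine several such signals, but the point is the same one the diagram above makes: coordinated abuse leaves a structural footprint across accounts, not just in any single piece of content.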

Trust and safety as enablers for e-commerce growth

As e-commerce and digital marketplaces continue to grow, so does distrust of them, driven by their perceived vulnerability. Users distrust the safety of digital marketplaces for many reasons, from privacy concerns to unfamiliar habits such as online dating or buying products from other users that they will never get to touch beforehand. To overcome this reticence, potential buyers need confidence in their online choices and trust in the brand and platform they choose to spend their money on. Trust between buyers, vendors, and the social platform is the currency that delivers transactions and future growth. This is why it is paramount that every social e-commerce platform invests in its ability to detect spam, fake listings, and scams, act against abuse and bad-faith behaviour, and protect users from violations of local and international laws and regulations.

What can AI content moderation systems do for you?

Whether a digital marketplace monetizes its services through subscriptions or by hosting a platform where users exchange products and services, unfiltered social chatter can be detrimental to a brand or business. After all, you cannot control users’ behaviour. Unfortunately, the worst behaviour on the internet tends toward a race to the bottom, with users flocking to services that are least likely to act proactively against abusive content or speech that calls for harm.

An AI content moderation system can apply content rules, as humans understand them, at scale. Facebook can venture into verticals such as dating or Facebook Marketplace globally because a robust AI moderation system filters objectionable and violent content. Furthermore, with an AI system keeping tabs on user-generated content, platforms can quickly analyze coordinated threats, take adequate action, and develop robust community policies over time to meet the safety needs of their users. An AI system can also match hyper-specific rules with machine learning to create a hybrid model of content and user-behaviour moderation that provides a safe and trusted social marketplace for users and their communities.
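As a minimal sketch of that hybrid approach, assuming Python and entirely illustrative rule names and thresholds, a pipeline might run hand-written, hyper-specific rules first and fall back to a statistical score for everything the rules do not decide. The `model_score` function here is a trivial stand-in, not a real trained classifier.

```python
# Minimal sketch of a hybrid moderation pipeline: precise, human-written rules
# run first, then a statistical model scores whatever the rules did not decide.
# Rules, thresholds, and the scoring stub are illustrative assumptions only.
import re

RULES = [
    # (human-readable policy name, compiled pattern, action)
    ("weapon sales", re.compile(r"\b(firearm|ammunition)\b", re.I), "remove"),
    ("off-platform payment", re.compile(r"\bwire transfer only\b", re.I), "hold"),
]

def model_score(text: str) -> float:
    """Stand-in for a trained abuse classifier returning a risk score in [0, 1]."""
    risky_words = {"scam", "fake", "guaranteed profit"}
    hits = sum(word in text.lower() for word in risky_words)
    return min(1.0, hits / 3)

def moderate(text: str) -> str:
    # 1) Deterministic rule layer: auditable and easy to update as policy changes.
    for policy, pattern, action in RULES:
        if pattern.search(text):
            return f"{action} (rule: {policy})"

    # 2) Statistical layer: catches paraphrases the fixed rules miss.
    score = model_score(text)
    if score >= 0.66:
        return "remove (model)"
    if score >= 0.33:
        return "hold for human review (model)"
    return "allow"

if __name__ == "__main__":
    print(moderate("Brand-new jacket, wire transfer only please"))  # rule layer holds it
    print(moderate("This is not a scam, guaranteed profit!"))       # model layer flags it
```

The value of keeping both layers is that the rule layer stays auditable and can be updated the moment policy changes, while the statistical layer generalizes to wording the rules never anticipated.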

Good content and civil relationships between users attract business and more economic activity, which in turn enables further commercial opportunities and growth for brand owners. By ensuring that consumers can interact with each other civilly and healthily, free of abuse, and build a friendly online community, brands and businesses also gain the ability to track consumers’ perceptions and sentiment without invasive access to their data.

Ahmed Medien manages and leads several projects in the credibility, counter-misinformation, open knowledge, and trust and safety spaces. MSc in process optimization.