The next wave of AI business models: on your privacy

Huynh Nguyen
Published in Red Gold
3 min read · Dec 2, 2019

or, why we help you find your value on the internet

Open access to social network platforms does not mean they are free to use; we all know that. Facebook collects every interaction from its users to sell targeted advertisements, and so do Google and even Amazon. Since they have full control of the data users produce on their platforms, they know what you like and dislike, build a map of your friends and your characteristics, and even read your personal inbox to learn your concerns. Within their service agreements, we have no way to protect our privacy if we want to keep connecting with our friends or our business networks.

That privacy-for-convenience business model has made those platforms multi-billion-dollar businesses by selling our privacy, while users receive no more than free service access and annoying advertising that sometimes exposes sensitive personal subjects. Even worse, users become victims of their own data: online scams reach your news feed more easily because scammers can purchase your personal interests from those platforms. In the fight against scam and fake content, the platforms lag behind: they have limited resources to validate the huge volume of content, and their business model lets them look away as long as the problem does not affect their advertising business.

Our AI solution for personalized content on social networks does not follow the same rules. It relies on two concepts:

In-device moderation: unlike traditional server-side processing, which is opaque yet resource-demanding as the user base grows quickly, our AI solution is deployed client-side, enabling transparent policies, virtually unlimited scale, and lower cost; a sketch of the idea follows below.

Users can review the content labels and understand the moderation decisions made by the AI
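To make the idea concrete, here is a minimal sketch of an in-device moderator. Everything in it is an illustrative assumption rather than our actual implementation: a tiny keyword "model" stands in for a real client-side classifier so the example runs anywhere, and the labels and threshold are made up. The key property it demonstrates is that the post never leaves the device, and the user can inspect the labels behind every decision.

```python
# Sketch: in-device moderation. The post is scored locally and the
# user-visible labels explain why it was (or was not) hidden.
from dataclasses import dataclass


@dataclass
class ModerationResult:
    labels: dict   # label -> score, shown to the user for review
    hidden: bool   # whether the client hides the post from the feed


class TinyKeywordModel:
    """Stand-in for an on-device classifier (illustrative assumption)."""
    KEYWORDS = {"scam": ["free money", "wire transfer"], "spam": ["buy now"]}

    def predict(self, text: str) -> dict:
        text = text.lower()
        return {label: float(any(k in text for k in kws))
                for label, kws in self.KEYWORDS.items()}


def moderate_on_device(post: str, model, threshold: float = 0.5) -> ModerationResult:
    """Run moderation locally; nothing is sent to a server."""
    scores = model.predict(post)
    flagged = {k: v for k, v in scores.items() if v >= threshold}
    return ModerationResult(labels=scores, hidden=bool(flagged))


print(moderate_on_device("Get free money now, just wire transfer $10!",
                         TinyKeywordModel()))
```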

Beyond moderation, we aim to implement learning within the user's scope, over distributed data. This opens the door for multiple moderation engines, customized to the specific needs of each community, to be built with limited supervision from the platforms; a sketch of this style of training follows.
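The article does not spell out the training protocol, so the sketch below uses federated averaging, a common approach that matches the description: each client takes a learning step on its own data and shares only updated weights, never raw behavior. The linear-regression task, client count, and learning rate are all assumptions for illustration.

```python
# Sketch: learning within user scope via federated averaging.
import numpy as np


def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear regression on the client's own data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad


def federated_round(weights, clients):
    """The server averages the weights returned by each client."""
    updates = [local_step(weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)


rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(5):  # five users, each with private data that stays local
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=20)))

w = np.zeros(2)
for _ in range(50):
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without pooling any raw data
```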

Learning from censored data: our approach to censoring client data is to mix it with noise, meaning we allow users to replace their real behavior with random noise that we supply. This limits the platform to learning only the information it has disclosed to users, and users can negotiate the value of their data, and how it is used on the platform, by adjusting the noise level on their side (sketched below).

Using noise to censor data protects user privacy
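The mechanism described above closely matches randomized response, a standard local-privacy technique, so the sketch below uses it; our production mechanism may differ in detail. With probability equal to the noise level, the client reports a coin flip instead of the true signal; raising that level raises privacy and lowers what the platform can learn about any individual, while population-level statistics stay recoverable.

```python
# Sketch: noise mixing via randomized response.
import random


def randomize(true_value: bool, noise_level: float) -> bool:
    """Report the true bit, or pure client-supplied noise with probability noise_level."""
    if random.random() < noise_level:
        return random.random() < 0.5
    return true_value


def estimate_rate(reports, noise_level):
    """The platform can debias the aggregate rate, but not any single user."""
    observed = sum(reports) / len(reports)
    return (observed - noise_level * 0.5) / (1 - noise_level)


# 10,000 users, 30% of whom truly like the topic; each mixes in 60% noise.
reports = [randomize(random.random() < 0.3, 0.6) for _ in range(10_000)]
print(round(estimate_rate(reports, 0.6), 2))  # close to 0.30
```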

Readers can refer to our article Telar Social AI privacy-preserved recommendation for details of our solution.

Our AI solution is open source and actively developed under the Telar project, where you can also find more interesting articles from us.

All kinds of contributions and feedback are very welcome.
