The social impact we aim for with the net.vote system

Rado Raykov
dialogue through data

--

AI as a technology that may benefit all.

There is a barrier to the use of AI: it is normally not a tool that anyone can reach and reap the benefits of. Some of our finest entrepreneurs today, such as Elon Musk, warn against the danger of AI being monopolised.

Access factor

Our first approach is to make our tool net.vote and its application of AI available and useful for every individual, with the lowest possible barrier and the broadest possible benefit of use.

  • Works in the browser, no installation required, on any device.
  • Serving fast, easy analysis of all data to every logged-in user of the system.
  • A profile page that shows a detailed view of all information relevant to the user, extracted through analysis of the data they chose to share.
  • We avoid the black-box effect common in market research and much scientific practice, and grant instantaneous access to analysis of data given not only by the user, but also by all other users.

Educational factor

Our second approach is making the use of AI to analyse data as understandable and humanly meaningful as possible.

  • We use a conversational interface that feeds information back in accessible language: multilingual, in free speech form, and drawing on current consumer culture (emojis).
  • We demonstrate to the individual what is done with their data for research purposes and make that research meaningful for the individual, through their curiosity for self-discovery.

Outreach — “point of opinion formation”

Our third approach is putting the tool in the hands of people wherever they participate in public opinion formation (integration of net.vote into HTML for media).

Quality research outcomes and research partnership

Our fourth approach is partnering with research and educational institutions. We wish our work to support as much as possible the work of researchers — primarily in the areas of AI, Sociology, Anthropology, Psychology, Media studies, Political science, Data science.

We believe partnering in making sense of the gathered data is a great way to involve researchers from all relevant areas and to extract the broadest possible value for individuals and for the public. The envisaged results of such partnerships include:

  • Extracting more and new information from the data.
  • Allowing viable and real-time information to be used in the research done by our partners.
  • Making more and new information available to users of the system — both general research findings and how they relate to themselves.
  • Producing research through the system and publishing the results through all available channels — including media partners (discussion panels on topics we have new and interesting data on), research partners' outreach (webpages, online media pages, events) and our own communication channels with users (email, personal profile pages).

All the measures above involve individuals, asking them to participate and truly be part of forming and understanding public opinion as it happens.

We believe the access and educational factors in our work are the most important features distinguishing an ethical use of this technology. By making the technology available and understandable to more people, we open it to better monitoring and auditing by the public. Many companies have taken the route of practically spying on people (social media listening practices that also apply AI to analyse public opinion), giving little to no understanding of how data and algorithms are used. We, by contrast, purposefully simplify, provide and promote this particular use of technology to the public, making it as transparent as possible: what it does, how and why it does it, and also what that means on an individual level as well as for different social structures. The goal is that the individual sees, understands and consents to what happens with their data every step of the way:

1. You are greeted by a “robot being” — a representation of AI that is there to “tell you how the world feels”.

2. When asked for data, you can see that by knowing your gender the bot can tell you how people of your gender feel.

3. Curiosities are fed into the results page, showing what stage of complex analysis the system has reached for the particular topic. An example would be the bot figuring out that “138 women in townX agree with you on this!”, while people of your age, sex and location generally disagree — showing that the system can identify clusters of shared traits and effectively produce a map of these clusters and their opinions.

4. The profile page is dedicated to self-exploration, but also to exploring how depersonalised data can be used to generate a very detailed persona. We wish to share every development in this part of the algorithms, so that the individual can make an informed, ongoing decision about sharing their personal and other data (consent, in our view, requires not only a deliberate but also an informed decision).
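The cluster curiosities described in step 3 come down to a group-and-count over demographic traits. Here is a minimal illustrative sketch — the field names and data are invented, not the actual net.vote schema or algorithm:

```python
from collections import Counter

# Illustrative responses: (gender, town, age_band, agrees_with_statement)
responses = [
    ("female", "townX", "25-34", True),
    ("female", "townX", "25-34", True),
    ("female", "townX", "35-44", False),
    ("male",   "townX", "25-34", False),
    ("female", "townY", "25-34", True),
]

def agreement_by_cluster(responses, key):
    """Count agreeing responses per demographic cluster.

    `key` selects which trait columns define a cluster,
    e.g. (0, 1) clusters by (gender, town).
    """
    counts = Counter()
    for row in responses:
        cluster = tuple(row[i] for i in key)
        if row[3]:  # this respondent agrees
            counts[cluster] += 1
    return counts

by_gender_town = agreement_by_cluster(responses, key=(0, 1))
# e.g. the ("female", "townX") cluster holds the count behind
# a curiosity like "2 women in townX agree with you on this!"
print(by_gender_town[("female", "townX")])
```

Varying `key` over different trait combinations is what effectively produces the map of clusters and their opinions.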

Constructive communication for social causes

Incentivising corporates to be active in social causes

We are geared to aid our corporate customers in discovering which social causes their consumers care most about, and thus to help these organisations structure their existing and new corporate social responsibility efforts. An example would be work on testing new Unilever products, where we might discover that a large proportion of those who care about a product line also care about disabilities; we then provide Unilever with the information that it makes sense for this brand to get involved in, or launch, a program that aids people with disabilities.
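In essence, this comes down to measuring the overlap between two audiences. A minimal sketch, assuming we hold per-cause sets of depersonalised user IDs — the IDs and the threshold are invented for illustration:

```python
# Hypothetical sets of user IDs, gathered from topic votes.
cares_about_product_line = {1, 2, 3, 4, 5, 6, 7, 8}
cares_about_disabilities = {2, 3, 5, 7, 9, 11}

# Users who care about both the product line and the cause.
overlap = cares_about_product_line & cares_about_disabilities
proportion = len(overlap) / len(cares_about_product_line)

# If a large share of the product line's audience also cares about
# the cause, flag it as a candidate CSR focus for the brand.
THRESHOLD = 0.3  # illustrative cutoff
if proportion >= THRESHOLD:
    print(f"{proportion:.0%} of the audience also cares: candidate CSR focus")
```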

Aiding organisations with social causes to connect and work with the general public

Our constructive communication mechanisms help get the right kind of information from and to individuals, so they may be given an easy way of contributing to causes they have already stated they care about. In three steps we are able to acquire an individual's consent to be approached by relevant organisations, and provide those organisations with the relevant information: how much the individual cares about which cause, and what they are willing to contribute. We believe this type of engagement is a new way of engaging the general public with causes, in line with current online culture.

An example would be reading an article about climate change. A media partner enables voting on the Environment within the article, and within two follow-up questions we obtain consent to pass on to a relevant NPO that this individual is willing to give their spare time to the cause. The organisation immediately receives the information, promptly contacts the individual and connects them with a representative near them. The automated communication has thus taken care of the hard networking and administrative work (gaining the contact, their location and so on) with little effort on either side, and with consent from the start, in a relevant, non-obtrusive and meaningful fashion.
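The three-step flow in this example could be modelled roughly as follows. The class and field names are invented for illustration; this is not the actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentFlow:
    """Tracks the steps: in-article vote -> stated contribution -> consent."""
    cause: str
    vote: Optional[str] = None
    contribution: Optional[str] = None
    consent_to_contact: bool = False

    def record_vote(self, vote: str):
        self.vote = vote

    def record_contribution(self, contribution: str):
        self.contribution = contribution

    def record_consent(self, consent: bool):
        self.consent_to_contact = consent

    def handoff(self):
        """Return the package an NPO would receive, or None without consent."""
        if not self.consent_to_contact:
            return None
        return {"cause": self.cause, "vote": self.vote,
                "willing_to_contribute": self.contribution}

flow = ConsentFlow(cause="Environment")
flow.record_vote("strongly agree")       # vote within the article
flow.record_contribution("spare time")   # follow-up question 1
flow.record_consent(True)                # follow-up question 2
print(flow.handoff())
```

The key design point is that `handoff` returns nothing at all unless consent was explicitly recorded, so the organisation only ever sees data the individual agreed to share.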

Responsible AI — How we gear the algorithms to do the “good” work.

Our algorithms are meant to process vast amounts of sensitive data, which means a lot of responsibility for the outcomes this analysis will produce. Every step — from gathering the data, through storing it, using algorithms to structure and clean it, using algorithms to extract meaningful and accurate information, to the actual delivery of that information — is absolutely soaked with potential to do harm or to promote social values.

For this reason we have a dedicated team member, working with all internal teams, clients, and educational and non-profit sector partners, who ensures that the design of our technology and the basis of our for-profit and non-profit partnerships comply with our values and goals, and are all in service of those values and goals. I do not distinguish here between social values and goals and the other goals and values of Consent IO BV: the company was established with the sole purpose of realising the net.vote project, and the core of the project is developing technology that democratises access to public opinion and makes it visible and impactful.

Empowering individuals

  • Free access for any individual to express an opinion on any entity (technical restrictions: access to a device and the internet).
  • Free topic generation — the public participates in creating the topics that matter to them enough to create them and express an opinion on them.
  • Serving a constant, automated communication channel between individuals and organisations at as many points of contact as possible.
  • Making individual voices matter — using data to produce meaningful reports that back up individual opinions by clustering them into relevant demographic groups, giving the management of organisations a clear way to act on what groups of individuals ask for and agree on; serving individual feedback that can also be examined at a granular, per-response level; and running a separate crisis-management system that calls for individual and timely attention when a very poor opinion is expressed.
  • Keeping the individual secure
  • Bringing the power of AI to individuals — what companies now use to know people, we wish to bring to every person using the system, so that they reap the benefits of self-discovery and a better understanding of how data is used.

see more at https://consent.io and try it at https://net.vote

--


Rado Raykov
dialogue through data

founder of holler.live, building software to democratize the power of public opinion.