Something curious is happening in Finland. While much of the global debate around artificial intelligence (A.I.) has focused on unaccountable, proprietary systems that could control our lives, the Finnish government has instead decided to embrace the technology's opportunities by rolling out a nationwide educational campaign.

The plan was conceived in 2017, shortly after Finland’s A.I. strategy was announced: the government wants to rebuild the country’s economy around the high-end opportunities of artificial intelligence, and has launched a national program to train 1 percent of the population — that’s 55,000 people — in the basics of A.I. “We’ll never have so much money that we will be the leader of artificial intelligence,” said economy minister Mika Lintilä at the launch. “But how we use it — that’s something different.”

Artificial intelligence has many positive applications: it can be trained to identify cancerous cells in biopsy screenings, predict weather patterns that help farmers increase their crop yields, or improve traffic efficiency.

But some believe that A.I. expertise is currently too concentrated in the hands of just a few companies with opaque business models, meaning resources are being diverted away from projects that could be more socially, rather than commercially, beneficial. Finland’s approach of making A.I. accessible and understandable to its citizens is part of a broader movement of people who want to democratize the technology, putting utility and opportunity ahead of profit.

This shift toward “democratic A.I.” has three main principles: that all of society will be impacted by A.I., and its creators therefore have a responsibility to build open, fair, and explainable A.I. services; that A.I. should be used for social benefit and not just for private profit; and that because A.I. learns from vast quantities of data, the citizens who create that data — about their shopping habits, health records, or transport needs — have a right to a say in, and an understanding of, how it is used.

A growing movement across industry and academia believes that A.I. needs to be treated like any other “public awareness” program — just like the scheme rolled out in Finland.

Data is critical to the development of artificial intelligence. The most common form of A.I. is machine learning, in which computers build models from large volumes of data. Those models are only as reliable as the data they are built on, and concern has been building about tools for recruitment or loan applications being trained on biased data sets. One data set, Labeled Faces in the Wild, used to train facial recognition systems, proved to be unrepresentative of both women and people of color — of its more than 13,000 images, 83 percent were of white people and nearly 78 percent were of men. The discrimination had been baked into the data. And because systems often rely on multiple data sets, it can be hard to identify which decision points relied on bad data.
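A skew like the one in Labeled Faces in the Wild can be caught with a very simple audit before any model is trained. As a sketch (the function name and toy sample below are illustrative, not from any particular library), counting the share of each label value in a data set makes imbalances visible immediately:

```python
from collections import Counter

def audit_labels(labels):
    """Return each label value's share of the data set, so that
    demographic skews show up before a model is trained on them."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Toy sample mirroring the skew reported for Labeled Faces in the Wild
sample = ["male"] * 78 + ["female"] * 22
shares = audit_labels(sample)  # {"male": 0.78, "female": 0.22}
```

A check this basic obviously does not fix bias, but running it on every training set makes the kind of imbalance described above hard to miss.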

A handful of new data science tools are beginning to add features that explain their decision-making process to users, such as TensorBoard’s What-If Tool, which provides a no-code way to ask lots of what-if questions of the model, and SHAP (SHapley Additive exPlanations), which uses game theory to explain the output of machine learning models.
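The game-theory idea behind SHAP is the Shapley value: a feature's attribution is its average marginal contribution across all orderings in which features could be added. The sketch below is a toy, exact computation in plain Python — not the SHAP library itself, which uses efficient approximations — with a hypothetical additive "model" standing in for a real one:

```python
from itertools import permutations

def shapley_values(features, value_fn):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings (feasible only for a few features)."""
    contrib = {f: 0.0 for f in features}
    orderings = list(permutations(features))
    for order in orderings:
        coalition = set()
        for f in order:
            before = value_fn(coalition)
            coalition = coalition | {f}
            contrib[f] += value_fn(coalition) - before
    return {f: c / len(orderings) for f, c in contrib.items()}

# Hypothetical additive model: the prediction is the sum of active feature effects
effects = {"income": 2.0, "age": 1.0, "zip": 0.5}

def model_value(coalition):
    return sum(effects[f] for f in coalition)

attributions = shapley_values(list(effects), model_value)
# For an additive model, each feature's attribution equals its own effect
```

For an additive model the attributions simply recover each feature's effect; the value of the method is that the same averaging gives principled attributions for models where the features interact.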

Understanding how data is collected and used to make decisions is therefore a crucial first step in democratizing A.I. and building the necessary trust among the people it impacts the most.

Alejandro Saucedo, chief scientist at the Institute for Ethical A.I. and Machine Learning, sees the democratization of A.I. developing in four stages, laid out in its strategy for responsible A.I. “Our strategy initially looks at empowering individuals through best practices and applied principles. The second year is all about empowering leaders, and years three and four focus on entire industries. Only after that can we look at regulation and empowering whole nations to practice responsible A.I.”

The General Data Protection Regulation (GDPR), which was introduced across Europe in 2018, aims to give EU citizens control over their personal data by making consent to collect and process it clear and explicit, most commonly through opt-ins. This has gone a long way toward delivering the initial empowerment that Saucedo describes, but there is still plenty of education to be done. For example, most people applying for a loan won’t realize that they have the right for that loan to be assessed by a person rather than a machine.

Outside Europe (and even inside Europe, in the notably suspect case of Cambridge Analytica), the use and abuse of data is harder to control. Big tech companies harvest vast amounts of data from users in order to keep optimizing and personalizing platforms and, from a profit perspective, to satisfy the needs of advertisers.

This excessive control over users’ data has been a key motivation for Tim Berners-Lee, the inventor of the World Wide Web. He is building a data platform called Solid to “unlock the true promise of the web by decentralizing the power that’s currently centralized in the hands of a few.” It aims to give users control over their own data, so they can decide how much to share and with whom.

One of the first applications to be built on Solid will be a digital assistant to rival Amazon’s A.I.-powered Alexa. Rather than giving data to Amazon with every voice command that is made, the Solid system, called Charlie, will ensure that users own all their data. That will mean they can trust it with their health records, information about their children, or their financial data. It’s a paradigm shift away from the current models being promoted by Amazon, Google, and the rest.

Smaller companies also deserve the opportunity to develop compelling new uses for A.I., and some are doing so in a way that shares knowledge and tools. Jack Hampson is the CEO of Skim, a company that uses A.I. to extract structured data from websites.

“At our innovation days, the company, as a whole, works for a day on a problem that has nothing to do with our core business. We look at new technologies and think about their practical applications and the problems they could solve,” he says. “We ensure that a minimum percentage of the projects we work on each year are for social good, and open source some of our code and offer free usage of our API to students, so that others can use our A.I. technology for free and build their own applications.”

At Satalia, a London-based A.I. consultancy, CEO Daniel Hulme says engineers are encouraged to be creative and develop projects within a strong ethical framework that will produce “technologies for good.”

“We take the core learnings from the solutions we have developed for our clients and make them available for free as blueprints and tools for people to use to make their businesses and lives better,” he explains.

New A.I. tools like H2O, Kortical, and Seldon are all making it easier for engineers to exploit their data and use it for social good as well as for helping businesses innovate. “More open-data initiatives should be encouraged,” says Hampson. “The U.K. government does a great job with its own data, but we need more encouragement within industry verticals to broker data exchanges, and create open data sets. We will need to drastically change the way we share our data. It’s fundamental to A.I. development.”

The global A.I. market size is forecast to reach $169 billion in 2025, up from $4 billion in 2016. The Pentagon is expecting to spend $2 billion on next-generation A.I., and the U.K. government has pledged £1 billion to invest in similar projects.

The democratization of A.I. will not be easy, especially as it fights against aggressive expansion by immensely wealthy and profit-driven technology companies. But beyond profit and growth targets, A.I.’s real value will be in its ability to provide a positive social impact and give citizens the freedom to benefit from their own data. To do that, it has to have the trust and involvement of the people, and that is the responsibility, today, of A.I.’s businesses, creators, and educators.

As Tim Berners-Lee says: “I believe we’ve reached a critical tipping point, and that powerful change for the better is possible — and necessary.”