AI & Business — How France bridges the gap

Charlotte Gerrish
15 min read · Apr 15, 2020


What is AI?

Artificial intelligence is a key talking point across the globe. Something of an enigma in itself, it is associated by many individuals and businesses with exponential change, and is often referred to as the next stage of human evolution. Amid this global race to unlock machine intelligence, France is on a mission to establish itself as a leader.

As part of a national strategy, France has made significant advances towards nurturing home-grown talent and tapping into its strong tradition of academic research. This includes hosting the AI France Summit, now in its second year. Gerrish Legal had the honour of attending the 2020 Summit in Paris, hosted by TECH IN France and the Ministry of Economy and Finance. In this article, we analyse AI in France, including the country’s infrastructure for innovation, its global competition, and the legal and financial hurdles that European players face. The next stage of the battle is clear: bridging the gap between the academic world and commercial markets.

There are many technical definitions of Artificial Intelligence (AI), but at its core, it is the replication of human intelligence through machines. This concept can be broken down into three words: data, algorithms and results.
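To make that three-word breakdown concrete, here is a minimal, hypothetical sketch in Python. It uses scikit-learn and its bundled iris data set purely as an illustrative choice (neither is mentioned in this article): data goes in, an algorithm is fitted to it, and results come out as predictions on unseen examples.

```python
# A minimal sketch of the "data, algorithms, results" idea,
# using scikit-learn as an illustrative (assumed) toolkit.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Data: observations the machine learns from.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Algorithm: a statistical model fitted to that data.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Results: predictions evaluated on data the model has never seen.
print("Accuracy on unseen data:", model.score(X_test, y_test))
```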

The Vice President of the European Commission for Values and Transparency, Věra Jourová, stated that “France plays a leading role when it comes to AI; with vibrant industries and a strong political drive” and that “AI is not just about tech, it is about human advancement”. Indeed, the transformational power of AI systems has been championed in France, with over 650 home-grown start-ups developing such technology. Macron’s government has also played an active role in shaping the European Commission’s new digital blueprint on AI, through a collaboration with Germany and the ‘AI for Humanity’ plan led by Cédric Villani, as well as by launching ecosystems such as La French Tech and le Village de l’IA, the first European centre of its kind.

The Reality of AI

With so much emphasis being placed on AI, at both a political and an economic level, one might think that the answer to commercial success lies within such technology. However, this myth has been partially dispelled. A study conducted by EY on behalf of the OPIIEC found that AI is not always the optimal solution for businesses, and a PwC study predicted that fewer businesses would take up AI in 2020 than in the previous year. It is therefore not always safe to believe the fantasy around AI: in order to progress, we need to get past the hype and focus on reality.

Most companies need to invest in digitalisation, if they have not done so already, but that does not necessarily mean investing in and deploying AI. Urs Bergmann, Machine Learning Research Lead at Zalando SE in Germany, highlighted that when it comes to AI, it is a matter of strategic investment. For an e-commerce business like Zalando, using AI to create personalisation services for consumers does not work when you have a client base of 13 million. Therefore, you need to consider the scale that you are operating at, the types and quantity of data you process, and the business needs that you are trying to fulfil.

However, with the arrival of the Covid-19 pandemic, perhaps the prediction of reduced AI uptake in 2020 will prove wrong. For example, Kwalys, a French company that has created a system for designing chatbots and callbots without writing a single line of code, is testing a CovBot to pre-diagnose patients via an AI voice assistant.

France’s Global Position

As optimistic as the French are in relation to their strength in the field of AI, it would be foolish not to acknowledge the fierce competition they face from the East and the West.

Realistically, whilst Europe has the potential to lead in this field given the strength of its academic research, it is not yet in a position to compete with the superpowers, namely the US and China. This is mainly because Europe lacks investment at the same level. Also, as seen in China, having a leader in the field to pave the way for further development helps to create a certain technical standard in the market. In this respect, AI in Europe has not reached the ubiquity that is apparent in Asia.

This can also be put down to consumer attitudes. For example, a higher percentage of people in China than in France would be open to AI video recognition at their front doors. This supply and demand regulates the rate at which AI is developed and, more importantly, accelerates the rate at which it is brought to market.

There is also a risk for French (and European) AI developers of over-reliance on the US through the use of cloud and web services provided by the tech giants Google, Microsoft, Amazon and Apple. There are further concerns about allowing such providers to access the data they hold, given the controversial US CLOUD Act enacted in 2018. It is not uncommon for Google and Microsoft to offer partnerships with French AI leaders, exchanging their engineers for the software and data that is deployed in the GAFAM Cloud.

Nevertheless, measures have been and are being developed to create sustainable European infrastructure that avoids reliance on our neighbours across the Atlantic, alongside the creation of a new standard of cloud security in France, the cloud souverain (sovereign cloud).

The Problem of Ontology, Languages and Interoperability

Aside from the lack of funds, there are many barriers to AI advancement in France, and in Europe.

Peter Dröll, Director of Research & Innovation at the European Commission, stated that ontology is a big divider in this area, “in the sense that we need to use the same languages and values across the entirety of Europe to create a durable and valuable use of AI; follow the EU pillars of prosperity, people and planet”. Indeed, such an issue is most pertinent in France, where academics and commercial engineers speak of AI in different ways, both sticking to their termes canoniques (canonical terms).

The situation outside of France presents another issue. France’s Secretary of State for the Digital Economy, Cédric O, highlighted that whilst France wants to champion home-grown AI, the country’s strategy for AI cannot progress without internationalism. English is the de facto business language of the world, and its penetration into scientific journals is even more pronounced. Whilst China leads the way in terms of the quantity of scientific publications, accounting for nearly one-fifth of all science and engineering papers listed in the Scopus database in 2018 and an estimated one-third of all scientific articles, failing to share research in English can be fatal to your work. This is something that France needs to push: in order to have more French-Japanese or French-Korean conversations, everyone must be able to communicate through the medium of the English language.

Looking further, there is the issue of interoperability and omnichannel compatibility at the technology level. Developers must ensure that the languages they use work with other systems and remain transparent for end users, whilst still protecting the commercial interests and intellectual property rights of authors. There will also be a big need in the future to create omnichannel-compatible technology. In a utopia, all of these systems would work together and use the same languages, but this is unrealistic. In reality, we need a more open discussion across all sectors to create compatible systems: the medics need to talk to the engineers, the engineers need to talk to the investors, and the investors need to talk to the academics.

The importance of such multi-disciplinary dialogue has already been acknowledged by large global players, as AI is not a separate market in and of itself; it cuts across every market. A greater diversity of partnerships can be seen in the collaboration between Samsung and Marriott Hotels to create AI-driven room heating, and between EDF, Thales and Total at Paris-Saclay. And whilst you always need to contextualise the algorithm you will be using in order to ensure accuracy, such two-way business conversations will allow France to accelerate innovation in a fierce global market.

Big Data vs. Open Data

If a lack of open dialogue is seen in this area, it is not a surprise. It all comes back to the great problem of open data and sharing data between companies. We are sure you have all heard the saying “data is the new oil” by now, and no one wants to share their oil.

However, data is the crucial element in the machine learning stage of developing AI. We need an environment of data sharing, both from a legal and commercial point of view. Even if there is competition in the creation, marketing and deployment of AI, there is a common interest at the development stage to share and use data. This is an even greater challenge for France, and Europe as a whole, as competitors from the US, China and India already have large banks of data to tap into and are less constrained by privacy and data sharing laws.

Essentially, France needs to industrialise machine learning in order to make trustworthy data sets more readily available and move away from a culture of data silos. François Lhemery, Vice President of Regulatory Affairs at Criteo and former Legal Director at Microsoft, highlighted that Criteo released a data set over five years ago. This data set, consisting of anonymised data that people can use to test their code and machine learning, has even been used by researchers at Berkeley and Google.

As useful as these data sets have been, they only go so far. When developing AI, such data cannot be universal: the data required by a wind turbine company will be extremely different to that of a sports company, as mentioned by Ronan Bars, General Director at Eurodecision. Bars further highlighted that you must use the data that is necessary, not just all data for the sake of it. Take the example of data collected by smart meters, something that has caused quite a stir in France in recent months. Yes, such systems can gather personal data, such as how many people are in the house. But what will they do with that data? Do they need it? And what does the law say? These are all points to think about, and they are backed by scientists, who say that results are not as good when too much data has been used.

Lhemery also highlighted that larger enterprises are gaining the most from big data at the moment and that perhaps we need a free market for data. However, whilst this could be logistically possible, it would be a legal nightmare. Nonetheless, there is enough data in Europe and the recent European Commission White Paper on AI showed that there is a demand for a certain level of fluidity of data.

Good Data vs. Bad Data: How to Avoid Bias

Once French players have access to data, the next big battle is to avoid bias. This is something that both science and business have struggled with, and the root cause lies in the data used.

AI is only as good as the data feeding it, and this is where creating with transparency in mind is important, according to Céline Castets-Renard, Law Professor at the University of Ottawa. For scientists, however, despite their best intentions, it can be difficult to pinpoint exactly when bias is introduced into a system. Removing and adding data incrementally could be an option, but this does not always work due to the nature of machine learning systems, as the sketch below illustrates.
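To illustrate why incremental removal can fall short, here is a hypothetical sketch in Python using entirely synthetic data; none of the column names, figures or libraries come from the conference. It shows that dropping a sensitive attribute from the training data does not necessarily remove a disparity in the model’s outputs when a correlated proxy feature remains.

```python
# Hypothetical illustration only: synthetic "hiring" data with a built-in
# penalty against group 1, plus a proxy feature (here called "postcode")
# that is correlated with the sensitive attribute. All names and values
# are invented for this sketch.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                      # hypothetical sensitive attribute
experience = rng.normal(5, 2, n)
postcode = group * 2 + rng.normal(0, 0.5, n)       # proxy correlated with the group
# Simulated historical outcomes that favour group 0 for the same experience.
hired = (experience + 2 * (group == 0) + rng.normal(0, 1, n) > 6).astype(int)

df = pd.DataFrame({"experience": experience, "group": group,
                   "postcode": postcode, "hired": hired})

def positive_rate_by_group(features):
    """Train on the chosen features and report the predicted hire rate per group."""
    model = LogisticRegression(max_iter=1000).fit(df[features], df["hired"])
    preds = pd.Series(model.predict(df[features]), index=df.index)
    return preds.groupby(df["group"]).mean()

for features in (["experience", "group", "postcode"],   # all data
                 ["experience", "postcode"],            # sensitive column removed
                 ["experience"]):                       # proxy removed as well
    print(features)
    print(positive_rate_by_group(features), "\n")
```

In this toy set-up, the gap in predicted hiring rates between the two groups only closes once the proxy is removed as well, and real proxies are rarely so easy to identify, which is precisely why incremental removal is not a silver bullet.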

Avoiding such bias is crucial as it often leads to unlawful discrimination. An example can be seen in the Amazon recruitment bias case, where it was found that the algorithm used to process candidate files had a tendency to reject female candidates for technical roles and to favour them for HR roles. This was due to the bias that was integrated into the algorithm itself, as the system was trained using the company’s own hiring records from the previous ten years.

Building up a reliable data set is no mean feat. Historical data naturally contains societal bias, and tampering with such data produces the same effect. However, a company such as Talentsoft, which has a client base in the millions, has accumulated a large data pool that is trustworthy, stable and pertinent; as a result, it does not need to open the “Pandora’s box of any old data” to train its HR-related AI software, according to Elodie Champagnat.

However, not all discrimination is unlawful; it is unlawful where it prejudices the data subject. Consequently, not all cases of bias and discrimination in machine learning are bad. In healthcare, discrimination is desired: you want the system to pick up whether you are male or female, old or young, and your ethnic background, in order to determine whether you are more prone to a certain condition.

The Legal Framework

The General Data Protection Regulation (2016/679) (GDPR), heralded as the champion of data protection and the new global reference point, is now seen by some as a roadblock on the path to innovation. This would be an overstatement: so far, the GDPR is the only effective text regulating AI use in Europe. Google Cloud’s Damien Roux keenly highlighted that the GDPR has been a great educational tool on a global scale, and more people are now talking about data protection rights as a result.

However, smart regulation does need to be more agile and it is no secret that the GDPR is not fully compatible with the development of AI. Even the European Commission has admitted that further work is required. Businesses, as mere custodians of data, must ensure that their data processing does not contravene the rights of data subjects.

The privacy and security of data used in the testing and deployment phases is a real struggle for those operating within the remit of the GDPR. Regardless, French companies would be shooting themselves in the foot if they were to ignore its importance. Roux highlighted that, in his experience, the first step for companies that want to embrace AI is usually to obsess over the model: will it work, what will it do, what does it mean in monetary terms? The legal framework is one of the last things they think about, when it should actually be the first thing on their agenda.

For companies creating such tech, the Privacy by Design and by Default principle is one of the best ways to ensure legal compliance right from day one, as was done by Andjaro, a French workforce intelligence platform presented at the conference by CEO and co-founder Quentin Guilluy. This principle is really important to implement, even in a commercial sense: it is easier to iron out difficulties at the start and as you go along than to put all your capital into development only to find out at the last minute that your algorithm is unlawful, and therefore unmarketable.

Other important considerations for French companies are to look at their grounds for processing personal data, whether they are processing sensitive or biometric data, whether their algorithm leads to any automated decision-making that has a substantial effect on data subjects (something that individuals have a right to object to) and, logistically, whether they are able to extract, delete or amend such data if a request is made. This last point is the most difficult aspect of the GDPR when applied to machine learning software and aggregated data.

Furthermore, Sonia Cissé, Legal Counsel at Linklaters, was keen to highlight that whilst the GDPR does not have any provisions specific to AI, it is best to look to your national supervisory authority for guidance on the matter. The new ePrivacy Regulation could have an effect too (and this will most likely be orientated towards individuals’ rights).

There has been a pattern of championing individual data rights over commercial goals. For example, the CNIL (the French data protection authority) has released new lignes directrices (guidelines) on the use of cookies, placing even greater importance on consent, at the expense of website operability.

Another reason that such emphasis is placed on rights in Europe is that the GDPR is a global leader and, in the eyes of the European Commission, the USP of AI in Europe. Jourová stated that “these rights are about European DNA, and not having to go the Chinese way or the US way. Europe shall ensure that it will put its people first, not the States”.

Indeed, amid this global crisis, surveillance has been a key tool in the East for tracking and containing the spread of the Covid-19 virus, but it has sparked an ethical debate over the choice between privacy rights and health: are we not entitled to both? Such discourse from Jourová is no surprise when you consider that she is the woman who negotiated the provisions of the GDPR. Whilst we hope the GDPR will be adapted in the coming years, it would be prudent for French players to ensure full compliance.

The Economic Gap

As mentioned above, Europe generally lacks the level of investment seen in Asia and the US. However, there is a particularly significant gap in funding at the scale-up stage. This is often the most critical stage for such businesses: you cannot just make the tech and then hope that the markets will do the rest of the work.

Furthermore, we are seeing a hedging of innovation risk, in the sense that French (and European) start-ups are less willing to test out more innovative (and thus riskier) ideas for fear of running out of funding, as such riskier ideas often need more trials and testing (and more money) than safer, tried-and-tested ones. But is this a hindrance to true innovation? Can you put a price on it? This is an area where the base aim of business (to make money) and the base aim of science (the pursuit of knowledge through trials) do not align, unless investors are willing to put a lot of capital at risk.

Europe is making advances towards closing this economic gap, including investment in French projects. Twenty billion dollars of funding is expected to be invested by 2025, alongside investment in AI through Horizon Europe and the Digital Europe Programme. But in order to achieve this, the bloc needs to remain united and strong, something that has been put into jeopardy by the departure of the UK, the European leader in AI, from the Union this year.

The Human Cost: Effect on Employment

A fear that accompanies the use of AI throughout the world is the loss of employment it allegedly brings with it. However, François Tillerot, CMO and co-founder of AI Marketplace Orange Cloud for Business, states that we are not seeing the end of work as we know it, but rather a revalorisation du travail (revaluation of work), by harnessing the power to quickly process data and link ecosystems. It is a transformation of functions, and of who carries them out.

One example is the legal industry and LegalTech. Indeed, AI is more than just a tech system; it touches on human skills to make “human” decisions. But the reality is that lawyers will always be required, as we are far from replicating their skills in a machine. AI is helping to free up the time of junior lawyers, as highlighted by Linklaters’ Sonia Cissé, so that they can focus on more intellectually demanding work. This is a welcome change for many stagiaires (trainees) in her Paris office, who in the past could be found conducting due diligence on M&A deals until the early hours of the morning. Indeed, France’s bureaucratic and paper-heavy processes could be among the biggest beneficiaries of AI on a global scale.

Another reason that AI will not mean the obliteration of human workers is that such systems are extremely efficient at the tasks they have been designed for, but not so good when faced with a range of problems that do not align with their algorithms. Humans, on the other hand, can jump from a finance issue, to an HR issue, to unjamming the printers. Furthermore, the soft skills that humans bring to a workplace are difficult for AI to replicate, even though conversational AI systems such as Ivy are now being trained in France to recognise and respond to small talk. And the plethora of new roles that this industry will create represents a huge opportunity for future French employees.

Finding the Talent of Tomorrow

France is fighting with other countries not only for funding and market share, but also for talent. It is not uncommon for talented young French engineers and scientists to be tempted by the riches and freedom offered by tech giants abroad. Indeed, when looking at the restricted mobility and traditional working environments that France is clinging on to, this is no surprise. Coupled with the millennial desire to travel the world and for the flexibility to mould their own careers, France runs a real risk of running out of talent to develop home-grown AI.

Now more than ever, France needs to address the gender imbalance in the tech industry. Women make up only 17% of people in the field, despite there being more women than men in Europe as a whole, and female tech workers have been found to outperform their male peers by 63%, as stated by Jourová. Dröll also stressed the economic benefits of addressing gender equality, and Bruno Sportisse, President of INRIA, drew attention to the fact that ten times more men than women take digital classes in French universities. He stated that this needs to change, by portraying a more positive message to both genders and by ensuring an inclusive atmosphere in classes from an early stage.

Additionally, the talent of tomorrow needs to be more diverse in the sense of having a multi-disciplinary background. Data scientists who also have significant business experience are more efficient at solving problems and will help to bridge the gap between science and business. Furthermore, a real concern with AI is that it will replicate the bias that exists in real life; if France chooses to have diversity at the production level, this can be avoided.

Conclusion

France has established the infrastructure required to accelerate home-grown innovation, at both a political and an economic level. However, faced with a strict regulatory environment and fierce competition from both sides of the globe, it must look to further collaboration with European players and to harnessing the power of tomorrow’s talent. The trust instilled in French and European consumers through adherence to transparency and data protection will be the distinguishing factor for French tech companies.

Author: Komal Shemar — Legal consultant @ Gerrish Legal, holder of an LL.B in Law and French Law from the University of Birmingham and CRFPA (French Bar Preparatory Course) candidate at the Sorbonne Law School, Paris, specializing in data protection, commercial contracts and intellectual property in the tech, e-commerce, recruitment and fashion industries.
