AI Collaborations: key contractual risks and issues

Artificial Intelligence (AI) is the newest and shiniest tool in fintech’s bag of tricks, with many seeing it as the key to unlocking and accelerating innovation in financial services and products, not just in the near term but over the longer run too. And in a digital world, where banks and other firms struggle to maintain personal connections with their customers, AI has the potential to help re-establish those connections through the personalisation of products and services.

by Tim Wright, Partner at Pillsbury Winthrop Shaw Pittman LLP

Emerging Use Cases
A wide range of fintech AI use cases is already emerging, such as automated customer support, customer on-boarding, anti-money laundering, fraud prevention and detection, claims management, predictive analytics and wealth management. Chatbots, acting as virtual financial assistants, can help customers in many areas, such as checking bank account details, flagging upcoming payments and creating financial budgets. Depending on the application, benefits can include reduced time-to-market, better risk decision making, reduced processing time, quicker approval processes, error reduction, improved security and increased customer retention.

Unlocking AI’s potential
Looking further ahead, AI is expected to play an increasingly important role in the financial services arena. The benefits of tackling operational pain points, reducing manual processes and further driving efficiencies suggest that AI, particularly machine learning (and its first cousin, deep learning), has the potential to be a game changer for the banking and insurance sectors, promising significant operational and strategic gains through lower costs and increased productivity.

Data is our biggest asset
The old adage that “people are our greatest asset” is changing: fintech AI collaborations are built on data. To really drive AI innovation, financial firms will need to work even more closely with the fintech community to unlock the value of the huge volumes of data to which firms have access. Given the enormous legal, reputational and other risks facing those involved in a data breach or cyber-attack, however, these collaborations need to be carefully considered and supported by robust, binding agreements which properly assign responsibility and liability should something go wrong.

Fintech Collaborations
Each collaboration must uniquely satisfy the specific needs of its participants, whilst recognising the wider regulated environment, especially where it requires the sharing and processing of personal data, given the huge potential fines under the GDPR. Beyond the fines, the GDPR ushered in a new EU privacy regime with a number of implications for AI adoption in financial services. For example, Recital 71 contemplates a right to obtain “an explanation of the decision reached after [algorithmic] assessment,” and Article 22 provides that a data subject should not be subject to a decision with legal or similarly significant consequences based solely on automated processing. Data Protection Officers should be brought in early to ensure that data protection impact assessments are completed and other tenets of the GDPR, such as privacy by design, are complied with. Collaboration agreements will need to include GDPR-compliance provisions, especially where one collaborator acts as the data processor of another (the data controller).
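
To make the Article 22 point concrete, here is a minimal, purely hypothetical sketch of how a collaborator might gate solely automated credit decisions for human review. The field names, the `requires_human_review` helper and the consent-only exception are illustrative assumptions, not a prescribed GDPR implementation:

```python
# Hypothetical sketch of an Article 22 "human in the loop" gate.
# Field names and logic are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class CreditDecision:
    applicant_id: str
    approved: bool
    solely_automated: bool           # no meaningful human involvement
    legal_or_similar_effect: bool    # e.g. finance refused
    explicit_consent: bool           # one of the Article 22(2) exceptions

def requires_human_review(d: CreditDecision) -> bool:
    """Escalate decisions caught by Article 22(1) unless an
    Article 22(2) exception (here, explicit consent) applies."""
    return (d.solely_automated
            and d.legal_or_similar_effect
            and not d.explicit_consent)

decision = CreditDecision("app-123", approved=False,
                          solely_automated=True,
                          legal_or_similar_effect=True,
                          explicit_consent=False)
if requires_human_review(decision):
    print(f"{decision.applicant_id}: escalate to human review")
```

In practice the Article 22(2) exceptions also cover decisions necessary for a contract or authorised by law, so any real gate of this kind would need to be designed with the Data Protection Officer.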

Collaborations between fintechs and financial services firms take many forms, from proof of concept to full-blown collaboration, and can often lead to acquisition, investment or joint venture. The term “partnership” is much used but should be treated with care, as describing a collaboration in this way can have unwanted tax implications. These collaborative models need careful consideration, and the negotiating and contracting process should not be rushed.

Understanding the Issues
A number of legal and compliance issues will need to be considered in a heavily regulated environment like financial services, such as consumer protection and anti-discrimination laws, where problems of prejudice caused by data bias, or unwittingly introduced by an algorithm’s developers, may arise. Imagine the compliance and litigation nightmare if an AI fintech app were found to have used historical data to disproportionately deny finance to ethnic minorities. Another issue is the lack of transparency in decision making (aka black box syndrome) with deep-learning algorithms: the many layers of artificial neurons that weigh the utility of decision-making paths make it very difficult to understand the rationale for a particular decision.
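
To illustrate the data-bias point, outcomes can be monitored by group before an app ever reaches consumers. The sketch below is a hypothetical illustration rather than anything mandated by the laws mentioned above: it applies the well-known “four-fifths” rule of thumb, comparing each group’s approval rate to that of the most-approved group, with the sample data and 0.8 threshold chosen purely for demonstration:

```python
# Illustrative sketch: screening lending outcomes against the
# "four-fifths" disparate impact rule of thumb. Group labels,
# sample data and the 0.8 threshold are assumptions.
from collections import defaultdict

def disparate_impact(decisions):
    """decisions: iterable of (group, approved) pairs.
    Returns each group's approval rate divided by the highest
    group approval rate; ratios below ~0.8 warrant investigation."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
for group, ratio in disparate_impact(sample).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} [{flag}]")
```

A low ratio does not by itself prove unlawful discrimination, but it is exactly the kind of early warning a collaboration agreement could require the parties to monitor, report and investigate.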

Other considerations for any collaboration agreement include defining success and required outcomes; protecting confidentiality; testing and acceptance; ownership of newly developed intellectual property; sharing of data and validating its quality; commercialising the outputs; and apportioning risk and liability.

It is not always easy, at the beginning of a collaboration, to set boundaries and define responsibilities, but care needs to be taken to do so, especially where multiple parties are involved. Collaboration agreements should clearly allocate responsibility amongst the various suppliers, operators and users of AI and machine learning systems: for example, a vendor’s financial product may be based on data input devices or algorithms developed by another party entirely. If something goes awry, which party is responsible, and for what? To what extent can one party rely on another party’s expert systems, and in what scenarios?

Agreements should also contemplate change. Although many of the AI technologies being brought to market today are not new, current deployments (many of which depend on access to huge amounts of data) are proceeding at a scale and speed not seen previously, and regulators are, to an extent, playing catch-up. In the next few years we are therefore likely to see more regulation introduced, some of which may affect the cost of delivering AI services, or the manner in which they can be delivered and collaborations carried out. Where applicable, collaboration agreements should set out how such regulatory change will be handled and how any related implementation costs will be borne.

Trust is the Key
Ultimately, the success of any fintech AI application will depend on how much consumers trust the app with their money. Ensuring that consumers’ data is properly protected and secured, and only processed in accordance with the GDPR and other applicable regulations, will help to earn that trust, and a robust collaboration agreement between all relevant parties is essential to achieving it.

