Legal and Ethical Considerations of AI Adoption

Heidi Craven
Radical + Logic
Jan 17, 2018

AI Summit review: how every industry should prepare for advances in AI software before adoption.

Artificial Intelligence (AI) and startup culture are changing the shape of business and creating a general need to innovate across all industries. As more businesses incorporate AI into their business strategies, uncertainties around future liabilities are moving to the forefront of many IP lawyers' discussions. During the AI Summit led by Norton Rose Fulbright, we had the privilege of hearing from experts in both the legal and tech fields, who helped predict the legal challenges of adopting AI so that businesses can manage risks effectively. For a comprehensive guide to managing the ethics and risks associated with AI, visit http://www.aitech.law/.

What is AI?

We all have an idea of what AI is, but few of us can comprehensively explain what AI can actually do right now and how we are using it in our daily lives. Graham Taylor from the University of Guelph provided a succinct summary of AI as it is now and where it is heading in the near future. Only three types of AI are in common use today: visual processing, audio processing, and natural language processing. These programs combine unstructured, high-dimensional data with computation (measured in floating point operations per second, or FLOPS). As data capacity and FLOPS increase, the complexity of the artificial brain increases proportionally. However, we are already starting to see saturation in both data and FLOPS, so the rate of growth is beginning to plateau.

We are also beginning to see a shift from purely predictive models with low-dimensional decision-making outputs to more complicated creative outputs such as computer-generated poetry, music, and chatbots. With predictive models, you can train the AI to correct its own errors, but that is not possible with creative outputs, since there is no single "right answer" to compute when many are valid. Right now, many AI programs use "learning to learn" models, but the next phase is learning from exploration, by placing robots out in the real world to discover insights autonomously.

In Japan, an AI program used 20 million oncology records to diagnose a rare type of leukemia and identified life-saving treatment faster than humans could have done with genetic testing.

Ethical AI

Ethics cannot be an afterthought when incorporating AI into your business strategies. If you are developing an AI program, ensure that diverse stakeholders such as social scientists, lawyers, and domain experts are at the table before the coding begins. In general, the ethics of AI is a balance between:

  • Rights/values,
  • Transparency, and
  • Accountability

On the rights/values side of AI ethics, you must be aware of, and account for, inherent biases in the data that trains the AI. For example, many predictive policing apps and court sentencing tools use existing crime data, which already reflects racial bias. To mitigate these risks, ensure that stakeholders are diverse enough to offer different viewpoints, so that the system serves the entire population.
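Where the data allows it, this kind of bias can at least be measured. As a minimal sketch (the model outputs, group labels, and data below are all illustrative assumptions), one common check is demographic parity: compare the rate of positive predictions across demographic groups.

```python
# Hypothetical bias audit: compare a model's positive-prediction rate
# across demographic groups (demographic parity). The groups, labels,
# and toy predictions are illustrative assumptions, not a real system.
from collections import defaultdict

def positive_rates(predictions):
    """predictions: list of (group, predicted_label) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for group, label in predictions:
        counts[group][0] += int(label == 1)
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def parity_gap(predictions):
    """Largest difference in positive-prediction rate between groups."""
    rates = positive_rates(predictions)
    return max(rates.values()) - min(rates.values())

# Toy example: a model that flags group "B" far more often than group "A".
preds = [("A", 1), ("A", 0), ("A", 0), ("A", 0),
         ("B", 1), ("B", 1), ("B", 1), ("B", 0)]
print(positive_rates(preds))  # {'A': 0.25, 'B': 0.75}
print(parity_gap(preds))      # 0.5
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that diverse stakeholders should review before deployment.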

Transparency and accountability are closely tied together, as you can't have one without the other. One of the biggest concerns with proprietary AI algorithms is that businesses will not share their algorithms in a "white box" capacity, since the algorithm is a trade secret. However, if a user accepts a decision made by an AI program without considering how that decision came to be, the user will be liable for it, since they are ultimately accountable for that decision. Therefore, if a decision was made from biased or inherently discriminatory data, the person who used the data will be at fault. That is why it is SO important for businesses incorporating AI into their strategies to be armed with as much information as possible about how the AI works. This can be extremely challenging, since it's difficult for humans to understand how the program came up with a solution.
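One pragmatic way to get partial "white box" insight without the vendor's source code is to probe the model from the outside. The sketch below uses permutation-style feature importance on a toy stand-in model (`score_model`, the feature names, and the data are all hypothetical): if shuffling a feature's values does not change accuracy, the model likely ignores that feature.

```python
# Illustrative external probe of an opaque model: permutation-style
# feature importance. score_model, the features, and the rows are toy
# assumptions standing in for a vendor's black-box system.
import random

def score_model(row):
    # Hypothetical opaque model: only income affects the output.
    return 1 if row["income"] > 50 else 0

def accuracy(rows, labels):
    return sum(score_model(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature, seed=0):
    """Drop in accuracy when one feature's values are shuffled."""
    rng = random.Random(seed)
    base = accuracy(rows, labels)
    shuffled = [r[feature] for r in rows]
    rng.shuffle(shuffled)
    permuted = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled)]
    return base - accuracy(permuted, labels)

rows = [{"income": 30, "zip": 1}, {"income": 80, "zip": 2},
        {"income": 20, "zip": 3}, {"income": 90, "zip": 4}]
labels = [0, 1, 0, 1]
print(permutation_importance(rows, labels, "zip"))  # 0.0 — zip is ignored
print(permutation_importance(rows, labels, "income"))
```

A probe like this cannot reveal *why* a model decides as it does, but it can flag which inputs drive decisions, which is often enough to start the accountability conversation with a vendor.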

Review the Ethics Toolkit by Norton Rose Fulbright here: http://www.aitech.law/publications/ethics-risk-toolkit to learn how to unmask, process, validate, and refine when your business chooses to use or develop AI.

Who really owns the IP?

Often, companies think that they own the IP around their analytics algorithms and the outputs they produce, but it can be much trickier than you may think. The main question to ask is whether the vendor owns the data being used to produce the outputs. This is THE question to ask before embarking on a massive AI project. Be extra cautious about building algorithms when you do not own the data: the business may end up owing revenue to the company that supplied the data, and all the work may result in limited or even negative ROI.

Sometimes, non-humans can own the copyright. In Amsterdam, the advertising agency J. Walter Thompson developed unique software that used a facial-recognition algorithm to analyze all 346 of Rembrandt's known paintings. The program was connected to a 3D printer, which painted "The Next Rembrandt". You might expect Rembrandt's heirs to own the copyright in the painting, since it was based on his work. In this case, however, AI was part of the artistic process for creating the piece, so the AI itself actually owns the IP.

Key Takeaways

When deciding to use or build AI technology, always do your due diligence in identifying the risks surrounding its adoption. Boards must take time to understand the AI they're using so that they can defend themselves against potential future liabilities. During your review, pay special attention to the rights/values of segments that may be impacted, try to provide or obtain "white box" explanations of the algorithms and data used by the program, and know where accountability lies depending on the type of software, the outputs, and how it will be used. When in doubt, consult your legal counsel for advice.

Although the laws around big data and AI have yet to be finalized around the globe, what we are debating now is similar to the debates when we first started using the internet. We are at a particularly new and fast-paced moment, with AI poised to disrupt every market, but eventually the rate of growth will saturate and the markets will settle. AI will become a commodity, and rules and regulations will be in place to protect you and your assets. Humans must be at the center of program design, as well as the usage of its features.
