Artificial Intelligence can now be an Inventor: Where to from Here?

Monash DeepNeuron
12 min read · Mar 2, 2022

Written by Amy Liberman and Julian Siedle

On 30 July 2021, the Federal Court of Australia decided that AI systems can be inventors. In the world-first determination in Thaler v Commissioner of Patents,¹ the Honourable Justice Beach found that an AI system can be named as the inventor on a patent application under Australian patent law.

The decision has been appealed to the Full Court of the Federal Court, which may decide to overturn it. For now, however, the decision is binding in Australia. Read on for an explanation of what a patent is and an overview of the decision. We also chat with Dr Ryan Abbott, one of the patent attorneys behind the Artificial Inventor Project, to discuss ‘Where to From Here?’.

What is a Patent?

Before we get into the Thaler case, let’s discuss what a patent is and what it is used for. A patent is a ‘legally enforceable right for a device, substance, method or process’. For an application to be successful, the invention must be ‘new, useful and inventive or innovative’.²

The key benefit of a patent is an exclusive commercial right to the invention. This gives the patent owner the right to stop others from exploiting their invention without their permission. Patents encourage innovation, research and development in Australia.
Find out more about patents here.

Background

Now that we know what a patent is, we can delve into the facts of the case. Dr Thaler filed a patent application with IP Australia for a plastic food container (based on fractal geometry) and a flashing light. However, instead of naming a human as the inventor, he named his artificial intelligence system, known as DABUS (Device for the Autonomous Bootstrapping of Unified Sentience), as the inventor. The Deputy Commissioner of Patents initially rejected the application on the basis that s 15(1) of the Patents Act 1990 (Cth)³ precluded an AI from being an inventor.

The Federal Court of Australia was therefore faced with several questions:

  • Can AI systems be inventors in a patent application?
  • Does the Patents Act require a human inventor?

Similar applications were also filed by Dr Thaler worldwide as part of the Artificial Inventor Project. These applications were rejected in other jurisdictions, including the UK, Europe and the US. Dr Ryan Abbott, whom we chat to about the Thaler case, is one of the patent attorneys leading these applications.

Find out more about the Artificial Inventor Project and DABUS here: https://artificialinventor.com/


The Decision

It was ultimately decided that an AI system can be an inventor under the Patents Act. We will now run through the reasoning behind this decision.

Nature of an inventor

The critical piece of legislation in dispute is the Patents Act, which requires an inventor for a valid patent.⁴ There are many people under the Act (such as employers and heirs) who may be granted a patent, but these all derive their title from an original inventor. According to Justice Beach, the word inventor should be treated as an ‘agent noun’, in that it simply refers to a thing or person which invents, rather than the human figures commonly seen as inventors.⁵ Although this may be contrary to our usual image of what an inventor is, Justice Beach emphasised that the term should be read flexibly. In his Honour’s view, nothing in the Act specifically excludes an artificial inventor, and so such inventors may be permitted where this serves the purposes of the Act.

Purpose of the Act

Another section of great importance is s 2A, which states that:

The object of this Act is to provide a patent system in Australia that promotes economic wellbeing through technological innovation and the transfer and dissemination of technology. In doing so, the patent system balances over time the interests of producers, owners and users of technology and the public.

While a relatively new inclusion, this object clause is crucial to understanding how the entire Act should be interpreted by judges. Thanks to this clause, Justice Beach could consider economic wellbeing as a driving factor in determining which interpretation should be preferred. Two main arguments were persuasive in deciding that recognising non-human inventors would promote economic wellbeing:

  1. Inventions by artificial intelligence bring significant benefits, so their development should be encouraged.⁶
  2. If patentability of an invention can be denied due to an artificial inventor, the owners of AI systems may decide to protect them by keeping the inventions confidential, so there will be no public disclosure of the inventions.⁶

These two items are primary rationales for recognising inventions generally. Justice Beach stated that courts should ‘recognise the reality’ of a machine invention being analogous to a human invention, such that these considerations will apply in the same way. It was therefore held that the objects of the Act are served by not arbitrarily distinguishing between kinds of inventors.


Overall Reasons

To summarise the above, Justice Beach accepted that a non-human can be an inventor for the following reasons:

  • First, the owner of a patent and the inventor of a patent are two distinct concepts that should not be conflated. While only a human can be the owner of a patent (as an AI system ‘does not have legal personality and cannot own property’ — Artificial Inventor Project), it does not follow that only a human can be the inventor.
  • Second, on the Commissioner’s interpretation, where there is a patentable invention but no human inventor, no one could apply for a patent. This is the ‘antithesis’ of the objects of the Act — the promotion of economic wellbeing through technological innovation and the transfer and dissemination of technology.
  • Third, it is important not to limit or qualify a statutory definition unless clearly required by its terms or its context. The purpose of the Act (to promote economic wellbeing) is ‘at odds with the unreality of persisting with the notion that artificial intelligence systems cannot be inventors’.
  • Fourth, the Commissioner should not focus on dictionary definitions of ‘inventor’, which Beach J described as ‘old millennium usages of the word’.

“But what this all indicates is that no narrow view should be taken as to the concept of “inventor”. And to do so would inhibit innovation not just in the field of computer science but all other scientific fields which may benefit from the output of an artificial intelligence system”.⁷

A diagram shows the fractal container invented by DABUS. Source: https://www.abc.net.au/news/2021-08-01/historic-decision-allows-ai-to-be-recognised-as-an-inventor/100339264

Where to from Here?

Now that we’re up to speed with all things patent law, AI and Thaler, check out our interview with Dr Ryan Abbott, where we ask the important questions: In what circumstances is AI more than ‘a tool’? What does this mean for inventors using AI? How are the patent applications going in other jurisdictions? And what are the implications for investment and innovation in AI?

Opinions on the Case

As the development of technology continues to move us into the unknown, we must continue to think about the effects it will have on our society, laws, and eventually ourselves. Below are two separate opinions from the authors of this blog.

Now, what do our members at Monash DeepNeuron’s Law & Ethics Committee think? Julian will argue that a position of “AI Neutrality”, in which AI and human inventors are treated equally, is conceptually appealing but distracts from the ultimate consideration in interpreting the Patents Act: economic wellbeing. Amy will discuss the benefits of equal treatment for humans and AI.


Julian: Should Artificial Intelligence Systems Really be Treated the Same as Human Inventors?

Although Justice Beach states that the algorithms should not be anthropomorphised, this approach necessitates a complex distinction between different forms of autonomy, to determine if a machine has truly “invented” something. This means that as programs become more sophisticated, from brute-force computational tools to autonomous inventors, there is eventually a cut-off point at which the machine has invented independently of its human operator. There are appealing philosophical rationales for this, but the Act’s purpose should remain the guiding force behind interpretation.

From an economic perspective, it does not matter whether an invention is produced by a sophisticated AI with no human guidance, by a numerical tool used by a human inventor, or revealed in random words found in a bowl of alphabet soup. What the patent system needs to do is incentivise human beings to facilitate the production of inventions through any means possible (be they autonomous or not), and to disclose these inventions to the public. To this end, there is little sense in distinguishing between the operator of a computer running artificial intelligence software and any other inventor using a tool, no matter how sophisticated that tool may be.

The Concept of AI Neutrality

AI neutrality is the idea that the results of artificial and human intelligences should not be treated differently by the law, so that they may compete on an even playing field. In his book The Reasonable Robot, Dr Ryan Abbott gives many examples of how regulation can bias industries for or against the adoption of AI, preventing it from being used efficiently. Employment taxes may incentivise companies to replace human workers with artificial intelligence, even where it would not otherwise be reasonable to do so. Product liability laws penalise companies producing driverless cars, even where those cars may be safer than human drivers.

While not discussed directly in this case, the principle is generally a guiding force behind policy and decisions regarding artificial intelligence. Dr Abbott has argued that failing to recognise an AI as an inventor unreasonably discourages the use of such systems, while creating unnecessary confusion. I respectfully disagree with this conclusion. In my opinion, the correct way forward is to recognise AI systems as mere tools, which are used by human inventors. While there may be circumstances in which an AI is so sophisticated that a human has done little work (making this a potential legal fiction), adopting this analysis will still allow such inventions to be patented, and will reward the use and development of AI.

Incentivising Development, Use and Disclosure

The use of the patent system to recognise and protect the moral rights of inventors may seem paramount at first. There are strong arguments that recognising the rightful inventor of something (whether that inventor is a human or a computer program) is something we ought to strive towards. However, the patent system has not always been this way. For example, there was once a time when someone could obtain a patent merely by being the first person to import an invention into a new country. Obviously this is no longer appropriate in our globalised age, but it illustrates an important point: patents are there to incentivise economic growth by rewarding people for introducing new technology. Any vindication of a “true inventor” is merely a side effect of this.

In the current judgment, Justice Beach pointed out that a human or corporation is still required to file for a patent, or to claim ownership of it. This means that, in theory, the ability to apply for a patent would remain the same if AI were universally categorised as a tool. There are, however, many unexplored issues that will follow from AI itself being recognised as an inventor. For example, if a user operating AI software under a licence uncovers a new invention, the invention may still vest in the owner of the AI. Yet the user is the one capable of finding and applying the invention. The owner or author of the AI has already had ample opportunity to profit from the licensing and use of the software, so why should they also be entitled to creations recognised by others? Recognising AI as an inventor will also result in lengthy and unnecessary litigation to decide exactly how much autonomy is required for an AI to qualify as an inventor, and will unjustly favour the owners of AI technology over the people who use it to benefit society.

Assessment of “Obviousness”

Another implication which has been discussed by many commentators, including Dr Abbott, is the impact AI will have on how “obviousness” is assessed. A requirement for a patent is that the invention is not obvious to a “person skilled in the relevant art”. It has been argued that as AI invention becomes more commonplace, such a skilled person will be a human utilising AI technologies, and the test will eventually exclude inventions which would be obvious to an AI itself. Although full of legal fictions and inconsistencies, there is a well-developed body of law with its own tests to identify what would be obvious to a human practitioner equipped with state-of-the-art knowledge and tools. While the use of AI poses some challenges, these tests can still be adapted to new situations, including a person equipped with full use of AI. However, if one looks at how “obvious” an invention would appear to a machine, any arguments about intentionality or creativity break down completely. Describing intentionality, and a sense of what a computer “could have done”, is a growing field of philosophy, pioneered by writers such as Daniel Dennett, but courts are not yet equipped to perform this analysis.

Conclusion

Regardless of how autonomously they act, AI systems (for now) are still tools of their human users and should be treated as such. Courts should not be concerned with inquiries around whether the human has contributed to an invention, or merely pressed a button. These questions should be saved for the philosophers and sci-fi writers.

An AI may create hundreds of inventions, but a human being is still required to sift through these, and decide whether they are legitimate and have viable commercial applications. The actual user of the system is the one who has brought these inventions into reality and should be recognised as such. I return now to an absurd but illustrative example: what if a person sees an invention, which just happens to be spelled out in their alphabet soup? This person has not invented anything, but the patent system incentivises them to recognise this invention, bring it into the world, and benefit society in doing so. It would be absurd to think that the inventor should be the soup, and begin pointless inquiries into who the owner of the soup is, or whether an invention is obvious to a bowl of soup. If looking for inventions in alphabet soup was a viable way of generating inventions (unfortunately it is not), then it should be encouraged by the patent system, not discouraged for the mere fact that the patentee has not really created the invention themselves.

Likewise with the use of AI, it is the humans using it that are providing the real benefit. Common practice today is for AI users to file a patent in their own name: this should continue, and the patent office should not interfere. There might be a time in the future where AI will create a new invention, recognise its commercial use, and implement it. When we reach that point, our entire understanding of work and innovation will have changed. Until then, our AI will remain a tool for humans.

Amy: AI Investment and Innovation

Labelling autonomous AI systems such as DABUS as inventors will encourage AI investment and innovation. According to Dr Abbott, ‘People who build AI, own AI and use AI are responsive to patent incentives — so allowing AI generated inventions to receive protection would result in more investment and development in AI which would ultimately lead to more invention.’ The example reported by the ABC clearly highlights a situation where there was a patentable invention but no human inventor, resulting in an inability to patent. In 2019, Siemens ‘was unable to file a patent on a new car suspension system because it was developed by AI. Its human engineers would not list themselves as inventors because they could not claim to have had input in the inventing process and the US has criminal penalties for inaccurately putting the wrong inventor down on a patent application.’⁸ Therefore, by recognising AI as inventors under the Patents Act, investment in AI is encouraged.

If you are interested in the intersection between artificial intelligence, technology and the law and would like to get involved, reach out to us through our email (deepneuron.ai@gmail.com) or on our Facebook.

Footnotes:

  1. Thaler v Commissioner of Patents [2021] FCA 879 (‘Thaler’)
  2. IP Australia
  3. (‘Patents Act’)
  4. s 15(1).
  5. Thaler (n 1) [120].
  6. Ibid [130].
  7. Ibid [56].
  8. https://www.abc.net.au/news/2021-08-01/historic-decision-allows-ai-to-be-recognised-as-an-inventor/100339264


Monash DeepNeuron

We are a student team focused on improving the world with Artificial Intelligence (AI) and High-Performance Computing (HPC). https://www.deepneuron.org/