Tech leaders are searching, and should keep searching, for ways to put artificial intelligence to work in their organisations and so drive value. But although every AI technology has its own merits, not every one of them merits investment.
With so many intricate terms in play (machine learning, neural networks, deep learning, predictive analytics and so on), it's understandable that people get confused.
From Alexa to self-driving cars, artificial intelligence (AI) is advancing quickly. While science fiction often portrays AI as robots with human-like characteristics, AI encompasses anything from Google's search algorithms to IBM's Watson to autonomous weapons.
The term AI was conceived when a group of researchers proposed a study of how machines could understand and use human language and then carry on improving on their own. The proposal was written in 1955, but the term only came into official use a year later, at the 1956 Dartmouth workshop that grew out of it.
From that point onward, the term was widely adopted because the idea had immense potential, and artificial intelligence became a field of computer science in its own right.
It was named "artificial intelligence" because a machine could be considered "intelligent" if it demonstrated behaviour that resembles human intelligence.
The process involves programming computers in a new way: training them on tremendous amounts of data so that they can carry out specific tasks that people routinely do.
A simple definition might be that AI is a set of algorithms that can cope with unforeseen circumstances.
The word "algorithm" used to be something only calculus students talked about; today, press and marketing teams bandy it about as if invoking some cutting-edge magic spell. Despite the hype, though, there is nothing especially exceptional about an algorithm itself. It's what you can do with algorithms that matters.
When chained together, algorithms, like lines of code, become more powerful. They are combined to build AI systems such as neural networks. Because algorithms can tell computers to find an answer or execute a task, they are valuable in situations where we don't know the answer to a question, or where data analysis needs to be sped up.
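To make the idea of "chaining algorithms into a neural network" concrete, here is a minimal sketch in plain Python. Everything in it (the function names, the layer sizes, the random weights) is illustrative only, not taken from any real system: two tiny algorithms, a weighted sum and a thresholding rule, are composed layer by layer into a toy feed-forward network.

```python
import random

def weighted_sum(inputs, weights, bias):
    # One small algorithm: combine several inputs into a single number
    return sum(i * w for i, w in zip(inputs, weights)) + bias

def relu(x):
    # Another small algorithm: keep positive signals, discard negative ones
    return max(0.0, x)

def layer(inputs, weight_rows, biases):
    # Chain the two algorithms above across every neuron in a layer
    return [relu(weighted_sum(inputs, w, b))
            for w, b in zip(weight_rows, biases)]

random.seed(0)
inputs = [0.5, -1.2, 3.0]
# Random weights for two layers: 3 inputs -> 4 hidden units -> 2 outputs
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
w2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

hidden = layer(inputs, w1, [0.0] * 4)   # first chained step
output = layer(hidden, w2, [0.0] * 2)   # second chained step
print(len(output))
```

No single function here is remarkable on its own; the network only emerges from composing them, which is exactly the point the paragraph above makes.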
What is the law but a series of algorithms? Codified instructions prescribe rules and regulations: ifs and thens. Sounds a lot like computer programming, doesn't it? The legal system, however, isn't as transparent as code. Think about the muddled state of justice today, whether the problems stem from backlogged courts, overloaded public defenders, or swathes of defendants excessively charged with crimes.
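The "ifs and thens" analogy can be made literal with a toy example. The function below is invented for illustration; the threshold and categories are hypothetical and do not reflect any real statute. It simply shows how a codified rule reads like a program:

```python
def classify_offense(value_stolen, prior_convictions):
    # Toy, made-up rule: NOT real law, just an illustration of how
    # codified instructions resemble if/then branches in a program.
    if value_stolen > 950:
        return "felony"
    elif prior_convictions >= 2:
        return "misdemeanor, enhanced"
    else:
        return "misdemeanor"

print(classify_offense(1200, 0))  # felony
print(classify_offense(400, 3))   # misdemeanor, enhanced
print(classify_offense(400, 0))   # misdemeanor
```

Real legal reasoning, of course, turns on ambiguity, precedent and discretion that no such branching captures, which is why the legal system is far less transparent than code.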
So, can artificial intelligence help?
Very much so. Law firms are already using AI to perform due diligence, conduct research and bill hours more efficiently. Some, however, expect the effect of AI to be far more transformational: it is anticipated that AI will eliminate most paralegal and legal-research positions before long. Could lawyers share the same fate?
Artificial intelligence today is properly known as narrow AI (or weak AI), in that it is designed to perform a narrow task (e.g. only facial recognition, only internet searches, or only driving a car). The long-term goal of many researchers, however, is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at almost every cognitive task.
Alex Hamilton's open-source glossary of LawTech terminology even invokes magic in its definition of AI:
Artificial Intelligence — A term for when a computer system does magic. “General” artificial intelligence refers to thinking computers, a concept that for the foreseeable future exists only in science fiction and LawTech talks. “Narrow” artificial intelligence refers to a limited capability (albeit one that may be very useful) such as classifying text or pictures, or expert systems. Discussions of AI that blur general and narrow AI are a good indication that you are dealing with bullshit.
As I. J. Good pointed out in 1965, designing smarter AI systems is itself an intellectual task. Such a system could potentially undertake recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease and poverty, so the creation of strong AI might be the most significant event in human history. Some experts have warned, however, that it might also be the last, unless we learn to align the AI's goals with our own before it becomes superintelligent.
“It may even be considered legal malpractice not to use AI one day,” says Tom Girardi, renowned civil litigator and the real-life inspiration for the lawyer in the movie Erin Brockovich. “It would be analogous to a lawyer in the late twentieth century still doing everything by hand when this person could use a computer.” (Will A.I. Put Lawyers Out Of Business?, Forbes: https://www.forbes.com/sites/cognitiveworld/2019/02/09/will-a-i-put-lawyers-out-of-business/)
There are numerous reasons to believe AI could change the legal industry as profoundly as the personal computer did. Currently, the legal system relies on multitudes of paralegals and researchers to discover, index and process information.
Yet AI could conduct that time-consuming research for a fraction of the time and cost, reducing the burden on courts and legal services and speeding up the legal process.
Although no consensus yet exists on how AI will ultimately shape the legal profession, we do know that AI is poised to change almost every facet of our lives, and the new technologies it powers will create a host of unprecedented legal issues, including ownership, risk, privacy and policing. For a taste of what's coming, consider this: when self-driving cars start getting into accidents, who will be deemed responsible? The vehicle owner? The manufacturer? The software designer?
The very fact that these are complicated issues, soon to be exacerbated by modern technology, reveals the need for more lawyers, though not just any kind of lawyer.
* This article reflects my own views and not the views of my employer.