25 facts about AI & Law you always wanted to know (but were afraid to ask).

In law, AI is still all the talk. Most of that talk is slightly or utterly incorrect. The discoveries of recent years have little impact on the automation of legal work and the legal industry. Legal reasoning is different from other fields, and technology should reflect this.

This list is an excerpt from my course on Legal Informatics and Innovation at Frankfurt University and from some presentations I gave at conferences. You will probably be able to reverse-engineer a decent presentation from it. For comments and suggestions, reach out to me here or on Twitter.

1. There is no “AI” (you probably knew this one).

AI is a buzzword, referring to a myriad of different technologies. Or to no tech at all. McCarthy wrapped it up quite nicely: “As soon as it works, no one calls it AI any more.” Like general neural networks. Or convolutional neural networks. Or, going back, support vector machines. Bayesian networks. Expert systems. The term AI mixes the imaginative part of storytelling with the “because science” chic of technocratic sophistication into irresistible stickiness. We have all discussed futuristic dystopias based on AI-related developments, but as soon as one actually works in the field, there are mostly two categories: “stuff that works” and “stuff that you read in the news”.

@matvelloso on Twitter

2. “Legal Tech” is not Legal AI. It is whatever you want it to be.

Legal Tech is not a brand and it is certainly not a scientific term. Most lawyers expect legal tech to deal with the core task of automating or even artificially recreating legal reasoning in the strict sense. But with startups, investors, journalists and the public extending the term to everything in the intersection of law and IT, even blogs and legal recruiting have received the label. Legal tech has become a synonym for tech enhancement in the legal field in general. This is not a bad thing, but it does turn the field into a very arbitrary one, always leaving room for discussion.

The scientists in the field — the Legal Informatics community (see below) — strictly refuse to be conflated with the nascent term, but when it comes to the core questions of feasibility, there are countless parallels.

AFAIK, my paper on Legal Tech (download here) was one of the first to introduce the term to the public in Germany in 2014; since then, it has gained substantial traction. To avoid confusion, I will only use it in the “industry” sense, not as a description of any technological concept.

3. AI in the legal field is not new.

In fact, it goes back to the very first days of informatics in the 1950s and 60s. But in the heyday of early IT dreams, we discovered that most things we want to do with AI in the legal field do not work that simply. Or, in most cases, they do not work at all. And there was more funding and more manpower back then: in the 1960s and 1970s, significantly more people were working on the serious intersection of law and technology than do so today. There were a number of projects that were successful in theory but failed in terms of scalability and practical adoption. Few products were successfully introduced to the market, and almost none of those projects has survived since.

AI and Law, Issue №3

4. There is an entire field of research dedicated to AI and law.

And it has come to an end. The field is even called exactly that: AI and Law. In Europe, the same field goes under the label of “legal informatics”, but both do the same: they examine how legal reasoning — the application of facts under a rule, following the Aristotelian subsumption — could be enhanced, guided, supported, replaced or reproduced through algorithms.

There had been extensive discussions about what to call the field of law and IT and what it should (and should not) contain, and although they helped the field go through puberty, they did not help the early law and IT community develop practical applications. In the end, scientists in continental Europe used the term “Legal Informatics” and made clear that it was exclusively about researching “technology in legal reasoning”, in order to differentiate it from the “law in technology” areas (like data protection, privacy, media law etc.). Universities in the UK preferred the term “law technology”, and the global community, especially in the US, referred to the field as “Artificial Intelligence and Law” or just “AI and Law”. To avoid confusion, I will use Legal Informatics to refer to the (early-day) scientists, while “Legal Tech” denotes a more indefinite, heterogeneous industry vertical without clear boundaries.

Legal Informatics has left us with a myriad of publications that are extremely deep, thorough and actually important, combining influences from IT, law, math, philosophy, linguistics, neuroscience and management. Some contributors are still active and most of the content is public (see a short summary here).

However, this content is hardly accessible: if you just want to quickly learn what is possible, what is not and what has been done, then I must disappoint you. As much as the legal tech world tries to exaggerate the technological feasibility of its applications, the Legal Informatics community struggles to explain its endeavors. I have participated in conferences where the speaker seemed to be the only person in the room to fully understand what the talk was about — and everybody was fine with this.

An accurate summary by Thorne McCarty of Rutgers is of help here, and I recommend you start with it. For German speakers, the publications of G. Wolf are a good starting point.

5. We are not really in a “new era”. We are just living another hype.

You have probably heard about the Gartner Hype Cycle (see below), probably even at an AI conference. But before you nod and skip the next paragraph: have you really, really thought about what it actually means?

There is a leading market research agency that openly and regularly debunks feasibility (some form of machine learning has been at peak hype since 2015) without anyone really caring. According to Gartner, 90% of applications probably don’t work (as advertised) and 90% of talks about AI are bollocks. This also means that your fears and aspirations regarding the implications of that technology are probably unfounded.

And while we are at it: Gartner earns money by hyping technologies, writing about them, selling intel on them and featuring companies that work with them. For Gartner, going public with a chart that points the finger at most of its own customers is pretty remarkable. It shows that, for Gartner, the hype is so obviously out of proportion that it is a safer bet to openly reveal some concepts as snake oil and benefit from the trust gained thereby than to further fuel the hype. Think about this for a minute.

Gartner Hype Cycle, August 2018, © 2018 Gartner, Inc.

6. This is not the first AI Hype. It is just the most elaborate one.

There have been other times like this before. There have been hypes and expectations, focusing on each and every different approach for a while. It all started with systems aligning forward- and backward-chaining technologies in various hybrid architectures in the 1960s. In the decade following, techies (and lawyers) experimented with fuzzy logic and early versions of networks. After some failures, the 1980s were all about (rule-based) expert systems (which nevertheless contained versions of machine learning in their inference engines), and the late 1990s were dominated by tests with networks (again). In the early 2000s, induced by more and more available data (remember the “Big Data” headlines), trends really started to move away from knowledge-based systems towards approaches focusing on statistical valuations, mostly with Bayesian and neural networks. In the early 2010s, variations of neural networks were all the hype, and in recent years reinforcement learning and deep learning have made the headlines.

No doubt the substantial breakthroughs of 2012 in Canada, at Stanford and at Berkeley, achieved by running models on Nvidia graphics chips, have changed and will further change paradigms, but they only hold for neural networks, which require huge sets of data — not necessarily for all the other technologies (more about that later).

7. The AI hype is invented and fueled by businesses.

Stories stick. Most of the AI hype is induced by companies trying to sell software. Not AI software, but ordinary, plain software. Or cloud space. Or consulting. There is nothing bad about that — other industries (fashion especially) invent new trends all the time to sell the old stuff in a new box. Telling a CEO that something is changing the business landscape will raise attention, increase budgets and shorten sales cycles.

Dilbert by Scott Adams, slightly altered by someone on the internet.

8. This is true for the legal world, too. There is an AI and Law Hype.

In 2014, there was some hype about a German startup claiming to have solved the eternal problem of legal reasoning through AI. They even made it into the Microsoft Accelerator Program in Berlin, and nobody was surprised. I wrote about this a while back (link in German) because I found it quite amusing how this was possible at all — but after following the IBM Watson-based startup ROSS, one cannot help but surrender to the fact that good marketing and the human craving for good stories can lead to disturbing results.

By the way: that AI startup finally went bust after being sued by the German legal FAQ provider frag-einen-anwalt.de for illegally scraping their content.

9. Legal thinking is different. Law is more than cat pics.

Since you are still reading, you are at least a bit tech-savvy, a lawyer, or both. So you probably know that syntax describes the signs and symbols we use to convey information (e.g. the syntax of the word ‘Giant’ is five letters: capital G, i, a, n and t), while semantics describes the content, the meaning of words (e.g. the semantics of ‘Giant’ is ‘very tall person’, including the mental connections with that meaning: denotations, connotations, feelings, etc.). While syntax is universally constant and generally objective, semantics change and always require a certain level of interpretation. For legal content (or any other domain-specific content), there is even an additional level of semantic information necessary to understand the meaning of something.

Intelligent systems based on statistical methods play mostly on the syntactic level. Pattern recognition with neural networks essentially requires numbers (or at least formalized, numeric information), and most other approaches do, too. But most legal reasoning requires at least some semantic interpretation, and here modern machine learning concepts cannot help. As soon as an application requires semantic reasoning, we are stuck. Lawyers are trained to gather information from data, to read a text, interpret it, reason on it, and produce a result. The process is a rapid and iterative comparison of several semantic layers that machines cannot mimic, at least not through self-learning reproduction.

I have a number of examples for this — here, I’ll leave it with a simple one: If you have a look at a legal provision, there are words defining normative content and words that define structure and hierarchy of content, like or (highlighted in yellow) in the example below:

Excerpt § 78 Mauritius, Criminal Code (Cap. 195), Highlight added

It explains the structure of the paragraph, making clear that the enumeration (i) to (iv) is meant adjunctively. Comparing it to every other or in the text, it becomes clear that the highlighted or is different on a semantic level. On a syntactic level, there are no differences at all. Even semantically, there are no differences at first: all instances of or mean the same. Yet the or in (1)(b)(iii) has an additional, legal, semantic value, defining the vertical hierarchy of the paragraph.

Excerpt § 78 Mauritius, Criminal Code (Cap. 195), Highlight added

For the legal human mind, checking the logical order in the next-to-last line of a paragraph is a simple task, based on reasoning or, after some practice, intuition. For training neural networks or other recent machine learning concepts, this is a problem. And it is just one example picked at random — referring to the problem of implicit hierarchies — there are many more.
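
To make the problem concrete, here is a minimal sketch in Python. The provision text is an invented paraphrase, not the actual wording of § 78; the point is only that, at the token level, every or is the identical string:

```python
# Hypothetical, abbreviated provision text (invented for illustration,
# not the actual wording of the Mauritian Criminal Code).
provision = (
    "commits an offence if he (i) imports, (ii) manufactures, "
    "(iii) sells, or (iv) distributes such goods, or aids another "
    "person in doing so"
)

tokens = provision.split()
or_positions = [i for i, tok in enumerate(tokens) if tok.strip(",()") == "or"]

for i in or_positions:
    print(i, tokens[max(0, i - 2):i + 3])
# Both hits are the identical token "or". Nothing in the string itself
# reveals that the first one closes the enumeration (i)-(iv) -- the
# vertical hierarchy -- while the second merely joins two alternatives.
# That additional legal-semantic layer is simply not in the data.
```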

10. The “new” AI-achievements in other industries don’t really help lawyers.

In other words: The factors that have fueled the AI-discussion in the last 7 years are of (almost) no relevance for AI in law.

Yes, there have been some applications where the new tech makes a lot of sense and where results are truly astonishing. This is the case where data can be interpreted in a statistical way but has value for legal reasoning. Take the number of square meters in lease contracts. If a rule of law requires contracts exceeding 2,000 sqm to have longer termination deadlines, an algorithm can swiftly both learn and find contracts that are invalid under this regulation, but only because there are finite ways to describe the termination and because the sqm amount is a formalized datum. Both pieces of data can be put into statistical correlation, and if you have sufficient data you can train a bot that will then apply this to thousands of lease contracts in seconds. Tadaaa — you’ll put every first-year associate with a highlighter to shame.
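
A minimal sketch of why this case is tractable: both the square-meter figure and the notice period are formalized data that simple extraction can reach. The rule, the regexes and the thresholds below are invented for illustration (a real system would learn the clause patterns from labelled examples rather than hard-code them):

```python
import re

# Naive extraction patterns -- invented for this sketch, and deliberately
# ignorant of locale quirks like "2.000" vs "2,000".
SQM = re.compile(r"([\d.,]+)\s*(?:sqm|m2|square met(?:er|re)s?)", re.IGNORECASE)
NOTICE = re.compile(r"termination (?:notice|deadline) of (\d+) months?", re.IGNORECASE)

def flag_lease(text: str, min_months: int = 6) -> bool:
    """True if the lease looks invalid under the invented rule:
    premises over 2,000 sqm require a longer termination deadline."""
    sqm_m, notice_m = SQM.search(text), NOTICE.search(text)
    if not (sqm_m and notice_m):
        return False  # data not extractable -- a human has to look
    sqm = float(sqm_m.group(1).replace(",", "").replace(".", ""))
    return sqm > 2000 and int(notice_m.group(1)) < min_months

print(flag_lease("Premises of 2,400 sqm ... termination notice of 3 months."))
# -> True: formalized data in, formalized answer out. The hard part of
# legal reasoning never enters the picture.
```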

But these examples are rare and depend on use cases with ideal conditions and sufficient data and of course, on the right questions (and business case).

11. What about all the document analysis tools?

That is not all. There are numerous providers (Kira, Luminance, Leverton, ThingsThinking, RFRNZ, to name a few) that do have workable toolsets to speed up legal reasoning in specific aspects. But their tech does not rely on neural nets, at least not predominantly. Of course, they won’t really tell you what it is and how it actually runs, because this is their secret sauce, but in most cases it is a combination of several machine learning or broader “AI” techniques like text-to-vector models, semantic filters and other components working in a hybrid structure. Sven Koerner of ThingsThinking writes about it more openly and will explain what they do. Others keep it vague, but what matters most: if their tech works (and most of it does), it is not machine learning trained on legal data.

12. They have been here all the time — it just wasn’t time.

These applications did not appear out of the blue because of a sudden, evolution-like breakthrough in ML tech. These tools have been around before; you just did not care enough to notice. Kira entered the market in 2011, Leverton in 2013; PwC and EY had forensic engines in 2010 and were applying very comparable tools as early as 2012.

It is only now that the market has opened up, not just due to technological feasibility, but due to changes in the entire ecosystem. It took lawyers becoming fascinated by AI to make them ask for intelligent tools; this is how more and more intelligent tools come to light. Another thing is truly a game changer: data. There have been data pools before, but only with the mass adoption of document management systems (DMS) and digital data rooms as an industry standard has the playground become big enough for these analytics tools to work. Only the emergence of ALSPs and the overall tech-infused bonanza got big-law procurement comfortable with supporting due diligence with AI-based tools.

16. We do NOT have the necessary data for training legal machine learning algorithms (Part 1: Formalization).

However, that does not mean that machine learning is applicable to all legal questions and tasks. Most legal reasoning will remain mundane, manual work. This is due to the fact that we do not have formalized data.

We do not even have data, in that sense. Machine learning runs on formalized data like numbers and figures (images consist of pixels, which are again numbers), while legal content consists of semantic terms that need interpretation. 5 plus 10 equals 15. Everywhere. In every language and jurisdiction. But declaration of will plus declaration of will does not always create a contract. It might. But it may also not. There are entire dissertations just comparing the term contract in German and French law. Or US law.

As an example, we cannot easily formalize the terms buyer-friendly or seller-friendly; trying to do so, we would struggle a lot. Is it a binary decision? Is it gradual? Is a contract 17% buyer-friendly? Is it a score, like 4/5 buyer-friendly? Even while thinking about it, we realize how unnatural and ultimately useless this is. It always depends. This inability to transform legal terms into formalized concepts shows the overall lack of formalization in law.

And while we are at it: we struggle a lot with the quantitative aspects of the law, which render the statistical reasoning so valuable in other fields useless for legal automation. We have all been there: three courts have dismissed a similar claim. One has upheld it. What are the chances of your claim being upheld? 25%? What if the three dismissing rulings are older? What if they are more recent? What if the fourth, upholding ruling comes from a higher instance? Will it render the other rulings void? Should we dismiss them? Discount them?

On the output layer, the same problems prevail: if three out of four customers opt for a gift voucher instead of a reimbursement, this gives a 75% likelihood that the next customer will take the voucher. But what about three out of four contracts being valid: would you rely on an algorithm predicting 75% validity for the next contract, too?
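
A toy illustration of the point (the numbers are taken from the two examples above, nothing more): the arithmetic is identical in both cases, yet the result is only meaningful in one of them.

```python
# Base rates computed from past outcomes: 1 = voucher taken / contract valid.
outcomes = {
    "voucher_taken":  [1, 1, 1, 0],  # three of four customers took the voucher
    "contract_valid": [1, 1, 1, 0],  # three of four contracts were valid
}

for label, data in outcomes.items():
    print(label, sum(data) / len(data))  # both print 0.75
# For vouchers, 0.75 is a usable prediction about the next customer.
# For contracts, the validity of the *next* one depends on its own terms,
# the applicable law and the court hierarchy -- none of which the
# frequency of past outcomes captures.
```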

Yes, there are concepts for the formalization of legal terms. No, they are not successful, let alone in practical application. There are numerous reasons why this is so hard, the easiest one being the lack of funding. There are some bigger projects (the ESTRELLA project with its Legal Knowledge Interchange Format (LKIF), or OASIS LegalRuleML), but they lack broad content and mass adoption. You have probably never heard of them. Go figure.

LegalRuleML is part of a bigger framework. Source: LegalRuleML / OASIS

17. We do NOT have the necessary data for training legal machine learning algorithms (Part 2: Labels)

After Fact №16, this case is kind of closed already, but for the sake of a comprehensive argument, let’s push it further. Let’s say we do have a formalized structure; we just make one up. We would nevertheless have to apply this structure to legal content in order to allow machines to learn. Data scientists call this labelling:

The algorithms that learned to distinguish a cat from a dog did not just have millions of cat pics. In fact, the algorithm was fed millions of cat/dog pics that were annotated. More important than the cat pic itself is the annotation of the pic with the information that it “contains cat”. This means that someone who knows what a cat looks like has looked at the pic, seen the cat, and labelled the image.

There are 164M labelled cat images on Instagram. How many labelled NDAs do you have?

The knowledge the ML algorithm is fed is not the pic, but the know-how contained in the pic, made explicit through the label. To be even more precise, the know-how contained in the pic consists of the factors that led the labeller to label the data: some aspects of the image have led the labeller to see a cat and annotate the image with #cat. The challenge for the algorithm is to find these determining factors on its own.

An example: for humans, “pointy ears” could be such a determining factor for recognizing a cat. The algorithm needs to find these factors independently.

Consequently, a good database for ML training is not just data but annotated data. Modern machine learning tries to automatically create labels from other data sets (e.g. in medical imagery this could be the medical report linked to the radiologic scan) or re-creates labels through manual methods (if you don’t know it, have a look at the Mechanical Turk).

It’s a bit ironic that Amazon names its labelling platform after an early automation scam.

Taking this to legal data, we need labels. In order to train an algorithm to automatically tell “good” NDAs from “bad” NDAs, we need to explain what good and bad actually mean. Hence, we need to annotate legal texts with the labels we later wish to use as anchors for the determining factors. And since we judge an NDA clause to be “good” or “bad” based on a number of legal considerations, the algorithmic model would have to discover these considerations on its own, too. Or at least mimic them.

If you think about your contract database as a valuable asset ripe for training, think different: you would have to go through each doc, probably each clause, and label it in order to really make use of it (see the sketch below). Sounds complicated, expensive and slow? Yep. And knowing that one would have to do it for hundreds, maybe thousands of documents, this becomes even more unrealistic. It is almost impossible, actually.
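
For illustration, this is roughly what the unit of work would look like. The schema is hypothetical (there is no established standard to borrow), but every field would have to be filled by someone with legal training:

```python
from dataclasses import dataclass

# A hypothetical annotation schema for NDA clauses -- invented for this
# sketch, not an existing standard.
@dataclass
class LabelledClause:
    text: str          # the raw clause, as found in the document
    clause_type: str   # e.g. "confidentiality", "term", "penalty"
    label: str         # the expensive part: a lawyer's verdict
    jurisdiction: str  # labels do not transfer across jurisdictions

corpus = [
    LabelledClause(
        text="The Receiving Party shall keep all Confidential "
             "Information secret indefinitely.",
        clause_type="confidentiality",
        label="bad",       # someone qualified had to read and decide this
        jurisdiction="DE",
    ),
    # ... repeat for every clause in hundreds or thousands of documents,
    # and again for every language, jurisdiction and legal question.
]
```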

Did you know that there is an industry of game developers focussing exclusively on incentivising medical doctors to participate in image labelling? Say, you get a bonus point for every tumor outline you draw in an app. Yes, this is a thing, and yes, this is what it takes to get to auto-diagnosis. It actually works poorly, and AFAIK there are few companies actually getting there, but they are at least trying. And with the prospect of replacing (expensive) radiologists, this makes sense. The numbers (say: ROI) add up.

For law, the opposite is true: the ROI on this is really bad, since you would have to re-do the work for every language, jurisdiction and legal question. Unlike the medical world, where the numeric representation of imagery is quite universal, law does not scale easily.

18. We do NOT have the necessary data for training legal machine learning algorithms. (Part 3: Data Quality)

So we have no (formalized) data, we have no formalized labelling concept, and we do not have the necessary resources to label data.

There are more caveats: we cannot use older versions of contracts. We cannot use documents drafted under different regulations. And of course, we have to re-do all of this once any provision changes. Why? Because the determining factors linking certain legal questions to certain labels may change once a law changes. So if you were hoping to just cram a decision database of your higher courts into an algorithm, this is bad news.

19. We do NOT have the necessary data for training legal machine learning algorithms (Part 4: Ontological Data)

The biggest obstacle in training algorithms with legal data is the ontological structure of the knowledge. Going through law school and legal professional education, we acquire an immense amount of legal knowledge, structured in a way that resembles a giant tree or cluster.

We use that structure constantly, knowing that an employee works for an employer that might be a company, which might be a partnership or a corporation, with by-laws and a board and shareholders, and that those shareholders might be individuals or corporations, and so on. We are able to understand that the employment contract of that employee has nothing to do with the by-laws, although both are contracts. And we know that a clause, although contained in both agreements, might mean something very different. We apply this knowledge to cases and are hence able to structure extremely complex content into clearly defined, interdependent bits and pieces.

Source: Judith Pratt, Cornell, 2011
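
To give a flavor of what such a structure looks like when written down, here is a toy fragment of the example above as subject-predicate-object triples (the idiom that RDF-based ontology work uses; the snippet itself is invented for illustration, not taken from any of the projects listed below):

```python
# A toy legal knowledge graph as (subject, predicate, object) triples --
# invented for illustration.
triples = [
    ("Employee",           "works_for",   "Employer"),
    ("Employer",           "can_be",      "Company"),
    ("Company",            "can_be",      "Partnership"),
    ("Company",            "can_be",      "Corporation"),
    ("Corporation",        "governed_by", "ByLaws"),
    ("Corporation",        "has",         "Board"),
    ("Corporation",        "owned_by",    "Shareholder"),
    ("Shareholder",        "can_be",      "Individual"),
    ("Shareholder",        "can_be",      "Corporation"),
    ("EmploymentContract", "is_a",        "Contract"),
    ("ByLaws",             "is_a",        "Contract"),
]
# Both the employment contract and the by-laws are contracts, yet nothing
# in these triples tells a machine that the "same" clause means something
# entirely different in each -- that knowledge sits on yet more layers a
# training corpus would also have to encode.
```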

As mentioned in the context of formalization, there are projects focussing on legal ontologies, too, but they are nevertheless limited. And the fact that there are, well, many approaches for the very same aim tells us something, right?

  • OpenCyc: an open source version of the Cyc general ontology;
  • SUMO: the Suggested Upper Merged Ontology;
  • the upper ontologies PROTON (PROTo Ontology) and DOLCE (Descriptive Ontology for Linguistic and Cognitive Engineering);
  • the FRBRoo model (which represents bibliographic information);
  • the RDF representation of Dublin Core;
  • the Gene Ontology;
  • the FOAF (Friend of a Friend) ontology.

Training algorithms would require teaching them this knowledge graph, too, which is considered not doable. Recent breakthrough applications like chess or Go algorithms work in a clearly defined, finite space that is structured through (numeric) rules. There have been close to no semantic structures in truly powerful AI applications.

20. We do NOT have the necessary data for training legal machine learning algorithms (Part 5: World Knowledge)

Finally, we need world knowledge to truly automate legal reasoning. To keep it simple, this means extending the aforementioned ontological semantic structure to an even greater extent, to everything else in the world. Cars. Companies. Coal. Cocoa. Calcium.

We could of course cheat by selecting specific areas and modelling only what is absolutely necessary — this was done in expert systems in the 1970s and 1980s. But as described by Jandach, this approach has strict limits, and ultimately funding will limit the acquisition of knowledge.

21. The biggest challenge to Automation is ROI — this is what should worry you.

Summing this up, law is in a unique position — in a good and in a bad way: most legal reasoning cannot easily be automated through machine learning. There are more and more fields where applications (e.g. for doc analysis) help, but most of the time it is significantly cheaper and easier to proceed manually. Or through rule-based automation, which I will discuss some other time. For lawyers — or, in fact, most regulatory experts applying rules as a job — this seems to be good news: jobs are safe.

On the other hand, this is rather bad news, really. It is the business that dictates innovation. If lawyers are expensive and the legal profession cannot keep up, tech and innovation will simply circumvent legal services. We see increasing numbers of intelligent applications cutting legal fees and legal intervention wherever possible, without processing legal data at all. E-commerce has shown that metrics allow businesses to regulate themselves, leaving little to no dispute resolution to the legal system.

The biggest threat for in-house lawyers is not an in-house-lawyer AI but better risk analysis tools and scenario modelling. The biggest threat for traffic lawyers is not a traffic law AI but autonomous driving. The biggest threat for compliance is not a compliance AI but process automation.

22. Whatever you read in a non-tech newspaper/blog about AI is probably incorrect.

Much of what we just discussed is common knowledge for the tech people in the industry, but hardly conceivable for anyone else, especially in the legal industry. I have often struggled to understand how intelligent and knowledgeable people — like big law firm partners or general counsels — are so easily fooled by exaggerated headlines. As apparent as the differences seem to technologists, they blur for lawyers who are savvy in their field but somewhat illiterate in others, not least because they are professionally used to working with analogies. I get variations of this question a lot: if this works for DeepL, why would it not work for Non-Disclosure Agreements (NDAs)?

Well, because we don’t have the data. Or the tech. Or the labels. Or the money. And because DeepL is a hybrid system using rule-based structures formally trained on their Linguee-based semantics. And because there are millions of users making it better every day.

People don’t like complex formulas and scientific reasoning. That is why people don’t like to really dig into the technological foundations of a product. That is true for readers and for writers. Journalists want to appear on page 1, and they won’t if the title contains some ML jibber-jabber. Or complex summaries.

However, they have a decent chance at making a few clicks if they tap the primal responses of curiosity and fear. Even decent newspapers produce some form of clickbait, though they would not admit it. In some cases they may have had a longer chat with a company’s PR firm before choosing the headline, or they just don’t get the context. It is a complex topic, and summaries of it are either correct and boring, or wrong.

Being a kid of the 80s, I grew up believing adults knew their stuff. And believing that news outlets would fact-check content. They (often) do not. Even in my rather short stint in the IT sector, I witnessed innumerable fake headlines about fake companies and fake innovations. Everybody talked about IBM’s supercomputer-like abilities. Almost nobody talked about German hospitals collectively terminating their Watson contracts for “lack of performance to a degree that one is tempted to conclude that Watson was rather a marketing gag than an actual product”.

Der SPIEGEL, August 3, 2018.

They apply it to law now, but after the dubious success it had with the — numerically formalized, globally available and easily accessible — medical data, results in the legal field are far from likely. That is, if you don’t trust one of the many headlines touting it as your future co-worker. Of course, some pilot customers have experienced some aspect of it, but to my knowledge, nobody has ever seen it working. Let me know if you do ;-)

23. Simple IT-Automation pulls in a lot more in sales than AI.

Like, 10–50 times more. And it does not look like this is going to change. This is true for almost every industry, with the exception, of course, of law, where neither technology has substantial market share. So why would you not start thinking about what IT automation could do to the legal profession? Why not start a company there? At BRYTER we are doing exactly that, but we are just focusing on one thing: no-code automation. However, there are so many more things to tackle.

Global Automation Market (by segment), source: Statista, 2019

24. There are startups and companies that you have never heard of that are 100x more successful than most AI-Startups.

Don’t believe me? Ever heard of UiPath? OutSystems? Celonis? Airtable? Automation Anywhere? Productive Mobile?

UiPath went from a ten-person API dev team in Romania in 2013 to a global player with a $7bn valuation in 2018 — with a simple automation toolbox.

25. Smart Contracts are not smart.

To be clear: Smart Contracts have nothing to do with AI. But weirdly, they regularly get thrown into the mix. It is an even bigger hype with less practical relevance than anything else we have seen. You can easily become a speaker at a legal tech conference if you only mumble often enough that there is a country like <insert name of Scandinavian/Baltic/Middle American country> that puts its <insert name of any public record database like company house/public records/land register> on a blockchain, and that you could explain why this is great. It is not. But most of all: it does not work. I know this, and you can quote me on that. Most of what I have been doing since 2011 (with Lexalgo) and 2018 (with BRYTER) is formalizing legal decision making (remember: the prerequisite for any automated transaction), and although I count our team among the biggest and best in legal automation (a team that is also building the technological underpinning to do this fast and easily), I am nevertheless very, very sure that it is not technically feasible to formalize and digitize legal reasoning to the extent that it becomes even remotely executable through a distributed-ledger-technology (DLT) framework covering everyday problems.

In other words: all the problems attributed to expert systems in the 1980s (the plateau-cliff challenge, Feigenbaum’s bottleneck, etc.) come into play here, several times as hard. The true benefits of DLT (anonymity, no need for a trusted intermediary) are rarely in demand in the legal space, and especially not in the public one, leaving smart contracts as a mostly hypothetical concept.

Of course, the underlying blockchain technology has its applications, but they lie outside the legal world. I’ll write about this some other time. For now, maybe read Jimmy Song’s long summary (link) of why this is not a good idea, or the gem of a (fierce and witty) summary by Zach Korman published on the Oxford Business Law Blog.

People are still looking for use cases for smart contracts. Investors call this “a solution without a problem”.

If you are a visionary and want to change the world, why not focus on what we really and urgently need: VoIP for the public sector. Document Management Systems for the public sector. Digital Signatures for the legal sector. Digital, secure, scalable communication between law firms and institutions. Really good OCR. Something better than document automation. Sounds boring? Yes, I know. It’s much less fun than AI-infused fantasies and so much more burdensome. That’s why we need to tackle it all the more.

And if you want to work on something that is almost as cool as machine learning but actually works and changes the world, come work with us at BRYTER (we are hiring!)

PS: I was told that this fits the #bringbackboring claim. Happy to contribute.

Micha Grupp

Lawyer, Entrepreneur & Innovation Enthusiast. #becurious, #thinkbig & #workhard
