The Future of Authority and the Automation of Law

Jesse Hirsh
Aug 28, 2017


The court of law is being challenged by the court of public opinion. In response, automation and algorithmic decision making are an attempt by the court of law to gain velocity and regain authority. Yet at what cost, and towards what outcome?


In the summer of 2016 I authored a Master’s Thesis on the rise and regulation of algorithmic media. Earlier in 2016 I wrote an article for the IEEE on predictive policing and civilian oversight. I’ve continued my research over the past year, in the lead-up to appearing at a symposium to be held this coming fall by the Supreme Court of Canada, and this essay builds upon those two earlier pieces, focusing on the intersection of algorithmic media and the law.

While the previous two pieces were academic, this one is not. Rather, it is intended to help you, the reader, understand how our society is changing. I’m working through these ideas and developments and want to share where my thinking stands at this point.

The Future of Authority

From a distance, historical change appears to be clear cut, but when you’re living through this kind of transition between eras, there is no clear end and start date. Rather, the new era and the old era often co-exist for an extended period, in which some members of society live in the new era, while many cling to the old.

In our society, many individuals like myself have been living in the new era for over two decades, yet many institutions, in particular the justice system, remain firmly entrenched in the old era.

What makes matters more confusing is that almost all individuals at least visit the new era when they use their smartphone, social media, or just surf the web. So even if they themselves do not live in the new, they see the new and visit the new, probably every day, maybe even every hour.

As we engage with the new, the institutions that remain in the old era become even more visibly dated, their relevance increasingly questioned. Taylor Owen in his book “Disruptive Power: The Crisis of the State in the Digital Age” describes this tension as the mechanisms of statecraft are challenged by new actors and institutions that are emerging as part of this new era.

I approach this contrast, this tale of two societies, within the question of what is the future of authority? What form will authority take in the new era, what relationship will the new authorities have with the old? Can the old authorities transition to become new authorities?

In particular what I’ve observed is a shift away from institutional authority and a move towards cognitive authority.

A clear example of this is who gets quoted, who gets to have a voice in our media. A few decades ago only people with institutional authority, only people who had been vetted by an institution and granted a senior position, only they would be in the newspaper or heard on the radio. Sociologists who study media described these people as “knowns”, as their institutional authority gave them special status in society.

Today anyone with a Twitter account can be, and often is, printed, published, and invited onto radio and television.

Authority is no longer dependent upon your institution, but rather your ability to provide a signal, amidst the (informational) noise that marks our new era.

In fact the metaphor of signal to noise is a great way to understand how our media works. When you turn on any media, you begin with noise, and the onus is upon you, the user, to tune in your own signal. The role of the gatekeeper has been significantly diminished, those that still exist have far less control, and are more often than not automated, as they’re the algorithms that govern media platforms.

The cognitive authority takes the form of the celebrity, the athlete, the Twitter star who provides a consistent if not regular signal to their audience that earns their trust, maybe their loyalty, and their ongoing attention. We hope that these cognitive authorities are subject matter experts, yet they certainly don’t need to be, as authority is more a byproduct of attention and of being known for being known, rather than of specific expertise.

However institutional authority and cognitive authority are not mutually exclusive. The smart person with institutional authority can and should also seek to become a cognitive authority. One means to do so is by becoming a communicator and offering a signal.

Another means is to be present in the new era, as often presence itself is a kind of signal, and can tell the new era that the authorities of the old era have arrived. For in their absence, new and competing systems have emerged.

The Court of Public Opinion

Certainly this competition is evident when it comes to the justice system. No longer is the court of law the sole place people can and will turn to seek justice. Witness the rise of the court of public opinion, which for many provides a faster and more accessible means of finding justice.

The court of law is slow, deliberate, and based on due process. The court of public opinion is fast, emotional, and based on the power of the crowd.

A great example of this contrast was the case of a disgraced Canadian radio host (I refuse to use his name) who was accused by many credible witnesses of sexual harassment and violence. While the court of public opinion quite accurately and effectively assigned guilt (and to some degree punishment), the court of law found this individual not guilty. For many observers this reinforced either the failings of the court of law, the dangers of the court of public opinion, or the dangerous distance and difference that exists between the two.

However, in recognizing the contrast between these two systems of justice, we also have to address and dismiss the myth of the digital native.

Often among members of the old era, you’ll hear the telling of the myth of the digital native. The young people who intuitively understand technology, and will therefore be the future, since they just “get it”.

However this mythology (which has been widely debunked) poses two grave dangers:

First, young people are not intuitively savvy. They require education and learning opportunities the same way anyone of any age does. We need to recognize that access to education and learning resources is important and will only increase in importance. Keeping up with rapidly changing technology takes time to learn and time to play, and we need to make such resources available to people of all ages, or a large part of our society will lose the ability to meaningfully interact with society and its institutions.

Secondly, the myth of the digital native ignores the political role, and the political power, that is invested into the creation of the new era and a new regime for that era.

Rather than talk of digital natives, we should talk about digital settlers. Settlers who have embarked onto a new frontier and are creating a new regime, without the ethical, moral, or democratic debates that should have happened and still need to happen.

The court of public opinion is a remarkable invention. While it may have existed for as long or even longer than the court of law, it has become incredibly empowered by social media platforms and the abilities they grant their users.

But the court of public opinion was not created by social media, and it does not exist within social media, but rather within the hearts and minds of the public that uses these platforms. Those who are attracted to the court, and who help operate it, are developing new skills while participating in the trials and tribulations that make the court of public opinion what it is.

The innovation is not in the technology, but in the people’s response to a changing culture.

Rather than allow that new system to gallop into the future, we need to recognize how to include elements of the new into the institutions of the old, as a balanced means of upgrading the elder, and moderating the younger.

Take for example the citizen journalist. As a practice, journalism no longer has any barrier to entry, nor requires any institutional prerequisite.

Anyone who can post in 140 characters or less can become a journalist on an amplified platform that offers them the potential to reach a global audience. Twitter and other social media services provide the citizen journalist with the capacity for real-time reporting as well as the incentive structure to build a growing and long term audience.

Further, the instinct to become a citizen journalist if and when news happens near you is becoming universal. There is no need to chase ambulances: wherever the accident or incident occurs, someone will have the sensibility to capture and share whatever is in front of them with the supercomputer that resides in their pocket and can communicate with the world.

How could a court of law therefore recognize or provide accreditation to only some journalists and not all? Publication bans certainly do remain feasible, as do rules covering court conduct; however, such rules and such privileges that would be applied to journalists must now be applied to all. The words citizen and journalist may even one day be regarded as synonyms.

To complicate matters we’re also now witnessing the rise of wearable computers that will make the act of journalism, not to mention the collection of evidence, increasingly easier and ubiquitous.

Smart glasses, for example, offer a glimpse into constant and subtle surveillance, in which any pair of glasses may possess the capability to capture, if not live stream, whatever the wearer sees.

We certainly cannot deny those like myself who require corrective lenses the ability to enter a courtroom. Therefore preventing a cyborg journalist with an instant online audience from having the same access may already be impossible.

The appeal is speed, but also personal access to justice, which was previously neither universally possible nor attainable.

Therefore automation becomes an appealing alternative to those who are part of or support the court of law. They hope that by embracing automation, the court of law might stand a chance of competing with or once again superseding the court of public opinion.

The Automation of Law and Robot Lawyers

In a surveillance society, we’re all outlaws.

We all make mistakes, and the moment we do there is a camera or sensor to ensure it becomes part of our permanent record. Speeding on the highway, making a mistake in our taxes, saying something we do not mean in a moment of passion or frustration. Are we prepared for a world where literally everything can and will be used against us?

The automation of law begins with the emergence of robot lawyers. The legal profession is slowly but surely embracing automation as a means of making their jobs faster, more effective, and potentially more affordable. What follows are some examples of companies and technologies that are attempting to achieve this.

There are already robot lawyers that can help you fight a parking ticket, yet it is within larger law firms and the legal community that robot lawyers are starting to article and gain a foothold in the legal establishment.

For example, Toronto-based Loom Analytics takes case law research and turns it on its head. Describing themselves as “litigation intelligence at your fingertips”, Loom takes a numerical approach to case law. They’ve added decades of court cases (initially from three Canadian provinces) into a cognitive system that is able to recognize and understand the context and elements of each trial.

Their desire is to radically speed up legal research, compressing five hours of search into five minutes. Their engine provides users with what works and what doesn’t according to case law, and automates the strategy, offering mathematical arguments as to what should be done in court.

By finding patterns among case law, they feel they can combine math and court precedent to determine potential outcomes and relevant strategies towards winning legal disputes.
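As a toy illustration of this numerical approach (the case records, fields, and figures below are invented for illustration; Loom’s actual system and data schema are far more sophisticated and are not public), finding patterns among case law can start with something as simple as filtering cases by their attributes and computing outcome statistics:

```python
# Hypothetical, simplified case records -- invented for illustration.
CASES = [
    {"area": "wrongful dismissal", "province": "ON", "outcome": "plaintiff"},
    {"area": "wrongful dismissal", "province": "ON", "outcome": "plaintiff"},
    {"area": "wrongful dismissal", "province": "ON", "outcome": "defendant"},
    {"area": "wrongful dismissal", "province": "BC", "outcome": "plaintiff"},
    {"area": "negligence", "province": "ON", "outcome": "defendant"},
]

def win_rate(cases, **filters):
    """Share of matching cases decided for the plaintiff (None if no match)."""
    matching = [c for c in cases
                if all(c.get(k) == v for k, v in filters.items())]
    if not matching:
        return None
    wins = sum(1 for c in matching if c["outcome"] == "plaintiff")
    return wins / len(matching)

rate = win_rate(CASES, area="wrongful dismissal", province="ON")
print(f"Plaintiff win rate: {rate:.0%}")  # 2 of 3 matching cases -> 67%
```

The real value, of course, comes from doing this over decades of decisions and many more dimensions than a toy dictionary can hold, but the underlying idea is the same: turn precedent into numbers, then query the numbers.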

The appeal to the lawyer is that they no longer have to do mundane legal research, freeing them up to pursue subject matter expertise. Humans cannot be exhaustively thorough, no matter how much time they have; this system, on the other hand, can show you what you need to do in five minutes.

This is the software model: build it once, consume it a thousand times. Loom Analytics has lawyers that go through each case and input them into the system. They do it once, so you don’t have to. Humans training the machine. The human beings are still there at the end. They’re using the technology as an assistive tool, a hybrid of human and machine intelligence.

Mona Datt, President and co-founder of Loom Analytics describes the process as resource intensive: “A lot of human input. You can’t just build an algorithm and off you go. Lots of human work to prime the algorithm to recognize documents and possible data.”

Of course, once the system is built, there are all sorts of residual benefits. For example they even have a judge search tool that profiles the decisions judges have made, in particular the cases they have cited in their arguments, providing a blueprint by which to profile and reverse engineer them. You can analyze how a particular judge views case law and what the best strategy may be to argue a case to that judge. You can isolate the winning cases that the judge has ruled on.

This reinforces the notion that no judge is neutral, as it shows which biases a judge leans towards with regard to case law. What does this kind of data say about justices being impartial and free of bias? Certain judges rule a certain way due to their beliefs, and this system will illustrate what those beliefs are and how to take advantage of them. Further, it takes literally seconds to run this report, so even if you hear about the judge the night before, you can sit in the back of the court, search that judge, and see what their profile is.
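A minimal sketch of what such a judge profile might aggregate (the rulings below are invented; the real tool presumably extracts these fields from the full text of published decisions):

```python
from collections import Counter

# Invented sample of rulings, for illustration only.
RULINGS = [
    {"judge": "Judge A", "outcome": "plaintiff", "cited": ["Smith v Jones"]},
    {"judge": "Judge A", "outcome": "plaintiff", "cited": ["Smith v Jones", "R v Doe"]},
    {"judge": "Judge A", "outcome": "defendant", "cited": ["R v Doe"]},
    {"judge": "Judge B", "outcome": "defendant", "cited": ["Acme v Widget"]},
]

def profile(judge, rulings):
    """Aggregate a judge's outcomes and most-cited precedents."""
    mine = [r for r in rulings if r["judge"] == judge]
    outcomes = Counter(r["outcome"] for r in mine)
    citations = Counter(c for r in mine for c in r["cited"])
    return {"outcomes": dict(outcomes),
            "top_citations": citations.most_common(3)}

print(profile("Judge A", RULINGS))
```

Even this crude tally hints at why the report is so fast to run: once the decisions have been structured, profiling a judge is just counting.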

Another example is Legalist, a Silicon Valley-based startup, similar to Loom Analytics, but specializing in “data-backed litigation financing”. Like Loom, Legalist uses algorithms to analyze “millions” of court cases, in this case with an eye towards sourcing, vetting, and financing commercial litigation.

Unlike Loom, Legalist is not a search engine for lawyers, but a fund that will finance a litigation case when their algorithm makes a judgement call that said case will win. They claim their algorithms are able to determine the outcome of a trial, and based upon their prediction of outcome, are able to raise funds by promising a return on investment.

A third and similar company is ROSS Intelligence Inc., another legal search engine built using cognitive computing, that specializes in bankruptcy and insolvency cases. ROSS has been around a bit longer than Loom or Legalist, and as a result has found greater traction within large firms and law schools.

Blue J Legal is a fourth company that is also a legal search engine combining cognitive computing with case law, however they focus on tax law. Their primary product, “Tax Foresight”, is designed to predict the outcome of a tax dispute, which may empower accountants or businesses to alter their business practices accordingly.

ROSS, Loom, Blue J Legal, and Legalist offer us a glimpse of the advantages that cognitive computing offers, especially when adoption of the technology is uneven, and some parties have access to these kinds of tools while others do not. They also give hints of a cognitive marketplace where different systems focus on different areas of the legal sector.

However, that’s not to say that cognitive applications in the legal world are only driven by competition and a desire for advantage. There are others that are designed to make the law more accessible and generally empowering.

For example, Kira Systems is a Toronto-based startup that develops machine learning contract analysis. They engage in data extraction and provision recognition from contracts, with a focus on mergers and acquisitions, using numerical analysis and cognitive computing to extract data from contracts quickly and at scale.

Essentially this involves feeding contracts into their cognitive system, which is then able to recognize the provisions in those contracts, placing them into different categories. They can presently recognize over 100 different provisions that are found in various contracts.

As a service, this allows clients to enter their own contract into Kira’s systems, and have the contract in question analyzed, highlighting elements that may need their attention, while cataloguing all the elements (i.e. provisions) within the contract.
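A heavily simplified sketch of what provision recognition involves (real systems like Kira learn their categories from labelled examples via machine learning, rather than from hand-written keyword patterns like these):

```python
import re

# A toy keyword map, invented for illustration; a trained model would
# replace these brittle hand-written patterns.
PROVISION_PATTERNS = {
    "indemnification": r"\bindemnif(y|ies|ication)\b",
    "termination":     r"\bterminat(e|ion)\b",
    "confidentiality": r"\bconfidential",
    "governing law":   r"\bgoverned by the laws\b",
}

def tag_provisions(clause):
    """Return the provision categories a clause appears to contain."""
    text = clause.lower()
    return [name for name, pat in PROVISION_PATTERNS.items()
            if re.search(pat, text)]

clause = ("The Supplier shall indemnify the Buyer against all losses. "
          "This Agreement is governed by the laws of Ontario.")
print(tag_provisions(clause))  # ['indemnification', 'governing law']
```

The categorization step is the easy part to sketch; the hard part, and the reason these companies describe priming their systems as resource intensive, is getting recognition to work across the messy variety of real contract language.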

Kira Systems is similar to another service called Beagle, which also uses cognitive computing to analyze contracts. Beagle analyzes contracts and facilitates real-time collaboration and review of documents. Their desire is to automate the review of contracts so that parties can focus on what is important, i.e. the intent of the contract and the desire to do business.

Whether these systems are about making the law more accessible to laypeople, or helping lawyers do their job faster and smarter, their developers assert this is not about making lawyers obsolete.

Algorithms are really good at doing boring and repetitive things that people are not actually interested in doing. Plus the machine never gets sick, or sleeps, or takes a vacation, and can work at all hours. These are tasks (like case law search and reading contracts) that lawyers don’t like to do, that they usually offload to interns who’d rather be doing interesting things.

Get the machine to do boring things, and let the humans do law, which is what your client expects when they pay you.

The tragedy is that most people don’t have access to lawyers. Yet lawyers claim to be the defenders of the average person, (which they’re not). What if technology can change that? These developers believe that they’re here to help you make better decisions by using technology to help make those decisions. Their hope is to free up lawyers to focus on proper lawyering, i.e. strategy and advice.

Algorithmic Decision Making and Automated Enforcement

The role of humans is particularly important to consider when contemplating the growing role and presence of algorithmic decision making and automated enforcement.

We are increasingly turning towards algorithmic media to make decisions on our behalf. As Frank Pasquale has detailed in his book “The Black Box Society”, the vast majority of these decisions are derived from proprietary systems that are inscrutable and unaccountable.

Therefore it becomes even more disturbing to see this practice applied to enforcement, punishment, parole, and everything that comes afterwards.

We already see this in our commerce and our culture. The web as we know it is governed by automated enforcement. Language online is often restricted by software. Behaviour can be identified and addressed without any clear intervention. People are flagged and banned from powerful platforms without due process or sense of what it is they’ve done wrong (like sharing a photo of breastfeeding).

One of the arguments in favour of algorithmic decision making is that it removes human bias and prejudice. Yet this is consistently being debunked, as bias becomes embedded in all human creations, algorithms especially. After all, why would an algorithm care about breastfeeding? It is the humans who programmed that algorithm who decided that photos of mothers feeding their children were obscene and should therefore be removed automatically.

ProPublica, a US-based nonprofit that engages in substantive investigative journalism, found that the algorithms currently used in the US to assist in decisions around the granting of parole were racist and classist (much like the justice system itself).

The Right to Explanation

The prevalence and growing role of “black boxes” in the governance of our society is a challenge to the rule of law not just because it creates an alternate system of justice outside of existing institutions, but also because we have no way of knowing why these decisions are made.

There is an increasing chorus of voices demanding that any decision made by an algorithm also include an explanation of how that decision was made. If we are judged by an algorithm, if that system makes a decision that impacts us, then just as a human judge, it must explain why it came to the conclusion that it did.

Presently this does not exist. When you search Google, it does not explain why the results you receive are the ones they are. When you are given a quote on insurance, you are not given the reason why you have to pay more or less than your neighbour or colleague. If this practice is extended to the justice system then democracy cannot survive.
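As a sketch of what an explainable decision could look like, consider a toy insurance quote that returns its reasoning alongside its result (the factors and dollar amounts here are entirely invented; no real insurer’s pricing model is being described):

```python
# A decision that carries its own explanation: each factor's
# contribution to the premium is recorded, so the final number can be
# justified line by line. All figures are hypothetical.
BASE_PREMIUM = 500.0

SURCHARGES = {
    "under_25":       150.0,
    "urban_postcode":  75.0,
    "prior_claim":    200.0,
}

def quote(applicant):
    """Return (premium, explanation) instead of a bare number."""
    premium = BASE_PREMIUM
    explanation = [f"base premium: ${BASE_PREMIUM:.2f}"]
    for factor, surcharge in SURCHARGES.items():
        if applicant.get(factor):
            premium += surcharge
            explanation.append(f"{factor}: +${surcharge:.2f}")
    return premium, explanation

premium, why = quote({"under_25": True, "prior_claim": True})
print(premium)  # 850.0
for line in why:
    print(" ", line)
```

For a simple additive model like this, an explanation is nearly free to produce. The critics’ objection is that modern machine learning systems are not additive models, and their decisions cannot be decomposed so cleanly, which is precisely what makes the right to explanation contentious.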

Facebook users are not given an explanation as to why they see the content they do, even if researchers have documented that this already impacts the outcome of elections and the forming of political positions.

The European Union is one of the only jurisdictions in the world which has proposed and adopted a “right to explanation”, however it has yet to be implemented or tested. Other jurisdictions have yet to even show interest in such a necessary public policy.

There are however critics who argue that a right to explain will hinder the development of automated systems and others go further, arguing that it will be technically impossible.

Yet from the perspective of a human centric democracy, do we not deserve an explanation as to what decisions and judgements are made about us? Does a lack of explanation not prevent due process and the rule of law itself?

What about attribution? Where did the decision come from? In the case of an algorithm, who designed it, who programmed it, and where did the data that trained it come from?

One area where the role of explanation is essential and perhaps more frequent than other areas of justice and governance is in the larger role of regulation.

The Future of Regulation

Law is not just about mediating and resolving disputes, but also about governing and protecting society. Regulation plays an important and necessary role when it comes to fostering fairness, protecting society, the environment, and the broader democratic values we depend upon.

Early proponents of the internet zealously argued against any forms of regulation, fearing that such measures would stifle innovation. We’re finally sobering up and realizing that regulation is not only necessary, but also ripe for innovation as well.

Yet automated regulation may not be the best solution to mitigate the risks posed by rapid technological change. Nonetheless, it is being presented as a remedy to the challenges currently posed in an era marked by disruption.

Specifically, legal scholars argue that algorithms can be used to quickly test how new technologies or new developments within industry correlate to existing laws. These algorithms can predict the outcome of court cases and therefore infer how a regulator should approach a particular issue or challenge. This view sees regulation as the act of ensuring that industries or social activity adheres to established rules and laws.

This is a static approach that regards algorithms as a means of enabling compliance, rather than a dynamic approach that regards regulation as serving a broader purpose. Further, it could continue the trend of disempowering regulators and denying them the resources they require, by instead asserting that their job can be automated and streamlined by machines.

It also begins to marginalize the role of a human judiciary. If algorithms overwhelmingly decide the outcome of a dispute that has yet to happen, does that not limit the ability of a human judge to come up with a different perspective? Will humans be able to challenge the authority of an algorithm, even if they’ve been empowered by the state to do so?

Who is responsible if the algorithmic regulator makes one decision, and then later the human judge rules otherwise? In embracing the desire for speed are we discarding the necessity of due process?

Proponents of algorithmic regulation also like to argue that it removes human bias from the equation, and yet this assertion is entirely false. There is no such thing as neutral technology, all algorithms have biases, whether reflected by their human creators, or embedded into the data they use to make their decisions.

The dangers of a dynamic approach to automating regulation revolve around this potential bias. The concept is to have a digital intermediary as regulator, much as the Facebook newsfeed is a kind of digital intermediary that regulates what people see on the social network. Most social media platforms now employ this kind of regulation, relying upon algorithms to determine not only what people see, but more importantly what should not be seen and what should be immediately removed (or ejected).

Proponents of regulation by robot argue that said systems could evolve and respond to changing market conditions, social dynamics, or new technology. Yet if those algorithms are opaque and lack transparency, how will we know whose interests they serve? A dynamic regulator runs the risk of being corrupted or giving favour to some actors and not others. For whom is that robot regulating?

This raises the need to recognize that algorithms and automation themselves require regulation.

Robot Criminals and Algorithmic Consumers

Regulation is a field that acknowledges that not all actors are individuals, and that corporations and organizations often require greater and stronger regulations to keep them in line.

Corporations are recognized as persons under the law. While that continues to be a controversial status, it is also a gateway for robots to also be recognized as persons under the law.

Algorithmic consumers are already active in the marketplace. One example is scalper bots, which can be purchased by anyone and are designed to get to the front of the line of any online ticket sale for an event, ensuring that you get the best or the most seats. Capital markets are also besieged by algorithmic consumers that buy up bonds or stocks milliseconds before others so as to mark them up in the same way a ticket scalper would. Both are examples where people who do not employ autonomous agents have a clear disadvantage, and end up having to pay a significantly higher price.

Some researchers argue that it will not be long before the entire marketplace is dominated and almost entirely comprised of robots acting on behalf of humans. To some extent we can see the start of this as we employ search engines, social media, and in particular our mobile devices to assist us in our purchasing decisions. The next logical step is to have an intelligent assistant acting on our behalf, searching for the best deals, and negotiating the best price and terms for all of our economic activity.
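A minimal sketch of such an intelligent purchasing assistant, reduced to its essence (the vendors, prices, and constraints below are all hypothetical):

```python
# A toy "algorithmic consumer": given competing offers, the agent
# applies the buyer's constraints and picks the best deal.
OFFERS = [
    {"vendor": "ShopA", "price": 89.99, "ships_days": 5},
    {"vendor": "ShopB", "price": 94.50, "ships_days": 1},
    {"vendor": "ShopC", "price": 79.00, "ships_days": 14},
]

def best_offer(offers, max_ship_days=7):
    """Cheapest offer that still meets the shipping constraint."""
    eligible = [o for o in offers if o["ships_days"] <= max_ship_days]
    return min(eligible, key=lambda o: o["price"]) if eligible else None

print(best_offer(OFFERS)["vendor"])  # ShopA
```

The regulatory questions arise not from logic this simple, but from what happens when millions of such agents, operating at machine speed, negotiate and compete against one another rather than against humans.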

Some companies already employ algorithmic consumers to monitor their competition. Both Amazon and Walmart have even engaged in a sort of bot war where their respective automated agents crawl each other’s websites in search of pricing and yet deliberately try to disable or block each other from finding that information.

In a physical store it is illegal for us to fight with our fellow shoppers, but what about bots battling each other for limited inventory or access to a purchase (a la scalper bots)?

Current regulations that govern consumer activity and competition law do not anticipate that actors will be automated robots rather than humans.

This raises the larger issue of when algorithms break the law. What do we do about robot criminals?

Ying Hu, a research fellow at Yale University’s Information Society Project is working on a paper that argues robots could be morally responsible and be held criminally liable for their actions, and therefore be subject to “punishment”. This does not mean that the designer or owner of the robot would not also be held responsible and accountable, but it does suggest we will need a means of addressing the behaviour of the robot itself.

Presently a great deal of hacking is automated. A criminal can purchase software, which they can let loose upon the internet, and that software can crawl around looking for systems to compromise and infect. Given the speed and potential scale of autonomous systems, it’s not difficult to predict that in the near future, more crimes will be committed by machines than humans.

Combine this concept of the robot criminal, with the rapid rise of algorithmic consumers, and all of a sudden the justice system faces a whole new range of challenges when it comes to staying relevant and ensuring the rule of law continues.

It will not be long before entirely autonomous corporations emerge. One model being pursued presently is the DAO, or Decentralized Autonomous Organization. Essentially an algorithmic corporation, it is an economic entity that has no human staff and no human executives, only algorithms that govern and execute the company’s activities. While such an institution may still serve or be owned by human beings, how long before one emerges without any ownership whatsoever?

In contemplating the legal status and potential liability or responsibility of robots, we also need to acknowledge the role of hacking, both of robots, but also of the law itself.

Hacking the law is not new, and arguably has existed as long as the rule of law. A wealthy person or corporation can hire a lawyer who has specific knowledge and access to powerful networks that allows them to tip the scales of justice in their favour. Further, there are some people who are quite skilled at lying in a courtroom, which should be seen as another kind of hack. Then of course there’s just corruption, the most severe hack, which we hope rarely happens, but historically certainly has.

Yet what if a robot judge is hacked? Or if evidence is hacked so as to frame a case or conflict? Identity theft is a popular and profitable crime, yet how will its impact grow as justice embraces automation?

Hacking has become a frequent and pervasive threat to the integrity and security of the internet. The scale and severity of security incidents has only increased with our embrace and dependence upon technology. We are embracing automation and algorithmic decision making before we have found a means of actively protecting and understanding our digital tools and platforms.

The Democratization of Law

Yet we embrace this risk, and we have high hopes for the technology because algorithms could also democratize law by enabling greater awareness among lay people and the general public. As it stands, most people learn about the law via news and entertainment. An alternative would be to have automated systems that can be consulted by people so as to better understand their legal rights and obligations.

We don’t regularly consult lawyers as that would be expensive, and yet we could regularly consult an automated assistant, chatbot, or legal search engine anytime we’re curious about the law. Just as Google and Wikipedia have become authorities to settle arguments over dinner, so too could a similar legal service help educate people as to what the law actually entails and expects.
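A toy sketch of such a consultable legal assistant, matching a question against a small knowledge base by keyword overlap (the entries are invented placeholders, not legal advice, and real systems would use far richer language understanding than word matching):

```python
import re

# Invented placeholder entries -- not legal advice.
KNOWLEDGE = {
    frozenset({"landlord", "deposit"}):
        "Rules on security deposits vary by jurisdiction; many cap the "
        "amount a landlord may hold.",
    frozenset({"speeding", "ticket"}):
        "You can usually dispute a ticket by requesting a hearing within "
        "a set deadline.",
}

def answer(question):
    """Return the entry with the greatest keyword overlap, if any."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    best = max(KNOWLEDGE, key=lambda keys: len(keys & words))
    if not (best & words):
        return "Sorry, I don't know -- please consult a lawyer."
    return KNOWLEDGE[best]

print(answer("Can my landlord keep my deposit?"))
```

Even something this crude illustrates the appeal: the marginal cost of a consultation drops to nearly zero, which is exactly what lawyers cannot offer.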

Humans are good at making judgements; computers are not. Therefore computers need humans to teach them how to make those judgements. This includes all potential failures or bad judgements. Cognitive systems don’t make decisions on their own, but rather because they’ve been told to do so. They are only as wise or as smart as the humans who program them.

Humans have judgement, they can overrule parameters. Computers do not, as they cannot overrule the logic that governs them.

This emphasizes the collaborative nature of cognitive computing. There are always humans behind the curtain making this new great Oz come alive. And there are also always humans on the other end, benefitting from the power and intelligence that algorithmic decision making offers.

A Future for Humans

Which is not to argue that anything about technology is inevitable. There is always room to introduce moral, ethical, and social concerns.

As a society we can clearly recognize and reinforce the relationship between the rule of law and our democratic rights and freedoms: the need for due process, fair treatment, and universal access.

What we have lost when it comes to the internet, however, is a sense of how to translate the rule of law, and the justice system itself, so as to ensure not only its relevancy but also its effectiveness and authority in resolving disputes and providing safety.

We should be wary as to how this new regime would assess and judge its citizens. If the law is automated, everyone can and will be found guilty by the machine of justice, which lacks an understanding of context and human circumstance. Preserving the humanity, leniency, and ability for a judge to make an exception is crucial. Yet the speed of the new system may not afford us that ability.

Mireille Hildebrandt argues that the rise of algorithms requires us to reinvent the rule of law, otherwise we risk the end of law. Specifically if we do not build our technology with the principles of democracy and the rule of law embedded into them, and adapt our institutions to this new era, then law as we know it will cease to exist.

The danger of delaying this translation, of failing to be present and relevant in the new era, is the rise of online justice and the court of public opinion.

The court of public opinion can only operate, and can only be powerful, with an accompanying culture. Across social media the infrastructure for that culture is emerging: incentive structures and reward systems that encourage people both to organize around breaking news and to take action to become part of the news, part of the spectacle.

This happens in large national events like the Vancouver hockey riots, but it also happens in petty and personal events like the theft or loss of an iPhone. The internet is littered with stories of vigilantes seeking their own justice out of a belief that traditional authorities are either unwilling or incapable, especially at the speed this culture demands.

While the rise of this culture is a serious social problem worthy of its own focus, I draw attention to it here to demonstrate the real and tangible competition our traditional justice system faces. The velocity at which the court of public opinion operates is matched by the speed at which this competition is becoming entrenched, perhaps due to a perceived lack of alternatives.

Therefore it is more important than ever for the traditional, institutional authorities of our society to embrace the challenge and opportunity of entering the new era and becoming cognitive authorities.

On a basic and personal yet entirely professional level, this can begin with learning how to use Twitter. There may be reasons why posting on Twitter is not always, or perhaps not ever, an option. Yet even just being present, able to read and listen, will help increase understanding of and literacy in the new era.

This partly involves focusing on what kind of signal you want to receive: a signal that reflects your personal and professional interests, that avoids noise, and that respects the limited time and attention we all have to share.

It also involves focusing on what kind of signal you want to offer. What kind of signal will extend the authority and relevancy of the justice system into the new era? Twitter is just one example, and joining Twitter does not automatically grant the literacy necessary to thrive in this new era. It is, however, a good start.

We ask that the members of our justice system operate with the greatest respect and integrity. In particular we ask the justices who preside over the system to do so free from bias and with impartiality.

Yet what greater charge could there be than to face the accusation that you are against the new era, that you are biased against the internet, biased against social media?

What if a growing constituency in our society came to believe that the justice system was biased against the new world they live in, and thus chose to reject it, and to reject the rule of law? What happens then?

We can do nothing and find out.

Or we can engage our society as it changes, rather than wait for or expect that change to stop. Instead of assuming our children will figure it out, we should recognize that they won't be able to unless we actively help them do so.

Stop procrastinating, join people like myself in the new world. Embrace the challenge and opportunity of becoming a cognitive authority, helping society understand and succeed in this new era.

Additional Reading and Resources

I use Zotero to save and manage articles and resources I find as part of my ongoing research. You can go to my Zotero profile and browse through the material I’ve come across.
