Human after all

The evolution of work in the digital age

Chair in Digital Economy
QUT Chair in Digital Economy
15 min read · Jul 10, 2019


by Prof. Marek Kowalkiewicz and Dr Paula Dootson

The global economic and social landscape is changing the way we work. There is ongoing debate about what this change actually means, spanning massive job destruction, massive job creation, and the changing nature of work in general. In this chapter, we focus specifically on understanding the evolution of work from a model characterized by the transfer of capital (from those who have it to those who need it) to a purpose-driven model in which employees are in charge. This is not a discussion about “robots taking our jobs”. Nor is it a discussion about the gig economy being the future model of work. Here, we focus on the vast opportunities emerging in the future of work and the growing possibilities for human contribution: the future of work being human after all.

In this chapter, we propose six trends in the future of work that speak to the changing nature of work and a blended workforce. These changes in the way we work also give rise to changes in the types of jobs that will emerge across the five-stage evolution of work. The chapter concludes by exploring what we as businesses, governments, and society could do next to capitalize on the opportunities of this evolution.

1. Evolution of work as a concept

We tend to think about work as an immutable concept. However, just like any other construct in society, work is evolving. There was a time when there were no employers and employees — the concept did not exist; we were just “making a living”. Similarly, it is possible (and likely) that the idea of employment will disappear from our vocabulary at some point in the future, replaced by another approach. In our research, we think about the evolution of work in five stages.

Employment 1.0: emergence

The first phase of the evolution of work emerged as the concept of a paid job came into practice. Those who owned capital hired those who needed capital (or other resources). Whether the model was one-to-one or one-to-many, employment was largely structured around a hierarchy of boss and worker(s). In this phase, the employee had limited power, with the employer holding most of it.

Employment 2.0: industrialization

The second phase of work focuses on the optimization, or industrialization, of employment. Here, employers seek optimization gains and focus on the best utilization of the existing workforce. To achieve these gains, we witness the emergence of human resource (HR) systems. This phase of the evolution of work is characterized by hierarchies, top-down chains of command, and siloed, specialized teams that are task-driven.

Employment 3.0: automation

The third phase of work involves the automation of employment, where employers focus on the centralization of skills. During this phase we witness the introduction of shared services models, the outsourcing of skills that are not critical to the organization, and the pursuit of productivity gains through the deployment of technology. Here, we also observe increased flexibility and mobility in where work is done (e.g., remote collaboration), enabled by collaborative technologies.

Employment 4.0: digitalization

The fourth phase of work involves the digitalization of employment — the tech industry realizes the potential of changing the dynamics of employment markets. We witness the decreasing power of employers, with a corresponding increase in the power of employees. This phase embraces a shift towards flexible work hours. The emergence of platforms helps to remove market frictions and reduce entry barriers for employees, giving rise to the first waves of gig-economy employees.

Employment 5.0: individualization

The fifth phase of work encapsulates the individualization of employment. Here, there is a further shift in the power distribution, with the employee at the center of everything. We witness the first examples of employees “hiring employers” to pursue their goals. Platforms such as Kaggle are a good illustration of what to expect in other areas of the economy.

Given that the fifth stage of work — the individualization of work — is imaginable in the coming years, we expect to see new trends growing in dominance. They can be brought together in two larger trend groups: the changing nature of work and the blended workforce.

2. The changing nature of work: how technology will reshape what it means to be an employee

Technological innovation is already quite visibly changing employment dynamics. The “gig economy” has opened up a huge amount of flexibility, as gig workers pick up quick jobs and employers dip into a contingent workforce. But, perhaps more subtly and slowly, technology is also changing what it means to be a good employee.

At some point, algorithms and devices could completely replace human employees. This is already happening with smart programs that can answer the phone or chat online with humans, and with robotic arms performing repetitive, well-defined actions inside a factory.

But until full automation arrives — if it ever does — technology will augment human workers in many ways. In fact, it always has. The “best” employees are often those who can leverage these technological innovations.

For instance, ever deepening workplace tools have made office workers more connected and collaborative, able to work in virtual offices or wherever they are in the world. It’s no coincidence that many of the biggest names in technology produce services aimed at white collar work — Microsoft, Google, Slack and Atlassian among them.

This process is almost as old as time: humans invented wheels, writing, mathematics, engines, and eventually computers, each of which slowly augmented more and more of human life. Employees have had to remain at the crest of that wave.

Once this was explicitly about labor saving — armed with levers and pulleys, humans could do more, lift more, and accomplish more with less. But it has equally been about the mind — the civilizations that developed writing were able to organize and trade over large distances, and the Renaissance banks that survived were those best able to price risk.

Not long ago a “computer” was actually a person. Rooms full of workers performed repetitive and complex math. Nowadays that room has been replaced by one person and a personal computer. The people who epitomized the old “computer” were excellent number crunchers, but they would be next to useless in the modern context.

As before, the key with new technologies like artificial intelligence will be how both employees and employers react. Those that flourish in the new world will be like those that came before — they will leverage the potential in new and profound ways, leaving behind those that fail to innovate.

Emerging trends

The rise of Employer Resource Management (or Employee Managed Relationships). Employees will control relations with employers, rather than the other way around (Human Resource Management). A new breed of applications — call them Personal Resource Planners — will make it very easy to maintain many-to-many relationships between employees and employers. Currently, most relationships are many-to-one (an employer typically has many employees, but an employee typically has only one employer). Early signals of this trend are visible in many-to-many gig-economy platforms (Kaggle, Fiverr, Airtasker).

Employees will become proactive — faster than employers. With access to more and more information about employers, employees will increasingly be able to offer their services even before employers realize they need them. Imagine a local government sharing some of its data through open data platforms. Individuals accessing the data may become aware of a potential future problem before the government does. This trend will lead to a reimagining of the “job offers” market, effectively flipping it. We can already see early signals of this phenomenon: the white-hat-hacker space is likely the most advanced in this area — employers are typically not even aware that they need help until white-hat hackers reach out to them after identifying critical cybersecurity flaws. Platforms such as HackerOne already allow employees to reach out to future employers proactively.

A shift from a focus on tasks to a focus on needs. Digital startups have continuously demonstrated that while improving processes might yield a competitive advantage, truly transformational change often results from applying a different perspective to the problem and using technology to create products and services driven by that new understanding. Similarly, jobs are commonly viewed as a description of tasks to be performed, and a true transformation can stem from applying a different lens. One such possible new lens is a focus on jobs to be done [1].

3. Blended workforce: working with an algorithm

As technology increasingly augments human workers, picking up more and more capability on the way to replacing them entirely, we all need to get better at working with it. This is about more than spreadsheets or email. Algorithms can now sort through pools of job applicants or monitor competitors and employees. Algorithms that can do such routine tasks are increasingly powerful and accessible.

Soon, many companies will need to lean on such algorithms just to stay competitive, if not to get ahead. But there can be unexpected pitfalls if algorithms aren’t managed correctly. Amazon, for instance, famously and inadvertently created a hiring algorithm that selected for male traits.

With the internet breaking down barriers to apply for jobs, and remote jobs multiplying, algorithms like this will soon prove essential for sorting through candidates. But successfully applying them will mean understanding when and how to slip automation into the workflow. For managers it also adds another wrinkle — how do you “manage” an algorithmic worker?

The first step is understanding the algorithm: how it works and how it makes its “decisions”. Algorithms aren’t neutral tools; they are constructed from logic and code. For a human-written algorithm, this means the conditions, use cases, exemptions, and so on baked into the code will govern how it acts and reacts.

Making the best of an algorithm then takes a certain amount of understanding of what a particular algorithm is looking for and what it will then do.

For some machine-generated algorithms — the products of machine learning — how the algorithm will act or react may not even be understandable by the human programmer who set it all in motion. In this case it may help to understand the data that the algorithm was trained on, and the specific instructions it was given at the start. Can you backtrack from there?
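A toy sketch can make this concrete. The Python fragment below is entirely hypothetical — it is not any real hiring system, and the data is invented — but it shows why backtracking to the training data matters: a “learned” rule that simply imitates past decisions will faithfully reproduce whatever bias those decisions contain.

```python
# Hypothetical sketch: a "learned" screening rule that imitates the
# majority outcome seen in historical decisions. All data is invented.
from collections import Counter

# Biased historical decisions: (resume keyword, past human decision)
history = [
    ("chess club", "hire"), ("chess club", "hire"),
    ("women's chess club", "reject"), ("women's chess club", "reject"),
]

def learned_decision(keyword: str) -> str:
    """Return the most common past decision for this keyword."""
    outcomes = [decision for k, decision in history if k == keyword]
    if not outcomes:
        return "no precedent"
    return Counter(outcomes).most_common(1)[0][0]

# Backtracking from the training data explains the model's behavior:
print(learned_decision("chess club"))          # "hire"
print(learned_decision("women's chess club"))  # "reject" — bias reproduced
```

Real machine-learned systems are vastly more complex, but the principle stands: the training data and the initial instructions are often the best place to start when trying to understand why a model behaves the way it does.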

We are already starting to see the transition to algorithmic work at a lower level — with companies like Google introducing voice assistants that can book appointments or handle other low-level jobs independently. This means algorithms are moving away from the domain of the experts — the IT teams who may have built them themselves, and who can therefore pick them apart and iterate as needs require.

These new workplace algorithms are arriving prepackaged, plug and play, to workers who may be unable to tease them apart even if legally allowed to. While an algorithmically powered workforce could be incredibly powerful, it’s also dangerous without the necessary understanding.

Emerging trends

Algorithmic employees. Many skills will become increasingly fragmented and disassociated from individuals (codified), offered as services by the very same individuals. Especially in highly specialized skill areas, we will see a growing codification of competencies, a trend currently emerging among software developers and knowledge workers. This trend is not science fiction. We can already see early discussions concerning the ethics of such approaches (Figure 1).

Figure 1. The ethics dilemma of automators (screenshot from Workplace, 2017 [2]).

Algorithmic managers. Decision support systems are now used in hiring, managing, and firing employees. In most cases the algorithms are used to facilitate the work of humans, but in more and more situations — for instance, in the screening of job candidates — algorithms make decisions autonomously, without any human involvement. This trend is being increasingly questioned [3], and humans are being brought back into the process, but the decision support part is most likely here to stay.

Humans managing algorithms. There is an ongoing debate about the appropriateness of using algorithms in certain scenarios. For instance, “black box” algorithms (based on neural networks and the like) are considered unsuitable in situations where a full explanation of the decision process might be required (for instance, court proceedings or welfare payment decisions). This gives rise to a new group of workers who manage algorithms: deploying them in the right scenarios, and deciding where and when they should not be used.
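The three trends above can be sketched together in a few lines of code. The example below is purely illustrative — the scoring rule, thresholds, and field names are our invention, not any real HR product: the algorithm decides clear-cut cases on its own, borderline cases are escalated to a person, and a human manager decides up front whether the algorithm is appropriate for the scenario at all.

```python
# Hypothetical decision-support sketch with a human in the loop.
# Scoring rule, thresholds, and field names are invented for illustration.

def algorithmic_score(candidate: dict) -> float:
    """Toy scoring rule based on experience and skill matches."""
    score = 0.1 * min(candidate.get("years_experience", 0), 10)
    score += 0.2 * len(candidate.get("matching_skills", []))
    return min(score, 1.0)

def screen(candidate: dict, explanation_required: bool = False) -> str:
    # Humans managing algorithms: if a full explanation of the decision
    # might be required, the black-box score must not be used at all.
    if explanation_required:
        return "human review"
    score = algorithmic_score(candidate)
    if score >= 0.8:
        return "shortlist"      # clear-cut: the algorithm decides
    if score <= 0.2:
        return "reject"         # clear-cut: the algorithm decides
    return "human review"       # borderline: escalate to a person

candidate = {"years_experience": 3, "matching_skills": ["python"]}
print(screen(candidate))  # score 0.5 is borderline → "human review"
```

The design choice worth noting is that the human is not an afterthought: the escalation path and the “should the algorithm be used at all?” question are part of the workflow itself.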

Case 1 of working with an algorithm: The human checksum

Eric L. Loomis was arrested in February 2013. He was driving a stolen vehicle that had been used in a drive-by shooting. The police tried to stop the fleeing car before it ran into a snow bank. The driver and passenger ran away on foot but were later arrested. Loomis was one of the two. He pleaded guilty to eluding an officer and operating a vehicle without the owner’s consent.

Loomis is not an angel. He is a registered sex offender, a result of a previous conviction. He had a sawed-off 12-gauge shotgun in the car, with two empty shotgun casings and some live rounds.

During the court proceedings, the judge in the case decided to turn to an algorithm — an application called COMPAS — to make a more informed decision. Before Loomis was sentenced, a report was generated that provided a risk-of-reoffending assessment. The score assessed Loomis as an individual with a high risk of reoffending. And the judge made it quite clear that the output of the algorithm helped in deciding on the six-year jail term, saying: “you’re identified, through the COMPAS assessment, as an individual who is a high risk to the community.”

The case of Eric Loomis is not an exception. Quite the opposite: algorithmic assessment of the risk of reoffending is becoming the norm. Some are concerned.

When computer technology entered our workspaces in the last century, it acted as a validation or enhancement of human activities. When Steve Jobs referred to computers as “bicycles for our minds”, he was suggesting that they make us more efficient — that they help us better use our energy, whether creative energy, computing skills, or other forms of it. However, a bicycle does not decide where to go.

At some stage, however, it becomes less clear who is the rider and who is the bicycle. Are algorithms like COMPAS allowing us to be more efficient and hopefully less biased, or — in a perverse way — are we the bicycles, allowing algorithms like COMPAS to have more impact?

This is a point Leanne Kemp, the CEO of Everledger and Queensland’s Chief Entrepreneur, made recently. While we were discussing the impact of technology on society, she casually mentioned how computers used to be “checksums” for humans, and now, increasingly, humans are checksums for computers.

A checksum, used in a technical context, is a digital “summary” or “signature” of a piece of data. It has traditionally been used in data transmission, which is prone to errors caused by the medium (phone lines, radio waves, or even human transcription). If implemented well, the checksum is always the same when the data is identical on both ends of the medium, but it changes with even the smallest alteration.
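In code, the idea is straightforward. The short Python sketch below uses SHA-256 (via the standard hashlib module) as the checksum function; the transmission scenario itself is invented for illustration:

```python
import hashlib

def checksum(data: bytes) -> str:
    """A digital 'summary' of a piece of data: identical data always
    yields an identical checksum; any change yields a different one."""
    return hashlib.sha256(data).hexdigest()

sent = b"Pay the supplier $1,000"
received_intact = b"Pay the supplier $1,000"
received_corrupted = b"Pay the supplier $9,000"  # one character changed in transit

print(checksum(sent) == checksum(received_intact))     # True: no mistakes
print(checksum(sent) == checksum(received_corrupted))  # False: data was altered
```

Comparing checksums on both ends of the medium is how the receiver gains assurance that what arrived is what was sent.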

But we can also apply a slightly broader understanding:

A checksum provides an assurance that what we receive has been done without mistakes.

Can checksums only be applied to digital data? Can they only be computed by algorithms? No. It is possible to have checksums that confirm there are no mistakes in human work. It is also possible to have humans confirm that what has been done by a computer algorithm is free of mistakes.

There could be an algorithmic checksum and a human checksum. They can assure both human and computer outputs.

Remember the first “killer app” for personal computers? It was the spreadsheet. It allowed humans not only to perform calculations more quickly but — more importantly — to be confident about the results. As long as their spreadsheets were correctly designed, and the correct data was entered, any calculation would be error free. The spreadsheet became a computer checksum for humans.

Today, the equivalents of spreadsheets are everywhere. Forms ensure the data we enter is error free. Email clients remind us to attach files mentioned in the email. Car navigation reminds us to slow down and change lanes to make sure we arrive at the destination according to our preferences (safely, quickly, without traffic fines).

Everywhere we look, we see the emergence of algorithms that operate independently, often without a human in the loop. Government algorithms make automatic decisions in simple cases, such as renewing driver licenses or approving age-triggered services. Banks proactively block credit cards if they notice suspicious behavior. These automatic decisions have various levels of independence. To continue the bicycle metaphor, some algorithms are like basic bicycles, allowing humans to be more efficient; some are like trikes, preventing humans from harming themselves. Some algorithms are like bikes equipped with navigation, recommending where a human should go. Finally, some are fully self-driving, seemingly not requiring humans at all.

And somehow the last group, the “self-driving” algorithms, are all the rage. They are exciting, almost science fiction. But, just like science fiction characters, they often go rogue if not overseen by a human. “I’m sorry, Dave. I’m afraid I can’t do that.”

It might sound counterintuitive that, after so many years of trying to hand over all human activities to machines, we are now trying to take some back.

Case 2 of working with an algorithm: Self automation

In 2016, a Reddit user made a confession. FiletOfFish1066 had automated all of his work tasks and spent around six years “doing nothing”. While the original post seems to have disappeared from Reddit, there are numerous reports about the admission. The original poster suggested that he (all the stories refer to FiletOfFish1066 as male) spent about 50 hours doing “real work”. The rest — “nothing”. When his employer found out, FiletOfFish1066 was fired. This is possibly the worst mistake an employer can make.

Algorithms don’t simply power applications, scripts, or automate tasks in other ways. Increasingly, they become our personal agents and make decisions on our behalf. For instance, Boston-based Quantopian, an investment firm focused on crowdsourcing, allows people to submit simple algorithms that then make fund allocation decisions on behalf of investors. The algorithms are not yet as simple as Siri Shortcuts, but even beginner programmers should be able to write simple Quantopian algorithms. Imagine hundreds or thousands of such algorithms, each of them having access to information provided by Quantopian, deciding what to do with their creators’ money. It is not science fiction; it is happening right now. An army of algorithms is continuously deciding how to allocate the funds.

Artificial intelligence algorithms are better than humans at detecting skin cancer. Software predicts the risk of criminals reoffending (even though it cannot explain how it arrives at its risk assessments, and it does not yet seem to be better than humans). But an algorithm cannot hold your hand when delivering news about a melanoma. It does not see circumstances that may affect reoffending beyond the standard questions asked. Algorithms are making humans more efficient, not less relevant. We saw this in the age of industrialization (Employment 2.0) too — tasks may change, but there is always an essential place for humans. There are plenty of opportunities.

More and more organizations are trying to identify tasks that are currently performed by humans but should not be. These automated tasks then need to be orchestrated, and that is often where humans play a crucial role. Introducing human-machine collaboration will not only produce better outcomes for existing processes but inevitably enable new value propositions too.

In this ongoing race for efficiency, organizations need to look for opportunities to team humans up with machines (algorithms or robots). This search could involve reviewing your current business processes and finding steps to remove, automate, or enhance. Where algorithms start to outperform humans — for instance, in pattern recognition, data analytics, or structured data management — they should be “hired” for those tasks. Where humans are still better than machines — for example, in creativity, inductive and deductive thinking, or structured problem solving — more focus needs to be put on having humans do those tasks. If you do this well, you will grow your business, not just automate it.

Conclusion

There is no doubt we are witnessing profound changes in the global economic landscape, triggered by new trends in business, technology, and society. The digital economy is here to stay, and many aspects of the world we live in, including the concept of work, are going to change, possibly dramatically. We expect that the major trends in the changing future of work will be in the space of the changing nature of work and the emerging blended workforce — humans working jointly with algorithms. We see this world as full of opportunities, while recognizing some pitfalls that need to be avoided. To put a new twist on a popular saying: the future of work is already here — we just need to find it.

References

[1] Christensen, C. M., Hall, T., Dillon, K., & Duncan, D. S. (2016). Know your customers’ jobs to be done. Harvard Business Review, 94(9), 54–62.

[2] Workplace. (2017). Is it unethical for me to not tell my employer I’ve automated my job? Retrieved from https://workplace.stackexchange.com/questions/93696/is-it-unethical-for-me-to-not-tell-my-employer-i-ve-automated-my-job

[3] Walsh, M. (2019). When Algorithms Make Managers Worse. Retrieved from https://hbr.org/2019/05/when-algorithms-make-managers-worse


QUT, PwC, Brisbane Marketing and DSITI have partnered to create the Chair in Digital Economy