Machine Learning and the Mechanization of Life

Horst Werner
7 min read · Jul 15, 2024


Although I despise the continuous hyperventilation that constitutes today’s public discourse, I can’t help sharing some thoughts on ML/AI, if only to find out whether I’m the last person in tech who hasn’t drunk the Kool-Aid. First, a few words on Artificial Intelligence:

I have to concede that from an early age I was convinced of the possibility of intelligence outside the human brain. When I was about seven years old, a circus visited our village, their main attraction being a horse that was good at math: Whatever complicated problem we second graders threw at it (e.g. how much is 5+3), it solved right away and gave the response by nodding its head the correct number of times.

So why shouldn’t a machine be intelligent, too? I dabbled with semantic technologies in the early 2000s, got a patent or two, but we never achieved a real breakthrough. I used to say that Artificial Intelligence is the nuclear fusion of computer science — perpetually just ten years away. As it turned out, the ontology-based, deductive approach of those early years didn’t pan out, so research turned to plain statistics and neural networks under the humbler term “Machine Learning”, until a horse was created that was so good at giving seemingly intelligent answers that the term “AI” became fashionable again.

By now, enough voices have pointed out that a text generator such as ChatGPT cannot be truly intelligent (I particularly recommend Noam Chomsky’s opinion piece), so I won’t dive into this sub-topic. After all, real Artificial General Intelligence is only a few years away (right?), and by then today’s LLMs will be history anyway. As the title indicates, my thoughts revolve around the Machine Learning we have today, what we do with it and what it does with us.

The big promise of Machine Learning is automation, and who would deny that that’s a great thing? After all, automation made a million things affordable to everyone, things which in previous centuries were only accessible to the rich, if at all. Automation frees us from the insufferable toil of the olden days, giving us space to be more creative, more human (or so we’re told).

However, automation has an ugly flip side, which I call mechanization. By this term I mean the forced adaptation of all involved systems and people to the requirements of an automated process, never better illustrated than in Charlie Chaplin’s “Modern Times”.

Frederick Taylor, Frank Gilbreth and Henry Ford were the pioneers of automated processes. Yes, I call a process “automated” even when it’s mostly carried out by humans, as long as an automatic element, such as the assembly line, has control. This is because the needs of the automatic element will usually take priority over everything else. The line cannot stand still (even Toyota’s Andon cords, which let workers stop the line to fix problems, ultimately serve the needs of a perfectly running production line).

By the 1990s these rigid processes, now denounced with the derogatory term “Taylorism”, had gone somewhat out of fashion. Car makers in particular started to make their production lines more flexible and to give humans more control. But still, every single activity of every single human in such a process is determined by the constraints of the involved machinery; the work is highly repetitive and leaves little room for specifically human strengths such as creativity or improvisation.

Now let’s look at automated business processes. SAP has been hugely successful because their software established standardized processes — best practices — in the companies adopting R/2, R/3, and their newer incarnations. Here, too, the needs of the system (first and foremost a consistent database) define and limit every interaction of the human with the system (I vividly remember my despair at not being able to complete a transaction without first creating a material). By the mid-2000s it dawned on us that once all the best practices had been cast in iron (i.e. coded into the ERP system), it was difficult for our customers to evolve the next best practices. The rigidity of the software would determine how a company could (and could not) conduct its business. It also created a highly lucrative market for SAP consultants who help customers stretch the limits of what the software can do through creative customizing; still, the fundamental rigidity remains.

So what does all that have to do with Machine Learning? Machine Learning is, and will be, used to create automated processes for activities that were formerly the domain of humans and human judgment. And that forces humans to adapt to the behavior of the technical system, reducing us to being parts of an inflexible process.

We already witness this in customer service, where outrageously stupid chatbots are placed as a barrier between us and real help, frustrating us by suggesting the obvious things we already tried long before even thinking of contacting customer service. That is not to say that chatbots functioning as smarter search engines are not helpful — keyword search always had its limitations. But whenever action needs to be taken, such as updating a system of record, ML falls short because it cannot be reliable enough. So the chatbot will direct us to a form, which is a safe way of updating a system of record but just doesn’t cover our specific problem. After a torturous struggle with the chatbot, the problem will eventually be solved by a human, because the mechanized process fails to deliver what ML promises. In this case, the harm done is “only” the waste of time and nerves and the unnecessary emission of some more carbon dioxide.

In other cases, the damage is much more sinister, albeit not immediately visible. A good example is the increasingly mechanized hiring process: Machine Learning tools scan resumés, filtering candidates by job titles and keywords. People react by using LLMs and prompt engineering to create resumés that have a higher chance of being selected by the ML systems.
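
To make this concrete, here is a minimal sketch of the kind of naive keyword filter I’m describing (the keywords, weights, and threshold are invented for illustration; real applicant tracking systems are more elaborate, but the principle, and its gameability, are the same):

```python
# A hypothetical keyword screen, not any vendor's actual algorithm.
# Resumes that happen to contain the right tokens pass; everything
# else is dropped, no matter how good the candidate is.

REQUIRED_KEYWORDS = {"kubernetes", "microservices", "ci/cd"}   # invented for illustration
PREFERRED_TITLES = {"senior software engineer", "staff engineer"}

def score_resume(text: str) -> int:
    """Count keyword hits; job titles are weighted double."""
    text = text.lower()
    score = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    score += sum(2 for title in PREFERRED_TITLES if title in text)
    return score

def passes_screen(text: str, threshold: int = 3) -> bool:
    """A recruiter-configured cutoff decides whom a human ever sees."""
    return score_resume(text) >= threshold

# The obvious countermove: have an LLM sprinkle exactly these tokens
# into the resume, and the filter is defeated.
```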

The next step, in the case of software engineering positions, is coding interviews, testing how well an applicant has studied a couple of hundred LeetCode problems that may, or more likely may not, be relevant for the position in question. Why? To satisfy a mechanized process in which an ostensibly objective, quantifiable value is assigned to each applicant to facilitate the choice. It is easy to see that this enables the next stage of automation, i.e. removing humans from the preselection process altogether.

The answer? A friend of mine, after sitting through dozens of such interviews and feeling like a code monkey, created a tool called Interview Monkey that feeds an LLM a screenshot of the problem and generates a detailed response (undetectable by the interviewer, as I’m told).
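
I don’t know the tool’s internals, but the basic mechanism is easy to imagine. Here is a plausible sketch, assuming the OpenAI Python client and a vision-capable model (the model name and prompt are placeholders, not Interview Monkey’s actual code):

```python
# A hypothetical reconstruction of the mechanism, not Interview Monkey's
# code: capture the interview problem as an image, send it to a
# multimodal LLM, and get back a worked solution.
import base64

from openai import OpenAI  # assumes the openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def solve_from_screenshot(path: str) -> str:
    """Send a screenshot of a coding problem to a multimodal model."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any vision-capable model works
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Solve this coding problem and explain each step."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(solve_from_screenshot("problem.png"))
```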

So the established process selects for two types of employees: those who don’t question orders and spend months studying mostly irrelevant LeetCode problems, and those who game the system. Yes, these are the people most likely to be successful in a company with such a culture, but will that culture produce amazing, innovative products, or will it produce cars that you can get in any color as long as it is black?

In contrast, the inventors of masking tape and the blue LED were people who defied their managers because they had set their minds firmly on problems they wanted to solve. Similarly, Steve “Spaz” Williams brought his idea of using computer animation instead of stop-motion into the film industry against the will of his superiors. Such people, who follow their own ideas, probably wouldn’t be selected by today’s Applicant Tracking Systems because they don’t check the right boxes. Perhaps not coincidentally, we don’t see disruptive innovation from large enterprises any more — the last true innovation I remember is the introduction of the touch interface with the iPhone in 2007. Large companies seem content with small incremental improvements (and with swallowing competing startups just as they become successful). Ironically, they now overcompensate for the lack of real innovation by cramming commodity “AI” into every aspect of their products, a pseudo-innovation with dubious benefits in most cases.

Mechanization has always had the effect of enforcing uniformity. The particular danger of ML, however, is that these systems have become so good at pretending to think like humans that they have created the misconception that all cognitive work can be automated. This is a trap: because these systems are unable to truly understand, their shortcomings will likely lead to the mechanization of more and more aspects of our daily life, as the hiring example illustrates, forcing humans to adapt to those shortcomings and suppressing real innovation.

Think of formerly creative activities such as creating artwork, composing music, or copywriting: what happens if they are reduced to prompt engineering? The promise was that by automating work, humans could be more human. In reality, humans will have to behave more machine-like to accommodate the machine. And just as the encoding of business processes in an ERP system made it very hard for companies to evolve new business processes, the nature of generative ML — regurgitating existing content — makes it impossible to genuinely innovate. The results of the work will become more generic, more uniform, and less expressive.

Is all of this inevitable? Is there, maybe, a better way of using computers? The pioneers of modern computing around Doug Engelbart strove for the augmentation of the human brain. In my opinion, such an augmentation can only be achieved if we adapt the machine to the human, not the human to the machine: a symbiosis in which the human is in control and the machine only does the heavy lifting. Think of a workshop full of power tools instead of a factory line.

Yes, a workshop won’t churn out uniform pieces at the rate a fully automated line does. And there is nothing wrong with fully automating activities where the problem is 100% predictable and uniform, such as ordering an item from an online shop or creating (sub-)layouts for integrated circuits. Empowering humans by removing friction from their work and giving them faster access to information is a great thing. But we need to be careful about what we automate. For now, the human brain is not replaceable, and we must not hand control to a seemingly intelligent horse.
