Machines Are Changing the Software Landscape

Part 2 of 3 — Owning your high tech destiny in the new world

John James
Jan 15, 2019 · 8 min read

My youngest son was sitting on the couch playing a game on his iPad. I asked, “What is the name of that game?” He replied “ and I am playing AI mode”. Intrigued, I asked him, what is AI? He replied, “I am not sure what AI stands for, but I am not playing real players. It’s like a bot, they control themselves, no one really is playing, it’s the computer.” In part 1 of this blog series, we saw the evolution and the speed with which technology has changed, influencing the engineers of today. In this blog, we’ll explore how machines are changing the software landscape, with even more impact than we’ve seen in the past two decades. New interfaces, machine learning (ML), and artificial intelligence (AI) are opening up new ideas and innovation that once seemed impossible to develop. As engineers, understanding the possibilities and being open to change is key to your future career direction and success.

Photo by Jan Kolar on Unsplash

Hardware transforms the user interface

Hardware is changing the way we interface with computers and, as a result, is leading the evolution of software. Remember when computers were accessed only through a screen and keyboard in your home or office, with the mouse as our only means of visual navigation? The smartphone gave us touch screens and computing everywhere. IoT and wearables are now part of our daily lives. Today, new voice technology like smart speakers powered by artificial intelligence is one of the fastest growing consumer technologies. The Google Assistant demo during Google I/O 2018 demonstrated the ability of AI to make a real phone call for a haircut appointment and a restaurant reservation on behalf of a user. The person receiving the call didn’t recognize that they were talking to a machine. AI also has great potential to help our aging population and others with accessibility: instead of feeling increasingly isolated, they can remain connected longer. I was recently told a story of an 86-year-old woman who is blind and loves Alexa. Instead of being overwhelmed by technology, older generations are feeling empowered and keeping a sense of independence much longer than before. All of these new interfaces were made possible by advancements in hardware technology.

New user interfaces are powered by Machine Learning

A new user interface gaining popularity is voice, such as Siri, Alexa, Google Home, Cortana, and many more. Speech and textual language processing are powered by machine learning. When any question or command is sent to the voice device, it’s processed into an intent so the correct action can be applied. All the while, the speech model is being trained and improved, leading to a more personalized experience. When I first started to use Siri on my iPhone several years ago, it was a frustrating experience, often incorrectly classifying the intent of my questions. But when my wife asked the same questions, she was understood. I blamed it on my Australian accent. Years later I rarely encounter that same problem, thanks to improvements in Natural Language Processing (NLP) and Deep Learning.
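The utterance-to-intent-to-action flow described above can be sketched with a toy word-overlap classifier. Real voice assistants use trained NLP models, and the intent names and phrases below are purely hypothetical illustrations:

```python
# Toy intent classifier: score an utterance against example phrases for
# each intent by counting shared words, then return the best match.
# A real assistant would use a trained NLP model; this is only a sketch.

TRAINING = {  # hypothetical intents and example phrases
    "get_weather": ["what is the weather", "will it rain today"],
    "set_timer":   ["set a timer", "start a timer for ten minutes"],
}

def classify(utterance: str) -> str:
    words = set(utterance.lower().split())
    def score(intent: str) -> int:
        # Best overlap between the utterance and any example phrase
        return max(len(words & set(p.split())) for p in TRAINING[intent])
    return max(TRAINING, key=score)

print(classify("what will the weather be today"))  # get_weather
```

A production system would also extract parameters (the “slots”) from the utterance, such as the timer duration, before dispatching the action.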

This combination of new user interfaces powered by machine learning is opening up new innovation. As software engineers, this means adapting and learning new tools, technology and processes.

Frameworks and Machine Learning mean coding less

As new frameworks and platforms evolve, a trend of catering to people with little or no coding experience is emerging. Simple responsive websites can be developed and hosted by just adding your credit card, picking a template, and adding your content. Now, with ML, people are building voice apps and chatbots for their customers without knowing how to code. Blutag, Apiotbot, and DialogFlow are just some of these no-code, low-code frameworks. Usually, the output is a set of configuration files glued together with some JavaScript, deployed in the cloud, and pointed at an NLP algorithm. As with anything, if you want advanced, unsupported features, then coding is required. But as ML advances and new frameworks evolve, having coding experience is becoming less important. We no longer need humans to do calculations. Instead, we need to be able to pose questions and interpret results.

No-code, low-code future

With so many technology choices at an engineer’s disposal, going deep and becoming a language expert is no longer a necessity. Much of an engineer’s current job is about evaluating and integrating many technologies to solve life or business problems with software.

Now frameworks are going another step forward to remove this complexity with low-code or no-code options. I was able to build a very simple weather voice/chatbot in hours using Google’s Dialogflow with no code; anyone without prior coding skills can build their own application with tools like these. Granted, it wasn’t groundbreaking, and there’s still a long way to go, but this ability showcases the no-code, low-code future. Look at just how far we have come in the last 20 years and imagine what the role of a software engineer will look like only ten years from now!

Photo by Zamani Sahudi from Pexels

Programming in the age of Machines

Machine learning and AI are changing traditional software practices, but many engineers do not fully appreciate the changes coming, or refuse to believe them. Already, machines have begun coding themselves. To accommodate ML, engineers need to adapt the software processes of the past. Now, engineers must use a data-driven, contextual understanding process mixed with experiments and hypotheses, rooted in assumptions around accuracy and confidence in the results. It’s a very scientific approach that differs from traditional software development. HomeAway engineers Stephano Bonetti and Pavlos Mitsoulis give some practical examples in their blog, Software Engineering Practices Applied to Machine Learning.

Software that can program itself

As we move towards the concept of no-code or low-code platforms, engineers are still required to build the services that perform specific tasks or actions, such as weather updates, booking a hotel, and more. Most no-code, low-code software is aimed at the user interface, leveraging pre-existing services. But as software continues to improve, becoming more robust and specific, what is stopping a machine from eventually learning how to do our jobs? Today, by combining real-time data, an ML model, and CI/CD (Continuous Integration/Continuous Deployment), new models can be deployed to production without a human making any code changes.
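The retrain-and-deploy loop mentioned above usually includes an automated quality gate: the pipeline retrains on fresh data and promotes the new model only if it beats the one in production. A minimal sketch of such a gate, with hypothetical names and thresholds:

```python
# Sketch of an automated deployment gate for a retrained ML model.
# A CI/CD job would call this after nightly retraining; no human edits
# any code. The function name and threshold here are hypothetical.

def should_deploy(candidate_accuracy: float,
                  production_accuracy: float,
                  min_gain: float = 0.01) -> bool:
    """Promote the candidate only on a meaningful accuracy improvement."""
    return candidate_accuracy - production_accuracy >= min_gain

print(should_deploy(0.93, 0.91))   # True: two-point gain clears the bar
print(should_deploy(0.915, 0.91))  # False: gain is below the threshold
```

Real pipelines typically add more checks, such as latency budgets, bias metrics, and canary rollouts, but the principle is the same: the decision to ship is computed, not coded by hand each time.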

While we have some time before machines start automating our jobs, that does not mean it will never happen. Google engineer and futurist Ray Kurzweil predicted at SXSW 2017 in Austin that the technological singularity (where technology and machines surpass human intelligence) will occur by 2029. Many of us are already using ML to solve problems within our software programs. Computer scientists at Rice University are working on a research project called Bayou, which uses Deep Learning fed by millions of lines of source code to learn how to program. While it is easy to imagine a world where repetitive jobs become automated, it is harder to imagine a world where machines do the coding.

Data is the true language

Understanding data is no longer just a post-analytic function for measuring success; it is the key ingredient driving ML software. Data informs our decision-making process both in business and in our everyday lives. Imagine trying to build a self-driving car without data around driving habits, weather conditions, road maps, and real-time inputs. Could you code that logic in traditional ways? Where would you even start? To begin such a project, you must first capture, understand, and format the data so the ML algorithms can be trained correctly. Is the data best represented as a time series, as text, or as numeric features that help describe a known outcome? Sometimes the most time-consuming part of building a new ML model is capturing the correct data features and massaging them into the right format. Imagine the interactive development environment of the future, where the program is a list of ML algorithms and the coding involves data manipulation tools.
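The capture-and-format step described above often means turning raw categorical records into the numeric vectors a training algorithm expects. A tiny sketch, where the field names and encodings are hypothetical illustrations (loosely inspired by the driving example):

```python
# Sketch: massaging raw records into numeric feature vectors for ML.
# Field names and the encoding scheme are hypothetical illustrations.

RAW = [
    {"hour": "08", "weather": "rain",  "braked_hard": "yes"},
    {"hour": "14", "weather": "clear", "braked_hard": "no"},
]

WEATHER_CODES = {"clear": 0, "rain": 1, "snow": 2}  # categorical -> code

def to_features(record: dict) -> list:
    return [
        float(record["hour"]),                            # time of day
        float(WEATHER_CODES[record["weather"]]),          # encoded weather
        1.0 if record["braked_hard"] == "yes" else 0.0,   # yes/no -> 0/1
    ]

print([to_features(r) for r in RAW])
# [[8.0, 1.0, 1.0], [14.0, 0.0, 0.0]]
```

In practice, choices like these (one-hot versus ordinal encoding, how to bucket time) materially affect how well the model trains, which is why this step so often dominates the schedule.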

Looking back on my own ML experience

My own journey with ML started back in the 90s at college, where I ran neural network experiments in Matlab that would take all night to converge on my 486 computer. Today, similar experiments finish in seconds on my laptop. Fast-forward two decades, and the advancements in hardware have enabled a resurgence in ML. A couple of years ago, I was working with a team that started to use ML to test pricing predictions. Around the same time, my company was offering a semester-long, introductory-level Data Science Academy. I didn’t fully understand some of the results from my own team’s experiments, so I signed up and successfully completed the course. The course included coding homework assignments in R or Python and was based on solving travel-related problems with real data. This was the push I needed to look at problems more critically and through a data-tinted lens. Today I read ML blogs, have completed a couple of online ML courses, and have attempted my first Kaggle competition. R is now my go-to tool for quickly analyzing and modeling simple data sets. There are many online courses and blogs on ML; don’t hesitate to dig in and explore the possibilities this technology has to offer.

Trust your intuition

While I strongly believe machines and data will play a huge role in the future of software and technology, remember that data and machines are not perfect, so trust your own intelligence. Machines rely on the right data, features, predictions, expert knowledge, and analysis of the results to learn and make the right decisions. Recently my iPhone gave me driving times to the local liquor store. Why did it assume from my driving history that I wanted to go to the liquor store at 8am on a Wednesday? Liquor stores in Texas do not open until 10am. The context the app is missing: the local grocery store, which I visit multiple times a week, is right next door and shares the same parking lot. To me this is more of an annoyance, but it shows the level of detail and context engineers will need in the future to improve our lives with software. Melanie Mitchell wrote a New York Times article, Artificial Intelligence Hits the Barrier of Meaning, highlighting some of the interesting challenges we face with AI and human understanding.

Courtesy of John’s iPhone.

A new mindset beyond software and machines

As we have seen in the first two blogs, technology has changed the software landscape in a relatively short time frame. When combined with ML, ideas that not so long ago seemed unattainable are now becoming a reality. Today, technology is no longer a blocker to creativity and innovation; it’s an accelerator. But just embracing and learning new technology is not enough.

In the third and final post of this series, we will explore a human understanding mindset, which will become an important part of every software engineer’s future. Finally, we’ll have you reflect on your own high tech destiny choices for the future.

HomeAway Tech Blog

Software and data science revolutionizing vacations
