The Future of Artificial Intelligence Relies on Humans: It’s Up to Us Not to Mess It Up
Artificial Intelligence, or A.I., is one of the hottest topics in the tech world at the moment. Most of the major tech companies, and a good number of the smaller ones, are developing and using A.I. in their business strategies. From the digital assistants of Amazon, Apple, Google, and Microsoft, to Google Maps suggesting the best route for you to get from start point to destination, artificial intelligence is becoming part of the fabric of our everyday lives.
Artificial intelligence is an extremely broad subject, with constant innovation and research. It is always being talked about: some of the biggest tech leaders and thinkers are addressing the potential futures and risks of A.I.; The Partnership on AI is the largest consortium to come together to discuss its future; and some of the most popular films of the past few decades have explored the futuristic possibilities A.I. could offer us, as well as the ethics behind the innovations and the boundaries that may need to be set. But alongside all the potential for good, there are many risks associated with the growth of A.I., and some of the big tech leaders have come forward in recent years to highlight them.
Elon Musk has spoken out on several occasions about his fears for the future of A.I. and what it means for society as a whole, presenting scenarios in which even the most benign forms of artificial intelligence could have catastrophic effects on humanity. There are fears, and ethical questions being raised, about A.I. one day becoming smarter than we are and consequently deciding that there is no need for humans. On the other hand, Jeff Bezos, Amazon’s founder and CEO, paints a different picture from many of the other big tech leaders and thinkers: he has spoken out about his belief that we should have more A.I., and that we are currently living in the ‘golden age’ of machine learning, with new innovations and breakthroughs coming one after another.
We are not yet at the levels of artificial intelligence displayed in popular culture, and nowhere near an AI takeover like the ones depicted in ‘Terminator’ and ‘Colossus’, which means there is still time for the tech world to decide where A.I. is going in the future and how it will be developed. There will always be the potential for the latest advancements in technology to be used in unethical ways, but there is also the potential for artificial intelligence to do a lot of good. A.I. merely learns how the world has been in the past, from the data currently available to it; we as humans get to decide how the world should be, and that includes how we utilise A.I. For example, AlphaGo, developed by Google’s DeepMind, is one of the most advanced A.I. systems, designed to beat human players at the ancient Chinese game of Go. It recently made headlines after defeating the world’s greatest Go player, sparking fears about A.I. becoming smarter than humans and someday being able to take over. But there is a flipside to that argument: AlphaGo actually shows us how we can use A.I. to complement and boost our current abilities, revealing possibilities we might never have dreamt of before. Stephen Hawking, whilst taking a cautious view of A.I., has expressed similar views, saying that “we cannot predict what we might achieve when our own minds are amplified by A.I”.
But realistically, what is artificial intelligence at the moment? Can we actually call what we have now true A.I., or are we not quite there yet? The dictionary definition states that artificial intelligence is ‘the theory and development of computer systems able to perform tasks normally requiring human intelligence’. This is a broad spectrum, encompassing everything from digital assistants that can switch your lights on and off, all the way to robots assisting in surgery or diagnosing patients in place of human doctors. Yet at the moment, despite all the blue-sky thinking and thought pieces, our A.I. capabilities are limited to the technologies and data available to us. Still, artificial intelligence does play a big part in our everyday lives, with most people possibly not even realising that some of the technology they interact with daily is using smart A.I. capabilities to get the job done, such as the algorithms Netflix uses to suggest new movies and TV shows for you to watch.
Artificial intelligence is a big, broad field with plenty of heat around it, especially at the moment with the advent and rise of digital assistants, and the money and research being poured into it across the globe. The UK Digital Strategy, published back in March of this year, even outlined how around £17 million of funding is being dedicated to the research and development of artificial intelligence. But is this broad concept almost too broad, and is the phrase ‘artificial intelligence’ being used too loosely?
Machine learning is a big part of this. It is commonly defined as the current application of A.I., based on the idea that we should be able to give machines access to data and let them learn for themselves, and it is what the majority of the products on the market actually are. The algorithms are programmed to learn from the data available to them in order to perform their task, such as learning from our purchasing habits online and suggesting other items we might like to buy. Machine learning is grounded in the technologies we have available at the moment, but at the same time it is driving A.I. innovation as companies strive to make their machine learning capabilities ever smarter. Here at Connected Space, we use the machine learning technologies available to develop and optimise data-driven algorithms and processes within the logic elements of our own technology platform. But this isn’t true A.I., and the machine learning process is only as smart as the algorithms and data we give it.
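To make the “learning from our purchasing habits” idea concrete, here is a minimal sketch in Python of the kind of co-occurrence logic a shopping recommender might use: count which items are most often bought alongside the one you are looking at. The purchase data and function name are invented for illustration; no real retailer’s system is this simple.

```python
from collections import defaultdict

# Hypothetical purchase histories -- illustrative made-up data only.
purchases = [
    {"laptop", "mouse", "keyboard"},
    {"laptop", "mouse"},
    {"kettle", "teapot"},
    {"laptop", "keyboard"},
]

def recommend(item, histories, top_n=2):
    """Suggest the items most often bought alongside `item`."""
    counts = defaultdict(int)
    for basket in histories:
        if item in basket:
            for other in basket - {item}:
                counts[other] += 1
    # Sort by how often each item co-occurred, highest first.
    ranked = sorted(counts.items(), key=lambda kv: -kv[1])
    return [i for i, _ in ranked[:top_n]]

# "mouse" and "keyboard" co-occur with "laptop" most often in this data.
print(recommend("laptop", purchases))
```

The “intelligence” here is nothing more than counting: the suggestions are only ever as good as the baskets the function has seen, which is exactly the point about machine learning being only as smart as its data.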
Artificial intelligence and machine learning are not fundamentally bad or biased. But because they are created by humans, each of whom has their own biases and their own view of the world, they learn from us and from the data we give them. That gives A.I. the potential to learn to be racist, sexist, and prejudiced in the same way a child can. And like all responsible parents, we are in charge of our own A.I. destiny: it is up to us to raise ethical artificial intelligence that helps humanity rather than hindering it.
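A tiny, deliberately contrived sketch shows how easily that happens. If the example sentences a model learns from skew one way, its “learned” associations simply mirror that skew; the corpus and function below are invented purely to illustrate this.

```python
from collections import Counter

# Hypothetical, deliberately skewed training sentences -- the skew is the point.
corpus = [
    "the doctor said he was busy",
    "the doctor said he would call",
    "the nurse said she was busy",
]

def pronoun_for(word, sentences):
    """Return the pronoun most often seen near `word` -- the 'learned' association."""
    counts = Counter()
    for sentence in sentences:
        tokens = sentence.split()
        if word in tokens:
            counts.update(t for t in tokens if t in {"he", "she"})
    return counts.most_common(1)[0][0]

print(pronoun_for("doctor", corpus))  # prints "he" -- it simply mirrors the skew
```

The model has no opinion of its own; it faithfully reproduces whatever pattern, fair or not, was present in the data it was raised on.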