When trained properly, artificial intelligence (AI) can perform certain tasks with far greater precision and accuracy than humans can. This is nothing new; it is already visible in several industries, such as autonomous vehicles. Self-driving cars were developed with safety as the central goal, made possible by the precise movements calculated by AI.
Essentially, this AI reasons like a human driver, asking questions such as: there is a left turn coming up ahead, are there any cars approaching? That green light just changed to yellow, should I slow down or speed up? I have the right of way, but a pedestrian is walking in front of me, what should I do?
Human answers to these questions are not always reliable. For instance, when impaired drivers operate a vehicle, their response time slows drastically, along with their decision-making ability. The same risk exists for non-impaired drivers, because all it takes is one wrong decision, and humans are not perfect.
But with a combined effort and the ability to learn from our mistakes, what if we could build something close to perfect? In other words, something that performs with near-100% accuracy and never needs to sleep or eat. And what if we could apply that idea to the jobs that demand the most precision, such as surgery?
This article will go in-depth into how AI works and then explore possible integrations into many fields where these products are in high demand and have the potential to be very useful: for example, medicine, our climate, businesses, and even the average consumer.
Classifications of AI
To understand how AI can be applied, however, it is essential to understand how it works. AI does not refer to any single technique; rather, it is an umbrella term that spans various categories, such as ML (machine learning) and NLP (natural language processing). Each category has different applications and associated algorithms, which produce different results.
ML is one of the most common terms used when referencing AI. ML has many different sub-categories, such as deep learning, supervised learning, and unsupervised learning. ML refers to training a model, often a neural network, using algorithms to perform a specific task without additional human intervention. This is optimal for a narrow task, such as predicting a likely outcome for a new piece of data after being trained on similar data points.
For example, if someone wants you to tell them the likelihood of a certain team winning a baseball game, what would you reply with? It is likely that you would look to previous data from this team, such as how many games they have won in the past, or the strength of this team’s players compared to their opposition.
From this data, you can make somewhat accurate predictions, because you have the evidence to support your answer.
The same logic applies to ML models. An ML model takes a large data set containing many previous outcomes and trains itself to predict future outcomes with high accuracy. As a beginner, it is good to gain exposure to two core ML processes: linear regression and backpropagation.
Linear Regression (ML)
Linear regression is one of the simplest ML algorithms to implement within a model. It consists of having your inputs (such as the strength of the baseball players) multiplied by learned weights that scale the data. Then a constant bias is added to the equation, which shifts the line and ultimately determines whether a point lies under or over it. The idea is that the linear equation is trained to identify and follow the trend in the data.
The equation for linear regression is shown below:

y = w1x1 + w2x2 + ... + wnxn + b

As the equation shows, you can introduce any number of inputs, each with its own weight, while all remain affected by the same constant bias b. This makes scaling and plotting data very effective, which makes linear regression a great model for narrow tasks with a lot of data.
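As a minimal sketch, the weighted-sum-plus-bias equation can be written in a few lines of Python. The feature values, weights, and bias below are invented example numbers, not trained values:

```python
# Minimal sketch of the linear regression equation:
# y = w1*x1 + w2*x2 + ... + wn*xn + b
# All values here are made-up examples, not a trained model.

def predict(inputs, weights, bias):
    """Weighted sum of the inputs plus a constant bias."""
    return sum(w * x for w, x in zip(weights, inputs)) + bias

# e.g. two hypothetical features: team win rate and average player rating
inputs = [0.65, 7.2]
weights = [1.5, 0.3]
bias = -0.5

print(predict(inputs, weights, bias))  # 1.5*0.65 + 0.3*7.2 - 0.5 = 2.635
```

In real training, the weights and bias would be adjusted automatically to fit the data; here they are fixed only to show the shape of the equation.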
Linear regression is just one of many machine learning algorithms, each suited to achieving different results with different amounts of data. ML algorithms are a lot like medication, in the sense that one type is not always better than another; they are simply different options for different needs.
But how can models know whether they are making accurate predictions? What is the process for correcting them when they make a mistake or miscalculation? For classification models trained with gradient descent (discussed later), this process is called backpropagation.
Backpropagation is a relatively simple idea to understand. When the model makes a mistake, it knows because it compares the outcome it produced with the labeled data, that is, the correct classification, and checks whether it classified that point successfully. If it did not, it updates itself: the model adjusts its weights and bias so it can potentially classify that point better next time.
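The compare-then-adjust loop can be sketched for a single linear unit trained with gradient descent. This is a toy illustration, not a full backpropagation implementation for a deep network; the data set (points following y = 2x) and learning rate are invented:

```python
# Toy sketch of the update step behind gradient-descent training for a
# single linear unit. Data and learning rate are illustrative only.

def train_step(w, b, x, y_true, lr=0.1):
    """Compare the prediction to the label, then nudge weight and bias."""
    y_pred = w * x + b
    error = y_pred - y_true          # how wrong was the model?
    w -= lr * error * x              # gradient of squared error w.r.t. w
    b -= lr * error                  # gradient of squared error w.r.t. b
    return w, b

w, b = 0.0, 0.0
for _ in range(200):                 # many rounds of updates
    for x, y in [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]:
        w, b = train_step(w, b, x, y)

print(w, b)                          # w approaches 2.0, b approaches 0.0
```

Each pass nudges the parameters a little in the direction that reduces the error, which is exactly the "update its weights and bias" behavior described above.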
After many rounds of backpropagation, with the model continuously updating itself, it can classify every point within the training data set with high accuracy. Of course, we do not want a model to fit its training data too perfectly, a problem known as overfitting: if a new data point were introduced, the model would have no idea what to do with it, since it only memorized where the training points go.
Natural Language Processing
NLP (natural language processing) ultimately tries to bridge the gap between human communication and computer interpretation. The most common examples of NLP are Siri and Alexa, in which an NLP system processes and reacts to human language. Although these systems rely heavily on deep learning and ML as well, NLP is crucial for understanding the syntax and rules of a language, no matter how complex they may be.
The first step in NLP is cleaning your data, a step needed for all AI projects, including ML. This includes converting all words to lowercase, removing punctuation, and so on. The second step is feature extraction: pulling out the most valuable parts of the data for the model to analyze.
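The cleaning step described above can be sketched with the Python standard library. This covers only the two operations mentioned (lowercasing and stripping punctuation); real pipelines typically do more, such as removing stop words:

```python
# A minimal sketch of the text-cleaning step: lowercase everything and
# strip punctuation before feature extraction.

import string

def clean(text):
    """Lowercase the text and remove all punctuation characters."""
    lowered = text.lower()
    return lowered.translate(str.maketrans("", "", string.punctuation))

print(clean("Hey Siri, what's the weather?"))  # hey siri whats the weather
```

After cleaning, the text is uniform enough for the feature-extraction step to work on consistent tokens.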
Clustering vs Classification Models (NLP)
If the data you use is unlabelled, you will most likely use a clustering model. These models take a piece of data and, judging by its similarities and differences with previous pieces of data, try to group similar ones together.
The model groups similar data together so that any new input point can easily be added to an existing cluster, provided it is not an outlier. As an oversimplified example, the model can use clustering to group similar sounds or terms, then analyze each group to infer an intention.
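The grouping idea can be sketched with a tiny k-means-style routine on one-dimensional points. The data and starting centers are invented, and a real implementation (such as scikit-learn's KMeans) handles many dimensions and edge cases this sketch ignores:

```python
# A toy sketch of clustering: group unlabelled 1-D points around two
# centers, in the spirit of k-means. Data and centers are invented.

def cluster(points, c1, c2, rounds=10):
    """Alternate between assigning each point to its nearest center
    and moving each center to the mean of its assigned group."""
    for _ in range(rounds):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)       # assumes neither group goes empty
        c2 = sum(g2) / len(g2)
        return sorted(g1), sorted(g2)

points = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
print(cluster(points, 0.0, 6.0))     # two groups: low values, high values
```

New points would simply join whichever cluster center they fall closest to, matching the description above.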
If the data is labeled, classification models usually work best for NLP. Classification models divide the data with decision boundaries. This way, when a new point is introduced, it can be classified simply by determining whether it falls under or over the boundary (again, an oversimplified example).
Classification models rarely have purely linear boundaries unless the task is very simple. Often the boundary is parabolic in two dimensions, and an even more complex surface in higher dimensions.
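The under-or-over decision can be sketched directly. The slope and intercept below are made-up stand-ins for values a real model would learn from labeled data:

```python
# A minimal sketch of a linear decision boundary: a new point is labeled
# by whether it falls above or below the line y = m*x + c.
# The slope m and intercept c stand in for trained values.

def classify(x, y, m=1.0, c=0.0):
    """Return which side of the line y = m*x + c the point lies on."""
    return "above" if y > m * x + c else "below"

print(classify(2.0, 5.0))  # above: 5.0 > 1.0*2.0 + 0.0
print(classify(3.0, 1.0))  # below: 1.0 < 1.0*3.0 + 0.0
```

A nonlinear boundary would replace `m * x + c` with a curved function of `x`, but the side-of-the-boundary logic stays the same.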
After the model is trained, the last step for NLP (as for all other models) is to test it with a testing data set and verify that it classifies inputs with high accuracy. NLP is a great tool for building chatbots and humanoid robots, because every language has many rules that need to be learned and implemented.
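The testing step can be sketched as a simple holdout evaluation. The "model" below is an invented stand-in rule rather than a trained classifier, and the data set is made up; the point is only the split-then-measure-accuracy workflow:

```python
# A small sketch of the final testing step: hold out part of the data,
# then measure accuracy on points the model never saw during training.

def model(x):
    """Toy stand-in for a trained classifier."""
    return "positive" if x >= 0 else "negative"

data = [(-2, "negative"), (-1, "negative"), (1, "positive"),
        (2, "positive"), (3, "positive"), (-3, "negative")]

train, test = data[:4], data[4:]       # simple holdout split
correct = sum(model(x) == label for x, label in test)
accuracy = correct / len(test)
print(accuracy)                         # fraction of held-out points correct
```

A large gap between training accuracy and test accuracy is the usual warning sign of the overfitting problem mentioned earlier.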
Adaptations of AI
There is much more to learn about the specific mechanisms of AI, including activation functions, different types of neural networks, and other topics only touched on briefly in this article. The most important aspects of AI, however, stem not from how it works, but from what we can do with it as a society.
Although the most common uses of AI and their models have already been described, there are many others that do even more specific tasks. For example, image recognition is also a widely used form of AI. A good example of this is seen in smart calculators, in which the user takes a picture of a math equation, and the app scans the written numbers, then outputs the answer.
A more complex use of this same technology appears when such models are adapted to screen images for signs of tumors, bone fractures, or even infectious disease within biopsies. This is a strong example of how classification algorithms can be leveraged to address immediate health problems in the world today.
A good example for those who envision the future being changed by AI is Neural Lace, the technology that the company Neuralink (founded by Elon Musk) is using to potentially revolutionize humanity. This technology works by first learning more about the human brain through small electrodes that accurately read brain signals. The plan is that these signals will then be interpreted and acted upon by AI.
To be more specific, some believe this technology could eventually do remarkable things, such as enabling effectively unlimited memory through AI-managed storage and retrieval. This assumes we can capture the essential mechanisms by which the brain forms memories in the first place, something researchers have a reasonable but incomplete understanding of. For now, this work remains mostly theoretical and will not reach the market anytime soon.
More recently, as mentioned at the very beginning of this article, autonomous vehicles rely heavily on many aspects of AI. This is a good example of AI arriving in the near future, as access to self-driving cars is expected to become more widespread in the next few years. These cars combine imaging technologies and sensors with ML to map the area around them within a certain radius in order to drive safely.
Personally, however, I have always been interested in the potential surgical adaptations of AI. This includes new imaging technologies and even surgical robot assistants! For these ideas to be adopted in the medical field, the ML algorithms must demonstrate higher accuracy rates than humans. It is through this technology that we can increase human longevity and even stand a much better chance of one day finding a cure for cancer.
AI has the potential to change humanity as we know it. Just enhancing a simple project, like a cat-and-dog classification model, to solve a bigger challenge in society, such as classifying tumors, can make a huge impact. Most of the AI projects that will shape our future have not even been thought of yet, which is an interesting concept to consider. The key takeaway is that AI can be adapted to solve problems in any industry: first identify a problem, then choose the best AI model to solve it. And from there, your model can only improve, as there will never be a shortage of data.