Everyone is developing AI: But who’s going to win the race?
As we speak, hundreds of researchers are working on developing various AI models. The scope of AI is virtually unlimited, and its potential uses cover almost every industry. While AI is at a nascent stage in some industries, in others the algorithms and software have already matured and disrupted the domain.
Artificial Intelligence, or “AI” for short, refers to a program that can replicate intelligence.
Intelligence is an ability every life form is born with. In layman’s terms, “intelligence” is the ability to comprehend a situation and act based on that understanding. “No action” is also a part of intelligence if it is chosen after comprehending the situation.
We humans have been trying hard to instill the same capability in machines, but we are not yet successful, and I don’t think we ever will be. Still, there are promising results that encourage the development of AI. The benefits that the banking, transport, material handling, healthcare, and financial industries have seen are a testament to how AI can improve our lives. Thus, more and more researchers are working on developing a perfect “AI” model.
Artificial Intelligence: The three core pillars of a program
Data Acquisition

The first pillar of intelligence is data acquisition, without which there can be no intelligence. Every life form is born with receptors that help it take in data from its surroundings. Humans have eyes, ears, a nose, skin, and a tongue through which we receive data. We receive tons of data of different kinds and then process it with our brains.
Data acquisition is the part where machines can beat us, because they can collect far more data than any life form.
The number of sensors available on the market is far greater than the number of receptors any life form can support. Thus, machines have an edge in data collection.
Data Processing: Situational Awareness
This is the pain point that decides AI performance. Every form of life is born with some sort of situational awareness, but machines and software fall flat here.
Unfortunately, there is no insight into how this faculty works or how it can be imparted to software. The existing solution adopted by engineers is to think through the possibilities and events that may arise from the surroundings and provide a tentative output for each possibility. The program continuously screens for these possibilities and, when it identifies a match, looks up the decision already embedded for that possibility by the program’s developer.
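The screen-and-look-up approach described above can be sketched in a few lines. This is a minimal illustration, not a real system; the situations and decisions here are hypothetical examples.

```python
# Decisions pre-embedded by the developer for every possibility
# they managed to imagine at development time.
DECISION_TABLE = {
    "obstacle_ahead": "stop",
    "path_clear": "continue",
    "low_battery": "return_to_base",
}

def decide(situation: str) -> str:
    """Map an observed situation to a pre-embedded decision."""
    # The program continuously screens incoming situations and,
    # when it finds a known mapping, returns the stored decision.
    if situation in DECISION_TABLE:
        return DECISION_TABLE[situation]
    # Anything the developer did not anticipate falls through here.
    return "unknown"

print(decide("obstacle_ahead"))  # stop
print(decide("fallen_tree"))     # unknown: nobody imagined this event
```

The fallthrough case is exactly the weakness discussed next: the program is only as good as the table its developers filled in.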
Thus, the effectiveness of an AI is subject to the skill and experience of its developers: the more the developer can imagine, the better the program can function.
The number of possibilities and events that may arise from a given set of surroundings is infinite and cannot be fed in at the time of developing an AI algorithm. But there is a silver lining in the form of remote monitoring of anomalies.
Researchers understand the limits of incorporating every event at development time, so a better alternative is to remotely monitor the surroundings in real time. Every time an anomaly occurs that the software or algorithm is unable to comprehend, it can flag a remote human monitor to make a decision, and it can note that decision for future reference.
The data from these decisions is collected and averaged to decide the output for similar future events. Once the machine has enough data, it can start making decisions but still ask for operator authentication. The operator at the remote end verifies the output a few times, after which the machine treats the output as authentic and the decision is fixed for all such events.
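The flag-then-learn loop above can be sketched as follows. This is a hedged illustration under my own assumptions: the verification threshold, the `ask_operator` callback, and the event names are inventions for the example, not part of any real product.

```python
from collections import Counter

VERIFICATIONS_NEEDED = 3  # assumed: operator confirmations before a decision is fixed

class RemoteMonitoredAI:
    def __init__(self, ask_operator):
        self.ask_operator = ask_operator  # callback to the remote human monitor
        self.decision_log = {}            # event -> list of operator decisions
        self.fixed = {}                   # event -> decision fixed after verification

    def handle(self, event):
        # Decision already verified enough times: act autonomously.
        if event in self.fixed:
            return self.fixed[event]
        # Anomaly: flag the remote operator and note the decision.
        decision = self.ask_operator(event)
        log = self.decision_log.setdefault(event, [])
        log.append(decision)
        # Once the most common operator decision has been confirmed
        # enough times, fix it for all future occurrences of this event.
        winner, count = Counter(log).most_common(1)[0]
        if count >= VERIFICATIONS_NEEDED:
            self.fixed[event] = winner
        return decision

# Usage: an operator who always answers "slow_down" for fog.
ai = RemoteMonitoredAI(lambda event: "slow_down")
for _ in range(3):
    ai.handle("dense_fog")
print(ai.fixed)  # {'dense_fog': 'slow_down'}
```

After the third confirmation, `handle("dense_fog")` no longer bothers the operator: the man-plus-machine loop has taught the machine one more event.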
Purely autonomous AI is a disaster because it cannot comprehend the vast number of events that may arise in the future. But when the AI is aided by a remote person who can take the lead in the event of an anomaly, it becomes an effective human-plus-machine combination with the potential to handle every event.
Decision Making & Output Optimization
The last part of an AI algorithm is making a decision and optimizing the output. This depends on the second pillar: once the program knows what decision is to be made, it can save the decision-making model for future reference.
The more decisions a program makes, the better it becomes at making decisions and arriving at an optimized output. That’s why your virtual assistant works better the longer you use it.
Machines have an edge here because they can store a lot of data and then average out the decision. They are not biased by emotions, pain, or other surroundings. This makes them neutral observers and gives them an advantage in making better decisions when a similar event has happened multiple times in the past.
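Averaging stored decisions over similar past events can be sketched as a simple vote among neighbours. The feature encoding, the distance metric, and the similarity threshold below are all illustrative assumptions; a real system would tune each of them.

```python
import math

class DecisionMemory:
    def __init__(self, threshold=0.5):
        self.history = []           # list of (features, decision) pairs
        self.threshold = threshold  # assumed: max distance to count as "similar"

    def record(self, features, decision):
        """Store a past event and the decision that was taken."""
        self.history.append((features, decision))

    def decide(self, features):
        # Collect decisions from all sufficiently similar past events and
        # return the most frequent one: a neutral "average" of experience,
        # unswayed by emotion, pain, or surroundings.
        votes = {}
        for past_features, decision in self.history:
            if math.dist(features, past_features) <= self.threshold:
                votes[decision] = votes.get(decision, 0) + 1
        if not votes:
            return None  # no similar event seen before
        return max(votes, key=votes.get)

# Usage with made-up driving events encoded as (speed, proximity) pairs.
memory = DecisionMemory()
memory.record((1.0, 1.0), "brake")
memory.record((1.1, 0.9), "brake")
memory.record((5.0, 5.0), "accelerate")
print(memory.decide((1.05, 1.0)))  # brake
```

Note that the threshold decides what counts as “similar,” which is precisely where the false-similarity problem discussed next creeps in.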
However, there is an inherent flaw in this step, and it arrives in the form of false similarity. Life forms are born with intuition. At times a situation looks similar, but your gut tells you to act differently, and you go with your gut feeling. Machines don’t have that, so whenever a false similarity appears, the machine, unable to generate an intuitive feeling, reuses the decision that was made earlier. This can be beneficial or it can backfire, depending on many other parameters.
Now, back to our primary question: who’s going to win the race to develop the perfect AI?
My bet would be on the researchers who are more experienced and better equipped to anticipate the possible events that may arise. The better the developer, the better the AI.
At the same time, if a program is adopted by a large population, it has a better chance of growing into a successful program.
Thus, better AI development is biased toward big corporations that have the money to hire the best minds and to promote their programs to a large population, which in turn trains the programs on many possible events. Arriving early and securing early adoption will also be important factors.