Artificial Human’s Intelligence

Vaishnave Jonnalagadda
Fnplus Club
7 min read · Jul 1, 2018

“The world’s first trillionaires are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of.”

Impressive, isn’t it? It could be you, so get to know the field now.

Multi-billionaire tech investor Jim Breyer, founder and CEO of Silicon Valley-based Breyer Capital, believes that there will be a CEO and/or investors steeped in AI who will far outpace the wealth of Bill Gates (over $80 billion) or Mark Zuckerberg ($68 billion).

Well, there is much more to dive into; AI stories run as deep as the subject itself.

Currently, there are any number of movies and series related to AI, and it is a buzzword all over the internet. So what makes AI so interesting, fascinating, and seemingly limitless?

Let’s See

Firstly What is Artificial Intelligence?

Artificial Intelligence is to a machine as Natural Intelligence is to humans.

It is the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem-solving”. It has become an essential part of the technology industry. Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits such as:

  • Knowledge
  • Reasoning
  • Problem-solving
  • Perception
  • Learning
  • Planning
  • Ability to manipulate and move objects.
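The “intelligent agent” idea above can be made concrete with a toy perceive-decide-act loop. This is a minimal sketch, not a real AI system; the thermostat scenario and every function name here are hypothetical illustrations.

```python
# A minimal sketch of an "intelligent agent": it perceives its
# environment (a room temperature) and takes the action that moves
# it toward its goal. All names are hypothetical illustrations.

def perceive(room_temp):
    """The agent's perception: read the current temperature."""
    return room_temp

def choose_action(temp, goal=21.0):
    """Pick the action most likely to achieve the goal temperature."""
    if temp < goal:
        return "heat"
    if temp > goal:
        return "cool"
    return "idle"

def act(room_temp, action):
    """Acting changes the environment by one degree."""
    return room_temp + {"heat": 1, "cool": -1, "idle": 0}[action]

# Run the perceive-decide-act loop until the goal is reached.
temp = 17.0
while choose_action(perceive(temp)) != "idle":
    temp = act(temp, choose_action(perceive(temp)))
print(temp)  # 21.0
```

However simple, this captures the textbook definition: the agent perceives its environment and selects actions that maximize its chance of achieving its goal.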

AI technology is a crucial lynchpin of much of the digital transformation taking place today as organizations position themselves to capitalize on the ever-growing amount of data being generated and collected.

On the same wavelength.

Research and development work in AI is split between two branches. One is labeled “applied AI”, which uses these principles of simulating human thought to carry out one specific task. The other is known as “generalized AI”, which seeks to develop machine intelligence that can turn its hand to any task, much like a person.

Research into applied, specialized AI is already providing breakthroughs in fields of study from quantum physics where it is used to model and predict the behavior of systems comprised of billions of subatomic particles, to medicine where it is being used to diagnose patients based on genomic data. In industry, it is employed in the financial world for uses ranging from fraud detection to improving customer service by predicting what services customers will need. In manufacturing it is used to manage workforces and production processes as well as for predicting faults before they occur, therefore enabling predictive maintenance. In the consumer world more and more of the technology we are adopting into our everyday lives is becoming powered by AI — from smartphone assistants like Apple’s Siri and Google’s Google Assistant, to self-driving and autonomous cars like Tesla which many are predicting will outnumber manually driven cars within our lifetimes.

Generalized AI is a bit further off. To carry out a complete simulation of the human brain would require both a more complete understanding of the organ than we currently have and more computing power than is commonly available to researchers. But that may not be the case for long, given the speed with which computer technology is evolving. A new generation of computer chip technology known as neuromorphic processors is being designed to more efficiently run brain-simulator code. And systems such as IBM’s Watson cognitive computing platform use high-level simulations of human neurological processes to carry out an ever-growing range of tasks without being specifically taught how to do them. A popular fictional take on this theme is the thrilling Netflix series “Altered Carbon”.

A loop in itself.

Now, without going too deep into the technicalities of AI, let’s turn toward Machine Learning.

Machine Learning, at its most basic, is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. So rather than hand-coding software routines with a specific set of instructions to accomplish a particular task, the machine is “trained” using large amounts of data and algorithms that give it the ability to learn how to perform the task. Machine learning came directly from the minds of the early AI crowd, and the algorithmic approaches over the years have included decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.
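The contrast between hand-coded rules and “training” on data can be sketched with one of the simplest learning algorithms there is: a 1-nearest-neighbor classifier. The data points and labels below are made-up illustrations, not from any real dataset.

```python
# A minimal sketch of "learning from data" rather than hand-coding
# rules: a 1-nearest-neighbor classifier in pure Python.

def nearest_neighbor(train, query):
    """Predict the label of the closest training example."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda example: dist(example[0], query))
    return closest[1]

# "Training" here is simply storing labeled examples ...
examples = [((1.0, 1.0), "small"), ((1.2, 0.8), "small"),
            ((8.0, 9.0), "large"), ((9.5, 8.5), "large")]

# ... and prediction generalizes from them to unseen points.
print(nearest_neighbor(examples, (1.1, 0.9)))  # small
print(nearest_neighbor(examples, (9.0, 9.0)))  # large
```

No rule for “small” or “large” was ever written down; the behavior comes entirely from the examples, which is the essence of the training idea described above.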

“AI is any technology that enables a system to demonstrate human-like intelligence,” explained Patrick Nguyen, adding, “Machine Learning is one type of AI that uses mathematical models trained on data to make decisions. As more data becomes available, ML models can make better decisions.”

Much of modern machine learning is based on what are known as “neural networks.” If it sounds complicated, that’s because it is. But in a nutshell, neural networks are built for training and learning. They rely on certain factors of importance to determine the probable outcome of a situation and need to be programmed by humans first.

Neural networks: another new term. What’s that about?

Neural networks are a set of algorithms, modeled loosely after the human brain, that are designed to recognize patterns. They interpret sensory data through a kind of machine perception, labeling or clustering raw input. The patterns they recognize are numerical, contained in vectors, into which all real-world data, be it images, sound, text, or time series, must be translated.

This leads to Deep Learning

Look at the graph showing how the field developed over the years before proceeding to Deep Learning.

The growth

Deep learning is one of many approaches to machine learning. Other approaches include decision tree learning, inductive logic programming, clustering, reinforcement learning, and Bayesian networks, among others.

Deep Learning is a technique for implementing Machine Learning; DL is a subset of ML. “Deep” is a technical term: it refers to the number of layers in a neural network. A shallow network has one so-called hidden layer, and a deep network has more than one. Multiple hidden layers allow deep neural networks to learn features of the data in a so-called feature hierarchy, because simple features (e.g. two pixels) recombine from one layer to the next to form more complex features (e.g. a line). Nets with many layers pass input data (features) through more mathematical operations than nets with few layers, and are therefore more computationally intensive to train.
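The layer-stacking idea can be sketched directly: each layer transforms the output of the previous one, and a network with more than one hidden layer is “deep.” The weights below are arbitrary illustrations, not trained values.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each output neuron is a weighted
    sum of all inputs, passed through a sigmoid activation."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    """Pass the input through each layer in turn; features from one
    layer recombine into more complex features in the next."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two hidden layers plus an output layer: a toy "deep" network.
deep_net = [
    ([[0.2, -0.4], [0.7, 0.1]], [0.0, 0.1]),   # hidden layer 1
    ([[0.5, 0.5], [-0.3, 0.8]], [0.2, 0.0]),   # hidden layer 2
    ([[1.0, -1.0]], [0.0]),                    # output layer
]
print(forward([1.0, 0.0], deep_net))
```

Each extra pair in `deep_net` adds one more layer of mathematical operations to every prediction, which is exactly why deeper nets cost more compute to train.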

Arthur Samuel famously defined machine learning as a “field of study that gives computers the ability to learn without being explicitly programmed.” Deep learning fits that definition too, while tending to deliver higher accuracy, require more hardware or training time, and perform exceptionally well on machine-perception tasks that involve unstructured data such as blobs of pixels or text.

Isn’t it a lot? Actually, there is a lot to it!!

Now for the other side. Real fears have been voiced that the development of intelligence which equals or exceeds our own, but has the capacity to work at far higher speeds, could have negative implications for the future of humanity. These fears come not just from apocalyptic sci-fi such as The Matrix or The Terminator, but from respected scientists like Stephen Hawking.

What took over tech cover pages was two tech titans arguing about AI’s impact on humanity.

Elon Musk and Mark Zuckerberg are having a spat about whether or not artificial intelligence is going to kill us all.

Early Age to Robotic Age

Musk, the chief of Tesla and SpaceX who has longstanding worries about the potentially apocalyptic future of AI, recently returned to that soapbox, making an appeal for proactive regulations on AI. “I keep sounding the alarm bell,” he told attendees at a National Governors Association meeting this month. “But until people see robots going down the street killing people, they don’t know how to react.”

Zuckerberg, Facebook’s CEO, offered a riposte. He called Musk a “naysayer” and dismissed his doomsday fears as unnecessary negativity. “In some ways, I actually think it is pretty irresponsible,” Zuck scolded. Musk then retorted on Twitter: “I’ve talked to Mark about this. His understanding of the subject is limited.”

When figures like Musk and Zuckerberg talk about artificial intelligence, they aren’t really talking about AI — not as in the software and hardware and robots that might produce delight or horror when implemented. Instead, they are talking about words and ideas. They are framing their individual and corporate hopes, dreams, and strategies. And given Musk and Zuck’s personal connection to the companies they run, and thereby those companies’ fates, they use that reasoning to help lay the groundwork for future support among investors, policymakers, and the general public.

What will the future of AI look like?

Have we made something that will control us?

Think Think Think and Rethink.

To conclude: this wonderful and powerful magic has an equal hand in either a successfully developed future or one headed toward destruction. Beyond all that, it has many brilliant applications and is a growing, in-demand field in the IT industry. So to all the tech enthusiasts: good luck pursuing this field of work, doing future wonders, and perhaps ending up with the biggest bank balance, as stated earlier.



Hello, Feel free to read my content on Data and how it’s impacting you and how you can create an impact using data.