Image Credit: malawiheiko at Pixabay

The ABCs of Artificial Intelligence (AI): How Computers Think

Phani Kambhampati
Published in ABCsOfAIbyPhani
14 min read · Jun 27, 2024

The previous article explored the ABCs of Artificial Intelligence (AI). Now, let’s delve deeper into the fascinating world of computers and uncover how they think, process, and understand what’s happening. This journey will help demystify the inner workings of these powerful machines and provide a solid foundation for understanding AI and other advanced technologies.

The Basics of Computer Language

Imagine you’re in a country where everyone speaks a different language. To talk to them, you need to learn their language. Computers have their own special language, too. It’s called binary code.

Image Credit: Geralt at Pixabay

What is Binary Code?

Binary code uses only two numbers: 0 and 1. Just like we use letters to make words, computers use 0s and 1s to make instructions. Think of it as the computer’s alphabet. Here’s a fun fact: the word “hello” in binary looks like this: 01101000 01100101 01101100 01101100 01101111. Cool, right?

Why Binary?

Computers use binary because they are made up of tiny switches called transistors. These switches can be either on (1) or off (0). By combining these on and off states, computers can perform all sorts of tasks, from simple calculations to playing your favorite video games.

How Binary Code Works

To understand how binary code works, let’s break it down further. Each 0 or 1 in binary code is called a bit. A group of 8 bits is called a byte. For example, the letter “A” in binary is 01000001, which is one byte. Computers process these bytes at incredible speeds, allowing them to perform complex tasks quickly.
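
If you'd like to see this for yourself, here is a minimal Python sketch (Python is one of the programming languages introduced later in this article) that converts a character, or a whole word, into its 8-bit binary form; the to_binary helper is just an illustrative name:

# Convert each character of a piece of text to its 8-bit binary representation.
def to_binary(text):
    # ord() gives the character's numeric code; format(..., "08b") pads it to 8 bits (one byte).
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_binary("A"))      # 01000001
print(to_binary("hello"))  # 01101000 01100101 01101100 01101100 01101111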

Beyond Binary: Other Number Systems

While binary is the fundamental language of computers, other number systems like hexadecimal (base 16) and octal (base 8) are also used. Hexadecimal, for example, is often used in programming because it’s more compact and easier for humans to read than long strings of binary numbers. For instance, the binary number 1111 1111 can be written as FF in hexadecimal.

Imagine you’re at a bakery that sells cookies in different packaging sizes. Binary is like buying cookies one at a time (0 or 1). Hexadecimal is like buying cookies in packs of 16, and octal is like buying them in packs of 8.

  • Binary (Base 2): Each digit is either 0 or 1. It’s like counting with just two fingers.
  • Octal (Base 8): Each digit ranges from 0 to 7. It’s like counting with eight fingers.
  • Decimal (Base 10): This is the number system humans use day to day. Each digit ranges from 0 to 9.
  • Hexadecimal (Base 16): Each digit ranges from 0 to 15, but since we run out of single digits, we use letters A to F to represent 10 to 15. It’s like counting with sixteen fingers.
Image Credit: geeksforgeeks.org
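
For readers who want to experiment, the short Python sketch below uses the language's built-in conversion helpers to show the same value in each of these number systems; the value 255 is simply an arbitrary example:

value = 255                      # an ordinary decimal number
print(bin(value))                # 0b11111111  (binary, base 2)
print(oct(value))                # 0o377       (octal, base 8)
print(hex(value))                # 0xff        (hexadecimal, base 16)
print(int("FF", 16))             # 255         (convert hexadecimal text back to decimal)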

Computer Programs — How Programs Enable Computers to Perform Tasks

Now that we know computers speak in binary, let’s talk about how they use this language. This is where computer programs come in.

Image Credit: StockSnap at Pixabay

What is a Computer Program?

A computer program is like a recipe for the computer: just as a recipe tells you how to make a cake step by step, a program tells the computer what to do step by step. And just as a recipe needs ingredients, a program needs data, either ready for use or entered by a user, and following the steps produces a (hopefully) expected outcome, typically called output in technical circles.

  • Ingredients (Data): A computer program uses data like a recipe lists ingredients. This data can be numbers, text, images, or any other information the program needs to work with.
  • Recipe (Code): The steps in the recipe are like the lines of code in a program. Each step tells you what to do with the ingredients, just as each line of code tells the computer what to do with the data.
  • Outcome (Output): The final cake results from following the recipe and using the correct ingredients. Similarly, the output of a computer program is the result of executing the code, whether it’s displaying a webpage, calculating a sum, or running a game.
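
To make the analogy concrete, here is a minimal Python sketch with made-up numbers: the list of prices plays the role of the ingredients (data), the small function is the recipe (code), and the printed total is the finished cake (output):

# Ingredients (data): the information the program works with.
prices = [2.50, 4.00, 1.25]

# Recipe (code): step-by-step instructions for the computer.
def total(prices):
    return sum(prices)

# Outcome (output): the result of following the recipe.
print("Total:", total(prices))   # Total: 7.75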

Recipe (Code)

While humans understand many languages, computers only understand binary code. Binary code is not only difficult for humans to write instructions in; it is also very hard to read (potentially years later) and change over time. This is where another innovation, programming languages, comes into play. Programming languages are far more human-readable and are translated into binary code by a compiler or an interpreter. Some popular examples of programming languages include Python, Java, and C++.

Programming languages can be categorized into high-level and low-level languages:

  • Low-Level Languages: These are closer to machine code and provide more control over hardware. They are more complex and require a deeper understanding of the computer’s architecture. Examples include Assembly language and C.
  • High-Level Languages: These are closer to human languages and are easier to read and write. They are abstracted from the hardware, meaning you don’t need to worry about the underlying machine code. Languages like Python, Java, and JavaScript fall into this category.
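
As a rough illustration of this translation step, Python's standard dis module can display the lower-level instructions (bytecode) that the interpreter actually executes for a small function; the exact instructions vary between Python versions, so treat this as a sketch rather than a definitive listing:

import dis

def add(a, b):
    return a + b

# Show the interpreter's lower-level instructions for the tiny function above.
# The exact opcodes differ between Python versions (e.g. LOAD_FAST, BINARY_OP, RETURN_VALUE).
dis.dis(add)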

Types of Programs

Now that we understand what a computer program is and how we code instructions for computers to understand and follow, let's look at the different types of computer programs and their specific uses:

1. System Software, like rice or potatoes, is the staple of your meal, providing the essential base for all your computing needs. Examples include Operating Systems (Windows, macOS, Linux), Device Drivers, and Firmware.

2. Application Software is the main course, offering a variety of flavors and functions to satisfy your computing appetite. Some examples of application software are Productivity Software (Word, Excel, PowerPoint), Web Browsers, Media Players, Graphics Software, and Communication Software (email clients, messaging apps, and video conferencing tools).

3. Utility Software is the palate cleanser and digestive aids of the software world, keeping your meal experience smooth and enjoyable. Examples of utility software include Antivirus Software, Backup Software, Disk Cleanup tools, and File Compression Tools.

4. Development Software is the chef’s secret ingredients and experimental recipes used to create new and exciting digital dishes. These include Integrated Development Environments (IDEs), Version Control Systems, and Database Management Systems (DBMS).

5. Specialized Software is gourmet spices and exotic ingredients, adding unique flavors to cater to specific tastes, requirements, and dietary restrictions. These include Computer-Aided Design (CAD) Software, Enterprise Resource Planning (ERP) Software, and Customer Relationship Management (CRM) Software.

How Computers Solve Problems and Make Decisions

Imagine you’re faced with a complex puzzle that would take a human hours or even days to solve. Now, picture a computer tackling the same puzzle and arriving at a solution in just seconds. This lightning-fast problem-solving ability is at the heart of what makes computers so powerful and revolutionary.

But how exactly do these silicon-based machines outperform the human brain in specific tasks? The answer lies in their unique approach to problem-solving and decision-making. Unlike humans, who often rely on intuition and experience, computers use a combination of logic, algorithms, and data processing to navigate challenges and make choices.

The scale of problems computers can solve is genuinely mind-boggling. From predicting complex weather patterns across the globe to analyzing vast genomic datasets to uncover the secrets of life itself, computers are tackling challenges that were once thought impossible.

Let’s explore how these machines break down complex issues into manageable pieces, apply rigorous logical processes, and arrive at solutions quickly and accurately. From the basic building blocks of computer logic to the sophisticated decision-making algorithms powering artificial intelligence, we’ll uncover the secrets that enable computers to be the ultimate problem-solvers of the digital age.

As we delve into these concepts, we’ll also see how this fundamental problem-solving ability forms the backbone of artificial intelligence and machine learning, technologies reshaping our world in ways we’re only beginning to understand.

The Foundation: Boolean Logic and Logic Gates

Boolean logic, named after mathematician George Boole, is at the core of a computer’s problem-solving ability. This system of logic deals with true/false values, which in computer terms translates to 1 (true) and 0 (false). The basic operations in Boolean logic are:

  • AND: True only if both inputs are true
  • OR: True if at least one input is true
  • NOT: Inverts the input (true becomes false, and vice versa)
Image Credit: Python Beginners at Quora

Logic gates are tiny electronic circuits that implement logical operations in hardware. They are the building blocks of all computer processing. By combining these simple gates, computers can perform incredibly complex calculations and decision-making processes.
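
To see how a few simple gates add up to real arithmetic, here is a small Python sketch of a half adder, a classic circuit that adds two single bits using only the basic operations above; the gate functions are written out by hand purely for illustration:

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def XOR(a, b):
    # XOR built from the three basic gates: true when exactly one input is true.
    return AND(OR(a, b), NOT(AND(a, b)))

def half_adder(a, b):
    # Adds two single bits and returns (sum bit, carry bit).
    return XOR(a, b), AND(a, b)

print(half_adder(1, 1))  # (0, 1)  ->  1 + 1 = 10 in binary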

Making Decisions

Computers make decisions using if-then statements. These are like little questions the computer asks itself. For example,

If (user presses spacebar) then (make character jump)
Else (keep character running)

For more complex scenarios, computers use nested if-then statements and switch-case statements. These allow the computer to handle multiple conditions and make more sophisticated decisions. For instance, a self-driving car might use a series of nested if-then statements to navigate traffic:

If (traffic light is red) then (stop)
Else if (traffic light is green) then
If (pedestrian is crossing) then (wait)
Else if (road is clear) then (proceed)
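
Written in Python, that same nested decision might look something like the sketch below; the function and variable names are invented for illustration and are not from any real self-driving system:

def decide(traffic_light, pedestrian_crossing, road_clear):
    # A simplified decision tree for the self-driving car example above.
    if traffic_light == "red":
        return "stop"
    elif traffic_light == "green":
        if pedestrian_crossing:
            return "wait"
        elif road_clear:
            return "proceed"
    return "slow down"   # a cautious default for any other situation

print(decide("green", pedestrian_crossing=False, road_clear=True))  # proceed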

Algorithms: The Recipe for Problem-Solving

For even more complex scenarios, computers rely on algorithms. An algorithm is a step-by-step procedure for solving a specific problem or accomplishing a task. For example, an algorithm for making a peanut butter and jelly sandwich might look like this:

1. Take two slices of bread
2. Spread peanut butter on one slice
3. Spread jelly on the other slice
4. Put the slices together

Computers use various types of algorithms to solve different kinds of problems. Some examples include:

  1. Sorting Algorithms: Arrange data in a specific order (e.g., Bubble Sort, Quick Sort)
  2. Search Algorithms: Find specific data within a larger dataset (e.g., Binary Search, sketched just after this list)
  3. Optimization Algorithms: Find the best solution among many possibilities (e.g., used in route planning)
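
As a concrete example, here is a minimal Python sketch of the Binary Search mentioned above: it repeatedly halves a sorted list until it either finds the target or runs out of places to look:

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2            # look at the middle element
        if sorted_items[mid] == target:
            return mid                     # found: return its position
        elif sorted_items[mid] < target:
            low = mid + 1                  # target must be in the upper half
        else:
            high = mid - 1                 # target must be in the lower half
    return -1                              # not found

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # 4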

Advanced Problem-Solving: Machine Learning and AI

In the realm of artificial intelligence, computers use more sophisticated methods to solve problems and make decisions:

  • Machine Learning Algorithms: These allow computers to learn from data and improve their performance over time without being explicitly programmed. For example, a spam filter learns to identify spam emails by analyzing patterns in previously identified spam.
  • Deep Learning and Neural Networks: Inspired by the human brain, neural networks are used in deep learning to recognize complex patterns. They consist of layers of interconnected nodes (neurons). Each connection has a weight that is adjusted during training to minimize prediction error. Deep learning has been used in applications like image recognition, natural language processing, and autonomous driving.
  • Genetic Algorithms: Inspired by natural selection, these algorithms evolve solutions to complex problems over many iterations.

By combining these advanced techniques with their inherent speed and precision, computers can tackle problems of staggering complexity, from predicting climate changes to discovering new drugs.
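
To make “learning from data” slightly more tangible, here is a deliberately over-simplified Python sketch of the spam-filter idea above. It is nothing like a production filter; it merely counts how often each word appears in a handful of invented spam and non-spam messages and scores new messages accordingly:

from collections import Counter

# Invented training data: messages already labeled by a human.
spam_messages = ["win a free prize now", "free money win big"]
ham_messages  = ["are we still meeting tomorrow", "here are the meeting notes"]

spam_words = Counter(word for msg in spam_messages for word in msg.split())
ham_words  = Counter(word for msg in ham_messages for word in msg.split())

def spam_score(message):
    # Score each word by how much more often it appears in spam than in legitimate mail.
    score = 0
    for word in message.split():
        score += spam_words[word] - ham_words[word]
    return score

print(spam_score("win a free prize"))        # positive: looks like spam
print(spam_score("notes from the meeting"))  # not positive: looks legitimate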

Data and Inputs — The Role of Data in Computer Processing

Now that we know how computers think, let’s learn about what makes them work: data and inputs. In our increasingly digital world, we constantly interact with data, often without realizing it. Every time you use a smartphone app or browse a website, you provide inputs and consume data.

But how do computers transform our everyday actions into usable information? Inputs can range from keystrokes on a keyboard to voice commands picked up by a microphone or even visual information captured by cameras. These inputs are then transformed into data that computers can process, analyze, and use to make decisions or perform tasks.

Data is often called the new oil in the information age, powering everything from simple applications to complex artificial intelligence systems. From the tiniest sensor to the largest supercomputer, the data flow keeps our digital world turning.

In this section, we’ll explore how computers receive and interpret inputs and the different types of data they work with. We’ll also touch on big data, data analytics, and the importance of data storage and security in our increasingly connected world.

Types of Data Inputs

Computers and other digital devices receive information from the outside world through various input devices, much like a child eager to learn about the world. These inputs can be categorized into several types:

  • Manual Input Devices are the most commonly used input devices worldwide; they include keyboards, mice, and touchscreens.
  • Audio Input Devices are like the ears of the computer. They allow the computer to receive voice commands and convert them into data.
  • Visual Input Devices are the computer's eyes. Cameras capture images and videos and convert them into digital information.
  • Smart Sensor Devices: The advent of modern devices like smart thermostats and refrigerators enabled the creation of a new data category. These devices are usually called Internet of Things (IoT) devices.

From Inputs to Data: The Digital Alchemy

Once inputs are received, computers perform a kind of digital alchemy, transforming raw inputs into data that can be stored, processed, and analyzed. In today’s digital age, we’re generating unprecedented amounts of data — a phenomenon often referred to as “Big Data.” Let’s break down the different types of data:

  • Structured Data: Structured data is highly organized data stored in databases with a clear schema (which defines how data is organized). Examples include spreadsheets, customer information in a CRM system, or financial records.
  • Unstructured Data: Unstructured data doesn’t have a predefined model or organization and is a collection of many varied types of data. It includes email content, social media posts, or video files.
  • Semi-Structured Data: Semi-structured data does not fit into the formal structure of a database, but it retains some structure by employing tagging systems and other identifiable markers that separate different elements and enable search. Semi-structured data is sometimes described as data with a self-describing structure. Examples include JSON and XML files, which are often used in web applications.
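
As a small illustration of semi-structured data, the Python sketch below parses a fragment of JSON; the tags (keys) give it structure even though it does not live in a rigid database table, and the record itself is made up:

import json

# A made-up customer record in JSON: tagged fields, but no rigid database schema.
record = '{"name": "Asha", "email": "asha@example.com", "interests": ["baking", "robotics"]}'

data = json.loads(record)          # parse the text into a Python dictionary
print(data["name"])                # Asha
print(data["interests"][0])        # baking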

Big Data, on the other hand, can be structured, unstructured, semi-structured, or a combination, and is characterized by the three V's:

  • Volume (the sheer amount of data)
  • Velocity (the speed of data creation)
  • Variety (the different types of data)

Processing and Analyzing Data

As the volume of data grows, so does the need for sophisticated processing and analysis techniques:

  • Data Processing involves transforming raw data into a more useful format, such as converting temperature readings from Celsius to Fahrenheit or aggregating sales data by region (a small sketch follows this list).
  • Data Analysis goes a step further, using various techniques to extract insights from the data. It can involve statistical analysis, pattern recognition, or predictive modeling. Data Analytics, particularly when applied to Big Data, can uncover hidden patterns and correlations, providing valuable insights for businesses and researchers.
  • Machine Learning is an advanced form of data analysis where computers use algorithms to learn from data and improve their performance on a specific task over time.
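
For a flavor of what this looks like in practice, here is a small Python sketch performing the two transformations mentioned in the Data Processing bullet above, using invented readings and sales figures:

# Data processing: convert raw Celsius readings to Fahrenheit.
celsius_readings = [21.0, 18.5, 25.3]
fahrenheit = [round(c * 9 / 5 + 32, 1) for c in celsius_readings]
print(fahrenheit)                                  # [69.8, 65.3, 77.5]

# Aggregation: total made-up sales figures by region.
sales = [("North", 120), ("South", 80), ("North", 45)]
totals = {}
for region, amount in sales:
    totals[region] = totals.get(region, 0) + amount
print(totals)                                      # {'North': 165, 'South': 80}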

The Critical Triad: Data Storage, Management and Security

As our digital world expands, three interconnected elements form the backbone of our data-driven society:

  • Data Storage ensures data is accessible and preserved. Cloud storage and advanced compression techniques help address scalability challenges as data grows exponentially.
  • Data Management helps keep data organized, up-to-date, and retrievable. Data governance policies and management tools help maintain data quality and consistency.
  • Data Security helps protect sensitive information from breaches and ensures compliance with data protection laws. A multi-layered approach, including encryption, privacy, and access controls, safeguards against evolving cyber threats.

The interplay between these elements is crucial: effective storage and management enhance security, while robust security protects the integrity of storage and management systems. As data becomes increasingly central to our lives, understanding this triad becomes more critical.
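
As one tiny, simplified building block of data security, the Python sketch below stores a hash (a fingerprint) of a password rather than the password itself, so a leaked record does not reveal the original text; real systems go much further, adding salts, deliberately slow hashing algorithms, encryption, and access controls:

import hashlib

# Invented example password; only its fingerprint is stored, never the text itself.
password = "correct horse battery staple"
stored_digest = hashlib.sha256(password.encode()).hexdigest()

def check(attempt):
    # An attempt is accepted only if it produces the same fingerprint.
    return hashlib.sha256(attempt.encode()).hexdigest() == stored_digest

print(check("correct horse battery staple"))  # True
print(check("guess123"))                      # False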

Limitations of Current Computer Thinking Compared to Human Cognition

While computers have made incredible strides in processing power and problem-solving capabilities, it’s important to recognize that they still have significant limitations compared to human cognition.

Image Credit: ScienceAbc.com

Here are some key areas where computers fall short:

  • Lack of True Understanding: Computers process data and execute algorithms but don’t truly “understand” the information as humans do. For example, a computer can analyze text and identify patterns but doesn’t comprehend the meaning or context behind the words.
  • Limited Creativity: While computers can generate content based on patterns and data, they lack the ability to think creatively or generate original ideas. Human creativity involves intuition, emotions, and experiences, which are beyond the reach of current AI systems.
  • Emotional Intelligence: Computers are excellent at logical reasoning and data analysis but lack emotional intelligence. They can’t understand or respond to human emotions, a critical aspect of human interaction and decision-making.
  • Adaptability and Learning: While machine learning algorithms allow computers to improve over time, they still require vast amounts of data and training. On the other hand, humans can learn and adapt quickly from a few examples or experiences.
  • Common Sense and Intuition: Humans use common sense and intuition to make decisions in uncertain or ambiguous situations. Computers, however, rely on predefined rules and data, which can limit their ability to handle unexpected scenarios.
  • Ethical and Moral Reasoning: Computers cannot make ethical or moral judgments. While they can follow programmed rules, they don’t have a sense of right and wrong, which is essential for making complex decisions in real-world situations.

Conclusion: The Future of Computer Thinking

We’ve learned how computers have grown from simple machines to amazing problem-solvers. But they’re still getting better and smarter! Emerging technologies promise to push the boundaries even further:

  • Artificial General Intelligence (AGI) aims to create machines with human-like cognitive abilities.
  • Neuromorphic Computing seeks to mimic the human brain’s neural structure for more efficient processing.
  • Quantum Computing has the potential to solve complex problems exponentially faster than classical computers.
  • Edge Computing brings data processing closer to the source, enabling real-time decision-making for IoT devices.

These advancements are exciting, but it’s crucial to remember that computers remain tools created by humans. They excel at processing vast amounts of data and performing complex calculations but still lack the nuanced understanding and creativity that characterize human cognition.

The future of computing isn’t about replacing human intelligence but augmenting it. By understanding how computers think, we can better harness their capabilities and work towards a future where human and machine intelligence complement each other in solving global challenges.

As we stand on the brink of these technological advancements, the next chapter in computer thinking is yet to be written. It promises to be exciting, limited only by our imagination and ability to innovate. In the following article, “Big Data — The Lifeblood of AI,” we’ll delve into the critical role that massive amounts of data play in powering the AI revolution. Stay tuned!
