Intelligent Agents

Maleesha Lionel
7 min read · Dec 7, 2022


An intelligent agent is a program that can make decisions or perform a service based on its environment, user input, and experience. Such programs can gather information autonomously on a regular, programmed schedule or when prompted by the user in real time. An intelligent agent may also be referred to as a bot, which is short for robot.

User input is collected using sensors, like microphones or cameras, and agent output is delivered through actuators, like speakers or screens. The practice of having information brought to a user by an agent is called push technology.

Alexa and Siri are examples of intelligent agents: they use sensors to perceive a request made by the user and automatically collect data from the internet without the user’s help. They can gather information about their perceived environment, such as the weather and the time.

Infogate is another example of an intelligent agent; it alerts users to news on topics of interest they have specified.

Autonomous vehicles can also be considered intelligent agents, as they use sensors, GPS, and cameras to make reactive decisions based on the environment and maneuver through traffic.

Agents can be classified based on various features:

Simple reflex agent

The simple reflex agent is the most basic of the intelligent agents. It performs actions based on the current situation. When something happens in its environment, the agent quickly scans its knowledge base for how to respond to the situation at hand based on pre-determined rules.

It is like a home thermostat recognizing that if the temperature in the house rises to 75 degrees, the cooling should kick on. It doesn’t need to know what happened with the temperature yesterday or what might happen tomorrow. Instead, it operates on the idea that if _____ happens, _____ is the response.

Simple reflex agents are just that: simple. They cannot compute complex equations or solve complicated problems. They work only in environments that are fully observable in the current percept, ignoring any percept history.
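To make the idea concrete, here is a minimal sketch of a simple reflex agent in Python. The thermostat rule, the 75-degree threshold, and the percept and action names are illustrative assumptions, not a real thermostat’s API:

```python
# A minimal simple reflex agent: it maps the current percept directly
# to an action via a condition-action rule, with no memory of the past.

def thermostat_agent(percept: float) -> str:
    """percept: the current room temperature in degrees Fahrenheit."""
    if percept >= 75:       # rule: if the room gets warm, start cooling
        return "turn_on_cooling"
    return "do_nothing"     # no rule matched; stay idle

# The agent reacts to each reading in isolation; history is ignored.
for temperature in [68, 72, 75, 80, 71]:
    print(temperature, "->", thermostat_agent(temperature))
```

Note that the same percept always produces the same action, which is exactly why such agents fail when the right response depends on anything beyond the current percept.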

Goal-based agent

Knowledge of the environment’s current state is not always sufficient for an agent to decide what to do. The agent also needs to know its goal, which describes desirable situations. Goal-based agents expand the capabilities of the model-based agent (an agent that maintains an internal model of the world) by adding this “goal” information.

They choose actions so as to achieve the goal. These agents may have to consider a long sequence of possible actions before deciding whether the goal can be achieved. Such consideration of different scenarios is called searching and planning, and it is what makes an agent proactive.
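As a sketch of the searching idea, the toy agent below plans a whole sequence of state transitions to a goal before acting, using breadth-first search. The graph of states and the goal are invented for illustration:

```python
from collections import deque

# A goal-based agent considers sequences of actions before acting:
# here it plans a route through a tiny graph of states to a goal.
GRAPH = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["D", "E"],
    "D": ["E"],
    "E": [],
}

def plan(start, goal):
    """Breadth-first search: return a sequence of states reaching the goal."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:               # goal test: desirable situation
            return path
        for nxt in GRAPH[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                            # goal unreachable from start

print(plan("A", "E"))  # ['A', 'C', 'E']
```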

Utility-based agent

These agents are similar to goal-based agents but add an extra component: a utility measurement, which provides a measure of success in a given state. A utility-based agent acts based not only on goals but also on the best way to achieve them. It is useful when there are multiple possible alternatives and the agent has to choose the best action. The utility function maps each state to a real number, which indicates how efficiently each action achieves the agent’s goals.
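Here is a minimal sketch of that choice, assuming made-up actions, outcome states, and utility weights: the agent scores the state each action leads to and picks the action with the highest utility.

```python
# A utility-based agent: each action leads to a candidate next state,
# and a utility function maps states to real numbers. The agent picks
# the action whose outcome has the highest utility. All numbers here
# are illustrative assumptions.

actions = {
    "highway": {"travel_time": 20, "toll": 5},
    "back_roads": {"travel_time": 35, "toll": 0},
}

def utility(state):
    # Trade off time against cost; the weights are arbitrary.
    return -(state["travel_time"] + 2 * state["toll"])

best_action = max(actions, key=lambda a: utility(actions[a]))
print(best_action, utility(actions[best_action]))  # highway -30
```

Both routes reach the goal; the utility function is what lets the agent prefer one way of reaching it over another.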

Learning-based agent

A learning agent is an agent that can learn from its past experiences. It starts out acting with basic knowledge and then adapts automatically through learning. A learning agent has four main conceptual components:

Learning element: responsible for making improvements by learning from the environment.

Critic: the learning element takes feedback from the critic, which describes how well the agent is doing with respect to a fixed performance standard.

Performance element: responsible for selecting external actions.

Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

Hence, learning agents are able to learn, analyze their performance, and look for new ways to improve it.
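The sketch below maps the four components onto a tiny agent that learns action values from reward feedback. The hidden reward function, the learning rate, and the exploration probability are all assumptions made up for illustration:

```python
import random

# Toy learning agent: two actions with unknown average rewards.
# - performance element: picks an action from the current estimates
# - critic: compares the observed reward with the running estimate
# - learning element: updates the estimate from the critic's feedback
# - problem generator: occasionally tries a random action to explore

estimates = {"left": 0.0, "right": 0.0}
LEARNING_RATE = 0.1
EXPLORE_PROB = 0.2

def true_reward(action):                        # hidden environment (assumed)
    return random.gauss(1.0 if action == "right" else 0.3, 0.1)

for step in range(500):
    if random.random() < EXPLORE_PROB:          # problem generator
        action = random.choice(list(estimates))
    else:                                       # performance element
        action = max(estimates, key=estimates.get)
    reward = true_reward(action)
    error = reward - estimates[action]          # critic's feedback
    estimates[action] += LEARNING_RATE * error  # learning element

print(estimates)  # 'right' should end up with the higher estimate
```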

Rational Agent

An agent that has complete knowledge, has clear preferences, models uncertainty, and behaves so as to maximize its performance measure over all feasible actions is said to act rationally. A rational agent always does the right thing.

Autonomous Agent

An autonomous agent can decide on its own which actions to take at the current instant to maximize progress toward its goals.

PEAS

PEAS is a representation system for AI agents; it is short for Performance measure, Environment, Actuators, and Sensors. To design an agent, we must know its task environment, and a PEAS description is how we specify it. Identifying the PEAS of a problem helps in writing suitable algorithms for it.

Sensors: Sensors let an agent perceive its environment by giving it a complete set of inputs. An agent’s actions depend on its percept history and the current input. Examples of sensors include cameras, GPS receivers, odometers, and various other sensing tools.

Actuators: Actuators let an agent act in the environment. They include display boards, object-picking arms, track-changing mechanisms, etc. Actions performed by an agent can also change the environment.

Environment: The surroundings in which the agent works at a particular instant are called its environment. An environment can be static or dynamic, depending on whether it changes while the agent is deliberating. A small change in the environment can also change the sensors and actions the agent requires. Russell and Norvig’s classification of environments is covered in the “Features of Environment” section below.

Performance measure: The performance measure is the criterion that defines the agent’s success or accuracy in achieving its set goals.

Example:

Agent: Tomato classification system.

Performance measure: Fraction of tomatoes correctly classified as ripe or unripe.

Sensors: Weighing sensors, cameras for visual input, color sensing, etc.

Actuators: A track-changing mechanism for segregation, display boards, or a Y-belt for quick classification into ripe and unripe tomatoes.

Environment: A moving walkway along which the tomatoes are passed for segregation. It should have a good source of light for better camera input.
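One way to make a PEAS description concrete is to write it down as structured data. The sketch below encodes the tomato-classifier example; the class and field names are chosen for illustration, not part of any standard library:

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A PEAS task-environment description for an agent."""
    performance: list    # how success is measured
    environment: list    # where the agent operates
    actuators: list      # how the agent acts on the world
    sensors: list        # how the agent perceives the world

tomato_classifier = PEAS(
    performance=["fraction of tomatoes correctly classified"],
    environment=["moving walkway", "controlled lighting"],
    actuators=["track-changing mechanism", "display board", "Y-belt"],
    sensors=["camera", "color sensor", "weighing sensor"],
)
print(tomato_classifier)
```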

Features of Environment

As per Russell and Norvig, an environment can have various features from the point of view of an agent:

Fully observable vs Partially Observable

Static vs Dynamic

Discrete vs Continuous

Deterministic vs Stochastic

Single-agent vs Multi-agent

Episodic vs Sequential

Known vs Unknown

Accessible vs Inaccessible

1. Fully observable vs Partially Observable:

If an agent’s sensors can access the complete state of the environment at each point in time, the environment is fully observable; otherwise it is partially observable. A fully observable environment is easy to deal with, as there is no need to maintain an internal state to keep track of the history of the world. If an agent has no sensors at all, the environment is called unobservable.

2. Deterministic vs Stochastic:

If an agent’s current state and selected action can completely determine the next state of the environment, then such an environment is called a deterministic environment. A stochastic environment is random in nature and cannot be determined completely by an agent. In a deterministic, fully observable environment, the agent does not need to worry about uncertainty.
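The contrast is easy to see in code: a deterministic transition function always returns the same next state for a given state and action, while a stochastic one samples from a distribution. Both toy functions below are assumptions for illustration:

```python
import random

# Deterministic: the current state and action fully determine the outcome.
def deterministic_step(state, action):
    return state + action

# Stochastic: the same state and action can yield different outcomes.
def stochastic_step(state, action):
    return state + action + random.choice([-1, 0, 1])  # random noise

print(deterministic_step(5, 2))  # always 7
print(stochastic_step(5, 2))     # 6, 7, or 8
```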

3. Episodic vs Sequential:

In an episodic environment, actions are a series of one-shot episodes, and only the current percept is required to act. In a sequential environment, however, an agent needs memory of past actions to determine the next best action.

4. Single-agent vs Multi-agent

If only one agent is involved in an environment and operates by itself, it is called a single-agent environment. If multiple agents operate in the environment, it is called a multi-agent environment. Agent design problems in a multi-agent environment are different from those in a single-agent environment.

5. Static vs Dynamic:

If the environment can change itself while an agent is deliberating then such an environment is called a dynamic environment else it is called a static environment. Static environments are easy to deal with because an agent does not need to continue looking at the world while deciding on an action. However, for a dynamic environment, agents need to keep looking at the world with each action. Taxi driving is an example of a dynamic environment whereas Crossword puzzles are an example of a static environment.

6. Discrete vs Continuous:

If an environment has a finite number of percepts and actions that can be performed within it, it is a discrete environment; otherwise it is continuous. A chess game is a discrete environment, as there is a finite number of possible moves. A self-driving car operates in a continuous environment.

7. Known vs Unknown

Known and unknown are not strictly features of an environment; they describe the agent’s state of knowledge about it. In a known environment, the agent knows the results of all its actions. In an unknown environment, the agent must learn how the environment works in order to act. It is quite possible for a known environment to be partially observable and for an unknown environment to be fully observable.

8. Accessible vs Inaccessible

If an agent can obtain complete and accurate information about the environment’s state, the environment is called accessible; otherwise it is inaccessible. An empty room whose state is fully defined by its temperature is an example of an accessible environment; information about an arbitrary event somewhere on Earth is an example of an inaccessible environment.
