Agents in AI: An Overview

Swetha Murali
4 min read · Mar 29, 2023


Agents in Artificial Intelligence are entities that perceive their environment and act upon it accordingly. An AI agent's task environment is described by four components: Performance measure, Environment, Actuators and Sensors (PEAS). Together, these components define what it means for the agent to work successfully.

The agent uses its sensors to perceive the environment and its actuators to carry out actions that change the environment; the performance measure then evaluates how well it is doing.

An agent is said to be rational if it selects the action that is expected to maximize its performance measure.
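To make the percept-action loop concrete, here is a minimal Python sketch of a toy two-cell vacuum world. The `VacuumWorld` class, its actions and the `run` loop are illustrative assumptions made for this example, not a standard library.

```python
class VacuumWorld:
    """Toy task environment. PEAS: Performance = number of clean cells,
    Environment = two cells A and B, Actuators = Suck/Left/Right,
    Sensors = (current location, dirt status of that cell)."""

    def __init__(self):
        self.status = {"A": "Dirty", "B": "Dirty"}
        self.location = "A"

    def percept(self):                # what the sensors report
        return self.location, self.status[self.location]

    def execute(self, action):        # actuators change the environment
        if action == "Suck":
            self.status[self.location] = "Clean"
        elif action == "Right":
            self.location = "B"
        elif action == "Left":
            self.location = "A"

    def performance(self):            # performance measure
        return sum(s == "Clean" for s in self.status.values())


def run(env, agent_program, steps=4):
    for _ in range(steps):
        action = agent_program(env.percept())   # sense, then decide
        env.execute(action)                      # act on the environment
    return env.performance()                     # evaluate afterwards
```

A rational agent program for this toy world is simply one that ends up with both cells clean, since that is what the performance measure rewards.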

There are several different types of agents:

Simple reflex

This is the simplest type of agent. It works on a condition-action policy and keeps no record of past percepts. The term "percepts" refers to the inputs the agent receives from the environment through its sensors. A simple reflex agent selects which action to perform based only on the current percept.

The schematic diagram of this agent is depicted below,

Simple reflex agent

It uses a simple if-then strategy to perform actions. These agents work well only in a fully observable environment.
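As a sketch, a simple reflex agent for the toy vacuum world above can be written as a single function of the current percept; the rules and action names are illustrative.

```python
def simple_reflex_agent(percept):
    location, status = percept
    # Condition-action (if-then) rules over the current percept only.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    return "Left"
```

Because the rules consult only the current percept, the agent cannot tell whether the other cell is already clean, which is why it relies on the environment being fully observable (or, as here, harmless to revisit).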

Model based

To work well in partially observable environments, the agent needs to maintain a record of previous percepts. Beyond tracking percepts, it also needs to know what happens when it takes a particular action, i.e. how its actions influence the environment. It maintains a theory or model of "how the world responds", and this model is used to select an action.

The structure of a model based agent is shown below,

Model based agent

This agent maintains an internal state in which it tracks earlier percepts. The current percept is combined with the internal state to obtain an updated estimate of the current state. An update-state function performs this step on every cycle; to do so, the agent must know both how the world evolves on its own and how its own actions affect the world.
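A rough sketch of a model-based agent for the same toy world, now assuming the sensor reports only the current cell, might look like the following; the class name and the shape of the internal state are assumptions made for illustration.

```python
class ModelBasedVacuumAgent:
    def __init__(self):
        # Internal state: what the agent believes about every cell so far.
        self.model = {"A": "Unknown", "B": "Unknown"}

    def __call__(self, percept):
        location, status = percept
        # Update-state: fold the current percept into the internal state.
        self.model[location] = status
        # Decide using the model, not just the current percept.
        if status == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.model.values()):
            return "NoOp"          # the model says the whole world is clean
        return "Right" if location == "A" else "Left"
```

A fuller model would also encode how the agent's own actions change the world (for example, that `Suck` leaves the current cell clean), so the state can be updated even before the next percept arrives.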

Goal based

Goal-based agents differ from the agents discussed so far. In addition to knowledge of the current percept, they need information about the goal they are trying to achieve, and they choose actions that move them toward that goal.

Goal based agent

Goal-based agents are dynamic and flexible. If the goal changes, the agent's behaviour can be adjusted easily, whereas in a simple reflex agent the whole set of condition-action rules would have to be rewritten.
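The sketch below illustrates this flexibility with a small grid world: the agent plans a path to whatever goal it is given, so changing the goal means changing an argument, not rewriting rules. The grid, the moves and the `plan` function are all illustrative assumptions.

```python
from collections import deque

MOVES = {"Up": (0, -1), "Down": (0, 1), "Left": (-1, 0), "Right": (1, 0)}

def plan(start, goal, width=4, height=4):
    """Breadth-first search for a sequence of actions from start to goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        (x, y), actions = frontier.popleft()
        if (x, y) == goal:
            return actions                    # action sequence reaching the goal
        for action, (dx, dy) in MOVES.items():
            nxt = (x + dx, y + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return []                                 # no path to the goal

print(plan(start=(0, 0), goal=(3, 2)))        # one shortest path of five moves
```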

Utility based

These agents maintain a measure called utility, which expresses how happy or satisfied the agent is with a given state. Internally, the agent uses a utility function to compute this score.

At every step, the internal utility score is calculated. As long as the internal utility function agrees with the external performance measure, the agent can choose the action with the highest expected utility.

Utility based agent

The expected utility of an action is computed by averaging the utilities of all its possible outcome states, each weighted by the probability of that outcome occurring.
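In code, the computation might look like the sketch below; the outcome probabilities and utility values are made-up numbers used purely for illustration.

```python
def expected_utility(action, outcomes, utility):
    """outcomes[action] is a list of (probability, resulting_state) pairs."""
    return sum(p * utility(state) for p, state in outcomes[action])

def choose_action(actions, outcomes, utility):
    # Pick the action whose outcomes have the highest probability-weighted score.
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy example: driving fast saves time but risks a crash.
outcomes = {
    "fast": [(0.8, "arrived_early"), (0.2, "crashed")],
    "slow": [(1.0, "arrived_late")],
}
utility = {"arrived_early": 10, "arrived_late": 6, "crashed": -100}.get

print(choose_action(["fast", "slow"], outcomes, utility))   # -> slow
```

Here the expected utility of "fast" is 0.8 × 10 + 0.2 × (−100) = −12, so the agent prefers the safer "slow" action with expected utility 6.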

Learning

This agent is made up of several elements:

i) Performance element ii) Critic iii) Learning element iv) Problem generator

The performance element is essentially an agent in itself: it takes percepts as input and decides on actions.

The critic evaluates how well the agent is doing against a fixed performance standard. The learning element uses this feedback from the critic to decide how to improve the performance element in the future.

The problem generator suggests new, exploratory actions that lead to informative experiences, rather than repeatedly trying the actions currently considered best.
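One way these pieces could fit together is sketched below, using a toy setting where the critic simply returns a reward for the chosen action; every name and the exploration scheme are assumptions for illustration, not a prescribed design.

```python
import random

class LearningAgent:
    def __init__(self, actions, explore_prob=0.1):
        self.values = {a: 0.0 for a in actions}   # learned value estimates
        self.counts = {a: 0 for a in actions}
        self.explore_prob = explore_prob

    def performance_element(self):
        # Choose the action currently believed to be best.
        return max(self.values, key=self.values.get)

    def problem_generator(self):
        # Occasionally suggest a different action to gain informative experience.
        return random.choice(list(self.values))

    def learning_element(self, action, feedback):
        # The critic's feedback nudges the running estimate for that action.
        self.counts[action] += 1
        self.values[action] += (feedback - self.values[action]) / self.counts[action]

    def step(self, critic):
        explore = random.random() < self.explore_prob
        action = self.problem_generator() if explore else self.performance_element()
        self.learning_element(action, critic(action))   # critic scores the outcome
        return action
```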

The structure can be seen below,

Learning agent

Enjoy AI!
