Intelligent Agents in A.I.

Tushar Dhyani
5 min read · Apr 20, 2020


Artificial Intelligence is all about creating a solution that serves a specific purpose. That solution is the intelligent agent. An intelligent agent is a program that can make decisions or perform a service based on its environment, inputs and experiences. These programs can be used to autonomously gather information on a regular, programmed schedule or when prompted by the user in real time. At times, you will also see these agents referred to as bots, which is short for robots.

An AI system is composed of an agent and its environment. The agent perceives its environment through sensors and acts on it through actuators. A representation of an AI agent is given in fig 1.1. An agent always works within its defined environment, but a single environment can contain multiple agents.

Figure 1.1: A simple agent overview

To understand the structure of these agents, we should be familiar with agent architecture. The architecture is the machinery the agent runs on, together with the agent program it executes. For example, think of Pacman and its coloured ghosts, which have different functions in the same environment. In simple terms, an agent is something that maps the percept sequence to an action: f: P* → A.
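The mapping f: P* → A can be sketched in a few lines of Python. This is a minimal, hypothetical skeleton (the names `Agent`, `step` and `echo_agent` are illustrative, not from any library): the agent records its percept history and delegates action selection to an agent program.

```python
from typing import Callable, List

# Percepts and actions are plain strings in this toy sketch.
Percept = str
Action = str

class Agent:
    """A generic agent: maps the percept sequence seen so far to an action."""

    def __init__(self, program: Callable[[List[Percept]], Action]):
        self.percepts: List[Percept] = []  # the history P*
        self.program = program             # the mapping f: P* -> A

    def step(self, percept: Percept) -> Action:
        self.percepts.append(percept)
        return self.program(self.percepts)

# A trivial agent program that reacts only to the latest percept.
echo_agent = Agent(lambda history: f"react-to:{history[-1]}")
```

Each agent class below differs only in what its `program` consults: the current percept, an internal model, a goal, a utility function, or learned knowledge.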

AI agents can be categorised into the following classes:

  1. Simple Reflex Agent
  2. Model-based Reflex Agent
  3. Goal-based Agent
  4. Utility-based Agent
  5. Learning Agent

1. Simple Reflex Agent

As the name states, simple reflex agents are the most basic agents: they ignore history and take actions only on the basis of the current percept. Here, history means the collection of all percepts received so far. These agents are based on condition-action rules, which map a state to a predefined condition and act accordingly. For simple reflex agents operating in partially observable environments, infinite loops often become unavoidable.
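A condition-action rule set is just a lookup table from the current percept to an action. As a sketch, consider the classic two-square vacuum world (the rule table below is illustrative):

```python
# Condition-action rules for a two-square vacuum world.
# The percept is (location, status); the agent ignores all history.
RULES = {
    ("A", "Dirty"): "Suck",
    ("B", "Dirty"): "Suck",
    ("A", "Clean"): "Right",
    ("B", "Clean"): "Left",
}

def simple_reflex_agent(percept):
    """Look up the current percept in the rule table; no internal state."""
    return RULES[percept]
```

Note how the weaknesses listed below follow directly from this design: with no memory, an agent that always sees ("A", "Clean") and ("B", "Clean") will shuttle left and right forever.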

Problems with simple reflex agents are:

  1. They operate on a very limited amount of knowledge.
  2. They keep no track of history or of non-perceptual parts of the state.
  3. The rule set can become large and slow to generate.
  4. If changes are required, the rule set must be updated.

Figure 1.2: Simple Reflex Agent

2. Model-based Reflex Agents

This type of agent works by finding a rule whose condition matches the current situation. As a result, it can handle the partial-observability problem that traps simple reflex agents. The agent keeps track of an internal state, which is adjusted by each percept and its history. This current state, stored inside the agent, describes the part of the environment that cannot currently be observed.
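The same vacuum world illustrates the idea of internal state. In this hypothetical sketch, the agent can only observe its current square, but its model remembers what it believes about the other one, so it can stop once it believes everything is clean:

```python
class ModelBasedVacuumAgent:
    """Observes only the current square, but models the status of both,
    so it can stop instead of looping forever."""

    def __init__(self):
        self.model = {"A": None, "B": None}  # internal state: believed status

    def step(self, percept):
        location, status = percept
        self.model[location] = status        # update state from the percept
        if status == "Dirty":
            self.model[location] = "Clean"   # model the effect of Suck
            return "Suck"
        if all(v == "Clean" for v in self.model.values()):
            return "NoOp"                    # believes the whole world is clean
        return "Right" if location == "A" else "Left"
```

The `model` dictionary is exactly the "part of the environment that cannot be observed": the agent never perceives square B while standing on A, yet its decision depends on what it believes about B.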

Figure 1.3: Model-based reflex agents

3. Goal-based Agents

If an agent takes decisions based on how far it is from a required goal, it is called a goal-based agent. In such agents, every action is intended to reduce the distance to the goal, which lets the agent consider multiple possibilities and select the one that finally lands it at the goal. The knowledge on which these agents act can be explicitly modified, which makes them more and more flexible. It also means that they require searching and planning.
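A minimal goal-based agent can be sketched on a grid: among the available actions, pick the one whose predicted result is closest to the goal. The grid, goal position, and Manhattan-distance measure below are illustrative assumptions (a real goal-based agent would use proper search or planning rather than this greedy choice):

```python
GOAL = (3, 3)
ACTIONS = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

def manhattan(a, b):
    """Distance from the goal: the quantity each action tries to reduce."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def goal_based_agent(state):
    """Greedily pick the action whose predicted result is closest to GOAL."""
    if state == GOAL:
        return "Stop"

    def result(action):
        dx, dy = ACTIONS[action]
        return (state[0] + dx, state[1] + dy)

    return min(ACTIONS, key=lambda a: manhattan(result(a), GOAL))
```

The key difference from a reflex agent is the `result` function: the agent predicts the outcome of each action before choosing, which is the seed of searching and planning.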

Figure 1.4: Goal-based agents

4. Utility-based Agents

Agents that solve a problem with a utility function as the building block are called utility-based agents. When there is more than one possibility, utility-based agents are useful for deciding the best among them: the actions they choose are based on the utility of each resulting state. When reaching the goal alone is not enough, the agent looks for a faster and safer route to it, reducing cost and time. The utility function maps a state onto a real number which describes the associated degree of happiness of the agent.
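The phrase "maps a state onto a real number" translates directly into code. In this hypothetical sketch, a state is a (distance, cost) pair and the weights in `utility` are arbitrary illustrative choices; the agent simply picks the action with the highest-utility predicted outcome:

```python
def utility(state):
    """Map a state onto a real number. Here a state is (distance, cost)
    and lower distance and cost are both preferred (illustrative weights)."""
    distance, cost = state
    return -(distance + 0.5 * cost)

def utility_based_agent(options):
    """options: dict mapping each action to its predicted (distance, cost)."""
    return max(options, key=lambda action: utility(options[action]))
```

For example, given `{"highway": (10, 4), "shortcut": (6, 9)}`, both routes reach the goal, but the utility function trades distance against cost and prefers the shortcut.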

Figure 1.5: Utility based agents

5. Learning Agent

Different from all the agents mentioned above, a learning agent is one that takes its past experiences into account and learns from them. Because of this capability, it can be adapted to newer goals by generating new experiences. It starts to act from basic knowledge and builds upon it through automated learning.

It has mainly the following component units:

  1. Learning element: the component responsible for making improvements by learning from the environment.
  2. Critic: the feedback component that describes how well the agent's learning is progressing with respect to a fixed performance standard.
  3. Performance element: the component responsible for selecting external actions.
  4. Problem generator: the component responsible for suggesting actions that will lead the agent to new experiences.

Figure 1.6: Learning Agent
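The four components above can be wired together in a toy skeleton. Everything here is an illustrative assumption: the learning element is a simple action-value table updated from the critic's feedback, and the problem generator occasionally suggests exploring at random.

```python
import random

class LearningAgent:
    """Toy skeleton of the four components of a learning agent."""

    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}  # learned knowledge

    def performance_element(self):
        # Select the external action currently believed to be best.
        return max(self.actions, key=lambda a: self.values[a])

    def critic(self, reward):
        # Feedback on how well the agent did against the fixed standard;
        # here the environment's reward is passed through unchanged.
        return reward

    def learning_element(self, action, feedback, lr=0.1):
        # Improve the action-value estimate using the critic's feedback.
        self.values[action] += lr * (feedback - self.values[action])

    def problem_generator(self, explore=0.2):
        # Occasionally suggest a random action to gain new experience.
        return random.random() < explore
```

Repeated cycles of acting, receiving feedback from the critic, and updating via the learning element are what let the agent improve beyond its initial basic knowledge.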

Examples of Intelligent agents around us

Everyday assistants such as Google Assistant, Alexa and Siri are perfect examples of intelligent agents. These agents use sensors to perceive a request and, based on the incoming data streams, collect data from databases without the user's help in order to perceive the world around them and generate information such as the weather and the time.
