AI: The Right Structure of Agents for Your Business

Knoldus Inc.
Knoldus - Technical Insights
6 min read · Oct 4, 2020

In the previous post, we discussed the environments in which agents operate and the characteristics of those environments. In this post, let us talk about the types of agents and the data-set challenges they face.

All agents have the same skeletal structure: they receive percepts as inputs from their sensors and perform actions through their actuators. An agent can simply act on a percept as a reflex. For example, if you throw a ball at me and I try to catch it (or duck, given that I am bad at baseball), that is a quick reaction to the percept. On the other hand, if you throw balls at me and ask me to arrange them by color, or to count the number of different colors you are throwing, then I have to maintain state to keep the counts correct. That involves some state but is still manageable. Now, if you want to trouble me further and tell me to jump twice when you throw a red ball and do a burpee when you throw a green one, apart from catching and counting, then you have got me for sure :)

This would require me to maintain a mental table of what needs to be done on each percept, and such a mapping is called a condition-action rule. Now, if all these percepts were to be indexed, they would become a significant data set.

Consider the automated taxi: the visual input from a single camera (eight cameras is typical) comes in at the rate of roughly 70 megabytes per second (30 frames per second, 1080 × 720 pixels with 24 bits of color information). This gives a lookup table with over 10^600,000,000,000 entries for an hour's driving. Even the lookup table for chess (a tiny, well-behaved fragment of the real world) has, it turns out, at least 10^150 entries.
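As a quick sanity check on the 70 megabytes per second figure, a back-of-the-envelope calculation reproduces it. This is a minimal sketch assuming uncompressed frames; the object and value names are our own.

```scala
// Back-of-the-envelope check of the camera data rate quoted above.
object DataRate extends App {
  val framesPerSecond = 30
  val pixelsPerFrame  = 1080L * 720L   // frame resolution
  val bytesPerPixel   = 24 / 8         // 24 bits of color = 3 bytes
  val bytesPerSecond  = framesPerSecond * pixelsPerFrame * bytesPerPixel
  println(f"${bytesPerSecond / 1e6}%.1f MB/s") // prints "70.0 MB/s"
}
```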

This can become a lot of information.

The key challenge for AI is to find out how to write programs that, to the extent possible, produce rational behavior from a smallish program rather than from a vast table.

This brings us to the four documented types of agent programs:

Simple Reflex Agents

As the name suggests, these agents behave on reflex. Someone passes by the sensor of a motion-detection light, and the light goes on. It does not hold any state or any deeper intelligence. Super cool! This agent has a list of condition rules. As soon as it gets a percept (1), it acts as per the condition rules (2) and then responds back to the environment through the actuators (3).
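Here is a minimal sketch of that loop in Scala. The MotionLight agent and its percept and action types are hypothetical, invented purely for illustration; the only condition-action rule is "motion detected, switch the light on".

```scala
// A simple reflex agent: no state, just condition-action rules.
sealed trait Percept
case object MotionDetected extends Percept
case object NoMotion       extends Percept

sealed trait Action
case object LightOn  extends Action
case object LightOff extends Action

object MotionLight {
  // (1) receive a percept, (2) match it against the condition rules,
  // (3) return the action for the actuators. Nothing is remembered.
  def act(percept: Percept): Action = percept match {
    case MotionDetected => LightOn
    case NoMotion       => LightOff
  }
}
```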

In this scenario the agent does not claim to know more about the environment than what is presented as an event to the sensor.

Model Based Reflex Agents

These agents handle scenarios where observability is incomplete. The agent still gets events from the sensor, but in order to determine its next action, it must consult its internal state.

The internal state of the agent is a combination of two things: a transition model and a sensor model.

The transition model covers two questions: (i) how does the external environment change on the basis of the agent's actions, and (ii) how does the external environment change over time? For example, a motion-detector camera knows that when it asks its head to move 30 degrees to the right, it will see a different view of the environment in which it expects certain objects. Likewise, given the change of seasons, it would expect snow in winter and leaves in fall.

The sensor model describes how the environment is represented in the agent's percepts. For example, if this motion-detector camera has a big tree on its right, it knows that most of the motion it needs to track will be concentrated on the left side of the area.

Data from the transition model and the sensor model, combined with the current event received from the sensor, allows the agent to build a model of the current state.

The new percept coming from the environment (1) is matched against the internal state, i.e. the transition model plus the sensor model (2). This produces a new current state (3). The current state is then checked against the condition-action rules to determine the next action (4).
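The sketch below walks through that four-step loop for the motion-detector camera. The state representation and the update rule are simplified assumptions, not a real tracking implementation.

```scala
// A model-based reflex agent: the percept is folded into an internal
// state before the condition-action rules run. Types are illustrative.
final case class CameraState(headingDegrees: Int, trackedObjects: Set[String])

final case class Frame(movingObjects: Set[String]) // (1) the new percept

sealed trait Action
final case class Rotate(degrees: Int) extends Action
case object Hold extends Action

object ModelBasedCamera {
  // (2)+(3): transition model + sensor model fold the new percept
  // into an updated picture of the world.
  def updateState(state: CameraState, percept: Frame): CameraState =
    state.copy(trackedObjects = state.trackedObjects ++ percept.movingObjects)

  // (4): the condition-action rules run against the state, not the raw percept.
  def act(state: CameraState): Action =
    if (state.trackedObjects.isEmpty) Rotate(30) else Hold
}
```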

Goal Based Agents

Many times, just understanding the current state from the new percept and the internal state is not enough. The agent needs to determine whether it is getting closer to its goal. The goal for our motion-detector camera could be to find a red car and send an alert as soon as it finds one. This is very different from a plain condition-action scenario: the condition-action rules might want the camera to turn left on the basis of the internal state, but the goal condition might want it to keep facing right for an extra 10 seconds if it has found a hint of a red car.

Notice that decision making of this kind is fundamentally different from the condition–action rules described earlier, in that it involves consideration of the future — both “What will happen if I do such-and-such?” and “Will that make me happy?” In the reflex agent designs, this information is not explicitly represented, because the built-in rules map directly from percepts to actions.
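A hedged sketch of that forward-looking loop, reusing the red-car example: the state fields, action names, and one-step prediction rule are all assumptions made up for illustration.

```scala
// A goal-based agent projects each action forward ("what will happen
// if I do such-and-such?") and keeps the one that serves the goal.
final case class CamState(facing: String, redCarHint: Boolean)

object GoalBasedCamera {
  // Goal test: stay on the red car once we have a hint of it.
  def goalServed(s: CamState): Boolean = s.redCarHint && s.facing == "right"

  // One-step transition model: predicted state after an action.
  def predict(s: CamState, action: String): CamState = action match {
    case "holdRight" => s.copy(facing = "right")
    case "turnLeft"  => s.copy(facing = "left")
  }

  // Prefer an action whose predicted outcome serves the goal;
  // otherwise fall back to the default reflex choice.
  def act(s: CamState): String =
    List("holdRight", "turnLeft")
      .find(a => goalServed(predict(s, a)))
      .getOrElse("turnLeft")
}
```

With a red-car hint, act keeps the camera facing right even though the reflex default would turn it left, which is exactly the difference described above.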

Goal-based agents may be less efficient and costlier, but they are more flexible.

Utility Based Agents

Goal-based agents do not always produce high-quality behavior. The goal could be to count the red cars passing by, but to achieve it the agent might sweep constantly left to right and up and down; that would be costly, consume more power, and probably shorten the life of the agent.

Utility takes the performance characteristics into account and refers to the quality of being useful.

There might be several ways for the camera to detect a red car, but it tries to find the most performant, least costly way to do so. That is the utility angle to the equation: the agent must try to maximize its utility. Choosing the utility-maximizing course of action is itself a difficult task. A utility-based agent has to model and keep track of its environment, tasks that have involved a great deal of research on perception, representation, reasoning, and learning.

A utility-based agent uses a model of the world, along with a utility function that measures its preferences among states of the world. It then chooses the action that leads to the best expected utility, where expected utility is computed by averaging over all possible outcome states, weighted by the probability of each outcome.
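That computation is easy to make concrete. In the sketch below, the outcome probabilities, the utility numbers, and the two actions are entirely made up; the point is only the shape of the expected-utility calculation.

```scala
// Expected utility: average the utility of each possible outcome state,
// weighted by its probability, then pick the action that maximizes it.
object UtilityAgent {
  type State  = String
  type Action = String

  // Hypothetical outcome model: each action leads to outcome states
  // with given probabilities. All numbers are invented for illustration.
  val outcomes: Map[Action, List[(State, Double)]] = Map(
    "sweepConstantly" -> List(("redCarSpotted", 0.95), ("missed", 0.05)),
    "holdAndWatch"    -> List(("redCarSpotted", 0.80), ("missed", 0.20))
  )

  // Utility function: detection is valuable, but constant motion
  // costs power and wear, so it is penalized.
  def utility(state: State, action: Action): Double = {
    val reward   = if (state == "redCarSpotted") 10.0 else 0.0
    val wearCost = if (action == "sweepConstantly") 4.0 else 0.5
    reward - wearCost
  }

  def expectedUtility(action: Action): Double =
    outcomes(action).map { case (s, p) => p * utility(s, action) }.sum

  // The utility-maximizing course of action.
  def bestAction: Action = outcomes.keys.maxBy(expectedUtility)
}
```

With these made-up numbers, holdAndWatch wins (expected utility 7.5 versus 5.5): it spots the car slightly less often but costs far less to run, which is exactly the trade-off utility captures.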

What kind of agents are you setting up for your AI practice? Knoldus would be excited to help you on the journey.
