AI Agent Types – And Memory

Posted by John Werner, Contributor


In the big conversation that companies and people are having about AI agents, one of the major points is how we classify agents into different categories.

In other words, there are AI agents, and there are AI agents. Some are fairly rudimentary, while others are extremely sophisticated and skilled.

Another way to think about this is that neural networks are not the same as human brains: they’re much more heterogeneous. They didn’t evolve collectively over millions of years, so they may not resemble one another the way human brains do.

That said, one of the biggest differences between AI agents is their memory.

Stateful systems have some sort of recollection of data that provides ongoing context for their work. By contrast, stateless systems start over every time a user session begins. You’ll see the difference in a chatbot or AI agent that either remembers your history, or treats you as a brand-new person each time you interact.
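
To make that distinction concrete, here is a minimal sketch in Python. The class names and behavior are illustrative assumptions, not any particular product’s API: the stateless agent treats every message as a first meeting, while the stateful one keeps a running history it can draw on.

```python
# A minimal sketch (hypothetical names, not any vendor's API) contrasting
# a stateless agent, which treats every message as a first meeting,
# with a stateful agent, which keeps a history that provides context.

class StatelessAgent:
    def respond(self, message: str) -> str:
        # No stored history: every call starts from scratch.
        return f"Hello, stranger. You said: {message}"


class StatefulAgent:
    def __init__(self) -> None:
        self.history: list[str] = []  # persists across turns

    def respond(self, message: str) -> str:
        self.history.append(message)
        remembered = len(self.history) - 1
        return f"(I recall {remembered} earlier messages.) You said: {message}"


stateless = StatelessAgent()
stateful = StatefulAgent()
for msg in ["I like jazz", "Recommend an album"]:
    print(stateless.respond(msg))  # never recalls the jazz remark
    print(stateful.respond(msg))   # carries the earlier turn forward
```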

Seven Types of Agents

It also helps to think about AI agent memory within the framework that has developed to distinguish agent types.

Experts like to classify AI agents in these seven categories:

  • Simple reflex agent
  • Model-based reflex agent
  • Goal-based agent
  • Utility-based agent
  • Learning agent
  • Multi-agent system
  • Hierarchical agent

In terms of memory, perhaps the best distinction is between the first two types – simple reflex agents, and model-based reflex agents.

An author simply named Manika at ProjectPro describes an example of a simple reflex agent this way:

“An automatic door sensor is a simple reflex agent. When the sensor detects movement near the door, it triggers the mechanism to open. The rule is: if movement is detected near the door, then open the door. It does not consider any additional context, such as who is approaching or the time of day, and will always open whenever movement is sensed.”
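
As a rough illustration of that condition-action rule (purely a sketch; the function name and inputs are assumptions, not taken from the article), the whole agent reduces to a single if-then with no memory at all:

```python
# A sketch of the door-sensor rule quoted above (function name is illustrative):
# a simple reflex agent maps the current percept directly to an action,
# with no memory of anything that came before.

def simple_reflex_door(movement_detected: bool) -> str:
    # Condition-action rule: if movement is detected near the door, open it.
    return "open door" if movement_detected else "keep closed"


print(simple_reflex_door(True))   # open door
print(simple_reflex_door(False))  # keep closed
```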

And a model-based reflex agent this way:

“A vacuum cleaner like the Roomba, one that maps a room and remembers obstacles like furniture (represents a model-based agent). It ensures cleaning without repeatedly bumping into the same spots.”

(Manika actually cites input by Andrew Ng at Sequoia, someone we’ve had on Imagination in Action forums and interview panels).
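
To make the contrast concrete, here is a hedged sketch of a Roomba-style model-based reflex agent (the class and method names are hypothetical): the agent maintains an internal model, in this case a set of obstacle positions, and consults it before acting.

```python
# A hedged sketch of the Roomba-style example (class and method names are
# assumptions): a model-based reflex agent keeps an internal model, here a
# set of obstacle positions, and consults it so it does not repeat bumps.

class ModelBasedVacuum:
    def __init__(self) -> None:
        self.known_obstacles: set[tuple[int, int]] = set()  # the internal model

    def perceive_bump(self, position: tuple[int, int]) -> None:
        # Update the model when a collision is sensed at this position.
        self.known_obstacles.add(position)

    def choose_action(self, position: tuple[int, int]) -> str:
        # Unlike a simple reflex agent, the decision consults stored state.
        return "steer around" if position in self.known_obstacles else "clean here"


vacuum = ModelBasedVacuum()
vacuum.perceive_bump((2, 3))         # bumped the couch once
print(vacuum.choose_action((2, 3)))  # steer around -- it remembers
print(vacuum.choose_action((0, 0)))  # clean here
```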

Essentially, the stateful AI agent relies on having that consistent memory for specific capabilities.

Daffodil provides these characteristics of a stateful agent:

  • These agents can recall prior inputs, user history, or task progress, allowing them to respond more naturally and maintain coherent conversations.
  • Because they remember user preferences, behavior, or goals, stateful agents can tailor their responses to individual needs.
  • They often involve more advanced sessions or memory management, which increases design and implementation complexity.
  • Stateful agents can dynamically adjust their behavior based on new information, feedback, or a shift in user intent.

You can see how having the framework and context drives things like perceiving a shift in user intent, or leveraging a task or purchase history to predict a future outcome or preference.
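
As one speculative example of how that might look in practice (all names here are hypothetical and not from the article), a per-user memory of past purchases can be used to predict a likely preference and tailor the next response:

```python
# A speculative sketch of the traits listed above (all names are hypothetical):
# a per-user memory of past purchases is used to predict a likely preference
# and tailor the next response.

from collections import Counter
from typing import Optional


class PreferenceMemory:
    def __init__(self) -> None:
        self.purchases: dict[str, Counter] = {}  # user -> item purchase counts

    def record(self, user: str, item: str) -> None:
        self.purchases.setdefault(user, Counter())[item] += 1

    def predict_preference(self, user: str) -> Optional[str]:
        # The most frequently purchased item stands in for "likely preference".
        history = self.purchases.get(user)
        if not history:
            return None  # no history yet: the cold-start, stateless situation
        return history.most_common(1)[0][0]


memory = PreferenceMemory()
for item in ["espresso", "espresso", "green tea"]:
    memory.record("alice", item)
print(memory.predict_preference("alice"))  # espresso
print(memory.predict_preference("bob"))    # None
```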

Acting Like Humans

In a recent TED talk on the subject, Aditi Garg began with the idea of reconnecting with an old middle school friend:

“That’s the beauty of human relationships, the fact that we don’t have to reintroduce ourselves,” she said. “We don’t have to explain our inside jokes or our favorite stories. We just pick up where we left off. It’s effortless, it’s personal. It’s what makes friendships so meaningful.”

Contrast this with the current capabilities of an AI system that doesn’t have vibrant memory…

“AI today, it can unpack physics, it can summarize books,” Garg added. “It can also … compose some symphonies, but the moment you open a new chat window, it resets. It’s like talking to a brilliant mind, but with amnesia. Machines can reason, but they still cannot remember.”

Reimagining Memory

Garg went over some of the ways that we are used to thinking about memory, with a suggestion that changing the framework will be useful in adding memory to AI systems.

“On a very fundamental level, we think of data as like a vast digital library with bytes and bytes of information that you can access,” she said.

That idea, she noted, may need rethinking. The memory of AI will need to be accessible in real time, flowing through the system the same way our own memories are instantly recalled by our biological brains.

Making the analogy to a Ferrari that needs to be refueled every lap of a race, Garg talked about how AI operations can waste enormous amounts of time simply trying to reach the memory held elsewhere in an agent’s system.

On the other hand, she said, new systems will have immediate, transformed statefulness.

“If an AI system can access any piece of information, it can literally never forget. If it can maintain context across conversations (and) projects … the same storage breakthrough that keeps GPUs fed is the breakthrough that will keep your AI memory alive.”

That goal, Garg suggested, has to do with locating the memory and the compute in the same place.

Data Centers and Colocation Design

I’ve seen this played out in data center plans where engineers actually put the data and the operations in the same place, along with the energy or power source.

You can think of a mini data center sitting next to a nuclear power plant, with the storage banks tied directly into a centralized LLM that will use that data to its advantage.

What do you get with these systems?

“We stand at the threshold of AI that remembers,” Garg concluded. “When the speed of remembering finally matches the speed of thinking, we enable AI that transforms from a brilliant mind with amnesia, (to) your digital twin.”

That might be the next big innovation in machine learning and artificial intelligence – you’ll see the same models that you interact with today, endowed with better memory, and they’ll seem smarter and more “with it”, because they will know a lot of the things that you would expect them to know if they had the memory of a human brain. By the way, it’s a really good idea to know those seven kinds of AI agents, since they’re going to remain part of the conversation for a long time to come. What do you see as the next major advance in AI?

