Architecture of AI Framework: Comparing AI Agent Memory to Human Brain

Jarosław Wasowski
springchain.ai

--

“The true sign of intelligence is not knowledge but imagination.” — Albert Einstein

As Albert Einstein noted, intelligence emerges from the ability to make connections and imagine new possibilities, beyond just memorizing facts.

When designing artificial intelligence systems, we can draw inspiration from the most intelligent system we know — the human mind.

In particular, understanding how human memory works can provide valuable insights into building more capable AI. Like the brain, artificial intelligence requires memory to learn effectively over time. Memory grants continuity across experiences, anchoring the history of interactions. With this foundation, AI systems can construct more sophisticated behaviors and decision-making.

Human memory has evolved not just to record the past, but to simulate and predict potential futures.

“The main function of memory is to predict the future.” — Miguel Nicolelis

A Comprehensive Exploration of AI Use Cases

The following sections will unravel various AI use cases and explore how they harness information and intelligence to create a robust business environment.

AI Sales Agent: Navigating the Business Landscape

“An AI tool designed to identify potential clients, analyze the unique aspects of their businesses, and draft sales pitches for human verification. These AI agents should seek out meaningful relationships and initiate the first sales touchpoints.”

Memory and Knowledge Requirements:

  • Memory for Human-Guided Instructions: Stores guidelines from humans defining the AI assistant’s task.
  • Action State Memory: Keeps track of the current state, completed actions, and actions to be taken.
  • Work Data Information Storage: Houses information about found companies, their details, and employees.
  • Summary Reports: Contains summary reports of operations.
  • Company Knowledge Base Access: Needs access to the company’s current offers, sample proposals, and sales and marketing materials (these stores are sketched in code below).
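
To make these requirements concrete, here is a minimal sketch of how they might map onto a single data structure. The class and field names are illustrative assumptions, not part of any existing framework.

```python
from dataclasses import dataclass, field

@dataclass
class SalesAgentMemory:
    """Illustrative container for the five stores listed above."""
    instructions: list[str] = field(default_factory=list)       # human-guided instructions
    action_state: dict = field(default_factory=dict)            # current state, done/pending actions
    work_data: dict = field(default_factory=dict)               # found companies, details, employees
    reports: list[str] = field(default_factory=list)            # summary reports of operations
    knowledge_sources: list[str] = field(default_factory=list)  # read-only offers and materials
```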

AI Slack Discussion Summarizer App: Distilling Conversations

“A savvy application capable of parsing through Slack discussions, extracting key conclusions, assigning tasks, pinpointing responsible individuals, and outlining the next steps.”

Memory and Knowledge Requirements:

  • Access to the conversation history within the designated Slack thread.

Service Desk AI Assistant: Frontline Troubleshooting

“An AI solution serving as the primary tier for handling Service Desk inquiries. It can process user requests, pose further questions, and recommend solutions from the company’s knowledge repository. If needed, it can escalate the issue to a human staffer.”

Memory and Knowledge Requirements:

  • Service Request History Access: To view the history of service requests.
  • Knowledge Base Access: To provide user support.
  • User Information and Privileges: Knowledge about users and their permissions.

Additionally, for this scenario, it might be beneficial to fine-tune a language model to grant it specialized abilities and incorporate static information directly into the model.

Webpage LiveChat AI Application: Modern Customer Interactions

“A rising trend in the digital realm, these chats situated on website pages empower AI to handle initial interactions. They can furnish responses based on an organization’s knowledge and procedures, and when needed, reroute the query to a human representative.”

Memory and Knowledge Requirements:

  • Access to Company Offers: To offer products and promotional materials.
  • Knowledge Base Access: To give valuable information to users.

The Miracle of Human Memory: Lessons for AI Architecture

In traditional programming, there are many established architectural and design patterns. However, building AI applications is a relatively new area where ready-made, proven solutions are hard to find.

When looking for inspirations for AI system architectures, it is worth turning to the natural world and the human brain. The human brain is an extremely complex and efficient information processing system. Analyzing it can provide valuable clues for designing artificial intelligence.

The human brain’s architecture and principles of operation can inspire building efficient AI systems with high information processing performance. However, it is important to remember that the human brain is significantly different from computers, so not all solutions can be directly implemented. Still, understanding the neurobiological foundations of cognition can aid designing better AI architectures.

Three Steps of Memory Functioning

To understand how memory works, we need to learn about three key processes:

Encoding — transforming sensory information into a form understandable to the brain. It occurs at the visual, auditory and semantic levels.

Storage — recording encoded information. It takes place in three types of memory:

  • Sensory memory — very short-term, retains sensory data for fractions of a second.
  • Short-term memory — stores information for up to about 30 seconds.
  • Long-term memory — can “hold” unlimited amounts of data and retain it for a lifetime.

Retrieval — recovering information from long-term memory and bringing it back into consciousness.

Memory Types in Detail

Sensory Memory

Sensory memory is the shortest-term memory that holds sensory information (visual, auditory, tactile etc.) for fractions of a second after it is received. Thanks to it, we perceive the world as continuous, not as a series of images. For example, blinking does not cause interruptions in vision.

Short-Term Memory

Also known as working memory. It can “hold” 5–9 pieces of information and retain them for about 30 seconds. It allows us to remember, for example, a new phone number long enough to dial it, or a short message we want to convey. Information in short-term memory quickly decays if it is not repeated or associated with what we already know.

Long-Term Memory

Long-term memory can store practically unlimited amounts of information throughout life. It is divided into declarative (conscious) and non-declarative (unconscious) memory.

Declarative memory includes:

  • Semantic memory — stores facts, concepts, meanings.
  • Episodic memory — records specific events and situations from the past.

Non-declarative memory includes:

  • Procedural memory — memory of motor skills like swimming or riding a bike.
  • Emotional memory — records feelings associated with experiences.

Information in long-term memory is durably encoded thanks to consolidation and can be retained for a lifetime. However, retrieval slows down with aging.

Processes Related to Memory Management in the Human Brain

Memory management in the human brain is a fascinating process that mirrors the sophisticated design patterns we find in software architecture. It’s a dynamic structure, comparable to a well-architected database, consisting of processes like encoding, learning, linking knowledge, forgetting, and retrieving knowledge. Let’s dive into the details.

Encoding

Encoding acts as a translator, converting real-world sensory experiences into a form that can be stored in the brain. Whether visual, auditory, or semantic, this process is akin to an algorithm that translates data into code.

  • Visual Level: Pictures and images.
  • Auditory Level: Sounds and voices.
  • Semantic Level: Words and meanings.

Learning

Learning moves information from the ephemeral realm of short-term memory to the more permanent long-term memory. This is akin to a data backup process.

  • Repetition: Repetition helps cement the information in long-term memory.
  • Meaning: The more meaningful the information, the easier it is to remember.
  • Sleep: Sleep plays a crucial role in this transfer, almost like running a background task to organize and save your day’s work.

Linking Knowledge

“All learning is understanding relationships.” — George Washington Carver

Knowledge in the brain is all about connections. These connections form a network, where information links to existing knowledge.

  • Encoding and Transfer: The brain encodes new information and transfers it to long-term memory by linking it to what it already knows.
  • Stronger Connections: The stronger these connections, the better the new information sticks.

Forgetting

Forgetting is a natural part of the brain’s “memory management system.”

  • Failure to Recall: Sometimes the information is there, but we fail to access it.
  • Interference Between Memories: Similar memories can interfere with one another.
  • Insufficient Encoding: If the encoding is weak, forgetting is more likely.
  • Weakening of Memory Traces: Memory can fade over time like outdated data in a computer system.

Retrieving Knowledge

“Memory is deceptive because it is colored by today’s events.” — Albert Einstein

Retrieving knowledge from the vaults of long-term memory requires the right keys. Associations and cues help to unlock the desired information, and well-encoded and repeated information becomes easier to recall.

The Structure and Functions of the Human Brain in Relation to Memory

The human brain, often compared to the most advanced supercomputer, plays an intricate role in memory storage and retrieval. Let’s embark on a journey to explore the structural complexity of the brain and how different parts contribute to various types of memory.

The Key Architectures: Exploring the Landscape of the Mind

A marvel of nature, the brain’s structure can be broken down into critical areas, each responsible for unique aspects of memory:

  • Hippocampus: Part of the limbic system, it’s the foundation of autobiographical and episodic memories.
  • Cerebral Cortex: Divided into lobes, each responsible for different memory types:
      • Frontal Lobes: Handle working memory.
      • Parietal Lobes: Engaged in short-term memory.
      • Temporal Lobes: Vital for semantic memory, storing general knowledge.
  • Amygdala: Adds emotional context to memories.
  • Basal Ganglia: Deeper structures aiding procedural and motor memory.
  • Cerebellum: Controls precise movements, contributing to muscle memory.

The Hippocampus: The Heart of Declarative Memory

The hippocampus is more than a structure; it’s a key player in forming declarative memory. It’s here where experiences become part of who we are.

The Emotional Touch: Amygdala’s Role

Have you ever wondered why some memories are laden with emotions? The amygdala assigns emotional significance to memories, making certain experiences unforgettable.

The Knowledge of Large Language Models

Language models possess a unique learning process that combines pre-training, fine-tuning, and in-context learning. Each stage contributes a certain type of knowledge that is crucial for the model’s performance. The key at every stage of model training is maintaining high-quality data.

During pre-training and fine-tuning, higher-quality data increases the models’ average efficacy and allows using smaller, more specialized models tuned for specific tasks instead of large, expensive solutions. With in-context learning and prompt engineering, the limited size of the context window forces us to select the information we include very precisely.

Pre-Training

Pre-training constructs broad linguistic knowledge based on massive volumes of text. The data used at this stage typically comes from sources several years old, e.g. Wikipedia, books, articles. This provides a solid foundation but can make the knowledge outdated.

We can compare this knowledge to what someone learns in primary school — it builds a basic understanding of the world but doesn’t make one a valuable employee yet. For AI, this knowledge can also be outdated or low-quality, depending on when the pre-training happened. It won’t contain any information about our organization either.

Building and training such a model costs on the order of tens of millions of dollars and takes several months.

Fine-Tuning

Fine-tuning tailors the model for a specific task using up-to-date training data. We can leverage fresh texts thematically related to the task, updating the model’s knowledge.

We can compare this to sending a student/employee for a multi-year college education to gain the required expertise for a more specialized role. For models, this can take from days to weeks and cost tens of thousands of dollars.

Just like people pursue hobbies, interests, or side jobs during studies to gain additional skills, language models can be fine-tuned for several tasks. Common specializations nowadays include text classification, QA, summarization, text generation, instruct abilities, and chat.

In-Context Learning

In-context learning relies on examples provided in the prompt. Their relevance is up to us — we can supply current data. However, the number of examples is heavily limited by the context window.

This mechanism allows supplying the most up-to-date information from our organization’s knowledge base or even the internet with each query. Here the burden of managing the process and choosing which data to include lies on our software communicating with the language model. We must also remember that models currently have very limited context windows, restricting how much text we can provide. This makes the process even more challenging.
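
A minimal sketch of what this looks like in practice, assuming a simple prompt layout and a rough four-characters-per-token heuristic; real tokenizers and model limits differ.

```python
# Sketch of in-context learning: pack few-shot examples and fresh
# knowledge snippets into a prompt under a fixed context budget.

def build_prompt(task: str, examples: list[tuple[str, str]],
                 snippets: list[str], max_tokens: int = 4000) -> str:
    def tokens(text: str) -> int:
        return len(text) // 4  # crude heuristic; real tokenizers differ

    parts = [f"Task: {task}"]
    budget = max_tokens - tokens(parts[0])
    for question, answer in examples:       # few-shot demonstrations
        block = f"Q: {question}\nA: {answer}"
        if tokens(block) > budget:
            break                           # the context window is a hard limit
        parts.append(block)
        budget -= tokens(block)
    for snippet in snippets:                # current organizational knowledge
        if tokens(snippet) > budget:
            break
        parts.append(f"Context: {snippet}")
        budget -= tokens(snippet)
    return "\n\n".join(parts)
```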

Summary

We now understand what knowledge comes with a language model out-of-the-box, what’s missing, and how to supply additional organizational knowledge so that the AI performs its tasks better. Careful data selection and prompt engineering are key to unlocking the models’ potential.

Requirements for the AI Framework

Together we have analyzed example applications and the knowledge required by the AI applications and agents described above. We discussed the structure of the human brain in general terms and learned what knowledge language models possess. It is time to start analyzing the requirements for the AI framework.

All functionality will be divided into modules that will be responsible for storing and handling different types of memories. This is consistent with the structure of the human brain, where different information is stored in different parts of the brain.

The Communication Module will play the role of sensory memory, responsible for receiving messages from the user or via the API from another system.

Semantic memory in AI is embedded within language models through pre-training, where they absorb vast amounts of general knowledge. After this, fine-tuning refines this knowledge, focusing on specific tasks or domains. Together, these processes equip AI models with a rich and specialized semantic understanding.

Data storage

“Data is the new oil. It’s valuable, but if unrefined it cannot really be used.” — Clive Humby

  • Memory Module — responsible for short-term memory
  • Knowledge Module — responsible for long-term memory
  • Data Module — stores structured application and AI Agent data
  • Tools Module — invokes external tools, e.g. internet search, and returns data from external sources in its responses
  • Communication Module — responsible for communication with the outside world; all five are sketched as minimal interfaces below
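
A minimal sketch of these five modules as Python protocols, assuming illustrative method names; a real framework would flesh these out considerably.

```python
from typing import Any, Protocol

class MemoryModule(Protocol):        # short-term / working memory
    def remember(self, key: str, value: Any) -> None: ...
    def recall(self, key: str) -> Any: ...
    def forget(self, key: str) -> None: ...

class KnowledgeModule(Protocol):     # long-term memory
    def store(self, document: str) -> None: ...
    def search(self, query: str) -> list[str]: ...

class DataModule(Protocol):          # structured application and agent data
    def write(self, table: str, row: dict) -> None: ...
    def read(self, table: str, filters: dict) -> list[dict]: ...

class ToolsModule(Protocol):         # external tools, e.g. internet search
    def invoke(self, name: str, **kwargs: Any) -> Any: ...

class CommunicationModule(Protocol): # sensory memory: user and API messages
    def receive(self) -> str: ...
    def send(self, message: str) -> None: ...
```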

Memory Module — AI’s Working Memory: A Quick Look

Just like people need short-term memory to remember small tasks, AI has something similar called working memory. This memory holds temporary information and refreshes often.

“Think of AI’s working memory like a computer’s RAM — it briefly holds data and then updates.”

Working memory’s role in AI is like a bridge. It connects the AI’s big pool of information (its knowledge) with the task it’s doing right now. When AI interacts with people or things around it, this memory helps it understand and respond.

Why do different AIs need different memories? Not all AI is the same. Just like different apps on your phone might need different things, AI has different needs too. Some might need a lot of memory, while others don’t. So, when someone is building an AI, they can choose what kind of memory to give it.

Handling memory means adding, bringing back, and removing. Working memory also has a job to do: it makes new memories for new tasks, brings back old memories when needed, and deletes memories that are no longer useful. A sketch of these operations follows.
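
Here is a sketch of such a working memory, assuming a time-to-live and a capacity that loosely echo the ~30-second, 5–9 item span of human short-term memory; both numbers are illustrative.

```python
import time

class WorkingMemory:
    """Short-lived store: entries expire after a TTL and capacity is bounded."""

    def __init__(self, ttl_seconds: float = 30.0, capacity: int = 9):
        self._ttl = ttl_seconds
        self._capacity = capacity
        self._items: dict[str, tuple[float, object]] = {}

    def remember(self, key: str, value: object) -> None:
        if key not in self._items and len(self._items) >= self._capacity:
            oldest = min(self._items, key=lambda k: self._items[k][0])
            del self._items[oldest]                  # evict the stalest entry
        self._items[key] = (time.time(), value)

    def recall(self, key: str):
        stamp, value = self._items.get(key, (0.0, None))
        if value is not None and time.time() - stamp <= self._ttl:
            self._items[key] = (time.time(), value)  # refresh on use
            return value
        self._items.pop(key, None)                   # expired or missing
        return None

    def forget(self, key: str) -> None:
        self._items.pop(key, None)
```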

The Knowledge Module: Diving Deep into AI’s Long-Term Memory

Imagine walking into a vast library filled with records of everything you’ve ever learned, encountered, and felt. This is the essence of the Knowledge Module in AI systems. Drawing parallels from the human brain, this module holds the key to enhancing the continuous evolution of AI applications and agents.

Types of Information in the Knowledge Base

Analyzing the structure of the human brain and the demands of AI applications and agents, the knowledge base can be segmented into:

  1. Organizational Knowledge: Information sourced from internal systems, wikis such as Confluence, documents, messaging platforms, and other available resources. This knowledge is typically read-only in the model.
  2. Episodic Memory: Chronicles specific past events and situations. These insights are garnered during the operation of AI applications and agents. This memory can be both read and written into.
  3. Procedural Memory: Comparable to how humans remember motor skills like swimming or cycling, it encompasses procedures outlining how AI applications and agents should operate. Accessible for both reading and writing.
  4. Emotional Memory: Catalogs feelings tied to experiences. It delves into user relationships, preferences, reactions, and data that renders the AI more human-like and aligned in user interactions. This memory can be read and written.

Categorizing Information Storage

Information in the knowledge base falls into two categories:

  • Externally Stored: Kept in outside systems, these are read-only.
  • Internally Stored: Housed within the module, these can be read and written. Both categories, together with the four memory types above, are sketched below.
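
A hedged sketch of how the four memory types and the two storage categories might be modeled together; the enum values and the read-only guard are assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum, auto

class MemoryKind(Enum):
    ORGANIZATIONAL = auto()   # externally sourced, read-only
    EPISODIC = auto()         # read-write
    PROCEDURAL = auto()       # read-write
    EMOTIONAL = auto()        # read-write

@dataclass
class KnowledgeEntry:
    kind: MemoryKind
    content: str
    external: bool = False    # externally stored entries are read-only

class KnowledgeBase:
    def __init__(self):
        self._entries: list[KnowledgeEntry] = []

    def register(self, entry: KnowledgeEntry) -> None:
        self._entries.append(entry)          # both categories can be registered

    def update(self, index: int, content: str) -> None:
        if self._entries[index].external:
            raise PermissionError("externally stored knowledge is read-only")
        self._entries[index].content = content

    def query(self, kind: MemoryKind) -> list[str]:
        return [e.content for e in self._entries if e.kind is kind]
```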

Related Modules in an AI Framework

Developing artificial intelligence (AI) capabilities requires more than just training algorithms on data. A robust AI knowledge framework incorporates additional modules that work together to enable flexible reasoning, knowledge accumulation, and communication.

Data Module

The data module is responsible for storing data in the database, enabling data writing, reading and editing.

It allows for storing structured relational information that is both produced and utilized by AI applications and agents.

Imagine it as a vast library, where every book, journal, and manuscript contains invaluable information ready to be accessed and used by AI applications.

Tools Module

An AI system does not evolve in isolation. The tools module grants access to external resources that can feed additional data into the knowledge base. These might include search engines, PDF converters, web scrapers, and more.

Just as humans continuously learn through books, discussions, and life experiences, AI agents can leverage tools to ingest knowledge from the outside world. This helps populate the knowledge base with a greater diversity of data to build more capable reasoning.

“The most powerful tool we have as developers is automation.” — Scott Hanselman
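
A minimal sketch of such a tool registry; the tool name and the stubbed search function are hypothetical, standing in for real integrations like search engines, PDF converters, or web scrapers.

```python
from typing import Callable

class ToolsModule:
    def __init__(self):
        self._tools: dict[str, Callable[..., str]] = {}

    def register(self, name: str, fn: Callable[..., str]) -> None:
        self._tools[name] = fn

    def invoke(self, name: str, **kwargs) -> str:
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](**kwargs)

# Usage: a stubbed search tool whose results could feed the knowledge base.
tools = ToolsModule()
tools.register("web_search", lambda query: f"[stub results for {query!r}]")
print(tools.invoke("web_search", query="AI agent memory"))
```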

Communication Module

Humans receive a constant stream of sensory input through our eyes, ears, and other perception systems. The communication module mimics this capability, listening for external stimuli across various channels and relaying the information to other AI components.

Memory Flow in AI Software

Encoding: A Critical Process in Data Handling

In the fast-paced world of information technology (IT), data presents itself in various formats. From simple, standardized formats originating from APIs and databases to the intricate lists of texts acquired through web scraping or platforms like Confluence, data’s diversity poses a challenge.

Encoding is a critical process where the Memory Module translates raw data into structured information that can be utilized by AI Agents or Applications. The transformation may seem simple, but it is far from it.

Dividing and Grouping

Encoding divides and groups these disparate data forms, preparing them in an accessible format for inclusion in prompts or knowledge bases. Think of encoding like organizing a library; each book is placed in the right section, making it easier to find.

Technical Architecture

“The only way to handle the ever-changing data formats is by allowing flexibility in our encoders’ architecture.”

From a technical perspective, architecture should allow various implementations of Encoders. These Encoders must be versatile, able to handle different input data formats, and different algorithms for data organization. Imagine them as translators, each fluent in different languages and dialects, converting complex texts into a universal language.
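
A sketch of that flexibility, assuming a common Encoder protocol with two illustrative implementations: one for plain text, one for structured API records.

```python
from typing import Any, Protocol

class Encoder(Protocol):
    def encode(self, raw: Any) -> list[str]: ...

class PlainTextEncoder:
    def encode(self, raw: str) -> list[str]:
        # divide long text into paragraph-sized chunks
        return [p.strip() for p in raw.split("\n\n") if p.strip()]

class ApiRecordEncoder:
    def encode(self, raw: dict) -> list[str]:
        # flatten a structured record into "field: value" lines
        return [f"{key}: {value}" for key, value in raw.items()]

ENCODERS: dict[str, Encoder] = {
    "text": PlainTextEncoder(),
    "api": ApiRecordEncoder(),
}

def encode(source_type: str, raw: Any) -> list[str]:
    return ENCODERS[source_type].encode(raw)
```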

Learning

Learning, or the data migration process, happens when information moves from the Memory Module to the Knowledge Module and/or the Data Module. It’s the moment an AI Agent or Application confirms and secures the acquired information.

This information is stored in two distinct places:

  1. Data Module: This retains data in a standardized format for further analysis, reports, etc.
  2. Knowledge Module: Here, the information becomes part of the collective wisdom for use by AI Applications and Agents.

During this process, a selection of information is transferred. The framework should support the delivery of various algorithms for learning and knowledge persistence.
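
A minimal sketch of this migration step; the `keep` selection set stands in for whichever learning algorithm the framework plugs in, and the two destination stores are plain Python containers for illustration.

```python
def learn(working_memory: dict[str, object],
          knowledge: list[str],
          data_rows: list[dict],
          keep: set[str]) -> None:
    for key in list(working_memory):
        if key not in keep:
            continue                       # unselected items stay ephemeral
        value = working_memory.pop(key)    # confirmed: leave short-term memory
        knowledge.append(f"{key}: {value}")             # Knowledge Module
        data_rows.append({"key": key, "value": value})  # Data Module
```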

Linking: Unraveling Unstructured Data

The Linking phase is a mysterious yet vital part of the information process. Most of the data within this module doesn’t conform to the rigid structure we usually associate with relational databases. Instead, the information often takes the form of documents (divisible into chapters, headings, paragraphs), plain texts, tasks with descriptions, comments, etc.

But don’t be fooled by its apparent lack of structure; it’s far from chaos. This module serves as a bridge, connecting disparate forms of knowledge. It’s the glue that binds the scattered pieces of a puzzle.

Enable Different Implementations

The form in which knowledge is stored is not a one-size-fits-all decision. The architecture must be adaptable, allowing for different implementations.

Indexing: The Key to Quick Retrieval

In IT systems, rapid data retrieval demands effective indexing. The mechanism is familiar from popular databases, where the fields used in queries are indexed. The same applies to knowledge used by artificial intelligence, which must be interconnected and indexed.

  • Language Model Embeddings: Embedding mechanisms from large language models (LLMs) enable efficient semantic indexing.
  • Full-Text Search in Databases: Utilizing a database that supports full-text search provides another way of indexing.
  • Other Methods: The architecture must allow for delivering multiple solutions responsible for indexing and preparing data for further processes. An embedding-based sketch follows.
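
A toy sketch of the embedding route, assuming a bag-of-words stand-in for real LLM embeddings; only the ranking-by-similarity idea carries over.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # toy bag-of-words "embedding"; a real system would call an LLM
    # embedding model here
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class EmbeddingIndex:
    def __init__(self):
        self._chunks: list[tuple[str, Counter]] = []

    def add(self, chunk: str) -> None:
        self._chunks.append((chunk, embed(chunk)))

    def search(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self._chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]
```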

Architectural Flexibility

Architecturally, the solution must support various implementations responsible for indexing and for preparing data for subsequent processes. This ensures that the system remains agile and adaptable to the ever-changing demands of data management.

Retrieving Information: A Comprehensive Insight

Retrieving and searching for the necessary information mark the final stage in the data processing journey. In this phase, a careful selection and correlation of essential details are carried out, using information stored in the Memory Module and extracting relevant knowledge from the Knowledge Module.

Architectural Solutions

Architecturally, the solution must be designed to accommodate various implementations of this mechanism. From cloud-based applications to local server configurations, the architecture must be robust and flexible, offering diverse pathways to execute the retrieval process.

In-Context Learning Integration

The information gleaned from this module is seamlessly integrated with language models through in-context learning. This approach ensures that the models remain relevant and adaptable to various linguistic requirements.
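
Building on the EmbeddingIndex sketch above, here is a hedged outline of the retrieval stage: relevant knowledge plus current working-memory state are folded into the prompt handed to the language model.

```python
def retrieve_and_prompt(task: str, index: "EmbeddingIndex",
                        working_memory: dict[str, object], k: int = 3) -> str:
    knowledge = index.search(task, k=k)      # Knowledge Module lookup
    state = [f"{key}: {value}" for key, value in working_memory.items()]
    return "\n".join([
        f"Task: {task}",
        "Relevant knowledge:", *knowledge,   # retrieved long-term memory
        "Current state:", *state,            # short-term working memory
    ])
```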

A Focus on Privacy

The ultimate goal of the retrieval process is to provide current and private organizational data required to accomplish the assigned task. The emphasis on privacy ensures that sensitive information is handled with care and integrity, adhering to the various legal and ethical standards.

Graphs and Memory Networks in AI Systems

Diving deeper into the architecture of AI systems, especially in the intriguing realm of the Knowledge Module, the sheer brilliance of graphs and memory networks unfolds. Their integration sets forth a dynamic connection, one that resonates with the synapses in our brains, opening doors to an unparalleled depth of understanding and intelligence.

The Role of Graph Structures

Unlike traditional data structures, which might often appear linear and siloed, graph structures stand out, flaunting a flexibility that echoes the natural workings of human cognition. Here’s how they do it:

  • Interconnected Nodes: Just as neurons in our brain interlink, nodes in graph structures symbolize connected data points, building relationships, dependencies, or associations.

“The beauty of graphs lies in their ability to create dynamic connections, much like the human mind.”

  • Dynamic Learning: Graphs can expand with each new data or knowledge piece, thus continually evolving the AI system.
  • Semantic Understanding: Mapping out semantic relationships through graphs aids the AI in perceiving the context and meaning behind data, a cornerstone for applications requiring a profound understanding of language and context. A toy graph sketch follows this list.
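
The sketch below models graph-structured knowledge with an adjacency list and a breadth-first walk; the node names and relations are invented examples.

```python
from collections import defaultdict, deque

class KnowledgeGraph:
    def __init__(self):
        # adjacency list: node -> [(relation, target), ...]
        self._edges: defaultdict[str, list[tuple[str, str]]] = defaultdict(list)

    def link(self, source: str, relation: str, target: str) -> None:
        self._edges[source].append((relation, target))

    def related(self, start: str, depth: int = 2) -> set[str]:
        # breadth-first walk: everything reachable within `depth` hops
        seen, frontier = {start}, deque([(start, 0)])
        while frontier:
            node, d = frontier.popleft()
            if d == depth:
                continue
            for _, target in self._edges[node]:
                if target not in seen:
                    seen.add(target)
                    frontier.append((target, d + 1))
        return seen - {start}

g = KnowledgeGraph()
g.link("Acme Corp", "sells", "widgets")
g.link("widgets", "compete_with", "gadgets")
print(g.related("Acme Corp"))  # {'widgets', 'gadgets'}
```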

Memory Networks and Deep Learning

Memory networks, a fascinating subclass of neural networks, add an exciting new dimension to AI systems’ processing abilities:

  • Attention Mechanisms: These networks pinpoint which stored knowledge parts are relevant for a task, reflecting how our brain recalls pertinent memories.
  • Continual Learning: Memory networks ensure that AI retains and integrates new information with existing knowledge, setting up an uninterrupted learning curve.
  • Relational Reasoning: Excelling at tasks that require understanding relationships between various data points, memory networks are indispensable for intricate tasks (see the attention sketch below).

“Memory networks breathe life into AI, allowing it to evolve and relate information in a human-like manner.”
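
To make the attention idea concrete, here is a minimal sketch of a memory-network read: softmax attention over stored vectors returns a weighted blend of the most relevant memories. The vectors are hand-written lists; a real network would learn them.

```python
import math

def attention_read(query: list[float], memory: list[list[float]]) -> list[float]:
    def dot(a: list[float], b: list[float]) -> float:
        return sum(x * y for x, y in zip(a, b))

    scores = [dot(query, slot) for slot in memory]    # relevance of each slot
    peak = max(scores)
    weights = [math.exp(s - peak) for s in scores]    # numerically stable softmax
    total = sum(weights)
    weights = [w / total for w in weights]
    dim = len(memory[0])
    return [sum(w * slot[i] for w, slot in zip(weights, memory))
            for i in range(dim)]                      # weighted blend of memories

# The first slot matches the query, so the blend leans toward it.
print(attention_read([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]]))
```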

Integrating Graphs with Memory Networks

The synergy of graph structures with memory networks propels the AI system to new horizons. Imagine an AI system, while interacting with a user, not only recalls the recent interaction history but comprehends the sentiment and context behind it. Such an ability greatly enriches the user experience.

“Integrating graphs with memory networks is like giving AI the power to understand, relate, and evolve, just like humans.”

In Conclusion

To conceive an AI system mirroring human intelligence requires delving beyond algorithms and codes. The fabric of our cognition inspires this pursuit, filled with connections and associations. By leveraging graph structures and memory networks, we craft AI systems that don’t merely process information but understand, relate, and evolve with it.

Data Security in the AI Era

In a world constantly reshaped by innovation, data security has become a pressing issue, especially in the realm of artificial intelligence (AI). Privacy concerns, security measures, and the use of data by AI present primary apprehensions for businesses and other professional groups.

Security Vulnerabilities in LLMs: Prompt Injection, Prompt Leaking, and Jailbreaking

Large Language Models (LLMs) such as GPT-4 have become incredibly powerful tools, but they are not without their security vulnerabilities:

  • Prompt Injection: This method involves manipulating the model’s output by adding malicious content to the prompt. Attackers can embed instructions in content the model processes, such as a tweet, causing the model to generate unintended responses.
  • Prompt Leaking: Here, the attacker attempts to unveil secret prompts, posing risks to intellectual property.
  • Jailbreaking: Targeting safety and moderation features, jailbreaking includes techniques like alignment hacking and even simulating a Linux terminal with elevated privileges.

These vulnerabilities pose real threats to organizations using LLMs, leading to potential information leaks, abuse, or other malicious activities. To mitigate these risks, designing prompt-based defenses, monitoring unusual model behavior, and applying fine-tuning techniques become imperative. The security of LLMs requires continuous vigilance and proactive measures against these unique forms of attack.
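
As one illustration of a prompt-based defense, here is a naive input screen; the patterns and the fencing tags are illustrative assumptions and are nowhere near sufficient on their own.

```python
import re

# Illustrative patterns only; real defenses need layered controls
# (monitoring, output filtering, constrained tool access).
SUSPICIOUS = [
    r"ignore (all |the )?previous instructions",
    r"reveal (your )?(system )?prompt",
    r"you are now",
]

def screen_user_input(text: str) -> str:
    for pattern in SUSPICIOUS:
        if re.search(pattern, text, flags=re.IGNORECASE):
            raise ValueError("possible prompt injection detected")
    # fence user content so the model can separate it from instructions
    return f"<user_input>\n{text}\n</user_input>"
```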

Protecting Our Data Sent to Public LLM Models

Another area to scrutinize while designing a solution and selecting a language model provider is ensuring legal and technological security for the data transferred via APIs to external companies. It’s vital to make sure that this information is safe and won’t be used for training subsequent iterations of models.

Communication: Not Just Security, But How the Information is Conveyed

Think about how carefully we choose words in various contexts. A leader might shield employees from harsh realities or positively portray failures to maintain morale.

Collective Brain of AI: Blessing or Curse?

The “collective brain” of AI represents a paradox. Unlike our unique wisdom and experience, AI offers a single consciousness shared by all. The beauty of this unification comes with a risk.

Designing a solution requires attention to access control over data, ensuring that one consciousness, one brain, and one thought do not become the source of vital information leaks.

Conclusion

The AI era brings forth a complex landscape of data security. From prompt manipulations to collective intelligence, the challenges are multifaceted but not insurmountable. Protecting our intellectual property and personal information is paramount.

Challenges in Engineering Artificial Memory

Artificial memory mimics the fascinating and complex structure of human memory, but tailoring it to the requirements of artificial intelligence is filled with challenges. In this pursuit, we face specific demands and untrodden paths that make the task intriguing and challenging. Let’s delve into some of the main hurdles and explore possible solutions.

Handling Rapidly Evolving Real-Time Data

In today’s fast-paced digital age, information changes at an unprecedented rate. Designing AI systems capable of adapting to this rapid evolution is a Herculean task.

Identifying Relevant Information within Massive Datasets

The enormous volumes of data available today can easily overwhelm AI systems. Identifying what’s relevant is a fundamental challenge.

Updating Knowledge Bases Consistently

Maintaining an updated and accurate knowledge base is critical.

Embedding Organizational Processes and Policies

Incorporating organizational guidelines is essential for AI to function within ethical and procedural boundaries.

Conclusion

In wrapping up, the sheer length of this article is a testament to the complexity of designing architecture and functionality for software that integrates artificial intelligence into our “current” world. But through this challenge, I’ve managed to translate my thoughts and analyses into a more approachable form, something that personally satisfies me.

This journey has not only been about detailing technicalities but about breaking down complex problems and building them back with the reader’s understanding in mind. The integration of artificial intelligence into various aspects of our lives is no small feat, and this article’s size reflects that complexity.

The story doesn’t end here; the exploration continues. Whether you’re a seasoned professional or a curious enthusiast, you’ll find something to pique your interest in the articles that preceded this one, and those that will follow.
