Agentic AI
Deep Dive into LlamaIndex Workflow: Event-Driven LLM Architecture
What I think about the progress and shortcomings after practice
Recently, LlamaIndex introduced a new feature called Workflow, which brings event-driven orchestration and logic-decoupling capabilities to LLM applications.
In today’s article, we’ll take a deep dive into this feature through a practical mini-project, exploring what’s new and what’s still lacking. Let’s get started.
Introduction
Why event-driven?
More and more LLM applications are shifting toward intelligent agent architectures, expecting LLMs to fulfill user requests by calling different APIs or by making multiple iterative calls.
This shift, however, brings a problem: as agent applications make more API calls, response latency grows and the code logic becomes harder to maintain.
A typical example is ReActAgent, which walks through steps like Thought, Action, Observation, and Final Answer, requiring at least three LLM calls plus a tool call. If the agent needs to loop, the number of I/O round-trips grows further.
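To make the cost concrete, here is a minimal sketch of a ReAct-style loop. It is not LlamaIndex's implementation; `call_llm` and `call_tool` are hypothetical stand-ins for real network calls, so you can see how each iteration stacks up sequential I/O:

```python
def call_llm(prompt: str) -> str:
    # Stand-in for one LLM round-trip (real code would hit an API).
    if "Observation" in prompt:
        return "Final Answer: 42"
    return "Thought: I need a tool.\nAction: search('x')"

def call_tool(action: str) -> str:
    # Stand-in for an external tool/API call (another round-trip).
    return "Observation: found result"

def react_agent(question: str, max_steps: int = 3) -> str:
    prompt = question
    for _ in range(max_steps):
        reply = call_llm(prompt)        # LLM call: Thought + Action
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        observation = call_tool(reply)  # tool call: Observation
        prompt += "\n" + reply + "\n" + observation
    return "max steps reached"

print(react_agent("What is x?"))  # → 42
```

Even this toy run needs two LLM calls and one tool call before producing an answer, and every extra loop iteration adds two more round-trips; this is the latency pressure that motivates an event-driven design.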