How to Use Human as a Tool with CrewAI Agents on a Chainlit UI

Pratyush Ranjan Dalai
3 min read · Mar 10, 2024


[Image: human as a tool in CrewAI with a Chainlit UI]

What is Chainlit and why Chainlit?

Chainlit is an open-source async Python framework that simplifies building scalable conversational AI and agentic applications. Its key features include simplified development, data persistence, tools for quick iteration, and fast build times. Chainlit ships a basic skeleton app configured for the OpenAI API, making it easy to integrate with any LLM that exposes an OpenAI-compatible API. That makes it straightforward to create chatbots that generate text, translate languages, answer questions, and more. Chainlit also offers authentication, monitoring, data streaming, and multi-user support, which help keep your application secure, scalable, and reliable.
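To give a feel for how little code a Chainlit app needs, here is a minimal echo-bot sketch; the file name and handler name are illustrative, and it assumes Chainlit is installed (pip install chainlit). It is not the official skeleton app, just a small example.

# app.py - a minimal Chainlit sketch (illustrative names)
import chainlit as cl

@cl.on_message
async def main(message: cl.Message):
    # Echo the user's message back to the Chainlit UI
    await cl.Message(content=f"You said: {message.content}").send()

You would run it with chainlit run app.py and chat with it in the browser UI.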

What is CrewAI?

CrewAI is a multi-agent framework that leverages collaboration and role-playing to streamline complex workflows in Python. Its main concepts revolve around three core entities: Agents, Tasks, and Crews. Agents are standalone units programmed to perform tasks, make decisions, and communicate with other agents. They can use Tools, which range from simple search functions to complex integrations involving other chains, APIs, and so on.

Now let's say you have an idea where you want to use CrewAI to build an AI tool, and you want to use a human as a tool to get essential feedback that steers the flow of execution.
You could very well use LangChain's load_tools to load the human tool, as mentioned in the following GitHub page.

This asks for user input on your default terminal (in your IDE, that is the console).
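For reference, loading the human tool from LangChain looks roughly like this; it is a sketch assuming the langchain package is installed, and the exact import path can differ between versions.

from langchain.agents import load_tools

# "human" loads LangChain's human-input tool, which prompts on stdin
human_tools = load_tools(["human"])

# These tools can then be handed to your CrewAI agent via its tools=[...] argument,
# but every question the agent asks will show up in the terminal, not in a web UI.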

But what if you want to take the user input from the Chainlit UI?

The code sample below will help you do that.

Here we will use two ideas:

  1. Use CrewAI's function-as-a-tool feature (guide here)
  2. Write a function that asks the user via Chainlit's AskUserMessage

The key idea here is that the name and description of your tool function should be intuitive enough for the LLM to know that this is the tool it must use when it wants to ask the human.

Example:

Import tool from crewai_tools to annotate your custom function as a tool:

import chainlit as cl
from chainlit import run_sync
from crewai import Agent, Task, Crew
from crewai_tools import tool

name: "Ask Human follow up questions"

description: """Ask human follow up questions"""

These are the important things here for the LLM.

@tool("Ask Human follow up questions")
def ask_human(question: str) -> str:
"""Ask human follow up questions"""
human_response = run_sync( cl.AskUserMessage(content=f"{question}").send())
if human_response:
return human_response["output"]
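Note that cl.AskUserMessage(...).send() is a coroutine; run_sync bridges it into the synchronous tool call that CrewAI makes, so the agent blocks until the user answers in the UI.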

Then this is how you can add this tool to your agent:

extractor = Agent(
    role='Expert Data Extractor',
    goal='Extract key data from human interaction',
    backstory="""You work at a Data Retrieval Office.
    Your expertise lies in extracting the facts and data you need from short, incomplete human conversations.
    ... and ask important follow up questions to extract key data while talking to humans.
    You talk with the human ... you ask follow up questions to the human based on what you need next.
    .....
    """,
    # verbose=True,
    allow_delegation=False,
    llm=llm_chat,        # your configured chat LLM
    tools=[ask_human],   # expose the Chainlit-backed human tool to the agent
)

# Create tasks for your agents
task = Task(
    description="""Extract key information from User communication and represent the final extracted data as JSON""",
    expected_output="All extracted data in JSON with no extra text, only JSON",
    agent=extractor,
)

crew = Crew(
    agents=[extractor],
    tasks=[task],
    verbose=2,  # you can set it to 1 or 2 for different logging levels
)
crew.kickoff()
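One piece the snippet above leaves implicit is where crew.kickoff() runs. Since ask_human needs a live Chainlit session, the kickoff typically happens inside a Chainlit handler rather than at import time. Here is a hedged sketch of that wiring, assuming the agent, task, and crew defined above; the handler name is illustrative.

import chainlit as cl

@cl.on_message
async def on_message(message: cl.Message):
    # Run the synchronous crew in a worker thread so that ask_human's run_sync
    # call can reach the Chainlit UI without blocking the event loop.
    result = await cl.make_async(crew.kickoff)()
    await cl.Message(content=str(result)).send()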

And voilà, this is how it works!
