BitAgent and Subnet 20: How to Use Decentralized AI in Your Day-to-Day

RizzoNetwork
10 min read · Aug 21, 2024

--

What is Subnet 20? What is BitAgent? What is Tool Usage in AI?

I’m excited to share with you what these terms mean and how they can revolutionize your day-to-day tasks!

Subnet 20 (also known as BitAgent) is an AI capability provider under Bittensor, offering Agentic Tool Usage (aka function calling) — an advanced generative AI feature that merges the impressive abilities of large language models (LLMs) with the specific functions of your own tools. These tools include any function, method, or endpoint you control.

Why Equip LLMs with Tools?

Imagine transforming LLMs into powerhouses capable of tackling complex problems specific to your domain. By enabling AI to leverage your existing tools, you unlock autonomous capabilities that dramatically enhance the LLM’s potential. This integration not only amplifies the LLM’s effectiveness but also revolutionizes the way you approach and solve challenges, making your efforts more efficient, innovative, and impactful.

How’s the performance?

BitAgent v0.3.2 recently completed the 4 Tool Usage Benchmarks provided by LangChain — https://blog.langchain.dev/benchmarking-agent-tool-use.

Scoring 100% across the board, BitAgent was incredibly competitive and in most cases (most notably the Multiverse Math challenge) outperformed the other models by leaps and bounds.

BitAgent outperforming other models on LangChain’s Tool Usage Benchmarks (vs 4/2024 results — https://langchain-ai.github.io/langchain-benchmarks/index.html)

Tell us what these task types are …
This benchmarking dataset has 4 key task types:
- Typewriter with 1 tool — 1 function call to type words, letter-by-letter
- Typewriter with 26 tools — 26 function calls to type words
- Multiverse Math — problem solving with arithmetic
- Relational Data — tools to work with locations, users, food
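To make the task types concrete, here's a hedged sketch of what the single-tool Typewriter task looks like: the agent is handed one tool that types a single letter, and a correct run spells the word by emitting one tool call per letter. The `type_letter` name and the tiny harness below are illustrative, not LangChain's actual benchmark code.

```python
# Illustrative sketch of the "Typewriter with 1 tool" task:
# the agent spells a word by emitting one tool call per letter.
typed = []  # the "paper" the typewriter writes on

def type_letter(letter: str) -> str:
    """The single tool the agent may call; it types one letter."""
    typed.append(letter)
    return "OK"

# A correct agent run for the word "cat" is just three tool calls:
agent_calls = [{"name": "type_letter", "arguments": {"letter": c}} for c in "cat"]
for call in agent_calls:
    type_letter(**call["arguments"])

print("".join(typed))  # -> cat
```

The 26-tool variant is the same idea with one tool per letter of the alphabet, which stresses the model's ability to choose among many near-identical tools.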

You can dive into our results and more details here.

Alright, I’m sold — how do I use it?

Let me show you how this works and how you can apply it to make your everyday tasks easier and more productive.

Our Objective — sorting images into directories:

Maybe you’re like me: you have a bazillion photos and need some organization, but it’s a TON of work to review them photo by photo.

Let’s use BitAgent to review our photos and sort them into directories for us. We’ll showcase this capability with a set of random sketch images I found on the internet. Some of the images have animals and some don’t.

I want to organize these photos into two categories, those that contain animals and those that don’t.

Setup — What you need …

API Key — You’ll need an API key for GoGoAgent; you can get one here — https://gogoagent.ai/. GoGoAgent provides an API endpoint for talking to Subnet 20’s Tool Usage & Agent capability — https://gogoagent.ai/api.

You’ll also need Python 3, a few images and a few libraries:

# The libraries we need, you may need to run: pip install pillow
from PIL import Image
from io import BytesIO
import glob, os, requests, base64, shutil, json

Here are the 10 images I’m working with and need to sort between 2 directories (animals and non_animals) so I can be happily organized:

You can see that 5 of these images are animals (butterfly, ladybug, bird, cat and dog) and the rest are non-animal images. Now to write code to use LLMs + Tools to automagically organize our images.

Let’s set up the basics — setting our API key, our images directory and our GoGoAgent endpoint:

# the images I want to work with
images_dir = f"{os.environ['HOME']}/RandomImages/"

# API Endpoint for SN20 / BitAgent
api_endpoint = "https://api.gogoagent.ai/query_subnet"

# Headers with API KEY
hdrs = {"Content-Type": "application/json",
        "Authorization": "YOURAPIKEYGOESHERE"}  # TODO USE YOUR API KEY

Working with Tools

We’ll work with 3 tools —

1. A tool to move an image into the animals directory
2. A tool to move an image into the non_animals directory
3. And, just in case, a tool to move an image into the unknown directory

Now let’s define our tools — we need to write the method and write the definition for the LLM.

# Define the tools
# one for moving to animals directory
# one for moving to non animals directory
# and one for moving to unknown (just in case)
move_to_animals_dir_tool = {
    "name": "move_to_animals_dir",
    "description": """Move the Image that contains something animal related
                      to the animals directory""",
    "arguments": {}
}
move_to_non_animals_dir_tool = {
    "name": "move_to_non_animals_dir",
    "description": """Move the Image that does NOT contain anything animal
                      related to the non_animals directory""",
    "arguments": {}
}
move_to_unknown_dir_tool = {
    "name": "move_to_unknown_dir",
    "description": """Move the Image to the unknown directory because it
                      can not be determined if it contains an animal or not""",
    "arguments": {}
}

tools = [move_to_animals_dir_tool,
         move_to_non_animals_dir_tool,
         move_to_unknown_dir_tool]

We’ve just defined the tools we’ll be sending to Subnet 20. A couple of things to note —

  • Each tool is a dict of “name”, “description” and “arguments”
  • In this case, we don’t need arguments, since each method is defined to take the full action we need with no alterations
  • None of our actual method logic is being sent, just definitions of what each method does
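Since a malformed tool definition only fails once it reaches the API, a small validator can catch missing keys before anything is sent. This helper is my own addition, not part of GoGoAgent — just a sketch of the three-key shape each tool dict follows:

```python
REQUIRED_KEYS = {"name", "description", "arguments"}

def validate_tools(tools):
    """Return (index, missing_keys) pairs for malformed tool dicts."""
    problems = []
    for i, tool in enumerate(tools):
        missing = REQUIRED_KEYS - set(tool)
        if missing:
            problems.append((i, sorted(missing)))
    return problems

good = {"name": "move_to_animals_dir", "description": "...", "arguments": {}}
bad = {"name": "oops"}  # forgot description and arguments
print(validate_tools([good, bad]))  # -> [(1, ['arguments', 'description'])]
```

An empty result means every tool dict has all three keys and is safe to send.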

Let’s look at our actual methods, the business logic that will remain behind the scenes -

# our underlying business logic
# this code does not go to subnet20
def move_image_file(file, dirname):
    the_path = os.path.dirname(file)
    full_dirname = the_path + "/" + dirname + "/"
    if not os.path.exists(full_dirname):
        os.mkdir(full_dirname)
    # print(f"Moving {file} to {full_dirname}")
    shutil.move(file, full_dirname)

# we know to pass in the image file, don't need as tool argument
def move_to_unknown_dir(file):
    move_image_file(file, "unknown")

def move_to_animals_dir(file):
    move_image_file(file, "animals")

def move_to_non_animals_dir(file):
    move_image_file(file, "non_animals")

Here you can see that we’ve defined our 3 methods to be called and run locally as guided by Subnet 20. Notice that each method’s name matches the “name” field of its tool definition above.

One important note: while each method name matches its tool’s “name” field, the Python variable that holds the tool definition must not share the method’s name — otherwise the assignment would overwrite the function in the module namespace. That’s why we have a method named “move_to_animals_dir” and a separate tool-definition variable named “move_to_animals_dir_tool”.
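The reason the names must differ comes from how we'll dispatch later: the handler looks the method up by name in `globals()`, so rebinding a function's name to a dict would leave nothing callable behind. A minimal demonstration with a hypothetical `greet` function:

```python
def greet(who):
    return f"hello {who}"

# Safe: the tool dict lives under a different name, the function survives
greet_tool = {"name": "greet", "description": "say hello", "arguments": {}}
assert callable(globals()["greet"])  # still the function

# Unsafe: rebinding the same name replaces the function with the dict,
# so a later globals()[name](...) dispatch would crash
greet = {"name": "greet"}
print(callable(globals()["greet"]))  # -> False
```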

Final Building Blocks

We need the final building blocks that handle images, calls to the API and calls to our methods. Let’s work with images real quick -

# this is just a helper method to encode an image for passing via JSON
def get_img_str_from_image(image):
    with open(image, 'rb') as im:
        img_str = base64.b64encode(im.read()).decode('utf-8')
    return img_str

# call to gogoagent API to get a description of the image as it relates to
# our needs
def get_details_about_image(image):
    # base64 encode the image for sending to the API
    img_str = get_img_str_from_image(image)

    # Setup the messages / prompt to get details related to animal or not
    messages = [
        {"role": "user",
         "content": """Tell me about this image, I am most interested in knowing
                       if it contains anything related to an animal or not"""}
    ]

    files = [{'content': img_str, 'type': "Image"}]

    # Send the request to the API
    data = {"messages": messages, "files": files}
    result = requests.post(api_endpoint, headers=hdrs, json=data)

    return result.json()[0]['response']
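A quick round-trip check of the encoding approach the helper uses — base64 is lossless, so the API side can always recover the exact bytes. Self-contained sketch with a tiny temp file standing in for a real image:

```python
import base64, os, tempfile

# A tiny temp file stands in for a real image
fd, path = tempfile.mkstemp(suffix=".png")
with os.fdopen(fd, "wb") as f:
    f.write(b"\x89PNG fake bytes")

# Same encode step get_img_str_from_image performs
with open(path, "rb") as im:
    img_str = base64.b64encode(im.read()).decode("utf-8")

# The receiving side can recover the exact original bytes
round_trip = base64.b64decode(img_str)
print(round_trip == b"\x89PNG fake bytes")  # -> True
```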

The GoGoAgent API takes a few parameters that are described in GoGoAgent’s API Docs. In the code above, we’re passing two of them — files and messages. We use “messages” to prompt the LLM with our need, and “files” to pass in an attachment of type “Image” — our image.

Let’s take a look at an example of what the response looks like when GoGoAgent responds to our query about one of the images:

It sees that we’ve passed in the first image (the ladybug image) and recognizes that it contains an animal.

Let’s do one more, one of the other images that is not an animal:

For the 3rd image in our set, we have a rain cloud and GoGoAgent correctly tells us that the image does not contain an animal.

Ok, we’ve got images nailed down, let’s move on to tool calling …

def get_tool_call_based_on_response(response, tools):
    # Setup the messages for tool call
    messages = [
        {"role": "user",
         "content": """Tell me about this image, I am most interested in knowing
                       if it contains anything related to an animal or not"""},
        {"role": "assistant",
         "content": response},
        {"role": "user",
         "content": """I want to organize my photos, which tool should I use
                       for the image details provided?"""}
    ]

    # Send the request to the API
    data = {"messages": messages, "tools": tools}
    result = requests.post(api_endpoint, headers=hdrs, json=data)

    return result.json()[0]['response']

def handle_agent_tool_call(agent_response, image_file):
    method_to_call = None
    for resp in json.loads(agent_response['response']):
        if resp['role'] == "tool call":
            method_to_call = resp['content']['name']
            break

    if method_to_call:
        globals()[method_to_call](image_file)
    else:
        print("no method to call")

Here we’ve defined two methods for tool calling and handling. The first, “get_tool_call_based_on_response”, asks the GoGoAgent API which tool to use; the second, “handle_agent_tool_call”, parses the API’s tool call response and invokes the matching local method on our image.
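One design note: dispatching through `globals()` will happily call any function whose name appears in the response, so in production you may prefer an explicit allow-list. A hedged variant of the handler — the dispatch table and the lightweight stand-in methods are my additions, not the article's code:

```python
import json

def move_to_animals_dir(file):       # lightweight stand-ins for the
    return ("animals", file)         # real file-moving methods

def move_to_non_animals_dir(file):
    return ("non_animals", file)

# Explicit dispatch table: only these names can ever be invoked
DISPATCH = {
    "move_to_animals_dir": move_to_animals_dir,
    "move_to_non_animals_dir": move_to_non_animals_dir,
}

def handle_agent_tool_call_safe(agent_response, image_file):
    for resp in json.loads(agent_response["response"]):
        if resp["role"] == "tool call":
            name = resp["content"]["name"]
            if name not in DISPATCH:     # refuse anything off the list
                raise ValueError(f"unknown tool: {name}")
            return DISPATCH[name](image_file)
    return None

fake_response = {"response": json.dumps(
    [{"role": "tool call",
      "content": {"name": "move_to_animals_dir", "arguments": {}}}])}
print(handle_agent_tool_call_safe(fake_response, "ladybug.png"))
# -> ('animals', 'ladybug.png')
```

The table also doubles as documentation of exactly which side effects the agent is permitted to trigger.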

Let’s check it out!

Here we’re working with the first image which we know is a ladybug (from up above) and we’ve called to our tool calling method to use the GoGoAgent API to figure out which tool we should be calling of the 3 tools we defined.

Take a look at the output:

{'response': '[{"role": "tool call",
                "content": {"name": "move_to_animals_dir", "arguments": {}}},
               {"role": "assistant",
                "content": "Based on the information provided, the image
                            contains an animal (a ladybug), so you should
                            use the move_to_animals_dir tool to move the
                            image to the animals directory."}]',
 'citations': [], 'axon_status_code': 200, 'dendrite_status_code': 200}

The most important part is the “tool call” response which tells us the name of the method and the arguments to pass it. In this case, we see that it says to use the “move_to_animals_dir” method. The assistant response is just icing, we don’t need it right now.
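Pulling just the tool name out of a response shaped like that takes only a couple of lines of `json.loads`. The response dict here is a hand-built stand-in matching that shape, not a live API call:

```python
import json

# Hand-built response matching the shape shown above
api_result = {
    "response": json.dumps([
        {"role": "tool call",
         "content": {"name": "move_to_animals_dir", "arguments": {}}},
        {"role": "assistant",
         "content": "the image contains an animal (a ladybug)"},
    ]),
    "citations": [], "axon_status_code": 200, "dendrite_status_code": 200,
}

# Keep only the "tool call" entries; skip the assistant icing
tool_calls = [r["content"] for r in json.loads(api_result["response"])
              if r["role"] == "tool call"]
print(tool_calls[0]["name"])  # -> move_to_animals_dir
```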

Let’s look at our rain cloud image and what it returns in that case:

Check that out — it says to use the “move_to_non_animals_dir” method.

Great! We have GoGoAgent looking at our images, reviewing our tools and telling us which tool to use to meet our objective.

Piecing it all together

Let’s loop through our images, one-by-one and see how it goes.

# loop over each image
for img_file in glob.glob(f"{images_dir}*.png"):
    # get details about the image
    agent_response_about_image = get_details_about_image(img_file)
    # get the tool to use for this image selecting from our list of tools
    tool = get_tool_call_based_on_response(agent_response_about_image, tools)
    # call to that tool
    handle_agent_tool_call(tool, img_file)
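If you want to rehearse the whole pipeline before spending API calls, the two API-facing functions can be stubbed out and the loop run offline against dummy files. The stub logic below (filename-based "classification") is entirely mine — the real calls go to GoGoAgent:

```python
import glob, json, os, shutil, tempfile

def move_image_file(file, dirname):
    full_dirname = os.path.join(os.path.dirname(file), dirname)
    os.makedirs(full_dirname, exist_ok=True)
    shutil.move(file, full_dirname)

def move_to_animals_dir(file):
    move_image_file(file, "animals")

def move_to_non_animals_dir(file):
    move_image_file(file, "non_animals")

# Stubs standing in for the two GoGoAgent calls: pretend any filename
# containing "cat" is an animal image
def fake_get_details_about_image(image):
    return "contains an animal" if "cat" in os.path.basename(image) else "no animal here"

def fake_get_tool_call(details, tools):
    name = ("move_to_animals_dir" if details == "contains an animal"
            else "move_to_non_animals_dir")
    return {"response": json.dumps(
        [{"role": "tool call", "content": {"name": name, "arguments": {}}}])}

def handle_agent_tool_call(agent_response, image_file):
    for resp in json.loads(agent_response["response"]):
        if resp["role"] == "tool call":
            globals()[resp["content"]["name"]](image_file)
            break

# Fake image set in a throwaway directory
images_dir = tempfile.mkdtemp() + "/"
for name in ("cat.png", "cloud.png"):
    open(images_dir + name, "wb").close()

# Same loop as above, with the API calls stubbed out
for img_file in glob.glob(f"{images_dir}*.png"):
    details = fake_get_details_about_image(img_file)
    tool = fake_get_tool_call(details, [])
    handle_agent_tool_call(tool, img_file)

print(sorted(os.listdir(images_dir)))  # -> ['animals', 'non_animals']
```

Swapping the fakes back for `get_details_about_image` and `get_tool_call_based_on_response` gives you the live version.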

Here are our images before we run the script to organize them:

After we run the script, we’re left with 2 directories: animals and non_animals:

Let’s see what’s inside these directories …

Here’s the non_animals directory:

Here’s the animals directory:

Voila! Very cool — it all worked, and as you can see my animal images are all nicely separated out from my non-animal images.

For more information about BitAgent and Subnet20, please check out our living document and come see us in our discord channel.

And if you want to play with this notebook, you can find it in our git repo, here.

About Rizzo Network

Our journey within the Bittensor ecosystem is one of dedication and innovation, deeply rooted in the core values of transparency and community trust. As a validator, we pride ourselves on maintaining a wholesome and honest approach, ensuring that our contributions are both reliable and beneficial to the broader network.

We own and operate two subnets within the Bittensor ecosystem: SN20, known as BitAgent, and SN45, branded as Gen42. These subnets represent the foundation of our commitment to advancing AI-driven solutions that empower everyone from developers to IT professionals and those around the edges and in between.

Our flagship products showcase the power and potential of our subnets. The Gen42 code assistant seamlessly integrates into your IDE and developer environment, providing intelligent suggestions and automating code completion to enhance productivity and streamline the development process. This capability is designed to support developers in crafting high-quality code efficiently, allowing them to focus on more complex and creative aspects of their projects.

Our MSPTech AI agent, built on our BitAgent subnet, revolutionizes the way IT support teams handle their ticket flow and management. By automating the closure of IT tickets, MSPTech enables tech professionals to allocate their time and expertise to more pressing and value-added tasks that require the human touch. This not only improves operational efficiency but also contributes to a more agile and responsive IT support environment.

Together, our subnets and products exemplify our growing integration into the Bittensor ecosystem and our unwavering commitment to building honest, effective, and transformative AI solutions.

For more information, visit us at Rizzo Network (https://rizzo.network/).
