DSPy Essentials: Mastering Model Tuning with Optimizers

Frederick Ros
11 min read · Apr 4, 2024

Picking up where we left off in our DSPy series: so far we've covered how to articulate our goals and instruct an LLM to achieve them. Now it's time to dive into a crucial component: Optimizers. Fun fact: these were once known as teleprompters, a nod to their function of "prompting from afar," but they have since been rebranded.

First things first, let’s set up our environment using the Modules we introduced last time and revisit the examples we've crafted.

from dotenv import load_dotenv
load_dotenv()  # load API keys (e.g. OPENAI_API_KEY) from a .env file

import dspy

# GPT-3.5 Turbo is the default LM for all DSPy modules
gpt3 = dspy.OpenAI(model='gpt-3.5-turbo', max_tokens=4096)
dspy.settings.configure(lm=gpt3)

# A local model served through Ollama
stablelm = dspy.OllamaLocal(model='stablelm2', max_tokens=4096)

class ActionItems(dspy.Signature):
    """Extract action items"""
    text = dspy.InputField(desc="A transcript from a discussion.")
    action_items = dspy.OutputField(desc="a comma-separated list of action items")


class ActionItemsExtractor(dspy.Module):
    def __init__(self):
        super().__init__()
        self.tp = dspy.ChainOfThought(ActionItems)

    def forward(self, transcription):
        return self.tp(text=transcription)

class Facts(dspy.Signature):
    """Extract facts that are not action_items"""
    text = dspy.InputField(desc="A transcript from a discussion.")
    action_items = dspy.InputField(desc="Already extracted action items")
    # Description truncated in the original; reconstructed to mirror ActionItems
    facts = dspy.OutputField(desc="A comma-separated list of facts")
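
With the modules in place, a quick smoke test of the action-item extractor might look like the following. This is a minimal sketch; the transcript is made up for illustration and isn't the example from the earlier posts.

extractor = ActionItemsExtractor()

# A made-up transcript, just to check the pipeline runs end to end
transcript = (
    "Alice: We need to finalize the Q3 budget by Friday. "
    "Bob: I'll draft the slides and send them to the team on Monday."
)

prediction = extractor(transcription=transcript)
print(prediction.action_items)

Because ActionItemsExtractor wraps dspy.ChainOfThought, the returned prediction also carries a rationale field alongside action_items.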
