Pinned · Dan Cleary
Latency Benchmarks and Comparisons for OpenAI, Azure, and Anthropic
We’ve launched a newsletter where we update the latency numbers for the major model providers every month. Free to sign up…
Oct 13, 2023
Pinned · Dan Cleary
10 Best Practices for Prompt Engineering with Any Model
Writing effective prompts that drive consistent results can feel more like an art than a science.
Jun 30, 2023
Pinned · Dan Cleary in Better Programming
Understanding Prompt Injections and What You Can Do About Them
From chatbots to virtual assistants, AI models are transforming our interactions
Jun 30, 2023
Dan Cleary
Using LLMs to enhance your prompts with tailored knowledge without any technical setup
Large Language Models (LLMs) have an extensive knowledge base, having been trained on virtually all text available on the internet. When…
3d ago
Dan Cleary
How small changes in a prompt can lead to wildly different results, and what you can do about it
If you’ve spent any time writing prompts, you’ve probably noticed just how sensitive LLMs are to minor changes in the prompt. For example…
Jul 8
Dan Cleary
Using LLMs for Code Generation: A Guide to Improving Accuracy and Addressing Common Issues
LLMs are great at generating text, which makes them pretty good at writing code. But, in their current state, using LLMs for code…
Jun 21
Dan Cleary
Prompt Engineering for Content Creation
One of the first use cases I tried with an LLM was generating content. The prompt was probably something along the lines of “write a blog…
Jun 10
Dan Cleary
Prompt Patterns: What They Are, Which You Should Use, and Free Templates
The main reason we frequently publish on our blog is to help spread helpful information about prompt engineering and make it easier for…
May 23
Dan Cleary
Fine-Tuning vs Prompt Engineering
In general, there are currently three methods to get better outputs from LLMs.
May 14
Dan Cleary
Few-shot prompting: What it is, when to use it, examples, limitations and biases
One of the best ways to get better outputs from LLMs is to include examples in your prompt. This method is called few-shot prompting (a…
Apr 27
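
The last entry above describes the core of few-shot prompting: placing a handful of worked examples in the prompt ahead of the real input so the model can infer the task format. Here is a minimal, self-contained Python sketch of that idea; the sentiment-classification task, the example reviews, and the labels are hypothetical illustrations, not taken from the post.

```python
# A minimal sketch of few-shot prompting: the prompt carries a few
# worked examples before the real input, so the model continues the
# pattern instead of guessing the task. Task and examples are
# hypothetical, chosen only for illustration.

EXAMPLES = [
    ("The movie was a masterpiece.", "positive"),
    ("I want my two hours back.", "negative"),
    ("It was fine, nothing special.", "neutral"),
]

def build_few_shot_prompt(new_input: str) -> str:
    """Prepend labeled examples to the new input, ending with an
    open 'Sentiment:' line for the model to complete."""
    lines = ["Classify the sentiment of each review.", ""]
    for review, label in EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_input}")
    lines.append("Sentiment:")  # the model fills in this line
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_few_shot_prompt("The pacing dragged, but the ending landed."))
```

The resulting string would be sent to whichever LLM you use as an ordinary prompt; the examples, not any special API feature, are what make it few-shot.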