LLM-driven human-like capabilities: Critiquing, Planning, and Reasoning

Raunak Jain
2 min read · Apr 6, 2024

Scaffolding and critiquing LLMs for “autonomous” problem-solving skills

In production applications of LLMs, specificity and control are more important than unconstrained creativity. Conditional generation and guardrails matter more than seemingly eloquent text outputs.

Before we dive into the practical problems with putting LLMs into production, like context length, brittle prompts, and cost, let’s ask ourselves: why are we trying so hard to make LLMs work? One possible answer is to reduce the human effort in building and maintaining automation systems. The promise of “autonomous” capabilities is what is driving so much interest and investment from enterprises.

Autonomous abilities to do what? To look at past data, past interactions, and data repositories, and to solve well-defined, repeatable patterns of problems (perhaps through a conversational interface) with correct reasoning and explainability. In effect, to increase the value of an individual engineer, service agent, marketing plan, etc. As is evident from the increasing adoption of Copilots, a well-integrated, human-in-the-loop LLM ecosystem can increase a human’s output by automating repeatable tasks. See this video for an example of Microsoft Copilot’s capabilities, and this one for a GitHub Copilot deep dive.

With this goal in mind, we will look at how to give LLMs a framework of critiques, scaffolds, and enrichment so they can make autonomous decisions and build forward-looking plans. See this tutorial series to blow your mind away ;)
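To make the idea of critiquing concrete, here is a minimal sketch of a generate-critique-refine loop. The `call_llm` function is a toy stub standing in for a real model call (an actual implementation would hit an LLM API); the control flow around it is the point.

```python
def call_llm(prompt: str) -> str:
    # Toy stub standing in for a real LLM API call.
    # It "approves" answers that contain step-by-step reasoning.
    if prompt.startswith("Critique:"):
        return "OK" if "step-by-step" in prompt else "Add step-by-step reasoning."
    if "Address this critique" in prompt:
        return "step-by-step: gather data, analyze, act"
    return "just do it"

def generate_with_critique(task: str, max_rounds: int = 3) -> str:
    """Draft an answer, ask the model to critique it, and revise
    until the critic approves or the round budget runs out."""
    draft = call_llm(f"Solve: {task}")
    for _ in range(max_rounds):
        verdict = call_llm(f"Critique: {draft}")
        if verdict == "OK":
            break  # critic is satisfied; stop refining
        # Feed the critique back in and ask for a revised answer.
        draft = call_llm(f"Solve: {task}\nAddress this critique: {verdict}")
    return draft
```

The same loop structure works with a real model in both roles, or with a separate, stronger critic model; the critic acts as the guardrail that trades raw creativity for control.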


What is actionable knowledge?

What do I mean by scaffolding?

What do I mean by critiquing?

Planning and reasoning systems

Plansformer: Generating Symbolic Plans using Transformers
