In Today’s Episode of Unintentional Humor
A Deep Dive Into Problem-Solving With Large Language Models
The jug problem, but simplified
UPDATE: As of the release of GPT o1-preview, LLMs have a genuinely effective way of reasoning. They can now solve problems requiring logic, including the one they were unable to solve in this article. Ex:
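If you want to try this yourself, here's a rough sketch of how you could pose a simplified jug puzzle to o1-preview through the OpenAI Python SDK. The puzzle wording below is just an illustrative stand-in, not the exact prompt from this article:

```python
# A minimal sketch (not the article's exact prompt): asking o1-preview a
# simplified jug puzzle via the OpenAI Python SDK. The puzzle text here is
# an assumed example for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

puzzle = (
    "I have a 6-liter jug and a 12-liter jug. "
    "How do I measure exactly 6 liters of water?"
)

# o1-preview reasons internally before answering; at launch it accepted
# only user messages (no system prompt, no temperature override).
response = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": puzzle}],
)

print(response.choices[0].message.content)
```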
With Chain of Thought reasoning, ChatGPT is now able to think through the steps and give the most logical (hopefully) answer. You can also view the thought process:
Interestingly, the thought process is completely different from the answer. I find myself doing this in my head sometimes: I'll over-engineer the solution to a problem, then, just before sharing it, realize my mistake. Anyway, back to your regularly scheduled program! (which, it seems, now only applies to older models).
I love LLMs, and I think they're part of our path to AGI, if not the answer itself. Many believe we're nearly there…