
LLMs for Optimization

Berk Orbay
Sep 14, 2023


Specialized LLMs are known to perform much better on specific tasks than generalized applications (which are, to be honest, still quite impressive). For instance, LLM specializations in the medical domain ([1] and [2]) are rapidly taking over. Even though I'm not actively looking, I occasionally come across LLM implementations for optimization tasks.

Here are two example papers.


Diagnosing Infeasible Optimization Problems Using Large Language Models by Chen et al. (2023) (Arxiv link) proposes a tool called “OptiChat”. OptiChat is a GPT agent whose main objective is to help you interactively address the defects in your mathematical model (i.e., constraints) that cause infeasibility. Under the hood, it connects GPT-4 with solvers (Gurobi, Mosek) and algebraic modeling languages (Pyomo).

Below is an example screenshot taken from their work. OptiChat assumes the role of an optimization expert and holds up a conversation with the problem owner in natural language.

OptiChat example prompts and responses (taken from the paper)
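To make the idea concrete, here is a minimal pure-Python sketch of the kind of loop OptiChat automates: detect that a model is infeasible, isolate the conflicting constraints, and hand them to an LLM as a prompt. Everything below (the toy one-variable model, the function names, the canned prompt text) is my own illustration, not code from the paper; the real tool drives GPT-4 and Gurobi/Mosek through Pyomo instead of these mocks.

```python
# Toy sketch of an OptiChat-style infeasibility diagnosis loop.
# All names here are hypothetical; both the model and the LLM call
# are mocked for illustration.

def find_conflict(bounds):
    """Intersect interval constraints on one variable; return a
    conflicting pair of constraint names if the feasible region is
    empty, else None."""
    lo_name, lo = max(((n, l) for n, (l, _) in bounds.items()), key=lambda t: t[1])
    hi_name, hi = min(((n, u) for n, (_, u) in bounds.items()), key=lambda t: t[1])
    return (lo_name, hi_name) if lo > hi else None

def explain_with_llm(conflict, bounds):
    """Stand-in for a GPT-4 call: format the conflicting constraints
    into a prompt for a plain-language diagnosis."""
    a, b = conflict
    prompt = (f"The model is infeasible. Constraint '{a}' requires "
              f"x >= {bounds[a][0]}, while '{b}' requires x <= {bounds[b][1]}. "
              "Explain the conflict to the problem owner.")
    return prompt  # a real agent would send this to the LLM

# Constraints expressed as lower/upper bounds on a single variable x.
constraints = {
    "min_production": (100.0, float("inf")),   # x >= 100
    "capacity_limit": (float("-inf"), 80.0),   # x <= 80
}

conflict = find_conflict(constraints)
if conflict:
    print(explain_with_llm(conflict, constraints))
```

In the real system, the conflict-isolation step is done by the solver (e.g., an irreducible infeasible subsystem), not by hand-rolled interval checks; the sketch only shows the shape of the pipeline.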

The paper also reports user feedback on the OptiChat experience.

User feedback (taken from the paper)

LLMs as Optimizers

Large Language Models as Optimizers by Yang et al. (2023) (Arxiv Link) is another fresh attempt to use LLMs for optimization tasks. Their approach is called Optimization by Prompting (OPRO).

OPRO flowchart (taken from paper)

If I understand correctly, OPRO takes a natural-language task description (the meta-prompt) as a starting point, generates candidate solutions with an LLM, scores them, and feeds the best solution-score pairs back into the next prompt.

Example meta-prompt and solution-score pairs (taken from the paper)
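As a rough illustration of that loop, here is a toy OPRO-style sketch on linear regression, one of the tasks the paper tests. The LLM is mocked by a random perturbation of the best solution so far, and the meta-prompt text is my own assumption, not the paper's wording.

```python
import random

random.seed(0)

# Ground-truth data for a toy linear regression: y = 3x + 2 (no noise).
xs = list(range(10))
ys = [3 * x + 2 for x in xs]

def score(w, b):
    """Negative squared error; higher is better."""
    return -sum((w * x + b - y) ** 2 for x, y in zip(xs, ys))

def build_meta_prompt(history):
    """Format past solution-score pairs, best first, the way an
    OPRO meta-prompt would (text is illustrative)."""
    lines = ["Below are previous (w, b) pairs with their scores.",
             "Propose a new pair with a higher score."]
    for (w, b), s in sorted(history.items(), key=lambda kv: -kv[1])[:5]:
        lines.append(f"w={w}, b={b}, score={s}")
    return "\n".join(lines)

def mock_llm_propose(history):
    """Stand-in for the LLM optimizer: perturb the best solution.
    A real LLM would read the meta-prompt and answer in text."""
    (w, b), _ = max(history.items(), key=lambda kv: kv[1])
    return w + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1])

history = {(0, 0): score(0, 0)}
for step in range(200):
    _ = build_meta_prompt(history)          # would be sent to the LLM
    w, b = mock_llm_propose(history)
    history[(w, b)] = score(w, b)

best, best_score = max(history.items(), key=lambda kv: kv[1])
print(best, best_score)
```

The mock makes OPRO look like a plain random hill-climb, which undersells it: the interesting claim in the paper is that the LLM can infer a search direction from the solution-score history in the prompt, rather than perturbing blindly.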

They test the algorithm on well-known optimization tasks such as linear regression and the traveling salesman problem (TSP). They then apply it to prompt optimization on the GSM8K and Big-Bench Hard (BBH) benchmarks. They are realistic about the current results but optimistic about the future, and a detailed appendix elaborates on strengths and weaknesses.
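One reason tasks like TSP fit this framework is that the evaluator is cheap: each LLM-proposed tour only needs a scoring function. An illustrative tour-length scorer (my own sketch, not code from the paper):

```python
import math

def tour_length(tour, coords):
    """Total length of a closed tour visiting every city once.
    In an OPRO loop this would score each LLM-proposed tour."""
    return sum(
        math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

# Four cities on a unit square; the perimeter tour has length 4.
coords = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_length([0, 1, 2, 3], coords))  # prints 4.0
```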


Recent works combining optimization algorithms with LLMs are popping up, both for model building and for finding better solutions. These two papers are naturally not the only ones, but they are good showcases. You are welcome to send links to more relevant studies. I expect both the number of such works and the quality of their solutions to increase rapidly.

So, will machines take away optimization jobs? I surely hope so, for many applications; then we can get on with something more exciting. For the immediate future, though, it is not probable. As with coders, designing the process and describing the problem properly is still the human domain.



Berk Orbay

Current main interests are #OR and #RL. You may reach me on LinkedIn.