Where AI Falls Short: Why Optimization Still Needs Mathematical Muscle

Artificial Intelligence (AI), particularly Large Language Models (LLMs), has seen explosive progress in recent years. From natural language understanding to code generation and image synthesis, AI now powers a broad range of capabilities across industries. Organizations are embedding generative AI into chatbots, document analysis, and decision support, making knowledge work faster and more accessible.
However, there is a class of problems known as NP-hard (non-deterministic polynomial-time hard), in which the effort required to find the best solution grows so rapidly that even supercomputers struggle as problem size increases. These challenges are common in supply chain planning, sequencing, and scheduling. Despite their remarkable progress, LLMs are not suited to solving NP-hard optimization problems that demand guaranteed feasibility and true optimality. Such problems require specialized mathematical and heuristic approaches that lie far beyond the generative strengths of language models.
LLMs excel at pattern recognition, probabilistic reasoning, and generating plausible outputs based on past data. However, real-world supply chain optimization involves:
- Combinatorial Complexity: Scheduling 100 jobs across 10 machines with constraints on timing, capacity, and precedence creates an astronomically large number of candidate sequences, far beyond billions. LLMs cannot search this solution space efficiently or deterministically.
- Feasibility Checking: A good plan isn’t just one that “sounds right”; it must obey hard constraints such as machine capacities, maintenance windows, and delivery deadlines. LLMs cannot guarantee feasibility or validate constraint satisfaction.
- Optimality Guarantees: LLMs lack mechanisms to mathematically evaluate and improve plans based on cost, time, energy, or other KPIs. There is no objective function or optimization loop inherent in LLM inference.
- Inability to Iterate and Improve: Real-world optimization requires iterative search, neighborhood exploration, and adaptive memory, the building blocks of metaheuristics such as Genetic Algorithms, Tabu Search, and Simulated Annealing (a minimal sketch of such a search loop follows this list). LLMs have no such capabilities.
- No Control over Decision Variables: Planning problems require precise control of variables and logic. LLMs can describe or imitate planning logic, but they cannot internally simulate or evaluate the effect of changing one variable on the overall outcome.
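
To make the contrast concrete, here is a minimal sketch of how a metaheuristic iterates toward a better plan, using Simulated Annealing on a toy single-machine sequencing problem. All job data, parameter values, and function names are invented for illustration and are not drawn from any real planning system.

```python
import math
import random

# Toy data: (processing_time, due_date, weight) for 8 jobs on one machine.
jobs = [
    (4, 10, 2), (3, 6, 1), (7, 25, 3), (2, 8, 1),
    (5, 20, 2), (6, 15, 3), (3, 30, 1), (4, 12, 2),
]

def cost(order):
    """Objective function: total weighted tardiness of a job sequence."""
    t, total = 0, 0
    for j in order:
        p, due, w = jobs[j]
        t += p
        total += w * max(0, t - due)
    return total

def neighbor(order):
    """Neighborhood move: swap two randomly chosen positions."""
    i, k = random.sample(range(len(order)), 2)
    new = order[:]
    new[i], new[k] = new[k], new[i]
    return new

def simulated_annealing(iters=20000, temp=50.0, cooling=0.9995):
    current = list(range(len(jobs)))
    random.shuffle(current)
    best = current[:]
    for _ in range(iters):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools, to escape local optima.
        if delta <= 0 or random.random() < math.exp(-delta / temp):
            current = candidate
            if cost(current) < cost(best):
                best = current[:]
        temp *= cooling
    return best, cost(best)

if __name__ == "__main__":
    order, obj = simulated_annealing()
    print("best order:", order, "weighted tardiness:", obj)
```

The loop explicitly evaluates an objective function on every candidate sequence and keeps only the best one found, exactly the kind of search-and-evaluate mechanism that LLM inference does not provide.
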
Tackling NP-hard problems requires mathematical programming, for example Mixed-Integer Linear Programming (MILP) and constraint solvers, combined with metaheuristics tailored to the domain; a small MILP sketch follows the list below. These tools:
- Explicitly model constraints and objectives
- Systematically search large solution spaces
- Provide repeatable, auditable, and scalable solutions
- Integrate domain-specific knowledge into the optimization process
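
As an illustration of explicitly modeled constraints and objectives, the following is a minimal MILP sketch, assuming the open-source PuLP modeler. The jobs, processing times, and variable names are hypothetical.

```python
# Assign 4 jobs to 2 machines so that the latest machine finish time (makespan)
# is minimized. All data is invented purely for illustration.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

jobs = {"J1": 4, "J2": 3, "J3": 7, "J4": 5}   # processing times
machines = ["M1", "M2"]

prob = LpProblem("job_assignment", LpMinimize)

# x[j][m] = 1 if job j runs on machine m (binary decision variables)
x = {j: {m: LpVariable(f"x_{j}_{m}", cat="Binary") for m in machines} for j in jobs}
makespan = LpVariable("makespan", lowBound=0)

prob += makespan                                   # objective: minimize makespan

for j in jobs:                                     # every job assigned exactly once
    prob += lpSum(x[j][m] for m in machines) == 1

for m in machines:                                 # machine load bounds the makespan
    prob += lpSum(jobs[j] * x[j][m] for j in jobs) <= makespan

prob.solve()
print(LpStatus[prob.status], "makespan =", value(makespan))
for j in jobs:
    for m in machines:
        if value(x[j][m]) > 0.5:
            print(j, "->", m)
```

Because the constraints and objective are stated explicitly, the solver can prove optimality or report infeasibility, guarantees that no language model can offer by construction.
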
In contrast, LLMs are non-deterministic and non-reproducible, often suggesting plans that appear logical but fail under real-world scrutiny.
LLMs still play a valuable role as front-end assistants, scenario explainers, and constraint translators. For example, they can:
- Convert natural-language requirements into a mathematical model (a hypothetical sketch follows this list)
- Explain an existing schedule and perform output and what-if analyses
- Suggest alternative or what-if scenarios and trigger the optimization engine to evaluate them
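
As a sketch of what such a translation might look like, consider turning a planner's sentence, "Machine M2 is down for maintenance from hour 20 to hour 24", into a formal constraint that the solver enforces. The snippet below reuses the hypothetical PuLP-style modeling from above; the maintenance window, job data, big-M value, and variable names are all invented for illustration.

```python
from pulp import LpProblem, LpMinimize, LpVariable, LpStatus, value

DOWN_START, DOWN_END, BIG_M = 20, 24, 1000
durations = {"J1": 6, "J2": 5}           # two jobs already placed on machine M2

prob = LpProblem("maintenance_window", LpMinimize)
start = {j: LpVariable(f"start_{j}", lowBound=0) for j in durations}
before = {j: LpVariable(f"before_{j}", cat="Binary") for j in durations}
finish = LpVariable("finish", lowBound=0)

prob += finish                            # objective: finish both jobs as early as possible
for j in durations:
    # Each job either ends before the window opens or starts after it closes,
    # selected by the binary variable before[j] via a big-M disjunction.
    prob += start[j] + durations[j] <= DOWN_START + BIG_M * (1 - before[j])
    prob += start[j] >= DOWN_END - BIG_M * before[j]
    prob += finish >= start[j] + durations[j]

# Jobs on the same machine cannot overlap (a fixed order keeps the sketch short).
prob += start["J2"] >= start["J1"] + durations["J1"]

prob.solve()
print(LpStatus[prob.status], {j: value(start[j]) for j in durations}, "finish =", value(finish))
```

In such a workflow, the LLM drafts the constraint from the planner's words, while the optimization engine remains responsible for checking feasibility and finding the optimal schedule.
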
But the heavy lifting must be done by robust optimization engines.
In conclusion, while AI has forever changed how we interact with technology, mathematics and domain expertise remain irreplaceable when it comes to solving hard industrial optimization problems. Organizations that pair LLMs with state-of-the-art optimization engines will lead in efficiency, accuracy, and agility.