Researchers Discover the Optimal Way to Optimize
A chance homework mix-up in 1939 led a young George Dantzig to solve two famous open problems in statistics; in 1947 he went on to devise the simplex method, the workhorse of modern linear programming used to optimize supply chains, logistics, and resource allocation under constraints. For decades, practitioners have trusted simplex because it’s fast in real-world use. Yet theory long cast doubt: classic results, most famously the 1972 Klee–Minty construction, showed that it can take exponentially many steps in the worst case, creating a mismatch between practice and proof.
A new advance helps close that gap. In work to be presented at the Foundations of Computer Science (FOCS) conference, Sophie Huiberts (CNRS) and Eleon Bach (Technical University of Munich) sharpen the leading theoretical analysis of simplex and show that it’s essentially as fast as possible within that framework. Building on the influential 2001 “smoothed analysis” of Daniel Spielman and Shang-Hua Teng, which explained how tiny randomness in the input data tames worst-case behavior and guarantees polynomial-time performance, the new paper both improves the best known runtime bounds and proves that those bounds are optimal for this approach.
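To make the smoothed-analysis setup concrete, here is a minimal sketch in Python. The libraries (NumPy, SciPy), the tiny example LP, and the noise scale sigma are all illustrative assumptions not drawn from the article: the model perturbs every input coefficient with small Gaussian noise and asks how the solver behaves on average over such perturbed instances.

```python
# Minimal sketch of the smoothed-analysis setup (illustrative only):
# perturb an LP's data with small Gaussian noise, then solve the result.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(seed=0)

# A tiny hypothetical LP: maximize x + y subject to
# x + 2y <= 4 and 3x + y <= 6, with x, y >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([-1.0, -1.0])
A = np.array([[1.0, 2.0], [3.0, 1.0]])
b = np.array([4.0, 6.0])

sigma = 0.01  # noise scale; smoothed bounds grow polynomially in 1/sigma

# The smoothed-analysis model: every input coefficient gets Gaussian noise.
A_noisy = A + sigma * rng.standard_normal(A.shape)
b_noisy = b + sigma * rng.standard_normal(b.shape)

res = linprog(c, A_ub=A_noisy, b_ub=b_noisy,
              bounds=[(0, None), (0, None)], method="highs")
print("perturbed optimum:", res.x, "objective value:", -res.fun)
```

Spielman and Teng’s theorem concerns the expected running time averaged over exactly this kind of perturbation; the smaller the noise, the weaker the guarantee.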
Why this matters: the simplex method navigates the edges of a polyhedron defined by linear constraints to reach the best solution. Without guidance, that walk could take an exponentially long path. Smoothed analysis says real-world noise prevents that. Spielman and Teng’s guarantee was polynomial but with a hefty exponent (including a term around the 30th power). Huiberts and Bach introduce more refined randomness into the analysis to cut that exponent dramatically, and then show that you cannot do better within the smoothed-analysis framework for simplex. In short, the model is now tight and fully understood.
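To visualize the edge-walking picture, here is a toy textbook tableau simplex, a hedged sketch rather than the production solvers or the pivot rules analyzed in the paper. It assumes a maximization LP with inequality constraints and a nonnegative right-hand side, and it omits degeneracy handling; each pivot corresponds to walking along one edge of the feasible polyhedron to a better vertex.

```python
import numpy as np

def simplex(c, A, b):
    """Textbook tableau simplex for: maximize c @ x s.t. A @ x <= b, x >= 0,
    assuming b >= 0 so the slack basis is feasible (no degeneracy handling)."""
    m, n = A.shape
    # Build the tableau with slack variables; the last row holds the objective.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c          # reduced costs; a negative entry can improve
    basis = list(range(n, n + m))
    while True:
        col = int(np.argmin(T[-1, :-1]))       # most negative reduced cost
        if T[-1, col] >= -1e-9:
            break                              # optimal vertex reached
        ratios = np.where(T[:m, col] > 1e-9, T[:m, -1] / T[:m, col], np.inf)
        row = int(np.argmin(ratios))
        if not np.isfinite(ratios[row]):
            raise ValueError("LP is unbounded")
        T[row] /= T[row, col]                  # pivot: move to adjacent vertex
        for r in range(m + 1):
            if r != row:
                T[r] -= T[r, col] * T[row]
        basis[row] = col
    x = np.zeros(n + m)
    x[basis] = T[:m, -1]
    return x[:n], T[-1, -1]

# Same tiny example LP as above: optimum at the vertex (1.6, 1.2), value 2.8.
x, value = simplex(np.array([1.0, 1.0]),
                   np.array([[1.0, 2.0], [3.0, 1.0]]),
                   np.array([4.0, 6.0]))
print("optimum:", x, "value:", value)
```

The worst-case question, and the one smoothed analysis answers, is how many such pivots the loop above can take before stopping.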
Experts praise the result as a masterful synthesis that clarifies why simplex is reliably fast in practice. While the work is primarily theoretical and won’t immediately change off-the-shelf solvers, it strengthens confidence that today’s linear programming software delivers polynomial-time behavior on real data, not exponential blowups.
What’s next? The aspirational “North Star” is linear-time performance, a runtime that scales directly with the number of constraints. Achieving that would likely require a fundamentally new idea, and researchers don’t expect it soon. For now, the field has a crisp explanation for simplex’s practical efficiency, and a proof that, within smoothed analysis, the current approach is as good as it gets.
Originally reported by Quanta Magazine and republished by WIRED.
Source: https://www.wired.com/story/researchers-discover-the-optimal-way-to-optimize/