Proven Paths for Supply Chain Maths
Generative AI, the latest sensation in the AI space, is all the hype these days. But many challenging business problems, especially quantitative ones, are still best solved with traditional methods. Just because there’s a new “cool kid on the block” doesn’t mean we should overlook time-tested tools that have consistently delivered real-world results.
A perfect example of this is how Peak has solved a purchase order optimisation problem using simple linear programming. This allows our customers in the CPG & Manufacturing space to place orders that achieve the optimal stock holding at each location in their supply chain: enough to meet target stock levels, whilst adhering to various ordering constraints.
Supply Chain Chaos
What do things look like without “AI”? What problem needs solving in the first place? A key process in keeping supply chains running smoothly is the act of raising purchase orders. These are like shopping lists that a company sends to a supplier when it needs to buy things: each one tells the supplier what is wanted and how much, so that everyone can plan to get the items delivered on time. Consider all of the different data sources that a supply chain planner must draw upon during this process:
- The existing stock levels and any stock that is due to arrive over the planning period.
- The expected demand over the same time, derived from a demand forecast.
- The target safety stock levels needed to offset any demand volatility for achieving the desired service level (read more about how Peak forecasts safety stock here).
- Order parameters like lead time and review period, which can differ at a product level.
- Both product and supplier level ordering constraints, such as minimum order amounts.
Accounting for all such data sources, even with the help of ERP (Enterprise Resource Planning) software, is no small feat. What’s more, planners often work in silos and build up location- or sector-specific knowledge, which can result in inconsistent ordering methods across the same supply chain. This in turn leads to suboptimal ordering decisions that neither maximise fulfilled demand nor minimise stock holding costs. Throw in a few legacy systems with poor user interfaces, and an inability to adapt to ever-changing business conditions, and you have the perfect cost (and time) saving opportunity.
Establishing Order
How do we bring order to the chaos? We need a method of consistently recommending optimal purchase orders for locations across a customer’s supply chain, one that makes use of all of the data sources listed above whilst adhering to various business constraints at both the product and supplier level.
Enter linear programming
Linear programming is a method used to find the best outcome in a situation with limited business resources, like time or money. It involves deciding how to allocate these resources most effectively to achieve a desired result, like maximising profit or minimising cost, within certain limits defined by business constraints. Assuming all such constraints are linear, mathematical equations can be used to make these decisions efficiently and arrive at the globally optimal result. For this reason, linear programming (LP) lends itself naturally to problems where business requirements do not break this assumption of linearity. Learn more about it here.
From “Biz Talk” to “Code Walk”
One of the most important steps in the overall process is to ensure that all business requirements are translated correctly into constraints that the linear program must enforce (think of this as the requirements-gathering step of any applied data science or software project). In our case, these come in the form of various ordering requirements and constraints:
“I can only place an order every R days.”
“Any order I place, takes L days to arrive.”
“I need at least Q stock to meet my forecasted demand and safety stock requirements.”
“I need orders of each product to be at least M for the supplier to accept it.”
“My products come in packs or pallets of fixed size P.”
“The total order quantity in my purchase order has to be at least S for the supplier to accept it.”
“I want to hold the bare minimum stock to achieve all of the above.”
It’s worth noting that we assume a given product at a given location is supplied by just a single supplier, and so the requirements above can be considered for each supplier-location pair separately.
At this point we recommend formalising the problem mathematically before writing any code, so that all such requirements are listed concisely and explicitly, like so:
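As an illustration, a simplified formulation for a single supplier-location pair, built purely from the requirements above (not necessarily the exact model deployed for any given customer), might look like the following:

```latex
% Illustrative formulation for a single supplier-location pair.
% Sets:  P = products, T = days in the planning horizon, T_R ⊆ T = review (ordering) days.
% Data:  d_{p,t} forecast demand, ss_{p,t} safety stock target, L lead time, R review period,
%        P_p pack size, M_p product minimum order quantity, S supplier minimum order total,
%        s_{p,-1} opening stock, U a suitably large upper bound.
% Vars:  q_{p,t} order quantity, n_{p,t} packs ordered, s_{p,t} closing stock,
%        y_{p,t} / z_t indicators that product p / any product is ordered on day t.
\begin{align*}
\min \quad & \sum_{p \in P} \sum_{t \in T} s_{p,t} && \text{(hold the minimum total stock)}\\
\text{s.t.} \quad & s_{p,t} = s_{p,t-1} + q_{p,\,t-L} - d_{p,t} && \text{(stock balance; orders arrive $L$ days after being placed)}\\
& s_{p,t} \ge ss_{p,t} && \text{(cover forecast demand and safety stock)}\\
& q_{p,t} = P_p\, n_{p,t}, \quad n_{p,t} \in \mathbb{Z}_{\ge 0} && \text{(order in whole packs/pallets of size $P_p$)}\\
& M_p\, y_{p,t} \le q_{p,t} \le U\, y_{p,t}, \quad y_{p,t} \in \{0,1\} && \text{(product minimum order quantity $M_p$, if ordered)}\\
& \sum_{p \in P} q_{p,t} \ge S\, z_t, \quad y_{p,t} \le z_t, \quad z_t \in \{0,1\} && \text{(supplier minimum total order quantity $S$)}\\
& q_{p,t} = 0 \quad \forall\, t \notin T_R && \text{(orders only on review days, i.e. every $R$ days)}
\end{align*}
```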
From here, encoding the above into a form that is interpretable by popular linear programming solvers like CBC is easily facilitated through frameworks like PuLP.
The figure above displays a high-level design of the solution, showing how data flows from the input sources, through the optimiser in which optimised purchase orders are derived, and on towards consumption by downstream components. Read on to find out more about the solver relaxation step!
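To make that encoding step concrete before going deeper, here is a minimal, self-contained sketch using PuLP with its bundled CBC solver, for a single supplier-location pair over a two-week horizon. All data values and names are illustrative assumptions rather than Peak’s production code:

```python
import pulp

# --- Illustrative inputs (all values are made up for this sketch) -------------
products = ["A", "B"]
horizon = range(14)                       # planning horizon in days
review_days = [0, 7]                      # an order can be placed every R = 7 days
lead_time = 2                             # L: days for an order to arrive
opening_stock = {"A": 60, "B": 40}
demand = {(p, t): 5 for p in products for t in horizon}   # flat demand forecast
safety_stock = {"A": 20, "B": 15}         # target safety stock per product
pack_size = {"A": 10, "B": 5}             # P: pack/pallet size
product_moq = {"A": 20, "B": 10}          # M: minimum order quantity per product
supplier_moq = 50                         # S: minimum total quantity per order
big_m = 10_000                            # loose upper bound on any order quantity

prob = pulp.LpProblem("purchase_order_optimisation", pulp.LpMinimize)

# Decision variables: whole packs ordered per product on each review day, plus
# indicators for "this product was ordered" and "any order was placed".
packs = pulp.LpVariable.dicts("packs", (products, review_days), lowBound=0, cat="Integer")
ordered = pulp.LpVariable.dicts("ordered", (products, review_days), cat="Binary")
any_order = pulp.LpVariable.dicts("any_order", review_days, cat="Binary")
stock = pulp.LpVariable.dicts("stock", (products, horizon), lowBound=0)

def order_qty(p, t):
    """Units of product p ordered on day t (zero outside review days)."""
    return pack_size[p] * packs[p][t] if t in review_days else 0

# Stock balance and safety stock cover for every product and day.
for p in products:
    for t in horizon:
        arriving = order_qty(p, t - lead_time)            # orders land after the lead time
        previous = opening_stock[p] if t == 0 else stock[p][t - 1]
        prob += stock[p][t] == previous + arriving - demand[(p, t)]
        prob += stock[p][t] >= safety_stock[p]

# Ordering constraints: product MOQs and the supplier-level minimum order total.
for t in review_days:
    for p in products:
        qty = pack_size[p] * packs[p][t]
        prob += qty >= product_moq[p] * ordered[p][t]     # at least M if ordered at all
        prob += qty <= big_m * ordered[p][t]              # zero if not ordered
        prob += ordered[p][t] <= any_order[t]
    prob += (
        pulp.lpSum(pack_size[p] * packs[p][t] for p in products)
        >= supplier_moq * any_order[t]
    )

# Objective: hold the bare minimum stock over the planning period.
prob += pulp.lpSum(stock[p][t] for p in products for t in horizon)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status])
for t in review_days:
    print(t, {p: int(pack_size[p] * packs[p][t].value()) for p in products})
```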
Going deeper
We now have a solution that translates disparate sources of data into a set of purchase orders that meet all business requirements, and ensures that the minimum amount of stock required to do so is held over the planning period.
But as is usually the case with applied data science efforts, deploying such a solution in production and deriving value in a robust and sustainable way comes with its own set of challenges:
- How can we make sure that our optimiser runs in sufficient time to meet target SLAs for delivering outputs?
- How do we handle cases where problems for certain supplier-location pairs are too difficult to be solved on time?
- How can we build something that is specific enough to be useful, but general enough to easily adapt to different supply chains and their requirements?
Runtime and Scalability:
The total runtime of the optimisation will grow non-linearly with the number of decision variables (the optimal order quantities). To combat this, we can include only those decision variables that can be non-zero based on what we know for certain. For example, based on the ordering behaviour captured by the review period and lead time constraints (sketched in code after this list):
- An order can only be placed every review period days over the planning horizon, so we can exclude all product decision variables for the days in between.
- Any order placed will take lead time days to arrive and affect the available stock, so decision variables for days in the final lead-time window of the planning horizon can also be excluded.
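As a small sketch of this pruning (with illustrative parameter values), the set of days that actually need order variables can be computed up front:

```python
# A small sketch of the pruning above; the parameter values are illustrative.
horizon_days = 28
review_period = 7      # R: an order can only be placed every R days
lead_time = 7          # L: days for an order to arrive

# Orders can only go out on review days...
review_days = range(0, horizon_days, review_period)
# ...and an order placed in the final lead-time window arrives after the horizon
# ends, so it cannot affect the stock levels being optimised.
order_days = [t for t in review_days if t + lead_time < horizon_days]

print(order_days)      # [0, 7, 14]: three candidate order days per product, not 28
```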
In addition to the above, recall that we assume supplier-location separability for this problem, i.e. that a given product X is supplied to a given location Y by just a single supplier. Crucially, this allows us to treat the products associated with each unique supplier-location pair as a separate problem, and to solve these problems in parallel.
This significantly reduces the total runtime across all supplier-location pairs.
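A sketch of what this parallelism might look like with Python’s standard library; the function and input names here are assumptions for illustration:

```python
from concurrent.futures import ProcessPoolExecutor

def solve_supplier_location(inputs: dict) -> dict:
    """Build and solve the purchase order problem for one supplier-location pair."""
    ...  # construct the PuLP problem from this pair's products, demand and constraints
    return {"supplier_location": inputs["id"], "orders": {}}   # placeholder result

def solve_all(all_inputs: list[dict], max_workers: int = 8) -> list[dict]:
    # Each supplier-location pair is an independent problem, and CBC solves a single
    # problem on one core, so process-level parallelism scales roughly with the
    # cores available. (With spawn-based platforms, call this from under an
    # `if __name__ == "__main__":` guard.)
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(solve_supplier_location, all_inputs))
```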
Solver relaxation:
Problems for certain supplier-location pairs can take much longer than average to run, e.g. ones that contain significantly more products (and so more decision variables) than others. In these cases, we can apply some “relaxations” to certain solver parameters and re-attempt the problem under these new conditions.
As the solver explores different solutions, it keeps track of the best solution found so far and uses mathematical techniques to calculate bounds on what the optimal solution could be. If the relative gap between the current best solution and these bounds falls within some tolerance, the solver stops searching for a better one. So one such relaxation we apply in scenarios like the above is to increase this relative gap tolerance.
Alongside this, the total runtime allocated to the solver to find a solution within this tolerance is also increased. This means we are guaranteed to produce a solution that meets all business requirements, even if it is not always optimal from a stock holding perspective. Cases like this should be clearly communicated to the end user, so that the correct trade-off between output SLA and solution efficacy is achieved.
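Putting both relaxations together, a sketch with PuLP and CBC might look like the following (the parameter names follow recent PuLP releases; the tolerances and time budgets are illustrative):

```python
import pulp

# Assuming `prob` is a PuLP problem like the one sketched earlier.
# First attempt: tight relative gap, modest time budget.
prob.solve(pulp.PULP_CBC_CMD(msg=False, gapRel=0.001, timeLimit=120))

if prob.status != pulp.LpStatusOptimal:
    # Relaxed re-attempt: accept any feasible solution within 5% of the best known
    # bound, and give the solver longer to find one. The result still satisfies
    # every business constraint; it just may not hold the absolute minimum stock.
    prob.solve(pulp.PULP_CBC_CMD(msg=False, gapRel=0.05, timeLimit=600))
```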
Customisability
We’ve defined the general set of requirements for problems of this form, but these will vary depending on the supply chain in question. Peak itself has a Professional Services team that has deployed our generalised Inventory AI applications for a range of customers in the CPG & Manufacturing space. How have we ensured that the customisations required to satisfy arbitrary combinations of business requirements are easy to implement?
- Constraints and objective function terms are encapsulated within separate functions, so that these can be easily added or removed from the problem setup for each customer.
- A preference for composition over inheritance when separating the object that builds the problem from the one that solves it. This allows for a nice separation of responsibilities while also making modifications less cumbersome. Read more about this principle here.
- Relying on data structures that strike the correct balance between performance and ease of code readability, and which allow extension to further input sources.
We won’t go into detail on any one of these, since that is outside the scope of this work. Note also that these are intended to be points for consideration rather than hard and fast rules for such solutions.
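Purely as a flavour of the first two points, and with hypothetical names rather than Peak’s actual code, the shape of such a design might be:

```python
import pulp

def add_pack_size_constraints(prob: pulp.LpProblem, inputs: dict) -> None:
    """Force order quantities to whole packs/pallets; included only where relevant."""
    ...

def add_supplier_moq_constraint(prob: pulp.LpProblem, inputs: dict) -> None:
    """Require each order's total quantity to meet the supplier's minimum."""
    ...

class ProblemBuilder:
    """Assembles a customer's problem from whichever constraint functions apply."""
    def __init__(self, constraint_fns):
        self.constraint_fns = constraint_fns

    def build(self, inputs: dict) -> pulp.LpProblem:
        prob = pulp.LpProblem("purchase_orders", pulp.LpMinimize)
        for add_constraints in self.constraint_fns:
            add_constraints(prob, inputs)
        return prob

class ProblemSolver:
    """Owns solver configuration; composed with the builder rather than inherited from it."""
    def __init__(self, solver=None):
        self.solver = solver or pulp.PULP_CBC_CMD(msg=False)

    def solve(self, prob: pulp.LpProblem) -> pulp.LpProblem:
        prob.solve(self.solver)
        return prob

# Per-customer setup then amounts to choosing a different list of constraint functions:
# problem = ProblemBuilder([add_pack_size_constraints, add_supplier_moq_constraint]).build(inputs)
# ProblemSolver().solve(problem)
```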
The finished product
We’ve talked through how optimal purchase orders are derived, and gone over some practical considerations needed for production grade deployments. Now let’s move onto surfacing these to the end user in the most useful way.
The purchase orders are materialised as a list in the user interface. Each is displayed alongside summary stats like the total number of SKUs in the order, and an indication of the proportion of these SKUs in different stock statuses.
The “quality of order” metric card in the purchase order view below is especially useful. It immediately indicates to the end user how much of the order is actually needed to meet target stock levels (i.e. in the absence of ordering constraints), and how much has been topped up specifically to meet the supplier’s minimum order quantity (MOQ).
This traffic light system allows them to focus their efforts on situations where they might want to start a dialogue with suppliers in order to avoid ordering significantly above the required amount just to meet the MOQ. In this way, the solution empowers the end user to become more efficient.
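As a rough illustration of the idea (the field names and traffic-light thresholds below are assumptions, not Peak’s exact definition), such a metric might be computed as:

```python
def order_quality(required_qty: float, final_qty: float) -> tuple[float, str]:
    """Share of the final order genuinely needed for stock targets vs topped up for the MOQ."""
    needed_share = required_qty / final_qty if final_qty else 1.0
    if needed_share >= 0.9:
        band = "green"    # almost all of the order is driven by stock targets
    elif needed_share >= 0.6:
        band = "amber"    # a noticeable top-up to reach the supplier MOQ
    else:
        band = "red"      # worth discussing the MOQ with the supplier
    return needed_share, band

# e.g. 120 units genuinely required, 200 ordered to reach the supplier MOQ
print(order_quality(required_qty=120, final_qty=200))   # (0.6, 'amber')
```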
The action of topping up involves telling the optimiser how we want it to decide which products to add to the purchase order to meet the supplier MOQ. This will vary from one supply chain to another, based on the rules in place for prioritising additional products in these scenarios. Stay tuned for a follow-up article where we delve into a few different methods for this!
The bottom line
While advanced techniques like generative AI have their place, linear programming remains a robust and practical approach for addressing supply chain challenges in the CPG and Manufacturing space. By focusing on the precise representation of business requirements and taking a user-centric approach to solution design, this time-tested method allows businesses to drive cost savings and efficiency gains across their supply chains.
We hope this piece has inspired you to be on the lookout for cases where “old-school” still does the job, and to appreciate the full breadth of AI tools that are available for solving real-world problems.