Optimizing a Mining Supply Chain Bottleneck… Twice
(or Tell me what you want, what you really really want¹)
During my time working for IBM Australia in Perth, we were approached by a local client with an optimization problem.
The client was a second-tier iron ore mining company; for the sake of this article we will call them company ABC. The ore mined by ABC was of significantly poorer quality than the ore mined by industry giants like Rio Tinto or BHP-Billiton. ABC needed to enrich their ore significantly at a processing plant before they could truck it to the local port to be loaded onto ships for delivery to their overseas customers.
The Background
ABC had a handful of mines, each producing ore of its own grade, i.e. specific percentages of iron, silica, water content and other chemical/physical characteristics. There were also multiple stockpiles of ore near the processing plant — these consisted of ore from the mines extracted at different times in the past, and the ore within each stockpile had roughly uniform characteristics. Since the grade of ore produced from each mine could change considerably over time based on the depth it was mined from, the characteristics of the stockpiles could vary considerably from one another.
Trucks would depart in a steady stream from each mine towards the processing plant. Each truck would either be diverted to one of the ore stockpiles, or merge on an FCFS² basis into a waiting line to dump their loads as input to the processing plant. For each truck diverted to one of the stockpiles, a replacement truckload chosen from one of the other stockpiles could be sent to merge into the line of trucks waiting before the plant. The plant processed the ore in batches of ten truckloads each — waste material was removed during processing, and enriched ore emerged as the output from the plant. The incoming grade of each batch was an average of the grades of the truckloads of raw ore that made up the batch.
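To make the batching arithmetic concrete, here is a minimal Python sketch, with made-up grade values, of how a batch's incoming grade is simply the average of its ten constituent truckloads:

```python
# A minimal sketch with invented grades: a batch's incoming grade is the
# average of the ten truckloads that make it up.
batch = [  # (Fe%, DTR%) of each of the ten truckloads in one batch
    (58.2, 21.5), (59.0, 22.1), (57.6, 20.8), (60.3, 23.4), (58.8, 22.0),
    (56.9, 19.7), (59.5, 22.6), (58.1, 21.2), (60.0, 23.0), (57.4, 20.5),
]
batch_fe = sum(fe for fe, _ in batch) / len(batch)
batch_dtr = sum(dtr for _, dtr in batch) / len(batch)
print(f"batch grade: Fe% = {batch_fe:.2f}, DTR% = {batch_dtr:.2f}")
```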
When successive batches had significant variations in their grade, ABC needed to change the settings on the various pieces of equipment within the plant, otherwise the processing speed would be significantly reduced. Changing the settings was a manual process — a team of workers would have to go throughout the plant to make the changes. It would be impossible to do for every batch processed, so ABC had a strong preference that the average grade of each batch did not differ significantly from that of its predecessor.
ABC had found that there were two characteristics that most affected batch processing:
· Fe% (iron content, as a percentage)
· DTR% (Davis Tube Recovery, roughly a measure of magnetite content)
From past experience, ABC had also found a Process Design Envelope or PDE that was defined by a quadrilateral in the 2D space formed by DTR% on the vertical axis and Fe% on the horizontal axis, as shown in Fig. 1 below.
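Since the PDE is just a convex quadrilateral in (Fe%, DTR%) space, checking whether a batch grade falls inside it is a standard point-in-polygon test. Here is a minimal Python sketch; the corner coordinates are invented for illustration and are not ABC's actual envelope:

```python
# Hypothetical PDE corners, listed counter-clockwise as (Fe%, DTR%) pairs.
PDE_CORNERS = [(54.0, 17.0), (62.0, 19.0), (63.0, 27.0), (55.0, 24.0)]

def in_pde(fe_pct: float, dtr_pct: float, corners=PDE_CORNERS) -> bool:
    """True if the point lies inside (or on the edge of) the convex quadrilateral."""
    n = len(corners)
    for k in range(n):
        x1, y1 = corners[k]
        x2, y2 = corners[(k + 1) % n]
        # For a counter-clockwise convex polygon, the cross product must be
        # non-negative for every edge if the point is inside.
        if (x2 - x1) * (dtr_pct - y1) - (y2 - y1) * (fe_pct - x1) < 0:
            return False
    return True

print(in_pde(58.0, 22.0))   # True for this made-up grade
print(in_pde(50.0, 10.0))   # False: outside the envelope
```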
The Problem
The optimization problem posed to us by ABC was this — find an optimal way of choosing specific trucks_from_mines to divert to stockpiles, and of choosing replacement truckloads from other stockpiles, so that the sequence of truckloads sent to the processing plant would result, as far as possible, in batch characteristics that fell within the PDE.
Oh, boy. An optimization problem clearly defined by a client, and they even had a picture that helped explain things.
The Process
So of course, we proceeded right into an MILP³ formulation. We created a CPLEX-based solution, implemented on IBM’s Decision Optimization Center (DOC) platform, that chose the ideal trucks to divert to stockpiles, the ideal stockpile for each diverted truck, and the best other stockpile from which a replacement truckload would be drawn. Ore rehandling, which entails dumping onto stockpiles and/or retrieving from stockpiles, causes some of the ore to be crushed to powder. Since this is undesirable for ore products, we added a constraint limiting the percentage of trucks_from_mine that could be diverted to stockpiles. Not surprisingly, the higher this allowed percentage was set, the more flexibility the solver had in finding feasible solutions, and the better the solution quality.
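For illustration only, here is a heavily simplified sketch of that kind of formulation using the docplex Python API (the actual model, built on CPLEX and DOC, was far richer). All data values are made up, and the sketch ignores which stockpile a diverted truck is dumped on, the FCFS ordering, and the batching:

```python
from docplex.mp.model import Model

# Made-up data: (Fe%, DTR%) of each incoming truckload and of each stockpile.
incoming = [(58.0, 22.0), (61.5, 25.0), (55.0, 18.5), (60.0, 24.0), (57.5, 21.0)]
stockpiles = [(59.0, 23.0), (56.0, 19.5)]
max_divert_frac = 0.3   # rehandling limit: crushing ore to powder is undesirable
target_fe = 58.5        # assumed target for the average Fe% of the plant feed

m = Model(name="truck_diversion_sketch")

# divert[i] = 1 if incoming truck i is sent to a stockpile instead of the plant.
divert = m.binary_var_dict(range(len(incoming)), name="divert")
# replace[i, s] = 1 if diverted truck i is replaced by a load from stockpile s.
replace = m.binary_var_dict(
    [(i, s) for i in range(len(incoming)) for s in range(len(stockpiles))],
    name="replace")

# Each diverted truck gets exactly one replacement load; undiverted trucks get none.
for i in range(len(incoming)):
    m.add_constraint(m.sum(replace[i, s] for s in range(len(stockpiles))) == divert[i])

# Limit the fraction of trucks_from_mine that may be diverted (rehandling constraint).
m.add_constraint(m.sum(divert[i] for i in range(len(incoming)))
                 <= max_divert_frac * len(incoming))

# Average Fe% of the feed actually reaching the plant (diverted loads are swapped out).
feed_fe = (
    m.sum((1 - divert[i]) * incoming[i][0] for i in range(len(incoming)))
    + m.sum(replace[i, s] * stockpiles[s][0]
            for i in range(len(incoming)) for s in range(len(stockpiles)))
) / len(incoming)

# Toy objective: keep the feed's average Fe% close to the target (linearized |dev|).
dev = m.continuous_var(lb=0, name="fe_deviation")
m.add_constraint(dev >= feed_fe - target_fe)
m.add_constraint(dev >= target_fe - feed_fe)
m.minimize(dev)

if m.solve():
    print({i: int(divert[i].solution_value) for i in range(len(incoming))})
```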
A Premature Solution
After several weeks, things were going very well. We had ironed out the inevitable data quality issues, refined and debugged the model, and established an excellent relationship with the client. The optimization model was producing significantly better results than those produced by the manual, Excel-based efforts to move batches into the PDE.
Flush with success, and near the end of the project, we looked at the PDE diagram in Fig. 1 in a burst of idle curiosity and asked the client:
“What do you guys get by moving so many batches into the PDE, anyway?”
“Oh, it reduces the variation in batch quality if we can keep them all close together.”
“Yeah, but why choose that particular design for the PDE, with specific coordinates for its four corners?”
“We get good processing speeds that way, through the slowest sequential piece of processing equipment in the plant.”
Those of us in the optimization team looked at each other in slowly dawning horror.
“You mean, all this time you guys were really looking to improve your processing speed through the plant?”
The client looked at us with the slightly pitying look you would give a chimpanzee trying to figure out a Rubik’s cube.
“Obviously.”
“Can we express the processing speed as a simple function of Fe% and DTR%? Or, equivalently, express the number of batches processed per hour as another simple function?”
“Sure, we’ve kept records for the past several years for that. In fact, we even have the function figured out as a linear function in Excel.”
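In other words, the client already had a fitted throughput model of the form speed = f(Fe%, DTR%). A toy Python illustration follows; the coefficients are invented stand-ins for their Excel regression:

```python
# Hypothetical illustration only: these coefficients are invented, not ABC's
# actual regression. The point is that processing speed was a simple linear
# function of a batch's Fe% and DTR%, fitted from historical plant records.
def batches_per_hour(fe_pct: float, dtr_pct: float) -> float:
    a, b, c = 0.05, 0.08, 1.2   # assumed fitted coefficients
    return a * fe_pct + b * dtr_pct + c

print(batches_per_hour(58.0, 22.0))   # about 5.86 batches/hour for this made-up grade
```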
Solution 2.0
Our optimization team went back to the drawing board. We first added a new KPI of minimizing batch_to_batch deviations in grade, then added another KPI of minimizing the range of Fe% and DTR% across all batches, to prevent any slow drift towards higher or lower values of either characteristic. The results showed a dramatic smoothing of the batch_to_batch variations in Fe% and DTR%. Then we added yet another KPI of minimizing batch processing times, and things got even better.
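As a rough illustration of how such KPIs can be expressed, here is a hypothetical docplex-style sketch. It assumes batch grade variables fe[b] and dtr[b] are already linked to the truck assignment elsewhere in the model, and every weight and coefficient below is invented:

```python
from docplex.mp.model import Model

m = Model(name="kpi_sketch")
B = 6   # number of batches in the planning horizon (assumed)

# Batch grade variables; in the real model these would be tied to truck assignments.
fe = m.continuous_var_list(B, lb=50, ub=65, name="fe")
dtr = m.continuous_var_list(B, lb=15, ub=30, name="dtr")

# KPI 1: batch-to-batch deviation in Fe%, linearized with auxiliary variables
# (the DTR% analogues are omitted here for brevity).
dev = m.continuous_var_list(B - 1, lb=0, name="dev")
for b in range(B - 1):
    m.add_constraint(dev[b] >= fe[b + 1] - fe[b])
    m.add_constraint(dev[b] >= fe[b] - fe[b + 1])

# KPI 2: overall range of Fe% across all batches, to limit slow drift.
fe_hi = m.continuous_var(name="fe_hi")
fe_lo = m.continuous_var(name="fe_lo")
for b in range(B):
    m.add_constraint(fe_hi >= fe[b])
    m.add_constraint(fe_lo <= fe[b])

# KPI 3: total processing time, using a linear speed model (made-up coefficients).
proc_time = m.sum(10.0 - 0.05 * fe[b] - 0.08 * dtr[b] for b in range(B))

# Weighted sum of the three KPIs; the weights here are purely illustrative.
m.minimize(m.sum(dev) + 0.5 * (fe_hi - fe_lo) + 0.2 * proc_time)
```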
Bringing more batches into the PDE (which was the original KPI) increased plant throughput by 7 Mtons/year. By realizing (even if it was late in the game) what the client’s true objective was, and adding extra KPIs to the model to achieve it, we improved the throughput of the plant by a further 6.5 Mtons/year, to the point where the processing plant was no longer the bottleneck in the supply chain.
What You Really Really Want
In summary — we started out with an optimization model that did exactly what our client had asked us to do. Through a casual question late in the game, we found out what the client really wanted to do, and luckily were able to modify our model to deliver significantly better results than the first version produced.
…but in hindsight, that casual question was one that we should have asked on day 1.
(Epilogue — I wish I could tell you that the model was implemented fully and was a screaming success, showering riches on the heads of everyone in ABC and creating a lot of cashed-up bogans⁴ in the process. Unfortunately, a few months after we delivered the model to the client, the price of iron ore cratered worldwide, to a value well below the break-even level for ABC. They ceased operations shortly thereafter.
But it wasn’t because of our model. At least, that’s my story and I’m sticking to it.)
References
1. “Tell me what you want, what you really really want” by Spice Girls https://www.youtube.com/watch?v=zjuGSig4o_o
2. FCFS = First Come First Served
3. MILP = Mixed Integer Linear Programming
4. “Cashed up bogan” https://www.yourdictionary.com/cashed-up-bogan