Shelf Space Optimization
Step by step example where Decision Optimization helps to optimally position articles on shelves, e.g. in a retail store.
The problem
How do retail stores place the different items on their shelves? This is a very important topic for retailers, part of the broader discipline of visual merchandising, and it has long been based on the use of planograms. Beyond the physical constraints of the shelves themselves, many other requirements apply: some suppliers require the retailer to place their items in given positions, knowing that position has an impact on sales; competing brands may not want to be too near to, or too far from, each other; the bottom positions are usually reserved for cheap, low-quality products; and so on. Many constraints and objectives need to be taken into account in order to position the different items on the shelves.
This post describes a complete working example where Decision Optimization is used to solve this problem and run what-if analysis on different scenarios with different objectives in order to evaluate the trade-off.
All model development, tuning, debugging, and validation is done in a Decision Optimization experiment. See this older post for an introduction to DO experiments (at the time, the terminology was "model builder").
The example can be recreated from the Cloud Pak for Data exported project available here. Have a look at the following quick start videos if needed.
Get input data
The main input data table contains the items to place and is named SKUs. The following screenshot shows some of the properties considered in this example that will be used for objectives and constraints.
The data model includes several other tables to represent Groups and Shelves. The dataset in the example has 6 shelves with the same total width, representing a real shelf unit with different levels.
One other important input table is the Weights table, which provides the importance factors of the different KPIs. In the Baseline scenario, all weights are equal to one.
Formulate the model
The model is formulated using OPL (Optimization Programming Language) and debugged in a Decision Optimization experiment in Cloud Pak for Data. See this post about OPL vs Python docplex.
With OPL, the data can be loaded in tuple sets that mimic the input tables. For example:
tuple TSKUs {
  key string id;
  int minFacings;
  int maxFacings;
  float demand;
  int replenishRate;
  string parentGroup;
  float width;
  float price;
};
{TSKUs} SKUs = ...;
OPL makes it possible to create internal data (sets and arrays) computed from the imported data. For example, some specific sets of groups are created. Some optimization-level preprocessing can be done directly within the preprocessing section of the OPL model.
TGroups GroupForID[GroupIDs] = [g.id : g | g in Groups];
{TGroups} FirstLevelGroups = {g| g in GroupsForLevel[first(Levels)]};
{TGroups} SquareGroups = {g | g in Groups : g.needSquareness == 1};
{TGroups} GroupsWithSpaceElasticity = {g | g in Groups : g.alpha >= 0 && g.beta >= 0};
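For readers more familiar with Python, the same kind of derived-set preprocessing can be sketched with plain comprehensions. The field names below mirror the OPL tuple fields, and the sample data is purely hypothetical:

```python
# Hypothetical group records mirroring the OPL TGroups tuple fields.
groups = [
    {"id": "dairy",  "needSquareness": 1, "alpha": 2.0,  "beta": 0.7},
    {"id": "snacks", "needSquareness": 0, "alpha": -1.0, "beta": 0.5},
    {"id": "drinks", "needSquareness": 0, "alpha": 1.5,  "beta": 0.6},
]

# Equivalent of: {TGroups} SquareGroups = {g | g in Groups : g.needSquareness == 1};
square_groups = {g["id"] for g in groups if g["needSquareness"] == 1}

# Equivalent of: {TGroups} GroupsWithSpaceElasticity =
#                {g | g in Groups : g.alpha >= 0 && g.beta >= 0};
elastic_groups = {g["id"] for g in groups if g["alpha"] >= 0 and g["beta"] >= 0}
```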
Then some decision variables are created. There are always several possible ways to model a business problem, and part of the job of Operations Research experts is to find the most suitable one, as a tradeoff between ease of use in the model and performance in the engine. Here the main variables are arrays indicating, for each combination of SKUs and Shelves, the number of items (facings) and the position (offset).
dvar int numFacingsSKUOnShelf [SKUShelves];
dvar float offsetSKUOnShelf [SKUShelves];
Quite a few other auxiliary decision variables are introduced to make the formulation of the constraints and objectives easier.
The KPIs are defined with decision expressions …
dexpr float expectedSales = sum(g in GroupsWithSpaceElasticity) piecewise(i in 1..MaxFacings[g])
{g.alpha*(pow(i,g.beta) - pow(i-1,g.beta))->i; 0} (0,0) numFacingsGroup[g];
dexpr float emptySpacePenalty = sum(s in Shelves) emptySpaceOnShelfVar[s];
dexpr float shortagePenalty = sum(s in SKUs) unitsShortageVar[s];
dexpr float rectangularityPenalty = sum(<g,s> in RectangularGroupShelves) (rectShapeLeftSlack[<g,s>] + rectShapeRtSlack[<g,s>]);
dexpr float squarenessPenalty = sum(g in SquareGroups) squareShapeSlack[g];
dexpr float avgDisplayPricePenalty = minAvgTargetPriceSlack + maxAvgTargetPriceSlack;
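The piecewise expression in the expectedSales KPI encodes a space-elasticity curve of the form alpha * n^beta: segment i contributes the marginal gain alpha * (i^beta - (i-1)^beta), and these marginal gains telescope back to the full curve. A small numeric check of that identity (the values are illustrative, not from the dataset):

```python
def expected_sales(alpha: float, beta: float, n: int) -> float:
    """Sum the piecewise marginal gains alpha*(i^beta - (i-1)^beta) for i = 1..n."""
    return sum(alpha * (i**beta - (i - 1) ** beta) for i in range(1, n + 1))

# The marginal gains telescope, so the sum equals alpha * n**beta.
alpha, beta, n = 3.0, 0.5, 4
assert abs(expected_sales(alpha, beta, n) - alpha * n**beta) < 1e-9
```

With beta < 1 the marginal gain of each additional facing decreases, which is exactly the diminishing-returns behavior the space-elasticity literature describes.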
… which can then be combined to build the objective to optimize. In this case the expected sales are maximized while all the penalties are minimized (using negative weights).
maximize Weight.expectedSales * expectedSales
- Weight.emptySpacePenalty * emptySpacePenalty
- Weight.shortagePenalty * shortagePenalty
- Weight.rectangularityPenalty * rectangularityPenalty
- Weight.squarenessPenalty * squarenessPenalty
- Weight.avgDisplayPricePenalty * avgDisplayPricePenalty;
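Outside the engine, this weighted objective is just a signed combination of the weights and the KPI values. A sketch with hypothetical numbers and the Baseline weights (all equal to one):

```python
# Hypothetical KPI values from one solution.
kpis = {
    "expectedSales": 120.0,
    "emptySpacePenalty": 4.0,
    "shortagePenalty": 2.0,
    "rectangularityPenalty": 1.5,
    "squarenessPenalty": 0.5,
    "avgDisplayPricePenalty": 1.0,
}
weights = {name: 1.0 for name in kpis}  # Baseline scenario: all weights = 1

# expectedSales is maximized; every penalty enters with a negative sign.
objective = weights["expectedSales"] * kpis["expectedSales"] - sum(
    weights[name] * kpis[name] for name in kpis if name != "expectedSales"
)
print(objective)  # 120 - (4 + 2 + 1.5 + 0.5 + 1) = 111.0
```

Changing a single weight in the Weights table shifts this trade-off without touching the model formulation, which is what the what-if scenarios below exploit.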
Finally, the model contains all the constraints to be taken into account. Some of the constraints are pretty simple to read and understand, for example the one stating that a given SKU can only be on one unique Shelf.
//SKU on single shelf
forall(s in SKUIDs) {
  sum(<s,sh,u> in SKUShelves) isSKUOnShelf[<s,sh>] <= 1;
}
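The same rule can also be checked on a solution table outside the model, which is handy when validating results. A minimal validator sketch (the column names are assumptions based on the placement output table shown later):

```python
from collections import defaultdict

def skus_on_multiple_shelves(placements: list[dict]) -> set[str]:
    """Return the SKUs that appear on more than one shelf (should be empty)."""
    shelves_by_sku = defaultdict(set)
    for row in placements:
        shelves_by_sku[row["sku"]].add(row["shelf"])
    return {sku for sku, shelves in shelves_by_sku.items() if len(shelves) > 1}

placements = [
    {"sku": "A", "shelf": 1},
    {"sku": "B", "shelf": 2},
    {"sku": "A", "shelf": 3},  # violation: SKU A appears on two shelves
]
assert skus_on_multiple_shelves(placements) == {"A"}
```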
The model formulation also includes some CPLEX parameter settings. In this case, a time limit of 3 minutes (180 seconds) is set.
execute CPLEX_PARAM {
  cplex.tilim = 180;
  cplex.clocktype = 2; // wall clock time
  cplex.rinsheur = 50;
};
Run the baseline scenario
The Baseline scenario, using the baseline data and the formulated model, can then be run. During execution, some statistics and a progress chart are displayed, showing the evolution of the combined objective of the best known solution and of the best bound. The gap gets smaller until the time limit or the expected gap is reached.
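The gap shown in the progress chart is the relative difference between the best bound and the best incumbent solution. For a maximization problem it can be sketched as follows (CPLEX's exact formula adds a small epsilon to the denominator to avoid division by zero):

```python
def relative_gap(best_bound: float, incumbent: float) -> float:
    """Relative MIP gap: |bound - incumbent| / |incumbent| (with a tiny epsilon)."""
    return abs(best_bound - incumbent) / (1e-10 + abs(incumbent))

# As the incumbent improves and the bound tightens, the gap shrinks toward zero.
assert relative_gap(110.0, 100.0) > relative_gap(102.0, 100.0)
assert relative_gap(100.0, 100.0) == 0.0
```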
Understand the solution
The solution of this optimization run can be seen as KPIs or as detailed output tables, as defined at the end of the optimization model.
For example, one table includes the placements of all SKUs on Shelves:
In this raw data table form, it is practically impossible to understand the solution and validate the correctness of the model.
In the DO experiment UI, visualizations make it easy to configure charts that help validate the model or highlight missing constraints. The Vega-Lite visualizations can also be customized using JSON.
A chart can be created to show the shelves with different colors for different groups.
This is done using the x and x2 properties of Vega-Lite charts.
{
"name": "",
"type": "Chart",
"props": {
"container": "",
"data": "SKUOnShelfPlacement",
"spec": {
"mark": "rect",
"encoding": {
"x": {
"field": "offset",
"type": "quantitative",
"title": "Position"
},
"x2": {
"field": "offset2",
"type": "quantitative"
},
"y": {
"field": "shelf",
"type": "nominal",
"title": "Shelfs"
},
"tooltip": {
"field": "sku",
"type": "nominal"
},
"color": {
"field": "parentGrp",
"type": "nominal",
"legend": {
"title": "Groups"
}
}
},
"config": {
"overlay": {
"line": true
},
"scale": {
"useUnaggregatedDomain": true
}
},
"width": 600,
"height": 200
},
"search": ""
}
}
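The x2 encoding above expects the end position of each placed block. If the output table only carries the offset, the number of facings, and the unit width, a hypothetical post-processing step can derive it:

```python
# Hypothetical rows from the SKUOnShelfPlacement output table.
rows = [
    {"sku": "A", "shelf": 1, "offset": 0.0, "numFacings": 3, "width": 2.0},
    {"sku": "B", "shelf": 1, "offset": 6.0, "numFacings": 2, "width": 1.5},
]

# offset2 = offset + numFacings * width gives the right edge used by the x2 encoding.
for row in rows:
    row["offset2"] = row["offset"] + row["numFacings"] * row["width"]

assert [r["offset2"] for r in rows] == [6.0, 9.0]
```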
Use Weights
This chart makes it pretty easy to understand the impact of the input on the solution. For example, a new scenario in the DO experiment can be used where more importance is given to the rectangularity penalty:
When this alternative scenario is solved, the chart perfectly shows the impact on the solution. On the other hand, there is more empty space…
Do What-if analysis
Several scenarios can be created, playing with the different possible weights for the different KPIs.
Then charts can be created to compare the KPIs across the different scenarios.
This chart shows that the scenario where the rectangularity penalty is minimized has notably lower expected sales.
Using lexicographic objective
As described in the Multiple Objective Optimization post, an alternative to the use of weighted sums is to use lexicographic objectives.
An additional scenario is created and the model modified to use such objective:
maximize staticLex(expectedSales, -emptySpacePenalty, -shortagePenalty, -rectangularityPenalty, -squarenessPenalty, -avgDisplayPricePenalty);
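With staticLex, the engine optimizes the objectives in strict order of priority instead of blending them into one weighted sum. The ranking this induces between two candidate solutions can be sketched in plain Python (the KPI values are illustrative):

```python
def lex_key(kpis: dict) -> tuple:
    """Objective vector matching the staticLex order: sales first, then negated penalties."""
    return (
        kpis["expectedSales"],
        -kpis["emptySpacePenalty"],
        -kpis["shortagePenalty"],
    )

sol_a = {"expectedSales": 120.0, "emptySpacePenalty": 9.0, "shortagePenalty": 0.0}
sol_b = {"expectedSales": 120.0, "emptySpacePenalty": 4.0, "shortagePenalty": 5.0}

# Equal sales, so the tie is broken on the next objective: less empty space wins,
# no matter how large the lower-priority shortage penalty is.
best = max([sol_a, sol_b], key=lex_key)
assert best is sol_b
```

This is the key behavioral difference from the weighted sum: a lower-priority KPI can never compensate for a loss on a higher-priority one.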
Overview
As commented in the dedicated post, the Overview tab makes it easy to compare the execution times and gaps of each of the scenarios.
After several scenarios have been used to understand the impact of KPIs and constraints, the right model, validated by the business analyst, can be deployed for integration into the production application.
For more stories about AI and DO, follow me on Medium, Twitter or LinkedIn.