Defining Supply Chain Performance: Nailing the Jell-O to the Wall
When I worked as a Branch Chief in Air Mobility Command's Analysis Group, my boss often talked about nailing the Jell-O to the wall. I took that to mean we had a project with ill-defined performance measures and objectives, and that our first task was going to be establishing the goals. He was a crusty old colonel who was usually dead right on these matters.
The phrase stuck with me, but its meaning has evolved a bit. Working with our clients, we frequently create optimization models where the objective is a mixture of terms, some concrete and some ill-defined. The user has certain costs that are very real but also recognizes that a good solution has other desirable qualities. Sometimes these additional criteria are easily expressed as constraints. But just as often they have to be weighed against the concrete and known cost terms in the objective function.
As an example, we have a client that must pay to move heavy equipment around the country from one customer site to another. The model output is a schedule of the required movements for each piece of machinery, and any given solution will have values for a few performance measures. The cost per mile of moving the equipment is easily measured and known. Although this cost might vary during the year and is subject to some uncertainty, it is still well-defined, and we can get solutions for different values of this cost parameter when we do sensitivity analysis. It is the most directly and easily quantified performance measure for a solution.
On the other hand, there are other factors that enter into the objective and are harder to value. For instance, risk is a factor we add to the model to give some latitude in meeting deadlines for the arrival and departure of the equipment at locations where it will service a user contract. Clearly, we want a solution with little risk. Upgrades are another such factor. At times, equipment of higher quality than contracted for must be used to meet contract dates. Again, this is something to be avoided, since it puts wear and tear on expensive machinery. Finally, there can be occasions when an appropriate configuration of equipment is simply not available, and meeting the contract requires leasing equipment at substantial cost.
None of the additional factors is well expressed as a constraint, since there are no absolute limits. The model influences these performance measures via the cost terms in the objective function we are trying to minimize. The trick is to assign the right costs so that the solutions are ones the user can recognize as good. In cases like these, Design of Experiments applied to the optimization model is an efficient way to derive a useful set of parameters.
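To make the idea of a mixed objective concrete, here is a minimal Python sketch. All of the names and cost values below are hypothetical, not those of the actual client model; the point is that one term (cost per mile) is well-known while the other three weights are exactly the kind of soft penalties that need calibrating.

```python
# A minimal sketch of a weighted objective mixing concrete and soft
# criteria. All parameter names and values are hypothetical.

def total_cost(miles, upgrades, risk_days, leases,
               cost_per_mile=2.0, upgrade_cost=500.0,
               risk_cost=200.0, lease_cost=5000.0):
    """Combine performance measures into one objective to minimize.

    cost_per_mile is the concrete, well-known term; the other three
    weights stand in for the ill-defined penalties to be calibrated.
    """
    return (cost_per_mile * miles
            + upgrade_cost * upgrades
            + risk_cost * risk_days
            + lease_cost * leases)

# Two candidate schedules summarized by their performance measures:
schedule_a = dict(miles=12000, upgrades=3, risk_days=10, leases=0)
schedule_b = dict(miles=11000, upgrades=8, risk_days=2, leases=1)

print(total_cost(**schedule_a))  # 27500.0
print(total_cost(**schedule_b))  # 31400.0
```

Note that changing any one of the soft weights can flip which schedule the optimizer prefers, which is why those weights deserve systematic study rather than guesswork.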
Design of Experiments (DOE)
Design of Experiments (DOE) has long been used in industrial settings to quantify the way parameters affect product design. Taguchi methods and Robust Engineering are the most popular names for this approach. Inferior methods based on intuition or one-factor-at-a-time experimentation have been thoroughly discredited and largely driven out of practice by DOE approaches.
Applying DOE to optimization and simulation modeling is not a recent development, either. In the machinery example, the solution (a schedule with routes) is essentially a product we are designing with the help of the model. In the typical application, DOE is used to summarize the behavior of the model with respect to resource allocation: essentially, we see how much benefit is derived from the addition of resources. In this case we use DOE to see how the shape of the solution changes as costs are varied. The costs represent the relative importance of the various aspects of the solution. Some are concrete; others really are not.
Returning to the machinery movement example, you can probably picture what happens at some extreme values of the costs. If substitution and upgrades are free, then you get a solution with a small number of miles, but you find customers who paid for low-tech equipment getting a lot of free upgrades. This is undesirable for reasons beyond wear and tear: customers who paid for the best can feel abused, and it hardly motivates the lucky customer to pay full price for the best equipment next time around.
Also, an overly conservative approach to meeting contract deadlines means that the company must buy more equipment than it really needs. So a very high penalty for taking risk (e.g., on the days allowed to reposition equipment) can be very expensive, since the start and end dates for the contracts shift during the course of the year. A little risk is not entirely a bad thing, especially since the allowed risk is easily controlled in the model.
Certainly what we really want to avoid is a model that recommends dominated solutions. In a two-dimensional example, that would be a solution with the same risk value but more movement miles than are needed.
Realize that the equipment model must be rerun repeatedly as time moves on: new commitments are made and existing ones may be modified. Solutions with attractive summary performance measures are the ones that should be examined in detail; inferior ones can be safely ignored. In this application the cost settings seem pretty stable and continue to produce good solutions, but there is nothing to prevent rerunning the DOE periodically to examine solutions based on their summary performance measures.
The table below shows a set of results from a point early in the year for a subset of the equipment to be scheduled for the coming year. All of the values have been changed, but patterns have been preserved for illustration purposes. In this example we chose three of the six possible parameters (shown in yellow) that could be varied, and we were able to run all eight possible combinations of the values shown. A larger set of parameters could be investigated, perhaps by using fractional factorial experiments that look for important effects without doing all the possible runs.
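As a sketch, the eight runs of a full factorial on three two-level parameters can be enumerated in a few lines of Python. The parameter names echo the table, but the level values here are illustrative, not those from the actual study.

```python
# Sketch: enumerating all 2^3 = 8 runs of a full factorial design.
# Each run is one set of cost parameters fed to the optimizer.
# The level values are illustrative, not the real settings.
from itertools import product

levels = {
    "RouteParm":   [1.0, 4.0],      # low / high cost per mile
    "RiskParm":    [100.0, 400.0],  # low / high penalty per risk day
    "UpgradeParm": [250.0, 1000.0], # low / high penalty per upgrade
}

runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
for i, run in enumerate(runs, start=1):
    print(i, run)

print(len(runs))  # 8
```

With six parameters, the full factorial grows to 64 runs, which is where the fractional designs mentioned above earn their keep.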
Without formally estimating the effects of the parameters, you can see most of them at a glance: high upgrade costs lead to fewer upgrades, high risk costs lead to fewer risk days, and high costs per route lead to a set of routes with larger total miles. One could argue that Run 1 or Run 3 yields a schedule that is a good tradeoff among the performance measures considered here.
So a user would be encouraged to take a look at the details of the solutions (scheduling) from Run 1 or Run 3. Finally, notice that there is an interaction between RouteParm and RiskParm on the RiskDays performance measure. That is, changing the RiskParm value has a different effect on RiskDays depending on the setting of RouteParm: much more influence when RouteParm is large than when it is small. Another benefit of DOE is the ability to spot those kinds of interactions. One-factor-at-a-time experimentation is completely incapable of finding this sort of information.
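Estimating main effects and interactions from a two-level factorial is mechanical once the runs are coded as -1/+1. The sketch below uses made-up RiskDays responses (not the table's values) chosen so that RiskParm matters more when RouteParm is high, mimicking the interaction described above.

```python
# Sketch: estimating main effects and the RouteParm x RiskParm
# interaction from a 2^3 factorial. The responses are hypothetical,
# constructed to exhibit an interaction; -1/+1 code low/high levels.
import itertools

# Design matrix in standard order: (RouteParm, RiskParm, UpgradeParm)
design = list(itertools.product([-1, +1], repeat=3))
risk_days = [10, 10, 8, 8, 12, 12, 2, 2]  # hypothetical RiskDays

def effect(column):
    """Average response at +1 minus average response at -1."""
    hi = [y for c, y in zip(column, risk_days) if c == +1]
    lo = [y for c, y in zip(column, risk_days) if c == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

route = [row[0] for row in design]
risk  = [row[1] for row in design]
interaction = [a * b for a, b in zip(route, risk)]

print(effect(risk))         # -6.0: higher RiskParm cuts RiskDays
print(effect(interaction))  # -4.0: the cut is steeper at high RouteParm
```

A nonzero interaction effect is precisely what one-factor-at-a-time experimentation cannot detect, since it never varies two factors in the same comparison.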
In the heavy equipment case we had some idea of the neighborhood to investigate for the parameter settings. That is not always so. Sometimes the parameters are very difficult to calibrate by reasoning from, or comparison with, known costs. For instance, we have an example from agriculture, where the model suggests the order in which a group of farms will be harvested. Processing plants want the products in specific weight ranges and require amounts of each product for each day over the planning horizon.
Besides getting within the desired weight range, there are some other goals. For instance, it would be good to hit the weight targets, not simply be within the range. We also want to reduce the number of trucks that have to be dispatched and the number of miles driven, and to limit the number of visits to each individual farm. Furthermore, if product is not harvested while in an allowed range, it is 'wasted', and this carries one of the most costly penalties; it is almost an absolute requirement. Realize that the product keeps growing as time goes on, so there is a limited time window on the weight range.
Not all of the product on a given farm is the same weight; it varies from one section to another based on when it started growing. So in the simplest of experiments you can test the tradeoff between distance traveled and missing the weight targets. If you are willing to visit a farm several times, you can come closer to hitting the target weight, but this will involve more visits and usually more total mileage.
If one goes the extra step of fitting a statistical model to the DOE results, then predictions can be made for the performance measures. When multiple performance measures are involved, one can locate regions where all performance measures are in acceptable ranges. Various statistical packages, e.g. JMP or Minitab, provide the ability to create two-dimensional contour plots that show regions where both dimensions are within ranges defined by the user.
The three figures above are examples of contour plots. Given that the experiment only examined two levels of each factor, none of the plots can really show curvature. But you can see an interaction in the Miles graph.
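Fitting such a model and predicting over a grid takes only a few lines. The sketch below fits a first-order model with an interaction term to four hypothetical corner responses and then counts grid points inside a user-defined acceptable region, which is the raw material a contour plot displays.

```python
# Sketch: fitting y = b0 + b1*x1 + b2*x2 + b12*x1*x2 to the corner
# runs of a 2^2 design, then predicting over a grid. The responses
# are hypothetical, chosen to include an interaction.
import numpy as np

# Coded factor settings (x1, x2) and a hypothetical response
X = np.array([[-1, -1], [+1, -1], [-1, +1], [+1, +1]], dtype=float)
y = np.array([10.0, 12.0, 8.0, 2.0])

# Least-squares fit of intercept, main effects, and interaction
A = np.column_stack([np.ones(4), X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(x1, x2):
    return coef[0] + coef[1] * x1 + coef[2] * x2 + coef[3] * x1 * x2

# Points where the prediction stays below a threshold define the
# acceptable region a contour plot would show.
grid = [(x1, x2) for x1 in np.linspace(-1, 1, 5)
                 for x2 in np.linspace(-1, 1, 5)]
ok = [(x1, x2) for x1, x2 in grid if predict(x1, x2) <= 6.0]
print(len(ok))  # 5 of the 25 grid points fall in the region
```

With only two levels per factor the fitted surface is a twisted plane, matching the caveat above that such designs cannot reveal curvature; adding center points or a third level would be the next step if curvature were suspected.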
Not every model requires an experimental design. But in cases where there are multiple performance measures that need to be combined using (possibly) arbitrary weights, DOE is an excellent approach. This is true for simulation and optimization models alike. Furthermore, DOE can be used as a predictive tool. If the design is carefully chosen, it can guide you to a useful operating region and reveal interactions between the various factors.
So how do you nail Jell-O to the wall? You throw a Design of Experiments net over it.