April 13th, 2013 10:53 am Category: About Profit Point, Global Supply Chain, Jim Piermarini, Operations Research, Optimization, Software Optimization, Supply Chain Improvement, Supply Chain Software, by: Jim Piermarini
Building applications, especially custom ones, carries with it the burden of answering the question: Does this do what the customer wants?
With complicated systems that have many interacting features and business rules, answering this question can be daunting. In fact, evaluating the answer can be daunting too, from the perspective of the customer. Having the sales guy check some boxes in a questionnaire, or watching a demo, just doesn’t leave you with the assurance that the application will handle all the business requirements, from either perspective, the vendor’s or the customer’s. Everyone I have spoken to who has sold complex software, or who has participated in the purchasing process for software, has expressed the same doubt. They are just not sure that the tool will be a good fit. As we all know, that doubt does not always prevent the purchase of the software, as each organization has its own level of risk tolerance and trust in the vendor’s brand or reputation. Often these other considerations can outweigh the amorphous doubt that some folks might feel. How can one quantify that doubt? Frankly, it’s a quandary.
This thought got us at Profit Point thinking… Wouldn’t it be great if there were another way to evaluate the goodness of fit of an application, or the appropriateness of its parameter settings, against the business needs of an organization? Wouldn’t it be great if there were a way to eliminate (or greatly reduce) the doubt, and replace it with facts? Either a business rule is obeyed or it is not. Either a decision is made according to the requirements, or it is not. Let’s eliminate the doubt, we thought, and the world would be a better place. (Well, a little bit, anyway.)
There are many processes for testing an application as it is being developed, with test scripts written and results evaluated. All of these are based on testing little pieces of code, to ensure that each function or subroutine does what it should do for each case of input data. These processes work fine, in our opinion, but only when the subroutine or function can be considered independently from the others. When the system has functions that interact heavily, this approach doesn’t reduce the doubt that the functions may conflict or compete in a way that makes the whole system suffer. How then to evaluate the whole system? Could we treat the entire application as one black box, run the important business cases, and evaluate the results? This is exactly what we have done, with the effect of reducing to zero the doubt about the suitability of the application for a business.
With several of our clients we have worked out what seems to be a great process for testing a complex software solution for suitability to the business requirements. In these cases, the detailed function-level testing methods were not open to us, since the solution relied on a Linear Programming technique.
This process is really just an amplification of the standard testing process.
- Define the test case, with the expected results
- Construct the test data
- Build or configure the application
- Run the Test using the Test Data and Evaluate the results – Pass or Fail
This is the standard process for testing small functions, where the expected results are clear and easy to imagine. However, in systems where there are many interacting rules and conflicting priorities, it may not be simple to know what the expected results should be without the help of the tool’s structure to evaluate them. Such is the case with many of our applications, with layer upon layer of business rules and competing priorities… The very reason for using an LP-based approach makes testing more complex.
In the revised process, we have, for each new business requirement:
- Construct the test case with the test data
- Build or configure the application
- Set the expected results using the results of the first pass build
- Re-factor the code and test until all tests are passing
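The revised process can be sketched as a tiny black-box regression harness: run each case once on the first accepted build, capture the result as the baseline, then require every later build to reproduce it. The names here (`run_model`, the toy cost rule) are hypothetical placeholders, not our actual tooling.

```python
# Sketch of a black-box regression harness: compare each test case's
# result against the baseline captured from the first accepted build.

def run_model(case):
    # Stand-in for the real LP-based application; here it just
    # computes a deterministic result from the inputs.
    return {"total_cost": sum(case["demands"]) * case["unit_cost"]}

def evaluate(cases, expected):
    """Run every case through the application and report Pass/Fail
    against the stored expected results."""
    report = {}
    for name, case in cases.items():
        actual = run_model(case)
        report[name] = "Pass" if actual == expected[name] else "Fail"
    return report

cases = {"base": {"demands": [10, 20], "unit_cost": 2.0}}
# First-pass capture: the accepted build's output becomes the expectation.
baseline = {name: run_model(case) for name, case in cases.items()}
print(evaluate(cases, baseline))  # → {'base': 'Pass'}
```

After re-factoring, the same `evaluate` call flags any case whose behavior drifted from the agreed baseline.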
In my next blog I will show you the simple Excel-based tools we use to facilitate the test evaluation.
In practice, the process works well, new versions of the application go into production without any surprises, and with full confidence of the application management team that all the business requirements are 100% met.
No doubt – no doubt a better process.
By Jim Piermarini
Upgraded Vehicle Route Planner Software Improves Decisions in Distribution Planning, Fleet Sizing, Driver Productivity and Transportation Cost Reduction
Profit Point announces the introduction of Profit Vehicle Planner™ 3.1, a major upgrade to our distribution analysis and design software. Profit Vehicle Planner is designed for strategic logistics and transportation managers who have large fleets with multiple daily delivery stops and changing logistics processes. The software update includes a combination of new features and technical enhancements that combine to support richer scenario modeling for larger fleets.
Designed to be highly accessible and customizable, Profit Vehicle Planner (PVP™) uses standard Microsoft business tools for calculation and display of information, including Excel, Access and MapPoint. The software automatically creates and designs the optimal sales/distribution territories. It does this by dividing customers into territories and days of service, with each territory representing the volume delivered by one delivery vehicle and one driver over the course of the planning horizon. The objective of the proprietary heuristic algorithm used in Profit Vehicle Planner is to assign customers to territories that will minimize the number of trucks required to serve the customer volumes while delivering within the various common and business-specific constraints, including customer frequency of service, hours available per day, volume available per truck, unique equipment requirements and virtually any other custom constraint required.
“With 12 years in the field, Profit Vehicle Planner has been put to the test against some of the world’s largest supply chain distribution problems,” noted Jim Piermarini, Profit Point’s Chief Technology Officer. “Transportation best practices have expanded over time, so decision makers are looking for more comprehensive strategic logistics and transportation modeling solutions.”
With the new release, PVP’s expanded features include extensive customization of the software, tailoring the territory planning solution to meet your unique and specific distribution requirements cost- and time-effectively, and the ability to use imported address data to automatically geocode customers for whom lat/long data is missing.
For companies that perceive distribution as mission critical, users have the option to integrate PVP deeply into their supply chain systems, importing data from and exporting data to their ERP system. Companies that seek the most cost-effective solution can import virtually any relevant data from an Excel template that includes the following:
- Customer data such as address, location, frequency of service, volume per stop, time required per stop, other data as needed
- Truck data such as size, days of the week that it is available, order in which it is to be scheduled, hours available each day, special equipment, other data as needed
- Warehouse and district data such as location and characteristics of associated trucks and drivers
- Time related data such as start date of planning horizon and number of weeks in the planning horizon.
- Product specific data such as unit of measure of the product being delivered
- Any other data required to accurately model unique constraints
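As a rough illustration of the kind of flat input template described above, importing and sanity-checking a few customer rows can be quite simple. The field names here are invented for the example, not the actual PVP template.

```python
# Hypothetical flat customer template, loaded and aggregated.
import csv
import io

customer_csv = """name,frequency_per_week,volume_per_stop,minutes_per_stop
Acme,2,40,15
Bolt,5,10,10
"""

# Parse the template rows into dictionaries keyed by the header line.
customers = list(csv.DictReader(io.StringIO(customer_csv)))

# A simple derived quantity: total delivered volume per week.
weekly_volume = sum(
    int(c["frequency_per_week"]) * int(c["volume_per_stop"]) for c in customers
)
print(weekly_volume)  # → 130
```

A real import would add validation (missing lat/long, bad frequencies) before the rows ever reach the territory optimizer.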
Once optimized, users have the ability to review and assess the characteristics of the territories that are created using tables and maps to provide an enhanced visual experience. And to ensure the optimal distribution plan, users can manually move customers from one territory to another or from one service day pattern to another (e.g. from Monday-Thursday to Tuesday-Friday), if desired.
I was sitting on the plane the other day and chatting with the guy in the next seat when I asked him why he happened to be traveling. He was returning home from an SAP ERP software implementation training course. When I followed up and asked him how it was going, I got the predictable eye roll and sigh before he said, “It was going OK.” There are two things that were sad here. First, the implementation was only “going OK” and second, that I had heard this same type of response from so many different people implementing big ERP that I was expecting his response before he made it.
So, why is it so predictable that the implementations of big ERP systems struggle? I propose that one of the main reasons is that the implementation doesn’t focus enough on the operational decision-making that drives the company’s performance.
A high-level project history that I’ve heard from too many clients looks something like this:
- Blueprinting with wide participation from across the enterprise
- Implementation delays
- Data integrity is found to be an issue – more resources are focused here
- Transaction flow is found to be more complex than originally thought – more resources are focused here
- Project management notices the burn rate from both internal and external resources assigned to the project
- De-scoping of the project from the original blueprinting
- Reports are delayed
- Operational functionality is delayed
- Testing of transactional flows
- Go-live involves operational people at all levels frustrated because they can’t do their jobs
Unfortunately, the de-scoping phase seems to hit some of the key decision-makers in the supply chain, like plant schedulers, supply and demand planners, warehouse managers, dispatchers, buyers, etc., particularly hard, and it manifests in the chaos after go-live. These are the people that make the daily bread-and-butter decisions that drive the company’s performance, but they don’t have the information they need to make the decisions they must make, because of the de-scoping and the focus on transaction flow. (It’s ironic that the original sale of these big ERP systems is made at the executive level as a way to better monitor the enterprise’s performance and produce information that will enable better decision-making.)
What, then, would be a better way to implement an ERP system? From my perspective, it’s all about decision-making. Thus, the entire implementation plan should be developed around the decisions that need to be made at each level in the enterprise. From blueprinting through the go-live testing plan, the question should be, “Does the user have the information in the form required, and the tools (both from the new ERP system and external tools that will still work properly when the new ERP system goes live), to make the necessary decision in a timely manner?” Focusing on this question will drive user access, data accuracy, transaction flow, and all other elements of the configuration and implementation. Why? Because the ERP system is supposed to be an enabler, and the only reasons to enter data into the system or to get data out are either to make a decision or to record the result of one.
Perhaps with that sort of a focus there will be a time when I’ll hear an implementation team member rave about how much easier it will be for decision-makers throughout the enterprise once the new system goes live. I can only hope.
June 3rd, 2012 3:15 pm Category: Optimization, by: Alan Kosansky
Supply chain managers are tasked with making the critical decisions to improve supply chain operations by taking costs out of the system while improving customer service and profitability. The best managers rely not only on their experience but also on data-based decision-making. Making the best decision to minimize supply chain costs and maximize profit requires accuracy in your data.
However, don’t confuse accuracy with precision. Too many decision makers throw out the baby with the bath water when they shun data-based decisions because the data does not precisely reflect the detail of their operations. Wise supply chain managers and analysts understand that for many decisions, aggregated and/or averaged data can accurately reflect the cost/benefit tradeoff of critical decisions and point in the direction of near-optimal choices. For example, modeling manufacturing capability and capacity at the product family level is often accurate enough to make the right supply and demand balancing decisions. Another example: even when optimizing detailed scheduling operations within a single plant, it is important to find the right level of detail to model, so as to balance accurately capturing the realities of the manufacturing operations against being able to evaluate a large number of production sequencing options in order to find the optimal schedule.
Finding the right level of data detail to inform accurate decision tradeoffs while searching for the optimal decisions is an art as much as a science. It requires experience and expertise in both supply chain operations and optimization modeling.
This is my passion, so let me know what experiences you’ve had, and how you have found that balance to achieve optimal performance in your supply chain operations.
A husband, two kids and a golden retriever later… I am back to implementations in Supply Chain planning and scheduling. To my surprise, the same challenges I encountered 10 years ago remain in force today: data, defining business processes, data, implementing software, data, training people, data, supporting the change to a new system and data.
Data collection remains one of the cornerstones of a successful supply chain planning or scheduling implementation. Though mountains of data may exist in a company’s business systems, harnessing it to feed a planning or scheduling model can be extremely complex and time consuming. Interestingly, the data collection process often drives an elucidation of manufacturing practices and process flows, and clients learn what they do and don’t know about their business. This may seem backwards and risky in terms of getting things out of order. In a perfect world, a thorough understanding of manufacturing and business processes would pave the way toward building an Advanced Planning and/or Scheduling system. In reality, they often happen in tandem and are evolutionary in nature.
Deciding early in an implementation how data will be housed, derived and propagated will pay off in the long run. Establishing a systematic, automated way to update and propagate data is as important as the decision of which software system to use. It is worth the investment to put this automation in place, so that as more and more products are added to the system, the data remains manageable and scalable.
From PC to Cloud, emails to tweets, networking happy hours to LinkedIn, it is nice to know some things stay the same.
Many commercial enterprises and public agencies operate telephone call centers to provide effective and timely service for customers. Employing nearly 5% of the national workforce, call centers arguably define the “new factory floor” in an increasingly service-based economy. They are fascinating socio-technological systems which are exceptionally well suited to the application of mathematical modeling and optimization methods.
A typical call center utilizes a computerized call handling system which can archive detailed historical information on call volume, call handling time, and other relevant attributes. This data can be analyzed and aggregated (with appropriate accounting for probabilistic variation) to generate a profile of staffing requirements across future time intervals. In theory, service agents can be optimally scheduled to closely accommodate this profile, resulting in high service levels, low customer abandonment, and efficient agent utilization. In actual practice, however, such performance represents the exception rather than the rule. Most call centers, even well-run ones, do not simultaneously achieve high levels of service quality and operational efficiency [1].
One important reason for the performance gap between theory and practice is lack of sophistication and flexibility in the standard software systems available for call center management. For example, standard systems invariably base interval staffing requirements on the classic “Erlang C” model, which is known to produce distorted results because it does not consider pertinent factors such as customer impatience [2]. Additionally, if the software has any capability for schedule “optimization,” the underlying algorithm is usually a greedy heuristic which sequentially adds agent shifts without due consideration of the complex interactions between them. Beyond these technical limitations, standard systems offer minimal capability to experiment with different shift types and customize the solution strategy.
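For reference, the classic Erlang C model mentioned above fits in a few lines of code. This is the standard textbook formula, shown only to make the discussion concrete; it inherits exactly the limitations noted, such as ignoring customer impatience.

```python
def erlang_c(traffic, agents):
    """Probability that an arriving call must wait (classic Erlang C).
    traffic: offered load in Erlangs; requires agents > traffic."""
    rho = traffic / agents
    # Accumulate the series sum of traffic**k / k! for k = 0 .. agents-1,
    # building each term iteratively for numerical stability.
    term = 1.0
    series = 0.0
    for k in range(agents):
        series += term
        term *= traffic / (k + 1)
    # 'term' now equals traffic**agents / agents!
    top = term / (1 - rho)
    return top / (series + top)

# 10 Erlangs offered to 12 agents: a noticeable fraction of calls wait.
p_wait = erlang_c(10.0, 12)
print(round(p_wait, 3))
```

Adding agents drives the wait probability down quickly, which is why greedy shift-adding heuristics built on this formula feel plausible yet, as argued above, miss the interactions between shifts.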
Profit Point can provide the expertise and custom tools necessary to properly model your unique call center environment and achieve optimal performance. By applying recently refined mathematics, interval staffing requirements can be accurately determined and optimal shift distributions can be precisely derived [3]. Efficiency improvements exceeding 10% are typical, coincident with improvement in service level performance. Many additional operational factors, such as on-line chat activity and agent specialization, can also be addressed. There is no better time than now for you to reap the rewards of optimizing your organization’s call center operations.
[1] Noah Gans, Ger Koole, and Avishai Mandelbaum, “Telephone Call Centers: Tutorial, Review, and Research Prospects,” Manufacturing & Service Operations Management 5, 79–141 (2003).
[2] Lawrence D. Brown, et al., “Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective,” Journal of the American Statistical Association 100, 36–50 (2005).
[3] Dennis C. Dietz, “Practical Scheduling for Call Center Operations,” Omega 39, 550–557 (2011).
You might think the title should be ‘Heuristics or Optimization’, implying a choice. But often the two approaches work well together, with heuristics speeding an optimization process. The Wikipedia definition of heuristic calls it an experience-based technique for problem solving, learning, and discovery. Wikipedia also mentions using heuristics to find a good-enough solution and describes them as ‘strategies using readily accessible, though loosely applicable, information to control problem solving.’
Those descriptions do not emphasize another aspect of heuristics – there is generally an underlying concept that informs the heuristic. There is a good reason why we think it will work well in the majority of cases. For example, an angle sweep heuristic is often used when designing routes for pickup and delivery from a central hub. Those routes are candidates for selection in a formal optimization. The designed routes look a lot like the petals of a daisy.
The heuristic starts out by heading north and picking locations close to that direction on the way out and back. How far out the route goes is a property of vehicle capacity or time limitations. The next route to be generated starts out slightly east of north and follows the same limitations and usually overlaps many of the locations on the first route. Once the entire compass has been swept, the best set of routes to cover all locations is selected by an optimization. In the example the heuristic becomes a front end for the optimization.
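A minimal sketch of that sweep, under assumed coordinates and demands: sort the stops by bearing from the hub and start a new “petal” whenever the vehicle fills up. The only feasibility rule here is capacity; a real implementation would also handle time limits and feed the resulting routes to the route-selection optimization.

```python
# Angle-sweep route construction: sort stops by bearing from the hub,
# then cut the sweep into routes whenever vehicle capacity is reached.
from math import atan2, pi

def sweep_routes(hub, stops, capacity):
    """stops: list of (x, y, demand). Returns a list of routes,
    each a list of stops whose total demand fits the capacity."""
    def bearing(stop):
        # atan2(dx, dy) so that 0 radians points due north of the hub.
        return atan2(stop[0] - hub[0], stop[1] - hub[1]) % (2 * pi)

    routes, current, load = [], [], 0
    for stop in sorted(stops, key=bearing):
        if load + stop[2] > capacity:   # vehicle full: start a new petal
            routes.append(current)
            current, load = [], 0
        current.append(stop)
        load += stop[2]
    if current:
        routes.append(current)
    return routes

# Invented example: five stops of demand 2 around a hub, capacity 4.
stops = [(0, 5, 2), (3, 4, 2), (5, 0, 2), (0, -5, 2), (-4, 0, 2)]
routes = sweep_routes((0, 0), stops, 4)
print(routes)
```

With these made-up numbers the sweep yields three petals of two, two, and one stops, which would then be candidate routes in the formal optimization.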
Another example comes from a driver scheduling problem. Suppose a set of drivers must pick up some commodity from a set of locations for processing at a central plant. Each trip in this example is an out-and-back because of the nature of the commodity, i.e., only one location can be visited per trip. Drivers pick up multiple loads in a day, and each location requires multiple visits. The pickup times are fixed because of other problem features. One approach is to simply allow all combinations of driver-load-location pairings and let an optimizer grind away.
But there are other desirable features of the solution: equalizing the number of loads among drivers, and keeping driver dead time between loads to a minimum. Specifying all the driver loads by some simple heuristic, e.g., sending a driver out for the next load as soon as possible, usually ends up with some loads that cannot be covered. A totally greedy approach fails.
An approach that seems to work well in this case is to consider some drivers for early loads and some for late loads. Work from the front of the early loads assigning each of the early drivers the first two loads they can feasibly complete. Then work from the end of the late loads assigning the last two loads of the late drivers as the last two loads they can feasibly complete.
The loads in the middle and the drivers that are not considered early or late are handled by the optimization. Notice that the heuristic does well on the driver gaps and guarantees that most drivers automatically get two loads, which is a good base in this application. It also serves to speed the optimization by reducing the pairings to be searched while preserving enough flexibility to get a solution.
Furthermore, the heuristic is flexible in that one can choose how many drivers to consider early or late and how many of their loads to nail down heuristically. Most importantly the heuristic gets better solutions than the optimization finds in any reasonable time. So while the optimization must have that best solution out there, it will not find it in the time frame the scheduler has to work within.
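A hedged sketch of that front-and-back fixing idea, with invented pickup times and with the feasibility checks omitted for brevity: pin the first loads of the early drivers and the last loads of the late drivers, and hand the middle of the day to the optimizer.

```python
# Fix the earliest loads to "early" drivers and the latest loads to
# "late" drivers; the remaining middle loads go to the optimization.
# In the real heuristic each assignment is checked for feasibility
# (drive times, gaps); that check is omitted in this sketch.

def fix_loads(load_times, n_early, n_late, per_driver=2):
    """load_times: pickup times sorted ascending. Returns (fixed, remaining):
    fixed maps a driver id to its pinned loads; remaining is what the
    optimizer still has to place."""
    fixed = {}
    front = list(load_times)
    for d in range(n_early):            # work forward from the first loads
        fixed[f"early-{d}"] = front[:per_driver]
        front = front[per_driver:]
    back = front
    for d in range(n_late):             # work backward from the last loads
        fixed[f"late-{d}"] = back[-per_driver:]
        back = back[:-per_driver]
    return fixed, back

times = [6, 7, 8, 9, 12, 13, 16, 17, 18, 19]   # invented pickup hours
fixed, remaining = fix_loads(times, n_early=2, n_late=2)
print(fixed)
print(remaining)   # the middle loads left to the optimization
```

Note how the knobs `n_early`, `n_late`, and `per_driver` give exactly the flexibility described above: how many drivers to treat as early or late, and how many of their loads to nail down heuristically.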
My experience is that flexibility is one of the key properties in any good heuristic. Adaptability to new situations is also a feature of good heuristics. One final example illustrating adaptability is based on an algorithm called ‘Toyoda’s Algorithm’. I have applied this particular idea to a number of situations.
In this example it is applied to sequencing the unloading of containers at a port. Each container holds a selection of parts which have to be processed by various work centers prior to shipment out of the port. A manifest shows the selection of the parts and it is known how much work is associated with a given part at a work center. Not every work center can process every part type. The objective is to get all the work centers to end their day at the same time and to keep all the work centers busy throughout the day.
The approach is easy to understand in two dimensions. The X and Y axes represent the available work time of two work centers, e.g., eight hours. The arrows represent the amount of work delivered to a work center by a given container. The dashed line is the ‘ideal path’: an equal amount of work at each work center throughout the day.
The heuristic simply needs to loop through all available containers at each iteration and always try to get back onto the ideal path. Penalties for deviation are totally flexible in that small deviations can be without penalty while sizable ones are some function that penalizes them heavily. Other problem features can be captured, e.g., buffer space at a work center, and incorporated in the penalty function. This is not a formal optimization, but it is speedy and good enough for the real world application.
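In a simplified two-work-center version with invented container data, the rule reduces to: at each step, pick the container that keeps cumulative work closest to the equal-work diagonal. A real penalty function could be nonlinear and include buffer-space terms, as noted above; here the penalty is just the absolute deviation.

```python
# Greedy 'return to the ideal path' sequencing for two work centers.
# Each container is (work_at_center_1, work_at_center_2) in hours.

def sequence_containers(containers):
    """Unload containers one at a time, always choosing the one that
    minimizes the deviation of cumulative work from the equal-work line."""
    done = [0.0, 0.0]          # cumulative work delivered to each center
    seq, remaining = [], list(containers)
    while remaining:
        def deviation(c):
            # Deviation from the ideal path if this container went next.
            return abs((done[0] + c[0]) - (done[1] + c[1]))
        best = min(remaining, key=deviation)
        remaining.remove(best)
        done[0] += best[0]
        done[1] += best[1]
        seq.append(best)
    return seq

containers = [(4, 1), (1, 4), (3, 3), (2, 5)]   # invented manifest data
seq = sequence_containers(containers)
print(seq)
```

As the text says, this is not a formal optimization, but each iteration is a single pass over the available containers, so it is fast enough for a real operating day.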
The watchwords seem to be these. Look for the important features of a good solution. See if a simple rule or concept will drive the solution toward these good features. This is especially true when there is little or no economic benefit to the optimal solution. Try to develop a heuristic that is flexible in adapting to normal variance in instances of the data and can be tuned to choose between competing objectives.
Okay. I am an anomaly. I live in Utah and drink coffee. The majority of the people that live in Utah do not drink coffee, and that is OK, but I do. So, is there a shortage of coffee Cafés in Utah? No. There are many cafés and several that serve outstanding coffee.
We have an exceptional espresso café downtown, located on a side street off of Main. They roast their own coffee and use triple certified organic grown beans. It is the type of place the local coffee lovers go to hang out and have good conversation over a morning or afternoon latté or espresso. Possibly the best coffee I have ever had. What is interesting to me is that a large percentage of the residents in my area do not even know that this café exists.
So what is my point? When it comes to outstanding services or products most people are unaware of what is available, primarily because it does not fit into their lifestyle or what they’re accustomed to. I believe you can transfer this similarity to the business world. Manufacturing logistics and transportation people become accustomed to doing things a certain way. Over time they may become blind to ideas for improving the supply chain. They are unaware of an exceptional Supply Chain Café, even when it is located just seconds from a combination of keystrokes and Google.
It is not their fault they are missing the best latté available. We, as consultants, who prepare those delightful solutions from the Supply Chain Café menu, have probably not done the finest job of promoting our services and software to your neighborhood, but that is changing.
There are many empty cups in the supply chain, waiting to be filled with successful solutions. Supply chain and logistics managers tackle difficult supply chain problems every day, but they are so focused on getting their jobs done and making it through the day that they have little time to think of alternatives that may improve their processes and well-being. I am not sure how we can help everyone, so let’s focus on the window shoppers. These are the ones that are aware of the café, but have never been inside. Maybe you are one?
If you are reading this blog, then you must be a window shopper. I am guessing you are looking for a better espresso. OK, you found “Profit Point”, although you may not know what we do. Guess what? Help is on its way. We can share our menu with you. We just published four videos that will introduce you to the Profit Point team and what we do. Embrace three minutes out of your day, select one of the videos, and watch it. Learn how we help companies improve their supply chain, by serving the best coffee with a smile.
Yes, you can improve your supply chain with our help. The supply chain solution that you are looking for is about to be yours. And if you place an order, we can fill your cup to the top with the “good triple certified” stuff. If you cannot seem to find that special item on our Supply Chain menu, then have no fear; we love special orders.
So, is there a shortage of Supply Chain Cafés? No. You just need to find the one that serves the optimal latté. I know it’s out there somewhere.
Frequently, you might hear somebody say that the capacity of a production facility is some known and fixed value. When this happens, be very wary of what they might be trying to sell you, because as with so many other things, when measuring capacity “the devil is in the details”.
The “capacity” of a factory sounds like a pretty simple notion and something that should be easy to calculate. But this is only true for production systems that are fairly straightforward, consisting of totally independent machines and processes. If, however, the operation consists of processes that are interconnected and interdependent, then capacity can be a fairly difficult thing to measure.
In the vast majority of production systems, there is a very real link between capacity and three critical factors:
- the mix of products, and how much time is required for setup/cleanup between consecutive production runs,
- the ability to create sophisticated and optimal schedules for the production resources,
- how much physical space exists in the factory where products that are only partially complete can be kept or stored; what’s known as Work in Process (or WIP) Inventory.
To see these three relationships at work, consider the simple case where a certain department produces two products, A and B, which both use the same piece of equipment, and there is only one of these machines available. The production rates of the machine are in the table below, and there is a 4-hour setup time required when the machine switches over from producing one product to the other. Now consider the two scenarios below: in Scenario A, the capacity is 170 units per day, while in Scenario B the capacity is 145.
| Product | Production Rate (units/hr) | Daily Sched. Qty. (Scenario A) | Hrs Required (Scenario A) | Daily Sched. Qty. (Scenario B) | Hrs Required (Scenario B) |
|---|---|---|---|---|---|
| Setup hrs | | | 4 | | 4 |
This example clearly demonstrates the first item above: the “capacity” of the department depends to a large extent on the mix of the two products being produced.
Now suppose that management wants to produce 110 of A and 80 of B per day. These new requirements seem to clearly exceed the capacity of the department given EITHER Scenario A or B. But maybe the necessary capacity can still be found.
If the new requirement is to produce at this increased rate for only a single day, or to produce at this rate each and every day, then there is definitely not enough capacity on the machine. However, if the increased production is required over a sustained length of time, then we can gain extra production by modifying the production schedule so as to eliminate or minimize the occurrence of the 4-hour setup. If the department schedules production in long blocks spanning several days, where first one product and then the other is produced, then the department DOES have the capacity. In the table below, for example, 440 units of A are first produced, followed by 320 of B, with a 4-hour setup between them. This represents 4 days’ worth of the increased management requirement (110 of A and 80 of B, each multiplied by 4).
| Product | Production Rate (units/hr) | Sched. Qty. | Hrs Required |
|---|---|---|---|
| Setup hrs | | | 4 |
With this schedule, the total required hours of 94 is less than the 96 hours available in 4 days, and so now there IS enough capacity! By scheduling wisely (i.e. “working smarter”), the department’s average daily capacity has actually risen to (760 / 4) = 190 units per day, a good deal higher than either 170 or 145 in the two previous scenarios.
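The schedule arithmetic can be sketched directly. Since the tables’ production rates are not reproduced here, the rates below are assumed values, chosen only to illustrate how fewer setups raise average daily capacity; the conclusion, not the specific numbers, is the point.

```python
# Compare daily changeovers against long block scheduling, charging a
# fixed setup between consecutive runs of different products.
SETUP_HRS = 4

def hours_needed(runs, rates):
    """runs: list of (product, qty) in sequence. A setup is charged
    whenever the product changes between consecutive runs."""
    hrs, prev = 0.0, None
    for product, qty in runs:
        if prev is not None and product != prev:
            hrs += SETUP_HRS
        hrs += qty / rates[product]
        prev = product
    return hrs

rates = {"A": 10.0, "B": 8.0}   # ASSUMED units/hr, for illustration only

# One setup every day vs. one setup across a 4-day block.
daily = hours_needed([("A", 110), ("B", 80)], rates)
blocked = hours_needed([("A", 440), ("B", 320)], rates)
print(daily, blocked)            # → 25.0 88.0
print(blocked < 4 * daily)       # → True: blocks beat daily changeovers
```

With these assumed rates, four daily-changeover days would cost 100 hours, while the single-setup block needs only 88, the same “work smarter” effect the tables above demonstrate with the department’s actual rates.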
Thus, the department capacity clearly depends on the ability to implement “smart” production schedules that make the best use of the available resources, i.e. the second issue mentioned earlier.
Finally, this higher-capacity schedule is an example of a “good news / bad news” situation. Although the plant is able to produce more (and presumably company revenues will go up), the downside of this higher-capacity schedule is that the department will be maintaining a larger amount of inventory in the supply chain on average. And if there is more “stuff” in the pipeline, then there has to be the physical space to put it. This is an important consideration if inventory has to be stored in or on particular types of storage facilities such as refrigerators or special racks. Therefore, although it might be possible to “buy” extra production capacity with a better equipment schedule, it is important to realize that different schedules put more or less demand on the spatial capacity of the actual storage facilities.
Therefore, this example illustrates the third item: increasing output can put stress on the plant’s storage facilities.
This last scenario also shows that maximum capacity is not necessarily the same as minimum cost. Because notice that in this scenario there is only one 4-hour setup, and thus any costs from the setup activity are averaged over a larger number of produced items. But offsetting this savings in setup cost is the fact that with the increased WIP, the inventory costs will have gone up.
The fact that capacity can be such a difficult thing to measure does not mean that it is not a valuable parameter for describing a given system. What it does mean is that when any capacity value is given for a particular supply chain, it is absolutely critical to understand the assumptions that underlie it. The fact that capacity is such a highly malleable concept simply reinforces the fact that managing a company’s supply chain is always a delicate balancing act between competing costs and non-monetary factors.
A good friend of mine, who works for a large employer in her city, recently told me that her department’s budget, along with every other department budget that was classified as “Administration” in the ubiquitous SAP system, had to be cut by a large and specific percentage.
It didn’t matter that the “Administration” label was not uniformly applied across her organization and that some departments that were so labeled performed functions very similar to other departments that were not stuck with that label. It didn’t matter what services each department provided, or how efficiently they provided them, they just had to cut the budget and they had to hit the number. Incredibly, it didn’t matter that her group was one of the few “Administration” groups that actually generated revenue; in her case three times their total annual budget spend.
Unfortunately, hers is not the first story like this that I have heard.
There is no doubt that many corporations, organizations, governments and households have been hit hard by the recent economic downturn. Each of these groups has been forced to make some difficult decisions. So what do I have against across-the-board (ATB) budget cuts? Basically, I think it has to be the worst way to reduce costs in an organization, and here’s why.
Let’s take a look at something that is important and familiar to all of us; the family budget. Sadly, many families have been forced to drastically reduce spending as a result of a lay-off or furlough over the past two years. In those cases, an ATB cost-cutting strategy just doesn’t work. Try telling the bank that you’ve had to cut your monthly mortgage payments by 15%. I doubt that they will be impressed when you tell them that you’ve had to do the same with your property taxes, insurance premiums, electricity and water payments, as well. You might get lucky and be able to renegotiate your mortgage and you might get lucky if your state provides utilities assistance for people who have recently lost their jobs, but most tax assessors and insurance companies will not be particularly sympathetic.
But my guess is that you’d probably take a very different type of approach to cost-cutting in your household. You’d probably take a hard look at all of the money that you’re spending over a month or a quarter. You might first examine your spending to see if you could conserve on the amount you consume or if there were ways to get the same goods and services in a cheaper manner. If that didn’t reduce your spending enough, you’d probably divide the remaining spending into different categories. There are many different ways to categorize your expenses, but they’ll probably come down to something like, 1) Essential; 2) Non-essential, but painful to cut; 3) Non-essential and easier to cut. If you’re lucky, you will be able to cut enough of your spending by eliminating or reducing your expenses in the non-essential categories. If not, you might be forced to re-examine what really is “Essential.” For example, your mortgage payment is essential, as long as you plan to stay in your house, but if the situation calls for it, you can reduce your costs by moving into a smaller home or apartment. Not a fun choice, but it could be the right thing to do in certain situations.
Looking back on the family budget example, what did we do? First, we looked for opportunities to conserve and less expensive ways to purchase the same goods and services. Next, we prioritized our spending so we could make good decisions. To find less expensive ways to purchase the same goods and services and to prioritize the spending means that we needed to 1) understand what we were getting for the money we were spending and 2) understand what would happen when we stopped spending that money. After prioritizing our spending we made trade-offs by deciding what we could live without. Some of the trade-offs may have been no-brainers, but some may have been very difficult.
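The contrast between an across-the-board cut and a prioritized cut can be sketched in a few lines of code. Every category, dollar amount, and priority below is hypothetical, purely for illustration:

```python
# Hypothetical monthly budget; all amounts, categories, and priorities are made up.
budget = [
    # (name, monthly cost, priority: 1=essential, 2=painful to cut, 3=easier to cut)
    ("mortgage",      1500, 1),
    ("utilities",      300, 1),
    ("groceries",      600, 1),
    ("car payment",    400, 2),
    ("dining out",     250, 3),
    ("subscriptions",   80, 3),
]

target_savings = 300  # amount that must be cut

# Across-the-board: shave the same percentage off every line item,
# including payments (mortgage, taxes) that can't actually be reduced on demand.
total = sum(cost for _, cost, _ in budget)
atb_pct = target_savings / total

# Prioritized: cut the lowest-priority items first until the target is met.
savings, cuts = 0, []
for name, cost, prio in sorted(budget, key=lambda item: -item[2]):
    if savings >= target_savings:
        break
    cut = min(cost, target_savings - savings)
    savings += cut
    cuts.append((name, cut))

print(f"ATB approach: cut every line by {atb_pct:.0%} (even the mortgage)")
print(f"Prioritized approach: {cuts}")
```

The prioritized version hits the same savings target while leaving the essential categories untouched, which is precisely the trade-off reasoning described above.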
I would argue that this is the same process that should occur in any organization that needs to reduce its spending. It amazes me how a manager can walk into a large organization and mandate a large cut in the budget for each and every department (as they are defined in the accounting system, but that’s a different blog) without understanding where, how and why the money is spent. It would be laughable if the results weren’t so sad.
ATB budget cuts penalize your best managers. These are the managers that run a lean operation, who have taken the initiative to drive out all of the waste and improve productivity. They are already doing the job you’ve asked them to do with the fewest resources possible, but they are being treated in the same manner as the manager who is either not as effective, or who has become jaded by past ATB cuts, so that he/she keeps some “rainy day” resources in the budget for just such “emergencies.” (… and people wonder why their best managers seem to leave after these types of budget cuts, even when their positions are not eliminated.)
Let’s not forget the knock-on effect of penalizing your best managers. The best managers often assemble the best teams to do the work. If one or more members of a lean, highly productive, well-functioning team is forced out in an ATB cut, the rest of the team is forced to pick up the additional work of the departing team members. This extra work, on top of an already full workload, either forces the quality of the work to suffer, or reduces the total output of the team; that is, if the rest of the team elects to stay in an organization that doesn’t value efficiency.
ATB budget cuts often fail to achieve their savings targets or result in so much “slash and burn” damage to the organization that “add-backs” must occur after the blood-letting so the organization can survive. It continues to amaze me that these managers have the time to perform an initial ATB cut, followed by another one or by an “add back” program; but don’t have the time to do it right the first time.
ATB cuts suggest that the value of the work performed under each of the budgets is equal to the value of the work performed in all other budgets. I have seen a lot of different organizations over my career and I don’t think I’ve ever observed this to be the case. Take my friend’s case: her group makes money, while others spend it. Is a cost cut that forces a reduction in revenue equal to a cost cut that has no impact on revenue? Probably not.
So, what’s the answer? Clearly, many organizations are forced to radically reduce costs just to survive. I think it goes back to our home budget example: 1) know what you’re spending; 2) understand what you get for it; 3) find ways to get the same or similar things for less money; and 4) make the hard choices about what you can do without.
In the end, my experience has been that managers who drive ATB cost reductions are either unable or unwilling to understand their business processes and organizations sufficiently; lack the imagination or skills to reengineer their business processes; or lack the courage to make the hard choices about what their organization will and will not do in the future.
To all those top level managers who have instituted ATB cuts, or for those who are planning to do so: Don’t do it! Think before you act, and save your company the added burden of bad management.
Did you miss Jim Piermarini’s talk at CSCMP about Logitech’s supply chain distribution methodology? For those who are interested, we are posting the slide deck here for your review. To download the complete presentation, click the image below:
July 6th, 2010 12:10 pm Category: Publications, by: Editor
“Swapping commodities with other manufacturers instead of shipping internationally can greatly reduce transportation costs and boost profits. Finding the right swap partner will help you avoid the risks that are inherent in these arrangements.”
“If you’re under a mandate to make substantial cuts in supply chain costs, you probably have already “picked the low-hanging fruit”—that is, you’ve made the obvious, easy-to-achieve improvements. You probably have optimized delivery to your customers by routing orders through the most efficient network of warehouses. It’s likely that you also are sourcing from highly efficient manufacturing plants around the world. And perhaps you have partnered with ocean carriers and leveraged your shipping volumes to negotiate the best rates in the industry….”
This month, Manufacturing Today magazine published an article entitled Supply Chain: Time to Experiment, which was co-authored by Dr. Alan Kosansky and Dr. Joe Litko of Profit Point. The article discusses how executive-level business decisions can involve a broad range of interconnected variables, leading to an extensive array of options, and how business leaders can gain exceptional insights into future scenarios.
Read the complete article here.
This month, Chief Executive magazine published an article entitled Increasing Value Across the Supply Chain, which was co-authored by Dr. Alan Kosansky and Jim Piermarini of Profit Point. The article discusses how optimization technologies can be used to maximize supply chain value by implementing more sophisticated approaches to standard practices such as inventory and asset management.
Read the complete article here.
This month’s issue of Supply & Demand Chain Executive features a supply chain “best practices” article entitled The Changing Landscape of Optimization Technology. The article, which was co-authored by Profit Point’s Director of Sales, Rich Guy, and the company’s President, Dr. Alan Kosansky, reviews the optimization tools that are empowering today’s leading supply chain decision makers.
Logitech is a world leader in personal peripherals, driving innovation in PC navigation, Internet communications, digital music, home-entertainment control, gaming and wireless devices. With a history of fast-growing distribution channels and a product line that is frequently being updated, Logitech’s key supply chain challenges are similar to those of many other consumer electronics heavyweights. Its product life cycles are relatively short and consumer demand can be fickle. But when Logitech gained global, mass market status with customers ranging from Walmart and Best Buy to direct online sales, its supply chain challenges were compounded.
With mounting distribution challenges, Logitech engaged Profit Point to bridge the gap between their ERP and their real world need to compete. Click the link below to access the case study:
Speed is a common measure for many of today’s supply chains. We all want to know how quickly we can respond to the customer’s request. Speed of response is one of the main drivers behind the current thinking on the Demand Driven Supply Network (DDSN) and many other recent supply chain innovations. One important thing about “speed” is that it comes in at least two flavors, “fast” and “slow”. Many of today’s best supply chains use both. A key supply chain opportunity is to know when to use which speed for transportation to meet customer service targets at the lowest total cost.
You might ask yourself, “Why, if my supply chain is supposed to be quick to respond to customer requests, would I want to include any speed other than ‘fast’?” One “fast” answer is cost. We see this cost almost every day in our personal lives as well as our professional lives. When we purchase something off the internet, we always have the choice of “standard shipping” for a certain price (sometimes free) for delivery in a few days, or “premium shipping” that can have the package on our doorstep the next morning. However, this next-day service almost always costs us more. Likewise, when a supply chain professional chooses rail for long-distance shipments, the costs are substantially lower than they would have been for truck, but at the price of a longer transit time.
Because of this cost differential, many of today’s supply chains use multiple modes (fast and slow) of transportation. The slower mode of transportation is usually applied to lower cost materials that have a longer shelf life and are consumed in a predictable pattern. That is, their demand can be reasonably accurately forecasted. However, in practice there are often significant numbers of shipments of this same material that are made using faster, more expensive modes of transportation. Why is this, and how can we minimize it in our supply chains?
A major factor contributing to the freight premium seen in these types of lanes is the fact that most supply chains are planned with “steady state” in mind, using average demands, average transit times and average supply capacities. Steady state looks great on paper, but rarely happens for prolonged periods in the real world. Thus, we find that natural variation in customer demand, transit time or manufacturing capacity can create low inventory situations that require expedited shipments to avoid a stock-out. In addition, unplanned surges in demand, transportation interruptions (like port congestion, strikes or storms) and temporary shortfalls of supply perturb the system and push us to expedite shipments that are supposed to move by a cheaper mode of transportation.
Reducing the premium freight caused by the natural variation in demand, transit time and supply should happen at the supply chain design stage. Using the right blend of statistics, modeling and experience will result in a much more robust supply chain that balances the additional cost of inventory, and logistics assets against the high cost of premium freight. Dealing with the issue of large, unanticipated perturbations to the supply, demand and transit times is another kettle of fish.
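One standard element of that statistical blend is the textbook safety stock formula, which sizes buffer inventory against both demand variability and lead-time variability. This is a generic illustration, not the model from any particular engagement, and all of the numbers are made up:

```python
# Textbook safety stock under demand and lead-time variability:
#   SS = z * sqrt(LT * sigma_d^2 + d^2 * sigma_LT^2)
# All inputs below are illustrative only.
from math import sqrt
from statistics import NormalDist

def safety_stock(service_level, avg_demand, sd_demand, avg_lt, sd_lt):
    """Safety stock (units) for a target cycle service level.

    avg_demand / sd_demand are per day; avg_lt / sd_lt are in days.
    """
    z = NormalDist().inv_cdf(service_level)  # service-level z-factor
    return z * sqrt(avg_lt * sd_demand**2 + avg_demand**2 * sd_lt**2)

# Illustrative comparison: 95% service, demand 100 +/- 20 units/day,
# rail transit 10 +/- 4 days vs. truck transit 3 +/- 0.5 days.
rail = safety_stock(0.95, 100, 20, 10, 4)
truck = safety_stock(0.95, 100, 20, 3, 0.5)
print(f"rail safety stock:  {rail:.0f} units")
print(f"truck safety stock: {truck:.0f} units")
```

Note how the long, variable rail lead time demands several times the safety stock of the short, reliable truck lane; that inventory cost is exactly what gets balanced against the high cost of premium freight at the design stage.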
This issue may best be explored by using an example. Profit Point was retained by a large specialty materials manufacturer to help solve this precise problem. The specialty materials company manufactured a number of key raw materials at a large, integrated site on the US Gulf Coast. From there, they were shipped to nine company locations around the US and Canada as well as to a number of external customers. Although there were multiple products manufactured at the site, the level of integration among the products was such that an upset in one part of the process could impact the supply of a number of these raw materials. The preferred mode of transportation for all of the material was by rail in tank cars. However, each year the company was spending millions of dollars in premium freight to move the materials by tank truck to the very same sites. The root causes for the premium freight were:
1. The manufacturing site was required to operate very close to its instantaneous maximum capacity to meet demand. Thus, any small interruption in supply had large ripple effects through the system, because “catch-up capacity” to rebuild inventory was almost non-existent. (This is why you need significantly more inventory when you operate so close to capacity, but that’s the subject of a future article.)
2. Rail transit times were extremely variable, particularly to sites in the Far West.
Because neither of these problems would be solved in the foreseeable future, we needed to develop a customized solution that would minimize their premium freight costs while continuing to deal with the ongoing perturbations in the supply chain. In other words, how could we manage the transition from all rail shipments to partial or full truck shipments and back to rail at the minimum cost while meeting customer service requirements?
Using a customized heuristic algorithm, Profit Point developed a finite-capacity scheduling application that created a product- and customer-site-specific schedule of tank cars and tank trucks, one that maintained minimum safety stocks at the nine company sites and met customer service goals for the external customers while minimizing the total freight spend. With this new tool, the manufacturer cut premium freight costs dramatically and improved overall customer service.
With the tool, the scheduler could create much better schedules in a fraction of the time required by the spreadsheet approach she had been using. (Those of us who have worked on the plant floor remember that “optimum” can be loosely defined as “the first schedule that works” when you’re up to your eyeballs in alligators with people calling to find out when they’re going to receive their next shipment. The new scheduling algorithm was able to look at hundreds of schedules that would work and choose the best one.) She was able to quickly orchestrate the moves of certain lanes into trucks and then back into rail cars as conditions changed, either on the supply side or on the transit-time/demand side of the process.
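The customized algorithm itself is not published, but the core mode-selection idea can be reduced to a deliberately simplified sketch: use the cheap, slow mode whenever projected inventory at arrival clears safety stock, and expedite only when it does not. The function, parameter names, and numbers below are all hypothetical:

```python
def choose_mode(on_hand, daily_demand, safety_stock,
                rail_days=10, truck_days=3):
    """Pick the cheapest transport mode that avoids a projected stock-out.

    A deliberately simplified, illustrative heuristic: project inventory
    at the shipment's arrival date and use rail (cheap, slow) unless that
    projection dips below safety stock, in which case expedite by truck.
    """
    projected_at_rail_arrival = on_hand - daily_demand * rail_days
    if projected_at_rail_arrival >= safety_stock:
        return "rail"
    projected_at_truck_arrival = on_hand - daily_demand * truck_days
    if projected_at_truck_arrival >= safety_stock:
        return "truck"
    return "truck (risk of stock-out)"

print(choose_mode(on_hand=1500, daily_demand=100, safety_stock=400))  # comfortable: rail
print(choose_mode(on_hand=900,  daily_demand=100, safety_stock=400))  # tight: expedite by truck
```

The real application layered finite production capacity, multiple products, nine destination sites, and external customer service goals on top of logic like this, and chose among hundreds of feasible schedules rather than one shipment at a time.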
If you’d like to find out more about managing multiple transportation modes in common lanes in your supply chain, please call us at (866) 347-1130, or send us an e-mail using the following link: http://www.profitpt.com/contact/.
Earlier today, Fair Isaac Corporation (NYSE:FIC), the leading provider of analytics and decision management technology, announced that it has acquired U.K.-based Dash Optimization, makers of Xpress-MP, the world’s leading software product for decision modeling and optimization. The move augments Fair Isaac’s decision management solutions, which automate, improve and connect decisions to enhance business performance.
The acquisition is considered a strategic move by Fair Isaac that will greatly benefit both Dash and Fair Isaac customers. According to Dash, all of the Dash employees are very excited about this new direction and will continue working as Fair Isaac employees. Fair Isaac views Dash as a key business which it is looking to grow and leverage while making a significant investment in the product suite.
We look forward to working with the new combined company and expect that it will provide new resources and opportunities for all Profit Point clients.
Look for our new web-based release of Profit Network in spring 2008. Profit Network is a stand-alone optimization planning software package that is used to design better supply chain networks. Profit Network can be used to analyze the placement and location of production facilities, distribution centers and warehouses over a multi-period planning horizon. Profit Network helps firms restructure their supply chains after mergers, periods of rapid growth or in anticipation of geographic or product preference shifts in the market. Savings of 10% of supply chain costs and 25% of supply chain cycle time are typical.
Profit Vehicle Router (PVR) helps distributors save money by cutting the time needed to develop sales/distribution territories and schedules, as well as reducing delivery miles and the number of sales/delivery vehicles and drivers needed. PVR helps you plan optimal delivery or sales territories, cycle-day territories (what days each site will receive deliveries), and daily routes from a distribution center or office, improving customer service, employee productivity and ultimately increasing profits.