September 3rd, 2015 3:28 pm Category: Optimization, by: Jim Piermarini
New technologies spring up each year, seemingly every day. There are new programming languages, frameworks, and processes; new personal productivity gadgets and apps; new internet connectivity methods (routers, access points, bridges, cable modems, etc.). It can all be pretty daunting if you think you need to stay ahead of all this change.
Back in the day, I used to go to a brick and mortar bookstore and peruse the computer books, and end up purchasing one or several to read cover to cover to learn about the new item. I have purchased many dozens of technology-related books over the course of the last 20ish years in my attempt to stay abreast of the bow wave of technology change. I realized recently that I have not purchased a new technology book in several years. I got to thinking about why, and whether I would be comfortable with my wife’s request to toss all these old books. My first reaction was: I can’t get rid of these, they are books! But then I got to considering whether I had opened them anytime in the last 5 years (or 10 or 15!), and projecting whether I would actually open them anytime in the foreseeable future. The short answer is, I really can’t see when I would open these books, ever again. So I asked myself why that is. The answer is not that I have given up my desire to stay current, not exactly. Nor is it that all the technology books are hopelessly out of date (although some are). The reason I don’t think I’ll be using these books ever again has to do with the way the internet, and Google specifically, has changed the way we learn.
Learning a new technology is easier today than ever before. You can google the subject and find not only the theoretical background, but also many practical implementation details and examples. For instance, along with myself, I know several people who are self-taught in SQL Server using only the resources available on the internet. And we are actually pretty competent at it. Given that experience, I know that I could also easily learn MySQL (I have had to learn some of that recently) or Java (again, I’ve dabbled in it) or MongoDB, or any other new NoSQL database technology. Knowing that there are ample examples and many resources for new technologies has allowed me to redefine how much I need to know before I can be confident that I can tackle a project in that new technology. I know that the syntax is a detail that will soon fall into place. Now that syntax may be in a book somewhere on my shelves, but it is also on the internet, just a few clicks away. I’ll opt for the easier and faster approach to getting that info anytime. So the books stay on my shelves (or get donated, as my wife is suggesting).
Keeping current today in technology is a different thing than knowing all the depth and detail of a subject was in previous years. Google is everywhere, and has almost everything, and it is not going away any time soon. Think of calculators, and the way they were reviled for undermining the need to learn how to do math in your head. “You can’t always count on having a calculator!” was the refrain that was meant to show the importance of being a competent math mentalist. But today there are calculators everywhere: on my PC, my phone, and on my watch (if I had a Google watch), and for that matter, so is Google! It seems reasonable to expect that the internet and Google search will be with us for some time. People have accepted the pervasive and ubiquitous nature of the internet, and it is changing the way we devote our limited brain space to information and knowledge. For me, I think that it is more important to know some critical details along with a broad understanding of the subject matter. I can offload the rest of the details to the internet, to be retrieved when I need them. My local cache of info can be more specialized, in that it can be very deep and narrow in some areas while very broad and shallow in others. I don’t mind being shallow in broad areas, since even there, I know I can go to any depth I need very quickly with the help offered on the internet. That is how my view of knowledge has been transformed: away from packing it into my head and onto my bookshelves, and into the internet age. Others may have different needs for knowledge, and that is a discussion beyond my understanding. And while there may be a book on this subject, I’m sure I could google it.
March 6th, 2014 9:32 am Category: Operations Research, Optimization, Optimization Software, Profit Network, Profit Vehicle Planner, Profit Vehicle Router, Supply Chain Improvement, Supply Chain Optimization, Supply Chain Planning, by: Jim Piermarini
In recent weeks, I have been thinking about testing our applications, like our popular Profit Network, or Profit Vehicle Planner. When we test, we run data sets that are designed to stress the system in different ways, to ensure that all the important paths through the code are working properly. When we test, our applications get better and better. There are many good reasons to test; most importantly, to know that an improvement in one part of the code does not break a feature in a different part of the code.
I have been thinking about how we could test our code a bit more, and the means by which we could do that. I have been reading about automated testing, and its benefits. They are many, but the upshot is that if the testing is automated, you will likely test more often, and that is a good thing. To automate application testing requires the ability to churn out runs with nobody watching. And to do that, the application needs to be able to be kicked off and run in a way that there are no buttons or dialog boxes that must be manually clicked to continue. There can be no settings that must be manually set, or information reviewed to decide what to do next. In addition, the application must then save the results somewhere, either in the instance of the application, or to a log file, or to a database of some sort. Then finally, to really be testing, the results must be compared to the expected results to determine the pass/fail state of the test. This requires having a set of expected results for every test data set.
In looking at the process above, I see numerous similarities to the process used to run a sensitivity analysis, in that many runs are typically made (so automation is a natural help) and the results need to be recorded. Sensitivity analysis is a typical process for users of our Profit Network tool, and our Profit Planner and Profit Scheduler tools. An additional step in sensitivity analysis, however, is that you may desire to change the input data in a systematic way (say Demand +5%, and Demand -5%), and to the extent that it is indeed systematic, this too could be folded into the automation. The results analysis is different too, in that here you would like to look across the final sets of results at the differences, while in testing you just compare one set of test results to its expected results. I can foresee difficulty in automating the data changes, since each type of data may need to be changed in a very specific way. Nevertheless, even if the data changes are manual, they could be prepared ahead of the run, and the runs themselves could be grouped in a batch run to generate the results needed for a sensitivity analysis.
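For the systematic data changes, even a small script can prepare the perturbed data sets ahead of the batch run. A minimal sketch of the idea (the dict-with-a-"demand"-list layout is purely illustrative, not our applications' actual data format):

```python
import copy

def make_scenarios(base_data, factors=(0.95, 1.00, 1.05)):
    """Sketch of systematic input changes for a sensitivity batch:
    scale every demand record by each factor (e.g. Demand -5%, base, +5%).
    Assumes the data set is a dict with a "demand" list of quantities."""
    scenarios = {}
    for f in factors:
        data = copy.deepcopy(base_data)  # never mutate the base case
        data["demand"] = [qty * f for qty in data["demand"]]
        scenarios[f"demand_x{f:.2f}"] = data
    return scenarios
```

Each scenario can then be queued for an unattended batch run, with the scenario name carried through to the stored results.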
Constructing a harness that lashes up to an application, where you can define the number of runs to be made, the settings for each run, the different data sets to be used, and the output location for results to be analyzed, would be useful not only for testing, but for the type of sensitivity analysis we do a lot of here at Profit Point.
I am going to encourage our developers to investigate this type of a system harness to be able to talk to and control our applications to be able to run them automatically, and have their results automatically stored in a data store for either test or sensitivity analysis.
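Such a harness might look like the following sketch. The command-line flags, file formats, and result fields here are assumptions for illustration only; our applications would need an equivalent unattended (no dialogs, no manual settings) mode to support it:

```python
import csv
import json
import subprocess
from pathlib import Path

def run_batch(app_cmd, data_sets, results_db="results.csv"):
    """Hypothetical hands-off harness: launch the application once per
    data set with no interactive prompts, then append each run's key
    outputs to a simple results store for later comparison."""
    with open(results_db, "a", newline="") as out:
        writer = csv.writer(out)
        for data_set in data_sets:
            # --input/--output flags are assumed, not a real API.
            result_file = Path(data_set).with_suffix(".result.json")
            subprocess.run([*app_cmd, "--input", data_set,
                            "--output", str(result_file)], check=True)
            results = json.loads(result_file.read_text())
            writer.writerow([data_set, results["objective"], results["status"]])

def check_against_expected(results, expected, tolerance=1e-6):
    """Pass/fail comparison of one run's results to its expected results."""
    return all(abs(results[k] - expected[k]) <= tolerance for k in expected)
```

The same loop serves both purposes: for testing, each run is checked against its expected results; for sensitivity analysis, the accumulated rows in the results store are compared across scenarios.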
Jim Piermarini | CEO Profit Point Inc.
February 19th, 2014 3:51 pm Category: Supply Chain Optimization, by: Karen Bird
The economy has been slow to recover after the Great Recession of 2008; however, many economists believe that 2014 and 2015 will be strong years for the US and global economies. How accurate will your forecasting model be in projecting the supply needed from your business? Since forecasting models typically use two to three years of history (actual sales) to predict the future, and we are coming out of a down economy and heading toward positive growth, the standard forecasting models will not predict the future very well. This is where human intelligence and companies with a formal Sales and Operations Planning (S&OP) process have an advantage.
A formal S&OP process gives companies a monthly opportunity for their Sales and Operations teams to come together and review the data, latest intelligence from the field and make the best decisions possible for the company. In addition, a formal S&OP process gives the business a forum each month to challenge the current execution plan and either reconfirm or adjust the plan to meet the strategic goals of the company. A monthly review of key forecasting metrics can provide the Sales team with valuable feedback regarding the forecast.
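As an illustration of the kind of forecasting metrics such a monthly review might track, here is a minimal sketch of two common ones, MAPE (mean absolute percentage error) and bias; this is generic, not a prescription of any particular company's metric set:

```python
def forecast_metrics(actuals, forecasts):
    """Two common monthly-review forecast metrics (illustrative):
    MAPE measures accuracy; bias measures the signed tendency to
    over- or under-forecast. Assumes no zero actuals."""
    errors = [f - a for a, f in zip(actuals, forecasts)]
    mape = sum(abs(e) / a for e, a in zip(errors, actuals)) / len(actuals)
    bias = sum(errors) / sum(actuals)
    return {"mape": mape, "bias": bias}
```

Reviewed month over month, a persistent positive bias, for example, tells the Sales team something a single accuracy number cannot.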
Read the Profit Point S&OP Research Report
A study by the Aberdeen Group shows that more than 60% of Best-in-Class companies view a formal S&OP process as a strategic priority for their organization, and that Best-in-Class companies hold an 18-point advantage in forecast accuracy. According to an AMR Research study from 2008, companies that are best at demand forecasting average:
- 15% less inventory
- 17% higher perfect order fulfillment
- 35% shorter cash-to-cash cycle times
- 1/10 the stock-outs of their peers
How does a formal S&OP process help deliver these benefits? It is a combination of getting the right people together to make the right decisions at the right time. A few years ago, Thomas Wallace, author of Sales and Operations Planning: The How-To Handbook, initiated a project to study the experiences of companies using Executive S&OP very well. The companies in the Best Practices Project cited hard benefits similar to those listed above, but they also said “the soft benefits are equal in importance, or perhaps greater, than the hard benefits”. The soft benefits most often cited were:
- Enhanced Teamwork
- Embedded Communications
- Better Decisions
- Better Financial Plans
- More Focused Accountability
- Greater Control
- A Window Into The Future
A well-run S&OP process will put a spotlight on problem areas or gaps in your business 18 to 24 months into the future. This allows the team to collectively see a potential problem or upside opportunity and produce scenarios that help the company react in a timely and efficient manner.
October 23rd, 2013 9:00 am Category: White Papers, by: Editor
Today, smart manufacturers view the supply chain as a critical element for gaining competitive advantage. Leading companies have long since globalized their manufacturing and distribution operations. They rely heavily on enterprise resource planning (ERP) platforms to track and record virtually every transaction that occurs in the supply chain – from raw materials sourcing to point-of-sale sell-through. Without doubt, the efficiencies that have accrued through ERP are significant. When one accounts for reduced inventory, carrying costs, labor costs, improvements to sales and customer service, and efficiencies in financial management, the tangible cost savings to enterprises have been estimated to range between 10 and 25% or more. Global and multinational concerns have reorganized themselves – through ERP standardization – to create a competitive advantage over regional manufacturers.
While this ERP standardization has created an advantage for larger concerns, leading supply chain managers are discovering new ways to improve beyond ERP’s limitations. In essence, these supply chain ‘disruptors’ are seeking new ways to separate themselves from the pack. The functional areas and tools used by these disruptors vary widely – from long-term global supply chain network design to near-term sales and operations planning (S&OP) and order fulfillment; and from relatively simple solver-based spreadsheets to powerful optimization software deeply integrated into the ERP data warehouse.
At Profit Point, we believe that continued pursuit of supply chain improvement is great. We believe that it is good for business, for consumers and for the efficient use (and reuse) of resources around the globe. In this survey, we set out to explore the methods, tools and processes that supply chain professionals utilize to improve upon their historical gains and to gain competitive advantage in the future. You can request a copy of the report here.
We welcome your feedback. Please feel free to contact us or leave a comment below.
Profit Point, a leading supply chain optimization firm, adds total delivered cost and margin at the customer location-product level of detail to its supply chain network design software.
Profit Point, the leading supply chain optimization consultancy, today announced the release of an update to Profit Network™, a supply chain network design software that is used by supply chain managers all over the world to gain visibility into the trade-offs they will face when designing or optimizing a global supply chain. In addition to several other new enhancements, Profit Network now allows users to analyze and report on the total delivered cost and the resulting gross profit margin for all products delivered to each customer location.
“With the ever-increasing availability of granular data across the supply chain, many of our clients have expressed a strong interest in analyzing and reporting on the total delivered cost of a single product or set of customer products,” said Alan Kosansky, Profit Point’s President. “Previously, it was quite a challenge to understand how costs accumulate over time from raw material procurement through manufacturing, inventory, transportation and customer delivery. Now our customers are able to see the true total cost for each unit of product delivered to each customer. This will be a powerful tool in helping them evaluate their product and customer portfolios.”
In addition to total delivered cost, Profit Network now also gives users more control over source-destination matching, as well as over inventory levels, by establishing minimum and maximum days of inventory demand.
“Profit Network software has been helping Fortune 500 companies around the world build more robust and profitable supply chains for more than 10 years,” said Jim Piermarini, Profit Point’s CEO and CTO. “Over that time, the dramatic increase in data availability across the supply chain has provided us tremendous opportunities to solve unique and critical problems in a variety of supply chain networks.”
In addition to Profit Network, Profit Point’s line of supply chain software also includes Distribution and Vehicle Planning, Sales and Operations Planning (S&OP), Production Planning, Scheduling and Order Fulfillment software.
About Profit Point
Profit Point Inc. was founded in 1995 and is now the leading supply chain software and consulting company. The company’s team of supply chain consultants includes industry leaders in the fields of infrastructure planning, green operations, supply chain planning, distribution, scheduling, transportation, warehouse improvement and business optimization. Profit Point’s combined software and service solutions have been successfully applied across a breadth of industries and by a diverse set of companies, including Dow Chemical, Coca-Cola, Lifetech, Logitech and Toyota.
Building applications, especially custom ones, carries with it the burden of answering the question: Does this do what the customer wants?
With complicated systems with many interacting features and business rules, answering this question can be daunting. In fact, evaluating the answer can be daunting too, from the perspective of the customer. Having the sales guy check some boxes in a questionnaire, or watching a demo, just doesn’t leave you with the assurance that the application will handle all the business requirements, from either perspective, the vendor’s or the customer’s. Everyone I have spoken to who has sold complex software, or who has participated in the purchasing process of software, has expressed the same doubt. They are just not sure that the tool will be a good fit. As we all know, that doubt does not always prevent the purchase of the software, as each organization has its own level of risk tolerance and trust in the vendor’s brand or reputation. Often these other considerations can outweigh the amorphous doubt that some folks might feel. How can one quantify that doubt? Frankly, it’s a quandary.
This thought got us at Profit Point thinking… Wouldn’t it be great if there were another way to evaluate the goodness of fit of an application, or the appropriateness of its parameter settings, to match the business needs of an organization? Wouldn’t it be great if there were a way to eliminate (or greatly reduce) the doubt, and replace it with facts? Either a business rule is obeyed or it is not. Either a decision is made according to the requirements, or it is not. Let’s eliminate the doubt, we thought, and the world would be a better place (well, a little bit, anyway).
There are many processes for testing an application as it is being developed, with writing test scripts and evaluating the results. All of these are based on testing little pieces of code, to ensure that each function or subroutine does what it should do for each case of input data. These processes work fine in our opinion, but only when the subroutine or function can be considered independently from the others. When the system has functions that interact heavily, this approach doesn’t reduce the doubt that the functions may conflict or compete in a way that makes the whole system suffer. How then to evaluate the whole system? Could we treat the entire application as one black box, run the important business cases, and evaluate the results? This is exactly what we have done, with the effect of reducing the doubt about the suitability of the application for a business to zero.
With several of our clients we have worked out what seems to be a great process for testing a complex software solution for suitability to the business requirements. In this case, the detailed function-level testing methods were not open to us, since the solution relied on a Linear Programming technique.
This process is really just an amplification of the standard testing process.
- Define the test case, with the expected results
- Construct the test data
- Build or configure the application
- Run the Test using the Test Data and Evaluate the results – Pass or Fail
This is the standard process for testing small functions, where the expected results are clear and easy to imagine. However, in some systems where there are many interacting rules and conflicting priorities, it may not be simple to know what the expected results should be without the help of the tool’s structure to evaluate them. Such is the case with many of our applications, with layer upon layer of business rules and competing priorities… The very reason for using an LP-based approach makes testing more complex.
In the revised process, we have, for each new business requirement:
- Construct the test case with the test data
- Build or configure the application
- Set the expected results using the results of the first pass build
- Re-factor the code and test until all tests are passing
In my next blog I will show you the simple Excel-based tools we use to facilitate the test evaluation.
In practice, the process works well, new versions of the application go into production without any surprises, and with full confidence of the application management team that all the business requirements are 100% met.
No doubt – no doubt a better process.
By Jim Piermarini
Upgraded Vehicle Route Planner Software Improves Decisions in Distribution Planning, Fleet Sizing, Driver Productivity and Transportation Cost Reduction
Profit Point announces the introduction of Profit Vehicle Planner™ 3.1, a major upgrade to our distribution analysis and design software. Profit Vehicle Planner is designed for strategic logistics and transportation managers who have large fleets with multiple daily delivery stops and changing logistics processes. The update combines new features and technical enhancements to support richer scenario modeling for those fleets.
Designed to be highly accessible and customizable, Profit Vehicle Planner (PVP™) uses standard Microsoft business tools for calculation and display of information, including Excel, Access and MapPoint. The software automatically creates and designs the optimal sales/distribution territories. It does this by dividing customers into territories and days of service, with each territory representing the volume delivered by one delivery vehicle and one driver over the course of the planning horizon. The objective of the proprietary heuristic algorithm used in Profit Vehicle Planner is to assign customers to territories that will minimize the number of trucks required to serve the customer volumes while delivering within the various common and business-specific constraints, including customer frequency of service, hours available per day, volume available per truck, unique equipment requirements and virtually any other custom constraint required.
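PVP's actual algorithm is proprietary, but the flavor of assigning customers to territories while minimizing the truck count can be illustrated with a classic first-fit-decreasing sketch; the field names and the simple two-constraint model (capacity and hours only) are illustrative, not the product's real constraint set:

```python
def assign_territories(customers, truck_capacity, hours_per_day):
    """Greedy sketch: place each customer (largest volume first) into
    the first territory with enough remaining capacity and stop time;
    open a new territory (i.e. truck) only when none fits."""
    territories = []  # each holds used volume, used hours, customer names
    for c in sorted(customers, key=lambda c: c["volume"], reverse=True):
        for t in territories:
            if (t["volume"] + c["volume"] <= truck_capacity
                    and t["hours"] + c["stop_hours"] <= hours_per_day):
                t["volume"] += c["volume"]
                t["hours"] += c["stop_hours"]
                t["customers"].append(c["name"])
                break
        else:  # no existing territory fits: open a new one
            territories.append({"volume": c["volume"],
                                "hours": c["stop_hours"],
                                "customers": [c["name"]]})
    return territories
```

A production heuristic would also weigh geography, service frequency, day-of-week patterns and equipment requirements; the point here is only the territory-as-bin structure of the problem.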
“With 12 years in the field, Profit Vehicle Planner has been put to the test against some of the world’s largest supply chain distribution problems,” noted Jim Piermarini, Profit Point’s Chief Technology Officer. “Transportation best practices have expanded over time, so decision makers are looking for more comprehensive strategic logistics and transportation modeling solutions.”
With the new release, PVP’s expanded features include extensive customization of the software, to tailor the territory planning solution so it is cost- and time-effective for your unique and specific distribution requirements, as well as the ability to use imported address data to automatically geocode customers for whom lat/long data is missing.
For companies that perceive distribution as mission critical, there is the option to integrate PVP deeply into their supply chain systems to import and export data into their ERP system. Companies that seek the most cost-effective solution can import virtually any relevant data from an Excel template that includes the following:
- Customer data such as address, location, frequency of service, volume per stop, time required per stop, other data as needed
- Truck data such as size, days of the week that it is available, order in which it is to be scheduled, hours available each day, special equipment, other data as needed
- Warehouse and district data such as location and characteristics of associated trucks and drivers
- Time related data such as start date of planning horizon and number of weeks in the planning horizon.
- Product specific data such as unit of measure of the product being delivered
- Any other data required to accurately model unique constraints
Once optimized, users have the ability to review and assess the characteristics of the territories that are created using tables and maps to provide an enhanced visual experience. And to ensure the optimal distribution plan, users can manually move customers from one territory to another or from one service day pattern to another (e.g. from Monday-Thursday to Tuesday-Friday), if desired.
I was sitting on the plane the other day and chatting with the guy in the next seat when I asked him why he happened to be traveling. He was returning home from an SAP ERP software implementation training course. When I followed up and asked him how it was going, I got the predictable eye roll and sigh before he said, “It was going OK.” There are two things that were sad here. First, the implementation was only “going OK” and second, that I had heard this same type of response from so many different people implementing big ERP that I was expecting his response before he made it.
So, why is it so predictable that the implementations of big ERP systems struggle? I propose that one of the main reasons is that the implementation doesn’t focus enough on the operational decision-making that drives the company’s performance.
A high-level project history that I’ve heard from too many clients looks something like this:
- Blueprinting with wide participation from across the enterprise
- Implementation delays
- Data integrity is found to be an issue – more resources are focused here
- Transaction flow is found to be more complex than originally thought – more resources are focused here
- Project management notices the burn rate from both internal and external resources assigned to the project
- De-scoping of the project from the original blueprinting
- Reports are delayed
- Operational functionality is delayed
- Testing of transactional flows
- Go-live involves operational people at all levels frustrated because they can’t do their jobs
Unfortunately, the de-scoping phase seems to hit some of the key decision-makers in the supply chain, like plant schedulers, supply and demand planners, warehouse managers, dispatchers, buyers, etc., particularly hard, and it manifests in the chaos after go-live. These are the people who make the daily bread-and-butter decisions that drive the company’s performance, but they don’t have the information they need to make those decisions, because of the de-scoping and the focus on transaction flow. (It’s ironic that the original sale of these big ERP systems is made at the executive level as a way to better monitor the enterprise’s performance and produce information that will enable better decision-making.)
What then, would be a better way to implement an ERP system? From my perspective, it’s all about decision-making. Thus, the entire implementation plan should be developed around the decisions that need to be made at each level in the enterprise. From blueprinting through the go-live testing plan, the question should be, “Does the user have the information in the form required and the tools (both from the new ERP system and external tools that will still work properly when the new ERP system goes live) to make the necessary decision in a timely manner?” Focusing on this question will drive user access, data accuracy, transaction flow, and all other elements of the configuration and implementation. Why? Because the ERP system is supposed to be an enabler and the only reasons to enter data into the system or to get data out is either to make a decision or as the result of a decision.
Perhaps with that sort of a focus there will be a time when I’ll hear an implementation team member rave about how much easier it will be for decision-makers throughout the enterprise once the new system goes live. I can only hope.
June 3rd, 2012 3:15 pm Category: Optimization, by: Alan Kosansky
Supply chain managers are tasked with making the critical decisions to improve supply chain operations by taking costs out of the system while improving customer service and profitability. The best managers rely not only on their experience but also on data-based decision-making. Making the best decision to minimize supply chain costs and maximize profit requires accuracy in your data.
However, don’t confuse accuracy with precision. Too many decision makers throw out the baby with the bath water when they shun data-based decisions because the data does not precisely reflect the detail of their operations. Wise supply chain managers and analysts understand that for many decisions, aggregated and/or averaged data can accurately reflect the cost/benefit tradeoff of critical decisions and point in the direction of near-optimal choices. For example, modeling manufacturing capability and capacity at the product-family level is often accurate enough to make the right supply and demand balancing decisions. Another example: even when optimizing detailed scheduling operations within a single plant, it is important to find the right level of modeling detail, balancing an accurate capture of the realities of the manufacturing operations against the ability to evaluate a large number of production sequencing options in the search for the optimal schedule.
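The product-family aggregation mentioned above is mechanically simple; the art is in choosing the level. A minimal sketch of rolling SKU-level demand up to family level (names and data shapes are illustrative):

```python
from collections import defaultdict

def aggregate_to_family(sku_demand, sku_to_family):
    """Roll SKU-level demand up to product-family level, the kind of
    aggregation that is often accurate enough for supply/demand
    balancing decisions even though it drops SKU-level precision."""
    family_demand = defaultdict(float)
    for sku, qty in sku_demand.items():
        family_demand[sku_to_family[sku]] += qty
    return dict(family_demand)
```

The aggregated model trades SKU precision for a tractable decision model; whether that trade is acceptable depends on the decision being made.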
Finding the right level of data detail to inform accurate decision tradeoffs while searching for the optimal decisions is an art as much as a science. It requires experience and expertise in both supply chain operations and optimization modeling.
This is my passion, so let me know what experiences you’ve had, and how you have found that balance to achieve optimal performance in your supply chain operations.
A husband, two kids and a golden retriever later… I am back to implementations in Supply Chain planning and scheduling. To my surprise, the same challenges I encountered 10 years ago remain in force today: data, defining business processes, data, implementing software, data, training people, data, supporting the change to a new system and data.
Data collection remains one of the cornerstones of success for a supply chain planning or scheduling implementation. Though mountains of data may exist in a company’s business systems, harnessing them to feed a planning or scheduling model can be extremely complex and time consuming. Interestingly, the data collection process often drives an elucidation of manufacturing practices and process flows, and clients learn what they do and don’t know about their business. This may seem backwards and risky in terms of getting things out of order. In a perfect world, a thorough understanding of manufacturing and business processes would pave the way toward building an Advanced Planning and/or Scheduling System. In reality, they often happen in tandem and are evolutionary in nature.
Deciding early in an implementation how data will be housed, derived and propagated will pay off in the long run. Establishing a systematic, automated way to update and propagate data is just as important as the decision of which software system to use. It is worth the investment to take the time to put this automation in place, so that as more and more products are added to the system, the data remains manageable and scalable.
From PC to Cloud, emails to tweets, networking happy hours to LinkedIn, it is nice to know some things stay the same.
Many commercial enterprises and public agencies operate telephone call centers to provide effective and timely service for customers. Employing nearly 5% of the national workforce, call centers arguably define the “new factory floor” in an increasingly service-based economy. They are fascinating socio-technological systems which are exceptionally well-suited for the application of mathematical modeling and optimization methods.
A typical call center utilizes a computerized call handling system which can archive detailed historical information on call volume, call handling time, and other relevant attributes. This data can be analyzed and aggregated (with appropriate accounting for probabilistic variation) to generate a profile of staffing requirements across future time intervals. In theory, service agents can be optimally scheduled to closely accommodate this profile, resulting in high service levels, low customer abandonment, and efficient agent utilization. In actual practice, however, such performance represents the exception rather than the rule. Most call centers, even well-run ones, do not simultaneously achieve high levels of service quality and operational efficiency [1].
One important reason for the performance gap between theory and practice is lack of sophistication and flexibility in the standard software systems available for call center management. For example, standard systems invariably base interval staffing requirements on the classic “Erlang C” model, which is known to produce distorted results because it does not consider pertinent factors such as customer impatience [2]. Additionally, if the software has any capability for schedule “optimization,” the underlying algorithm is usually a greedy heuristic which sequentially adds agent shifts without due consideration of the complex interactions between them. Beyond these technical limitations, standard systems offer minimal capability to experiment with different shift types and customize the solution strategy.
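To make the Erlang C limitation concrete, here is a minimal sketch of how a standard system converts an interval’s call load into a staffing requirement. The function names and example parameters are illustrative assumptions, not any vendor’s actual API; note that the model assumes infinitely patient callers, which is exactly the distortion described above.

```python
from math import exp, factorial

def erlang_c(agents: int, load: float) -> float:
    """Probability an arriving call must wait, per the classic Erlang C
    model. `load` is the offered load in erlangs (arrival rate times
    average handle time). The model assumes callers never hang up while
    waiting -- the simplification that distorts real-world requirements."""
    if agents <= load:
        return 1.0  # understaffed and unstable: every caller waits
    queued = (load ** agents / factorial(agents)) * agents / (agents - load)
    served = sum(load ** k / factorial(k) for k in range(agents))
    return queued / (served + queued)

def agents_required(calls_per_min: float, aht_min: float,
                    target_sl: float, answer_time_min: float) -> int:
    """Smallest agent count meeting a service-level target such as
    '80% of calls answered within 20 seconds'."""
    load = calls_per_min * aht_min
    agents = int(load) + 1
    while True:
        wait_prob = erlang_c(agents, load)
        # fraction of calls answered within the target answer time
        sl = 1 - wait_prob * exp(-(agents - load) * answer_time_min / aht_min)
        if sl >= target_sl:
            return agents
        agents += 1
```

For example, an interval with 4 calls per minute and a 3-minute average handle time carries a 12-erlang load, and the loop walks upward from 13 agents until the service-level target is met.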
Profit Point can provide the expertise and custom tools necessary to properly model your unique call center environment and achieve optimal performance. By applying recently refined mathematics, interval staffing requirements can be accurately determined and optimal shift distributions can be precisely derived [3]. Efficiency improvements exceeding 10% are typical, coincident with improvement in service level performance. Many additional operational factors, such as on-line chat activity and agent specialization, can also be addressed. There is no better time than now for you to reap the rewards of optimizing your organization’s call center operations.
[1] Noah Gans, Ger Koole, and Avishai Mandelbaum, “Telephone Call Centers: Tutorial, Review, and Research Prospects,” Manufacturing & Service Operations Management 5, 79–141 (2003).
[2] Lawrence D. Brown, et al., “Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective,” Journal of the American Statistical Association 100, 36–50 (2005).
[3] Dennis C. Dietz, “Practical Scheduling for Call Center Operations,” Omega 39, 550–557 (2011).
You might think the title should be ‘Heuristics or Optimization’, implying a choice. But often the two approaches work well together, with heuristics speeding an optimization process. The Wikipedia definition of heuristic calls it an experience-based technique for problem solving, learning, and discovery. Wikipedia also mentions using heuristics to find a good enough solution and describes them as ‘strategies using readily accessible, though loosely applicable, information to control problem solving.’
Those descriptions do not emphasize another aspect of heuristics – there is generally an underlying concept that informs the heuristic. There is a good reason why we think it will work well in the majority of cases. For example, an angle sweep heuristic is often used when designing routes for pickup and delivery from a central hub. Those routes are candidates for selection in a formal optimization. The designed routes look a lot like the petals of a daisy.
The heuristic starts out by heading north and picking locations close to that direction on the way out and back. How far out the route goes is a property of vehicle capacity or time limitations. The next route to be generated starts out slightly east of north and follows the same limitations and usually overlaps many of the locations on the first route. Once the entire compass has been swept, the best set of routes to cover all locations is selected by an optimization. In the example the heuristic becomes a front end for the optimization.
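The sweep described above can be sketched in a few lines, under the simplifying assumption that each stop carries a known demand and routes are cut purely on vehicle capacity. The function and data layout here are hypothetical, chosen to illustrate the idea rather than any production routing system:

```python
from math import atan2, pi

def sweep_routes(hub, stops, capacity):
    """Angle-sweep route builder: sort stops by compass angle from the
    hub (starting due north, sweeping clockwise), then cut the sorted
    list into routes whenever the vehicle capacity would be exceeded.
    Each stop is (x, y, demand). The resulting 'petals' are candidate
    routes for a downstream set-covering optimization; a production
    version would also rotate the starting angle to generate the
    overlapping candidates described in the text."""
    def angle(stop):
        (x, y, _) = stop
        # atan2(dx, dy) measures the angle from north, clockwise
        return atan2(x - hub[0], y - hub[1]) % (2 * pi)

    routes, current, load = [], [], 0.0
    for stop in sorted(stops, key=angle):
        demand = stop[2]
        if current and load + demand > capacity:
            routes.append(current)   # vehicle full: close this petal
            current, load = [], 0.0
        current.append(stop)
        load += demand
    if current:
        routes.append(current)
    return routes
```

Running this over a ring of stops around the hub produces the daisy-petal pattern: stops near due north land in the first route, and each subsequent route picks up the next wedge of the compass.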
Another example comes from a driver scheduling problem. Suppose a set of drivers must pick up some commodity from a set of locations for processing at a central plant. Each trip in this example is an out-and-back because of the nature of the commodity, i.e. only one location can be visited. Drivers pick up multiple loads in a day, and each location requires multiple visits. The pickup times are fixed because of other problem features. One approach is to simply allow all combinations of driver-load-location pairings and let an optimizer grind away.
But there are other desirable features of the solution: equalizing the number of loads among drivers, and keeping driver dead time between loads to a minimum. Specifying all the driver loads by some simple heuristic, e.g. send a driver out for the next load as soon as possible, usually ends up with some loads that cannot be covered. A totally greedy approach fails.
An approach that seems to work well in this case is to consider some drivers for early loads and some for late loads. Work from the front of the day, assigning each early driver the first two loads it can feasibly complete. Then work from the end of the day, assigning each late driver the last two loads it can feasibly complete.
The loads in the middle and the drivers that are not considered early or late are handled by the optimization. Notice that the heuristic does well on the driver gaps and guarantees that most drivers automatically get two loads, which is a good base in this application. It also serves to speed the optimization by reducing the pairings to be searched while preserving enough flexibility to get a solution.
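The front-and-back pre-assignment can be sketched as follows. This is an illustrative simplification: the load and driver names are made up, and feasibility is reduced to a single turnaround-time rule rather than the full set of real-world constraints.

```python
def chain_ok(first, second, turnaround):
    # Feasible if the second pickup comes after the first out-and-back
    # trip finishes, plus a turnaround buffer at the plant.
    return second[1] >= first[1] + first[2] + turnaround

def presolve(loads, early_drivers, late_drivers, turnaround):
    """Heuristic pre-assignment for the out-and-back driver problem.
    Each load is (load_id, pickup_time, trip_hours) with pickups fixed.
    Early drivers lock in the first two loads they can feasibly chain;
    late drivers lock in the last two. The middle is left for the
    optimizer, which preserves flexibility while shrinking the search."""
    assigned, pool = {}, sorted(loads, key=lambda l: l[1])
    for d in early_drivers:                      # front of the day
        if not pool:
            break
        first = pool[0]
        mate = next((l for l in pool[1:] if chain_ok(first, l, turnaround)), None)
        for l in (first, mate):
            if l is not None:
                assigned[l[0]] = d
                pool.remove(l)
    for d in late_drivers:                       # back of the day
        if not pool:
            break
        last = pool[-1]
        mate = next((l for l in reversed(pool[:-1])
                     if chain_ok(l, last, turnaround)), None)
        for l in (mate, last):
            if l is not None:
                assigned[l[0]] = d
                pool.remove(l)
    return assigned, pool    # pool = the middle loads left to the optimizer
```

Note how the chaining rule directly serves the two stated goals: each pre-assigned driver gets a back-to-back pair (minimal dead time), and most drivers automatically start with two loads (equalization).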
Furthermore, the heuristic is flexible in that one can choose how many drivers to consider early or late and how many of their loads to nail down heuristically. Most importantly, the heuristic gets better solutions than the optimization finds in any reasonable time. So while the true optimal solution is out there somewhere, the optimization will not find it in the time frame the scheduler has to work within.
My experience is that flexibility is one of the key properties in any good heuristic. Adaptability to new situations is also a feature of good heuristics. One final example illustrating adaptability is based on an algorithm called ‘Toyoda’s Algorithm’. I have applied this particular idea to a number of situations.
In this example it is applied to sequencing the unloading of containers at a port. Each container holds a selection of parts which have to be processed by various work centers prior to shipment out of the port. A manifest shows the selection of the parts and it is known how much work is associated with a given part at a work center. Not every work center can process every part type. The objective is to get all the work centers to end their day at the same time and to keep all the work centers busy throughout the day.
The approach is easy to understand in two dimensions. Picture a plot in which the X and Y axes represent the available work time of two work centers, e.g. eight hours. Arrows represent the amount of work delivered to a work center by a given container, and a dashed diagonal is the ‘ideal path’: an equal amount of work at each work center throughout the day.
The heuristic simply needs to loop through all available containers at each iteration and always try to get back onto the ideal path. The deviation penalty is totally flexible: small deviations can go unpenalized, while sizable ones are penalized by some function that weighs them heavily. Other problem features can be captured, e.g., buffer space at a work center, and incorporated in the penalty function. This is not a formal optimization, but it is speedy and good enough for the real-world application.
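The greedy ideal-path loop for the two-work-center case can be sketched like this. The penalty function, tolerance, and container data are illustrative assumptions in the spirit of the Toyoda-style idea, not a transcription of the actual port system:

```python
def sequence_containers(containers, tolerance=0.5):
    """Greedy 'ideal path' sequencing for two work centers. At each
    step pick the unassigned container that keeps the cumulative work
    vector closest to the equal-work diagonal. Deviations inside
    `tolerance` hours are free; larger ones are penalized
    quadratically, so sizable imbalances are weighed heavily.
    Each container is (name, hours_for_center_1, hours_for_center_2)."""
    def penalty(deviation):
        excess = max(0.0, abs(deviation) - tolerance)
        return excess ** 2

    order, x, y = [], 0.0, 0.0   # cumulative work at each center
    pool = list(containers)
    while pool:
        # choose the container that best returns us to the diagonal
        best = min(pool, key=lambda c: penalty((x + c[1]) - (y + c[2])))
        pool.remove(best)
        order.append(best[0])
        x, y = x + best[1], y + best[2]
    return order
```

Extra problem features, such as buffer space at a work center, would simply become additional terms inside `penalty`, which is what makes the scheme so adaptable.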
The watchwords seem to be these. Look for the important features of a good solution. See if a simple rule or concept will drive the solution toward these good features. This is especially true when there is little or no economic benefit to the optimal solution. Try to develop a heuristic that is flexible in adapting to normal variance in instances of the data and can be tuned to choose between competing objectives.
Okay. I am an anomaly. I live in Utah and drink coffee. The majority of the people that live in Utah do not drink coffee, and that is OK, but I do. So, is there a shortage of coffee Cafés in Utah? No. There are many cafés and several that serve outstanding coffee.
We have an exceptional espresso café downtown, located on a side street off of Main. They roast their own coffee and use triple-certified, organically grown beans. It is the type of place the local coffee lovers go to hang out and have good conversation over a morning or afternoon latté or espresso. Possibly the best coffee I have ever had. What is interesting to me is that a large percentage of the residents in my area do not even know this café exists.
So what is my point? When it comes to outstanding services or products, most people are unaware of what is available, primarily because it does not fit into their lifestyle or what they’re accustomed to. I believe you can carry this analogy over to the business world. Manufacturing, logistics, and transportation people become accustomed to doing things a certain way. Over time they may become blind to ideas for improving the supply chain. They are unaware of an exceptional Supply Chain Café, even when it is located just seconds from a combination of keystrokes and Google.
It is not their fault they are missing the best latté available. We, as consultants, who prepare those delightful solutions from the Supply Chain Café menu, have probably not done the finest job of promoting our services and software to your neighborhood, but that is changing.
There are many empty cups in the supply chain, waiting to be filled with successful solutions. Supply chain and logistics managers tackle difficult supply chain problems every day, but they are so focused on getting their job done and making it through the day that they have little time to think of alternatives that might improve their processes and well-being. I am not sure how we can help everyone, so let’s focus on the window shoppers. These are the ones who are aware of the café but have never been inside. Maybe you are one?
If you are reading this blog, then you must be a window shopper. I am guessing you are looking for a better espresso. OK, you found “Profit Point”, although you may not know what we do. Guess what? Help is on its way. We can share our menu with you. We just published four videos that will introduce you to the Profit Point team and what we do. Embrace three minutes out of your day, select one of the videos, and watch it. Learn how we help companies improve their supply chain, by serving the best coffee with a smile.
Yes, you can improve your supply chain with our help. The supply chain solution that you are looking for, is about to be yours. And if you place an order, we can fill your cup to the top, with the “good triple certified” stuff. If you cannot seem to find that special item on our Supply Chain menu, then no fear, we love special orders.
So, is there a shortage of Supply Chain Cafés? No. You just need to find the one that serves the optimal latté. I know it’s out there somewhere.
Frequently, you might hear somebody say that the capacity of a production facility is some known and fixed value. When this happens, be very wary of what they might be trying to sell you, because as with so many other things, when measuring capacity “the devil is in the details.”
The “capacity” of a factory sounds like a pretty simple notion and something that should be easy to calculate. But this is only true for production systems that are fairly straightforward, consisting of totally independent machines and processes. If, however, the organization consists of operations that are interconnected and interdependent, then capacity can be a fairly difficult thing to measure.
In the vast majority of production systems, there is a very real link between capacity and three critical factors:
- the mix of products, and how much time is required for setup/cleanup between consecutive production runs,
- the ability to create sophisticated and optimal schedules for the production resources,
- how much physical space exists in the factory where products that are only partially complete can be kept or stored; what’s known as Work in Process (or WIP) Inventory.
To see these 3 relationships at work, consider the simple case where a certain department produces two products, A and B, which both use the same piece of equipment, and there is only one such machine available. The production rates of the machine are shown in the table below, and a 4-hour setup is required whenever the machine switches over from producing one product to the other. Now consider the two scenarios below. In Scenario A, the capacity is 170 units per day, while in Scenario B it is 145.
| Product (Production Rate, Units/hr) | Scenario A: Daily Sched Qty. | Scenario A: Hrs Required | Scenario B: Daily Sched Qty. | Scenario B: Hrs Required |
| --- | --- | --- | --- | --- |
| Setup hrs | | 4 | | 4 |
This example clearly demonstrates the first item above: that the “capacity” of the department depends to a large extent on the mix of the two products being produced.
Now suppose that management wants to produce 110 of A and 80 of B per day. These new requirements seem to clearly exceed the capacity of the department given EITHER Scenario A or B. But maybe the necessary capacity can still be found.
If the new requirement is to produce at this increased rate within each and every single day, with a changeover occurring daily, then there is definitely not enough capacity on the machine. However, if the increased production is required on average over a sustained length of time, then we can gain extra production by modifying the production schedule so as to eliminate or minimize the occurrence of the 4-hour setup. If the department schedules production in long blocks spanning several days, where first one product and then the other is produced, then the department DOES have the capacity. In the table below for example, 440 units of A is first produced followed by 320 of B, with a 4-hour setup between them. This represents 4 days’ worth of the increased management requirement (110 of A and 80 of B each multiplied by 4).
| Product (Production Rate, Units/hr) | Sched Qty. | Hrs Required |
| --- | --- | --- |
| Setup hrs | | 4 |
With this schedule, the total required hours of 94 is less than the 96 hours available in 4 days, and so now there IS enough capacity! By scheduling wisely (i.e. “working smarter”), the department’s average daily capacity has actually risen to (760 / 4) = 190 units per day, a good deal higher than either 170 or 145 in the two previous scenarios.
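A few lines of arithmetic show how block scheduling “buys” capacity. The production rates below are hypothetical, chosen only to illustrate the mechanics; the principle, fewer changeovers leaving more run time, holds for any rates.

```python
def campaign_hours(batches, setup_hours):
    """Total machine hours for a campaign-style schedule: one long run
    per product, with one setup between consecutive runs.
    `batches` is a list of (quantity, rate_in_units_per_hour) pairs."""
    run_time = sum(qty / rate for qty, rate in batches)
    changeovers = setup_hours * (len(batches) - 1)
    return run_time + changeovers

# Hypothetical illustrative rates: product A at 20 units/hr,
# product B at 10 units/hr, with a 4-hour changeover.
one_day = campaign_hours([(110, 20), (80, 10)], 4)          # changeover every day
four_day_block = campaign_hours([(440, 20), (320, 10)], 4)  # one changeover in 4 days
saved = 4 * one_day - four_day_block                        # hours freed by blocking
```

With these rates, running the same four days of demand as one block instead of four daily campaigns eliminates three of the four setups and frees 12 machine hours, which is exactly the mechanism behind the jump from 170 (or 145) to 190 units per day in the article’s scenarios.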
Thus, the department capacity clearly depends on the ability to implement “smart” production schedules that make the best use of the available resources, i.e. the second issue mentioned earlier.
Finally, this higher-capacity schedule is an example of a “good news / bad news” situation. Although the plant is able to produce more (and presumably company revenues will go up), the downside of this higher-capacity schedule is that the department will be maintaining a larger amount of inventory in the supply chain on average. And if there is more “stuff” in the pipeline, then there has to be the physical space to put it. This is an important consideration if inventory has to be stored in or on particular types of storage facilities such as refrigerators or special racks. Therefore, although it might be possible to “buy” extra production capacity with a better equipment schedule, it is important to realize that different schedules put more or less demand on the spatial capacity of the actual storage facilities.
Therefore, this example illustrates the third item: that increasing output can put stress on the plant’s storage facilities.
This last scenario also shows that maximum capacity is not necessarily the same as minimum cost. Notice that in this scenario there is only one 4-hour setup, so any costs from the setup activity are averaged over a larger number of produced items. But offsetting this savings in setup cost is the fact that with the increased WIP, inventory costs will have gone up.
The fact that capacity can be such a difficult thing to measure does not mean that it is not a valuable parameter to describe a given system. What it does mean is that when any capacity value is given for a particular supply chain, it is absolutely critical to understand the assumptions that underlie it. The fact that capacity is such a highly malleable concept simply reinforces the fact that managing a company’s supply chain is always a delicate balancing act between competing costs and non-monetary factors.
October 20th, 2010 11:37 am Category: Optimization, by: Ted Schaefer
A good friend of mine, who works for a large employer in her city, recently told me that her department’s budget, along with every other department budget that was classified as “Administration” in the ubiquitous SAP system, had to be cut by a large and specific percentage.
It didn’t matter that the “Administration” label was not uniformly applied across her organization and that some departments that were so labeled performed functions very similar to other departments that were not stuck with that label. It didn’t matter what services each department provided, or how efficiently they provided them, they just had to cut the budget and they had to hit the number. Incredibly, it didn’t matter that her group was one of the few “Administration” groups that actually generated revenue; in her case three times their total annual budget spend.
Unfortunately, hers is not the first story like this that I have heard.
There is no doubt that many corporations, organizations, governments and households have been hit hard by the recent economic downturn. Each of these groups has been forced to make some difficult decisions. So what do I have against across-the-board (ATB) budget cuts? Basically, I think it has to be the worst way to reduce costs in an organization, and here’s why.
Let’s take a look at something that is important and familiar to all of us; the family budget. Sadly, many families have been forced to drastically reduce spending as a result of a lay-off or furlough over the past two years. In those cases, an ATB cost-cutting strategy just doesn’t work. Try telling the bank that you’ve had to cut your monthly mortgage payments by 15%. I doubt that they will be impressed when you tell them that you’ve had to do the same with your property taxes, insurance premiums, electricity and water payments, as well. You might get lucky and be able to renegotiate your mortgage and you might get lucky if your state provides utilities assistance for people who have recently lost their jobs, but most tax assessors and insurance companies will not be particularly sympathetic.
But my guess is that you’d probably take a very different type of approach to cost-cutting in your household. You’d probably take a hard look at all of the money that you’re spending over a month or a quarter. You might first examine your spending to see if you could conserve on the amount you consume or if there were ways to get the same goods and services in a cheaper manner. If that didn’t reduce your spending enough, you’d probably divide the remaining spending into different categories. There are many different ways to categorize your expenses, but they’ll probably come down to something like, 1) Essential; 2) Non-essential, but painful to cut; 3) Non-essential and easier to cut. If you’re lucky, you will be able to cut enough of your spending by eliminating or reducing your expenses in the non-essential categories. If not, you might be forced to re-examine what really is “Essential.” For example, your mortgage payment is essential, as long as you plan to stay in your house, but if the situation calls for it, you can reduce your costs by moving into a smaller home or apartment. Not a fun choice, but it could be the right thing to do in certain situations.
Looking back on the family budget example, what did we do? First, we looked for opportunities to conserve and less expensive ways to purchase the same goods and services. Next, we prioritized our spending so we could make good decisions. To find less expensive ways to purchase the same goods and services and to prioritize the spending means that we needed to 1) understand what we were getting for the money we were spending and 2) understand what would happen when we stopped spending that money. After prioritizing our spending we made trade-offs by deciding what we could live without. Some of the trade-offs may have been no-brainers, but some may have been very difficult.
I would argue that this is the same process that should occur in any organization that needs to reduce its spending. It amazes me how a manager can walk into a large organization and mandate a large cut in the budget for each and every department (as they are defined in the accounting system, but that’s a different blog) without understanding where, how and why the money is spent. It would be laughable if the results weren’t so sad.
ATB budget cuts penalize your best managers. These are the managers that run a lean operation, who have taken the initiative to drive out all of the waste and improve productivity. They are already doing the job you’ve asked them to do with the fewest resources possible, but they are being treated in the same manner as the manager who is either not as effective, or who has become jaded by past ATB cuts, so that he/she keeps some “rainy day” resources in the budget for just such “emergencies.” (… and people wonder why their best managers seem to leave after these types of budget cuts, even when their positions are not eliminated.)
Let’s not forget the knock-on effect of penalizing your best managers. The best managers often assemble the best teams to do the work. If one or more members of a lean, highly productive, well-functioning team is forced out in an ATB cut, the rest of the team is forced to pick up the additional work of the departing team members. This extra work, on top of an already full workload, either forces the quality of the work to suffer, or reduces the total output of the team; that is, if the rest of the team elects to stay in an organization that doesn’t value efficiency.
ATB budget cuts often fail to achieve their savings targets or result in so much “slash and burn” damage to the organization that “add-backs” must occur after the blood-letting so the organization can survive. It continues to amaze me that these managers have the time to perform an initial ATB cut, followed by another one or by an “add back” program; but don’t have the time to do it right the first time.
ATB cuts suggest that the value of the work performed under each of the budgets is equal to the value of the work performed in all other budgets. I have seen a lot of different organizations over my career and I don’t think I’ve ever observed this to be the case. Take my friend’s case: her group makes money, while others spend it. Is a cost cut that forces a reduction in revenue equal to a cost cut that has no impact on revenue? Probably not.
So, what’s the answer? Clearly, many organizations are forced to radically reduce costs just to survive. I think it goes back to our home budget example: 1) know what you’re spending; 2) understand what you get for it, 3) find ways to get the same or similar things for less money, and 4) make the hard choices about what you can do without.
In the end, my experience has been that managers who drive ATB cost reductions are either unable or unwilling to understand their business processes and organizations sufficiently; lack the imagination or skills to reengineer their business processes; or lack the courage to make the hard choices about what their organization will and will not do in the future.
To all those top level managers who have instituted ATB cuts, or for those who are planning to do so: Don’t do it! Think before you act, and save your company the added burden of bad management.
Did you miss Jim Piermarini talk at CSCMP about Logitech’s supply chain distribution methodology? For those that are interested, we are posting the slide deck here for your review. To download the complete presentation click the image below:
July 6th, 2010 12:10 pm Category: Publications, by: Editor
“Swapping commodities with other manufacturers instead of shipping internationally can greatly reduce transportation costs and boost profits. Finding the right swap partner will help you avoid the risks that are inherent in these arrangements.”
“If you’re under a mandate to make substantial cuts in supply chain costs, you probably have already “picked the low-hanging fruit”—that is, you’ve made the obvious, easy-to-achieve improvements. You probably have optimized delivery to your customers by routing orders through the most efficient network of warehouses. It’s likely that you also are sourcing from highly efficient manufacturing plants around the world. And perhaps you have partnered with ocean carriers and leveraged your shipping volumes to negotiate the best rates in the industry….”
This month, Manufacturing Today magazine published an article entitled Supply Chain: Time to Experiment, which was co-authored by Dr. Alan Kosansky and Dr. Joe Litko of Profit Point. The article discusses how executive-level business decisions can include a broad range of interconnected variables leading to an extensive array of options, and discusses how business leaders can gain exceptional insights into future scenarios.
Read the complete article here.
This month, Chief Executive magazine published an article entitled Increasing Value Across the Supply Chain, which was co-authored by Dr. Alan Kosansky and Jim Piermarini of Profit Point. The article discusses how optimization technologies can be used to maximize supply chain value by implementing more sophisticated approaches to standard practices such as inventory and asset management.
Read the complete article here.
This month’s issue of Supply & Demand Chain Executive features a supply chain “best practices” article entitled The Changing Landscape of Optimization Technology. The article, which was co-authored by Profit Point’s Director of Sales, Rich Guy, and the company’s President, Dr. Alan Kosansky, reviews the optimization tools that are empowering today’s leading supply chain decision makers.