When we work with clients to implement decision support tools for supply chain scheduling and planning, they often have some constraint that is essential to model and may be unique to their environment. Some recent examples we have encountered include the following:
- When producing a batch in a make-to-order environment, the plant always produces some extra amount, called the purge quantity, which is stuck in the piping from the reactor to the packout line. After purging the line, this material is recycled into the next batch.
- A warehouse can have capacity constraints on both:
  - Throughput, based on the number and type of doors, and
  - Storage, based on material characteristics such as hazardous material classifications.
- When working with a dairy industry client, the bill of materials changes throughout the year based on the component ratios of the milk produced by the cows, which drives the product split.
These types of situations are a regular occurrence and require modeling tools with the flexibility to deal with them. We will implement decision support tools either with a development suite such as AspenTech’s Supply Chain Management™ or by developing an application that connects to an optimization engine such as FICO’s Xpress™. These tools provide a base starting point but then allow for adding the modeling constraints required to reach a solution that the client can actually implement.
In addition, having this flexibility allows the work processes, and the tools that enable them, to evolve over time as the business needs change.
This flexibility, though, has to be balanced with some level of standardization. Therefore we will often build a new application by using a previous application as a starting point. For a production scheduling tool, many things are common between different implementations, including how to represent the schedule via an interactive Gantt chart, common basic reports, standard interfaces to external systems, etc. In a production planning tool, there are typically plants, warehouses and transshipment points to be modelled via a network representation; costs and capacities at each of these nodes in the network; and an objective function that is either to minimize cost or maximize profit. All of these would be common elements between different planning model implementations.
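Those common planning elements (supply nodes, capacities, per-unit costs, and a minimize-cost objective) can be illustrated with a deliberately tiny transportation model. This is only a sketch: the plants, customers, costs, and the brute-force search are all invented for illustration, and a real implementation would use a proper LP/MIP solver rather than enumeration.

```python
from itertools import product

# Hypothetical network: two plants shipping to two customers.
capacity = {"P1": 25, "P2": 40}            # plant capacities (units)
demand = {"C1": 30, "C2": 20}              # customer demands (units)
cost = {("P1", "C1"): 4, ("P1", "C2"): 6,  # per-unit shipping cost
        ("P2", "C1"): 5, ("P2", "C2"): 3}

def solve_by_enumeration():
    """Brute-force the minimum-cost flows (workable only at toy scale)."""
    best_cost, best_plan = float("inf"), None
    # Choose how much of each customer's demand plant P1 serves;
    # P2 must ship the remainder.
    for x11, x12 in product(range(demand["C1"] + 1), range(demand["C2"] + 1)):
        x21, x22 = demand["C1"] - x11, demand["C2"] - x12
        if x11 + x12 > capacity["P1"] or x21 + x22 > capacity["P2"]:
            continue  # violates a plant capacity constraint
        total = (x11 * cost[("P1", "C1")] + x12 * cost[("P1", "C2")]
                 + x21 * cost[("P2", "C1")] + x22 * cost[("P2", "C2")])
        if total < best_cost:
            best_cost = total
            best_plan = {("P1", "C1"): x11, ("P1", "C2"): x12,
                         ("P2", "C1"): x21, ("P2", "C2"): x22}
    return best_cost, best_plan

if __name__ == "__main__":
    total_cost, plan = solve_by_enumeration()
    print(total_cost, plan)
```

Even this toy shows the shape shared across planning implementations: nodes with capacities, arcs with costs, and a cost-minimizing objective. The client-specific constraints discussed above would be added as extra feasibility checks or solver constraints.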
- Flexibility allows for:
  - Modelling essential constraints that may be unique to a particular client’s environment but are required to get to a feasible solution that the client can actually implement.
  - Changing the tool over time as the business needs change.
- Standardization allows for:
  - Faster / cheaper implementation.
  - Faster / cheaper support.
  - Ease of training when moving to a different role but using similar tools.
Having a hybrid of flexibility with standardization is the best of both worlds!
My day started out ordinary: tea, checking emails, and weekly team meeting. Little did I know that I was in store for some mystery, detours, and redemption. It all started with a few email requests at the beginning of my day.
I checked my emails, and saw two pressing requests addressed to me. One was a problem importing planned orders from SAP into an Aspen SCM production scheduling model for my specialty chemicals customer in Argentina, and a separate request to add weekly gridlines to the planning board for a scheduler of the same manufacturer in Kentucky.
I shot a quick request to the Argentina scheduler and made tea while I waited for his response. Five minutes later, we jumped on a WebEx together. I quickly understood that during the weekly planned order (PLO) import from SAP, one product’s orders were not importing, so he had to add them manually.
Manual entry was not only time consuming, it was risky. There was a high chance of missing an order or two in the large amount of data. A missed order becomes a missed shipment, which can halt an end customer’s production line.
I let the scheduler know I would get back to him as soon as I had an answer. First, I had to run to another meeting.
I sent an Instant Message (IM) to the Kentucky scheduler who logged the other pressing request: “I got your request. I’ll get back to you ASAP.”
He replied: “OK!”
I jumped on my weekly call with the Profit Point colleagues who support the same client. We talked about various issues, and I let everyone know that I had some active issues and might reach out for help. Lastly, I asked Deanna Wenstrup to invite some US schedulers to a training class that I would lead for LAR (Latin America Region) schedulers the following week.
I hung up the phone, ready to tackle the Argentina PLO import issue. I ran a trace on the planned order import code to see if that would quickly identify the issue.
Then, I searched the code for the product in question looking for unusual behavior. And, finally, I had a clue in the Unit of Measure (UOM) field!
I shot an instant message to my colleague, Mark Rockey, to see if he had seen the UOM, “M”, used for this product. He hadn’t.
I sent some screen shots to the Argentina scheduler for his input on this potential root cause.
While I waited for Argentina, I focused on the Kentucky scheduler’s request. The schedule gridlines were currently set to weekly intervals. However, he required a non-standard gridlines setting.
I found very little in the documentation. I needed a little extra help. Thankfully, Profit Point has MMI Masters (team members who have worked with MMI models for 20+ years). I quickly posted a question to them.
After lunch, I received responses from MMI Masters John Hughes and Mark Rockey. They gave me two options, which I emailed to the customer: (1) implement available code that would anchor the time but not the weekday, or (2) have Profit Point provide custom code to anchor both time and weekday.
The customer quickly responded that he would prefer to wait on the custom code. I added it to his minor enhancement request list.
Then, the Argentina scheduler pinged me on IM, and explained that the unit of measure, “M”, or meters, was in use by other products that successfully imported.
Frustrated with this dead end, I jumped back into the code. After what seemed like hours, I suddenly spotted a field called MRP Type (Materials Requirements Planning). I got a flash of inspiration and adjusted two fields manually to match a product that worked.
BINGO!! The planned orders came in.
I immediately sent a screen shot to the Argentina scheduler to let him know that I finally found the problem. Ten minutes later, I received a response simply stating:
You are a genius.
I received an email from Deanna confirming the attendance of the US schedulers at training the following week. Now that I knew it would be a mix of US and Latin American schedulers, I set to work on my training presentation.
I began working on a mockup of a Schedule Metrics report requested by a customer’s Global Innovation Manager. I let the customer know that if he approved the changes, Jim Piermarini and I could begin SQL development work the following week. (In fact, it was approved two days later!)
I tested AspenTech’s recommended changes for an enhancement question that I had. Their recommendation solved my original question, but I found another issue which I documented and shared.
My day went on until 6 PM, when I resolved another urgent issue in California. It’s these long and hectic days, when we resolve difficult and mysterious problems as a team, that make me grateful to be a Profit Pointer!
For years we’ve been hearing about Big Data. Now the call is to make the data visible and actionable. Easier said than done. Remember when we wanted our music, phone and camera on one device instead of having to carry multiple devices? Data Visualization is that desirable right now and it is challenging to do well. Here’s why:
Challenge #1: Properly defining the question that you want the data to answer
In the world of supply chain, leaders typically want all of the data summed up into Good News or Bad News. For example, at the end of a monthly S&OP meeting, one of the key questions asked is, “Can Sales continue to promote product A?” For Operations to give a Yes or No answer, a timeframe has to be defined. Once the timeframe is agreed, Operations can answer the question by building a heat map for every product, or product family if that makes the data more manageable. The heat map can then be given to Sales at the end of the monthly S&OP.
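A heat map of that kind can be sketched in a few lines. The fragment below projects inventory for hypothetical products over an agreed four-week timeframe and flags each week Good or Bad; the product names, inventories, and forecasts are invented for illustration, and a real report would pull these from the planning system.

```python
# Hypothetical weekly S&OP heat map: can Sales keep promoting each product?
start_inventory = {"A": 50, "B": 10}
production = {"A": [40, 40, 40, 40], "B": [20, 20, 20, 20]}  # units per week
demand = {"A": [35, 45, 60, 75], "B": [15, 40, 30, 20]}      # forecast

def heat_map(weeks=4):
    """Project ending inventory per week; 'G' = Good News, 'B' = Bad News."""
    grid = {}
    for prod in start_inventory:
        inv = start_inventory[prod]
        row = []
        for w in range(weeks):
            inv += production[prod][w] - demand[prod][w]
            row.append("G" if inv >= 0 else "B")  # stockout = Bad News
        grid[prod] = row
    return grid

for prod, row in heat_map().items():
    print(prod, " ".join(row))
```

One glance at the grid answers the S&OP question: product A is fine until week 4, while product B goes Bad in week 2, so Sales should stop promoting it before then.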
Challenge #2: Cleaning up dirty data
This is where most organizations get stuck. Cleaning up the data is tedious work, but it has to be done or the metric is useless. Take heart: sometimes identifying and fixing the issues with the data is meaningful on its own. Also, think about the decisions that dirty data is influencing on a daily basis, or the time it takes to explain variances every month.
Challenge #3: Developing graphics that tell the story at a glance with the push of a button
You have to work with your audience to determine what graphics work for them. I find that it’s best to create something and then get feedback. This step can involve some trial and error, but once you have the design locked in, you need a skilled developer to automate the report-out. End users really appreciate being able to run the reports and generate the charts and graphs on demand with the push of a button. If it is complicated or requires many manual keystrokes to generate the charts and graphs, then that report-out will not be sustainable.
Challenge #4: Making the data actionable
Congratulations on making it to this step. You have put so much effort into getting here, and now all you have to do is summarize thousands or even millions of data points across multiple parameters in a way that helps the receivers of the results take action. If you can monetize the results by showing costs or savings, that will give the receivers of the output direction to either keep doing what they’ve been doing or incentive to make a change. Or, if you can summarize the data into categories that are meaningful to the audience, then they will know where to focus their time and energy to make improvements.
Here is an example of a chart that answers the question: How good is my schedule? This chart, along with five supporting charts, can be generated on demand in 30 seconds.
At Profit Point, we work with our clients to overcome the challenges with Data Visualization and develop Meaningful Supply Chain Metrics. Contact us at www.profitpt.com or at 610-645-5557 and we will be happy to assist you.
Here at Profit Point, we typically put in a fair amount of effort up front to scope out a project together with our client. This typically helps us and our client to set appropriate expectations and develop mutually agreeable deliverables. These are key to project success. But another key element to project success is getting good quality data that will allow our clients to make cost effective decisions from the analysis work we are doing or the software tool we are implementing.
Decision support models are notorious data hogs. Whether we are working on a strategic supply chain network design analysis, implementing a production scheduling tool, or building some other optimization model, they all need lots and lots of data.
The first thing we do (usually as part of our scoping effort) is identify each of the data types that will be required and the source of that data. To do this we start with what decisions need to be made and what data is required to make them successfully. From there we identify whether the data currently exists in some electronic form (such as an MRP system) or whether it will have to be collected and entered into some system (say, a spreadsheet or database program), and then figure out how the data will get into the tool we are developing.
Second, we try to get sample data from each data source as early as possible. This allows us to see if the assumptions that were made as part of the scoping effort were valid. There is nothing like getting your hands on some real data to see if what you and your team were assuming is really true! Often there are some discoveries and revelations that are made by looking at real data that require design decisions to be made to be able to meet the project deliverables.
Third, to help with data validation we find it extremely helpful to be able to visualize the data in an appropriate way. This could take the form of graphs, maps, Gantt charts, etc. depending on the type of data and model we are working on. On a recent scheduling project, we had the schedulers review cycle times in a spreadsheet but it wasn’t until they saw the data in Gantt chart form that they noticed problems with the data that needed correcting.
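As a minimal illustration of why a schedule view catches what a table hides, the sketch below renders task start/duration data as text bars; the task names and times are made up, and a production tool would of course use a proper interactive Gantt chart rather than ASCII art.

```python
# Hypothetical schedule rows: (task, start_hour, duration_hours).
tasks = [("Batch 1", 0, 4), ("Clean", 4, 1),
         ("Batch 2", 5, 6), ("Batch 3", 10, 3)]

def ascii_gantt(tasks, width=16):
    """Render each task as a bar of '#' offset by its start time."""
    lines = []
    for name, start, dur in tasks:
        bar = " " * start + "#" * dur
        lines.append(f"{name:<8}|{bar:<{width}}|")
    return "\n".join(lines)

print(ascii_gantt(tasks))
```

Notice that in the rendered output the Batch 3 bar visibly overlaps the end of Batch 2, exactly the kind of cycle-time data problem a scheduler will spot in a Gantt view but overlook in a spreadsheet of numbers.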
Identifying data sources, getting data as early as possible and presenting the data in a visualized form are absolutely required to make a project successful. Omitting any of these steps will at least add to the project cost and / or duration or possibly doom the project to failure.
December 1st, 2015 5:11 pm Category: Distribution, Global Supply Chain, Green Network, Inventory Management, Network Design, Optimization, Optimization Software, Scheduling, Solver Optimization, Supply Chain Improvement, Supply Chain Optimization, Supply Chain Planning, Transportation, Vehicle Routing, Warehouse Optimization, by: Gene Ramsay
Profit Point has been helping companies apply mathematical techniques to improve their business decisions for 20 years now, and it is interesting to review some of the advances in technology that have occurred over this time that have most enabled us to help our clients, including:
• The ability for companies to capture, store and access increasingly larger amounts of transaction and anecdotal data that quantify the behavior and motivation of customers, manufacturers, suppliers and other entities
• The improvement in analytical capabilities that help make optimized choices, in areas such as solving mixed-integer optimization problems, and
• The improvement of computing technology, allowing us to perform calculations in a fraction of the time required just a few years ago
A recent post on the Data Science Central website highlights the use of advanced techniques based on these advances by on-line marketplace Amazon, which is generally acknowledged as one of the most tech-savvy companies on the planet. 21 techniques are listed that Amazon uses to improve both their day-to-day operations and planning processes, including supply chain network design, delivery scheduling, sales and inventory forecasting, advertising optimization, revenue / price optimization, fraud detection and many others. For a complete list see the link below:
Like our customer Amazon, Profit Point is committed to using these techniques for the benefit of our clients – we have been concentrating on implementing business improvement for our clients, including optimization in various forms, since our very beginning. Are you, like Amazon, using the best methods to seize the opportunities that are available today?
Profit Point is helping several large chemical manufacturers upgrade their many Aspen SCM scheduling models, with the goal of achieving long-term supportability in the new Aspen architecture of version 8.5. An Aspen SCM (MIMI) upgrade is no small undertaking, but we have been helping people manage, support, and enhance their scheduling models for over 20 years.
I have seen many MIMI scheduling models over the last 20 years, in many different businesses, and it is still amazing to me how well these scheduling models work. Their superior applicability is primarily due to the creativeness of their original modelers and their efforts to incorporate all the important aspects of the plants they schedule, and most that I have seen have remained relevant and useful all these years. Their longevity is due in no small part to the flexibility of the scheduling environment that is Aspen SCM (AKA MIMI), which allows for many minor changes to the tool as equipment characteristics change or are upgraded, as the business needs change, or indeed as the scheduler changes. The new version retains the flexibility that has kept Aspen SCM scheduling models relevant today.
In previous version changes, Aspen SCM has always been backward compatible, meaning that with nominal effort a newer Aspen SCM version would open an older version’s scheduling model. This was true up until version 8.x, released earlier this year. With this version, older scheduling models, especially those developed in house, will not function properly without a more substantial effort. Version 8.x brings a new XML-based architecture and with it a new look and feel, more compatible with today’s applications. In addition, it has some useful new features that can make scheduling easier (see https://www.aspentech.com/products/aspen-plant-scheduler/). Aspen SCM remains, in my opinion, the best tool for the job of scheduling plants of all types and sizes. This new version is no break from that long history of being the best; indeed, it has just been made even better.
With plants around the world, our customers trust Profit Point to upgrade their effective scheduling models to the latest version of Aspen SCM (Mimi) so they can enjoy many more years of effective scheduling at their plants.
We love doing this work. Call us if you are facing the same upgrade challenge, we may be able to help get you going.
September 4th, 2013 4:39 pm Category: Scheduling, by: Richard Guy
Can Pulling the Plug on a Legacy Application Be Pouring Money down the Drain?
It’s expensive to develop new business solutions. If an existing application continues to accommodate your business requirements, and the complete software application platform (licensing, service, and support model) presents a continued value proposition, then why change? What language the code is written in or how old the software package is matters less than whether you can sustain it, support it and maintain it.
For example: Why is AspenTech’s supply chain management solution, Aspen SCM™, more commonly known as MIMI™, still used in production planning and scheduling at companies around the world? In this era of high-tech rapid application development, you would think an interactive modeling interface that began more than 25 years ago would die peacefully. But in this case, it’s not dead; it is still flourishing.
Since Profit Point supports a variety of MIMI™ users in the chemical, biotech, tire, and petrochemical industries, we recognize its history of value and its potential to change with the changing business climate. MIMI has allowed companies to take control of their production scheduling problems and helped them maximize throughput while optimizing resource investment.
Profit Point has evaluated and supported various supply chain management solutions, and we continue to provide support for older MIMI versions used for production planning and scheduling. It’s the built-in scheduling and task sequencing capabilities, MIMI’s user-friendly interface, and the ease with which it can be configured to address a company’s specific business process requirements that make it the ideal legacy supply chain management solution. Furthermore, leveraging MIMI’s scheduling models to integrate with ERP systems, such as SAP, Oracle or others, streamlines the scheduling process with minimal human interaction. Even after 20-plus years of use, MIMI’s flexibility, configurability, and ability to integrate with other databases and custom applications allow it to survive. Add in MIMI’s ability to model a company’s business processes at a fraction of the cost of implementing a new system, and you have a survivor. Aspen SCM™, or MIMI™ (which, by the way, stands for Manager for Interactive Modeling Interfaces), continues to outlast other scheduling tools.
In our opinion, legacy systems can continue to provide exceptional value if nurtured with the right service and support resources. Pulling the plug on some legacy applications can be pouring money down the drain. These are resources that can be used in other areas.
March 4th, 2013 4:19 pm Category: Global Supply Chain, Operations Research, Optimization Software, Scheduling, Supply Chain Agility, Supply Chain Improvement, Supply Chain Software, by: Danielle Cohen Jarvie
On a recent sailing trip to Croatia, we lost our sailboat. Sounds outrageous, but it really wasn’t difficult at all. It was early evening when we anchored in the harbor and took our dinghy to shore for dinner. A few hours later, seeing the wind pick up, we returned to the spot where we thought we had left the boat, and it had vanished. It was dark, very dark. Looking for the boat on the dark ocean at night was like looking for a needle in a haystack. After hours of searching, we finally found the boat headed out to sea; we had not let out sufficient line for the anchor. The harbor was surrounded by rocky cliffs, and we had no idea what course the boat had taken or whether it had incurred any damage on its renegade voyage. We shook a mechanic out of bed to evaluate whether any damage had been done. When all was said and done, we were very lucky: the boat was fine.
I can’t help but liken this to a manufacturing supply chain: without a business process to chart the way, without software to help navigate and support the process, and without people in place trained to captain the process, the business, like an unanchored sailboat, drifts into sometimes dangerous territory. Yet this scenario is not atypical for many companies.
How do you know if the anchor is set on your supply chain? Here are some attributes:
- A clear and documented business process that serves as the guide to how you operate under normal conditions as well as defines flows for unexpected changes and events. As conditions change, the process should be reevaluated and assessed in an ongoing fashion.
- Software that supports the business process and enables users to react quickly to unexpected events, making your business adaptable and flexible. Software should be tailored to your business needs; a one-size-fits-all solution does not necessarily work for all products and customers.
- Trained people who are living the business process and using the software, giving your business a competitive edge. Additionally, attention to the detail and data driving the software is crucial and can have a big impact on the business.
To learn more about Profit Point’s Global Supply Chain services, please contact us.
At the risk of sounding like a supply chain nerd, here at Profit Point, I get a similar sense of exhilaration in enabling our clients to increase the velocity in their supply chains by implementing decision support tools that enable faster and better decisions.
These decision support tools enable faster and better decisions in at least the following 3 ways:
1. Faster visibility to the data – By having a software tool that holds all the data needed to make a particular decision with automated interfaces to source systems, our users don’t have to spend countless hours combing through multiple spreadsheets and other software systems to get the data they need. We bring all the data needed together in one place for the user to be able to make effective decisions.
2. Faster understanding of the data – Supply chain decision support tools have huge amounts of data coming in and going out of them. Making sense of it all can be challenging. Typically what we do is build tools that allow the user to sort through all this data by:
a. Having graphical user interfaces that make it easier to understand what is going on. After all, a picture is worth a thousand numbers any day of the week.
b. Showing only the exceptions or problems that need to be resolved, to help the user focus on what needs to be changed.
3. Faster processing of the data – Oftentimes we will automate tasks that are menial and time consuming; if the task is very complex, it may be appropriate to employ an optimization or heuristic solution approach to speed getting to a feasible or better solution. We like to call these “Power Assist” tools that greatly ease the burden on the user while still giving them ultimate control over the decisions that are made.
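A hedged sketch of such a “Power Assist” is below: a greedy nearest-neighbor heuristic that proposes an order sequence with low total changeover time, which the user remains free to override. The product families and setup times are invented for illustration, and real tools would use richer heuristics or optimization.

```python
# Hypothetical changeover (setup) times in minutes between product families.
setup = {
    ("A", "B"): 30, ("A", "C"): 90,
    ("B", "A"): 30, ("B", "C"): 45,
    ("C", "A"): 90, ("C", "B"): 45,
}

def greedy_sequence(start, orders):
    """Greedily pick the next order with the cheapest changeover."""
    remaining = list(orders)
    seq, current, total = [start], start, 0
    while remaining:
        nxt = min(remaining, key=lambda o: setup[(current, o)])
        total += setup[(current, nxt)]
        seq.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return seq, total

seq, minutes = greedy_sequence("A", ["B", "C"])
print(seq, minutes)  # A -> B (30 min) -> C (45 min): 75 minutes of changeover
```

The heuristic only proposes; the scheduler still reviews and can rearrange the sequence, which is the “assist without taking over” balance described above.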
Do you feel the need for more speed in your supply chain? Give us a call so we can discuss how we can help to get you moving faster.
A husband, two kids and a golden retriever later… I am back to implementations in Supply Chain planning and scheduling. To my surprise, the same challenges I encountered 10 years ago remain in force today: data, defining business processes, data, implementing software, data, training people, data, supporting the change to a new system and data.
Data collection remains one of the cornerstones of a successful supply chain planning or scheduling implementation. Though reams of data may exist in a company’s business, harnessing it to feed a planning or scheduling model can be extremely complex and time consuming. Interestingly, the data collection process often drives an elucidation of manufacturing practices and process flows, and clients learn what they do and don’t know about their business. This may seem backwards and risky in terms of getting things out of order. In a perfect world, a thorough understanding of manufacturing and business processes would pave the way towards building an Advanced Planning and/or Scheduling system. In reality, they often happen in tandem and are evolutionary in nature.
Deciding early in an implementation how data will be housed, derived and propagated will pay off in the long run. Establishing a systematic, automated way to update and propagate data is as important as the decision of which software system to use. It is worth the investment to put this automation in place, so that as more and more products are added to the system, the data remains manageable and scalable.
From PC to Cloud, emails to tweets, networking happy hours to LinkedIn, it is nice to know some things stay the same.
November 21st, 2011 12:20 pm Category: Global Supply Chain, Network Design, Optimization, Risk Management, Scheduling, Supply Chain Agility, Supply Chain Improvement, Supply Chain Planning, Sustainability, by: Jim Piermarini
Change is hard.
In the businesses that I help, change comes for several reasons. It may be thrust upon the business from the outside, a change in the competitive landscape for instance, or a new regulation. It may come from some innovative source within the company looking for cost savings to increase profitability, or from a new process or product with increased productivity. Change can come from the top down, or from the bottom up. Change can come in a directed way, as part of a larger program, or organically as part of a larger cultural shift. Change can come that makes your work easier, or harder, and may even eliminate a portion (or all) of the job that you were doing. Change can come to increase the bottom line or the top line. But primarily change comes to continue the adaptation of the company to the business environment. Change is the response to the Darwinian selector for businesses. Adapt or decline. Change is necessary. It is clear to me from my experience that businesses need to change to stay relevant.
This may seem trite or trivial, but accepting that change is not only inevitable, but that it is good, is the shift in attitude that separates the best companies (and best employees) from the others.
So, you say, I see the need to change, it is not the change itself that is so difficult, but rather the way that it is inflicted upon us that makes it hard. So, why does it have to be so hard? Good question.
Effective managers know that change is necessary but hard. They are wary of making changes, and rightly so. Most change projects fail. People generally just don’t like it. Netflix is a great example. Recently, Netflix separated their streaming movie service from their DVD rental business. After what I am sure must have been careful planning, they announced the change, and formed Quikster, the DVD rental site, and the response from the customer base was awful. As you likely know, Netflix, faced with the terrible reception from their customer base and stockholders, reversed their decision to separate streaming from DVDs. What was likely planned as a very important change, failed dead. Dead, dead, dead. Change can be risky too.
If change is necessary, but hard and risky… how can you tame this unruly beast?
The secret of change is that it relies on three things: People, Process, and Technology. I name them in the order in which they are important.
People are the most important agents relative to change, since they are the ones who decide on the success or failure of the change. People decided that the Netflix change was dead. People decide all the time whether to adopt change. And people can be capricious and fickle. People are sensitive to the delivery of the change. They peer into the future to try to understand the effect it will have on them, and if they do not like what they see… It is the real people in the organization who have to live with the change, who have to make it work, learn the new, and unlearn the old. It is likely the very same people who proudly constructed the current situation who will have to let go of their ‘old’ way of doing things to adopt the new. Barriers to change exist in many directions in the minds of people. I know this to be true: in making change happen, if you are not sensitive to the people you are asking to change, and do not address their fears and concerns, the change will never be accepted. If you do not give them a clear sense of the future state, where they will be in it, and why it is a better place, they will resist the change and have a very high likelihood of stopping it, either openly or, more likely, passively and quietly, and you may never know why the fabulously planned change project failed.
Process is the next aspect of a change project that matters. A better business process is what drives costs down: avoiding duplication of effort, removing extra steps, and looking at alternatives in a ‘what-if’ manner in order to make better decisions. These are what make businesses smarter, faster, better. A better business process is like getting a better recipe for the kitchen. Yet no matter how good a recipe, it still relies on the chef to execute it and the ovens to perform properly. Every business is looking for better business processes, just as every chef is looking for new recipes. But putting an expert soufflé recipe, where the soufflé rises higher, in the hands of an inexperienced chef does not always yield a better soufflé. People really do matter more than the process.
Technology is the last of the three aspects that effect change. Better technology enables better processes. A better oven does not make a chef better. The chef gets better when they learn to use the new oven in better ways, when they change the way they make the soufflé because the new oven makes it possible. A better oven does not do it by itself. An oven is just an oven. In the same way, better technology is still just technology. By itself it changes nothing. New processes can be built that use it, and people can be encouraged to use it in the new process. Technology changes are the least difficult to implement, and it is likely for this reason that they are often fixed upon as the simple answer to what are complex business problems requiring a comprehensive approach to changing the business via its people, process, and technology.
Change is necessary, but hard and risky. Without change, businesses will miss opportunities to adapt to the unforgiving business world, and decline. However, change can be tamed if it is approached as a good thing and addressed with a focus on people, process and technology, in that order. Done right, you can implement the change that will increase the bottom line and avoid a collapse of your soufflé.
Many commercial enterprises and public agencies operate telephone call centers to provide effective and timely service for customers. Employing nearly 5% of the national workforce, call centers arguably define the “new factory floor” of an increasingly service-based economy. They are fascinating socio-technological systems which are exceptionally well-suited to the application of mathematical modeling and optimization methods.
A typical call center utilizes a computerized call handling system which can archive detailed historical information on call volume, call handling time, and other relevant attributes. This data can be analyzed and aggregated (with appropriate accounting for probabilistic variation) to generate a profile of staffing requirements across future time intervals. In theory, service agents can be optimally scheduled to closely accommodate this profile, resulting in high service levels, low customer abandonment, and efficient agent utilization. In actual practice, however, such performance represents the exception rather than the rule. Most call centers, even well-run ones, do not simultaneously achieve high levels of service quality and operational efficiency [1].
One important reason for the performance gap between theory and practice is lack of sophistication and flexibility in the standard software systems available for call center management. For example, standard systems invariably base interval staffing requirements on the classic “Erlang C” model, which is known to produce distorted results because it does not consider pertinent factors such as customer impatience [2]. Additionally, if the software has any capability for schedule “optimization,” the underlying algorithm is usually a greedy heuristic which sequentially adds agent shifts without due consideration of the complex interactions between them. Beyond these technical limitations, standard systems offer minimal capability to experiment with different shift types and customize the solution strategy.
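To make the Erlang C limitation concrete, here is a minimal sketch in Python of the classic formula for the probability that an arriving call must wait. The inputs are hypothetical; note that the formula assumes callers who never hang up while waiting, which is exactly the blind spot (customer impatience) described above.

```python
from math import factorial

def erlang_c(agents: int, offered_load: float) -> float:
    """Probability an arriving call must wait (classic Erlang C).

    agents: number of servers (c)
    offered_load: a = arrival rate / service rate, in Erlangs
    Assumes Poisson arrivals, exponential service times, and
    infinitely patient callers (no abandonment).
    """
    if offered_load >= agents:
        return 1.0  # unstable system: every call waits
    a, c = offered_load, agents
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** k / factorial(k) for k in range(c)) + top
    return top / bottom

# Hypothetical example: 3 agents handling an offered load of 2 Erlangs
print(erlang_c(3, 2.0))  # ~0.444: about 44% of calls would wait
```

Because real callers abandon, a staffing requirement derived from this formula overstates the agents needed to hit a delay target, which is one source of the distortion the text describes.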
Profit Point can provide the expertise and custom tools necessary to properly model your unique call center environment and achieve optimal performance. By applying recently-refined mathematics, interval staffing requirements can be accurately determined and optimal shift distributions can be precisely derived [3]. Efficiency improvements exceeding 10% are typical, coincident with improvement in service level performance. Many additional operational factors, such as on-line chat activity and agent specialization, can also be addressed. There is no better time than now for you to reap the rewards of optimizing your organization’s call center operations.
[1] Noah Gans, Ger Koole, and Avishai Mandelbaum, “Telephone Call Centers: Tutorial, Review, and Research Prospects,” Manufacturing & Service Operations Management 5, 79–141 (2003).
[2] Lawrence D. Brown, et al., “Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective,” Journal of the American Statistical Association 100, 36–50 (2005).
[3] Dennis C. Dietz, “Practical Scheduling for Call Center Operations,” Omega 39, 550–557 (2011).
Frequently, you might hear somebody say that the capacity of a production facility is some known and fixed value. When this happens, be very wary of what they might be trying to sell you, because as with so many other things, when measuring capacity “the devil is in the details”.
The “capacity” of a factory sounds like a pretty simple notion and something that should be easy to calculate. But this is only true for production systems that are fairly straightforward, consisting of totally independent machines and processes. If, however, the organization consists of operations that are interconnected and interdependent, then capacity can be a fairly difficult thing to measure.
In the vast majority of production systems, there is a very real link between capacity and three critical factors:
- the mix of products, and how much time is required for setup/cleanup between consecutive production runs,
- the ability to create sophisticated and optimal schedules for the production resources,
- how much physical space exists in the factory where products that are only partially complete can be kept or stored; what’s known as Work in Process (or WIP) Inventory.
To see these 3 relationships at work, consider the simple case where a certain department produces two products, A and B, which both use the same piece of equipment, and there is only one of these machines available. The production rates of the machine are in the table below, and there is a 4-hour setup time required when the machine switches over from producing one product to the other. Now consider the two scenarios below. In Scenario A, the capacity is 170 units per day, while in Scenario B the capacity is 145.
| | Production Rate (Units/hr) | Scenario A: Daily Sched Qty | Scenario A: Hrs Required | Scenario B: Daily Sched Qty | Scenario B: Hrs Required |
|---|---|---|---|---|---|
| Product A | | | | | |
| Product B | | | | | |
| Setup hrs | | | 4 | | 4 |
This example clearly demonstrates the first item above: that the “capacity” of the department depends to a large extent on the mix of the two products being produced.
Now suppose that management wants to produce 110 of A and 80 of B per day. These new requirements seem to clearly exceed the capacity of the department given EITHER Scenario A or B. But maybe the necessary capacity can still be found.
If the department must produce these quantities within each and every single day, switching between the two products daily, then there is definitely not enough capacity on the machine. However, if the increased production is required over a sustained length of time, then we can gain extra production by modifying the production schedule so as to eliminate or minimize the occurrence of the 4-hour setup. If the department schedules production in long blocks spanning several days, where first one product and then the other is produced, then the department DOES have the capacity. In the table below, for example, 440 units of A are first produced, followed by 320 of B, with a 4-hour setup between them. This represents 4 days’ worth of the increased management requirement (110 of A and 80 of B, each multiplied by 4).
| | Production Rate (Units/hr) | Sched Qty | Hrs Required |
|---|---|---|---|
| Product A | | 440 | |
| Product B | | 320 | |
| Setup hrs | | | 4 |
With this schedule, the total required hours of 94 is less than the 96 hours available in 4 days, and so now there IS enough capacity! By scheduling wisely (i.e. “working smarter”), the department’s average daily capacity has actually risen to (760 / 4) = 190 units per day, a good deal higher than either 170 or 145 in the two previous scenarios.
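The setup arithmetic above can be sketched in a few lines of code. This is a hypothetical illustration (the production rates below are assumed for the example, not taken from the tables): it compares switching products every day against running multi-day blocks, charging a 4-hour setup at every changeover.

```python
def hours_required(schedule, rates, setup_hours=4.0):
    """Machine hours for a sequence of (product, quantity) runs,
    charging one setup for each changeover between different products."""
    run_time = sum(qty / rates[product] for product, qty in schedule)
    changeovers = sum(
        1 for i in range(1, len(schedule))
        if schedule[i][0] != schedule[i - 1][0]
    )
    return run_time + setup_hours * changeovers

# Hypothetical rates: A at 10 units/hr, B at 8 units/hr
rates = {"A": 10.0, "B": 8.0}

# Switching every day for 4 days: 7 changeovers
daily = [("A", 110), ("B", 80)] * 4
# Two long blocks over the same 4 days: 1 changeover
blocked = [("A", 440), ("B", 320)]

print(hours_required(daily, rates))    # 112.0 hours: exceeds the 96 available
print(hours_required(blocked, rates))  # 88.0 hours: fits within 96
```

Even with these made-up rates, the same pattern appears: the identical total quantities fit or don’t fit in 4 days (96 hours) depending purely on how the schedule sequences the setups.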
Thus, the department capacity clearly depends on the ability to implement “smart” production schedules that make the best use of the available resources, i.e. the second issue mentioned earlier.
Finally, this higher capacity schedule is an example of a “good news / bad news” situation. Although the plant is able to produce more (and presumably company revenues will go up), the downside of this higher capacity schedule is that the department will be maintaining a larger amount of inventory in the supply chain on average. And if there is more “stuff” in the pipeline, then there has to be the physical space to put it. This is an important consideration if inventory has to be stored in or on particular types of storage facilities such as refrigerators or special racks. Therefore, although it might be possible to “buy” extra production capacity with a better equipment schedule, it is important to realize that different schedules put more or less demand on the spatial capacity of the actual storage facilities.
Therefore, this example illustrates the third item: that increasing output can put stress on the plant’s storage facilities.
This last scenario also shows that maximum capacity is not necessarily the same as minimum cost. Notice that in this scenario there is only one 4-hour setup, so any costs from the setup activity are averaged over a larger number of produced items. But offsetting this savings in setup cost is the fact that, with the increased WIP, inventory costs will have gone up.
The fact that capacity can be such a difficult thing to measure does not mean that it is not a valuable parameter for describing a given system. What it does mean is that when any capacity value is given for a particular supply chain, it is absolutely critical to understand the assumptions that underlie it. The fact that capacity is such a highly malleable concept simply reinforces the point that managing a company’s supply chain is always a delicate balancing act between competing costs and non-monetary factors.
When using a rapid iterative development process, a client may see several solutions to their problem that are not perfect. Getting the best feedback from the client at every cycle of the development is important. If you get only the most obvious evaluations, progress towards an acceptable product is erratic, possibly unsuccessful, and very painful.
A recent experience was a scheduling application where we did not have direct contact with the final user; we were working through an intermediate third party. It finally became clear that feedback was going to be limited to comments about whatever feature the customer wanted to focus on. Usually that would represent some aspect of the solution that was either unacceptable or very different from current practice. Some current practices would require new resources and changes in parts of the system that could not be easily changed.
March 9th, 2010 8:41 pm Category: Scheduling, by: Jim Piermarini
“Hello, this is Jake at Western Chemical Resupply. How can I help you?”
“Hello, Jake, this is Paul, I need to order some pool supplies for my pool business.” says the voice on the other end of the line.
This is great, no problem. After getting Paul’s customer information, he asks:
“So I typically order 150 units of the new chlorine product to start the season, and I am opening my store in a few weeks. Do you have any in stock?”
What is a Monte Carlo model and what good is it? We’re not talking about the type of car produced by General Motors under the Chevy nameplate. “Monte Carlo” is the name of a type of mathematical computer model. A Monte Carlo model is merely a tool for figuring out how risky some particular situation is. It is a method to answer a question like: “what are the odds that such-and-such event will happen?” Now, a good statistician can calculate an answer to this kind of question when the circumstances are simple, or if the system you’re dealing with doesn’t have a lot of forces that work together to give the final result. But when you’re faced with a complicated situation that has several processes that interact with each other, and where luck or chance determines the outcome of each, then calculating the odds for how the whole system behaves can be a very difficult task.
Let’s just get some jargon out of the way. To be a little more technical, any process which has a range of possible outcomes and where luck is what ultimately determines the actual result is called “stochastic”, “random” or “probabilistic”. Flipping a coin or rolling dice are simple examples. And a “stochastic system” would be two or more of these probabilistic events that interact.
Imagine that the system you’re interested in is a chemical or pharmaceutical plant where producing one batch of material requires a mixing and a drying step. Suppose there are 3 mixers and 5 dryers that function completely independently of one another; the department uses a ‘pool concept’ where any batch can use any available mixer and any available dryer. However, since there is not enough room in the area, if a batch completes mixing but there is no dryer available, then the material must sit in the mixer and wait. Thus the mixer can’t be used for any other production. Finally, there are 20 different materials produced in this department, and each of them can have a different average mixing and drying time.
Now assume that the graph of the process times for each of the 8 machines looks somewhat like what’s called a ‘bell-shaped curve’. This graph, with its highest point (at the average) right in the middle and with left and right sides that are mirror images of each other, is known as a Normal distribution. But because of the nature of the technology and the machines having different ages, the “bells” aren’t really centered; their peaks are pulled to the left or right, so each bell is actually a little skewed to one side or the other. (Therefore, these process times are really not Normally distributed.)
If you’re trying to analyze this department, the fact that the equipment is treated as a pooled resource means it’s not a straightforward calculation to determine the average length of time required to mix and dry one batch of a certain product. And complicating the effort would be the fact that the answer depends on how many other batches are then in the department and what products they are. If you’re trying to modify the configuration of the department, maybe make changes to the scheduling policies or procedures, or add/change the material handling equipment that moves supplies to and from this department, a Monte Carlo model would be the best approach to performing the analysis.
In a Monte Carlo simulation of this manufacturing operation, the model would have a clock and a ‘to-do’ list of the next events that would occur as batches are processed through the unit. The first events to go onto this list would be requests to start a batch, i.e. the paperwork that directs or initiates production. The order and timing for the appearance of these batches at the department’s front-door could either be random or might be a pre-defined production schedule that is an input to the model.
The model “knows” the rules of how material is processed from a command to produce through the various steps in manufacturing and it keeps track of the status (empty and available, busy mixing/drying, possibly blocked from emptying a finished batch, etc.) of all the equipment. And the program also follows the progress and location of each batch. The model has a simulated clock, which keeps moving ahead and as it does, batches move through the equipment according to the policies and logic that it’s been given. Each batch moves from the initial request stage to being mixed, dried and then out the back-door. At any given point in simulated time, if there is no equipment available for the next step, then the batch waits (and if it has just completed mixing it might prevent another batch from being started).
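The clock-and-event-list machinery described above is the heart of any discrete-event simulation. A minimal sketch in Python (simplified to a single pooled resource with hypothetical timings, so it omits the separate mixer/dryer stages and blocking logic) shows the pattern: a priority queue of pending events, a simulated clock that jumps to the next event, and handlers that update equipment status and schedule follow-on events.

```python
import heapq

def simulate(arrival_times, process_time, n_machines):
    """Tiny discrete-event loop: batches arrive, grab a pooled machine
    if one is free, otherwise wait in a queue. Returns finish times."""
    events = [(t, "arrive", i) for i, t in enumerate(arrival_times)]
    heapq.heapify(events)
    free, waiting, finished = n_machines, [], {}
    while events:
        clock, kind, batch = heapq.heappop(events)  # advance the clock
        if kind == "arrive":
            waiting.append(batch)
        else:  # "done": the batch finishes and its machine frees up
            free += 1
            finished[batch] = clock
        # start any waiting batches on free machines
        while free > 0 and waiting:
            free -= 1
            nxt = waiting.pop(0)
            heapq.heappush(events, (clock + process_time(nxt), "done", nxt))
    return finished

# Hypothetical example: 3 batches, 2 machines, fixed 5-hour process time
print(simulate([0.0, 1.0, 2.0], lambda b: 5.0, n_machines=2))
# -> {0: 5.0, 1: 6.0, 2: 10.0}: batch 2 must wait for a machine
```

A full model of the department would extend this skeleton with two resource pools (mixers and dryers), the blocking rule for full mixers, and random rather than fixed process times.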
What sets a Monte Carlo model apart, however, is that when the program needs to make a decision or perform an action where the outcome is a matter of chance, it has the ability to essentially roll a pair of dice (or flip a coin, or draw straws) in order to determine the specific outcome. But since rolling dice means that each number has an equal chance of “coming up”, a Monte Carlo model actually contains equations known as “probability distributions”, which pick a result where certain outcomes are more or less likely than others. It’s through the use of these distributions that we can accurately reflect those skewed, non-Normal process times of the equipment in the manufacturing department.
The really cool thing about these distributions is that if the Monte Carlo uses the same distribution repeatedly, it might get a different result each time simply due to the random nature of the process. Suppose that the graph below represents the range of values for the process time of material XYZ (one of the 20 products) in one of the mixers. Notice how the middle of the ‘bell’ is off-center to the right (it’s skewed to the right).
So if the model makes several repeated calls to the probability distribution equation for this graph, sometimes the result will be 2.0-2.5 hrs, other times 3.5-4.0 hrs, and on some occasions more than 4 hrs. But in the long run, over many repetitions of this distribution, the proportion of times for each of the time bands will match the values that are in the graph (5%, 10%, 15%, 20%, etc.) and were used to define the equation.
So to come back to the manufacturing simulation, as the model moves batches through production, when it needs to determine how much time will be required for a particular mixer or dryer, it runs the appropriate probability equation and gets back a certain process time. In the computer’s memory, the batch will continue to occupy the machine (and the machine’s status will be busy) until the simulation clock gets to the correct time when the process duration has completed. Then the model will check the next step required for the batch and it will move it to the proper equipment (if there is one available) or out of the department all together.
In this way then, the model would continue to process batches until it either ran out of batches in the production schedule that was an input, or until the simulation clock reached some pre-set stopping point. During the course of one run, the computer would have been monitoring the process and recording in memory whatever statistics were relevant to the goal of the analysis. For example, the model might have kept track of the amount of time that certain equipment was blocked from emptying XYZ to the next step. Or, if the aim of the project was to calculate the average length of time to produce a batch, the model would have been following the overall duration of each batch from start to finish in the simulated department.
The results from just one run of the Monte Carlo model however are not sufficient to be used as a basis for any decisions. The reason for this is the fact that this is a stochastic system where chance determines the outcome. We can’t really rely on just one set of results, because just through the “luck of the draw” the process times that were picked by those probability distribution equations might have been generally on the high or low side. So the model is run repeatedly some pre-set number of repetitions, say 100 or 500, and results of each of these is saved.
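That repeat-and-tally idea can be sketched in a few lines of Python. This is a simplified illustration, not the full department model: the lognormal parameters below are made-up stand-ins for the skewed process-time distributions, and the pooled-equipment and blocking logic is omitted for brevity.

```python
import random

def batch_time(rng):
    """One simulated batch: skewed (right-tailed) mixing and drying times.
    The lognormal parameters here are hypothetical stand-ins."""
    mixing = rng.lognormvariate(1.0, 0.3)   # median ~2.7 hrs, skewed right
    drying = rng.lognormvariate(1.3, 0.4)   # median ~3.7 hrs, skewed right
    return mixing + drying

def fraction_over(threshold_hrs, n_runs=500, seed=42):
    """Run the model n_runs times and tally how often the total
    batch time exceeds the threshold."""
    rng = random.Random(seed)
    over = sum(batch_time(rng) > threshold_hrs for _ in range(n_runs))
    return over / n_runs

# e.g. estimate how often a batch takes 10 hrs or more
print(fraction_over(10.0))
```

Each call to `lognormvariate` is one “roll of the dice” against a skewed distribution; the estimate stabilizes as the number of repetitions grows, which is why the model is run hundreds of times rather than once.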
Once all of the Monte Carlo simulations have been accumulated, it’s possible to draw certain conclusions. For example, it might turn out that the overall process time through the department was 10 hrs or more in 8% of the runs. Or that the average length of blocked time, when batches are prevented from moving to the next stage because there was no available equipment, was 12 hrs; or that the amount of blocked time was 15 hrs or more in 15% of the simulations.
With information like this, a decision maker would be able to weigh the advantages of adding/changing specific items of equipment as well as modifications to the department’s policies, procedures, or even computer systems. In a larger more complicated system, a Monte Carlo model such as the one outlined here, could help to decrease the overall plant throughput time significantly. At some pharmaceutical plants for instance, where raw materials can be extremely high valued, decreasing the overall throughput time by 30% to 40% would represent a large and very real savings in the value of the work in process inventory.
Hopefully, this discussion has helped to clarify just what a Monte Carlo model is and how it is built. This kind of model accounts for the fundamental variability that is present in almost all decision making. It does not eliminate risk or prevent a worst-case scenario from actually occurring, nor does it guarantee a best-case outcome. But it does give the business manager added insight into what can go wrong or right, and the best ways to handle the inherent variability of a process.
Profit Point’s data integration and scheduling optimization services deliver reliable results with reduced operations costs.
North Brookfield, MA
Profit Point today announced that its Profit Data Interface™ software has been selected by Rohm and Haas Company (NYSE: ROH) to integrate its scheduling processes with the company’s ERP data warehouse. The company, which last reported nearly $9 billion in annual sales, produces innovative products for nine industries worldwide through a network of more than 100 manufacturing, technical research and customer service sites. Optimizing and supporting the production and distribution scheduling across this network is a complex and ever-changing process.
“Rohm and Haas has a history of improving our operations to enhance customer service levels and reduce cost,” said Dave Shaw, the company’s Business Process Manager for MFG and Supply Chain. “Production scheduling, which entails constant change to meet demand, is one of the toughest challenges in the supply chain. In the past, the lack of a reliable data interface has limited our ability to react quickly and with a high degree of confidence in our results. Profit Point’s Data Interface software has given us near real-time access to highly reliable data, so we can respond quickly and know that our plan is right.”
Profit Data Interface is a robust application that helps decision makers boost the effectiveness of their ERP data by extending its usefulness with optimization applications. By leveraging existing ERP systems, the software provides a robust and proven method that supply chain managers can rely upon to optimize their critical business processes and improve profitability.
“Rohm and Haas is a recognized leader in the chemicals industry with a reputation for supply chain excellence,” said Jim Piermarini, Profit Point’s CEO. “We have supported their scheduling processes for years. So, it was clear that the next evolution was to directly connect their optimization software to the data store using our Data Interface product.”
Profit Data Interface, which integrates with SAP® and Oracle® data stores, can be used to optimize the entire supply chain including network planning, production and inventory planning, distribution scheduling, sales planning and vehicle routing.
To learn more about Profit Point’s supply chain software and services, visit www.profitpt.com.
About Profit Point:
Profit Point Inc. was founded in 1995 and is now a global leader in supply chain optimization. The company’s team of supply chain consultants includes industry leaders in the fields of infrastructure planning, green operations, supply chain planning, distribution, scheduling, transportation, warehouse improvement and business optimization. Profit Point’s combined software and service solutions have been successfully applied across a breadth of industries and by a diverse set of companies, including General Electric, Dole Foods, Logitech and Toyota.
About Rohm and Haas Company:
Leading the way since 1909, Rohm and Haas is a global pioneer in the creation and development of innovative technologies and solutions for the specialty materials industry. The company’s technologies are found in a wide range of industries including: Building and Construction, Electronics and Electronic Devices, Household Goods and Personal Care, Packaging and Paper, Transportation, Pharmaceutical and Medical, Water, Food and Food Related, and Industrial Process. Innovative Rohm and Haas technologies and solutions help to improve life every day, around the world. Visit www.rohmhaas.com for more information.
Leveraging Profit Point’s supply chain optimization methodologies, Toyota’s North American Part Center California improved the efficiency and quality of its workload planning and sequencing process for receiving containers from Japan.
North Brookfield, MA (PRWEB) October 6, 2008
Profit Point today announced that Toyota Motor Sales (TMS), U.S.A., Inc.’s North American Part Center California (NAPCC) has improved its receiving sequencing processes using advanced mathematical optimization techniques. NAPCC is one of the parts distribution centers in TMS’ North American Parts Operations network, which was established to improve local parts sourcing and manage a parts distribution network that supplies all North American Toyota distributors and U.S. Toyota, Lexus and Scion dealers, as well as exports to parts centers in Japan. NAPCC turned to Profit Point to apply mathematical optimization techniques to further improve their supply chain operations.
“We turned to Profit Point to apply mathematical optimization techniques to further improve our supply chain operations,” said Johnnie Garlington, NAPCC’s warehouse operations manager. The program supported a 16% increase in daily offload, resulting in savings in labor, off-site storage costs and detention expenses.
Profit Point, the leading supply chain optimization company, combines proprietary software with proven optimization techniques to help business managers improve their operations. Profit Point supported NAPCC’s objective to redesign their workload planning process to improve the efficiency and quality of their sequencing processes. Profit Point carried this out by designing and building custom supply chain software to optimize their sequencing processes.
“We were asked to investigate a mathematical approach to solving Toyota NAPCC’s container receiving sequencing process,” said Joe Litko, Profit Point’s Business Optimization Practice Leader. “This was an interesting challenge for several reasons. We needed a cost-effective solution using legacy tools, the model needed to run quickly, be flexible, and give robust solutions that consider several performance measures simultaneously.”
NAPCC had been using a traditional spreadsheet to manually achieve an hourly workload plan. Profit Point reviewed the sequencing process and designed a stand-alone application to smooth out the flow of containers to maximize the daily unload capacity.
“Like most businesses, Toyota NAPCC was using good, traditional operations practices,” said Dr. Alan Kosansky, Profit Point’s President. “But, by combining the right mathematical optimization methods with a clear understanding of the business requirements, we were able to achieve a superior supply chain process for Toyota.”
About Profit Point:
Profit Point Inc. was founded in 1995 and is now a global leader in supply chain optimization. The company’s team of supply chain consultants includes industry leaders in the fields of infrastructure planning, green operations, supply chain planning, distribution, scheduling, transportation, warehouse improvement and business optimization. Profit Point’s combined software and service solutions have been successfully applied across a breadth of industries and by a diverse set of companies, including The Coca-Cola Company, General Electric, Rohm and Haas and Toyota.
There are many approaches to production scheduling, each relying on different business process and/or technology solutions. Manufacturers have a key decision to make at each of their production facilities: How much information technology is needed to support the scheduling process, and how could the manufacturing process be simplified to reduce the information technology requirements?
Profit Point uses a structured decision process to help determine the appropriate level of technology to support scheduling in each manufacturing environment. This assessment process considers the following factors:
-> Material flow through the stages of the manufacturing process (product routing), including alternate routings, manufacturing options, and bill of materials
-> Storage asset utilization
-> Impact of make-to-order and make-to-stock policies on production
-> Manufacturing bottlenecks and their locations (single or multiple)
-> Cost of de-bottlenecking the operation
-> Information requirements to support the scheduling process
-> Information sources and bottlenecks, including refreshing of data elements
-> Scheduler (user) experience and analytic capabilities
-> Business rules for customer responsiveness and manufacturing flexibility
-> Cost of information technology solutions and their support
-> Cost of manufacturing simplification
A one-day, hands-on Scheduling Process Assessment incorporates the needs of the following stakeholders:
-> Plant Scheduling Management
-> Plant manufacturing operations
-> Business owner (e.g. product manager)
-> Information Technology support staff
The Assessment process culminates with a specific recommendation for the most cost effective approach to support and sustain the scheduling requirements.
Profit Point was selected by Rohm and Haas Company to provide multiple consulting and support services using AspenTech‘s Supply Chain Management (SCM) (formerly Aspen MIMI) supply chain modeling software. Aspen SCM is used to power numerous supply chain applications. These applications can include industry optimization solvers like Xpress-MP by Dash Optimization or CPLEX by ILOG, Inc.
Profit Point has worked with Rohm and Haas to design support coverage that was flexible but also responsive to their requirements. The services included end-user support and enhancement work to all Aspen SCM based supply chain scheduling applications. Profit Point provides support service that includes the repair and debug of model problems as they arise, support of the various production and operational processes that feed data to and from the models and minor enhancements to the models. Profit Point’s Aspen SCM support service allowed Rohm and Haas’s business units to continue providing excellent customer service by delivering quality products with minimal scheduling interruptions.
In addition, Rohm and Haas selected and engaged Profit Point Inc. to improve, design and develop several scheduling models to manage Rohm and Haas’s production scheduling process for plastics additives and coatings production, which includes 100+ products, 1,000+ SKUs, and 30+ production facilities. Profit Point worked with Rohm and Haas to identify ongoing requirements for production scheduling and has designed, created and delivered more than 30 plant- and process-specific scheduling tools to allow Rohm and Haas to achieve their strategic goals: improving production scheduling, lowering operating costs and providing better service to their customers.
- Model Enhancements
- Model Application Design And Development
- Model And SAP APO (Or Other) Application Integration
- Technical Support and Training
- Help Desk Break/Fix Support
To learn more about how Profit Point can help improve and support your scheduling processes, contact us here: (866) 347-1130 or