Posts Tagged ‘Scheduling’

Here at Profit Point, we typically put in a fair amount of effort up front to scope out a project together with our client.  This helps us and our client set appropriate expectations and develop mutually agreeable deliverables, both of which are key to project success.  But another key element is getting good quality data that will allow our clients to make cost-effective decisions based on the analysis work we are doing or the software tool we are implementing.

Decision support models are notorious data hogs.  Whether we are working on a strategic supply chain network design project or implementing a production scheduling tool or an optimization model, they all need lots and lots of data.

The first thing we do (usually as part of our scoping effort) is identify each data type that will be required and the source of that data.  We start with the decisions to be made and the data required to make them successfully.  From there we determine whether the data currently exists in some electronic form (such as an MRP system) or whether it will have to be collected and entered into some system (say a spreadsheet or database), and then figure out how the data will get into the tool we are developing.

Second, we try to get sample data from each data source as early as possible.  This allows us to see if the assumptions made as part of the scoping effort were valid.  There is nothing like getting your hands on some real data to see if what you and your team were assuming is really true!  Often, looking at real data surfaces discoveries and revelations that require design decisions in order to meet the project deliverables.

Third, to help with data validation we find it extremely helpful to be able to visualize the data in an appropriate way.  This could take the form of graphs, maps, Gantt charts, etc., depending on the type of data and model we are working on.  On a recent scheduling project, we had the schedulers review cycle times in a spreadsheet, but it wasn’t until they saw the data in Gantt chart form that they noticed problems with the data that needed correcting.
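To make that last point concrete, here is a minimal sketch of how schedule data can be rendered as a Gantt chart with Python and matplotlib.  The machines, tasks and time units are invented for illustration, not drawn from any client data set:

    import matplotlib.pyplot as plt

    # (machine, start hour, duration in hours) -- toy schedule data
    tasks = [
        ("Mixer 1", 0, 4), ("Mixer 1", 5, 3),
        ("Dryer 1", 4, 6), ("Dryer 1", 11, 2),
    ]

    machines = sorted({m for m, _, _ in tasks})
    fig, ax = plt.subplots()
    for row, machine in enumerate(machines):
        spans = [(start, dur) for m, start, dur in tasks if m == machine]
        ax.broken_barh(spans, (row - 0.4, 0.8))  # one horizontal band per machine
    ax.set_yticks(range(len(machines)))
    ax.set_yticklabels(machines)
    ax.set_xlabel("Hours")
    plt.show()

Even a rough chart like this tends to expose overlapping runs, gaps and implausible cycle times that are invisible in a spreadsheet.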

Identifying data sources, getting data as early as possible and presenting the data in visual form are absolutely required to make a project successful.  Omitting any of these steps will at least add to the project’s cost and/or duration, and may doom the project to failure.

Aspen SCM (MIMI) Upgrade to version 8.5

August 6th, 2014 9:10 am Category: Aspen SCM, MIMI, Scheduling, Supply Chain Software, by: Jim Piermarini

Profit Point is helping several large chemical manufacturers upgrade their many Aspen SCM scheduling models, with the goal of achieving long-term supportability in the new Aspen architecture of version 8.5. An Aspen SCM (MIMI) upgrade is no small undertaking, but we have been helping people manage, support, and enhance their scheduling models for over 20 years.

I have seen many MIMI scheduling models over the last 20 years, in many different businesses, and it is still amazing to me how well these scheduling models work. Their applicability is primarily due to the creativity of their original modelers and their efforts to incorporate all the important aspects of the plants they schedule, and most that I have seen have remained relevant and useful all these years. Their longevity is due in no small part to the flexibility of the scheduling environment that is Aspen SCM (a.k.a. MIMI), which allows for many minor changes to the tool as equipment characteristics change or are upgraded, as the business needs change, or indeed as the scheduler changes. This new version retains the flexibility that has kept Aspen SCM scheduling models relevant today.

In previous version changes, Aspen SCM has always been backward compatible, meaning that with nominal effort a newer Aspen SCM version would open an older version’s scheduling model. This was true up to version 8.x, released earlier this year. With this version, older scheduling models, especially those that were developed in-house, will not function properly without a more substantial effort. Version 8.x brings a new XML-based architecture and with it a new look and feel, more compatible with today’s applications. In addition, it has some useful new features that can make scheduling easier (see https://www.aspentech.com/products/aspen-plant-scheduler/). Aspen SCM remains, in my opinion, the best tool for the job of scheduling plants of all types and sizes. This new version is no break from that long history of being the best; indeed, it has just been made even better.

With plants around the world, our customers trust Profit Point to upgrade their effective scheduling models to the latest version of Aspen SCM (Mimi) so they can enjoy many more years of effective scheduling at their plants.

We love doing this work. Call us if you are facing the same upgrade challenge; we may be able to help get you going.

 

Jim Piermarini

In recent weeks, I have been thinking about testing our applications, like our popular Profit Network or Profit Vehicle Planner.  When we test, we run data sets that are designed to stress the system in different ways, to ensure that all the important paths through the code are working properly.  When we test, our applications get better and better. There are many good reasons to test; the most important is knowing that an improvement in one part of the code does not break a feature in a different part of the code.


I have been thinking about how we could test our code a bit more, and the means by which we could do that. I have been reading about automated testing and its benefits. They are many, but the upshot is that if the testing is automated, you will likely test more often, and that is a good thing.  Automating application testing requires the ability to churn out runs with nobody watching. And to do that, the application needs to be able to be kicked off and run with no buttons or dialog boxes that must be manually clicked to continue. There can be no settings that must be manually set, and no information to review in order to decide what to do next. In addition, the application must then save the results somewhere: in the instance of the application, in a log file, or in a database of some sort. Then finally, to really be testing, the results must be compared to the expected results to determine the pass/fail state of the test. This requires having a set of expected results for every test data set.
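As a rough illustration, a minimal unattended harness might look like the sketch below, assuming the application can be launched from a command line with an input file and writes its results to an output file. The executable name, flags and paths are hypothetical placeholders, not a real interface:

    import filecmp
    import pathlib
    import subprocess

    DATASETS = ["base_case", "high_demand", "plant_outage"]  # illustrative names

    def run_case(name: str) -> bool:
        out = pathlib.Path(f"results/{name}.csv")
        expected = pathlib.Path(f"expected/{name}.csv")
        # Launch the application headless: no dialogs, no manual settings.
        subprocess.run(
            ["profit_app", "--input", f"data/{name}.xlsx",
             "--output", str(out), "--batch"],
            check=True,
        )
        # Compare actual results to the stored expected results: pass or fail.
        return filecmp.cmp(out, expected, shallow=False)

    for name in DATASETS:
        print(name, "PASS" if run_case(name) else "FAIL")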

 

In looking at the process above, I see numerous similarities to the process used to run a sensitivity analysis: many runs are typically made (so automation is a natural help) and the results need to be recorded. Sensitivity analysis is a typical process for users of our Profit Network tool, and our Profit Planner and Profit Scheduler tools.   An additional step in sensitivity analysis, however, is that you may desire to change the input data in a systematic way (say demand +5% and demand -5%), and to the extent that the change is indeed systematic, it too could be folded into the automation. The results analysis is different too, in that here you would like to look across the final sets of results at the differences, while in testing you just compare one set of test results to its expected results.  I can foresee difficulty in automating the data changes, since each type of data may need to be changed in a very specific way.  Nevertheless, even if the data changes are manual, they could be prepared ahead of the run, and the runs themselves could be grouped in a batch to generate the results needed for a sensitivity analysis.

Constructing a harness that lashes up to an application, where you can define the number of runs to be made, the settings for each run, the different data sets to be used, and the output location for the results to be analyzed, would be useful not only for testing but for the type of sensitivity analysis we do a lot of here at Profit Point.
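Continuing the sketch above, the systematic data changes could be generated ahead of time and fed through the same batch runner. The file layout and the "demand" column are assumptions for illustration only:

    import pandas as pd

    base = pd.read_csv("data/base_case_demand.csv")
    for label, factor in [("demand_up_5", 1.05), ("demand_down_5", 0.95)]:
        scenario = base.copy()
        scenario["demand"] = scenario["demand"] * factor  # systematic +/-5% change
        scenario.to_csv(f"data/{label}.csv", index=False)
        # Each scenario file can now be queued for an unattended batch run,
        # with the results collected for cross-scenario comparison.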

I am going to encourage our developers to investigate this type of system harness: one able to talk to and control our applications, run them automatically, and store their results in a data store for either testing or sensitivity analysis.

Jim Piermarini  |  CEO Profit Point Inc.

 

8 Signs That Your Supply Chain Software Needs To Be Replaced

October 3rd, 2013 5:54 pm Category: Supply Chain Software, by: Richard Guy

Are you frustrated with your supply chain software application? Is it time to pull the plug and start searching for a replacement? Here’s a list of signs we often hear from customers that motivate them to start searching for better solutions.

1. Installation is REALLY difficult.
2. It takes up a lot of space on the hard drive. Several gigabytes.
3. It often throws errors with no obvious indication of how to fix them.
4. It frequently does not work properly, and you find out later, when someone questions the results.
5. Support for new features is very slow and cumbersome.
6. It leaves you with the sense that you don’t want to touch it; it feels delicate, sort of scary.
7. It has not grown with your business; business requirements have changed, but the software has not kept pace.
8. You are searching for others in your organization to own and use the application.

Any of these signs could indicate that it is time to question the usefulness of the software. If you identify more than four, you need to start searching for a better software application, switch jobs, or double up on those yoga classes.

Building applications, especially custom ones, carries with it the burden of answering the question: Does this do what the customer wants?

With complicated systems with many interacting features and business rules, answering this question can be daunting. In fact, evaluating the answer can be daunting too, from the perspective of the customer. Having the sales guy check some boxes in a questionnaire, or watching a demo, just doesn’t leave you with the assurance that the application will handle all the business requirements, from either perspective, the vendor’s or the customer’s. Everyone I have spoken to who has sold complex software, or who has participated in the purchasing process for software, has expressed the same doubt. They are just not sure that the tool will be a good fit. As we all know, that doubt does not always prevent the purchase of the software, as each organization has its own level of risk tolerance and trust in the vendor’s brand or reputation. Often these other considerations can outweigh the amorphous doubt that some folks might feel. How can one quantify that doubt? Frankly, it’s a quandary.
This thought got us at Profit Point thinking: wouldn’t it be great if there were another way to evaluate the goodness of fit of an application, or the appropriateness of its parameter settings, to the business needs of an organization? Wouldn’t it be great if there were a way to eliminate (or greatly reduce) the doubt, and replace it with facts? Either a business rule is obeyed or it is not. Either a decision is made according to the requirements, or it is not. Let’s eliminate the doubt, we thought, and the world would be a better place (well, a little bit anyway).

There are many processes for testing an application as it is being developed: writing test scripts and evaluating the results. All of these are based on testing little pieces of code, to ensure that each function or subroutine does what it should for each case of input data. These processes work fine in our opinion, but only when the subroutine or function can be considered independently from the others. When the system has functions that interact heavily, this approach doesn’t reduce the doubt that the functions may conflict or compete in a way that makes the whole system suffer. How then to evaluate the whole system? Could we treat the entire application as one black box, evaluate the important business cases, and evaluate the results? This is exactly what we have done, with the effect of reducing to zero the doubt about the suitability of the application for a business.
With several of our clients we have worked out what seems to be a great process for testing a complex software solution for suitability to the business requirements. In this case, detailed function-level testing methods were not open to us, since the solution relied on a linear programming technique.
This process is really just an amplification of the standard testing process.

  1. Define the test case, with the expected results
  2. Construct the test data
  3. Build or configure the application
  4. Run the Test using the Test Data and Evaluate the results – Pass or Fail

This is the standard process for testing small functions, where the expected results are clear and easy to imagine. However, in some systems with many interacting rules and conflicting priorities, it may not be simple to know what the expected results should be without the help of the tool’s structure to evaluate them. Such is the case with many of our applications, with layer upon layer of business rules and competing priorities. The very reason for using an LP-based approach makes testing more complex.
In the revised process, for each new business requirement, we:

  1. Construct the test case with the test data
  2. Build or configure the application
  3. Set the expected results using the results of the first-pass build
  4. Re-factor the code and test until all tests are passing
Profit Point’s Software Testing Process
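Step 3 can be automated with a simple golden-file pattern. The hypothetical sketch below (our illustration, not the tools described in this post) freezes the first build’s output as the expected results, then compares every later build against it; the paths and the file-level comparison granularity are assumptions:

    import pathlib
    import shutil
    import pandas as pd

    def record_or_compare(case: str, result_file: str) -> str:
        golden = pathlib.Path(f"golden/{case}.csv")
        if not golden.exists():
            shutil.copy(result_file, golden)  # first pass: freeze as expected results
            return "RECORDED"
        new, old = pd.read_csv(result_file), pd.read_csv(golden)
        return "PASS" if new.equals(old) else "FAIL"  # refactor until all cases PASS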

In my next blog I will show you the simple Excel-based tools we use to facilitate the test evaluation.

In practice, the process works well: new versions of the application go into production without any surprises, and with the full confidence of the application management team that all the business requirements are 100% met.

No doubt – no doubt a better process.

By Jim Piermarini

There is nothing like a bit of vacation to help with perspective.

Recently, I read about the San Diego Big Boom fireworks fiasco — when an elaborate Fourth of July fireworks display was spectacularly ruined after all 7,000 fireworks went off at the same time. If you haven’t seen the video, here is a link.

And I was reading an article in the local newspaper on the recent Higgs news, “Getting from Cape Cod to Higgs boson.”

And I was thinking about how hard it is to know something, really know it. The data collected at CERN when they smash those particle streams together must look a lot like the first video: a ton of activity, all in a short time, and a bunch of noise in that Big Data. Imagine having to look at the fireworks video and then determine the list of all the individual types of fireworks that went up… I guess that is similar to what the folks at CERN have to do to find the single firecracker that is the Higgs boson.

Sometimes we are faced with seemingly overwhelming tasks of finding that needle in the haystack.

In our business, we help companies look among potentially many millions of choices to find the best way of operating their supply chains. Yeah, I know it is not the Higgs boson. But it could be a way to recover from a devastating earthquake and tsunami that disrupted operations literally overnight. It could be the way to restore profitability to an ailing business in a contracting economy. It could be a way to reduce the greenhouse footprint by eliminating unneeded transportation, or to decrease water consumption in dry areas. It could be a way to expand that makes the best use of assets and capital in the long term. It could be a way to reduce waste by stocking what the customers want.

These ways of running the business, of running the supply chain, that make a real difference are made possible by the vast amounts of data being collected by ERP systems all over the world, every day. Big Data like the point-of-sale info on each unit that is sold from a retailer. Big Data like actual transportation costs to move a unit from LA to Boston, or from Shanghai to LA. Big Data like the price elasticity of a product, or the number of products that can be in a certain warehouse. These and many, many other data points are being collected every day and can be used to improve the operation of the business in nearly real time. In our experience, much of the potential of this vast collection of data is going to waste. The vastness of the Big Data can itself appear overwhelming. Too many fireworks at once.

Having the data is only part of the solution. Businesses are adopting systems to organize that data and make it available to their business users in data warehouses and other data cubes. Business users are learning to devour that data with great visualization tools like Tableau and pivot tables. They are looking for the trends or anomalies that will allow them to learn something about their operations. And some businesses are adopting more specialized tools that leverage that data by looking deeper into it automatically. Optimization tools like our Profit Network, Profit Planner, or Profit Scheduler can process vast quantities of data to find the best way of configuring or operating the supply chain.
So, while it is not the Higgs boson that we help people find, businesses do rely on us to make sense of a big bang of data and hopefully see some fireworks along the way.

Upgraded Vehicle Route Planner Software Improves Decisions in Distribution Planning, Fleet Sizing, Driver Productivity and Transportation Cost Reduction  

Profit Point announces the introduction of Profit Vehicle Planner™ 3.1, a major upgrade to our distribution analysis and design software. Profit Vehicle Planner is designed for strategic logistics and transportation managers who have large fleets with multiple daily delivery stops and changing logistics processes. The update combines new features and technical enhancements to support richer scenario modeling for these large fleets.

Designed to be highly accessible and customizable, Profit Vehicle Planner (PVP™) uses standard Microsoft business tools for calculation and display of information, including Excel, Access and MapPoint. The software automatically creates optimal sales/distribution territories by dividing customers into territories and days of service, with each territory representing the volume delivered by one vehicle and one driver over the course of the planning horizon. The objective of the proprietary heuristic algorithm in Profit Vehicle Planner is to assign customers to territories so as to minimize the number of trucks required to serve the customer volumes while delivering within the various common and business-specific constraints, including customer frequency of service, hours available per day, volume available per truck, unique equipment requirements and virtually any other custom constraint required.
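As a toy illustration of the territory-building idea (not Profit Point’s proprietary algorithm), a first-fit-decreasing pass can assign customers to trucks until each truck’s daily volume is full. The capacity and volumes here are invented, and real territory design must also respect service frequency, hours, equipment and the other constraints listed above:

    TRUCK_CAPACITY = 100.0  # assumed deliverable volume per truck per day

    customers = [("C1", 30.0), ("C2", 45.0), ("C3", 40.0), ("C4", 20.0)]  # (id, volume)

    territories: list[list[str]] = []
    loads: list[float] = []
    for cust, volume in sorted(customers, key=lambda c: -c[1]):  # biggest volumes first
        for i, load in enumerate(loads):
            if load + volume <= TRUCK_CAPACITY:   # fits on an existing truck
                territories[i].append(cust)
                loads[i] += volume
                break
        else:                                     # otherwise open a new territory/truck
            territories.append([cust])
            loads.append(volume)
    print(territories)  # [['C2', 'C3'], ['C1', 'C4']] -> two trucks suffice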

“With 12 years in the field, Profit Vehicle Planner has been put to the test against some of the world’s largest supply chain distribution problems,”  noted Jim Piermarini, Profit Point’s Chief Technology Officer. “Transportation best practices have expanded over time, so decision makers are looking for more comprehensive strategic logistics and transportation modeling solutions.”

With the new release, PVP’s expanded features include extensive customization of the territory planning solution, so it can be tailored cost- and time-effectively to unique and specific distribution requirements, and the ability to use imported address data to automatically geocode customers for whom lat/long data is missing.

For companies that perceive distribution as mission critical, users have the option to integrate PVP deeply into their supply chain systems to import data from and export data to their ERP system. Companies that seek the most cost-effective solution can import virtually any relevant data from an Excel template, including the following:

  • Customer data such as address, location, frequency of service, volume per stop, time required per stop, other data as needed
  • Truck data such as size, days of the week that it is available, order in which it is to be scheduled, hours available each day, special equipment, other data as needed
  • Warehouse and district data such as location and characteristics of associated trucks and drivers
  • Time related data such as start date of planning horizon and number of weeks in the planning horizon.
  • Product specific data such as unit of measure of the product being delivered
  • Any other data required to accurately model unique constraints

Once the optimization is complete, users can review and assess the characteristics of the territories that are created, using tables and maps for an enhanced visual experience. And to ensure an optimal distribution plan, users can manually move customers from one territory to another or from one service-day pattern to another (e.g. from Monday-Thursday to Tuesday-Friday), if desired.

To learn more about Profit Vehicle Planner and Profit Point’s distribution planning services, visit www.profitpt.com.

A husband, two kids and a golden retriever later…  I am back to implementations in Supply Chain planning and scheduling.  To my surprise, the same challenges I encountered 10 years ago remain in force today:  data, defining business processes, data, implementing software, data, training people, data, supporting the change to a new system and data.

Data collection remains one of the cornerstones of success of a supply chain planning or scheduling implementation.  Though scores of data may exist in a company’s business, harnessing it to feed into a planning or scheduling model can be extremely complex and time consuming.  Interestingly, the data collection process often drives an elucidation of manufacturing practices and process flows, and clients learn what they do and don’t know about their business.  This may seem backwards and risky in terms of getting things out of order.  In a perfect world, a thorough understanding of manufacturing and business processes would pave the way towards building an Advanced Planning and/or Scheduling System.  In reality, they often happen in tandem and are evolutionary in nature.

Deciding early in an implementation how data will be housed, derived and propagated will pay off in the long run.   Establishing a systematic, automated way to update and propagate data is just as important as the decision of which software system to use.  It is worth the investment to put this automation in place so that, as a greater and greater number of products are added to the system, the data will remain manageable and scalable.

From PC to Cloud, emails to tweets, networking happy hours to LinkedIn, it is nice to know some things stay the same.

Optimal Call Center Scheduling

July 13th, 2011 1:20 pm Category: Optimization, Scheduling, by: Dennis Dietz

Many commercial enterprises and public agencies operate telephone call centers to provide effective and timely service for customers. By employing nearly 5% of the national workforce, call centers arguably define the “new factory floor” in an increasingly service-based economy. They are fascinating socio-technological systems which are exceptionally well-suited for the application of mathematical modeling and optimization methods.

A typical call center utilizes a computerized call handling system which can archive detailed historical information on call volume, call handling time, and other relevant attributes. This data can be analyzed and aggregated (with appropriate accounting for probabilistic variation) to generate a profile of staffing requirements across future time intervals. In theory, service agents can be optimally scheduled to closely accommodate this profile, resulting in high service levels, low customer abandonment, and efficient agent utilization. In actual practice, however, such performance represents the exception rather than the rule. Most call centers, even well-run ones, do not simultaneously achieve high levels of service quality and operational efficiency [1].

One important reason for the performance gap between theory and practice is lack of sophistication and flexibility in the standard software systems available for call center management. For example, standard systems invariably base interval staffing requirements on the classic “Erlang C” model, which is known to produce distorted results because it does not consider pertinent factors such as customer impatience [2]. Additionally, if the software has any capability for schedule “optimization,” the underlying algorithm is usually a greedy heuristic which sequentially adds agent shifts without due consideration of the complex interactions between them. Beyond these technical limitations, standard systems offer minimal capability to experiment with different shift types and customize the solution strategy.
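For reference, the Erlang C calculation mentioned above is straightforward to compute. The sketch below gives the classic delay probability for an M/M/N queue; as noted, it ignores customer impatience and abandonment, so treat its staffing answers as a distorted baseline rather than a final requirement. The traffic figures in the example are arbitrary:

    from math import factorial

    def erlang_c(arrival_rate: float, aht: float, agents: int) -> float:
        """Probability that an arriving call must wait (classic Erlang C, M/M/N)."""
        load = arrival_rate * aht                    # offered load in Erlangs
        if agents <= load:
            return 1.0                               # unstable system: all calls wait
        top = load**agents / factorial(agents) * agents / (agents - load)
        bottom = sum(load**k / factorial(k) for k in range(agents)) + top
        return top / bottom

    # Example: 120 calls/hour, 6-minute (0.1 hour) average handle time, 15 agents
    print(erlang_c(120.0, 0.1, 15))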

Profit Point can provide the expertise and custom tools necessary to properly model your unique call center environment and achieve optimal performance. By applying recently-refined mathematics, interval staffing requirements can be accurately determined and optimal shift distributions can be precisely derived [3]. Efficiency improvements exceeding 10% are typical, coincident with improvement in service level performance. Many additional operational factors, such as on-line chat activity and agent specialization, can also be addressed. There is no better time than now for you to reap the rewards of optimizing your organization’s call center operations.

References

[1] Noah Gans, Ger Koole, and Avishai Mandelbaum, “Telephone Call Centers: Tutorial, Review, and Research Prospects,” Manufacturing and Service Management 5, 79–141 (2003).

[2] Lawrence D. Brown, et al., “Statistical Analysis of a Telephone Call Center: A Queueing-Science Perspective,” Journal of the American Statistical Association 100, 36–50 (2005).

[3] Dennis C. Dietz, “Practical Scheduling for Call Center Operations,” Omega 39, 550–557 (2011).

Here at Profit Point we regularly hear from clients with well-established Enterprise Resource Planning (ERP) systems that they need something more.  ERP systems are excellent at certain things, including:

  1. Providing central repositories of data
  2. Enabling cross functional work processes within and across companies
  3. Costing of goods
  4. Planning resources and materials at a high level

However, the more complicated your business work processes and manufacturing production processes are, the less sufficient a standard ERP system will be in providing the best decision support functionality.  Some of the complications that require decision support systems (DSS), and which we have been helping clients deal with lately, include:

  1. Work processes to handle make-to-order versus make-to-stock material assignments
  2. Allocation of inventory to customer orders when in an oversold position
  3. Sequence-dependent setups / cleanings of manufacturing equipment (see the sketch after this list)
  4. Scheduling of production sequenced through a “product wheel”
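To make item 3 concrete, the sketch below brute-forces the production order that minimizes total sequence-dependent changeover time. The changeover matrix is invented, and a real DSS would use far smarter search than enumeration, which only works for a handful of products:

    from itertools import permutations

    products = ["A", "B", "C"]
    changeover = {("A", "B"): 1, ("A", "C"): 4, ("B", "A"): 2,
                  ("B", "C"): 1, ("C", "A"): 3, ("C", "B"): 5}  # hours to switch over

    def total_setup(seq):
        # Sum the changeover hours between each consecutive pair in the sequence.
        return sum(changeover[(a, b)] for a, b in zip(seq, seq[1:]))

    best = min(permutations(products), key=total_setup)
    print(best, total_setup(best))  # ('A', 'B', 'C') with 2 hours of changeover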

DSS are necessary because of the complexity of first finding a feasible solution and then sorting through the huge number of feasible options to find a “good” or “optimal” one.  In these kinds of situations, DSS help to:

  1. Reduce costs
  2. Reduce manufacturing lead times
  3. Improve customer service
  4. Increase revenue

ERP systems are a necessary part of delivering a DSS, since they provide the data needed to make the decisions in question, but they lack the following:

  1. Ability to be tailored to a specific work process or manufacturing environment
  2. Advanced analytical capability to sort through the complexity and volume of options to get to a “good” or “optimal” solution
  3. Graphical user interface tools to be able to allow a user to visualize the data in a way that gives them the insights needed to make decisions

At Profit Point we specialize in listening to our clients’ needs and then building DSS that unlock improvement opportunities, enabling our clients to outdistance the competition.

Supply Chain Café

December 22nd, 2010 2:45 pm Category: Optimization, Supply Chain Agility, Supply Chain Improvement, Supply Chain Software, by: Richard Guy

Okay. I am an anomaly. I live in Utah and drink coffee. The majority of the people that live in Utah do not drink coffee, and that is OK, but I do. So, is there a shortage of coffee Cafés in Utah? No. There are many cafés and several that serve outstanding coffee.

We have an exceptional espresso café downtown, located on a side street off of Main. They roast their own coffee and use triple certified organic grown beans. It is the type of place the local coffee lovers go to hang out and have good conversation over a morning or afternoon latté or espresso. Possibly the best coffee I have ever had. What is interesting to me is that a large percentage of the residents in my area do not even know that this café exists.

So what is my point? When it comes to outstanding services or products most people are unaware of what is available, primarily because it does not fit into their lifestyle or what they’re accustomed to. I believe you can transfer this similarity to the business world. Manufacturing logistics and transportation people become accustomed to doing things a certain way. Over time they may become blind to ideas for improving the supply chain. They are unaware of an exceptional Supply Chain Café, even when it is located just seconds from a combination of keystrokes and Google.

It is not their fault they are missing the best latté available. We, as consultants, who prepare those delightful solutions from the Supply Chain Café menu, have probably not done the finest job of promoting our services and software to your neighborhood, but that is changing.

There are many empty cups in the supply chain, waiting to be filled with successful solutions. Supply chain and logistics managers tackle difficult supply chain problems every day, but they are so focused on getting their job done and making it through the day that they have little time to think of alternatives that might improve their processes and well-being. I am not sure how we can help everyone, so let’s focus on the window shoppers. These are the ones that are aware of the café, but have never been inside. Maybe you are one?

If you are reading this blog, then you must be a window shopper. I am guessing you are looking for a better espresso. OK, you found “Profit Point”, although you may not know what we do. Guess what? Help is on its way. We can share our menu with you. We just published four videos that will introduce you to the Profit Point team and what we do. Embrace three minutes out of your day, select one of the videos, and watch it. Learn how we help companies improve their supply chain, by serving the best coffee with a smile.

Yes, you can improve your supply chain with our help. The supply chain solution that you are looking for, is about to be yours. And if you place an order, we can fill your cup to the top, with the “good triple certified” stuff. If you cannot seem to find that special item on our Supply Chain menu, then no fear, we love special orders.

So, is there a shortage of Supply Chain Cafés? No. You just need to find the one that serves the optimal latté. I know it’s out there somewhere.

To learn more about Profit Point’s Global Supply Chain Optimization services, please contact us.

Calculating Capacity – “Making Jello Feel Like Concrete”

December 21st, 2010 3:43 pm Category: Optimization, Scheduling, by: John Hughes

Frequently, you might hear somebody say that the capacity of a production facility is some known and fixed value.  When this happens, be very wary of what they might be trying to sell you, because as with so many other things, when measuring capacity “the devil is in the details”.

The “capacity” of a factory sounds like a pretty simple notion and something that should be easy to calculate.  But this is only true for production systems that are fairly straightforward, consisting of totally independent machines and processes.  If, however, the organization consists of operations that are interconnected and interdependent, then capacity can be a fairly difficult thing to measure.

In the vast majority of production systems, there is a very real link between capacity and three critical factors:

  1. the mix of products, and how much time is required for setup/cleanup between consecutive production runs,
  2. the ability to create sophisticated and optimal schedules for the production resources,
  3. how much physical space exists in the factory where products that are only partially complete can be kept or stored; what’s known as Work in Process (or WIP)  Inventory.

To see these three relationships at work, consider the simple case where a certain department produces two products, A and B, which both use the same piece of equipment, and there is only one of these machines available.  The production rates of the machine are in the table below, and there is a 4-hour setup time required when the machine switches over from producing one product to the other.  Now consider the two scenarios below: in Scenario A the capacity is 170 units per day, while in Scenario B it is 145.

                     Rate        Scenario A            Scenario B
    Product      (units/hr)   Daily Qty   Hrs req   Daily Qty   Hrs req
    A                12          100        8.33        50        4.17
    B                 6           70       11.67        95       15.83
    Total                        170       20.00       145       20.00
    Setup hours                             4.00                  4.00
    Grand total                            24.00                 24.00

This example clearly demonstrates the first item above: the “capacity” of the department depends to a large extent on the mix of the two products being produced.

Now suppose that management wants to produce 110 of A and 80 of B per day.  These new requirements seem to clearly exceed the capacity of the department given EITHER Scenario A or B.  But maybe the necessary capacity can still be found.

If the requirement is to produce exactly this quantity each and every day, then there is definitely not enough capacity on the machine.  However, if the increased production is required on average over a sustained length of time, then we can gain extra output by modifying the production schedule so as to eliminate or minimize the occurrence of the 4-hour setup.  If the department schedules production in long blocks spanning several days, where first one product and then the other is produced, then the department DOES have the capacity.  In the table below, for example, 440 units of A are first produced, followed by 320 of B, with a 4-hour setup between them.  This represents 4 days’ worth of the increased management requirement (110 of A and 80 of B, each multiplied by 4).

                     Rate
    Product      (units/hr)   Sched Qty   Hrs req
    A                12          440       36.67
    B                 6          320       53.33
    Total                        760       90.00
    Setup hours                             4.00
    Grand total                            94.00

With this schedule, the total requirement of 94 hours is less than the 96 hours available in 4 days, and so now there IS enough capacity!  By scheduling wisely (i.e. “working smarter”), the department’s average daily capacity has actually risen to 760 / 4 = 190 units per day, a good deal higher than either 170 or 145 in the two previous scenarios.
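The arithmetic above is easy to verify; a few lines of Python, using the rates and the 4-hour setup from the tables, reproduce all three capacity figures:

    rate = {"A": 12, "B": 6}  # units per hour, from the tables above

    def hours_needed(qty_a: float, qty_b: float, setups: int) -> float:
        # Run time for each product plus 4 hours per setup.
        return qty_a / rate["A"] + qty_b / rate["B"] + 4 * setups

    print(hours_needed(100, 70, 1))   # Scenario A: 24.0 hours for 170 units/day
    print(hours_needed(440, 320, 1))  # 4-day campaign: 94.0 hours vs 96 available
    print((440 + 320) / 4)            # average daily output: 190.0 units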

Thus, the department capacity clearly depends on the ability to implement “smart” production schedules that make the best use of the available resources, i.e. the second issue mentioned earlier.

Finally, this higher-capacity schedule is an example of a “good news / bad news” situation.  Although the plant is able to produce more (and presumably company revenues will go up), the downside is that the department will, on average, be maintaining a larger amount of inventory in the supply chain.  And if there is more “stuff” in the pipeline, then there has to be physical space to put it.  This is an important consideration if inventory has to be stored in or on particular types of storage facilities, such as refrigerators or special racks.  Therefore, although it might be possible to “buy” extra production capacity with a better equipment schedule, it is important to realize that different schedules put more or less demand on the spatial capacity of the actual storage facilities.

Therefore, this example illustrates the third item: increasing output can put stress on the plant’s storage facilities.

This last scenario also shows that maximum capacity is not necessarily the same as minimum cost.  Notice that in this scenario there is only one 4-hour setup, and thus any costs from the setup activity are averaged over a larger number of produced items.  But offsetting this savings in setup cost is the fact that, with the increased WIP, the inventory costs will have gone up.

The fact that capacity can be such a difficult thing to measure does not mean that it is not a valuable parameter for describing a given system.  What it does mean is that when any capacity value is given for a particular supply chain, it is absolutely critical to understand the assumptions that underlie it.  The fact that capacity is such a highly malleable concept simply reinforces the fact that managing a company’s supply chain is always a delicate balancing act between competing costs and non-monetary factors.

Focusing Clients on Solution Features

April 29th, 2010 11:54 pm Category: Scheduling, by: Joe Litko

When using a rapid iterative development process, a client may see several imperfect solutions to his problem along the way.  Getting the best feedback from the client at every cycle of development is important.  If you get only the most obvious evaluations, progress towards an acceptable product is erratic, possibly unsuccessful, and very painful.

A recent experience was a scheduling application where we did not have direct contact with the final user; we were working for an intermediate third party.  It finally became clear that feedback was going to be limited to a comment about whatever feature the customer wanted to focus on.  Usually that would be some aspect of the solution that was either unacceptable or very different from current practice.  Some current practices would require new resources and changes in parts of the system that could not easily be changed.

When can I have it? A not so simple question.

March 9th, 2010 8:41 pm Category: Scheduling, by: Jim Piermarini


Jake works in customer service, and the phone rings… It has been a long day already, but he is happy when the phone rings, so he says:

“Hello, this is Jake at Western Chemical Resupply. How can I help you?”

“Hello, Jake, this is Paul, I need to order some pool supplies for my pool business.” says the voice on the other end of the line.

This is great, no problem. After Jake takes down Paul’s customer information, Paul asks:

“So I typically order 150 units of the new chlorine product to start the season, and I am opening my store in a few weeks. Do you have any in stock?”

Balancing Cost, Inventory and Service

January 6th, 2010 11:05 pm Category: Publications, Supply Chain Improvement, by: Editor

This month’s issue of U.S. Business Review features a supply chain article entitled Balancing Act: Cost, inventory and service in a volatile economy. The article, which was co-authored by Profit Point’s CEO/CTO, Jim Piermarini, and the company’s Senior Account Manager, Cindy Engers, discusses solutions for preserving customer service levels, while reducing costs and inventory risks.

You can read the complete article here.

If you would like to learn more about our Supply Chain Optimization services, please call (866) 347-1130 or contact us here.

What is a Monte Carlo model, and what good is it? We’re not talking about the type of car produced by General Motors under the Chevy nameplate. “Monte Carlo” is the name of a type of mathematical computer model: merely a tool for figuring out how risky some particular situation is, a method to answer a question like “What are the odds that such-and-such an event will happen?” Now, a good statistician can calculate an answer to this kind of question when the circumstances are simple, or if the system you’re dealing with doesn’t have a lot of forces that work together to give the final result. But when you’re faced with a complicated situation that has several processes that interact with each other, and where luck or chance determines the outcome of each, then calculating the odds for how the whole system behaves can be a very difficult task.

Let’s just get some jargon out of the way. To be a little more technical, any process which has a range of possible outcomes and where luck is what ultimately determines the actual result is called “stochastic”, “random” or “probabilistic”. Flipping a coin or rolling dice are simple examples. And a “stochastic system” would be two or more of these probabilistic events that interact.

Imagine that the system you’re interested in is a chemical or pharmaceutical plant where producing one batch of material requires a mixing and a drying step. Suppose there are 3 mixers and 5 dryers that function completely independently of one another; the department uses a ‘pool concept’ where any batch can use any available mixer and any available dryer. However, since there is not enough room in the area, if a batch completes mixing but no dryer is available, then the material must sit in the mixer and wait, and the mixer can’t be used for any other production. Finally, there are 20 different materials produced in this department, and each of them can have a different average mixing and drying time.

Now assume that the graph of the process times for each of the 8 machines looks somewhat like what’s called a ‘bell-shaped curve’. This graph, with its highest point (at the average) right in the middle, and with left and right sides that are mirror images of each other, is known as a Normal distribution. But because of the nature of the technology and the machines having different ages, the “bells” aren’t really centered; their average values are pulled to the left or right, so each bell is actually a little skewed to one side or the other. (Therefore, these process times are really not Normally distributed.)

If you’re trying to analyze this department, the fact that the equipment is treated as a pooled resource means it’s not a straightforward calculation to determine the average length of time required to mix and dry one batch of a certain product. And complicating the effort is the fact that the answer depends on how many other batches are then in the department and what products they are. If you’re trying to modify the configuration of the department, maybe making changes to the scheduling policies or procedures, or adding or changing the material handling equipment that moves supplies to and from this department, a Monte Carlo model would be the best approach to performing the analysis.

In a Monte Carlo simulation of this manufacturing operation, the model would have a clock and a ‘to-do’ list of the next events that will occur as batches are processed through the unit. The first events to go onto this list would be requests to start a batch, i.e. the paperwork that directs or initiates production. The order and timing of the appearance of these batches at the department’s front door could either be random or follow a pre-defined production schedule that is an input to the model.

The model “knows” the rules of how material is processed from a command to produce through the various steps in manufacturing and it keeps track of the status (empty and available, busy mixing/drying, possibly blocked from emptying a finished batch, etc.) of all the equipment. And the program also follows the progress and location of each batch. The model has a simulated clock, which keeps moving ahead and as it does, batches move through the equipment according to the policies and logic that it’s been given. Each batch moves from the initial request stage to being mixed, dried and then out the back-door. At any given point in simulated time, if there is no equipment available for the next step, then the batch waits (and if it has just completed mixing it might prevent another batch from being started).

What sets a Monte Carlo model apart, however, is that when the program needs to make a decision or perform an action where the outcome is a matter of chance, it can essentially roll a pair of dice (or flip a coin, or draw straws) to determine the specific outcome. In fact, since rolling dice means that each number has an equal chance of coming up, a Monte Carlo model actually contains equations known as “probability distributions”, which pick a result where certain outcomes are more or less likely to occur. It’s through the use of these distributions that we can accurately reflect the skewed, non-Normal process times of the equipment in the manufacturing department.

The really cool thing about these distributions is that if the Monte Carlo model uses the same distribution repeatedly, it may get a different result each time, simply due to the random nature of the process. Suppose that the graph below represents the range of values for the process time of material XYZ (one of the 20 products) in one of the mixers. Notice how the middle of the ‘bell’ is off-center: it is skewed to the right.


So if the model makes several repeated calls to the probability distribution equation for this graph, sometimes the result will be 2.0-2.5 hrs, other times 3.5-4.0 hrs, and on some occasions more than 4 hrs. But in the long run, over many repetitions, the proportion of results in each time band will match the values in the graph (5%, 10%, 15%, 20%, etc.) that were used to define the equation.
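A few lines of Python show this convergence; the time bands and weights below are invented stand-ins for the graph’s percentages:

    import random
    from collections import Counter

    bands = ["<2.0", "2.0-2.5", "2.5-3.0", "3.0-3.5", "3.5-4.0", ">4.0"]  # hours
    weights = [0.05, 0.10, 0.15, 0.20, 0.30, 0.20]  # skewed to the right

    draws = Counter(random.choices(bands, weights=weights, k=100_000))
    for band in bands:
        # Each individual draw is random, but the observed proportions
        # converge to the defining weights over many repetitions.
        print(band, round(draws[band] / 100_000, 3))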

So to come back to the manufacturing simulation, as the model moves batches through production, when it needs to determine how much time will be required for a particular mixer or dryer, it runs the appropriate probability equation and gets back a certain process time. In the computer’s memory, the batch will continue to occupy the machine (and the machine’s status will be busy) until the simulation clock gets to the correct time when the process duration has completed. Then the model will check the next step required for the batch and it will move it to the proper equipment (if there is one available) or out of the department all together.

In this way then, the model would continue to process batches until it either ran out of batches in the production schedule that was an input, or until the simulation clock reached some pre-set stopping point. During the course of one run, the computer would have been monitoring the process and recording in memory whatever statistics were relevant to the goal of the analysis. For example, the model might have kept track of the amount of time that certain equipment was blocked from emptying XYZ to the next step. Or if the aim of the project was to calculate the average length of time to produce a batch, the model would have been following the overall duration of each batch from start to finish in the simulated department.

The results from just one run of the Monte Carlo model, however, are not sufficient to base any decisions on. The reason is that this is a stochastic system where chance determines the outcome. We can’t rely on just one set of results, because just through the “luck of the draw” the process times picked by those probability distribution equations might have been generally on the high or low side. So the model is run repeatedly for some pre-set number of repetitions, say 100 or 500, and the results of each run are saved.
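For the curious, a heavily simplified version of such a model can be sketched with the open-source SimPy discrete-event library. The triangular process times stand in for the skewed non-Normal curves described above, all parameters are illustrative, and a real study would model all 20 products and an arrival schedule:

    import random
    import simpy

    def batch(env, mixers, dryers, done_times):
        start = env.now
        mix_req = mixers.request()
        yield mix_req                                         # wait for a mixer
        yield env.timeout(random.triangular(2.0, 6.0, 3.0))   # skewed mixing time, hrs
        dry_req = dryers.request()
        yield dry_req                  # mixer stays occupied (blocked) until a dryer frees up
        mixers.release(mix_req)
        yield env.timeout(random.triangular(4.0, 10.0, 5.0))  # skewed drying time, hrs
        dryers.release(dry_req)
        done_times.append(env.now - start)                    # overall batch duration

    def one_run(n_batches: int, seed: int) -> float:
        random.seed(seed)
        env = simpy.Environment()
        mixers = simpy.Resource(env, capacity=3)
        dryers = simpy.Resource(env, capacity=5)
        done_times = []
        for _ in range(n_batches):
            env.process(batch(env, mixers, dryers, done_times))
        env.run()                                    # simulate until all batches finish
        return sum(done_times) / len(done_times)     # this run's average batch duration

    # Repeat the whole simulation many times and keep each run's statistic.
    results = [one_run(50, seed) for seed in range(100)]
    print(sum(results) / len(results), max(results))  # mean and worst observed average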

Once all of the Monte Carlo simulations have been accumulated, it’s possible to draw certain conclusions. For example, it might turn out that the overall process time through the department was 10 hrs or more in 8% of the runs. Or that the average length of blocked time, when batches are prevented from moving to the next stage because no equipment is available, was 12 hrs; or that the amount of blocked time was 15 hrs or more in 15% of the simulations.

With information like this, a decision maker would be able to weigh the advantages of adding or changing specific items of equipment, as well as modifying the department’s policies, procedures, or even computer systems. In a larger, more complicated system, a Monte Carlo model such as the one outlined here could help to decrease the overall plant throughput time significantly. At some pharmaceutical plants, for instance, where raw materials can be extremely high-valued, decreasing the overall throughput time by 30% to 40% would represent a large and very real savings in the value of the work-in-process inventory.

Hopefully, this discussion has helped to clarify just what a Monte Carlo model is and how it is built. This kind of model accounts for the fundamental variability that is present in almost all decision making. It does not eliminate risk or prevent a worst-case scenario from occurring, nor does it guarantee a best-case outcome. But it does give the business manager added insight into what can go wrong or right, and the best ways to handle the inherent variability of a process.

This article was written by John Hughes, Profit Point’s Production Scheduling Practice Leader.

To learn more about our supply chain optimization services, contact us here.

Profit Point’s data integration and scheduling optimization services deliver reliable results with reduced operations costs.

North Brookfield, MA

Profit Point today announced that its Profit Data Interface™ software has been selected by Rohm and Haas Company (NYSE: ROH) to integrate its scheduling processes with the company’s ERP data warehouse. The company, which last reported nearly $9 billion in annual sales, produces innovative products for nine industries worldwide through a network of more than 100 manufacturing, technical research and customer service sites. Optimizing and supporting production and distribution scheduling across this network is a complex and ever-changing process.

“Rohm and Haas has a history of improving our operations to enhance customer service levels and reduce cost,” said Dave Shaw, the company’s Business Process Manager for MFG and Supply Chain. “Production scheduling, which entails constant change to meet demand, is one of the toughest challenges in the supply chain. In the past, the lack of a reliable data interface has limited our ability to react quickly and with a high degree of confidence in our results. Profit Point’s Data Interface software has given us near real-time access to highly reliable data, so we can respond quickly and know that our plan is right.”

Profit Data Interface is a robust application that helps decision makers boost the effectiveness of their ERP data by extending its usefulness with optimization applications. By leveraging existing ERP systems, the software provides a robust and proven method that supply chain managers can rely upon to optimize their critical business processes and improve profitability.

“Rohm and Haas is a recognized leader in the chemicals industry with a reputation for supply chain excellence,” said Jim Piermarini, Profit Point’s CEO. “We have supported their scheduling processes for years. So, it was clear that the next evolution was to directly connect their optimization software to the data store using our Data Interface product.”

Profit Data Interface, which integrates with SAP® and Oracle® data stores, can be used to optimize the entire supply chain including network planning, production and inventory planning, distribution scheduling, sales planning and vehicle routing.

To learn more about Profit Point’s supply chain software and services, visit www.profitpt.com.

About Profit Point:
Profit Point Inc. was founded in 1995 and is now a global leader in supply chain optimization. The company’s team of supply chain consultants includes industry leaders in the fields of infrastructure planning, green operations, supply chain planning, distribution, scheduling, transportation, warehouse improvement and business optimization. Profit Point’s combined software and service solutions have been successfully applied across a breadth of industries and by a diverse set of companies, including General Electric, Dole Foods, Logitech and Toyota.

About Rohm and Haas Company:
Leading the way since 1909, Rohm and Haas is a global pioneer in the creation and development of innovative technologies and solutions for the specialty materials industry. The company’s technologies are found in a wide range of industries including: Building and Construction, Electronics and Electronic Devices, Household Goods and Personal Care, Packaging and Paper, Transportation, Pharmaceutical and Medical, Water, Food and Food Related, and Industrial Process. Innovative Rohm and Haas technologies and solutions help to improve life every day, around the world. Visit www.rohmhaas.com for more information.

Contact:
Richard Guy
Profit Point
(866) 347-1130
http://www.profitpt.com

Leveraging Profit Point’s supply chain optimization methodologies, Toyota North American Part Center California improves efficiency and quality of their workload planning sequencing process to receive containers from Japan.

North Brookfield, MA (PRWEB) October 6, 2008

Profit Point today announced that Toyota Motor Sales (TMS), U.S.A., Inc.’s North American Part Center California (NAPCC) has improved its receiving sequencing processes using advanced mathematical optimization techniques. NAPCC is one of the parts distribution centers in TMS’s North American Parts Operations network, which was established to improve local parts sourcing and manage a parts distribution network that supplies all North American Toyota distributors and U.S. Toyota, Lexus and Scion dealers, as well as exports to parts centers in Japan. NAPCC turned to Profit Point to apply mathematical optimization techniques to further improve their supply chain operations.

“We turned to Profit Point to apply mathematical optimization techniques to further improve our supply chain operations,” said Johnnie Garlington, NAPCC’s warehouse operations manager. The program supported a 16% increase in daily offload, resulting in savings in labor, off-site storage costs and detention expenses.

Profit Point, the leading supply chain optimization company, combines proprietary software with proven optimization techniques to help business managers improve their operations. Profit Point supported NAPCC’s objective to redesign their workload planning process to improve the efficiency and quality of their sequencing processes. Profit Point carried this out by designing and building custom supply chain software to optimize their sequencing processes.

“We were asked to investigate a mathematical approach to solving Toyota NAPCC’s container receiving sequencing process,” said Joe Litko, Profit Point’s Business Optimization Practice Leader. “This was an interesting challenge for several reasons. We needed a cost-effective solution using legacy tools, the model needed to run quickly, be flexible, and give robust solutions that consider several performance measures simultaneously.”

NAPCC had been using a traditional spreadsheet to manually build an hourly workload plan. Profit Point reviewed the sequencing process and designed a stand-alone application to smooth the flow of containers and maximize the daily unload capacity.

“Like most businesses, Toyota NAPCC was using good, traditional operations practices,” said Dr. Alan Kosansky, Profit Point’s President. “But, by combining the right mathematical optimization methods with a clear understanding of the business requirements, we were able to achieve a superior supply chain process for Toyota.”

To learn more about Profit Point’s supply chain optimization software and services, visit www.profitpt.com.

About Profit Point:

Profit Point Inc. was founded in 1995 and is now a global leader in supply chain optimization. The company’s team of supply chain consultants includes industry leaders in the fields of infrastructure planning, green operations, supply chain planning, distribution, scheduling, transportation, warehouse improvement and business optimization. Profit Point’s combined software and service solutions have been successfully applied across a breadth of industries and by a diverse set of companies, including The Coca-Cola Company, General Electric, Rohm and Haas and Toyota.

Contact:
Richard Guy
Profit Point
(866) 347-1130
http://www.profitpt.com
