September 3rd, 2015 3:28 pm Category: Optimization, by: Jim Piermarini
New technologies spring up each year, seemingly every day. There are new programming languages, frameworks, and processes. There are new personal productivity gadgets and apps. There are new internet connectivity methods (routers, access points, bridges, cable modems, etc.). It can all be pretty daunting if you think you need to stay ahead of all this change.
Back in the day, I used to go to a brick-and-mortar bookstore, peruse the computer books, and end up purchasing one or several to read cover to cover to learn about the new item. I have purchased many dozens of technology-related books over the course of the last 20ish years in my attempt to stay abreast of the bow wave of technology change. I realized recently that I have not purchased a new technology book in several years. I got to thinking about why, and whether I would be comfortable with my wife's request to toss all these old books. My first reaction was: I can't get rid of these, they are books! But then I got to considering whether I had opened them anytime in the last 5 years (or 10, or 15!), and projecting whether I would actually open them anytime in the foreseeable future. The short answer is, I really can't see when I would open these books, ever again. So I asked myself: why is that? And the answer is not that I have given up my desire to stay current, not exactly. Nor is it that all the technology books are hopelessly out of date (although some are). The reason I don't think I'll be using these books ever again has to do with the way the internet, and Google specifically, has changed the way we learn.
Learning a new technology is easier today than ever before. You can google the subject and find not only the theoretical background but also many practical implementation details and examples. For instance, I know several people, myself included, who are self-taught in SQL Server using only the resources available on the internet. And we are actually pretty competent at it. Given that experience, I know that I could also easily learn MySQL (I have had to learn some of that recently) or Java (again, I've dabbled in it) or MongoDB, or any other new NoSQL database technology. Knowing that there are ample examples and many resources for new technologies has allowed me to redefine how much I need to know before I can be confident that I can tackle a project in that new technology. I know that the syntax is a detail that will soon fall into place. Now that syntax may be in a book somewhere on my shelves, but it is also on the internet, just a few clicks away. I'll opt for the easier and faster approach to getting that info anytime. So the books stay on my shelves (or get donated, as my wife is suggesting).
Keeping current in technology today is a different thing than knowing all the depth and detail of a subject, as in previous years. Google is everywhere, has almost everything, and is not going away any time soon. Think of calculators, and the way they were reviled for undermining the need to learn how to do math in your head. "You can't always count on having a calculator!" was the refrain that was meant to show the importance of being a competent math mentalist. But today there are calculators everywhere: on my PC, my phone, and on my watch (if I had a Google watch), and for that matter, so is Google! It seems reasonable to expect that the internet and Google search will be with us for some time. People have accepted the pervasive and ubiquitous nature of the internet, and it is changing the way we devote our limited brain space to information and knowledge. For me, I think it is more important to know some critical details along with a broad understanding of the subject matter. I can offload the rest of the details to the internet, to be retrieved when I need them. My local cache of info can be more specialized: very deep and narrow in some areas, very broad and shallow in others. I don't mind being shallow in broad areas, since even there I know I can go to any depth I need very quickly with the help offered on the internet. That is how my view of knowledge has been transformed, away from packing it into my head and onto my bookshelves, and into the internet age. Others may have a different need of knowledge, and that is a discussion beyond my understanding. And while there may be a book on this subject, I'm sure I could google it.
March 6th, 2014 9:32 am Category: Operations Research, Optimization, Optimization Software, Profit Network, Profit Vehicle Planner, Profit Vehicle Router, Supply Chain Improvement, Supply Chain Optimization, Supply Chain Planning, by: Jim Piermarini
In recent weeks, I have been thinking about testing our applications, like our popular Profit Network or Profit Vehicle Planner. When we test, we run data sets that are designed to stress the system in different ways, to ensure that all the important paths through the code are working properly. When we test, our applications get better and better. There are many good reasons to test; most important is knowing that an improvement in one part of the code does not break a feature in a different part of the code.
I have also been thinking about how we could test our code a bit more, and the means by which we could do that. I have been reading about automated testing and its benefits. They are many, but the upshot is that if the testing is automated, you will likely test more often, and that is a good thing. Automating application testing requires the ability to churn out runs with nobody watching. And to do that, the application must be able to be kicked off and run without any buttons or dialog boxes that must be manually clicked to continue. There can be no settings that must be manually set, or information reviewed to decide what to do next. In addition, the application must then save the results somewhere: in the instance of the application, in a log file, or in a database of some sort. Then finally, to really be testing, the results must be compared to the expected results to determine the pass/fail state of the test. This requires having a set of expected results for every test data set.
In looking at the process above, I see numerous similarities to the process used to run a sensitivity analysis: many runs are typically made (so automation is a natural help) and the results need to be recorded. Sensitivity analysis is a typical process for users of our Profit Network tool, and our Profit Planner and Profit Scheduler tools. An additional step in sensitivity analysis, however, is that you may desire to change the input data in a systematic way (say Demand +5% and Demand -5%), and to the extent that it is indeed systematic, this too could be folded into the automation. The results analysis is different too, in that here you would like to look across the final sets of results at the differences, while in testing you just compare one set of test results to its expected results. I can foresee difficulty in automating the data changes, since each type of data may need to be changed in a very specific way. Nevertheless, even if the data changes are manual, they could be prepared ahead of the run, and the runs themselves could be grouped in a batch to generate the results needed for a sensitivity analysis.
Constructing a harness that lashes up to an application, where you can define the number of runs to be made, the settings for each run, the different data sets to be used, and the output location for results to be analyzed, would be useful not only for testing but also for the type of sensitivity analysis we do a lot of here at Profit Point.
I am going to encourage our developers to investigate this type of system harness, able to talk to and control our applications so they can be run automatically and have their results automatically stored in a data store for either testing or sensitivity analysis.
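The core loop of such a harness is simple to sketch. The Python below is illustrative only: `run_model` is a hypothetical stand-in for launching one of our applications unattended, and the data sets and expected results are made up.

```python
def run_model(dataset):
    """Hypothetical stand-in for one unattended application run.

    A real harness would launch the application with this data set,
    let it run with no dialogs to click, and collect its saved
    output; here we just compute a trivial result."""
    return {"total": sum(dataset["demand"])}

def run_suite(cases):
    """Run every data set with nobody watching, then compare each
    result to its stored expected result to get a pass/fail state."""
    report = []
    for name, (dataset, expected) in cases.items():
        actual = run_model(dataset)
        report.append({"case": name,
                       "passed": actual == expected,
                       "actual": actual,
                       "expected": expected})
    return report

# Made-up data sets paired with their expected results. For a
# sensitivity analysis, the "+5%" style variants would be added
# here and the results compared across runs instead.
cases = {
    "base":      ({"demand": [10, 20, 30]}, {"total": 60}),
    "demand+5%": ({"demand": [10.5, 21.0, 31.5]}, {"total": 63.0}),
}
report = run_suite(cases)
for r in report:
    print(r["case"], "PASS" if r["passed"] else "FAIL")
```

The same batch loop serves both purposes: for testing, compare each run to its expected results; for sensitivity analysis, store each run's results and compare across the set.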
Jim Piermarini | CEO Profit Point Inc.
Supply Chain Survey 2013:
Gaining Competitive Advantage
If you’re reading our blog, you are probably someone who is deeply interested in supply chain improvement. So we’d like to invite you to participate in this brief survey. In return, we will send you exclusive, early access to the results of the survey along with our analysis.
Your insights and experiences are very important to us. And we are hosting the survey on a trusted, 3rd-party site so your responses will remain completely confidential. The survey is relatively short and should take only 3-4 minutes to complete. Please take a few moments to complete the Supply Chain Competitive Advantage Survey.
Start the Supply Chain Survey:
Gone are the days that supply chain was merely an expense. These days, savvy decision makers are gaining advantages over the competition by leveraging the data and tools available to them. In this survey, we will be exploring the methods, tools and processes that supply chain professionals utilize to gain competitive advantage via their supply chain.
Lesson 2: You may not know the best and / or ultimate design for a tool until you try it out for some time in the real world.
In my last blog post, I talked about the waterproof boots I received as a gift and how I never knew what I was missing out on until I received and started using those boots. In this blog post, I’d like to continue my story.
My waterproof boots were working just great for me. Our dog, Blue, loved walking out in the wet fields behind our house and I didn’t mind that my boots were getting muddy since I could easily wash them off. Several months after using my boots, I made an unfortunate discovery. My right foot was getting wet! Turns out my boots had developed a crack in the tread. While my boots had several features I really liked and duct tape worked as a temporary repair, I decided I had to replace my boots.
I thought about getting a new pair of the same brand / model but was concerned that there was a design flaw and that these boots were not sturdy enough for regular walking. I decided to switch to a boot with a much stronger tread design as well as the other features I really liked.
If I had gone to the store before owning and using the first pair of boots, I don’t think I could have articulated exactly what features I needed / wanted in a boot. It was only after having an extended real world experience with the boots that I was able to much more clearly and confidently articulate what I wanted in a boot.
This is a common theme with our supply chain change projects. Often these projects are a discovery process for us and our clients, because neither of us definitively knows a priori all the functionality that will ultimately end up in the finished tool. That is why our typical approach is to begin with a pilot project that includes the minimum scope required to implement the basic functionality. This allows the process of discovery to unfold while starting to deliver on the stream of anticipated benefits sooner rather than later. It also allows future releases of the tool to have a very tight scope on only those items that we are both confident can be delivered and will achieve the anticipated benefits.
Are you ready to get started on this journey?
July 30th, 2012 12:56 pm Category: Enterprise Resource Planning, Global Supply Chain, Network Design, Operations Research, Optimization, Profit Network, Profit Vehicle Planner, Profit Vehicle Router, Risk Management, Supply Chain Improvement, by: Jim Piermarini
There is nothing like a bit of vacation to help with perspective.
Recently, I read about the San Diego Big Boom fireworks fiasco — when an elaborate Fourth of July fireworks display was spectacularly ruined after all 7,000 fireworks went off at the same time. If you haven’t seen the video, here is a link.
And I was reading an article in the local newspaper on the recent Higgs news: Getting from Cape Cod to the Higgs boson; read it here:
And I was thinking about how hard it is to know something, really know it. The data collected at CERN when they smash those particle streams together must look a lot like that video: a ton of activity, all in a short time, and a bunch of noise in that Big Data. Imagine having to look at the fireworks video and then determine the list of all the individual types of fireworks that went up… I guess that is similar to what the folks at CERN have to do to find the single firecracker that is the Higgs boson.
Sometimes we are faced with seemingly overwhelming tasks of finding that needle in the haystack.
In our business, we help companies look among potentially many millions of choices to find the best way of operating their supply chains. Yeah, I know it is not the Higgs boson. But it could be a way to recover from a devastating earthquake and tsunami that disrupted operations literally overnight. It could be the way to restore profitability to an ailing business in a contracting economy. It could be a way to reduce the greenhouse footprint by eliminating unneeded transportation, or decrease water consumption in dry areas. It could be a way to expand in the best way to use assets and capital in the long term. It could be to reduce waste by stocking what the customers want.
These ways of running the business, of running the supply chain, that make a real difference, are made possible by the vast amounts of data being collected by ERP systems all over the world, every day. Big Data like the point-of-sale info on each unit that is sold by a retailer. Big Data like actual transportation costs to move a unit from LA to Boston, or from Shanghai to LA. Big Data like the price elasticity of a product, or the number of products that can be held in a certain warehouse. These and many, many other data points are being collected every day and can be utilized to improve the operation of the business in nearly real time. In our experience, much of the potential of this vast collection of data is going to waste. The vastness of the Big Data can itself appear to be overwhelming. Too many fireworks at once.
Having the data is only part of the solution. Businesses are adopting systems to organize that data and make it available to their business users in data warehouses and other data cubes. Business users are learning to devour that data with great visualization tools like Tableau and pivot tables. They are looking for the trends or anomalies that will allow them to learn something about their operations. And some businesses are adopting more specialized tools to leverage that data into an automated way of looking deeper. Optimization tools like our Profit Network, Profit Planner, or Profit Scheduler can process vast quantities of data to find the best way of configuring or operating the supply chain.
So, while it is not the Higgs boson that we help people find, businesses do rely on us to make sense of a big bang of data and hopefully see some fireworks along the way.
“With every passing year, the amount and variety of information available to make business decisions continues its exponential growth. As a result, business leaders have an opportunity to exploit the possibilities inherent in this rich, but complex, stream of information. Alternatively, they can continue with the status quo, using only their good business sense and intuition and thereby risk being left in the dust by competitors. Top-tier companies have learned to harness the available data with powerful decision support tools to make fast, robust trade-offs across many competing priorities and business constraints.”
Read the complete article here: Face Complexity – Making Sound Business Decisions
Here at Profit Point we regularly hear from clients with well established Enterprise Resource Planning (ERP) systems that they need something more. ERP systems are excellent for doing certain things including:
- Providing central repositories of data
- Enabling cross functional work processes within and across companies
- Costing of goods
- Planning resources and materials at a high level
However, the more complicated your business work processes and manufacturing production processes, the less sufficient a standard ERP system will be in providing the best decision support functionality. Some of the complications that require decision support systems (DSS), and which we have been helping clients deal with lately, include:
- Work processes to handle make to order versus make to stock material assignments
- Allocation of inventory to customer orders when in an oversold position
- Sequence dependent setups / cleanings of manufacturing equipment
- Scheduling of production sequenced through a “product wheel”
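As a taste of the kind of logic a standard ERP system won't give you out of the box, here is a minimal sketch of one possible rule for allocating inventory in an oversold position: a pro-rata split, where every order receives the same fill fraction. The function name and figures are illustrative assumptions; real allocation rules are usually far richer (customer priorities, contract commitments, rounding to pack sizes).

```python
def allocate_prorata(available, orders):
    """One possible oversold-allocation rule (illustrative only):
    give every order the same fill fraction of what it requested."""
    demand = sum(orders.values())
    if demand <= available:
        return dict(orders)              # enough stock: fill everyone
    fraction = available / demand        # same fraction for all orders
    return {cust: round(qty * fraction, 1)
            for cust, qty in orders.items()}

orders = {"A": 100, "B": 60, "C": 40}    # 200 units demanded
print(allocate_prorata(150, orders))     # only 150 units available
```

Even a rule this simple sits outside most out-of-the-box ERP functionality, and the real cases above are considerably more complex.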
DSS are necessary because of the complexity of first finding a feasible solution and then having some means of sorting through the huge number of feasible options to find a “good” or “optimal” solution. DSS help in these kinds of situations to:
- Reduce costs
- Reduce manufacturing lead times
- Improve customer service
- Increase revenue
ERP systems are a necessary part of being able to deliver a DSS by providing the data necessary for making the decisions in question but don’t have the following:
- Ability to be tailored to a specific work process or manufacturing environment
- Advanced analytical capability to sort through the complexity and volume of options to get to a “good” or “optimal” solution
- Graphical user interface tools to be able to allow a user to visualize the data in a way that gives them the insights needed to make decisions
At Profit Point we specialize in listening to our clients' needs and then building DSS to unlock improvement opportunities which enable our clients to outdistance the competition.
Okay. I am an anomaly. I live in Utah and drink coffee. The majority of the people that live in Utah do not drink coffee, and that is OK, but I do. So, is there a shortage of coffee cafés in Utah? No. There are many cafés and several that serve outstanding coffee.
We have an exceptional espresso café downtown, located on a side street off of Main. They roast their own coffee and use triple certified organic grown beans. It is the type of place the local coffee lovers go to hang out and have good conversation over a morning or afternoon latté or espresso. Possibly the best coffee I have ever had. What is interesting to me is that a large percentage of the residents in my area do not even know that this café exists.
So what is my point? When it comes to outstanding services or products, most people are unaware of what is available, primarily because it does not fit into their lifestyle or what they're accustomed to. I believe you can transfer this observation to the business world. Manufacturing, logistics, and transportation people become accustomed to doing things a certain way. Over time they may become blind to ideas for improving the supply chain. They are unaware of an exceptional Supply Chain Café, even when it is just a few keystrokes and a Google search away.
It is not their fault they are missing the best latté available. We, as consultants, who prepare those delightful solutions from the Supply Chain Café menu, have probably not done the finest job of promoting our services and software to your neighborhood, but that is changing.
There are many empty cups in the supply chain, waiting to be filled with successful solutions. Supply chain and logistics managers tackle difficult supply chain problems every day, but they are so focused on getting their job done and making it through the day that they have little time to think of alternatives that may improve their processes and well-being. I am not sure how we can help everyone, so let's focus on the window shoppers. These are the ones that are aware of the café, but have never been inside. Maybe you are one?
If you are reading this blog, then you must be a window shopper. I am guessing you are looking for a better espresso. OK, you found “Profit Point”, although you may not know what we do. Guess what? Help is on its way. We can share our menu with you. We just published four videos that will introduce you to the Profit Point team and what we do. Embrace three minutes out of your day, select one of the videos, and watch it. Learn how we help companies improve their supply chain, by serving the best coffee with a smile.
Yes, you can improve your supply chain with our help. The supply chain solution that you are looking for is about to be yours. And if you place an order, we can fill your cup to the top with the "good triple certified" stuff. If you cannot seem to find that special item on our Supply Chain menu, then never fear, we love special orders.
So, is there a shortage of Supply Chain Cafés? No. You just need to find the one that serves the optimal latté. I know it’s out there somewhere.
October 28th, 2010 3:32 pm Category: Publications, by: Editor
This month’s issue of Supply Chain Solutions magazine features an article by Dr. Alan Kosansky and Dr. Joe Litko entitled Leverage Value.
“Executive-level business decisions include a broad range of interconnected variables leading to an extensive array of options. In the supply chain arena, this often plays out as a trade-off between operating costs, working capital, asset utilization and customer service levels.”
This article looks at the challenges faced by executives in making these decisions and the value of modeling future scenarios to make better decisions. You can read the complete article here.
What is a Monte Carlo model, and what good is it? We're not talking about a type of car produced by General Motors under the Chevy nameplate. "Monte Carlo" is the name of a type of mathematical computer model. A Monte Carlo model is merely a tool for figuring out how risky some particular situation is. It is a method to answer a question like: "What are the odds that such-and-such event will happen?" Now, a good statistician can calculate an answer to this kind of question when the circumstances are simple, or if the system you're dealing with doesn't have a lot of forces that work together to give the final result. But when you're faced with a complicated situation that has several processes that interact with each other, and where luck or chance determines the outcome of each, then calculating the odds for how the whole system behaves can be a very difficult task.
Let’s just get some jargon out of the way. To be a little more technical, any process which has a range of possible outcomes and where luck is what ultimately determines the actual result is called “stochastic”, “random” or “probabilistic”. Flipping a coin or rolling dice are simple examples. And a “stochastic system” would be two or more of these probabilistic events that interact.
Imagine that the system you're interested in is a chemical or pharmaceutical plant where producing one batch of material requires a mixing and a drying step. Suppose there are 3 mixers and 5 dryers that function completely independently of one another; the department uses a 'pool concept' where any batch can use any available mixer and any available dryer. However, since there is not enough room in the area, if a batch completes mixing but no dryer is available, the material must sit in the mixer and wait. Thus the mixer can't be used for any other production. Finally, there are 20 different materials produced in this department, and each of them can have a different average mixing and drying time.
Now assume that the graph of the process times for each of the 8 machines looks somewhat like what's called a 'bell-shaped curve'. This graph, with its highest point (at the average) right in the middle and with left and right sides that are mirror images of each other, is known as a Normal distribution. But because of the nature of the technology and the machines having different ages, the "bells" aren't really centered; their average values are pulled to the left or right, so the bell is actually a little skewed to one side or the other. (Therefore, these process times are really not Normally distributed.)
If you’re trying to analyze this department, the fact that the equipment is treated as a pooled resource means it’s not a straightforward calculation to determine the average length of time required to mix and dry one batch of a certain product. And complicating the effort would be the fact that the answer depends on how many other batches are then in the department and what products they are. If you’re trying to modify the configuration of the department, maybe make changes to the scheduling policies or procedures, or add/change the material handling equipment that moves supplies to and from this department, a Monte Carlo model would be the best approach to performing the analysis.
In a Monte Carlo simulation of this manufacturing operation, the model would have a clock and a ‘to-do’ list of the next events that would occur as batches are processed through the unit. The first events to go onto this list would be requests to start a batch, i.e. the paperwork that directs or initiates production. The order and timing for the appearance of these batches at the department’s front-door could either be random or might be a pre-defined production schedule that is an input to the model.
The model “knows” the rules of how material is processed from a command to produce through the various steps in manufacturing and it keeps track of the status (empty and available, busy mixing/drying, possibly blocked from emptying a finished batch, etc.) of all the equipment. And the program also follows the progress and location of each batch. The model has a simulated clock, which keeps moving ahead and as it does, batches move through the equipment according to the policies and logic that it’s been given. Each batch moves from the initial request stage to being mixed, dried and then out the back-door. At any given point in simulated time, if there is no equipment available for the next step, then the batch waits (and if it has just completed mixing it might prevent another batch from being started).
What sets a Monte Carlo model apart, however, is that when the program needs to make a decision or perform an action where the outcome is a matter of chance, it has the ability to essentially roll a pair of dice (or flip a coin, or draw straws) in order to determine the specific outcome. In fact, since rolling dice means that each number has an equal chance of coming up, a Monte Carlo model actually contains equations known as "probability distributions", which will pick a result where certain outcomes have more or less likelihood of occurrence. It's through the use of these distributions that we can accurately reflect those skewed, non-Normal process times of the equipment in the manufacturing department.
The really cool thing about these distributions is that if the Monte Carlo uses the same distribution repeatedly, it might get a different result each time simply due to the random nature of the process. Suppose that the graph below represents the range of values for the process time of material XYZ (one of the 20 products) in one of the mixers. Notice how the middle of the ‘bell’ is off-center to the right (it’s skewed to the right).
So if the model makes several repeated calls to the probability distribution equation for this graph, sometimes the result will be the 2.0-2.5 hr band, other times 3.5-4.0 hrs, and on some occasions >4 hrs. But in the long run, over many repetitions, the proportion of times in each of the time bands will match the values that are in the graph (5%, 10%, 15%, 20%, etc.) and were used to define the equation.
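This long-run behavior is easy to demonstrate. The short Python sketch below samples a hypothetical set of skewed time bands (the weights are assumed for illustration, not taken from any real machine) and shows the observed proportions settling toward the defined ones:

```python
import random

# Hypothetical process-time bands for product XYZ in one mixer,
# skewed to the right; the weights are assumed for illustration.
bands   = ["2.0-2.5", "2.5-3.0", "3.0-3.5", "3.5-4.0", ">4.0"]
weights = [0.05, 0.15, 0.40, 0.30, 0.10]

random.seed(42)                   # repeatable demonstration
draws = random.choices(bands, weights=weights, k=100_000)

# Any single draw is random, but over many repetitions the
# observed proportions settle toward the defined weights.
for band, w in zip(bands, weights):
    observed = draws.count(band) / len(draws)
    print(f"{band:>7}: defined {w:.2f}, observed {observed:.3f}")
```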
So to come back to the manufacturing simulation, as the model moves batches through production, when it needs to determine how much time will be required for a particular mixer or dryer, it runs the appropriate probability equation and gets back a certain process time. In the computer’s memory, the batch will continue to occupy the machine (and the machine’s status will be busy) until the simulation clock gets to the correct time when the process duration has completed. Then the model will check the next step required for the batch and it will move it to the proper equipment (if there is one available) or out of the department all together.
In this way then, the model would continue to process batches until it either ran out of batches in the production schedule that was an input, or until the simulation clock reached some pre-set stopping point. During the course of one run, the computer would have been monitoring the process and recording in memory whatever statistics were relevant to the goal of the analysis. For example, the model might have kept track of the amount of time that certain equipment was blocked from emptying XYZ to the next step. Or if the aim of the project was to calculate the average length of time to produce a batch, the model would have been following the overall duration of each batch from start to finish in the simulated department.
The results from just one run of the Monte Carlo model, however, are not sufficient to be used as a basis for any decisions. The reason is that this is a stochastic system where chance determines the outcome. We can't rely on just one set of results, because just through the luck of the draw the process times picked by those probability distribution equations might have been generally on the high or low side. So the model is run repeatedly for some pre-set number of repetitions, say 100 or 500, and the results of each run are saved.
Once all of the Monte Carlo simulations have been accumulated, it's possible to draw certain conclusions. For example, it might turn out that the overall process time through the department was 10 hrs or more on 8% of the runs. Or the average length of blocked time, when batches are prevented from moving to the next stage because there was no available equipment, was 12 hrs; or the amount of blocked time was 15 hrs or more on 15% of the simulations.
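The repeated-replication step might look like the following sketch, where each replication pushes a fixed number of batches through a mix-then-dry sequence with right-skewed (triangular) process times, and the saved results are then summarized. All times, counts, and thresholds are assumptions for illustration, and the waiting-for-a-dryer logic of the real department is omitted for brevity:

```python
import random
import statistics

def one_replication(rng, n_batches=20):
    """One complete simulated run: total hours for n_batches to go
    through a mix-then-dry sequence. The triangular times are
    right-skewed and purely illustrative."""
    total = 0.0
    for _ in range(n_batches):
        mix = rng.triangular(2.0, 5.0, 2.5)   # low, high, mode (hrs)
        dry = rng.triangular(3.0, 8.0, 4.0)
        total += mix + dry
    return total

rng = random.Random(7)
results = [one_replication(rng) for _ in range(500)]  # 500 repetitions

# Summarize across all replications, not just one "lucky" run.
mean_time = statistics.mean(results)
frac_long = sum(t >= 170 for t in results) / len(results)
print(f"mean run time: {mean_time:.1f} hrs; "
      f"{frac_long:.0%} of runs took 170 hrs or more")
```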
With information like this, a decision maker would be able to weigh the advantages of adding/changing specific items of equipment as well as modifications to the department’s policies, procedures, or even computer systems. In a larger more complicated system, a Monte Carlo model such as the one outlined here, could help to decrease the overall plant throughput time significantly. At some pharmaceutical plants for instance, where raw materials can be extremely high valued, decreasing the overall throughput time by 30% to 40% would represent a large and very real savings in the value of the work in process inventory.
Hopefully, this discussion has helped to clarify just what a Monte Carlo model is, and how it is built. This kind of model accounts for the fundamental variability that is present in almost all decision making. It does not eliminate risk or prevent a worst-case scenario from actually occurring. Nor does it guarantee a best-case outcome. But it does give the business manager added insight into what can go wrong or right, and the best ways to handle the inherent variability of a process.
Profit Point has recently added another tool for analyzing and managing businesses and supply chains – system dynamics modeling. System dynamics focuses on feedback effects in complex systems. That is what distinguishes it from other simulation and modeling techniques. Feedback means that although X influences Y, it is also true that Y influences X even if this influence is mediated by a string of causal relations.
Here is a typical System Dynamics drawing from Wikipedia. In this diagram there is a flow of individuals from a category called potential adopters into a category called adopters. The rate of adoption is controlled by the ‘valve’ called ‘new adopters’. Many simulation modeling approaches would treat this as a ‘one-way street’ and attempt to model the system by controlling the valve with some external data in the hope of reproducing behaviors observed in the real system.
The point of System Dynamics is to explicitly include the feedback which causes the observed behavior. The model above could be a portion of a model of product adoption for the pharmaceutical industry. Another classic example is a predator – prey population model. When prey are plentiful, the predator population increases, which reduces the prey, which in turn reduces the predators. The extension of this classic model to a system of a business and its resources is fairly natural.
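The adopter model above can be sketched in a few lines of Python with simple Euler integration. The innovation and imitation coefficients are assumed values for illustration (this is the classic Bass-style diffusion structure, not a calibrated model of any real product):

```python
# Innovation (p) and imitation (q) coefficients: assumed values.
p, q = 0.03, 0.38
N = 10_000                      # total population
potential, adopters = float(N), 0.0
dt = 0.25                       # Euler time step (fraction of a period)
history = []

for step in range(200):         # 50 simulated periods
    # Feedback: the 'new adopters' valve depends on BOTH stocks.
    # Adopters influence potential adopters through word of mouth,
    # so this is not a one-way street.
    new_adopters = (p + q * adopters / N) * potential * dt
    potential -= new_adopters
    adopters += new_adopters
    history.append(adopters)

print(f"adopters after 50 periods: {adopters:.0f} of {N}")
```

Because the flow rate rises with the adopter stock and falls as the potential-adopter stock drains, the trajectory traces the familiar S-shaped adoption curve that a one-way, externally driven valve would not reproduce.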
The System Dynamics approach has also been widely applied in typical supply chain scenarios. The effects of feedback can be studied in the context of inventory management and shipping policies. The figure below is of a classic System Dynamics approach to modeling the supply chain from raw materials production through shipment of the finished goods. Feedback is represented by the directed arrows from upstream processes back to the downstream processes. In many cases these represent control mechanisms meant to keep the system in balance or within limits. Successful feedback usually dampens oscillatory behavior – stock-outs, long supply, or obsolete inventory items, for instance.
System dynamics models are classified as continuous simulation models and are typically built in languages specifically designed for these types of models. They are quite different from the usual simulation models that are based on discrete entities and systems that jump from event to event in time. System dynamics models are based on equations that describe (usually) continuous flows of material, people, money, influence, etc. The models are highly useful as policy analysis tools – revealing the often unintended consequences of business rules. The models can examine periods of years or decades, while including external effects, e.g. the economy, and all the significant interactions within a business and with the outside world.
We look forward to bringing this exciting, powerful, and well-established analysis tool to bear on problems and opportunities of our clients.
This article was written by Joe Litko, Profit Point’s Business Optimization Practice Leader.
Operations Research (OR) delivers value in many areas, including:
- Business insight: Providing quantitative and business insight into complex problems.
- Business performance: Improving business performance through analysis and recommendations about business processes.
- Cost reduction: Decreasing cost or investment.
- Decision making: Assessing the likely outcomes of decision alternatives and uncovering better alternatives.
- Forecasting: Providing a better basis for forecasting and planning.
- Improved scheduling: Efficiently scheduling staff, vehicles, and machinery.
- Planning: Applying quantitative techniques in support of operations, tactical planning, and strategic planning.
- Pricing: Dynamically pricing products and services.
- Productivity: Helping companies and organizations become more productive.
- Profits: Increasing revenue or return on investment, increasing market share.
- Project development: Demonstrating feasibility and workability and assisting with training.
- Quality: Improving quality, and quantifying and balancing qualitative considerations.
- Recovery: Gaining greater control, achieving turn-around.
- Resources: Gaining greater utilization from limited equipment, facilities, money, and personnel.
- Risk: Managing and reducing risk.
- Throughput: Increasing speed or throughput, decreasing delays.