Archive for the ‘Operations Research’ Category

Profit Point’s recent article in Industry Today, “The Future of Supply Chain Network Design,” describes how to fully leverage new advances in the traditional supply chain optimization process to include not only your internal supply chain, but the supply chains of your competitors as well.

Supply chain network design (SCND) optimization tools have become well integrated into modern business decision-making processes at leading-edge companies. The tools are used to rigorously analyze and make the best decisions in response both to short-term events, such as weather disruptions, spot sales opportunities and utility outages, and to longer-term strategic issues, such as capacity expansion or mergers and acquisitions. These analytical approaches and technologies can be game changers. The newest versions of SCND tools have been expanded: businesses can now analyze not just their own operations, but also the sum of multiple supply chains in the competitive marketplace, creating a new way to integrate competitive risk into the design of your supply chain.

Please contact us if you’d like to learn more about new ways to leverage traditional ideas.

At Profit Point, network design analysis – answering questions such as
• how many facilities a business needs,
• how large they should be and where they should be located, and
• how they should change over time
– is one of our specialties. We have performed this type of analysis for a range of companies across multiple industries and geographical regions, and we have developed our own network design-focused software package to help us do this type of study. (And we teach folks how to use the software as well, so they can answer their own network design questions, if they want to pursue that.)

Our modeling “toolbox”, our Profit Network software, is designed to be flexible and data-driven, so that the user can focus more attention on the key part of the supply chain where the questions must be answered, without having to define more detail than is really desired in other areas of the supply chain.
One of the key elements in many of the models we or our clients construct is the bill of materials. This data specifies the materials that are required to produce goods along the supply chain, be they intermediate materials or finished goods. For instance, if you are making a finished good such as a loaf of bread, the bill of materials would specify the quantities of flour, yeast, salt and other ingredients that would go into a batch.

To get trustworthy results from a model, it must require that the bill of materials (BOM) data be defined, and be used, in deriving the solution. (In some models we have encountered, the BOM is just a suggestion, or products can be created from thin air if the BOM data is not defined.)

The BOM logic must also be able to capture the reality of a situation. The BOM may need to vary from one machine to another within the same facility. Or it might need to vary over time – as an example, when agricultural or dairy products are ingredients in a manufacturing process, the ingredients might have different characteristics over the seasons of the year, thus requiring different input quantities over time to produce a consistent, standardized output.
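
As a minimal sketch (hypothetical data and field names, not the Profit Network input format), BOM records can be keyed by product, production line and season, so that the model is forced to consume the right inputs wherever and whenever output is produced and can never create product from thin air:

    # Hypothetical BOM: (product, line, season) -> ingredient quantities required per batch
    bom = {
        ("bread", "line_1", "summer"): {"flour_kg": 550, "yeast_kg": 10, "salt_kg": 9, "water_l": 330},
        ("bread", "line_1", "winter"): {"flour_kg": 565, "yeast_kg": 12, "salt_kg": 9, "water_l": 315},
        ("bread", "line_2", "summer"): {"flour_kg": 560, "yeast_kg": 11, "salt_kg": 9, "water_l": 325},
    }

    def ingredient_requirements(product, line, season, batches):
        """Total ingredient quantities for a plan; fail loudly if no BOM exists."""
        recipe = bom.get((product, line, season))
        if recipe is None:
            raise KeyError(f"No BOM defined for {product} on {line} in {season}")
        return {ingredient: qty * batches for ingredient, qty in recipe.items()}

    print(ingredient_requirements("bread", "line_1", "winter", batches=100))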

We work closely with our clients to ensure that our software is matched to their needs, and that it gives them the flexibility they need as their businesses change.

In developing a supply chain network design there are many criteria to consider – including such factors as the impact of the facility choices on
• cost of running the system,
• current and future customer service,
• ability to respond to changes in the market, and
• risk of costly intangible events in the future
to name a few.

Frequently we use models to estimate revenues / costs for a given facility footprint, looking at costs of production, transportation, raw materials and other relevant components. We also sometimes constrain the models to ensure that other criteria are addressed – a constraint requiring that DCs be placed so that 80% of demand be within a day’s drive of a facility, for instance, might be a proxy for “good customer service”.
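
As a hedged sketch of how such a service constraint might look inside an optimization model (this uses the open-source PuLP library rather than our own tools, and the locations, demands, costs and drive-time coverage are invented for illustration):

    from pulp import LpProblem, LpVariable, LpMinimize, LpBinary, lpSum

    dcs = ["Atlanta", "Dallas", "Reno"]
    customers = ["Boston", "Chicago", "Miami", "Seattle"]
    demand = {"Boston": 80, "Chicago": 200, "Miami": 120, "Seattle": 100}
    fixed_cost = {"Atlanta": 500, "Dallas": 450, "Reno": 400}
    # (DC, customer) pairs assumed to be within a day's drive
    within_day = {("Atlanta", "Chicago"), ("Atlanta", "Miami"),
                  ("Dallas", "Chicago"), ("Reno", "Seattle")}

    open_dc = LpVariable.dicts("open", dcs, cat=LpBinary)
    ship = LpVariable.dicts("ship", (dcs, customers), lowBound=0, upBound=1)  # fraction of demand served

    model = LpProblem("network_design_sketch", LpMinimize)
    model += lpSum(fixed_cost[d] * open_dc[d] for d in dcs)   # objective: fixed cost only, for brevity

    for c in customers:
        model += lpSum(ship[d][c] for d in dcs) == 1          # every customer fully served
    for d in dcs:
        for c in customers:
            model += ship[d][c] <= open_dc[d]                 # only open DCs can ship

    # Proxy for "good customer service": at least 80% of demand within a day's drive of its DC
    model += lpSum(demand[c] * ship[d][c]
                   for d in dcs for c in customers if (d, c) in within_day) >= 0.8 * sum(demand.values())

    model.solve()
    print("open DCs:", [d for d in dcs if open_dc[d].value() > 0.5])

In a real study this service constraint sits alongside production, freight and raw material costs, and the model trades them off rather than treating any one criterion in isolation.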

Some intangibles, such as political risk associated with establishing / maintaining a facility in a particular location, are difficult to measure and include in a trade off with model cost estimates. Another intangible of great interest for many companies, and that has been difficult to make tangible, is water risk. Will water be available in the required quantities in the future, and if so, will the cost allow the company to remain competitive? For many industry groups water is the most basic of raw materials involved in production, and it is important to trade off water risk against other concerns.

As I wrote in a previous blog published in this forum,

There are several risks that all companies face, to varying degrees, as global water consumption increases, including
• Physical supply risk: will fresh water always be available in the required quantities for your operations?
• Corporate image risk: your corporate image will likely take a hit if you are called out as a “polluter” or “water waster”
• Governmental interference risk: governmental bodies are becoming increasingly interested in water consumption, and can impose regulations that can be difficult to deal with
• Profit risk: all of the above risks can translate to a deterioration of your bottom line.

The challenge has been: how to quantify such risks so that they can be used to compare network design options.

Recently a post entitled “How Much is Water Worth” on LinkedIn highlighted a website developed by Ecolab that offers users an approach to monetization of water risks. This website allows the user to enter information about their current or potential supply chain footprint – such as locations of facilities and current or planned water consumption – and the website combines this information with internal information about projected GDP growth for the country of interest, the political climate and other factors to calculate a projected risk-adjusted cost of water over the time horizon of interest.

This capability, in conjunction with traditional supply chain modeling methods, gives the planner a tool that can be used to develop a more robust set of information that can be used in decision-making.
For more details visit the website waterriskmonetizer.com

Recently I had the opportunity to speak to an operations management class for MBA students in the Goizueta Business School at Emory University.  The class is intended to give the students an introduction to a variety of problems that they might encounter during their careers, and to management science techniques that might be applied to them, using Excel as a solution platform.  The professor had asked me to address the course topic from the point of view of one who had used these methods in the real world, and I was glad to do so, recounting my work in supply chain network design, hydro power generation scheduling, routing of empty shipping containers, natural gas supply contract management and various other problems.

During Q&A one of the students asked how a company should determine the appropriate source of resources to use for solving these types of problems – should it be in-house expertise or an outside consultant?

As I told him, to me this depends on a number of factors, and I gave an example based on our experience: In our practice we perform supply chain network design studies, and we also license the network design software that we use to our clients, if they desire. A number of clients have engaged us to first do an analysis for them, and have then licensed the software so that they can perform future projects themselves, using our initial project as a base.  Many of these clients have used the software very effectively.

Those that have been most successful at using the software in-house, and at performing management science projects in-house in general, have several common characteristics:

  • They are committed to tactical and strategic planning as tools for meeting their business goals,
  • They have enough work in this area, and related areas, to keep an analyst or group of analysts busy full time, due to such factors as
    • The scale and scope of their operations
    • The speed of innovation in their industry
    • The level of complexity of their supply chain and variety of products made, and
    • Their desire for a “continuous improvement” approach as opposed to a “one-time reorganization” approach
  • They have a commitment to maintaining personnel who
    • have the proper skills and training to address these problems, and
    • are allowed the time to work on these problems, rather than being constantly pulled off for “firefighting” short term or operational problems.

Most companies can make good use of management science solution methods, but, as you think about how to do this, try to make a realistic determination of your internal priorities, so you can decide between insourcing and outsourcing, or a mixture of the two.

In recent weeks, I have been thinking about testing our applications, like our popular Profit Network or Profit Vehicle Planner.  When we test, we run data sets that are designed to stress the system in different ways, to ensure that all the important paths through the code are working properly.  When we test, our applications get better and better. There are many good reasons to test; the most important is to know that an improvement in one part of the code does not break a feature in a different part of the code.


I have been thinking about how we could test our code a bit more, and the means by which we could do that. I have been reading about automated testing, and its benefits. They are many, but the upshot is that if the testing is automated, you will likely test more often, and that is a good thing.  To automate application testing requires the ability to churn out runs with nobody watching. And to do that, the application needs to be able to be kicked off and run in a way that there are no buttons or dialog boxes that must be manually clicked to continue. There can be no settings that must be manually set, or information reviewed to decide what to do next. In addition, the application must then save the results somewhere, either in the instance of the application, or to a log file, or to a database of some sort. Then finally, to really be testing, the results must be compared to the expected results to determine the pass/fail state of the test. This requires having a set of expected results for every test data set.
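
A minimal sketch of what such a harness loop might look like (the folder layout, the expected_results.csv file and the run_model entry point are all hypothetical; a real harness would call the application's own batch or API interface):

    import csv
    from pathlib import Path

    def run_model(dataset_dir: Path) -> dict:
        """Hypothetical entry point: run the application headlessly on one data set
        and return its summary results (objective value, total cost, etc.)."""
        raise NotImplementedError("wire this to the application's batch interface")

    def load_expected(dataset_dir: Path) -> dict:
        with open(dataset_dir / "expected_results.csv", newline="") as f:
            return {row["metric"]: float(row["value"]) for row in csv.DictReader(f)}

    def run_suite(root: Path, tolerance: float = 1e-6) -> None:
        for dataset_dir in sorted(p for p in root.iterdir() if p.is_dir()):
            actual = run_model(dataset_dir)
            expected = load_expected(dataset_dir)
            failures = [m for m, v in expected.items()
                        if m not in actual or abs(actual[m] - v) > tolerance]
            print(f"{dataset_dir.name}: " + ("PASS" if not failures else "FAIL: " + ", ".join(failures)))

    # run_suite(Path("test_datasets"))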

 

In looking at the process above, I see numerous similarities to the process used to run a sensitivity analysis, in that many runs are typically made (so automation is a natural help) and the results need to be recorded. Sensitivity analysis is a typical process for users of our Profit Network tool, and our Profit Planner and Profit Scheduler tools.   An additional step in sensitivity analysis, however, is that you may desire to change the input data in a systematic way (say Demand +5% and Demand -5%), and to the extent that it is indeed systematic, this too could be folded into the automation. The results analysis is different too, in that here you would like to look across the final sets of results at the differences, while in testing you just compare one set of test results to its expected results.  I can foresee difficulty in automating the data changes, since each type of data may need to be changed in a very specific way.  Nevertheless, even if the data changes are manual, they could be prepared ahead of the run, and the runs themselves could be grouped in a batch run to generate the results needed for a sensitivity analysis.
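
For the systematic data changes, here is a hedged sketch of generating scenario inputs ahead of a batch run (the file locations and the 'demand' column name are assumptions for the example):

    import csv
    from pathlib import Path

    def write_demand_scenario(base_file: Path, out_file: Path, factor: float) -> None:
        """Copy the base demand table, scaling a (hypothetical) 'demand' column."""
        with open(base_file, newline="") as src, open(out_file, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                row["demand"] = f"{float(row['demand']) * factor:.2f}"
                writer.writerow(row)

    for name, factor in [("demand_plus_5pct", 1.05), ("demand_minus_5pct", 0.95)]:
        write_demand_scenario(Path("base/demand.csv"), Path(f"scenarios/{name}.csv"), factor)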

Constructing a harness that lashes up to an application – where you can define the number of runs to be made, the settings for each run, the different data sets to be used, and the output location for the results to be analyzed – would be useful not only for testing, but for the type of sensitivity analysis we do a lot of here at Profit Point.

I am going to encourage our developers to investigate this type of system harness, so that we can talk to and control our applications, run them automatically, and have their results automatically stored in a data store for either testing or sensitivity analysis.

Jim Piermarini  |  CEO Profit Point Inc.

 

Building applications, especially custom ones, carries with it the burden of answering the question: Does this do what the customer wants?

With complicated systems with many interacting features and business rules, answering this question can be daunting. In fact, evaluating the answer can be daunting too, from the perspective of the customer. Having the sales guy check some boxes in a questionnaire, or watching a demo, just doesn’t leave you with the assurance that the application will handle all the business requirements, from either perspective, the vendor’s or the customer’s. Everyone I have spoken to who has sold complex software, or who has participated in the purchasing process of software, has expressed the same doubt. They are just not sure that the tool will be a good fit. As we all know, that doubt does not always prevent the purchase of the software, as each organization has its own level of risk tolerance and trust in the vendor’s brand or reputation. Often these other considerations can outweigh the amorphous doubt that some folks might feel. How can one quantify that doubt? Frankly, it’s a quandary.
This thought got us at Profit Point thinking… Wouldn’t it be great if there was another way to evaluate the goodness of fit of an application, or the appropriateness of its parameter settings, to match the business needs of an organization? Wouldn’t it be great if there was a way to eliminate (or greatly reduce) the doubt, and replace it with facts? Either a business rule is obeyed or it is not. Either a decision is made according to the requirements, or it is not. Let’s eliminate the doubt, we thought, and the world would be a better place. (Well, a little bit anyway.)

There are many processes for testing an application as it is being developed, with writing test scripts and evaluating the results. All of these are based on testing little pieces of code, to ensure that each function or subroutine does what it should do for each case of input data. These processes work fine in our opinion, but only when the subroutine or function can be considered independently from the others. When the system has functions that interact heavily, this approach doesn’t reduce the doubt that the functions may conflict or compete in a way that makes the whole system suffer. How then to evaluate the whole system? Could we treat the entire application as one black box, run the important business cases, and evaluate the results? This is exactly what we have done, with the effect of reducing to zero the doubt about the suitability of the application for a business.
With several of our clients we have worked out what seems to be a great process for testing a complex software solution for suitability to the business requirements. In this case, the detailed function-level testing methods were not open to us, since the solution relied on a Linear Programming technique.
This process is really just an amplification of the standard testing process.

  1. Define the test case, with the expected results
  2. Construct the test data
  3. Build or configure the application
  4. Run the Test using the Test Data and Evaluate the results – Pass or Fail

This is the standard process for testing small functions, where the expected results are clear and easy to imagine. However, in some systems where there are many interacting rules and conflicting priorities, it may not be simple to know what the expected results should be without the help of the tool’s structure to evaluate them. Such is the case with many of our applications, with layer upon layer of business rules and competing priorities… The very reason for using an LP-based approach makes testing more complex.
In the revised process, we have, for each new business requirement:

  1.  Construct the test case with the test data
  2. Build or configure the application
  3. Set the expected results using the results of the first pass build
  4. Re-factor the code and test until all tests are passing

Profit Point’s Software Testing Process

In my next blog I will show you the simple excel based tools we use to facilitate the test evaluation.

In practice, the process works well: new versions of the application go into production without any surprises, and with the full confidence of the application management team that all the business requirements are 100% met.

No doubt – no doubt a better process.

By Jim Piermarini

On a recent sailing trip to Croatia, we lost our sailboat.   Sounds outrageous, but it really wasn’t difficult at all.  It was early evening when we anchored in the harbor and took our dinghy to shore for dinner.  A few hours later, seeing the wind pick up, we returned to the spot where we thought we had left the boat, and it had vanished.  It was dark, very dark.  Looking for the boat on the dark ocean at night was like looking for a needle in a haystack.  After hours of searching, we finally found the boat headed out to sea; we had not let out sufficient line for the anchor.  The harbor was surrounded by rocky cliffs, and we had no idea what course the boat had taken, or whether it had incurred any damage in its renegade voyage.  We shook a mechanic out of bed to evaluate whether any damage had been done.  After it was all said and done, we were very lucky: the boat was fine.

I can’t help but liken this to a manufacturing supply chain: without a business process to chart the way, without software helping to navigate and support the process, and without people in place trained to captain the process, the business, like an unanchored sailboat, drifts into sometimes dangerous territory.   Yet this scenario is not atypical for many companies.

How do you know if the anchor is set on your supply chain?  Here are some attributes:

  1. A clear and documented business process that serves as the guide to how you operate under normal conditions as well as defines flows for unexpected changes and events.  As conditions change, the process should be reevaluated and assessed in an ongoing fashion.
  2. Software that supports the business process and enables users to react quickly to unexpected events, making your business adaptable and flexible.  Software should be tailored to your business needs; a one-size-fits-all package does not necessarily work for all products and customers.
  3. Trained people who are living the business process and using the software, giving your business a competitive edge.  Additionally, attention to the detail and quality of the data driving the software is crucial and can have a big impact on the business.

 

To learn more about Profit Point’s Global Supply Chain services, please contact us.

Lesson 2:  You may not know the best and / or ultimate design for a tool until you try it out for some time in the real world.

In my last blog post, I talked about the waterproof boots I received as a gift and how I never knew what I was missing out on until I received and started using those boots.  In this blog post, I’d like to continue my story.

My waterproof boots were working just great for me.  Our dog, Blue, loved walking out in the wet fields behind our house and I didn’t mind that my boots were getting muddy since I could easily wash them off.  Several months after using my boots, I made an unfortunate discovery.  My right foot was getting wet!  Turns out my boots had developed a crack in the tread.  While my boots had several features I really liked and duct tape worked as a temporary repair, I decided I had to replace my boots.

I thought about getting a new pair of the same brand / model but was concerned that there was a design flaw and that these boots were not sturdy enough to walk with on a regular basis.  I decided to switch to a boot with a much better and stronger designed tread as well as one with the other features I really liked.

If I had gone to the store before owning and using the first pair of boots, I don’t think I could have articulated exactly what features I needed / wanted in a boot.  It was only after having an extended real world experience with the boots that I was able to much more clearly and confidently articulate what I wanted in a boot.

This is a common theme with our supply chain change projects.  Often these projects are a discovery process for us and our clients, because neither of us definitively knows a priori all the functionality that will ultimately end up in the finished tool.  That is why our typical approach is to begin with a pilot project that includes the minimum scope required to implement the basic functionality.  This allows the process of discovery to unfold while starting to deliver on the stream of anticipated benefits sooner rather than later.  It also allows future releases of the tool to have a very tight scope, covering only those items that we are both confident can be delivered and will achieve the anticipated benefits.

Are you ready to get started on this journey?

Here I am with Blue and my new waterproof boots

What kind of risks are you prepared for?

As a supply chain manager, you have profound control over the operations of your business. However, it is not without limits, and mother nature can quickly and capriciously halt even the smoothest operation. Or other man-made events can seemingly conspire to prevent goods from crossing borders, or navigating traffic, or being produced and delivered on time. How can you predict where and when your supply chain may fall prey to unforeseen black swan events?

Prediction is very difficult, especially about the future. (Niels Bohr, Danish physicist.)  But there are likely some future risks that your stockholders are thinking about and that you might be expected to have prepared for. The post-event second-guessing phrase “You should have known, or at least prepared for…” has been heard in many corporate supply chain offices after recent supply-chain-breaking cataclysmic events: tsunami, hurricane, earthquake, you name it.

  • What will happen to your supply chain if oil reaches $300 / barrel? What lanes will no longer be affordable, or even available?
  • What will happen if sea level rises, causing ports to close, highways to flood, and rails lines to disappear?
  • What will happen if the cost of a ton of CO2 is set to $50?
  • What will happen if another conflict arises in the oil countries?
  • What will happen if China’s economy shrinks substantially?
  • What will happen if China’s economy really takes off?
  • What will happen if China’s economy really slows down?
  • What will happen if the US faces a serious drought in the mid-west?

What will happen if… you name it, it is lurking out there to have a potentially dramatic effect on your supply chain.

As a supply chain manager, your shareholders expect you to look at the effect on supply, transportation, manufacturing, and demand. The effect may be felt in scarcity, cost, availability, capacity, government controls, taxes, customer preference, and other factors.

Do you have a model of your supply chain that would allow you to run these what-if scenarios to see how your supply chain and your business would fare in the face of these black swan events?

Driving toward a robust and fault-tolerant supply chain should be the goal of every supply chain manager. And a way to achieve that is to design it with disruption in mind.  Understanding the role (and the cost) of dual sourcing critical components, diversified manufacturing and warehousing, risk-mitigating transportation contracting, on-shoring/off-shoring some manufacturing, environmental impacts, and customer preferences, just to begin the list, can be an overwhelming task. Yet there are tools and processes that can help with this, and if you want to be able to face the difficulties of the future with confidence, do not ignore them.  The tools are about supply chain planning and modeling. The processes are about risk management and robust supply chain design. Profit Point helps companies all over the world address these and other issues to run some of the best supply chains anywhere.

The future is coming, are you ready for it?

There is nothing like a bit of vacation to help with perspective.

Recently, I read about the San Diego Big Boom fireworks fiasco — when an elaborate Fourth of July fireworks display was spectacularly ruined after all 7,000 fireworks went off at the same time. If you haven’t seen the video, here is a link.

And I was reading an article in the local newspaper on the recent news on the Higgs: Getting from Cape Cod to Higgs boson read it here:

And I was thinking about how hard it is to know something, really know it. The data collected at CERN when they smash those particle streams together must look a lot like that fireworks video. A ton of activity, all in a short time, and a bunch of noise in that Big Data. Imagine having to look at the fireworks video and then determine the list of all the individual types of fireworks that went up… I guess that is similar to what the folks at CERN have to do to find the single firecracker that is the Higgs boson.

Sometimes we are faced with seemingly overwhelming tasks of finding that needle in the haystack.

In our business, we help companies look among potentially many millions of choices to find the best way of operating their supply chains. Yeah, I know it is not the Higgs boson. But it could be a way to recover from a devastating earthquake and tsunami that disrupted operations literally overnight. It could be the way to restore profitability to an ailing business in a contracting economy. It could be a way to reduce the greenhouse footprint by eliminating unneeded transportation, or decrease water consumption in dry areas. It could be a way to expand in the best way to use assets and capital in the long term. It could be to reduce waste by stocking what the customers want.

These ways of running the business, of running the supply chain, that make a real difference, are made possible by the vast amounts of data being collected by ERP systems all over the world, every day. Big Data like the ‘point-of-sale’ info on each unit that is sold from a retailer. Big Data like actual transportation costs to move a unit from LA to Boston, or from Shanghai to LA. Big Data like the price elasticity of a product, or the number of products that can be in a certain warehouse. These data and many, many other data points are being collected every day and can be utilized to improve the operation of the business in nearly real time. In our experience, much of the potential of this vast collection of data is going to waste. The vastness of the Big Data can itself appear to be overwhelming. Too many fireworks at once.

Having the data is only part of the solution. Businesses are adopting systems to organize that data and make it available to their business users in data warehouses and other data cubes. Business users are learning to devour that data with great visualization tools like Tableau and pivot tables. They are looking for the trends or anomalies that will allow them to learn something about their operations. And some businesses are adopting more specialized tools to leverage that data into an automated way of looking deeper into the data. Optimization tools like our Profit Network, Profit Planner, or Profit Scheduler can process vast quantities of data to find the best way of configuring or operating the supply chain.
So, while it is not the Higgs boson that we help people find, businesses do rely on us to make sense of a big bang of data and hopefully see some fireworks along the way.

A husband, two kids and a golden retriever later…  I am back to implementations in Supply Chain planning and scheduling.  To my surprise, the same challenges I encountered 10 years ago remain in force today:  data, defining business processes, data, implementing software, data, training people, data, supporting the change to a new system and data.

Data collection remains one of the cornerstones of success of a supply chain planning or scheduling implementation.  Though scores of data may exist in a company’s business, harnessing it to feed into a planning or scheduling model can be extremely complex and time consuming.  Interestingly, the data collection process often drives an elucidation of manufacturing practices and process flows, and clients learn what they do and don’t know about their business.  This may seem backwards and risky in terms of getting things out of order.  In a perfect world, a thorough understanding of manufacturing and business processes would pave the way towards building an Advanced Planning and/or Scheduling System.  In reality, they often happen in tandem and are evolutionary in nature.

Deciding how data will be housed, derived and propagated early on in an implementation will pay off in the long run.   Establishing a systematic, automated way to update and propagate data is just as important as the decision of what software system to use.  It is worth the investment to take the time to put this automation in place: as a greater and greater number of products are added to the system, the data will remain manageable and scalable.

From PC to Cloud, emails to tweets, networking happy hours to LinkedIn, it is nice to know some things stay the same.

Rich Guy

The rise of zombies in pop culture has given credence to the idea that a zombie apocalypse could happen. In a CFO zombie scenario, CFOs would take over entire companies, roaming the halls eating anything living that got in their way. They would target the brains of supply chain managers and operations people. The proliferation of this idea has led many business people to wonder, “How do I avoid a CFO zombie apocalypse?”

Supply chain managers are seeking and developing new and improved ways to exploit the volumes of data available from their ERP systems. They are choosing advanced analytics technologies to understand and design efficient, sustainable supply chains. These advanced analytics technologies rely on the use of optimization technology. I am using the mathematical concept of “optimization”, as opposed to the non-mathematical process of making something better.

Mathematical optimization technology is at the heart of more than a few supply chain software applications. These applications “optimize” some process or decision. Optimization-based programs, for example those frequently found in strategic supply chain network planning, factory scheduling, sales and operations planning and transportation logistics, use well-known mathematical techniques such as linear programming to scientifically determine the “best” result. That “best solution” is usually defined as minimizing or maximizing a single, specific variable, such as cost or profit. However, in many cases the best solution must account for a number of other variables or constraints. Advanced analytics technologies can improve a company’s bottom line – and they can improve revenue, too! CFOs like this.

Advanced analytics technologies provide easy-to-use, optimization-based decision support solutions to solve complex supply chain and production problems.  And, these solutions can help companies quickly determine how to most effectively use limited resources and exploit opportunities.

So, from my perspective, there are seven practical reasons to embrace advanced analytics technologies:

  1. Your company saves money, increases profits.
  2. You get to use all your ERP system’s data.
  3. It’s straightforward and uncomplicated.
  4. You have the tools to discover great ideas and make better decisions.
  5. At the end of the day, you know the total cost of those decisions.
  6. You have a roadmap to make changes.
  7. You avoid the CFO zombie apocalypse

The summer issue of Manufacturing Today includes an article authored by Ted Schaefer and Alan Kosansky entitled Face Complexity – Making Sound Business Decisions.

“With every passing year, the amount and variety of information available to make business decisions continues its exponential growth. As a result, business leaders have an opportunity to exploit the possibilities inherent in this rich, but complex, stream of information. Alternatively, they can continue with the status quo, using only their good business sense and intuition and thereby risk being left in the dust by competitors. Top-tier companies have learned to harness the available data with powerful decision support tools to make fast, robust trade-offs across many competing priorities and business constraints.”

Read the complete article here: Face Complexity – Making Sound Business Decisions

Enabling the Virtual Company

October 5th, 2010 4:34 pm Category: Operations Research, Optimization, Optimization Software, by: Jim Piermarini

At Profit Point we are in the ‘Science of Better’, and we are always looking for new ways to do business, both for our clients, and for ourselves.  When we started, we had the challenge of being a virtual company, that is, we have never had a corporate office space. Since 1995, each of us has always worked from home.  While there are numerous benefits of this style of company architecture, including having a family that actually knows who you are, and keeping the company’s overhead to a minimum, it also has its drawbacks. Like forcing each person to make the deliberate decision about when to start work, and harder still, when to stop work each day. We knew when we started this company that we wanted to keep our overhead costs low, so a virtual office seemed like the natural choice.

More recently, we have been faced with another challenge, how to reduce the cost of the projects we do. Projects in the supply chain business require a certain amount of industry and company specific knowledge.  Until recently, we had been building into our projects ample on-site time where the project team could gel and collaborate and build the trust that is needed for the free flow of ideas.  But the world has changed, and we have changed with it.   No longer are big travel budgets a normal part of the projects we see. So the challenge was: how to reduce the travel expense line item, without sacrificing the project speed or quality?

In the consulting business, there is sometimes no substitute for ‘face-time’.  So travel to the customer site perforce happens.  Over the course of the last 15 years, I have seen a marked drop in the amount of time that we need to travel, going from 60-70% a decade ago to less than 20% currently, and this has been brought about primarily by two factors: 1) Companies simply do not want to pay the travel expenses. Since 9/11, most major companies have been slashing their travel budgets, and expect their consultants to follow suit.  One particular project comes to mind where I had seen travel expenses that were as much as the consulting bill each month. But in general, we see pressure to reduce the travel expenses that are generated by projects across the board.  2) ‘Remote Touch’ Technology has provided the means to travel less.  There are some great remote desktop control tools that allow two or more people to have a telephone or VOIP conversation, and look at the same computer screen, to discuss and collaborate on ideas and tools.  These web based telephony and remote control tools have eliminated the need for travel to a greater extent than you might think.   Many of our projects today have only two face to face meetings, one to kick it off, and one to present the results or close it out.  Some of our clients are handled successfully without any face time. I must say though, that in our experience, low face-time projects only work well within the culture and language: that is, when language and culture barriers exist in the project team, face-time is the best way to bridge these gaps, and mitigate the risk of project overruns and delays.

In business, technology comes into being as a means to enable better business processes.  The processes that we use that are enabled by this remote touch technology include an agile approach to solving business problems or developing software solutions.  We use several readily available web based tools every day in our business, and boy have they allowed us to reduce the travel expenses.  These include:

GotoMeeting.com

This is the best remote touch tool out there in our opinion.  Until a robust free app comes along, this will remain the best value for the money.  The best part of the app is the recent addition of the integrated VOIP, where you can use a head set (I would recommend the Logitech ClearChat PC Wireless Headset: http://www.logitech.com/en-us/webcam-communications/internet-headsets-phones/devices/4226) to join the integrated telecon line.  This has the advantage of freeing up your phone, and being instantly connected to the telecon as soon as you start it.  No more long telecon numbers with their passcodes! We use this many times every day, and it is the primary reason why we can travel less.

Box.net

This is a simple to use and secure web based file storage and sharing application that fosters and supports collaboration with people both in your company and externally.  We love this app, and my clients seem to as well.  Just drop a file into this app, and share it securely with anyone with an email address. Use it when email attachments just will not do, due to size limitations, or just when the email hassle is too much.

PivotalTracker.com

This is a terrific project management tool that is designed for agile projects, and makes it simple to create and manage user stories for tool development.  While inviting new members can be a hassle (since their email seems to get caught in many spam filters), once they are in, these folks have made a stellar user interface to manage the tasks in a project of nearly any size. Use it to track bugs too. We have done several projects using this tool, and we will be using it for many more.  Great tool.

TableauSoftware.com

If you like to look at data, like we like to look at data, then you will want to look at Tableau.  You can think of it like a pivot table / chart on steroids.  You open it, connect to your data (wherever or whatever data you’ve got, it can connect to it), and then you start to explore your data like you’ve never been able to before. Like a pivot table, you can drag and drop fields, aggregate data along dimensions, and make sums, etc., but the really cool part of Tableau is the part where it suggests new ways of looking at the data.   Go ahead, make maps, heat charts, time-phased graphs, whatever.  Then you can assemble the graphs into a dashboard. Dashboards are the best.  Want to see a ton of data distilled down into a very compact, visually stunning view suitable for management? Get a copy of Tableau, and you can make that view in minutes.

Used appropriately, these tools, and others like them, have enabled us to travel less, and work faster and better. (and more!)

If you have other great apps like these that enable better business processes, I would love to hear about them.

Fat-Tailed Probability Distributions

January 18th, 2010 5:11 pm Category: Operations Research, by: John Hughes

Don’t forget the corollary to Murphy’s Law, that Murphy was an Optimist.

Given the recent negative turn of events in the economy as a whole, and on Wall Street in particular, it is critically important for decision makers to always be aware of those ‘Black Swan’ occurrences that might be lurking in the future ready to derail the best-laid plans and forecasts. Such incidents are extreme and/or catastrophic in nature. They are the outliers that we think we can safely ignore. These are the events that when we talk about them in hindsight we begin by saying “Who would have thought that …” Well, although we’ve all heard of Murphy’s Law “if it can go wrong, it will”; as recent events have shown us, Murphy was an optimist!

Although the use of the term ‘black swan‘ probably originated in the 17th century, it has more recently come into vogue as a result of the book “The Black Swan: The Impact of the Highly Improbable” by Nassim Nicholas Taleb. Since Black Swan events are so very rare, we think we can forget about them when making plans and forecasts. The problem is that when they occur, they can wreak complete devastation on an organization or business. They cannot be ignored; they need to be properly evaluated and anticipated so that the organization will survive the event, and hopefully rebound and prosper in the future.

An example of a Black Swan Event would be the destruction and devastation resulting from Hurricane Katrina in New Orleans. Another obvious one would be the attacks of 9/11 and the collapse of the World Trade Towers. And let’s not forget the Tylenol tampering scare. At the time, society was shocked in its naivety that anybody would ever do such a thing. And these examples are not meant to imply that a Black Swan need be something negative. There can be tectonic and unexpected events, such as the sudden collapse of the U.S.S.R and the fall of the Berlin Wall, that are generally considered as being of a positive nature.

Although various methods are available to analyze random or unpredictable processes, the analytical modeling techniques that are typically used today are frequently insufficient to meet the challenge of the Black Swan. In order to provide insight into the risks associated with some situation or decision, modelers have at their disposal tools ranging from “merely” solving a closed-form algebraic equation of a probability distribution, to the use of a full-blown Monte Carlo simulation program. But fundamental to many analyses is the Normal probability density function, the familiar ‘bell-shaped’ curve where the two tails of the graph (at the far left and right) taper down to equal 0. The problem is that since the late 1800s, researchers have recognized that this curve, with its ‘near-0’ tails, does not accurately model Black Swan events.

Such extreme circumstances demand the use of a group of probability functions that are being called “fat-tailed” or stable-Paretian distributions. These functions, based on the work of Vilfredo Pareto in the late 19th century, give higher probabilities of occurrence to events in the tails of the curve. A specific example of one is the Cauchy distribution. Whereas the Normal curve approaches 0 at plus or minus 3.5 standard deviations, a Cauchy (depending on its parameters) is still not close to 0 at plus or minus 5 standard deviations.
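
As a quick illustration of that difference, a sketch using the scipy.stats library (assumed installed) compares the two-sided tail probability beyond k scale units for a standard Normal and a standard Cauchy:

    from scipy.stats import norm, cauchy

    # Probability of landing more than k scale units from the center, both tails combined
    for k in (2, 3.5, 5):
        p_normal = 2 * norm.sf(k)    # survival function: P(X > k) for a standard Normal
        p_cauchy = 2 * cauchy.sf(k)  # same for a standard Cauchy
        print(f"k = {k}: Normal {p_normal:.2e}, Cauchy {p_cauchy:.2e}")

At k = 5 the Normal tail is effectively zero, while the Cauchy still carries better than a 10% chance.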

Gen. Carl Strock of the Army Corps of Engineers addressed a press conference shortly after Hurricane Katrina regarding the New Orleans levee system. He said, “… when the project was designed … we figured we had a 200 or 300 year level of protection. That means that the event we were protecting from might be exceeded every 200 or 300 years. That is a 0.05% likelihood. So we had an assurance that 99.5% of this would be okay. We unfortunately have had that 0.5% activity here.” The General’s analysis was based on a Normal distribution. If however a fat-tailed distribution had been used, that 300 years would have been much less, perhaps in the range of 60 to 80 years, and perhaps remedial actions would have been taken to avert the disaster that nearly destroyed the city of New Orleans.

The most generalized version of the equation of a fat-tailed probability function is below.
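
The stable-Paretian (Lévy alpha-stable) family is usually described through its characteristic function, since its density generally has no closed form; in one standard parameterization (stated here for reference, for α ≠ 1),

    \varphi(t; \alpha, \beta, c, \mu) = \exp\big( i t \mu - |c t|^{\alpha} \, ( 1 - i \beta \, \mathrm{sgn}(t) \, \tan(\pi \alpha / 2) ) \big)

where α ∈ (0, 2] is the stability index that governs tail thickness, β the skewness, c the scale and μ the location; setting α = 2 recovers the Normal distribution.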


The parameter α is what determines the thickness of the two tails – what is called the kurtosis of the function. Generally, as α decreases, tail thickness increases. In fact, the standard Normal Distribution is merely a special case of this equation in which the parameters take certain specific values (α at its maximum value of 2).

The problem with these fat-tailed distributions is that, depending on the specific values chosen for the parameters (which determine the exact shape of the graph), they may not be solvable algebraically. With the Normal Distribution, it’s possible to mathematically solve the equation and state that the probability of a certain event is some specific value. However, fat-tailed distributions do not lend themselves to this kind of a closed-form analysis. To be able to estimate the chances of specific events, numerical methods such as Monte Carlo simulations or binomial decision trees are required.

So the conclusion is that in analyzing any decision or plan that involves random processes, it is critical to realize and anticipate both the worst and best case scenarios. The business person or decision maker should discuss with the modeler/analyst what might really happen under a wide range of possible scenarios. Realizing the drawbacks of the Normal Distribution, together they need to decide on whether or not a probability function with a fat-tail (embodying this kind of “the sky’s the limit” thinking) is appropriate. And if a fat-tail distribution is required, the modeler/analyst must decide on the best numerical technique to address the needs of the decision maker.

This article was written by John Hughes, Profit Point’s Production Scheduling Practice Leader.

To learn more about our supply chain optimization services, contact us here.

What is a Monte Carlo model and what good is it? We’re not talking about the type of car produced by General Motors under the Chevy nameplate. “Monte Carlo” is the name of a type of mathematical computer model. A Monte Carlo model is merely a tool for figuring out how risky some particular situation is. It is a method to answer a question like: “What are the odds that such-and-such an event will happen?” Now a good statistician can calculate an answer to this kind of question when the circumstances are simple or if the system that you’re dealing with doesn’t have a lot of forces that work together to give the final result. But when you’re faced with a complicated situation that has several processes that interact with each other, and where luck or chance determines the outcome of each, then calculating the odds for how the whole system behaves can be a very difficult task.

Let’s just get some jargon out of the way. To be a little more technical, any process which has a range of possible outcomes and where luck is what ultimately determines the actual result is called “stochastic”, “random” or “probabilistic”. Flipping a coin or rolling dice are simple examples. And a “stochastic system” would be two or more of these probabilistic events that interact.

Imagine that the system you’re interested in is a chemical or pharmaceutical plant where producing one batch of material requires a mixing and a drying step. Suppose there are 3 mixers and 5 dryers that function completely independently of one another; the department uses a ‘pool concept’ where any batch can use any available mixer and any available dryer. However, since there is not enough room in the area, if a batch completes mixing but there is no dryer available, then the material must sit in the mixer and wait. Thus the mixer can’t be used for any other production. Finally, there are 20 different materials that are produced in this department, and each of them can have a different average mixing and drying time.

Now assume that the graph of the process times for each of the 8 machines looks somewhat like what’s called a ‘bell-shaped curve’. This graph, with its highest point (at the average) right in the middle and left and right sides that are mirror images of each other, is known as a Normal distribution. But because of the nature of the technology and the machines having different ages, the “bells” aren’t really centered; their average values are pulled to the left or right so the bell is actually a little skewed to one side or the other. (Therefore, these process times are really not Normally distributed.)

If you’re trying to analyze this department, the fact that the equipment is treated as a pooled resource means it’s not a straightforward calculation to determine the average length of time required to mix and dry one batch of a certain product. And complicating the effort would be the fact that the answer depends on how many other batches are then in the department and what products they are. If you’re trying to modify the configuration of the department, maybe make changes to the scheduling policies or procedures, or add/change the material handling equipment that moves supplies to and from this department, a Monte Carlo model would be the best approach to performing the analysis.

In a Monte Carlo simulation of this manufacturing operation, the model would have a clock and a ‘to-do’ list of the next events that would occur as batches are processed through the unit. The first events to go onto this list would be requests to start a batch, i.e. the paperwork that directs or initiates production. The order and timing for the appearance of these batches at the department’s front-door could either be random or might be a pre-defined production schedule that is an input to the model.

The model “knows” the rules of how material is processed from a command to produce through the various steps in manufacturing and it keeps track of the status (empty and available, busy mixing/drying, possibly blocked from emptying a finished batch, etc.) of all the equipment. And the program also follows the progress and location of each batch. The model has a simulated clock, which keeps moving ahead and as it does, batches move through the equipment according to the policies and logic that it’s been given. Each batch moves from the initial request stage to being mixed, dried and then out the back-door. At any given point in simulated time, if there is no equipment available for the next step, then the batch waits (and if it has just completed mixing it might prevent another batch from being started).

What sets a Monte Carlo model apart however is that when the program needs to make a decision or perform an action where the outcome is a matter of chance, it has the ability to essentially roll a pair of dice (or flip a coin, or “draw straws”) in order to determine the specific outcome. In fact, since rolling dice means that each number has an equal chance of “coming up”, a Monte Carlo model actually contains equations known as “probability distributions”, which will pick a result where certain outcomes have more or less likelihood of occurrence. It’s through the use of these distributions that we can accurately reflect those skewed, non-Normal process times of the equipment in the manufacturing department.

The really cool thing about these distributions is that if the Monte Carlo uses the same distribution repeatedly, it might get a different result each time simply due to the random nature of the process. Suppose that the graph below represents the range of values for the process time of material XYZ (one of the 20 products) in one of the mixers. Notice how the middle of the ‘bell’ is off-center to the right (it’s skewed to the right).


So if the model makes several repeated calls to the probability distribution equation for this graph, sometimes the result will be in the 2.0-2.5 hr band, other times 3.5-4.0 hrs, and on some occasions more than 4 hrs. But in the long run, over many repetitions, the proportion of times in each of the time bands will match the values shown in the graph (5%, 10%, 15%, 20%, etc.) that were used to define the equation.
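
A small sketch of how such a banded, right-skewed process-time distribution can be sampled (the bands and weights here are invented for illustration; they stand in for the values shown on the graph):

    import random

    # Illustrative time bands (hours) for mixing product XYZ, with probabilities skewed to the right
    bands   = [(1.5, 2.0), (2.0, 2.5), (2.5, 3.0), (3.0, 3.5), (3.5, 4.0), (4.0, 5.0)]
    weights = [0.05,       0.10,       0.15,       0.20,       0.30,       0.20]

    def sample_mix_time(rng: random.Random) -> float:
        lo, hi = rng.choices(bands, weights=weights, k=1)[0]  # pick a band by its probability
        return rng.uniform(lo, hi)                            # then a time within that band

    rng = random.Random(42)
    draws = [sample_mix_time(rng) for _ in range(100_000)]
    print(f"mean mixing time over {len(draws):,} draws: {sum(draws) / len(draws):.2f} hours")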

So to come back to the manufacturing simulation, as the model moves batches through production, when it needs to determine how much time will be required for a particular mixer or dryer, it runs the appropriate probability equation and gets back a certain process time. In the computer’s memory, the batch will continue to occupy the machine (and the machine’s status will be busy) until the simulation clock gets to the correct time when the process duration has completed. Then the model will check the next step required for the batch and it will move it to the proper equipment (if there is one available) or out of the department all together.

In this way then, the model would continue to process batches until it either ran out of batches in the production schedule that was an input, or until the simulation clock reached some pre-set stopping point. During the course of one run, the computer would have been monitoring the process and recording in memory whatever statistics were relevant to the goal of the analysis. For example, the model might have kept track of the amount of time that certain equipment was blocked from emptying XYZ to the next step. Or if the aim of the project was to calculate the average length of time to produce a batch, the model would have been following the overall duration of each batch from start to finish in the simulated department.

The results from just one run of the Monte Carlo model however are not sufficient to be used as a basis for any decisions. The reason for this is the fact that this is a stochastic system where chance determines the outcome. We can’t really rely on just one set of results, because just through the “luck of the draw” the process times that were picked by those probability distribution equations might have been generally on the high or low side. So the model is run repeatedly for some pre-set number of repetitions, say 100 or 500, and the results of each of these runs are saved.

Once all of the Monte Carlo simulations have been accumulated, it’s possible to make certain conclusions. For example, it might turn out that the overall process time through the department was 10 hrs or more on 8% of the runs. Or the average length of blocked time, when batches are prevented from moving to the next stage because there was no available equipment, was 12 hrs; or the amount of blocked time was 15 hrs or more in 15% of the simulations.
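
A hedged sketch of that replication step (the stand-in for the simulation below is deliberately oversimplified and ignores the pooled-resource and blocking logic described above; it only illustrates how the repeated results are summarized):

    import random

    def simulate_batch_time(rng: random.Random) -> float:
        """Oversimplified stand-in for one replication: total hours for one batch
        (mixing + waiting for a dryer + drying), using skewed, illustrative distributions."""
        mix  = rng.triangular(1.5, 5.0, 4.0)   # mixing time, hours (mode pulled to the right)
        wait = rng.expovariate(1 / 1.5)        # time spent blocked in the mixer, hours
        dry  = rng.triangular(2.0, 6.0, 3.0)   # drying time, hours
        return mix + wait + dry

    rng = random.Random(7)
    n_reps = 500
    times = [simulate_batch_time(rng) for _ in range(n_reps)]
    share_over_10h = sum(t >= 10 for t in times) / n_reps
    print(f"average batch time: {sum(times) / n_reps:.1f} hours")
    print(f"share of replications at 10 hours or more: {share_over_10h:.0%}")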

With information like this, a decision maker would be able to weigh the advantages of adding/changing specific items of equipment as well as modifications to the department’s policies, procedures, or even computer systems. In a larger more complicated system, a Monte Carlo model such as the one outlined here, could help to decrease the overall plant throughput time significantly. At some pharmaceutical plants for instance, where raw materials can be extremely high valued, decreasing the overall throughput time by 30% to 40% would represent a large and very real savings in the value of the work in process inventory.

Hopefully, this discussion has helped to clarify just what a Monte Carlo model is, and how it is built. This kind of model accounts for the fundamental variability that is present in almost all decision making. It does not eliminate risk or prevent a worst-case scenario from actually occurring. Nor does it guarantee a best-case outcome either. But it does give the business manager added insight into what can go wrong or right and the best ways to handle the inherent variability of a process.

This article was written by John Hughes, Profit Point’s Production Scheduling Practice Leader.

To learn more about our supply chain optimization services, contact us here.

Profit Point has recently added another tool for analyzing and managing businesses and supply chains – system dynamics modeling. System dynamics focuses on feedback effects in complex systems. That is what distinguishes it from other simulation and modeling techniques. Feedback means that although X influences Y, it is also true that Y influences X even if this influence is mediated by a string of causal relations.

Here is a typical System Dynamics drawing from Wikipedia. In this diagram there is a flow of individuals from a category called potential adopters into a category called adopters. The rate of adoption is controlled by the ‘valve’ called ‘new adopters’. Many simulation modeling approaches would treat this as a ‘one-way street’ and attempt to model the system by controlling the valve with some external data in the hope of reproducing behaviors observed in the real system.

The point of System Dynamics is to explicitly include the feedback which causes the observed behavior. The model above could be a portion of a model of product adoption for the pharmaceutical industry. Another classic example is a predator – prey population model. When prey are plentiful, the predator population increases, which reduces the prey, which in turn reduces the predators. The extension of this classic model to a system of a business and its resources is fairly natural.
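
A minimal sketch of the adopter loop above (a Bass-style diffusion model integrated with simple Euler steps; the population size and coefficients are purely illustrative, not taken from any client model):

    # Stock-and-flow sketch: potential adopters flow into adopters, with feedback from the adopter stock
    N  = 100_000   # total population (potential plus actual adopters)
    p  = 0.03      # innovation coefficient (external influence), per year
    q  = 0.38      # imitation coefficient (feedback from existing adopters), per year
    dt = 0.05      # Euler time step, in years

    adopters = 0.0
    for step in range(int(10 / dt) + 1):                     # simulate ten years
        if step % int(1 / dt) == 0:
            print(f"year {step * dt:4.1f}: adopters = {adopters:10,.0f}")
        potential = N - adopters
        adopters += (p + q * adopters / N) * potential * dt  # the 'new adopters' valve

Because the flow depends on the adopter stock it feeds, growth starts slowly, accelerates as word-of-mouth feedback builds, and then tapers as the pool of potential adopters empties.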

The System Dynamics approach has also been widely applied to typical supply chain scenarios, where the effects of feedback can be studied in the context of inventory management and shipping policies. The figure below shows a classic System Dynamics approach to modeling the supply chain from raw materials production through shipment of the finished goods. Feedback is represented by the directed arrows running from downstream stages back to upstream stages; in many cases these represent control mechanisms meant to keep the system in balance or within limits. Successful feedback usually dampens oscillatory behavior such as stock-outs, oversupply, or obsolete inventory items.
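
As an illustration of that kind of balancing feedback, the sketch below implements a toy stock-management loop (not the specific model in the figure): orders respond to the gap between target and actual inventory but arrive only after a supply delay, so the correction can overshoot and oscillate before it settles. All parameter values are invented for the example.

```python
def simulate_inventory(target=100.0, base_demand=10.0, step_demand=14.0,
                       step_time=10.0, adjust_time=2.0, supply_delay=4.0,
                       dt=0.25, horizon=60.0):
    """Toy stock-management feedback loop with a demand step at step_time."""
    inventory = target
    pipeline = base_demand * supply_delay    # orders placed, not yet received
    t = 0.0
    trace = [(t, inventory)]
    while t < horizon:
        demand = base_demand if t < step_time else step_demand   # demand step
        arrivals = pipeline / supply_delay                       # first-order delay
        orders = max(0.0, demand + (target - inventory) / adjust_time)
        inventory += (arrivals - demand) * dt
        pipeline += (orders - arrivals) * dt
        t += dt
        trace.append((t, inventory))
    return trace

if __name__ == "__main__":
    for t, inv in simulate_inventory()[::20]:   # print every 5 time units
        print(f"t = {t:5.1f}   inventory = {inv:6.1f}")
```

Running the sketch shows inventory dipping after the demand step and then returning to target through a damped oscillation, which is exactly the behavior a well-designed feedback policy is meant to produce.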

System dynamics models are classified as continuous simulation models and are typically built in languages designed specifically for them. They are quite different from discrete-event simulation models, which track individual entities and jump from event to event in time; system dynamics models are instead based on equations that describe (usually) continuous flows of material, people, money, influence, and so on. The models are highly useful as policy analysis tools, revealing the often unintended consequences of business rules. They can examine periods of years or decades while including external effects, such as the economy, and all the significant interactions within a business and with the outside world.

We look forward to bringing this exciting, powerful, and well-established analysis tool to bear on our clients' problems and opportunities.

This article was written by Joe Litko, Profit Point’s Business Optimization Practice Leader.

To learn more about our Business Optimization services, contact us.

Dwight Collins, Profit Point’s Green Supply Chain expert, attended the Sustainable Energy Conference at Cornell University and was interviewed by the Cornell Chronicle about his work on sustainable operations research. The conference, entitled “Sustainable Energy Systems: Investing in Our Future,” provided a full slate of talks that outlined the relationship between energy and the climate challenge and considered the viability of an array of solutions, ranging from conservation, petroleum, and coal to nuclear, solar, wind, geothermal, hydroelectric, and biofuel sources.

Collins, who teaches sustainable operations management at the Presidio School of Management, noted that “the OR profession is missing out on some major opportunities for leadership in the field of sustainable business.”

Operations Research (O.R.) creates value all around us: in business, the military, health care, and the public sector. Organizations ranging from small businesses to large enterprises are using O.R. to do more with less. The availability of faster computers and closer collaboration with IT departments gives us more and more opportunities to create value using O.R. Beyond improving effectiveness, O.R. delivers many other benefits:

  • Business insight: Providing quantitative and business insight into complex problems.
  • Business performance: Improving business performance through analysis and recommendations about business processes.
  • Cost reduction: Decreasing cost or investment.
  • Decision making: Assessing the likely outcomes of decision alternatives and uncovering better alternatives.
  • Forecasting: Providing a better basis for forecasting and planning.
  • Improved scheduling: Efficiently scheduling staff, vehicles, and machinery.
  • Planning: Applying quantitative techniques in support of operations, tactical planning, and strategic planning.
  • Pricing: Dynamically pricing products and services.
  • Productivity: Helping companies and organizations become more productive.
  • Profits: Increasing revenue or return on investment, increasing market share.
  • Project development: Demonstrating feasibility and workability and assisting with training.
  • Quality: Improving quality, and quantifying and balancing qualitative considerations.
  • Recovery: Gaining greater control, achieving turn-around.
  • Resources: Gaining greater utilization from limited equipment, facilities, money, and personnel.
  • Risk: Managing and reducing risk.
  • Throughput: Increasing speed or throughput, decreasing delays.

To learn more about how Profit Point’s consultants can help improve your supply chain, contact us here:

(866) 347-1130 or
(435) 487-9141

Send us an Email

Source: INFORMS

If you are reading this article, chances are that your job includes a diverse and challenging set of decision-making responsibilities. You may be focused on infrastructure planning, operations planning, scheduling, or some other aspect of the supply chain. Whatever the focus, the challenges you face on a daily basis require significant domain knowledge and a 360-degree view of your supply chain.

So as you face critical supply chain decisions, how do you know the choices you are making will result in the best solution? If you’re not sure, then operations research (O.R.) can help. Here are five simple rules of thumb to help you decide whether you could benefit from O.R.:

1) You face complex decisions. Are you faced with more decision factors than you can get your arms around? Do your key operational computer systems lack intelligence? O.R. professionals can analyze complex situations and build intelligence into key systems, so as to reveal the truly best options. But before you act, talk with an O.R. professional to confirm that taking an O.R. approach will pay dividends.

2) You’re having problems with processes. One or more of your processes is broken or needs to work a lot better. Many small, day-to-day decisions are not being made well, and it’s having an impact on your bottom line. O.R. can help improve these processes and their outcomes dramatically.

3) You’re troubled by risk. Do you want to limit or reduce risk? Assessing the risk of a new project or contract can be tricky. O.R. can help you quantify risk, which is key to controlling it. O.R. professionals can assist you in planning how best to balance risk against the gain you expect.

4) Your organization is not making the most of its data. Do you track information about your organization and have data that is begging to be used for decision making? O.R. specializes in working with data – extracting the most valuable information from what’s currently collected, and showing what additional data you could collect to increase the value even further.

5) You need to beat stiff competition. Others in your field are probably using O.R. to gain competitive advantage. Can you afford not to? An O.R. professional can help you stay on top with the latest methods – and open up important new sources of advantage.

To learn more about how Profit Point’s consultants can help improve your supply chain, contact us here:

(866) 347-1130 or
(435) 487-9141

Send us an Email

Source: INFORMS
