A Ticking Time Bomb at Lock 52?

February 3rd, 2017 6:50 am Category: Optimization, by: John Hughes

In an article back in 2015 (“The Weak Link in the Chain?”) I discussed the importance of the Soo Locks at Sault Ste. Marie, Michigan to barge transportation on the Great Lakes. This is a real choke point for marine shipments on the Great Lakes, and the Supply Chains of many businesses in the region are critically dependent on that system of dams and locks. Another important choke point for barge transport in the Upper Midwest is Lock 52 on the Ohio River at Brookport in Southern Illinois. No. 52 sits about 930 miles downstream from Pittsburgh and 23 miles upstream from the junction with the Mississippi.

As reported in a New York Times article, No. 52 was built in 1929 and is in serious need of replacement. In fact, the Army Corps of Engineers has identified 25 failure points at No. 52 and its nearby sister dam, No. 53, which sits slightly farther downstream on the Ohio. Both were built on foundations of wood pilings pounded into the sandy river bottom; the lock walls are cracking and sagging, metal is rusted, and seals are leaking. A critical crane used to operate the dam dates from 1937. When repairs are needed, replacement parts are frequently no longer available and have to be custom made in area machine shops. A replacement dam at Olmsted, Illinois, slightly downstream from No. 53, is way behind schedule and vastly over budget.

The operational efficiency of both No. 52 and No. 53 has a direct impact on a large sector of the economy. This critical bit of infrastructure actually handled more tonnage in 2015 than the Soo Locks on the Great Lakes. If there were a failure at either of these two structures, the price of corn and other grain-based products would skyrocket. And not only grain: prices of many other bulk materials moved by river barge, such as coal, cement, scrap metal, and iron and aluminum ingots, would rise. In addition, there would be increased pollution caused by shifting these marine cargoes to truck and rail, not to mention the toll that this extra traffic would take on the alternate transportation networks. For example, a typical barge configuration is 1 tow boat pushing 13 barges. It’s estimated that shifting this amount of cargo to alternate modes would require 225 train cars or 1,050 trucks.
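
For a back-of-the-envelope feel for those numbers, the per-barge equivalence falls straight out of the estimates quoted above (a minimal Python sketch; the figures are the article's, not new data):

    # Modal equivalence implied by the estimates above: one 13-barge tow
    # carries what 225 rail cars or 1,050 trucks would.
    barges_per_tow = 13
    rail_car_equivalent = 225
    truck_equivalent = 1050

    print(f"rail cars per barge: {rail_car_equivalent / barges_per_tow:.1f}")  # ~17.3
    print(f"trucks per barge:    {truck_equivalent / barges_per_tow:.1f}")     # ~80.8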

The age of these dams, and the fact that their operation involves many manual steps, has in fact led to delays and backups of river traffic resulting in millions of dollars of lost productivity. Recently, transiting both dams (a distance of only about 100 miles) was taking up to 5 days. And on Sept. 14, all traffic was halted for an additional 15 hours while emergency repairs were made to No. 52.

Since such a huge part of our economy is riding on the continued use of these two aging structures, we can only hope that nothing truly catastrophic happens to them before the new Olmsted facility is up and running. The condition of these dams truly highlights the need for increased spending on infrastructure in order to keep many Supply Chains functioning efficiently.

  • Comments Off on A Ticking Time Bomb at Lock 52?
  • Add comment

Adventures in refactoring: The Gambler’s perspective

December 2nd, 2016 4:12 pm Category: Optimization, by: Steve Cutler

Earlier this year I embarked on a project to do some major work on one of the applications that I inherited. I had been working with the code for several months and had a good grasp of how it worked, its layout, and all its main functionality. However, the overall architecture and layout of the code were making it increasingly difficult to do any work on it. One of the main features was becoming particularly problematic, and it happened to be the feature that got the most enhancement requests and bug fixes. With each bug fix and enhancement that was added, the code became more and more difficult to work with.

Around this time, my wife and I spent some time traveling with her grandparents. Her grandparents have a healthy love for “Country Western” music. Their house and cars are well stocked with albums from Merle Haggard, Johnny Cash, and Willie Nelson, among others. As we traveled with them, we listened nonstop to a compilation CD of Kenny Rogers’ Greatest Hits. We will never know for certain if we were treated (or subjected) to so much Kenny because of their love of the album, or because they didn’t know how to work the “newfangled” stereo in their truck. My money is on the latter! One song on the album, “The Gambler,” stuck out to me, and some of its lyrics became a bit of a theme song for me as I continued to work on the project.

You’ve got to know when to hold’em,
Know when to fold’em,
Know when to walk away,
Know when to run.

These lyrics kept coming to mind as I was trying to figure out if a code refactor was really in our best interest. We had a customer for the software, and we had made several releases to them. By all accounts the software worked well and we were getting close to the end of a development phase. So it was tempting to just keep the code as it was and implement the last couple of requests. On the other hand, the code was so fragmented and difficult to work with that the task of squeezing in a couple of simple feature requests (that should have taken a couple of hours to implement) would take days to thread in. To make matters worse, these changes usually resulted in several bug fixes which also took more time than they should have. And this was on code that I was familiar with and actively working on! What would happen several months or a year down the road when we needed to fix a bug, add another feature, or launch a new phase of development on the application? Would we hold’em? Fold‘em? Walk away? Or would it be time to run?

Developers at heart like clean starts. With a clean start, you get to make the choice of using new technologies and trying out different development patterns. And I am no exception. However, I wanted the decision of what to do to be based on what was best for the company, the application and our customers, not my desire to start clean and toy around with groovy new technologies.

It turns out that the conventional wisdom is to avoid a rewrite at all costs, favoring instead small incremental changes to the code. This approach was described by Joel Spolsky, CEO of Stack Exchange, in his blog post “Things You Should Never Do, Part I”. While the blog post is quite old, it is still valid and heavily cited in other articles on this topic. In this mindset, the old code may be ugly, inefficient, and hard to read and understand, but it is preferable because it has been tested more thoroughly, has been run on real-world computers by real-world users, and contains real-world bug fixes for scenarios that aren’t always obvious. And focusing efforts on rewriting code from scratch may very well leave the door open for a competitor to eat your lunch.

While all of this makes sense, you should never say never. In a post on Stack Exchange, Michael Meadows made some excellent observations on when it may be appropriate to do a full rewrite. He lists many things to consider, including deployment concerns, new developer ramp-up time on the project, project size, and automation, among others. Some of the more salient points for my scenario were:

  • Simple bug fixes take too long because of the complexity of existing code
  • New features take too long and cost too much because of the interdependence of the code base
  • The coupling of components is so high that changes to a single component cannot be isolated from other components
  • A redesign of a single component results in a cascade of changes not only to adjacent components, but indirectly to all components

With all of this information in mind, the decision was made to focus our efforts on a major rewrite, not of the whole application, but on one of the main features that was also the most problematic. However, because the code lacked isolation of features, the cascading changes eventually made it so almost the whole code base would need some sort of change. In the end, the decision was finally made to walk away from the old code.

When trying to decide how to clean up ugly or cumbersome code, there are many factors to consider. And there may not be one right answer. There can be a lot of good reasons to avoid a full rewrite of the code and if you can make the needed changes in small incremental steps, that is most likely the best approach. However, there are also good reasons to consider a full rewrite of the code when incremental changes are more expensive and time consuming.

In the end, the Gambler’s wisdom holds true: you need to know when to hold’em, know when to fold’em, know when to walk away, and know when to run.

  • Comments Off on Adventures in refactoring: The Gambler’s perspective
  • Add comment

Holiday Season and Supply Chain Management

December 1st, 2016 6:39 pm Category: Inventory Management, Supply Chain Planning, Warehouse Optimization, by: Karen Bird

It’s that time of year when supply chains that support the retail sector are tested. Planning, Procurement, Inventory Management and Logistics teams all have to work together to be successful. It all starts with placing bets on what, where and when consumers are going to buy during the holiday season months in advance. Some of the questions that have to be answered are: What will be the hot items for the holiday season? Will consumers purchase in-store or on-line? When during the period between November 1st and December 31st will consumers make their purchases?

As an example, at companies that have manufacturing capacity constraints in the fall, Demand Planners have to lock in their forecasts in May or June. Once the demand plan is locked in, the downstream supply chain teams can make their plans and weigh options to fulfill the forecast. Supply Planners weigh in-house versus contract manufacturing. Securing contract manufacturing early is less expensive and can be the difference between being in stock and out of stock. In addition, any long-lead-time raw materials needed to produce the final products have to be bought by the Procurement team as early as summertime. Lead times stretch even further if any raw materials or finished products have to be imported from overseas.

Once the products are produced, they have to be stored in warehouses until they need to be delivered to your local store shelf or your doorstep. Sometimes the standard warehouse that a producer uses runs out of capacity during peak season, so overflow warehouses have to be contracted. Determining where to stage inventory is one challenge that Logistics professionals have to handle in an evolving environment, with on-line shopping continuing to increase in comparison to in-store shopping. Cyber Monday sales this year hit a record of $3.45B, almost on par with in-store purchases made on Black Friday. The analysis can get murky because some consumers purchase on-line and pick up in-store.

When you shop this holiday season either in-store or on-line, think of the Supply Chain “elves” behind the scenes making your products available at your local store or delivering them to your doorstep. The products don’t get to you by accident, it takes months of work to make every purchase possible. Happy Holiday Shopping!!

  • Comments Off on Holiday Season and Supply Chain Management
  • Add comment

A Hero-Making Network Bridge

November 29th, 2016 5:37 pm Category: Network Design, Optimization, by: Jim Piermarini

Like many people I have a home network for which it falls to me – the Dad – to be the administrator. And like many home network administrators, over the years I have tried many different routers, network bridges, and Wi-Fi access points to improve reception, coverage, and distance. I have tried D-Link, EnGenius, and Linksys models, among others. And until now, I have always been disappointed in the ability of these name brands to fulfill my need for a good network bridge.

The situation:

I have an office in the barn, where I get my internet access via a Charter cable modem. This works great for business service, where I get speeds of 55-65 meg down and 4 meg up. However, I needed to extend coverage to the house (which is separated from the barn, with no wires between them, nor any possibility of running one) to keep the family supplied with their fix of Netflix movies. ‘Come on Dad, you have a good internet connection in the barn, why not here in the house too?’ was heard a little too often, and it offered me an opportunity to fill the role of conquering internet hero!
So I looked into the technology of making a wireless bridge to make the connection. A wireless bridge consists of a device on the barn network that broadcasts a wireless signal and a device on the house network that receives and rebroadcasts it. Each of these devices needs to ‘know’ about the other to make the connection secure. Getting a speedy connection over the network bridge has been my constant pursuit for the last several years, through many different hardware configurations. The spot of internet hero was still open and waiting for me to shine.

 

G-speed:

My first attempt was to use two old D-Link (DIR-xxx IEEE 802.11 a/b/g) routers in bridge mode (where DHCP is off and each acts as an access point only). I tested the throughput using a speed test (www.speedtest.net) from the house network, assuming that the bottleneck in speed would be in the bridge. The barn unit was about 70-80 feet away from the house unit. I was able to get consistent speeds of about 15-20 meg down and 3 meg up. Not so great. Not really hero stuff here.
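
Re-testing after every configuration tweak gets tedious, so it can help to script the measurement. A minimal sketch, assuming the third-party speedtest-cli Python package (pip install speedtest-cli), which tests against the same speedtest.net infrastructure:

    import speedtest  # third-party package: pip install speedtest-cli

    # Measure throughput over whatever link is currently in use.
    st = speedtest.Speedtest()
    st.get_best_server()             # pick the lowest-latency test server
    down_mbps = st.download() / 1e6  # results come back in bits per second
    up_mbps = st.upload() / 1e6
    print(f"down: {down_mbps:.1f} Mbps, up: {up_mbps:.1f} Mbps")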

 

N 300:

I tried two EnGenius units (IEEE 802.11n 300), which promised greater distance and speed due to the n specification, but I got about the same results. A bit better than g, about 20-22 meg steady, but not the dramatic knight on the white horse saving the day that I was looking for. Where was the promise of the new n specification for speed and coverage? Not applicable to this application, apparently. Still, no hero adulation coming from the house.

 

 

AC 1750:

Then I bought two Linksys AC1750 access points, which can be used in bridge mode. These replaced my existing n bridge devices, and I had some rather lengthy configuration sessions (it turns out that only ONE of the units gets put into bridge mode… who knew?). The connection was made, and I got a consistent 30 meg down, 4 up. So pretty good. At least it seemed that I could take some credit for the improvement, but still not hero material.

But I was sure there was a better solution. Where was the directional antenna? How does the unit control where the signal is going? If it doesn’t control that, isn’t much of the power wasted in directions away from the other unit?

 

 

Ubiquiti:

I pondered these questions until I came across this unit: the Ubiquiti NanoBeam 5AC 16. These were only about $100 each, so I bought two of them.

The advertised specs on these things are to be marveled at. At first I thought they were kidding… they measure their range in km, not meters. And my two units were separated by less than 100 feet!

Finally, these seemed to offer the hope of a bridge that would give me the kind of results I was looking for. They are designed to be pointed at each other. That makes sense. The parabolic dish shape suggests that the wireless beam is directed toward its target. So far so good. But what does it take to set them up? The Linksys setup was very time consuming and frustrating; was I going to stumble on the verge of putting on the hero mantle?

Results:

The moment of truth came quickly. Setup was pretty easy! Just plug them in and point them toward each other (there are blue light indicators to show signal strength, to help with beam alignment).
The UI of the device offers two really cool features: one for tracking the link throughput (which is what will give me hero status) and the other for seeing the signal-to-noise ratio. [Signal-to-noise chart]

[Throughput chart]

As you can see, I am able to get some SERIOUS speed, and thus the network bridge was no longer the bottleneck for the internet speed in the house. My speed is:

[Speed test result]

That is officially fast! This is essentially the same speed as in the barn, and that is enough to earn me hero laurels as the Dad administrator. The hero’s quest complete, the family is happy to enjoy many a cozy evening with their Netflix account humming.

 

As always, I am happy to relate what we are thinking about here at Profit Point, 

Jim Piermarini, internet hero


 

  • Comments Off on A Hero-Making Network Bridge
  • Add comment

A Lesson in Change-Management from the Recent Election

November 23rd, 2016 11:19 am Category: Global Supply Chain, Supply Chain Improvement, by: Ted Schaefer

In the wake of the recent election, there has been a lot of talk about the types of changes we’ll be facing over the next few years.  The continuing analysis of the election and a recent plane ride have given me a good refresher course on some of the critical factors that enable a successful change in an organization or doom it to failure.

The day after the election, I was traveling home to Houston and I took the advice that I give to my two college-age sons: if you really want to know what’s going on with a particular issue, make sure you get at least two different points of view, and the truth is likely to fall somewhere in the middle.  So I bought a copy of The New York Times and The Wall Street Journal to read their analyses of the election results.  Needless to say, the newspapers had some fairly different interpretations of the same sets of facts.  Sometimes it was a matter of drawing different conclusions from the same set of data; other times, the facts that each paper emphasized, and the order in which they appeared, would lead a reader in two different directions depending on which paper I was reading.

The following week, I was traveling home after our project team delivered the final presentation to our executive sponsors.  Our team had recommended a number of changes to the client’s supply chain, some fairly straightforward and others that would require a significant change in culture.  As it happens, I sat next to a gentleman who helps companies change cultures.  We had a good conversation, helped by our third row-mate, who bought drinks for the row, about a number of different things.  However, one thing that stuck with me was his premise that an organization’s results are determined by its culture.  In this organizational model, actions drive results, but beliefs drive actions.  Thus, to change the results in a company, one must change the beliefs held by the people who impact the results.

Once again, I was reminded that the key to a successful change is the people who run the process.  If they are not engaged and if they don’t believe that the change will be a good one, you’re in for a very rough ride.  Further, when trying to understand the current beliefs that drive the actions that drive the success of your change, it’s best to seek out more than one source of information.

  • Comments Off on A Lesson in Change-Management from the Recent Election
  • Add comment

Impact of 3-D Printing on Supply Chains

November 16th, 2016 5:02 pm Category: Optimization, by: John Hughes

One of the really intriguing developments in technology lately is the concept of 3-D printing. Who knows where it will lead … will General Motors or Toyota someday simply print whole automobiles as one large and seamless piece of plastic? Maybe Frigidaire will simply print up a bunch of new refrigerators.
But let’s not get too ‘blue-sky’ right now, since a more immediate and mundane use could have a huge impact on the Supply Chains of a whole swath of businesses, ranging from automobile repair to plumbing supplies. I had my car in for service recently. Once the mechanic diagnosed the problem, I was told that he would need to get the part from the local Toyota dealer he normally deals with. But it turned out that the dealer also didn’t have the part and would need to get it from some higher echelon location in Toyota’s supply chain.
Well, I can envision a future where the Toyota dealer could simply pull the specs for the replacement part off of a secure company website and print the part in a matter of an hour or two. The new item would probably be made out of plastic, but maybe in the far distant future they would give me the option of getting my replacement in metal if I wanted to spend some extra money.
I can see this kind of thing playing out in lots of scenarios. A plumbing problem in your home necessitates a new widget but your house is old and none of its components are the standard size and replacements are hard to find. Using 3-D printing, the plumbing supply house could simply download the necessary design and produce the part as needed.
Can you imagine how this might impact long-distance supply chains that reach across international boundaries half-a-world away? Using this technology, companies wouldn’t need to keep so much money tied up in replacement parts stored in multi-echelon supply chains. They would simply keep some amount of plastic stored at a relatively forward location that could be made into any one of a hundred different parts as demand would dictate.  The impact of this would be huge. There are whole sectors of the economy solely geared toward storing and moving parts and replacements forward along a supply chain. For better or worse, 3-D printing is going to have a tremendous impact on just what it means to “manufacture” something.

  • Comments Off on Impact of 3-D Printing on Supply Chains
  • Add comment

Freeze Dried Fuzzy Logic

October 27th, 2016 3:06 pm Category: Optimization, by: Deanna Wenstrup


 

My hobby is backpacking. When you backpack you carry everything on your back, so you quickly become obsessed with how much everything weighs. One of the biggest challenges I typically face is meal planning – food is heavy, bulky and often perishable.  As a result of this challenge, backpackers typically carry freeze dried food.

Freeze-drying is a complex and amazing process.  Food is frozen to between -30 and -50 degrees Fahrenheit. Once the food is frozen, the freeze dryer creates a vacuum in the food chamber. The food is gradually warmed, and since water in a vacuum cannot exist in a liquid state, any water in the food sublimates, turning directly from a solid to a gas. The gas accumulates as ice on the sides of the food chamber.

Recently I purchased a home freeze dryer so I could freeze dry a wide variety of foods to take on our trips. It is a simple-to-use yet complex piece of technology. I place the food in the machine, push the start button, and it lets me know when the food is ready. The entire process is automatic and controlled by a computer, all thanks to fuzzy logic.

Fuzzy logic is an approach to computing based on “degrees of truth” rather than the usual “true or false” (1 or 0) Boolean logic on which the modern computer is based.

The fuzzy sets theory was first proposed by UC Berkeley professor Lotfi Zadeh in 1965. This led to fuzzy logic, which was proposed in 1973. Fuzzy sets theory has to do with mathematical sets or groups of items known as elements. In most mathematical sets, an element either belongs to the set or it doesn’t. For example, a Blue Jay would belong to a set of birds, but a bat would not since it is a mammal. In fuzzy logic, elements can belong to sets in varying degrees. So since a bat has wings, it might belong to a set of birds — but only to a certain extent.

Fuzzy logic is a way to program machines so they look at the world in a more human way, with degrees of truth. Instead of hard parameters and strict data sets where the answer is either true or false (1 or 0), fuzzy logic assumes a more practical approach. Using numbers, it incorporates non-definitive words like “slightly” or “almost” into its decision-making processes.
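
To make the idea concrete, here is a minimal sketch (my own illustration, not part of the original post) of a fuzzy membership function in Python: instead of a hard true/false cutoff, a temperature belongs to the set “warm” to a degree between 0 and 1.

    def membership_warm(temp_f: float) -> float:
        """Degree (0..1) to which a temperature is 'warm'.
        Triangular membership: 0 at 60F and 90F, peaking at 75F."""
        if temp_f <= 60 or temp_f >= 90:
            return 0.0
        if temp_f <= 75:
            return (temp_f - 60) / 15.0  # rising edge
        return (90 - temp_f) / 15.0      # falling edge

    for t in (55, 65, 75, 85):
        print(t, round(membership_warm(t), 2))  # 0.0, 0.33, 1.0, 0.33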

As a result, the use of fuzzy logic in freeze dryers helps to ensure properly freeze dried food because it gives the appliance the ability to make judgment calls similar to those a person might make. Rather than just running for a fixed period of time like 12 hours, the freeze dryer adjusts drying time and vacuum pressure based on factors such as the rate of sublimation and the moisture content of the food.

Although we probably don’t realize it, fuzzy logic is all around us and used in a variety of everyday items:

  • Air Conditioners: Old ACs used a simple on-off mechanism. When the temperature dropped below a preset level, the AC was turned off; when it rose above a preset level, the AC was turned on, with a slight gap between the two preset values to avoid high-frequency on-off cycling. An example would be “When the temperature rises above 70 F, turn on the unit, and when the temperature falls below 69 F, turn off the unit.” A fuzzy controller instead uses rules like “If the ambient air is getting warmer, turn the cooling power up a little; if the air is getting chilly, turn the cooling power down moderately.” The machine runs more smoothly as a result and gives more consistent, comfortable room temperatures (a minimal sketch of this rule style appears after this list).
  • Automatic Gear Transmission System: It uses several variables like speed, acceleration, throttle opening, the rate of change of throttle opening, engine load and assigns a weight to each of these. A Fuzzy aggregate is calculated from these weights and is used to decide whether to shift gears.
  • Washing Machine: Senses the load size, detergent amount, etc., and keeps track of the water clarity. At the start of the cycle, the water is clean and allows light to pass through it easily. As the wash cycle proceeds, the water becomes discolored and allows less light to pass through. This information is used to make control decisions.
  • Reading: Interpreting hand-written input characters for data entry.
  • Cell phone texting: Uses previously sent texts to determine the probable next word in the sentence you are typing.
  • Television: A Fuzzy logic scheme uses sensed variables such as ambient lighting, time of the day and user profile to adjust parameters such as screen brightness, color, contrast, and sound.
  • Criminal Search System: Helps in a criminal investigation by analyzing photos of the suspects along with characteristics from witnesses like “tall, young-looking…” to determine the most likely criminals.
  • Online Disease Diagnostic System: Analyses the user’s symptoms and tries to identify the disease he/she may be suffering from.
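
As a rough illustration of the air conditioner example above, here is a hedged sketch (again my own, with made-up rule weights) of how such fuzzy rules can be blended into a smooth control action instead of a hard on/off switch:

    def degree_getting_warmer(trend_f_per_hr: float) -> float:
        # 0 when there is no warming trend, 1 at 2F/hour or more
        return max(0.0, min(1.0, trend_f_per_hr / 2.0))

    def degree_getting_chilly(trend_f_per_hr: float) -> float:
        # mirror image for a cooling trend
        return max(0.0, min(1.0, -trend_f_per_hr / 2.0))

    def cooling_power_adjustment(trend_f_per_hr: float) -> float:
        """Weighted sum of two fuzzy rules (simplified Sugeno style):
        'getting warmer -> cooling up a little'     (+10%)
        'getting chilly -> cooling down moderately' (-25%)"""
        return (degree_getting_warmer(trend_f_per_hr) * 10.0
                + degree_getting_chilly(trend_f_per_hr) * -25.0)

    for trend in (-2.0, -0.5, 0.0, 1.0, 2.0):
        print(trend, cooling_power_adjustment(trend))  # -25.0, -6.25, 0.0, 5.0, 10.0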

So the next time you look at an appliance and ponder how it works, it might be looking at you and thinking the same thing.

 

Meaningful Metrics in Supply Chain

October 10th, 2016 11:39 pm Category: Global Supply Chain, MIMI, Scheduling, Supply Chain Planning, by: Karen Bird

For years we’ve been hearing about Big Data. Now the call is to make the data visible and actionable. Easier said than done. Remember when we wanted our music, phone and camera on one device instead of having to carry multiple devices? Data Visualization is that desirable right now, and it is just as challenging to do well. Here’s why:

 

Challenge #1: Properly defining the question that you want the data to answer

In the world of supply chain, the leaders typically want all of the data summed up into Good News or Bad News. For example, at the end of a monthly S&OP meeting one of the key questions that gets asked is: Can Sales continue to promote product A? For Operations to give a Yes or No answer, a timeframe has to be defined. Once the timeframe is agreed, Operations can answer the question by building a heat map for every product or family of products (if that makes the data more manageable). The heat map can then be given to Sales at the end of the monthly S&OP.

[Example heat map chart]
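
A minimal sketch of how such a heat map might be generated with pandas and matplotlib (my own illustration; the products, weeks, and yes/no coding are invented, not the client's actual report):

    import pandas as pd
    import matplotlib.pyplot as plt

    # Rows: products; columns: weeks; 1 = supply can support the promotion, 0 = not.
    availability = pd.DataFrame(
        [[1, 1, 0, 0], [1, 1, 1, 1], [0, 1, 1, 0]],
        index=["Product A", "Product B", "Product C"],
        columns=["Wk 1", "Wk 2", "Wk 3", "Wk 4"],
    )

    fig, ax = plt.subplots()
    ax.imshow(availability.values, cmap="RdYlGn", vmin=0, vmax=1)  # red=no, green=yes
    ax.set_xticks(range(len(availability.columns)))
    ax.set_xticklabels(availability.columns)
    ax.set_yticks(range(len(availability.index)))
    ax.set_yticklabels(availability.index)
    ax.set_title("Can Sales continue to promote?")
    plt.show()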

Challenge #2: Cleaning up dirty data

This is where most organizations get stuck. Cleaning up the data is tedious work but it has to be done or the metric is useless. Take heart, sometimes identifying and fixing the issues with the data is meaningful on its own. Also, think about what decisions that dirty data is influencing on a daily basis or the time that it takes to explain variances every month.

 

Challenge #3: Developing graphics that tell the story at a glance with the push of a button

You have to work with your audience to determine what graphics work for them. I find that it’s best to create something and then get feedback. This step can be a bit of trial and error, but once you have the design locked in, you need a skilled developer to automate the report-out. The end users really appreciate it if they can easily run the reports and generate the charts and graphs on demand with the push of a button. If it is complicated or requires many manual keystrokes to generate the charts and graphs, then that report-out will not be sustainable.

 

Challenge #4: Making the data actionable

Congratulations on making it to this step. You have put so much effort into getting here, and now all you have to do is summarize thousands or even millions of data points across multiple parameters in a way that helps the receivers of the results take action. If you can monetize the results by showing costs or savings, that will give the receivers of the output direction to either keep doing what they’ve been doing or incentive to make a change. Or, if you can summarize the data into categories that are meaningful to the audience, then they will know where to focus their time and energy to make improvements.

Here is an example of a chart that answers the question: How good is my schedule? This chart, along with five other supporting charts, can be generated on demand in 30 seconds.

[Schedule quality chart]

At Profit Point, we work with our clients to overcome the challenges with Data Visualization and develop Meaningful Supply Chain Metrics. Contact us at www.profitpt.com or at 610-645-5557 and we will be happy to assist you.

  • Comments Off on Meaningful Metrics in Supply Chain
  • Add comment

The benefits of using Bing Map API and Telerik controls for WPF

August 21st, 2016 11:44 pm Category: Profit Vehicle Planner, by: Steve Cutler

In December of 2014, Microsoft officially discontinued its popular mapping and routing software, Microsoft MapPoint. With a number of internally developed products that make use of MapPoint, it quickly became obvious that the time had come to modernize these applications to make use of more modern cloud-based technologies. After assessing the different options available to us, we came to settle on Bing Maps as a replacement for the now-defunct Microsoft MapPoint.

Out of the box, the Bing Map API provides developers with AJAX controls that can easily be embedded in a website. And with the addition of the Bing Maps Representational State Transfer (REST) API, developers can easily add address geocoding, calculate routes (including turn-by-turn directions) through multiple points, and even factor current traffic levels into the calculated drive time. And to make things even better, Bing Maps is all cloud-based, so our users no longer have to worry about how current their maps are.
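
As a small illustration, geocoding an address through the Bing Maps REST Locations service takes only a few lines. A sketch, assuming the Python requests package and a valid Bing Maps key (error handling omitted):

    import requests

    BING_MAPS_KEY = "YOUR_KEY_HERE"  # placeholder; a real Bing Maps key is required

    def geocode(address: str):
        """Return (latitude, longitude) for an address via the Bing Maps REST API."""
        resp = requests.get(
            "http://dev.virtualearth.net/REST/v1/Locations",
            params={"query": address, "key": BING_MAPS_KEY},
        )
        resp.raise_for_status()
        resources = resp.json()["resourceSets"][0]["resources"]
        return tuple(resources[0]["point"]["coordinates"])  # [lat, lon]

    print(geocode("One Microsoft Way, Redmond, WA"))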

Bing’s Map API is primarily written for web development and integrates well with HTML5 and a host of different JavaScript libraries. All of this is well and good; however, we needed to integrate Bing Maps into a WPF application. To be fair, Microsoft does provide an SDK that allows you to accomplish this very task. In my scenario, though, we were already making use of the excellent Telerik UI for WPF controls in other places within the application. And as luck would have it, Telerik already had a fully functional WPF map control.

Bing’s Mapping API brings a lot of different features to the table and Telerik adds substantially to the offering all while cutting down on the amount of code a developer has to write to get things going. And since the Telerik controls were already included in the project this also saved us from adding yet another dependency to the mix.

All in all, this solution provided huge benefits to us in terms of a reduced footprint, the ability to customize controls and map behavior, and a reduction in the amount of code required to do common tasks like geocoding an address or plotting a route on the map. And best of all, our applications now enjoy a certain measure of being “future-proof”.

  • Comments Off on The benefits of using Bing Map API and Telerik controls for WPF
  • Add comment

Data, Data, Data!

August 8th, 2016 3:45 pm Category: Global Supply Chain, MIMI, Optimization, SAP Integration, Scheduling, Supply Chain Improvement, by: Mark Rockey

Here at Profit Point, we typically put in a fair amount of effort up front to scope out a project together with our client.  This helps us and our client set appropriate expectations and develop mutually agreeable deliverables.  These are key to project success.  But another key element of project success is getting good quality data that will allow our clients to make cost effective decisions from the analysis work we are doing or the software tool we are implementing.

Decision support models are notorious data hogs.  Whether we are working on a strategic supply chain network design analysis, implementing a production scheduling tool, or building some other optimization model, they all need lots and lots of data.

The first thing we do (usually as part of our scoping effort) is identify each of the data types that will be required and the source of each.  To do this we start with the decisions that are to be made and the data required to make them successfully.  From there we identify whether the data currently exists in some electronic form (such as an MRP system) or whether it will have to be collected and entered into some system (say a spreadsheet or database program), and then figure out how the data will get into the tool we are developing.

Second, we try to get sample data from each data source as early as possible.  This allows us to see if the assumptions that were made as part of the scoping effort were valid.  There is nothing like getting your hands on some real data to see if what you and your team were assuming is really true!  Often there are some discoveries and revelations that are made by looking at real data that require design decisions to be made to be able to meet the project deliverables.

Third, to help with data validation we find it extremely helpful to be able to visualize the data in an appropriate way.  This could take the form of graphs, maps, Gantt charts, etc. depending on the type of data and model we are working on.  On a recent scheduling project, we had the schedulers review cycle times in a spreadsheet but it wasn’t until they saw the data in Gantt chart form that they noticed problems with the data that needed correcting.
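
As an illustration of that last point, a few lines of matplotlib are enough to turn a cycle-time table into a rough Gantt chart (a sketch with toy data, not the project's actual tool):

    import matplotlib.pyplot as plt

    # (activity, start hour, duration in hours) -- toy cycle-time data
    activities = [("Batch A", 0, 8), ("Batch B", 8, 3), ("Batch C", 11, 14)]

    fig, ax = plt.subplots()
    for row, (name, start, duration) in enumerate(activities):
        ax.broken_barh([(start, duration)], (row - 0.4, 0.8))  # one bar per activity
    ax.set_yticks(range(len(activities)))
    ax.set_yticklabels([a[0] for a in activities])
    ax.set_xlabel("Hours")
    ax.set_title("Cycle times as a Gantt chart")
    plt.show()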

Identifying data sources, getting data as early as possible and presenting the data in a visualized form are absolutely required to make a project successful.  Omitting any of these steps will at least add to the project cost and / or duration or possibly doom the project to failure.

Idempotency – a key to better code

August 7th, 2016 9:54 am Category: Aspen SCM, MIMI, Optimization, Supply Chain Software, by: Jim Piermarini


Recently I found a term that intrigued me. Idempotency.
From the web I saw this definition that I liked: In computing, an idempotent operation is one that has no additional effect if it is called more than once with the same input parameters.

Much of my computing experience has been dedicated to the transformation of data. I have written countless routines that transform data in one way or another, from constructing scheduling applications to assigning inventory to orders, and I support many applications that manage data in various ways, with various syntaxes and data structures. Invariably, the sections of code that keep me up at night and cause me the most angst in support are those that were developed without this concept of Idempotency incorporated into their design. When one makes a routine that operates on a data set without considering what happens if that routine runs again, it will eventually cause grief in a support call. I learned long ago that operations of this sort must be designed so they do not accumulate their effect on the data. They must be ‘smart’ enough not to cause errors when they are run again, because in complex systems the control of when the routines are run may not be under the control of the programmer (think jobs or a multi-user UI), and even if it is, the programmer may end up calling the routine again for other reasons. If the routine does not adhere to the concept of Idempotency, it can be very tricky to understand how the data got into the state it is in when the user calls. Often my most difficult troubleshooting issues are these types of problems. So I read with keen interest about this concept, which was well enough defined to help me keep it in the forefront when designing new applications.

Some examples when using Idempotency is critical are: netting inventory, re-scheduling activities, parsing address data into fields, and in some cases, adding records to a record set. In all these examples, the code needs to be aware of whether the transformation operation was already performed.

Adding Records to a data set: Let’s say you are accumulating records in a data set from various data sources, like names from each department’s employee databases. If you have already appended the finance department’s data to the master table, then appending it again will cause duplicates. Obviously there are many techniques to prevent duplicates in a database, but let’s explore how Idempotency can help. If the appending routine is designed with Idempotence, it can be run anytime, and as many times as you like without adverse effect (like duplicates). To incorporate this into the append routine, ensure your data set has a text field to hold the name of the source of the data. I usually put in the name of the action query or stored procedure that creates the records. Then the first part of the routine can query the data set to see if this action has been run previously, and if so, either terminate or remove the records before executing the append of new records. In this way, running the routine multiple times for the finance department will replace the finance department’s names in the master table.
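
A minimal sketch of that pattern in Python with sqlite3 (the table and query names here are my own invention for illustration):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE master_names (name TEXT, source TEXT)")

    def append_department(conn, source: str, names: list) -> None:
        """Idempotent append: re-running for the same source replaces, not duplicates."""
        conn.execute("DELETE FROM master_names WHERE source = ?", (source,))
        conn.executemany(
            "INSERT INTO master_names (name, source) VALUES (?, ?)",
            [(n, source) for n in names],
        )
        conn.commit()

    # Running twice leaves exactly one copy of the finance names.
    append_department(conn, "qryAppendFinance", ["Ada", "Grace"])
    append_department(conn, "qryAppendFinance", ["Ada", "Grace"])
    print(conn.execute("SELECT COUNT(*) FROM master_names").fetchone())  # (2,)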

Netting Inventory: When dealing with inventory, I typically read the value from the system of record, and netting happens there. However, let’s say you need to carry a book inventory value in your local system and net that inventory as the user enters adjustments to it every day. The netting logic can be complex. It begins with the starting inventory; adjustments are accumulated and applied to become a new starting inventory value. If the adjustments are applied to the starting inventory more than once, the value will drift away from reality, making it unusable. To prevent this and apply the concept of Idempotency, I carry three inventory fields: Inventory (both the starting inventory and the resulting adjusted inventory), Original Inventory, and Adjustments to Inventory. When the adjustment field changes via the UI, I replace the Original Inventory field with the contents of the Inventory field. After this I can apply the transformation (repeatedly) to the entire data set to calculate Inventory = Original + Adjustment. Additionally, I time-stamp the record when the transformation is applied and when the Adjustment is entered. The UI can compare the Adjustment time-stamp to the Transformation time-stamp to see how to treat the Adjustment, either as a replacement or an accumulation. If the Adjustment time-stamp is later than the Transformation time-stamp, the Transformation has not yet been run with this Adjustment; in that case, the UI might accumulate any new user adjustment into the field. If the transformation has already been run, then the UI would replace the Adjustment.
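
Here is a compact sketch of the three-field scheme just described (the record layout is my own illustration of the idea):

    from dataclasses import dataclass

    @dataclass
    class InventoryRecord:
        inventory: float    # starting and resulting adjusted inventory
        original: float     # snapshot taken when an adjustment is entered
        adjustment: float   # user-entered delta

    def apply_netting(rec: InventoryRecord) -> None:
        """Idempotent transformation: Inventory = Original + Adjustment.
        Running it twice yields the same result as running it once."""
        rec.inventory = rec.original + rec.adjustment

    rec = InventoryRecord(inventory=100.0, original=100.0, adjustment=0.0)
    rec.original = rec.inventory   # UI snapshots Inventory when adjustment changes
    rec.adjustment = -5.0
    apply_netting(rec)
    apply_netting(rec)             # safe to re-run: no drift
    print(rec.inventory)           # 95.0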

Aspen SCM (MIMI) Scheduling routines: Another area where this concept of Idempotency is important is when using some scheduling techniques in Aspen SCM Plant Scheduler.  Sometimes it is useful to move all the activities off of one or more reactor facilities to a temporary holding place (similar to a queue) so they can be re-scheduled one by one on the best reactor at the most appropriate time.  This is a powerful technique that allows the routine to prioritize activities to meet customer demand and maximize capacity utilization on the reactors.  However, if Idempotency is not considered during the design of this routine, the results can be devastating to the quality of the schedule.  Let’s say the routine fails during the re-scheduling portion: the reactors are partially filled, and the temporary holding place is loaded with activities. Since multiple reactors are the source of the activities, the temporary holding facility would be overloaded in time, with activities extending beyond the end of the scheduling horizon.  Executing the routine again from this state would erase all of the activities in the temporary holding place, and with them much of the schedule.  Incorporating Idempotency into the routine means planning a path to recover these activities in the case of a failure or a re-run.

 

It turns out there are several other related terms that are interesting as well. Again from the web:
NULLIPOTENT: If an operation has no side effects, like purely displaying information on a web page without any change in a database (in other words you are only reading the database), we say the operation is NULLIPOTENT. All GETs should be nullipotent. Otherwise, use POST.

IDEMPOTENT: A message in an email messaging system is opened and marked as “opened” in the database. One can open the message many times but this repeated action will only ever result in that message being in the “opened” state. This is an idempotent operation.

NON-IDEMPOTENT: If an operation always causes a change in state, like POSTing the same message to a user over and over, resulting in a new message sent and stored in the database every time, we say that the operation is NON-IDEMPOTENT.
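
A tiny sketch contrasting the last two definitions (my own illustration):

    messages = {42: {"body": "hello", "state": "new"}}
    outbox = []

    def mark_opened(msg_id: int) -> None:
        """IDEMPOTENT: opening a message many times leaves it 'opened' once."""
        messages[msg_id]["state"] = "opened"

    def post_message(body: str) -> None:
        """NON-IDEMPOTENT: every call appends a new message."""
        outbox.append(body)

    mark_opened(42); mark_opened(42)        # state is simply 'opened'
    post_message("hi"); post_message("hi")  # outbox now holds two messages
    print(messages[42]["state"], len(outbox))  # opened 2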

Reading about and exploring these terms has reinforced and put a name to a concept that through experience I have come to understand has major consequences. Now that I can name the concept, hopefully I can be more concise in explaining to others the need this concept addresses, and write better code too.

 

Jim Piermarini – Profit Point Inc.

  • Comments Off on Idempotency – a key to better code
  • Add comment

Preserving Entrepreneurship in a Merger Crazy World

July 24th, 2016 11:30 am Category: Optimization, by: John Hughes

I recently read two separate articles about recent mergers in the Consumer Packaged Goods (CPG) market that seem to reach contradictory conclusions, and this has gotten me thinking about just who is right. The two items that I’m referring to are “With Competition in Tatters, the Rip of Inequality Widens” (in the New York Times of July 13) and “Invasion of the Bottle Snatchers” (The Economist, July 9th). Both of these touch on the pending merger of Anheuser-Busch InBev (AB) and SABMiller into a single organization.
The Times article states that the beer merger will restrict the ability of a whole slew of small craft brewing companies to distribute their products through AB’s ubiquitous distribution network. This was a service that I was certainly never aware AB provided to small companies. The Times feels that this is simply a continuation of the decline of competition and entrepreneurship in the American economy, with negative impacts on a whole range of social and economic problems.
The Economist is much more sanguine about the consolidation in the CPG industry. They point to the fact that, generally speaking, the giant CPG companies are quite slow and lead-footed in responding to consumer tastes. Small, entrepreneurial firms, on the other hand, are very adept at using various aspects of the internet, such as social media and online reviews, to get their message out and build brand loyalty. Building distribution channels can be accomplished through online sales. And if a consumer is willing to pay more for a product, it probably will not be for a traditional big brand: in a survey by Deloitte, one-third of American consumers said they were willing to pay at least 10% more for the “craft” version of a particular good. The Economist article clearly feels that entrepreneurship is alive and well: I would add the stream of new consumer products regularly on display on the television show “Shark Tank” as further evidence.
So who’s right and what can be done to preserve ‘the little guy’ and foster entrepreneurship? I suppose they’re both correct. I think the key is that when regulators consider a proposed corporate merger, they must be aware of ALL the markets that will be affected in order to minimize any “collateral damage” to the economy. It is not enough to look at only ways in which the candidate companies directly compete, but rather all of the markets that they service and all of the other organizations that will be affected in a post-combination world. So if large companies like AB do in fact provide services to smaller firms, then regulators need to incorporate the impact of a proposed merger on the larger economy as a whole when deciding whether the new combination should be allowed to proceed.
Something like this already happens frequently. In the case of the pending (as of mid-2016) Dow/DuPont merger, for example, the proposal would actually create at least 2 new independent companies and not just 1 single behemoth, so as to ensure competition in particular markets. So in the AB/SABMiller situation, regulators should include the impact on the craft brewers when determining their legal position. And such calculations should be made part of the regulatory decision-making process for any future consolidations of CPG companies and anywhere else in the larger economy.

  • Comments Off on Preserving Entrepreneurship in a Merger Crazy World
  • Add comment

Lessons from the Economic Crisis in Venezuela

July 14th, 2016 6:03 pm Category: Global Supply Chain, Optimization, Supply Chain Planning, Sustainability, by: Deanna Wenstrup

Several years ago I started collecting coins. I love the beauty of a nicely preserved coin; just looking at the year on the coin takes me back to that time and place in history.

In addition to the usual proof sets most numismatists collect, I also like to collect coins that reflect unique times in history such as US steel war pennies and Japanese occupation dollars.

A few years ago I started collecting hyperinflation currency – currency issued by a country during a time of hyperinflation. Hyperinflation is extremely rapid or out-of-control inflation – a situation where price increases are so out of control that the concept of inflation is essentially meaningless. Hyperinflation often occurs when there is a large increase in the money supply not supported by gross domestic product growth, resulting in an imbalance in the supply and demand for the money. This causes prices to increase as the currency loses its value. Soon even the largest denominations of the country’s currency have so little buying power that consumers need to bring a wheelbarrow of currency just to buy a loaf of bread. To respond to this, the government begins to issue larger and larger denomination bills. Finally the denominations reach such ludicrous levels and have so little value that the currency totally collapses. One of the most famous examples of hyperinflation occurred in Germany between January 1922 and November 1923. By some estimates, the average price level increased by a factor of 20 billion, doubling every 28 hours.

One of my favorite pieces of hyperinflation currency in my collection is my 100 Trillion Dollar bill from Zimbabwe, where by 2008 the inflation rate had reached 231,150,888.87%. Use of the Zimbabwean dollar as an official currency was effectively abandoned on April 12, 2009.

Venezuela is currently experiencing hyperinflation. According to estimates released by the International Monetary Fund, inflation in Venezuela is projected to increase 481% this year and 1,642% next year. To put that in perspective, in February of 2016 a McDonald’s Happy Meal in Caracas cost $146 at the official exchange rate of 6.3 bolivars per dollar.

So how does a country with more oil reserves than Saudi Arabia end up with armed guards on food delivery trucks, 3 mile long lines to buy basic food, people dying due to lack of basic medical supplies and more poverty than Brazil?

1)      Price Controls

As part of his Bolivarian Socialist Revolution, Chavez implemented price controls on goods. The government set the prices at which retailers could sell goods.  If you violated a price control and sold your goods for a higher price, the government would jail the business owner and nationalize (seize) the business. As a result of these price controls, it cost farmers more to grow their products than they could sell them for, and it cost factories more to produce an item than they were allowed to sell it for. The logical conclusion to this scenario occurred: the farmers stopped growing crops and the manufacturing facilities stopped producing goods. The government’s response was to jail the business owners and seize their factories and farms. The Venezuelan government was totally unqualified to run these factories and farms; as a result, they have all been shuttered.  This led to a huge imbalance in trade, and Venezuela started to import almost everything, from basic foods to medical supplies. This works only as long as the government has the huge revenue income required to support those types of subsidies.

2)      Oil Prices have Fallen

For years, the country has been deeply dependent on its vast oil reserves, which account for 96 percent of export earnings and nearly half its federal budget. That was manageable when oil was selling at more than $100 a barrel. Venezuela now needs oil prices to reach $121 per barrel to balance its budget; however, oil is hovering around $50 per barrel.  Add to that the fact that oil from Venezuela is very thick and difficult to refine, making it less desirable than light sweet crude such as Brent. This has forced Venezuela to import oil to blend with its own to make it saleable in the current market.

3)      Crippling Foreign Debt

Since 2005, Venezuela has borrowed $50 billion from China as part of an oil-for-loans agreement. Venezuela exports more than 600,000 barrels a day to China; however, nearly 50 percent of that goes to paying off its existing debt to China. The situation has gotten so bad that Venezuela is selling its gold reserves to pay foreign debt obligations.

4)      Currency is in Freefall

Venezuela’s bolivar recently fell past 1,000 per U.S. dollar in the black market. That means that the country’s largest denomination note of 100 bolivars is now worth less than 10 U.S. cents. The currency has lost 81 percent of its value in the last 12 months. This makes a bad situation much worse for a country that imports almost every basic need. To truly understand how bad hyperinflation is getting in Venezuela, consider that a doctor working in the country’s national health care system makes $15 per month. As of this writing, a 1 kg bag of apples in Caracas costs $18, a liter of whole milk $5.14, and a 40” flat screen TV $5,889 (U.S. dollars, assuming an exchange rate of 0.15748 USD per Venezuelan bolívar; source is a crowd-sourced cost-of-living comparison site).

Sadly for the good people of Venezuela, it is almost inevitable that their currency, the Bolivar, is destined for my hyperinflation currency collection. But what is the lesson that we as Supply Chain professionals can take from this tragic situation? Perhaps that supply chains and markets must be free to find their own price and value; and that governments cannot run a government properly, much less a factory.

 

  • Comments Off on Lessons from the Economic Crisis in Venezuela
  • Add comment

Add Total Delivered Cost Variances to Manage Your Supply Chain

July 11th, 2016 6:06 pm Category: Distribution, Global Supply Chain, Network Design, Optimization, Supply Chain Optimization, Transportation, by: Ted Schaefer


It is often said that you can only improve what you measure. To that end, there has been a lot of progress in performance tracking and activity-based costing over the past 10 years.  With the advent of better activity-based costing, leading companies generate monthly manufacturing variance reports at a detailed and actionable level.  However, this does not appear to be the case in the supply chains of many of those same companies.  At the end of this post, I’ll recommend some specific supply chain metrics to guide your supply chain improvement.

We routinely find that many companies have a very limited understanding of their supply chain costs: what they are, where they come from or why they’re happening.  In a typical engagement with a new client, one of the first things we do is develop a picture of their supply chain current state with respect to flows, cost and service.  We work with the client to gather all of the available information, which is much too often a very formidable task, until we can assign the cost from each operation that touches a product or intermediate from the time it is a raw material until it is delivered as a final product to the customer.

[Total Delivered Cost example chart]

When the project team first presents the results to management, we invariably hear, “We don’t do that,” or “Those costs must be wrong.”  Unfortunately, we sometimes hear, “There is no way we’re losing that much money at that customer.”

Clearly, there are times when the team learns something new and we have to adjust the costs.  However, in the majority of cases we walk through the elements of the costs with management and the realization sets in that the numbers are correct and the costs really are that high.  Now that we have all seen the true picture of the supply chain we can align on the effort required to improve it.

Supply chain managers, like their manufacturing counterparts, should demand ongoing metrics at the operational level that are actionable if they want to drive improvement in their supply chains.  Reports that provide only the total freight spend, total warehouse spend or total person-hours worked in the supply chain vs. the plan don’t contain enough actionable information to drive performance.

I propose the following metrics as a starting point for managing the total delivered cost to the customer base and welcome your feedback on any metrics that I might have missed or that might replace one I’ve suggested.

Total Delivered Cost Supply Chain Metrics, a Start:

  • Actual vs. target for shipping containers
    • Actual loaded vs. the maximum allowable capacity for the commodity and shipping container combination

     

  • Actual vs. planned cost-to-serve variance reports at the customer/product level of detail, with specific variances called out for the following (a sketch of the computation appears after this list):
    • Cost of Goods Sold (COGS)
    • Mode exception (shipped by a premium mode of transport vs. the planned mode)
    • Sourcing exception (shipped from a different location than the planned source)
    • Fill exception (the difference in cost if the shipping container were filled to the maximum allowable capacity)
    • Volume variance (total volume shipped vs. the planned volume to allocate fixed costs)
    • Mix variance (change in the mix of products shipped vs. the plan and its impact on cost)
    • Price variance (change in the price charged by carriers and other logistics service providers vs. the planned price)
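
To make the variance reports concrete, here is a minimal pandas sketch of two of the calculations above, the mode exception and the price variance (the column names and costs are invented for illustration):

    import pandas as pd

    shipments = pd.DataFrame({
        "customer":     ["C1", "C1", "C2"],
        "planned_mode": ["rail", "truck", "truck"],
        "actual_mode":  ["truck", "truck", "truck"],
        "planned_cost": [1000.0, 400.0, 650.0],
        "actual_cost":  [1500.0, 420.0, 640.0],
    })

    # Mode exception: extra cost on shipments that moved by a premium mode vs. plan.
    moved = shipments.actual_mode != shipments.planned_mode
    mode_exception = (shipments.actual_cost - shipments.planned_cost)[moved].sum()

    # Price variance: cost change vs. plan on shipments that used the planned mode.
    price_variance = (shipments.actual_cost - shipments.planned_cost)[~moved].sum()

    print(f"mode exception: {mode_exception:.0f}, price variance: {price_variance:.0f}")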

With this set of metrics a supply chain manager should be able to quickly understand the reason for any changes in the total delivered cost to each customer, and thus the gross margin.  Now that we can measure it, we can manage it.

 


Supply Chain Network Design for Competitive Advantage

June 7th, 2016 9:09 am Category: Global Supply Chain, Network Design, Optimization, Profit Network, Supply Chain Optimization, Supply Chain Planning, by: Alan Kosansky

The Future of Supply Chain Network Design

Most leading companies perform supply chain network design (SCND) analysis to define the structure of their supply chain as well as the key operations that will be performed at each location in the resulting network. The analysis includes suppliers, manufacturing, warehousing, and distribution. In fact, a number of Fortune 100 companies require such analysis before approving capital to add manufacturing or warehousing infrastructure. Those on the cutting edge are also using SCND analysis on a continual basis to understand the true delivered cost to supply each customer as well as the price required from the customer to achieve profitability goals. Advances in network design modelling and optimization have also opened the door to a broader analysis that focuses on the collective supply chains of all competitors in a marketplace and how best to supply the market at the lowest cost and maximum profit.

Twenty-five years ago, the optimization tools to analyze and strategically set one's supply chain infrastructure were new and untested. Companies on the cutting edge had the vision to employ this technology to gain competitive advantage. They analyzed the trade-offs among raw material, production, transportation, distribution and inventory costs to understand the most cost-effective way to meet customer demand at higher service levels. What was once the domain of a few has become a "best practice" in supply chain management: most supply chain managers are on the bandwagon and perform some sort of optimization-based supply chain analysis when considering major capital investments, key strategic initiatives, or a network that has grown too large and unwieldy through acquisition and growth. That does not mean the world has caught up to the thought leaders; rather, the thought leaders continue to push the envelope and are using SCND to do more for their organizations than they did in the past.

In particular, there are two areas where the best supply chain managers are focusing their attention with regard to SCND. First, they are making SCND an evergreen business process that is fully integrated into all strategic and tactical decisions related to network infrastructure and product sourcing. Second, they are expanding the scope of their supply chain analysis not only to cover their own network, but also to analyze the supply chain dynamics of their major competitors and how the entire market is being served.

Sustained Supply Chain Network Design

In too many cases, SCND analysis is a one-and-done exercise. The reality is that assembling the data required to perform the analysis is often difficult, and treating it as a one-time effort prevents companies from assessing ongoing risks and opportunities. Through a carefully designed data management process integrated with the right set of tools, leading businesses are putting in place sustainable business processes to continually revisit their supply chain network structure and operations.

Good data is the driver of good supply chain analysis. Many companies struggle to understand the true activity costs associated with one or more of the following: raw material supply, production, packaging, warehousing, distribution and inventory. In a significant supply chain analysis and design project, the bulk of the time and effort is often spent gathering, organizing and validating the input data that drives the analysis. Too often that effort is then wasted, as the data is used once and forgotten. It need not be this way.

Those implementing best practices have extended their data warehouses to include the key activity-based operations and cost data used in strategic and tactical network optimization. The data is kept evergreen through continual data management processes and is always available for the next what-if scenario. These what-if scenarios might include:

• Short term: How best to respond tactically to unexpected events such as strikes, weather disruptions and major capacity losses?
• Mid-term: How do I evaluate new sales opportunities for large additional volumes, and how will they impact my ability to deliver across the supply chain?
• Long term: How do I evaluate new merger and acquisition opportunities? How do I plan for capacity expansions?

Companies that maintain the proper data, and therefore do not have to start from scratch on each new what-if analysis, can use a tried-and-true process to respond more quickly and more accurately to the opportunities that continually present themselves.
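
As a minimal sketch of the kind of what-if such an evergreen data set can feed, the following re-solves a tiny sourcing model after a capacity loss, using the open-source PuLP library; the plants, demands, capacities and costs are entirely hypothetical:

    # What-if sketch: minimum-cost sourcing before and after a capacity loss.
    # Assumes the open-source PuLP library; all data below is hypothetical.
    import pulp

    plants = {"Plant A": 500, "Plant B": 400}    # capacity (tons)
    customers = {"East": 300, "West": 350}       # demand (tons)
    cost = {("Plant A", "East"): 20, ("Plant A", "West"): 45,
            ("Plant B", "East"): 40, ("Plant B", "West"): 25}  # $/ton delivered

    def min_cost(capacity):
        m = pulp.LpProblem("sourcing_what_if", pulp.LpMinimize)
        x = {pc: pulp.LpVariable(f"x{i}", lowBound=0) for i, pc in enumerate(cost)}
        m += pulp.lpSum(cost[pc] * x[pc] for pc in cost)       # total delivered cost
        for p in capacity:                                     # plant capacity limits
            m += pulp.lpSum(x[(p, c)] for c in customers) <= capacity[p]
        for c in customers:                                    # meet all demand
            m += pulp.lpSum(x[(p, c)] for p in capacity) >= customers[c]
        m.solve(pulp.PULP_CBC_CMD(msg=False))
        return pulp.value(m.objective)

    base = min_cost(plants)
    outage = min_cost({"Plant A": 250, "Plant B": 400})  # Plant A loses half its capacity
    print(f"base cost ${base:,.0f}; cost under outage ${outage:,.0f}")

The toy numbers are beside the point; what matters is the turnaround. With clean, current data already in place, re-running a scenario like this takes minutes rather than the months a from-scratch data hunt would require.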

Extending Supply Chain Network Design to Competitive Markets

If you have used optimization-based SCND to analyze a portion of your business in the past couple of years, then you are running with the herd. If you have implemented sustainable business processes to refresh and maintain the critical data and can run supply chain network what-if analyses at a moment's notice, then you are running at the front of the herd. Those running way out in front are also using SCND analysis to understand the true delivered cost of supplying product to each customer and managing their business profitability accordingly.

Advances in network design modelling, optimization, and game theory have recently opened the door to a broader analysis that focuses on the collective supply chains of all competitors in a marketplace. These tools can be used to determine which customer/product combinations should be targeted, and at what price, to maximize profit. There are three key steps to accomplishing this.

1. Understand your own Total Delivered Cost to each customer.

Understanding your true total delivered cost to each of your customers enables you to analyze and determine the profit you are earning from each customer. It also partially informs your pricing decisions, especially in competitive situations or when demand is greater than your ability to supply. Not only does this analysis determine profitability by customer, it also determines the impact of adding or dropping a customer, thus answering the question, "Even though it's low-margin business, can we afford to lose the volume?"
2. Estimate competitor costs for supplying a shared set of customers

While pricing is largely influenced by your own internal costs for producing, delivering and selling to your customers, it is also heavily influenced by market conditions and by the price at which your competitors are able and willing to sell competing products to the same customers. To understand the market dynamics, you need to be able to reasonably estimate your competitors' costs. For example, if you are in an industry where transportation costs are significant, then regionally located manufacturing will have an impact on price and profitability. Understanding which customers are more profitable for you, and which are more costly for your competitors to serve, enables you to develop a winning strategy.
3. Use cutting-edge optimization technology to model the competitive market

While most businesses are good at determining pricing and identifying profitable customers intuitively and on an ad hoc basis, few have put in place the rigorous business processes and analytics to do it accurately on a routine basis. This requires a deep understanding of your total delivered cost, your supply chain sourcing options, and the flexibility you have on both the cost and revenue sides of the equation. It also requires a solid understanding of your competitors' supply chains and what they may or may not be able to do, based on their own costs.
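
To make steps 1 and 2 concrete, here is a minimal sketch of the customer-level comparison in Python; the prices, our costs and the competitor cost estimates are all hypothetical:

    # Sketch: compare our total delivered cost (TDC) with an estimated
    # competitor TDC for each customer; all figures are hypothetical.
    customers = [
        # (name, market price, our TDC, estimated competitor TDC), $/ton
        ("Customer 1", 120.0,  95.0, 110.0),
        ("Customer 2", 120.0, 115.0,  98.0),
        ("Customer 3", 118.0, 101.0, 103.0),
    ]

    for name, price, ours, theirs in customers:
        margin = price - ours
        edge = theirs - ours   # positive: we hold a structural cost advantage
        stance = "defend and grow" if edge > 0 else "reprice or walk away"
        print(f"{name}: margin ${margin:.0f}/ton, cost edge ${edge:+.0f}/ton -> {stance}")

A sketch like this only becomes decision-grade when the competitor estimates rest on defensible inputs, such as their plant locations and the freight rates to each customer, which is exactly what step 2 demands.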

Conclusion

Supply chain network design optimization tools have become well integrated into the decision-making processes of leading-edge companies. They are used to rigorously analyze and choose the best response to both short-term events (such as weather disruptions, spot sales opportunities and capacity outages) and long-term strategic questions, such as capacity expansion or mergers and acquisitions. These analytical approaches and technologies have recently been extended to let businesses analyze not just their own operations, but the sum of multiple supply chains in the competitive marketplace. It is exciting work, and it is adding millions of dollars to bottom-line profit each year.


Are You a Leader in Improving Your Business Results?

December 1st, 2015 5:11 pm Category: Distribution, Global Supply Chain, Green Network, Inventory Management, Network Design, Optimization, Optimization Software, Scheduling, Solver Optimization, Supply Chain Improvement, Supply Chain Optimization, Supply Chain Planning, Transportation, Vehicle Routing, Warehouse Optimization, by: Gene Ramsay

Profit Point has been helping companies apply mathematical techniques to improve their business decisions for 20 years now, and it is interesting to review some of the technological advances over that time that have most enabled us to help our clients, including:
• The ability for companies to capture, store and access increasingly larger amounts of transaction and anecdotal data that quantify the behavior and motivation of customers, manufacturers, suppliers and other entities
• The improvement in analytical capabilities that help make optimized choices, in such areas as solving mixed-integer optimization problems (a toy example follows this list), and
• The improvement of computing technology, allowing us to perform calculations in a fraction of the time required just a few years ago
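
As a toy illustration of the second point, here is a small mixed-integer model, written against the open-source PuLP library with hypothetical numbers, that picks the cheapest set of distribution centers covering two regions; problems with thousands of such yes/no choices are now routinely solvable:

    # Toy mixed-integer example: pick the cheapest covering set of sites.
    # Assumes the open-source PuLP library; all numbers are hypothetical.
    import pulp

    fixed_cost = {"DC1": 100, "DC2": 80, "DC3": 120}   # cost to open each site
    covers = {"DC1": {"East"}, "DC2": {"West"}, "DC3": {"East", "West"}}

    m = pulp.LpProblem("site_selection", pulp.LpMinimize)
    open_site = {s: pulp.LpVariable(f"open_{s}", cat="Binary") for s in fixed_cost}
    m += pulp.lpSum(fixed_cost[s] * open_site[s] for s in fixed_cost)
    for region in ("East", "West"):                    # every region must be covered
        m += pulp.lpSum(open_site[s] for s in covers if region in covers[s]) >= 1
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print([s for s in open_site if open_site[s].value() == 1])  # -> ['DC3']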

A recent post on the Data Science Central website highlights the use of advanced techniques built on these advances at the on-line marketplace Amazon, generally acknowledged as one of the most tech-savvy companies on the planet. The post lists 21 techniques that Amazon uses to improve both its day-to-day operations and its planning processes, including supply chain network design, delivery scheduling, sales and inventory forecasting, advertising optimization, revenue/price optimization, fraud detection and many others. For the complete list, see the link below:

http://www.datasciencecentral.com/profiles/blogs/20-data-science-systems-used-by-amazon-to-operate-its-business

Like our customer Amazon, Profit Point is committed to using these techniques for the benefit of our clients; we have been concentrating on implementing business improvement, including optimization in its various forms, since our very beginning. Are you, like Amazon, using the best methods to seize the opportunities available today?


How successful organizational change is like a long, windy bike ride

October 20th, 2015 1:51 pm Category: Distribution, Global Supply Chain, Optimization, Supply Chain Improvement, Supply Chain Optimization, by: Ted Schaefer

Over the past week I’ve had two experiences that made me think about what’s required for successful organizational change.  The first was our CSCMP Roundtable tour of a family-owned food distribution company that had built a large, successful regional business by leveraging its founder’s focus on customer satisfaction and his treatment of employees as a cornerstone of the business.  The company had recently been purchased by another family-owned company and was in the midst of a successful wholesale change in IT systems and work processes.  Having seen many organizations struggle with such a large change, I asked our host about the secret of their success.  In a word, he said, “Culture.”

Immediately after the new owner completed the purchase, they spent a lot of time reassuring employees that the values of the company wouldn’t change, even though the way they did their jobs might change dramatically.  In the end, the two companies’ cultures valued the same things: customer satisfaction and their employees.  With that in mind, the change management effort began as an inclusive one with a clear set of goals for the new work processes.  Not that there weren’t bumps in the road, but the two once-separate organizations were able to push towards the new way of doing business as a common team.

So what does that have to do with a bike ride on a windy day?  That’s where the second experience of the week comes in.  Over the weekend, I completed a two-day, 176-mile ride around Galveston Bay.  Just like a good organizational change-management effort, the first day was preceded by a lot of training and preparation and accompanied by excitement and adrenaline.  We had some tough slogs, particularly one 21-mile stretch directly into a 15 mph headwind.  It was grueling, but we knew it was coming and grunted our way through it.  Then came the pay-off: the headwind became a tailwind as we sailed down the coast to the finish line for Day 1.  Like an organizational change, we had tough stretches, but our preparation paid off and we were able to celebrate as we hit our first big milestone.

The second day promised to be a tough one.  We had already ridden 97 miles on the first day, and winds were blowing at almost 20 mph, forecast to be mostly in our faces all the way back to our Day 1 starting point.  I knew it would be a challenging day, so I decided that speed was not important; just finishing was.  I also knew I needed to find some like-minded riders so we could work together into the wind.  Luckily, fate smiled upon me and I found a couple of riders taking the same approach.  We teamed up, taking turns pulling from the front so the other two could draft, and waiting for each other when we had flat tires.  We also got to celebrate when we turned away from the wind and had it at our backs for short stretches before turning into it again.  The parallels to a successful organizational change jumped out at me.

  • We made a realistic assessment of the challenges ahead
  • We set goals that were within our reach, given the challenges
  • We found allies with the same mind-set and worked as a team towards a common goal
  • We celebrated success when we had a downwind leg
  • We finished as a team

I hope to see you out on the road, be it organizational change or riding your bike into the wind.  Good luck, and let me know if you need someone to help pull on the next ride.



The Weak Link in the Chain?

October 15th, 2015 2:14 pm Category: Distribution, Global Supply Chain, Transportation, by: John Hughes

Recent events during the summer of 2015 exposed a major vulnerability in the supply chains of many U.S. manufacturers located in the industrial belt of the American Midwest.  Iron ore, as well as many other bulk commodities such as grain and coal, is shipped from Northern Minnesota and Michigan via vessels on Lake Superior through the Soo Locks at Sault Ste. Marie, Michigan, and then south to the lower Great Lakes region.  And for 20 days this past August, the vessels that normally transit this choke point experienced long delays and backups because the two primary and largest locks were unusable or intermittently closed for maintenance.

The trouble started when the MacArthur Lock had to be shut down unexpectedly in early August because of a set of gates that did not close properly, diverting its normal traffic to the adjacent Poe Lock.  This closure eventually lasted almost 20 days.  Then, according to the newspaper USA Today, the Poe had to be briefly shut as well.  (There is a third lock that is still available, but it is functionally obsolete and rarely used these days.)  With both locks out of commission or only sporadically open, 100 vessels were delayed at least 166 hours during the height of the summer shipping season: imagine the cost to shippers, as well as the disruptions on the receiving end of those shipments.

The Soo Locks are a critical link in the U.S. transportation network: according to the Detroit News, 3,985 ships hauling 77.5 million tons of iron ore, coal, grain and other cargoes transited the locks in 2014.  A large part of the production in the Great Lakes region of the Midwest is directly or indirectly tied to the manufacture of steel and other basic commodities, which in turn relies on marine delivery of raw materials via the Great Lakes.  The Soo Locks are so important that during World War II, troops were sent to guard them against sabotage.

Given the vulnerability of this critical asset, and the deteriorating state of the country’s infrastructure generally, you would think that Congress and the Army Corps of Engineers would be moving quickly to either build a new lock or modernize the existing ones.  However, no significant institutional movement or progress is underway right now.  Just how critical the Soo is becomes clear when you realize that only the 47-year-old Poe is big enough to handle the 1,000-foot vessels that today carry roughly 70% of the freight on the Upper Great Lakes.  Any prolonged outage of this asset would have catastrophic consequences for many companies’ supply chains.


Technology Books are so yesterday…

September 3rd, 2015 3:28 pm Category: Optimization, by: Jim Piermarini


New technologies spring up each year, seemingly every day. There are new programming languages, frameworks, and processes. There are new personal productivity gadgets and apps. There are new internet connectivity methods (routers, access points, bridges, cable modems, etc.). It can all be pretty daunting if you think you need to stay ahead of all this change.
Back in the day, I used to go to a brick-and-mortar bookstore, peruse the computer books, and end up purchasing one or several to read cover to cover to learn about the new subject. I have purchased many dozens of technology-related books over the last 20-ish years in my attempt to stay abreast of the bow wave of technology change. I realized recently that I have not purchased a new technology book in several years. I got to thinking about why, and whether I would be comfortable with my wife’s request to toss all these old books. My first reaction was: I can’t get rid of these, they are books! But then I got to considering whether I had opened them anytime in the last 5 years (or 10, or 15!), and projecting whether I would actually open them anytime in the foreseeable future. The short answer is, I really can’t see when I would open these books ever again. So I asked myself: why is that? The answer is not that I have given up my desire to stay current, not exactly. Nor is it that all the technology books are hopelessly out of date (although some are). The reason I don’t think I’ll be using these books ever again has to do with the way the internet, and Google specifically, has changed the way we learn.
Learning a new technology is easier today than ever before. You can google the subject and find not only the theoretical background but also many practical implementation details and examples. For instance, I know several people, myself included, who are self-taught in SQL Server using only the resources available on the internet. And we are actually pretty competent at it. Given that experience, I know that I could also easily learn MySQL (I have had to learn some of it recently), or Java (again, I’ve dabbled in it), or MongoDB, or any other NoSQL database technology. Knowing that there are ample examples and many resources for new technologies has allowed me to redefine how much I need to know before I can be confident that I can tackle a project in a new technology. I know that the syntax is a detail that will soon fall into place. That syntax may be in a book somewhere on my shelves, but it is also on the internet, just a few clicks away. I’ll opt for the easier and faster approach to getting that info anytime. So the books stay on my shelves (or get donated, as my wife is suggesting).
Keeping current in technology today is a different thing from knowing all the depth and detail of a subject, as in previous years. Google is everywhere, has almost everything, and is not going away any time soon. Think of calculators, and the way they were reviled for undermining the need to learn how to do math in your head. “You can’t always count on having a calculator!” was the refrain that was meant to show the importance of being a competent mental mathematician. But today there are calculators everywhere: on my PC, on my phone, and on my watch (if I had a Google watch), and for that matter, so is Google! It seems reasonable to expect that the internet and Google search will be with us for some time. People have accepted the pervasive and ubiquitous nature of the internet, and it is changing the way we devote our limited brain space to information and knowledge. For me, it is more important to know some critical details along with a broad understanding of the subject matter; I can offload the rest of the details to the internet, to be retrieved when I need them. My local cache of info can be specialized: very deep and narrow in some areas while broad and shallow in others. I don’t mind being shallow in broad areas, since even there I know I can go to any depth I need very quickly with the help offered on the internet. That is how my view of knowledge has been transformed: away from packing it into my head and onto my bookshelves, and into the internet age. Others may have a different need for knowledge, and that is a discussion beyond my understanding. And while there may be a book on this subject, I’m sure I could google it.


A New Use for Oil Supertankers?

August 11th, 2015 8:33 am Category: Distribution, Global Supply Chain, Network Design, Supply Chain Planning, Transportation, by: John Hughes

By now, we’ve all probably heard that there is a worldwide glut of crude oil.

This is due to many factors, of course, including the increased production of oil and natural gas in North America (especially as a result of fracking), as well as the rising share of the overall energy marketplace that renewable sources have come to represent.  And members of the OPEC cartel have made no secret that they are increasing, or at least maintaining, relatively high production levels so as to drive competitors out of the market and preserve their market share.  The supply of petroleum on world markets is therefore high, driving down the price.

This oversupply of crude oil relative to demand has had a big impact on the supply chain for moving oil from supplier to customer.  Over the past decade there has been a huge build-out of oil supply chain infrastructure: trading companies constructed vast storage facilities to insulate themselves from the high prices they had experienced in the past.  But now, with the current glut, most of this on-shore storage capacity is full, and that has led to an interesting phenomenon: some trading companies are using their marine transportation assets, i.e. oil tankers and supertankers, as floating tank farms.  With the spot price of oil collapsed, it can make economic sense to load the vessels without a definite destination or customer in mind and store the oil at sea.
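
The economics are easy to sketch. Floating storage pays when the price you expect at sale, less today’s spot price, more than covers the cost of chartering the vessel in the meantime. A minimal back-of-the-envelope check in Python, with entirely hypothetical numbers:

    # Hypothetical floating-storage break-even for a ~2-million-barrel supertanker.
    barrels = 2_000_000
    spot = 45.00                    # $/bbl today
    expected_at_sale = 51.00        # $/bbl expected in six months (e.g., futures)
    charter_per_day = 60_000        # $ per day to keep the vessel
    days = 180

    gain = (expected_at_sale - spot) * barrels
    storage_cost = charter_per_day * days
    decision = "store at sea" if gain > storage_cost else "sell now"
    print(f"gain ${gain:,.0f} vs. storage cost ${storage_cost:,.0f} -> {decision}")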

Such a strategy of using transportation assets as de facto storage is typical of any commodity market where the customer’s market power is much stronger than the producer’s.  For example, this has long been common for certain commodities delivered by rail, where customers simply leave product parked in cars on a rail siding until it’s needed.

Of course, over time as normal market forces do their work, the relative bargaining positions of the buyer and seller can shift.  In the case of oil, various economic and political forces can quickly move the markets such that leaving tons of oil floating out on the seas in very expensive storage tanks no longer makes economic sense.  And when this happens, those vessels will soon be put back to the purpose for which they were truly built.
