Blog Archives

Data Collection Thoughts

One thing that seems to come up a lot in terms of continuous improvement activities is the need for data.  Sometimes there isn't the right data, sometimes there isn't enough context to the data, and sometimes there just isn't any data recorded at all.  I've written about data in terms of metrics and measurement systems before, but this time I'm talking more about getting your hands on information that becomes clues to solve your production mysteries.  I don't believe in substituting data for observation, just in using it to narrow the observation lens.

So…what are some of my key thoughts for getting data to make sure you are working on the right stuff?

  • First, make sure your data tool can tell you what you need to know.  If it's a log sheet of some sort, does it capture major sources of variation such as time, position, cycle, machine, tooling set, etc. (there's a minimal sketch of such a log after this list)?  If it's a measles chart (or concentration diagram or whatever name you may use), can you really tell the difference in defect locations on it?
  • Next, be willing to sacrifice some clarity in some areas to get an overall picture of the process.  I like to start by targeting about 75% of the data that I’d like to have and adjust it from there if need be.  Most of the time I find that the extra detail I thought I needed wasn’t really necessary at this stage.  I can always build additional data collection if I can’t get what I want from the reduced set.
  • Also, try to make it as easy as possible.  If you can extract what you are looking for from existing shift logs or quality checks or some sort of automated means, go for it.  Adding a layer of work can sometimes lead to reduced data quality for everybody!
  • To go along with the previous item, remember data isn’t usually free.  If you don’t need the data collected indefinitely in the future, set an expiration date on the activity and free up the resources.
  • Lastly, to paraphrase myself, data isn’t a substitute for going to the gemba and seeing the problem for yourself.   Double check the data against what you are seeing with your own eyes to make sure that it can really help you.  This data won’t solve problems for you, but it can help you know which ones are the biggest.
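To make the first point concrete, here's a minimal sketch of a log sheet in code form.  The column names and the helper function are hypothetical, just one way of making sure every defect gets recorded along with its major sources of variation:

```python
import csv
import os
from datetime import datetime

# Hypothetical columns -- each one captures a major source of variation
# so a defect never shows up in the log without its context.
LOG_COLUMNS = ["timestamp", "machine", "tooling_set", "cycle",
               "position", "defect_type"]

def log_defect(path, machine, tooling_set, cycle, position, defect_type):
    """Append one observation to a CSV log, writing the header on first use."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(LOG_COLUMNS)
        writer.writerow([datetime.now().isoformat(timespec="seconds"),
                         machine, tooling_set, cycle, position, defect_type])

# Example usage (all values made up):
# log_defect("defect_log.csv", "press_3", "tool_set_A", 1412, "upper_left", "smear")
```

The code itself isn't the point; the point is that time, machine, tooling, and position travel with every data point so you can slice the data later.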

I'm sure there are some other key points that I've left out, but these are a few for starters.

Now, I’m sure some of you are asking, “Why waste time on this and why not just go observe the process at the start?”  Good question.  I think this is more of a helping hand to make the best use of time for some operations.  If there are limited resources (and who doesn’t have limited resources?), deploying this in advance of a deep dive can help speed up the search for solutions.  If a process has a long cycle time or unusual frequency, something like this could help identify repeat issues vs one-offs.  I am always looking for the best way to use what I have at my disposal and sometimes it doesn’t fit the textbook methodology.

Guest Post: My First Kaizen Event

Today’s guest post comes from Danielle M.  She has been a dedicated student of Lean Manufacturing methodologies since 2006. It was love at first sight when she read the motto, “Everything has a place; everything in its place” in her first copy of The Toyota Way.

As an inspector at the end of a screen printing process, I was in charge of making sure we didn't ship bad products. I had always enjoyed my job, but after taking part in a kaizen event I went home less tired and made fewer mistakes, ultimately making the customers happier and saving my employer money. Best of all, it felt like I actually made a difference.

Five days of improvement

We started with a training day. Jose, our Lean Director, asked six of us to meet in a conference room: Maria from engineering, A’isha from purchasing, Pete the controller, Ted from maintenance and Gerry, who ran the press that sent me finished parts.

Jose explained that a kaizen event is a concentrated five-day effort to improve a factory process. A'isha said she didn't know anything about the factory, but Jose said the point was to get new ideas from people who didn't know the area. He called this being "outside looking in."

Once we understood our goal – to improve my inspection operation – Jose had us make a plan. We decided to spend our first day gathering data. Then we’d go to the inspection area, ask questions and capture our ideas on flipcharts. At the end of day two, we’d put together a list of the ideas we wanted to try, then we’d implement as many as possible.

As-Is data

Between us we found out how many customer complaints came in each month, how many pieces were scrapped, the number of bad parts caught and our delivery performance. None of them were very good.

Generating ideas

Gerry and I showed the team how we did things on the press line, then people asked questions and made suggestions. Pretty quickly we’d filled a whole flipchart pad!

Back in the conference room we stuck the pages on the walls and made a list of the changes we could make. The quick and easy ideas we tried straight away; Maria worked on the harder ones with Ted.

We used the 5S system to arrange my tools on a shadow board so I knew where to find everything and could see at a glance if anything was missing. We labeled everything and cleaned up the area so it was a nicer place to work.

One thing I asked for was to raise the inspection table. As it was, I had to bend over, which made my back ache, and I was casting a shadow over the piece I was looking at. Ted made the change in a couple of hours, and it makes such a difference!

Ted also installed a track lighting system over the top of the bench. This was really clever because it gave me the ability to vary the light, which helped me find the defects much more easily.

Gerry suggested I turn on a light whenever I find a defect. This would be his signal to stop the press and he’d be able to fix the problem right away. Jose called this an andon light.

The presentation

When we’d finished, Jose had us present everything to management. I was worried our ideas were too simple but they seemed impressed. Arnie, the Quality Manager, did say though that the proof would be in the numbers.

Afterwards

A month later we got new data and compared it with our “As-is” numbers. Complaints were down, we were scrapping almost nothing, I was finding more defects and our delivery performance was up.

Little did I know that Jose was so impressed with my performance on the kaizen team that he would ask me three months later to consider joining him as the Lean Coordinator in the company’s transformation process. I took his recommendation to apply for the position when it opened up and soon began my own transformation process into becoming a student of The Toyota Way.

Stay tuned to learn more about my personal journey in lean manufacturing!

Chicken and Eggs

This doesn't have anything to do with poultry, but more with the age-old question of which comes first.  (Although, if you'd like to talk about actual chickens, let me know.)  When pushing forward a continuous improvement mindset, one of the first obstacles is understanding where your biggest problems are.  This is often an issue because the structure doesn't exist to gather, compile, and filter data from the operation.  Using an A3 format as an example, it becomes very difficult to get past the first step if you can't quantify where you are in relation to your ideal state, or if you can't quantify the relative impact of potential causes on the outcome you are measuring.

In general, this leaves you with a couple of choices.  Choice A is to go forward and make changes based on whatever information you have available.  Choice B is to put the brakes on for a while and focus your improvement efforts on improving the measurement and reporting systems.  Both of these options have upsides and downsides.

If you follow Choice A, you can start training people in the methodology and mindset of Lean problem solving.  Those are good things, plus you get the visibility of "implementing Lean".  The downside of this path is that you really don't have a good idea of the relative scope of your issues, and you risk working on something that isn't that impactful or that has to be undone when seen in better context.

Following Choice B, you will most likely end up with a more complete understanding of what you should be working on and why.  However, you run the risk of losing support as others don't see anything happening and people start to question when the "real work" will start.

So…which comes first…the problem solving or the measurement system?  The short answer that I have come across is this: it depends.  In theory, an effective measurement system highlights the problems that need to be addressed and is a must-have.  In practice, not all organizations are patient enough to build the core of the metrics system without pushing the 'execution' phase along quickly.  One of the skills involved in leading Lean (or, really, leading anything) is understanding where you have the cultural (or individual) support to be patient and where you need to "just do it", building context of the picture on the fly.  As uncomfortable as it may be, your people, your culture, and your environment trump the textbook roadmap almost every time.

Use Data that is Meaningful

A couple of months ago, Joe wrote a great blog post on Problem Solving Pitfalls.  I have read that post a few times: partly because Joe and I made some of those mistakes together, and partly because I think there is another pitfall to add to it.


Just because data is being collected does not mean it is useful.

Too many times I have watched people (including myself) use data just because it was available, not because it would tell the story of the process.  You may have to decide what data would be helpful and devise a way to collect it.  The data does not have to be what is available in a computer; capturing it by hand is a perfectly viable approach.  The data may only need to be collected for a specific amount of time during the problem solving process.  Once the problem has been solved, there may be no need to keep collecting it.

This can be difficult but it will be well worth the effort in the long run.  You will get a better picture of the problem you are trying to solve and in turn this will lead to an easier time getting to the root cause of the problem.

Fun With Charts

I've kind of talked about some of these things in other posts, but I felt like adding a visual.  Here is a chart of a metric that is currently in use.  The actual scale and what it is measuring are blanked out (for obvious reasons), but this is an actual data run with the required linear trend line added in Excel.  The relevant context is that this is a time-based chart (x-axis) and that zero is better (data points closest to the bottom of the chart area).

First a question:  Is this process getting better or worse?

According to the trend line (and several people’s understanding of it) this process gets kudos for being “on a downward trend”.  Now, what if I just asked you to look at the last 10 data points?  Is it getting better or worse?

While it doesn't quite trip the SPC chart test for the number of points in a row moving in one direction, something clearly seems to be drifting in this process.  It may just be in the realm of normal or explainable variance, but it certainly requires a second look: the last 4 points are higher than all but points 2 and 3 in the first chart.  Now, what if I told you the data for the 2 highest points in the first chart came from an explainable, corrected special cause?
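As a side note, here's a rough sketch of the points-in-a-row test mentioned above.  One common run rule (one version of the Nelson rules) flags six or more points in a row moving steadily in one direction; the numbers below are made up to mimic the drift described here:

```python
def longest_directional_run(values):
    """Length of the longest stretch of points moving steadily up or steadily down."""
    best = run = 1
    direction = 0  # +1 rising, -1 falling, 0 flat or no trend yet
    for prev, cur in zip(values, values[1:]):
        step = (cur > prev) - (cur < prev)
        run = run + 1 if (step != 0 and step == direction) else (2 if step != 0 else 1)
        direction = step
        best = max(best, run)
    return best

# Made-up stand-in for the last 10 points of the chart:
last_ten = [3.1, 3.3, 3.2, 3.4, 3.6, 3.9, 3.8, 4.2, 4.4, 4.6]
print(longest_directional_run(last_ten))  # -> 4: drifting upward, but not yet a rule violation
```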

I am throwing this up here to highlight some of the more common issues in data analysis and communication.  Here are a couple of the key points to look for:

  • Overuse of the linear trend line in an Excel chart – Honestly, very little good can come from this function.  Skip it unless you have to use it (there's a sketch after this list showing why).
  • Letting the overall behavior picture be clouded by a few special cause points – try cutting them out if you can to run a parallel look at your data…they shouldn’t be ignored, but their impact shouldn’t muddy the whole picture.
  • Having the pre-determined time period confuse the analysis – if a chart of data is based on something like a fiscal or calendar year or month, sometimes it loses or gains data points that make the current performance unclear.  Context is important, but the right context is critical.
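To show what I mean, here's a small sketch with made-up numbers shaped like the chart above: two early special-cause spikes followed by a slow upward drift.  The same series gives opposite answers depending on how you slice it:

```python
import numpy as np

def linear_slope(y):
    """Least-squares slope of y against its index -- what Excel's trend line reports."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

# Hypothetical series: two early special-cause spikes, then a slow drift upward.
y = np.array([3.0, 9.5, 8.8, 2.9, 2.7, 2.5, 2.4, 2.6, 2.8, 3.1,
              3.3, 3.4, 3.6, 3.9, 4.2, 4.6])
special_causes = [1, 2]  # indices of the known, corrected special causes

print(f"all points:             {linear_slope(y):+.3f}")  # negative slope: "improving"
print(f"special causes removed: {linear_slope(np.delete(y, special_causes)):+.3f}")  # positive
print(f"last 10 points only:    {linear_slope(y[-10:]):+.3f}")  # positive: drifting worse
```

The full-series trend line rewards the process for its ugly start; both of the more honest cuts show the drift.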

More on Measurement Systems

One of the books recommended to me a couple of months back was "Scorecasting".  It's a sort of Freakonomics-meets-sports-statistics tome.  Without stealing the book's thunder or getting too far into the details, two points stood out to me because they could relate to other business data.

The first interesting point was the historical data showing that umpires and officials make fewer borderline calls late in games.  Put simply, they were more likely to err on the side of not making a call that should have been made than of making a call that shouldn't have been made.  The second piece that stuck with me was the chase of round numbers.  Again in simple terms, far more people finish just over a round number (multiples of 5, 10, 100, etc.) than finish just under the line.  (The book offers a much better, more detailed description of these phenomena.)

In most cases, the people who are a part of these activities aren’t attempting to undermine or “game” the system.  It seems to be more of a reflection of overall patterns of human behavior.  Where this gets interesting to me is in wondering how this behavior may influence business performance or metrics.  I don’t necessarily mean that a company may “manage” its earnings to match Wall Street commitments.  I am thinking more on a micro level of individuals changing their behavior around a performance level (efficiency, yield, throughput, etc.) or in how they select samples to measure.  Is the data that we are able to gather influenced by people who may not want to be the “cause” of attracting any extra attention?

The answer to that question, I know, is that the data absolutely is subject to human influence.  Unless the process is fully automated, at some point you have individuals who are responsible for gathering and recording data, or issuing go/no-go decisions on quality, or pressing the start and stop buttons on the machine.  Any of these folks can make a decision that ultimately influences what we see.  Does it make a huge difference?  I don't know for sure, and I have no clue how to correct for the data collection process in every set of circumstances.  Ultimately it comes back to looking at data with a critical but open mind.  Sometimes the toughest part of dealing with data is knowing what it does and doesn't say.  That may mean the measurement system is skewing your data in ways you never expected.

Problem Solving Pitfalls

Just for the reader’s information, I’m going to start a run here for a couple weeks about problem solving.  Some of these points I have touched on in other avenues, but these seem to fit as their own mini-series.  This isn’t a “How To”, but more of ‘Some of The Stuff They Don’t Tell You’.  Pretty much any structured problem solving method will lean heavily on data.  The good news is that most companies have a pile of numbers that can be used to identify problems.  The bad news is that these numbers often seem to lack some key characteristics that would make them very useful.  There are some serious pitfalls to be aware of as you are digging through the data.

One of the usual suspects to look for in terms of using data is the context it comes from.  Does the data have a time or sequential relevance?  How can you tell what has changed in the process or the product that may have driven the data?  Put another way, what are the known special causes that can be filtered out so that only the unknown special and common cause variation remains?  The data itself can almost never be taken at face value as a reflection of a stable reality.

A second area to dig in to is how much the data reflects what you are trying to measure about the process.  How direct and traceable are the measures to the actual process?  Do the numbers have to get combined and factored into something or are they transparent?  Is the data a reflection of a leading or lagging indicator?  Are they timely or delayed?  How much do the financial reports reflect actual dollars vs. some sort of calculated dollar figure?  All of these are important to understand to determine where you should be spending your time and how you need to leverage resources.

Once you can harness the data, gather the context it comes from, and understand exactly what it tells you, there is another key step…verifying your measurement system.  Whether this step takes the form of a Gage R&R, a human verification, or whatever your MSA may need to be, it has to be done.  You have to know that the data you are getting is a reflection of what you are attempting to measure.
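As a rough illustration of the idea (a simplified sketch, not a full AIAG-style Gage R&R study – it only estimates repeatability and ignores reproducibility between operators), you can compare how much of the variation you see comes from the gage versus from the parts:

```python
import numpy as np

# Hypothetical study: 3 operators measure each of 5 parts 3 times.
# Array shape: measurements[operator, part, trial]
rng = np.random.default_rng(0)
true_parts = np.array([10.0, 10.2, 9.9, 10.5, 10.1])  # real part-to-part differences
measurements = true_parts[None, :, None] + rng.normal(0, 0.05, size=(3, 5, 3))

# Repeatability: spread among repeat readings of the same part by the same operator.
repeatability_var = measurements.var(axis=2, ddof=1).mean()
# Part-to-part: spread among the per-part averages.
part_var = measurements.mean(axis=(0, 2)).var(ddof=1)

pct_grr = 100 * (repeatability_var / (repeatability_var + part_var)) ** 0.5
# Common rule of thumb: under ~10% acceptable, 10-30% marginal, over 30% unacceptable.
print(f"%GRR (repeatability only): {pct_grr:.1f}%")
```

If most of what you're seeing is gage noise, the data can't tell you much about the process no matter how carefully you analyze it.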

More times than I’d like to recall I’ve been a part of activities where one or more of these steps were skipped.  While you hate to say that any activity where you learn something is a waste of time, there has been a lot of time wasted in chasing problems that weren’t really there or trying to improve performance on a less important process.  That ‘waste’ could very easily have been avoided by investing the upfront time to study what was really there.  Maybe my errors can help save you the effort of going down the wrong path in the future.

An Acceptable Range Does Not Equal A Baseline

When solving problems, the first thing a person needs to understand is where they are starting from.  To do this, they have to create a baseline: a set of data describing the current process and situation.  Without a baseline, a person will never know if they improved the process or made it worse.

When I say a baseline, I mean an understanding of the data of the current situation.  I do not mean a range of what is considered normal.  A range does nothing but tell a person where they might expect the data to fall under normal conditions.  A range can hide problems under the guise of being acceptable.  What if something is at the high end of a range and drops to the low end of the range?  This can still create problems.

For example, say two parts have to fit together.  If both parts are at the high end of their ranges of variation, they snap in perfectly.  Then one part drops to the low end of its range while the other stays at the high end.  Now the parts don't fit together, and people are confused because both parts are within their acceptable ranges.  The issue is that a baseline was never created, so no one understood that both parts being at the high end was what created the good result.
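Here's a quick simulation of that situation with made-up dimensions.  Both parts stay inside their acceptable ranges the whole time, yet a large share of random pairings still fail to fit:

```python
import numpy as np

# Hypothetical snap-fit: the tab must be 0.05-0.25 mm narrower than the slot to click in.
rng = np.random.default_rng(1)
tab = rng.uniform(9.80, 10.10, 10_000)    # tab width, mm -- always within its acceptable range
slot = rng.uniform(10.00, 10.30, 10_000)  # slot width, mm -- always within its acceptable range

clearance = slot - tab
fits = (clearance >= 0.05) & (clearance <= 0.25)
print(f"Every part is in spec, yet only {fits.mean():.0%} of random pairs actually fit.")
```

A baseline of the actual clearances, not just a check against each part's range, is what reveals the problem.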

The area where this frustrates me the most is health care.  A person can go to the doctor wondering if they have hearing loss or damage.  The doctor tests them and says they are fine, there is no damage or loss.  How do they know?  They never had a baseline from before, so they can't tell whether the person's hearing has changed.  The doctor just tells the person they are fine because they fall in the "normal" range.

The assumption is that the range is built on lots of data over time and covers 80-85% of the normal distribution, again assuming the data actually fits a normal distribution curve.  What if the person is at one of the extremes of the curve?  Doesn't this change things?

I understand doctors need some tools to help them out.  That is what a range is: a tool.  But if a patient says something is not normal for them, the doctor can't simply declare them normal because their test falls in a certain range.

Ranges are nice and can be helpful, but they are not a substitute for a baseline.  The baseline gives a more detailed picture, and baselines help you problem solve and improve.  So before judging whether there is a problem, a person should ask, "Where did I start from?" or "What is my baseline?"

Data and Facts Are Not the Same

Last week Steve Martin had a great post about data and going to see what is actually happening over at the ThinkShack blog.  It struck a nerve with me because it reflects something I've seen happening on a regular basis.  I am tired of people trying to solve problems while sitting in a conference room.

I listen to comments like, “Well they aren’t using the right codes for the defect.” or “People just need to put the coding in the system properly and we could figure the problem out.”

WHAT!?!

Don’t misunderstand me.  Ten years ago you would have heard me say some of the same things.  So, I do have patience with teaching people to go and see.  Once I learned to go and see it became very freeing because I didn’t stress about what the data said.  I spoke to facts.

Data is a good thing.  I am not saying we should ignore data, but we need to know its place.  Data can help point us in the direction of problems.  It can tell us where we should go and look for facts.

Facts, to me, are what you actually see happen: what you have observed.  They aren't the hearsay you get in a conference room.  Facts explain what is actually happening and add deeper meaning to the data.

I lived a great example recently.  In a conference room, managers looked at the data and saw a problem.  They started talking about what was happening and why, and they asked if I would look into fixing it.  I said I would look at what was actually going on.  I spent 2 hours directly observing the work and realized the one problem they were talking about was actually several different problems out on the floor.  I asked the person actually doing the work to take a couple of weeks' worth of data based on what was actually happening.  That data showed they had 2 big problems that made up 80% of the total errors in the original data.  I then did another hour of direct observation, comparing an area that had the problem with an area that did not.  I was able to explain the problem with facts that I observed, supported by data that made what I observed concrete.  At that point, there were some obvious ways to correct the situation.

Data and facts are different.  They are not substitutes for each other.  Data and facts can be a very strong combination when used together to understand a problem.

Data: directional.

Facts: truths – use your eyes – go and see.

Link to Steve Martin’s Blog Post: http://thinkshack.wordpress.com/2011/03/07/garbage-in-wheat-and-soybeans-out/

Lack of Data Can Drive Poor Decisions

There seems to be a big problem with organizations having a lot of data, but not good data: data that can help them make good decisions for the business.

This lack of good data can cause organizations to concentrate on the wrong opportunities to improve or grow.  Imagine poor data leading you to work on the smallest bar of a Pareto chart instead of the biggest one.  You would spend effort that could have been directed toward a bigger issue for the area.
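As a sketch of how that misranking happens (the defect names and counts here are invented), suppose operators code most smears as "other" because the right code is buried in a menu.  The recorded Pareto and the observed one tell you to work on completely different things:

```python
from collections import Counter

recorded = Counter({"other": 48, "scratch": 30, "smear": 12, "misprint": 10})  # what the system says
observed = Counter({"smear": 52, "scratch": 30, "misprint": 10, "other": 8})   # what observation finds

for label, counts in (("recorded", recorded), ("observed", observed)):
    total, running = sum(counts.values()), 0
    print(label)
    for defect, n in counts.most_common():
        running += n
        print(f"  {defect:<10} {n:>3}  cumulative {running / total:.0%}")
```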

Lack of good data can also send you looking for a countermeasure in an area that is not really where the problem is.

Organizations need to become better at getting useful data and information, not just any data and information.  Computers and automation have made it easy to collect and store data on anything and everything, and that excess is a form of waste.  Organizations should strive to collect only the useful data and information that can help them make good, informed decisions.

The best way to overcome this is by directly observing the work and issues.  When directly observing, you will get a better picture of what is actually happening.  Usually, the data that you need to have in order to make a good decision becomes clearer.

As leaders we need to push to get people to go out and directly observe in order to drive more useful data and information for decision making.
