On January 14, 2012, Dr. Yoji Akao (founder of QFD) and I keynoted the Hong Kong QFD Association's First Symposium.
The Symposium was attended by over 100 quality specialists from Hong Kong, China, and Macao. Dr. Akao spoke on the historical development of QFD, while I described my favorite
project, the development of an animatronic dinosaur for Jurassic Park.
Other notable speakers included Dr. K.S. Chin of City University of Hong Kong, who spoke on QFD and product development; Dr. Y.K. Chan of the Six Sigma Institute, who spoke on the application of six sigma in QFD projects; and Dr. Catherine Chan, president of the HKQFDA, who spoke on total quality management and QFD.
The HKQFDA was established in 2011 by Catherine, a QFD Black Belt®, whose enthusiasm for QFD has led to some very interesting approaches to capturing the Voice of the Customer.
Hong Kong is an ideal location for combining Western quality thinking and the concept of customer as king with traditional Eastern approaches to work and life. Both the harmony and the friction of these cultures are sure to generate improved approaches to developing new products and services.
QFD activities have been growing in Hong Kong in recent years due to efforts by Dr. Bob Hunt of Australia's Macquarie University, and a QFD White Belt® course I led, sponsored by the Six Sigma Institute. The HKQFDA is sure to take the lead in QFD's growth in the region.
31 January 2012
23 January 2012
QFD and Six Sigma DMAIC
from the QFDI newsletter, January 2012
Efforts to standardize and strengthen product and process improvements are greatly welcomed in modern organizations, and nothing does it better than the DMAIC (define-measure-analyze-improve-control) approach in six sigma. This 21st-century version of Shewhart's and Deming's PDSA (plan-do-study-act) technique is the backbone of ongoing quality improvements in today's leading companies.
The purpose of quality improvement is to eliminate the costs and losses associated with defects and deviations from targets. The process starts with defining those targets and how to measure them, and then determining the current level of performance. Next is identifying the causes of why current performance fails to meet those targets.
Causal factors can be analyzed as those related to the 4 Ms, or those attributable to workers (men), equipment (machine), processes (method), or design (materials). Advanced thinkers may also include method of measurement (e.g., poor gauging or measurement techniques), money (lack of funds to make desired improvements), and management (lack of top-level support to invest in and support a quality culture). Please let us know if you have identified other "M"s.
Once causal factors have been identified, data analysis helps focus on those with the strongest contribution to improving performance. Since DMAIC attends to current products and processes, data can be collected to statistically calculate the correlation between a causal factor and the undesirable effects of the defect.
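To make this concrete, here is a minimal sketch in Python, with made-up data and hypothetical factor names, of ranking candidate causal factors by their correlation with a defect rate:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical process data: each row is one production run.
df = pd.DataFrame({
    "oven_temp_c":    [180, 185, 190, 178, 195, 188, 182, 191],
    "line_speed_mps": [1.2, 1.4, 1.1, 1.3, 1.6, 1.5, 1.2, 1.4],
    "defect_rate":    [0.02, 0.03, 0.01, 0.02, 0.06, 0.05, 0.02, 0.04],
})

# Rank candidate causal factors by the strength of their linear
# association with the undesirable effect (here, defect rate).
for factor in ("oven_temp_c", "line_speed_mps"):
    r, p = pearsonr(df[factor], df["defect_rate"])
    print(f"{factor}: r = {r:+.2f}, p = {p:.3f}")
```

Factors showing a strong, statistically significant correlation become the leading candidates for the Improve step.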
Improvements to the undesirable effect are made by addressing these highly correlated causal factors; options such as training workers, upgrading or better maintaining equipment, introducing new processes, or improving the design are investigated and tested. In addition to the efficacy of these improvements, feasibility constraints such as cost and time to implement are considered in deciding what to implement and when.
Once the improvement is in place, standardization of the improvement is needed to prevent falling back to old ways. Thus, ongoing data collection helps control any deviations from the new process.
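As one illustration of that control step, the sketch below (plain Python, with made-up measurements) establishes Shewhart-style three-sigma limits from post-improvement data and flags any new reading that drifts outside them:

```python
import statistics

# Hypothetical measurements collected after the improvement is in place.
baseline = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0, 9.9]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma  # upper/lower control limits

def in_control(x: float) -> bool:
    """Return True if a new measurement stays within the control limits."""
    return lcl <= x <= ucl

for x in (10.05, 10.9):
    print(f"{x}: {'ok' if in_control(x) else 'out of control, investigate'}")
```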
What Role Can QFD Play in DMAIC?
QFD has typically focused on new product development; the large, time-consuming "houses" of classical QFD are considered overkill for addressing local problems associated with production or service delivery failures. Rather, QFD is an approach for identifying customer needs far upstream from production, even prior to the design phase, in order to define quality from the customer's or user's perspective, assure it is designed into the new product, and assure it during build and delivery.
How To Handle VOC Issues — Lessons from Japan crisis: Anticipating Improbables with Irreversible Consequences
This is a QFDI newsletter from April 2011, discussing the danger of using ordinal-scale math in FMEA, namely for computing the risk priority number (RPN) when assessing black swan events. The topic is important enough that we thought to share it again for those who missed it.
"The role of Quality in Fukushima nuclear crisis"
1. Centralized consensus vs. triage leadership in disaster preparedness and decision making.
One of the tenets of quality management is "Plan-Do-Check-Act." We find that when the planning has been done properly and consensus built among constituents, most processes will fulfill requirements, and the Check-Act serves to fine tune the process. In Japan, this consensus building is called "ne-mawashi" or going around the roots of a tree before transplanting it to make sure everything is ok.
While TQM experts praise consensus as good for planning, there is a downside that Dr. Deming warned about in chapter 6 of his book The New Economics. That is — "with shared responsibility, no one is responsible." Thus, ne-mawashi can lead to finger pointing and blame instead of collaboration, as well as increased murkiness in accountability and delay in critical actions.
2. This raises several quality questions:
(a) In a disaster, do we go back to Plan or do we go directly to Do-Check-Act (sometimes called Do-Redo) at the local level?
'Planning' may require subject-matter experts, who may not be optimally located since the exact location of the disaster may be unknown until after it occurs, and time, which may be limited by threats to life or subsequent failures in other systems.
Also, in terms of 'planning' resources, are the same resources being competed for by various emergency operations (such as fire, police, and medical), or should separate resources be planned? From a time perspective, should priority be given to allocating resources to those who are still alive and need immediate assistance, or should resources first be expedited to cooling nuclear fuel, to address the medium-term risk to the life and livelihood of survivors?
In the case of Japan, were certain needs more urgent than others, such as the need to verify the emergency level vs. the need to issue a quick evacuation order, or the need to determine resources for disaster relief vs. the need to add resources to prevent a nuclear event? How should those priorities be made, by whom, and when? Should such priorities have changed the way leaders approached the 'planning,' 'doing,' and 'checking'?
(b) In disaster preparedness, how can the extent of the disaster be predicted?
If the disaster falls within the predicted parameters, the planned response may be sufficient. If the disaster rises to unanticipated levels, however, as is the case in Japan, the response plan can easily become insufficient.
"Beyond expectation" was how virtually everyone — from Tokyo Electric Power Company (the operator of Fukushima power plant) to the government nuclear power regulators and safety commission— described the March 11 earthquake and tsunami in Tohoku region, although retrospective review of historic data begins to hint otherwise.
The probability of a nuclear fatality was set in 2003 by the Japanese Nuclear Commission (JNC) to not exceed 1 × 10⁻⁶ per year, or about once in a million years. On the Japanese nuclear event, Nassim Nicholas Taleb, author of The Black Swan, cautions that model error causes underestimation of small probabilities and their contribution (see his web site). This highly improbable event with massive consequences is what Taleb calls a "Black Swan."
(c) Is standard FMEA practice adequate for a Black Swan event?
In FMEA (Failure Modes and Effects Analysis) we try to account for this Black Swan by looking at not only frequency of occurrence, but also impact and detection. Assuming JNC's probability estimate for a nuclear fatality of 1 × 10⁻⁶, the likelihood of a M9.0 earthquake at less than 1 per 100 years or 1 × 10⁻² (worst-case prediction), and the likelihood of a 20-meter tsunami at less than 1 per 100 years or 1 × 10⁻² (worst-case prediction), the probability of all three occurring simultaneously would be 1 × 10⁻¹⁰, or 1 in 10,000,000,000 (one in ten billion).
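The arithmetic above, and the ordinal-scale danger this newsletter warns about, can be shown in a few lines of Python; the severity/occurrence/detection ratings below are hypothetical:

```python
# Annual worst-case probabilities cited in the text.
p_nuclear_fatality = 1e-6  # JNC's design target
p_m9_earthquake    = 1e-2  # less than 1 per 100 years
p_20m_tsunami      = 1e-2  # less than 1 per 100 years

# Treating the three events as independent (the text's assumption),
# the joint probability is simply their product.
p_joint = p_nuclear_fatality * p_m9_earthquake * p_20m_tsunami
print(f"joint probability: {p_joint:.0e} per year")  # 1e-10

# Classical FMEA instead multiplies ordinal 1-10 ratings into an RPN.
severity, occurrence, detection = 10, 1, 1  # hypothetical Black Swan ratings
rpn = severity * occurrence * detection
print(f"RPN = {rpn} of a possible 1000")
# A catastrophic but 'improbable' event can thus rank below routine
# nuisances, which is exactly the ordinal-math danger described above.
```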
"The role of Quality in Fukushima nuclear crisis"
1. Centralized consensus vs. triage leadership in disaster preparedness and decision making.
One of the tenets of quality management is "Plan-Do-Check-Act." We find that when the planning has been done properly and consensus built among constituents, most processes will fulfill requirements, and the Check-Act serves to fine tune the process. In Japan, this consensus building is called "ne-mawashi" or going around the roots of a tree before transplanting it to make sure everything is ok.
While TQM experts praise consensus as good for planning, there is a downside that Dr. Deming warned about in chapter 6 of his book The New Economics. That is — "with shared responsibility, no one is responsible." Thus, ne-mawashi can lead to finger pointing and blame instead of collaboration, as well as increased murkiness in accountability and delay in critical actions.
2. This raises these quality questions:
(a) In a disaster, do we go back to Plan or do we go directly to Do-Check-Act (sometimes called Do-Redo) at the local level?
'Planning' may require subject matter experts who may not be optimally located since the exact location of the disaster may be unknown until after it occurs, and time which may be limited by threat to life or subsequent failures in other systems.
Also, in terms of 'planning' resources, are the same resources being competed for various emergency operations (such as fire, police, medical), or should different resources be planned? From a time perspective, should the priority be given to allocating the resources to take care of those who are still alive and need immediate assistance, or should the resources be expedited first to cooling nuclear fuel to address the medium term risk to the life and livelihood of survivors?
In the case of Japan, were certain needs more urgent than others? Such as the need to verify the emergency level vs. the need to issue a quick evacuation order; the need to determine resources for disaster relief vs. the need to add resources to prevent a nuclear event, etc. And how should those priorities be made, by whom, and when? Should such priorities have changed the way the leaders approach the 'planning,' 'doing,' and 'checking'?
(b) In disaster preparedness, how has the extent of the disaster be predicted?
If the disaster falls within the predicted parameters, the planned response may be sufficient. If the disaster rises to unanticipated levels, however, as is the case in Japan, the response plan can easily become insufficient.
"Beyond expectation" was how virtually everyone — from Tokyo Electric Power Company (the operator of Fukushima power plant) to the government nuclear power regulators and safety commission— described the March 11 earthquake and tsunami in Tohoku region, although retrospective review of historic data begins to hint otherwise.
The probability of a nuclear fatality was set in 2003 by the Japanese Nuclear Commission (JNC) to not exceed 1 × 10-6 per year or about 1 in a million years. On the Japanese nuclear event, Nassim Nicholas Taleb, author of The Black Swan, cautions, that model error causes underestimation of small probabilities and their contribution (see his web site). This highly improbable event with massive consequences is what Taleb calls a "Black Swan."
(c) Is standard FMEA practice adequate for for a Black Swan event?
In FMEA (Failure Modes and Effects Analysis) we try to account for this Black Swan by looking at not only frequency of occurrence, but also impact and detection. Assuming JNC's probability estimate for a nuclear fatality of 1 × 10-6, the likelihood of a M9.0 earthquake at less than 1 per 100 years or 1 × 10-2 (worst case prediction), and the likelihood of a 20 meter tsunami at less than 1 per 100 years or 1 × 10-2 (worst case prediction), the probability of all three occurring simultaneously would be 1 × 10-10, or 1 in 10,000,000,000 (one in ten billion).
Welcome!
Welcome to the QFD Institute blog, hosted by our staff. This forum is an opportunity to hear the "voice" of QFD practitioners, both new and veteran.
QFD is useful in any industry and application to acquire, analyze, solve, and assure the quality of the "voice of the customer."
Post any question you have to start a conversation from which we all can learn and improve our technique.
As a kickoff, we'll feature recent popular QFDI newsletters. (Sign up for a free subscription to the QFDI newsletter on this page!)
But the blog will have a more informal and interactive format. So drop us a note by email or in the comment box. Your input is always welcome.
- mayumi