
About the HELM Project

HELM (Helping Engineers Learn Mathematics) materials were the outcome of a three-year curriculum development project undertaken by a consortium of five English universities led by Loughborough University, funded by the Higher Education Funding Council for England under the Fund for the Development of Teaching and Learning for the period October 2002 - September 2005, with additional transferability funding October 2005 - September 2006.

HELM aims to enhance the mathematical education of engineering undergraduates through flexible learning resources, mainly these Workbooks.

HELM learning resources were produced primarily by teams of writers at six universities: Hull, Loughborough, Manchester, Newcastle, Reading, Sunderland.

HELM gratefully acknowledges the valuable support of colleagues at the following universities and colleges involved in the critical reading, trialling, enhancement and revision of the learning materials:

Aston, Bournemouth & Poole College, Cambridge, City, Glamorgan, Glasgow, Glasgow Caledonian, Glenrothes Institute of Applied Technology, Harper Adams, Hertfordshire, Leicester, Liverpool, London Metropolitan, Moray College, Northumbria, Nottingham, Nottingham Trent, Oxford Brookes, Plymouth, Portsmouth, Queens Belfast, Robert Gordon, Royal Forest of Dean College, Salford, Sligo Institute of Technology, Southampton, Southampton Institute, Surrey, Teesside, Ulster, University of Wales Institute Cardiff, West Kingsway College (London), West Notts College.

HELM Contacts:
Post: HELM, Mathematics Education Centre, Loughborough University, Loughborough, LE11 3TU.
Email: [email protected]    Web: http://helm.lboro.ac.uk
 

HELM  Workbooks  List  


1   Basic  Algebra   26   Functions  of  a  Complex  Variable  
2   Basic  Functions   27   Multiple  Integration  
3   Equations,  Inequalities  &  Partial  Fractions   28   Differential  Vector  Calculus  
4   Trigonometry   29   Integral  Vector  Calculus  
5   Functions  and  Modelling   30   Introduction  to  Numerical  Methods  
6   Exponential  and  Logarithmic  Functions   31   Numerical  Methods  of  Approximation  
7   Matrices   32   Numerical  Initial  Value  Problems  
8   Matrix  Solution  of  Equations   33   Numerical  Boundary  Value  Problems  
9   Vectors   34   Modelling  Motion  
10   Complex  Numbers   35   Sets  and  Probability  
11   Differentiation   36   Descriptive  Statistics  
12   Applications  of  Differentiation   37   Discrete  Probability  Distributions  
13   Integration   38   Continuous  Probability  Distributions  
14   Applications  of  Integration  1   39   The  Normal  Distribution  
15   Applications  of  Integration  2   40   Sampling  Distributions  and  Estimation  
16   Sequences  and  Series   41   Hypothesis  Testing  
17   Conics  and  Polar  Coordinates   42   Goodness  of  Fit  and  Contingency  Tables  
18   Functions  of  Several  Variables   43   Regression  and  Correlation  
19   Differential  Equations   44   Analysis  of  Variance  
20   Laplace Transforms   45   Non-parametric Statistics
21   z-Transforms   46   Reliability and Quality Control
22   Eigenvalues  and  Eigenvectors   47   Mathematics  and  Physics  Miscellany  
23   Fourier  Series   48   Engineering  Case  Study  
24   Fourier  Transforms   49   Student’s  Guide  
25   Partial  Differential  Equations   50   Tutor’s  Guide  
 
©  Copyright    Loughborough  University,  2015
 
 
 
Production  of  this  2015  edition,  containing  corrections  and  minor  
revisions  of  the  2008  edition,  was  funded  by  the  sigma  Network.    

 
 
 
Contents

46 Reliability and Quality Control
46.1 Reliability
46.2 Quality Control

Learning outcomes
You will first learn about the importance of the concept of reliability applied to systems and
products. The second Section introduces you to the very important topic of quality control
in production processes. In both cases you will learn how to perform the basic calculations
necessary to use each topic in practice.
 

46.1 Reliability

Introduction
Much of the theory of reliability was developed initially for use in the electronics industry where
components often fail with little if any prior warning. In such cases the hazard function or conditional
failure rate function is constant and any functioning component or system is taken to be ‘as new’.
There are other cases where the conditional failure rate function is time dependent, often proportional
to the time that the system or component has been in use. The function may be an increasing function
of time (as with random vibrations for example) or decreasing with time (as with concrete whose
strength, up to a point, increases with time). If we can develop a lifetime model, we can use it to
plan such things as maintenance and part replacement schedules, whole system replacements and
reliability testing schedules.

Prerequisites
Before starting this Section you should . . .
• be familiar with the results and concepts met in the study of probability
• understand and be able to use continuous probability distributions

Learning Outcomes
On completion you should be able to . . .
• appreciate the importance of lifetime distributions
• complete reliability calculations for simple systems
• state the relationship between the Weibull distribution and the exponential distribution


1. Reliability

Lifetime distributions
From an engineering point of view, the ability to predict the lifetime of a whole system or a system
component is very important. Such lifetimes can be predicted using a statistical approach using
appropriate distributions. Common examples of structures whose lifetimes we need to know are
airframes, bridges, oil rigs and at a simpler, less catastrophic level, system components such as cycle
chains, engine timing belts and components of electronic systems such as televisions and computers.
If we can develop a lifetime model, we can use it to plan such things as maintenance and part
replacement schedules, whole system replacements and reliability testing schedules.
We start by looking at the length of use of a system or component prior to failure (the age prior to
failure) and from this develop a definition of reliability. Lifetime distributions are functions of time
and may be expressed as probability density functions and so we may write
$$F(t) = \int_0^t f(t)\, dt$$

This represents the probability that the system or component fails anywhere between 0 and t.

Key Point 1
The probability that a system or component will fail only after time t may be written as
R(t) = 1 − F (t)
The function R(t) is usually called the reliability function.

In practice engineers (and others!) are often interested in the so-called hazard function or conditional
failure rate function which gives the probability that a system or component fails after it has been in
use for a given time. This function may be defined as
 
$$H(t) = \lim_{\Delta t \to 0} \frac{P(\text{failure in the interval } (t, t+\Delta t))/\Delta t}{P(\text{survival up to time } t)} = \frac{1}{R(t)} \lim_{\Delta t \to 0} \frac{F(t+\Delta t) - F(t)}{\Delta t} = \frac{1}{R(t)}\frac{d}{dt}F(t) = \frac{f(t)}{R(t)}$$

This gives the conditional failure rate function as

$$H(t) = \lim_{\Delta t \to 0} \left[ \frac{\int_t^{t+\Delta t} f(t)\, dt \,/\, \Delta t}{R(t)} \right] = \frac{f(t)}{R(t)}$$

Essentially we are describing the rate of failure per unit time for (say) mechanical or electrical
components which have already been in service for a given time. A graph of H(t) often shows a high
initial failure rate followed by a period of relative reliability followed by a period of increasingly high
failure rates as a system or component ages. A typical graph (sometimes called a bathtub graph) is
shown below.
[Figure 1: a typical bathtub-shaped plot of H(t) against t, divided into three phases: early life and random failure, useful life and random failure, end of life and random failure.]

Figure 1
Note that ‘early life and random failure’ includes failure due to defects being present and that ‘end
of life and random failure’ includes failure due to ageing.
The reliability of a system or component may be defined as the probability that the system or
component functions for a given time, that is, the probability that it will fail only after the given
time. Put another way, R(t) is the probability that the system or component is still functioning at
time t.

The exponential distribution


We have already met the exponential distribution in the form

$$f(t) = \lambda e^{-\lambda t}, \qquad t \geq 0$$

One of the simplest distributions describing failure is this same distribution written as

$$f(t) = \frac{1}{\mu} e^{-t/\mu}, \qquad t \geq 0$$

where, in this case, $\mu = 1/\lambda$ is the mean time to failure. One property of this distribution is that the hazard function is a constant independent of time - the 'good as new' syndrome mentioned above. To show that the probability of failure is independent of age consider the following.
$$F(t) = \int_0^t \frac{1}{\mu} e^{-t/\mu}\, dt = \Big[ -e^{-t/\mu} \Big]_0^t = 1 - e^{-t/\mu}$$


Hence the reliability (noting that $F(t) = 1 - e^{-t/\mu} \to 1$ as $t \to \infty$, i.e. the total area under the pdf is unity) is

$$R(t) = 1 - F(t) = e^{-t/\mu}$$

Hence, the hazard function or conditional failure rate function H(t) is given by

$$H(t) = \frac{f(t)}{R(t)} = \frac{\frac{1}{\mu}e^{-t/\mu}}{e^{-t/\mu}} = \frac{1}{\mu}$$

which is a constant independent of time.
Another way of looking at this is to consider the probability that failure occurs in the interval (τ, τ + t) given that the system is functioning at time τ. This probability is

$$\frac{F(\tau + t) - F(\tau)}{R(\tau)} = \frac{(1 - e^{-(\tau+t)/\mu}) - (1 - e^{-\tau/\mu})}{e^{-\tau/\mu}} = 1 - e^{-t/\mu}$$

This is just the probability that failure occurs in the interval (0, t) and implies that ageing has no effect on failure. This is sometimes referred to as the 'good as new' syndrome.
It is worth noting that in the modelling of many complex systems it is assumed that only random
component failures are important. This enables us to assume the use of the exponential distribution
since initial failures are removed by a ‘running-in’ process and the time to ultimate failure is usually
long.
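These quantities recur throughout the Section, so it is convenient to compute them programmatically. The following Python sketch is our own illustration (the function names and the 8000-hour mean are arbitrary choices, not part of the HELM text); it evaluates F(t), R(t) and the hazard for an exponential lifetime model:

```python
import math

def exp_cdf(t, mu):
    """F(t) = 1 - exp(-t/mu): probability of failure before time t."""
    return 1.0 - math.exp(-t / mu)

def exp_reliability(t, mu):
    """R(t) = 1 - F(t) = exp(-t/mu): probability of surviving beyond t."""
    return math.exp(-t / mu)

def exp_hazard(t, mu):
    """H(t) = f(t)/R(t), which reduces to the constant 1/mu."""
    f = math.exp(-t / mu) / mu        # the pdf evaluated at t
    return f / exp_reliability(t, mu)

mu = 8000.0                           # mean time to failure, in hours (illustrative)
for t in (1000.0, 7000.0, 20000.0):
    print(f"t = {t:7.0f}  F = {exp_cdf(t, mu):.4f}  "
          f"R = {exp_reliability(t, mu):.4f}  H = {exp_hazard(t, mu):.6f}")
```

The printed hazard is 1/8000 = 0.000125 whatever the value of t, which is the 'good as new' property in numerical form.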

Example 1
The lifetime of a modern low-wattage electronic light bulb is known to be expo-
nentially distributed with a mean of 8000 hours.
Find the proportion of bulbs that may be expected to fail before 7000 hours use.

Solution
We know that µ = 8000 and so

$$F(t) = \int_0^t \frac{1}{8000} e^{-t/8000}\, dt = \Big[ -e^{-t/8000} \Big]_0^t = 1 - e^{-t/8000}$$

Hence $F(7000) = 1 - e^{-7000/8000} = 1 - e^{-0.875} = 0.5831$ and we expect that about 58% of the bulbs will fail before 7000 hours of use.

Task
A particular electronic device will only function correctly if two essential compo-
nents both function correctly. The lifetime of the first component is known to be
exponentially distributed with a mean of 6000 hours and the lifetime of the second
component is known to be exponentially distributed with a mean of 7000 hours.
Find the proportion of devices that may be expected to fail before 8000 hours use.
State clearly any assumptions you make.

Your solution

Answer
The assumption made is that the components operate independently.
For the first component F (t) = 1 − e−t/6000 so that F (8000) = 1 − e−8000/6000 = 1 − e−4/3 = 0.7364
For the second component F (t) = 1 − e−t/7000 so that F (8000) = 1 − e−8000/7000 = 0.6811
The probability that the device fails before 8000 hours' use is the probability that at least one component fails in that time. If A and B are the events that the first and second components respectively fail before 8000 hours, this is P(A ∪ B) = P(A) + P(B) − P(A ∩ B).
Hence the probability that the device fails before 8000 hours' use is
0.7364 + 0.6811 − 0.7364 × 0.6811 = 0.916
and we expect just under 92% of the devices to fail before 8000 hours' use.

An alternative answer may be obtained more directly by using the reliability function R(t):
The assumption made is that the components operate independently.
For the first component R(t) = e−t/6000 so that R(8000) = e−8000/6000 = e−4/3 = 0.2636
For the second component R(t) = e−t/7000 so that R(8000) = e−8000/7000 = 0.3189
The probability that the device will continue to function after 8000 hours' use is given by
0.2636 × 0.3189 = 0.0841
Hence the probability that the device will fail before 8000 hours use is 1 − 0.0841 = 0.916
and we expect just under 92% of the devices to fail before 8000 hours use.


2. System reliability
It is reasonable to ask whether, in designing a system, an engineer should design a system using
components in series or in parallel. The engineer may not have a choice of course! We may represent
a system consisting of n components say C1 , C2 . . . , Cn with reliabilities (these are just probability
values) R1 , R2 . . . , Rn respectively as series and parallel systems as shown below.

[Figure 2: in the series design the components C1, C2, ..., Cn are connected one after another in a single chain; in the parallel design they are connected side by side between common end points.]

Figure 2
With a series design, the system will fail if any component fails. With a parallel design, the system
will work as long as any component works.
Assuming that the components are independent, we can express the reliability of the series design as
RSeries = R1 × R2 × · · · × Rn
simply by multiplying the probabilities.
Since each reliability value is less than one, we may conclude that a series design is less reliable than
its least reliable component.
Similarly (although by no means as clearly!), we can express the reliability of the parallel design as
RParallel = 1 − (1 − R1 )(1 − R2 ) . . . (1 − Rn )
The derivation of this result is illustrated in Example 3 below for the case n = 3. In this case, the algebra involved is fairly straightforward. We can conclude that the parallel design is at least as reliable as its most reliable component.
Engineers will sometimes include 'redundant' components in parallel to improve reliability. The spare wheel of a car is a well known example.
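Both formulas translate directly into code. A minimal Python sketch (the function names are our own):

```python
from functools import reduce

def series_reliability(reliabilities):
    """A series system works only if every component works:
    R_series = R1 * R2 * ... * Rn."""
    return reduce(lambda a, b: a * b, reliabilities, 1.0)

def parallel_reliability(reliabilities):
    """A parallel system fails only if every component fails:
    R_parallel = 1 - (1 - R1)(1 - R2)...(1 - Rn)."""
    failure = reduce(lambda a, b: a * b, (1.0 - r for r in reliabilities), 1.0)
    return 1.0 - failure

rs = [0.2, 0.3, 0.4]              # the values used in Examples 2 and 3 below
print(series_reliability(rs))     # 0.024
print(parallel_reliability(rs))   # 0.664
```

Running this reproduces the values obtained by hand in Examples 2 and 3 below.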

Example 2
Series design
Consider the three components C1 , C2 and C3 with reliabilities R1 , R2 and R3
connected in series as shown below

[Diagram: C1, C2 and C3 connected in series.]

Find the reliability of the system where R1 = 0.2, R2 = 0.3 and R3 = 0.4.

Solution
Since the components are assumed to act independently, we may clearly write
RSeries = R1 × R2 × R3
Taking R1 = 0.2, R2 = 0.3 and R3 = 0.4 we obtain the value RSeries = 0.2 × 0.3 × 0.4 = 0.024

Example 3
Parallel design
Consider the three components C1 , C2 and C3 with reliabilities R1 , R2 and R3
connected in parallel as shown below

[Diagram: C1, C2 and C3 connected in parallel.]

Find the reliability of the system where R1 = 0.2, R2 = 0.3 and R3 = 0.4.

Solution
Observing that Fi = 1 − Ri , where Fi represents the failure of the ith component and Ri represents
the reliability of the ith component we may write
FSystem = F1 F2 F3 → RSystem = 1 − F1 F2 F3 = 1 − (1 − R1)(1 − R2)(1 − R3)
Again taking R1 = 0.2, R2 = 0.3 and R3 = 0.4 we obtain
RSystem = 1 − (1 − 0.2)(1 − 0.3)(1 − 0.4) = 1 − 0.336 = 0.664


Hence series system reliability is less than any of the component reliabilities and parallel system
reliability is greater than any of the component reliabilities.

Task
Consider the two components C1 and C2 with reliabilities R1 and R2 connected
in series and in parallel as shown below. Assume that R1 = 0.3 and R2 = 0.4.

[Diagram: C1 and C2 connected in series (series configuration), and C1 and C2 connected in parallel (parallel configuration).]


Let RSeries be the reliability of the series configuration and RParallel be the reliability
of the parallel configuration

(a) Why would you expect that RSeries < 0.3 and RParallel > 0.4?
(b) Calculate RSeries
(c) Calculate RParallel

Your solution

Answer
(a) You would expect RSeries < 0.3 and RParallel > 0.4 because RSeries is less than any of the
component reliabilities and RParallel is greater than any of the component reliabilities.
(b) RSeries = R1 × R2 = 0.3 × 0.4 = 0.12
(c) RParallel = R1 + R2 − R1 R2 = 0.3 + 0.4 − 0.3 × 0.4 = 0.58

3. The Weibull distribution
The Weibull distribution was first used to describe the behaviour of light bulbs as they undergo the
ageing process. Other applications are widespread and include the description of structural failure,
ball-bearing failure and the failure of a variety of electronic components.

Key Point 2
If the random variable X is defined by a probability density function of the form

$$f(x) = \alpha\beta(\alpha x)^{\beta-1} e^{-(\alpha x)^\beta}$$

then X is said to follow a Weibull distribution.

The hazard function or conditional failure rate function H(t), which gives the probability that a system or component fails after it has been in use for a given time, is constant for the exponential distribution but for a Weibull distribution is proportional to $x^{\beta-1}$. This implies that β = 1 gives a constant hazard, β < 1 gives a reducing hazard and β > 1 gives an increasing hazard. Note that α is simply a scale factor while the case β = 1 reduces the Weibull distribution to

$$f(x) = \alpha e^{-\alpha x}$$

which you may recognize as one form of the exponential distribution.

Figure 3 below shows the Weibull distribution for various values of β. For simplicity, the graphs assume that α = 1. Essentially the plots are of the function $f(x) = \beta x^{\beta-1} e^{-x^\beta}$.

[Figure 3: The Weibull distribution with β = 1, 2, 4 and 8 - plots of the probability density function over 0 ≤ x ≤ 4, the peak becoming taller and sharper as β increases.]
In applications of the Weibull distribution, it is often useful to know the cumulative distribution function (cdf). Although it is not derived directly here, the cdf F(x) of the Weibull distribution, whose probability density function is $f(x) = \alpha\beta(\alpha x)^{\beta-1}e^{-(\alpha x)^\beta}$, is given by the function


$$F(x) = 1 - e^{-(\alpha x)^\beta}$$
The relationship between f(x) and F(x) may be seen by remembering that

$$f(x) = \frac{d}{dx} F(x)$$

Differentiating the cdf $F(x) = 1 - e^{-(\alpha x)^\beta}$ gives the result

$$f(x) = \frac{d}{dx}F(x) = \frac{\beta(\alpha x)^{\beta} e^{-(\alpha x)^\beta}}{x} = \alpha\beta(\alpha x)^{\beta-1} e^{-(\alpha x)^\beta}$$
Mean and variance of the Weibull distribution
The mean and variance of the Weibull distribution are quite complicated expressions and involve the use of the Gamma function, Γ(x). The outline explanation given below defines the Gamma function and shows that, for the applications in this Workbook, the function can be expressed as a simple factorial.
It can be shown that if the random variable X follows a Weibull distribution defined by the probability density function

$$f(x) = \alpha\beta(\alpha x)^{\beta-1} e^{-(\alpha x)^\beta}$$

then the mean and variance of the distribution are given by the expressions

$$E(X) = \frac{1}{\alpha}\,\Gamma\!\left(1 + \frac{1}{\beta}\right) \qquad \text{and} \qquad V(X) = \left(\frac{1}{\alpha}\right)^2 \left\{ \Gamma\!\left(1 + \frac{2}{\beta}\right) - \left[\Gamma\!\left(1 + \frac{1}{\beta}\right)\right]^2 \right\}$$

where Γ(r) is the gamma function defined by

$$\Gamma(r) = \int_0^\infty x^{r-1} e^{-x}\, dx$$

Using integration by parts, a fairly straightforward identity satisfied by the gamma function is

$$\Gamma(r) = (r-1)\Gamma(r-1)$$

Using this relationship tells us that

$$\Gamma(r) = (r-1)(r-2)(r-3)\ldots\Gamma(1) \qquad \text{for integral } r$$

but

$$\Gamma(1) = \int_0^\infty x^{1-1} e^{-x}\, dx = \int_0^\infty e^{-x}\, dx = \Big[ -e^{-x} \Big]_0^\infty = 1$$

so that

$$\Gamma(r) = (r-1)(r-2)(r-3)\ldots(3)(2)(1) = (r-1)! \qquad \text{for integral } r$$
Essentially, this means that when we calculate the mean and variance of the Weibull distribution using the expressions

$$E(X) = \frac{1}{\alpha}\,\Gamma\!\left(1 + \frac{1}{\beta}\right) \qquad \text{and} \qquad V(X) = \left(\frac{1}{\alpha}\right)^2 \left\{ \Gamma\!\left(1 + \frac{2}{\beta}\right) - \left[\Gamma\!\left(1 + \frac{1}{\beta}\right)\right]^2 \right\}$$
we are evaluating factorials and the calculation is not as difficult as you might imagine at first sight.
Note that to apply the work as presented here, the expressions

$$\left(1 + \frac{1}{\beta}\right) \qquad \text{and} \qquad \left(1 + \frac{2}{\beta}\right)$$

must both take positive integral values (i.e. $\frac{1}{\beta}$ must be an integer) in order for the factorial calculation to make sense. If the above expressions do not assume positive integral values, you will need to know much more about the values and behaviour of the Gamma function in order to do calculations. In practice, we would use a computer of course. As you might expect, the Gamma function is tabulated to assist in such calculations but in the examples and exercises set in this Workbook, the values of $\frac{1}{\beta}$ will always be integral.
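In code there is no need to restrict $1/\beta$ to integer values, since most languages provide the Gamma function directly. A Python sketch using the standard library's math.gamma (the helper names are our own):

```python
import math

def weibull_mean(alpha, beta):
    """E(X) = (1/alpha) * Gamma(1 + 1/beta)."""
    return math.gamma(1.0 + 1.0 / beta) / alpha

def weibull_variance(alpha, beta):
    """V(X) = (1/alpha)^2 * (Gamma(1 + 2/beta) - Gamma(1 + 1/beta)^2)."""
    g1 = math.gamma(1.0 + 1.0 / beta)
    g2 = math.gamma(1.0 + 2.0 / beta)
    return (g2 - g1 * g1) / (alpha * alpha)

def weibull_reliability(t, alpha, beta):
    """R(t) = exp(-(alpha*t)^beta)."""
    return math.exp(-((alpha * t) ** beta))

# Values from Example 4 below: alpha = 0.0002, beta = 0.5
print(weibull_mean(0.0002, 0.5))                 # 10000.0 hours
print(weibull_reliability(8000.0, 0.0002, 0.5))  # about 0.282
```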

Example 4
The main drive shaft of a pumping engine runs in two bearings whose lifetime
follows a Weibull distribution with random variable X and parameters α = 0.0002
and β = 0.5.

(a) Find the expected time that a single bearing runs before failure.
(b) Find the probability that any one bearing lasts at least 8000 hours.
(c) Find the probability that both bearings last at least 6000 hours.

Solution

(a) We know that $E(X) = \frac{1}{\alpha}\Gamma(1 + \frac{1}{\beta}) = 5000 \times \Gamma(1 + 2) = 5000 \times 2 = 10000$ hours.

(b) We require P(X > 8000); this is given by the calculation:

$$P(X > 8000) = 1 - F(8000) = 1 - (1 - e^{-(0.0002 \times 8000)^{0.5}}) = e^{-(0.0002 \times 8000)^{0.5}} = e^{-1.265} = 0.282$$

(c) Assuming that the bearings wear independently, the probability that both bearings last at least 6000 hours is given by $\{P(X > 6000)\}^2$. But P(X > 6000) is given by the calculation

$$P(X > 6000) = 1 - F(6000) = 1 - (1 - e^{-(0.0002 \times 6000)^{0.5}}) = 0.335$$

so that the probability that both bearings last at least 6000 hours is given by

$$\{P(X > 6000)\}^2 = 0.335^2 = 0.112$$


Task
A shaft runs in four roller bearings the lifetime of each of which follows a Weibull
distribution with parameters α = 0.0001 and β = 1/3.

(a) Find the mean life of a bearing.


(b) Find the variance of the life of a bearing.
(c) Find the probability that all four bearings last at least 50,000 hours.
State clearly any assumptions you make when determining this proba-
bility.

Your solution

Answer

(a) We know that $E(X) = \frac{1}{\alpha}\Gamma(1 + \frac{1}{\beta}) = 10000 \times \Gamma(1 + 3) = 10000 \times 3! = 60000$ hours.

(b) We know that $V(X) = \left(\frac{1}{\alpha}\right)^2\left\{\Gamma(1 + \frac{2}{\beta}) - \left[\Gamma(1 + \frac{1}{\beta})\right]^2\right\}$ so that

$$V(X) = (10000)^2\{\Gamma(1+6) - [\Gamma(1+3)]^2\} = (10000)^2\{\Gamma(7) - [\Gamma(4)]^2\} = (10000)^2\{6! - (3!)^2\} = 684(10000)^2 = 6.84 \times 10^{10}$$

(c) The probability that one bearing will last at least 50,000 hours is given by the calculation

$$P(X > 50000) = 1 - F(50000) = 1 - (1 - e^{-(0.0001 \times 50000)^{1/3}}) = e^{-5^{1/3}} = 0.181$$

Assuming that the four bearings have independent distributions, the probability that all four bearings last at least 50,000 hours is $(0.181)^4 = 0.0011$.
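These figures are easily checked numerically, reusing the helper functions sketched in the previous subsection (a sketch only):

```python
import math

alpha, beta = 0.0001, 1.0 / 3.0
mean = math.gamma(1.0 + 1.0 / beta) / alpha        # 10000 * 3! = 60000.0 hours
r_single = math.exp(-((alpha * 50000.0) ** beta))  # exp(-5^(1/3)), about 0.181
print(mean, r_single, r_single ** 4)               # 0.181^4 is about 0.0011
```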

Exercises
1. The lifetimes in hours of certain machines have Weibull distributions with probability density
function

$$f(t) = \begin{cases} 0 & (t < 0) \\ \alpha\beta(\alpha t)^{\beta-1}\exp\{-(\alpha t)^\beta\} & (t \geq 0) \end{cases}$$

(a) Verify that $\int_0^t \alpha\beta(\alpha u)^{\beta-1}\exp\{-(\alpha u)^\beta\}\, du = 1 - \exp\{-(\alpha t)^\beta\}$.

(b) Write down the distribution function of the lifetime.


(c) Find the probability that a particular machine is still working after 500 hours of use if
α = 0.001 and β = 2.
(d) In a factory, n of these machines are installed and started together. Assuming that
failures in the machines are independent, find the probability that all of the machines are
still working after t hours and hence find the probability density function of the time till
the first failure among the machines.

2. If the lifetime distribution of a machine has hazard function h(t), then we can find the reliability function R(t) as follows. First we find the "cumulative hazard function" H(t) using $H(t) = \int_0^t h(t)\, dt$; then the reliability is $R(t) = e^{-H(t)}$.

(a) Starting with R(t) = exp{−H(t)}, work back to the hazard function and hence confirm
the method of finding the reliability from the hazard.
(b) A lifetime distribution has hazard function $h(t) = \theta_0 + \theta_1 t + \theta_2 t^2$. Find
(i) the reliability function.
(ii) the probability density function.
(c) A lifetime distribution has hazard function $h(t) = \frac{1}{t+1}$. Find
(i) the reliability function;
(ii) the probability density function;
(iii) the median.

What happens if you try to find the mean?


(d) A lifetime distribution has hazard function $h(t) = \frac{\rho\theta}{\rho t + 1}$, where ρ > 0 and θ > 1. Find
(i) the reliability function.
(ii) the probability density function.
(iii) the median.
(iv) the mean.



3. A machine has n components, the lifetimes of which are independent. However the whole
machine will fail if any component fails. The hazard functions for the components are
$h_1(t), \ldots, h_n(t)$. Show that the hazard function for the machine is $\sum_{i=1}^n h_i(t)$.

4. Suppose that the lifetime distributions of the components in Exercise 3 are Weibull distributions
with scale parameters ρ1 , . . . , ρn and a common index (i.e. “shape parameter”) γ so that the
hazard function for component i is γρi (ρi t)γ−1 . Find the lifetime distribution for the machine.

5. (Difficult). Show that, if T is a Weibull random variable with hazard function $\gamma\rho(\rho t)^{\gamma-1}$,

(a) the median is $M(T) = \rho^{-1}(\ln 2)^{1/\gamma}$,

(b) the mean is $E(T) = \rho^{-1}\Gamma(1 + \gamma^{-1})$

(c) the variance is $V(T) = \rho^{-2}\{\Gamma(1 + 2\gamma^{-1}) - [\Gamma(1 + \gamma^{-1})]^2\}$.

Note that $\Gamma(r) = \int_0^\infty x^{r-1}e^{-x}\, dx$. In the integrations it is helpful to use the substitution $u = (\rho t)^\gamma$.

Answers
1.

(a) The integrand may be written as $\frac{d}{du}\left(-\exp\{-(\alpha u)^\beta\}\right)$, so

$$\int_0^t \alpha\beta(\alpha u)^{\beta-1}\exp\{-(\alpha u)^\beta\}\, du = \Big[-\exp\{-(\alpha u)^\beta\}\Big]_0^t = 1 - \exp\{-(\alpha t)^\beta\}.$$

(b) The distribution function is $F(t) = 1 - \exp\{-(\alpha t)^\beta\}$.

(c) $R(500) = \exp\{-(0.001 \times 500)^2\} = 0.7788$

(d) The probability that all of the machines are still working after t hours is

$$R_n(t) = \{R(t)\}^n = \exp\{-n(\alpha t)^\beta\} = \exp\{-(\alpha n^{1/\beta}\, t)^\beta\}.$$

Hence the time to the first failure has a Weibull distribution of the same form, with the same index β and with α replaced by $\alpha n^{1/\beta}$. The pdf is

$$f_n(t) = \begin{cases} 0 & (t < 0) \\ n\alpha\beta(\alpha t)^{\beta-1}\exp\{-n(\alpha t)^\beta\} & (t \geq 0) \end{cases}$$
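A short Monte Carlo check of this conclusion can be reassuring. The sketch below (the parameter values are arbitrary illustrative choices) compares the empirical mean time to first failure among n machines with the mean of a Weibull distribution having the same index β and scale $\alpha n^{1/\beta}$:

```python
import math
import random

alpha, beta, n = 0.001, 2.0, 5          # illustrative values only
trials = 100_000
random.seed(1)

# random.weibullvariate(scale, shape) samples from F(x) = 1 - exp(-(x/scale)^shape),
# so the scale argument is 1/alpha in the parametrisation used here.
first_failures = [
    min(random.weibullvariate(1.0 / alpha, beta) for _ in range(n))
    for _ in range(trials)
]

empirical = sum(first_failures) / trials
alpha_min = alpha * n ** (1.0 / beta)   # scale parameter of the first failure
theoretical = math.gamma(1.0 + 1.0 / beta) / alpha_min
print(empirical, theoretical)           # the two values should agree closely
```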

2.

(a) The distribution function is $F(t) = 1 - R(t) = 1 - e^{-H(t)}$ so the pdf is

$$f(t) = \frac{d}{dt}F(t) = h(t)e^{-H(t)}.$$

Hence the hazard function is

$$\frac{f(t)}{R(t)} = \frac{h(t)e^{-H(t)}}{e^{-H(t)}} = h(t)$$

which confirms the method of finding the reliability from the hazard.

(b) A lifetime distribution has hazard function $h(t) = \theta_0 + \theta_1 t + \theta_2 t^2$.

(i) Reliability R(t).

$$H(t) = \Big[\theta_0 t + \theta_1 t^2/2 + \theta_2 t^3/3\Big]_0^t = \theta_0 t + \theta_1 t^2/2 + \theta_2 t^3/3$$
$$R(t) = \exp\{-(\theta_0 t + \theta_1 t^2/2 + \theta_2 t^3/3)\}.$$

(ii) Probability density function f(t).

$$F(t) = 1 - R(t)$$
$$f(t) = (\theta_0 + \theta_1 t + \theta_2 t^2)\exp\{-(\theta_0 t + \theta_1 t^2/2 + \theta_2 t^3/3)\}.$$


(c) A lifetime distribution has hazard function $h(t) = \frac{1}{t+1}$.

(i) Reliability function R(t).

$$H(t) = \int_0^t \frac{1}{u+1}\, du = \Big[\ln(u+1)\Big]_0^t = \ln(t+1)$$
$$R(t) = \exp\{-H(t)\} = \frac{1}{t+1}$$

(ii) Probability density function f(t). $F(t) = 1 - R(t)$, so $f(t) = \frac{1}{(t+1)^2}$

(iii) Median M. $\frac{1}{M+1} = \frac{1}{2}$ so M = 1.

To find the mean: $E(T+1) = \int_0^\infty \frac{1}{t+1}\, dt$, which does not converge, so neither does E(T).

(d) A lifetime distribution has hazard function $h(t) = \frac{\rho\theta}{\rho t+1}$, where ρ > 0 and θ > 1.

(i) Reliability function R(t).

$$H(t) = \int_0^t \frac{\rho\theta}{\rho t+1}\, dt = \Big[\theta\ln(\rho t+1)\Big]_0^t = \theta\ln(\rho t+1)$$
$$R(t) = \exp\{-\theta\ln(\rho t+1)\} = (\rho t+1)^{-\theta}$$

(ii) Probability density function f(t).

$$F(t) = 1 - R(t)$$
$$f(t) = \theta\rho(\rho t+1)^{-(1+\theta)}$$

(iii) Median M.

$$(\rho M+1)^{-\theta} = 2^{-1}$$
$$\rho M+1 = 2^{1/\theta} \quad \therefore \quad M = \rho^{-1}(2^{1/\theta}-1)$$

(iv) Mean.

$$E(\rho T+1) = \theta\rho\int_0^\infty (\rho t+1)^{-\theta}\, dt = \left[-\frac{\theta}{\theta-1}(\rho t+1)^{1-\theta}\right]_0^\infty = \frac{\theta}{\theta-1}$$
$$E(T) = \rho^{-1}\left(\frac{\theta}{\theta-1} - 1\right) = \rho^{-1}(\theta-1)^{-1}$$

3. For each component

$$H_i(t) = \int_0^t h_i(t)\, dt, \qquad R_i(t) = \exp\{-H_i(t)\}$$

For the machine, the probability that all components are still working is

$$R(t) = \prod_{i=1}^n R_i(t) = \exp\left\{-\sum_{i=1}^n H_i(t)\right\}$$

Hence

$$F(t) = 1 - R(t)$$
$$f(t) = \left(\sum_{i=1}^n h_i(t)\right)\exp\left\{-\sum_{i=1}^n H_i(t)\right\}$$
$$h(t) = f(t)/R(t) = \sum_{i=1}^n h_i(t).$$

4. For each component

$$h_i(t) = \gamma\rho_i(\rho_i t)^{\gamma-1}$$
$$H_i(t) = \int_0^t \gamma\rho_i(\rho_i t)^{\gamma-1}\, dt = (\rho_i t)^\gamma$$

Hence, for the machine,

$$\sum_{i=1}^n H_i(t) = t^\gamma \sum_{i=1}^n \rho_i^\gamma$$
$$R(t) = \exp\left(-t^\gamma\sum_{i=1}^n \rho_i^\gamma\right)$$
$$F(t) = 1 - R(t)$$
$$f(t) = \gamma t^{\gamma-1}\sum_{i=1}^n \rho_i^\gamma\, \exp\left(-t^\gamma\sum_{i=1}^n \rho_i^\gamma\right)$$
$$h(t) = \gamma\rho(\rho t)^{\gamma-1}$$

so we have a Weibull distribution with index γ and scale parameter ρ such that

$$\rho^\gamma = \sum_{i=1}^n \rho_i^\gamma.$$


5. Suppose the distribution function is

$$F(t) = 1 - e^{-(\rho t)^\gamma}.$$

Then the pdf is

$$f(t) = \frac{d}{dt}F(t) = \gamma\rho(\rho t)^{\gamma-1}e^{-(\rho t)^\gamma}$$

and the hazard function is

$$h(t) = \frac{f(t)}{R(t)} = \frac{f(t)}{1 - F(t)} = \gamma\rho(\rho t)^{\gamma-1}$$

as required.

(a) At the median, M, F(M) = 0.5. So

$$1 - e^{-(\rho M)^\gamma} = 0.5$$
$$e^{-(\rho M)^\gamma} = 0.5$$
$$(\rho M)^\gamma = -\ln(0.5) = \ln 2$$
$$\rho M = (\ln 2)^{1/\gamma}$$
$$M = \rho^{-1}(\ln 2)^{1/\gamma}$$

(b) The mean is

$$E(T) = \int_0^\infty t\, f(t)\, dt = \int_0^\infty t\,\gamma\rho(\rho t)^{\gamma-1}e^{-(\rho t)^\gamma}\, dt = \int_0^\infty \gamma(\rho t)^\gamma e^{-(\rho t)^\gamma}\, dt$$

Let $u = (\rho t)^\gamma$, so $du/dt = \gamma\rho(\rho t)^{\gamma-1}$ and $du = \gamma\rho(\rho t)^{\gamma-1}\, dt$. Then

$$E(T) = \rho^{-1}\int_0^\infty u^{1/\gamma}e^{-u}\, du = \rho^{-1}\int_0^\infty u^{1+1/\gamma-1}e^{-u}\, du = \rho^{-1}\Gamma(1 + 1/\gamma)$$

(c) To find the variance we first find

$$E(T^2) = \int_0^\infty t^2 f(t)\, dt = \int_0^\infty t^2\gamma\rho(\rho t)^{\gamma-1}e^{-(\rho t)^\gamma}\, dt = \int_0^\infty \gamma\rho^{-1}(\rho t)^{\gamma+1}e^{-(\rho t)^\gamma}\, dt$$

Let $u = (\rho t)^\gamma$, so $du/dt = \gamma\rho(\rho t)^{\gamma-1}$ and $du = \gamma\rho(\rho t)^{\gamma-1}\, dt$. Then

$$E(T^2) = \rho^{-2}\int_0^\infty u^{2/\gamma}e^{-u}\, du = \rho^{-2}\int_0^\infty u^{1+2/\gamma-1}e^{-u}\, du = \rho^{-2}\Gamma(1 + 2/\gamma)$$

So

$$V(T) = E(T^2) - \{E(T)\}^2 = \rho^{-2}\Gamma(1 + 2/\gamma) - \rho^{-2}\{\Gamma(1 + 1/\gamma)\}^2 = \rho^{-2}\left\{\Gamma(1 + 2/\gamma) - \{\Gamma(1 + 1/\gamma)\}^2\right\}$$


 

46.2 Quality Control

Introduction
Quality control via the use of statistical methods is a very large area of study in its own right and
is central to success in modern industry with its emphasis on reducing costs while at the same
time improving quality. In recent times, the Japanese have been extremely successful at applying
statistical methods to industrial quality control and have gained a significant advantage over many
of their competitors. One need only think of the reputations enjoyed by Japanese motor, camera,
TV, video, DVD and general electronics manufacturers to realize just how successful they have
become. It is because of the global nature of modern industrial competition that quality control,
or more precisely, statistical quality control has become an area of central importance to engineers.
Manufacturing a product that the public wants to buy is no longer good enough. The product
must be of sufficiently high quality and sufficiently competitive price-wise that it is preferred to its
competitors. Without statistical quality control methods it is extremely difficult, if not impossible,
to either attain or maintain a truly competitive position.

Prerequisites
Before starting this Section you should . . .
• be able to calculate means and standard deviations from given data sets
• be able to estimate population means and standard deviations from samples

Learning Outcomes
On completion you should be able to . . .
• state what is meant by the term statistical quality control
• explain why different types of control charts are necessary
• construct and interpret a small variety of control charts, in particular those based on means and ranges
• describe in outline the relationship between hypothesis testing and statistical quality control

1. Quality control
Background
Techniques and methods for checking the quality of materials and the building of houses, temples,
monuments and roads have been used over the centuries. For example, the ancient Egyptians had
to make and use precise measurements and adopt very high standards of work in order to build the
Pyramids. In the Middle Ages, the Guilds were established in Europe to ensure that new entrants
to the craft/trade maintained standards. The newcomer was required to serve a long period of
apprenticeship under the supervision of a master craftsman, and had to demonstrate his ability to
produce work of the appropriate quality and standard before becoming a recognized tradesman. In
modern times the notion of quality has evolved through the stages outlined below.
Inspection
The Industrial Revolution introduced mass production techniques to the workplace. By the end of the
19th century, production processes were becoming more complex and it was beyond the capabilities
of a single individual to be responsible for all aspects of production. It is impossible to inspect
quality into a product in the sense that a faulty product cannot be put right by means of inspection
alone. Statistical quality control can and does provide the environment within which the product is
manufactured correctly the first time. A process called acceptance sampling improves the average
quality of the items accepted by rejecting those items which are of unacceptable quality. In the
1920s, mass production brought with it the production line and assembly line concepts. Henry Ford
revolutionised car production with the introduction of the mass production of the ‘Model T.’
Mass production resulted in greater output and lower prices, but the quality of manufactured output
became more variable and less reliable. There was a need to tackle the problem of the production of
goods and parts of a fixed quality and standard. The solution was seen to be in the establishment
of inspection routines, under the supervision of a Quality Inspector. The first inspection procedures
required the testing of the entire production - a costly, time consuming and inefficient form of sorting
out good and defective items.
Quality Control
The Second World War brought with it the need for defect-free products. Inspection departments now
came to control the production process, and this resulted in the conformance to set specifications (the
reduction of variability and the elimination of defects) being monitored and controlled throughout
production. Quality Control departments were separate from, and independent of, the manufacturing
departments.
Quality Assurance
In turn, Quality Control evolved into Quality Assurance. The function of Quality Assurance is to
focus on assuring process and product quality through operational audits, the supplying of training,
the carrying out of technical analysis, and the giving of advice on quality improvement. The role
of Quality Assurance is to consult with the departments (design and production for example) where
responsibility for quality actually rests.
Total Quality Management
Quality Assurance has given way to Company Wide Quality Management or Total Quality Manage-
ment. As the names imply, quality is no longer seen to be the responsibility of a single department,
but has become the responsibility and concern of each individual within an organisation. A small
executive group sets the policy which results in targets for the various sections within the company.


For example, line management sets 'do-able' objectives; engineers design attractive, reliable and functional products; operators produce defect-free products; and staff in contact with customers are prompt and attentive. Total Quality Management aims to measure, detect, reduce, eliminate and prevent quality lapses; it should include not only products but also services, and address issues like poor standards of service and late deliveries.
Statistical Quality Control
As a response to the impracticality of inspecting every production item, methods involving sampling
techniques were suggested. The behaviour of samples as an indicator of the behaviour of the entire
population has a strong statistical body of theory to support it.
A landmark in the development of statistical quality control came in 1924 as a result of the work of
Dr. Walter Shewhart during his employment at Bell Telephone Laboratories. He recognised that in
a manufacturing process there will always be variation in the resulting products. He also recognized
that this variation can be understood, monitored, and controlled by statistical procedures. Shewhart
developed a simple graphical technique - the control chart - for determining if product variation is
within acceptable limits. In this case the production process is said to be ‘in control’ and control
charts can indicate when to leave things alone or when to adjust or change a production process.
In the latter cases the production process is said to be ‘out of control.’ Control charts can be used
(importantly) at different points within the production process.
Other leading pioneers in the field of quality control were Dr. H. Dodge and Dr. H. Romig. Shewhart,
Dodge and Romig were responsible for much of the development of quality control based on sampling
techniques. These techniques formed what has become known as ‘acceptance sampling.’
Although the techniques of acceptance sampling and control charts were not widely adopted initially
outside Bell, during the 1920s many companies had established inspection departments as a means
of ensuring quality.
During the Second World War, the need for mass produced products came to the fore and in the
United States in particular, statistical sampling and inspection methods became widespread. After the
Second World War, the Japanese in particular became very successful in the application of statistical
techniques to quality control and in the United States, Dr. W.E. Deming and Dr. J.M. Juran spread the use of such techniques further into U.S. industry.
Statistical Process Control
The aim of statistical process control is to ensure that a given manufacturing process is as stable (in control) as possible and that the process operates around stated values for the product with as little variability as possible. In short, the aim is the reduction of variability to ensure that each product is of as high a quality as possible. Statistical process control is usually thought of as a toolbox whose contents may be applied to solve production-related quality control problems. Essentially the toolbox contains the major tools called:

• the Histogram;
• the Pareto chart;
• cause-and-effect diagrams;
• defect-concentration diagrams;
• scatter diagrams;
• check sheets;
• control charts;
• experimental design methods;
• sampling inspection.
Note that some authors argue that experimental design methods should not be regarded as a statistical
process control tool since experimental design is an active process involving the deliberate change of

variables to determine their effect, while the other methods are based purely on the observation of
an existing process.
Control charts are a very powerful tool in the field of statistical process control. Before looking in
some detail at control charts in general we will look at the relationship between specification limits
and tolerance limits since this can and does influence the number of defective items produced by a
manufacturing process. It is this quantity that control charts seek to minimize.
Tolerance Limits and Specifications
An example of a specification for the manufacture of a short cylindrical spacer might be:
diameter: 1 ± 0.003 cm; length: 2 ± 0.002 cm.
Note that the ± differences need not be the same for each dimension (here they are ±0.003 in the case of the diameter and ±0.002 in the case of the length). These limits are called the specification limits. During manufacture, some variation in dimensions will occur by chance. These variations can be measured by using the standard deviation of the distribution followed by the dimensions produced by the manufacturing process. Diagram 1 of Figure 4 below shows two normal distributions, each with so-called natural tolerance limits of 3σ either side of the mean.
Taking these limits implies that virtually all of the manufactured articles will fall within the natural tolerance limits. Notice that in the top part of the diagram the specification limits are rather wider than the natural tolerance limits and that little if any wastage will be produced. One could argue that in this case the manufacturing process is actually better than it needs to be and that this may carry unnecessary costs. In the lower part of the diagram the specification limits are narrower than the natural tolerance limits and wastage will occur. In general, a production process should aim to equate the two types of limits to minimize both costs and wastage.
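The wastage produced by a given placement of the specification limits can be quantified. Assuming the process dimension is normally distributed, the fraction of output falling outside specification limits set at ±kσ from the process mean is 2(1 − Φ(k)), where Φ is the standard normal cdf. A minimal Python sketch (our own illustration):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fraction_outside(k):
    """Fraction of output outside specification limits at mean +/- k sigma."""
    return 2.0 * (1.0 - normal_cdf(k))

for k in (2.0, 3.0, 4.0):
    print(f"limits at +/- {k} sigma: {fraction_outside(k):.4%} of output wasted")
# Specification limits coinciding with the 3-sigma natural tolerance limits
# waste about 0.27% of output; wider limits waste progressively less.
```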
Control Charts
Any production process is subject to variability. Essentially, this variability falls into two categories
called chance causes and assignable causes. A production process exhibiting only chance causes
is usually described as being ‘in control’ or ‘in statistical control’ whereas a production process
exhibiting assignable causes is described as being ‘out of control.’ The basic aim of control charts
is the improvement of production processes via the detection of variability due to assignable causes.
It is then up to people such as process operators or managers to find the underlying cause of the
variation and take action to correct the process. Figure 4 below illustrates the point.


[Figure 4. Diagram 1 - Specification and Tolerance Limits: two normal distributions centred on the ideal value, each with natural tolerance limits at 3σ either side of the mean; in the first the specification limits lie outside the tolerance limits, while in the second they lie inside them, so that wastage occurs in both tails. Diagram 2 - Correcting Assignable Variation: a statistical quality control loop around the production line: detect assignable problem → identify basic cause → apply corrective action → verify action and follow up.]

Figure 4
A control chart basically consists of a display of ‘quality characteristics’ which are found from samples
taken from production at regular intervals. Typically the mean, together with a measure of the
variability (the range or the standard deviation) of measurements taken from the items being produced
may be the variables under consideration. The appearance of a typical control chart is shown below
in Figure 5.
The centre line on Figure 5 represents the ideal value of the characteristic being measured. Ideally,
the points on the figure should hover around this value in a random manner indicating that only
chance variation is present in the production process. The upper and lower limits are chosen in such
a way that so long as the process is in statistical control, virtually all of the points plotted will fall
between these two lines. Note that the points are joined up so that it is easier to detect any trends
present in the data. At this stage, a note of caution is necessary. The fact that a particular control

chart shows all of its points lying between the upper and lower limits does not necessarily imply that
the production process is in control. We will look at this aspect of control charts in some depth later
in this booklet. For now simply note that even points lying within the upper and lower limits that
do not exhibit random behaviour can indicate that a process is either out of control or going out of
control.

[Figure 5: A typical control chart - the sample characteristic plotted against sample number, with a centre line at the ideal value and horizontal upper and lower limit lines.]


It is also worth noting at this stage that there is a close connection between control charts and
hypothesis testing. If we formulate the hypotheses:
H0 : the process is in control
H1 : the process is not in control
then a point lying within the upper and lower limits is telling us that we do not have the evidence to reject the null hypothesis and a point lying outside the upper and lower limits is telling us to reject the null hypothesis. From previous comments made you will realize that these statements are not an absolute truth but that they are an indicative truth.
Control charts are very popular in industry. Before looking at particular types of control charts, the
following reasons are given for their popularity.

(a) They are simple to use. Production process operators do not need to be fluent in statistical
methods to be able to use control charts.
(b) They are effective. Control charts help to keep a production process in control. This
avoids wastage and hence unnecessary costs.
(c) They are diagnostic in the sense that as well as indicating when adjustments need to be
made to a process, they also indicate when adjustments do not need to be made.
(d) They are well-proven. Control charts have a long and successful history. They were
introduced in the 1920s and have proved themselves over the years.


(e) They provide information about the stability of a production process over time. This
information is of interest to production engineers for example. A very stable process may
enable fewer measurements to be taken - this has cost implications of course.

Control chart construction

In order to understand how control charts are constructed, consider the following scenario.

Part of a production line used by a manufacturer of instant coffee involves the use of a machine which
fills empty packets with coffee. The filler is set to fill each empty packet with 200 grams of coffee. As
this is an electro-mechanical process, repeated over a long period of time, it is inevitable that there
will be some degree of variation in the amounts of coffee dispensed. The question that production
engineers will need to answer is “Is the variation sufficient to result in a significant ‘underfilling’, or
‘overfilling’, of the packets?”

The following data (which represent the weights of a sample of 50 packets of 200 grams of coffee)
are obtained from the production line:

200.0 197.8 200.1 200.6 198.7


198.7 203.4 201.3 199.8 200.6
202.2 196.4 199.2 199.9 200.3
198.2 201.4 202.8 201.3 203.6
201.9 199.7 198.5 199.8 202.4
201.3 201.2 198.5 196.4 199.2
202.4 202.4 197.9 201.3 200.8
197.8 202.0 200.7 201.1 199.7
201.3 197.6 200.9 201.6 196.2
199.7 203.1 203.1 200.9 199.2

In this example, the data should be read from top left to bottom right, down the columns, that is
from 200.0, 198.7, 202.2 and so on to 199.2.

Using Microsoft Excel, the data give the following numerical summaries:

x̄ = 200.298 ≈ 200.30 grams,

s = 1.842 ≈ 1.84 grams

Note that the summaries refer to a sample but, as the sample size is large (n = 50 ), these values
may be taken as estimates for the population parameters and we shall take their values as:

µ = 200.30 grams,

σ = 1.84 grams

A plot of the data from Excel is shown in Figure 6 below; once the centre line and control limits described below are added, such a plot is known as an x̄-chart.

[Figure 6: Data plot chart - the 50 packet weights (grams) plotted against coffee packet number, varying between roughly 195 and 205 around the 200 gram target.]
To the plotted series of data are added:

(a) a horizontal line representing the process mean of 200.30 grams


(b) two further horizontal lines representing the upper and lower control limits. The values
of these limits are given by x̄ ± 3σ and are calculated as 205.8 and 194.8

These lines are called the Upper Control Limit (UCL) and the Lower Control Limit (LCL).
This process results in the x̄-chart shown in Figure 7 below.

[Figure 7: Control chart with centre line and limits - the data plot of Figure 6 with the mean value (200.30), the Upper Control Limit (205.8) and the Lower Control Limit (194.8) drawn as horizontal lines.]


In a production process, samples are taken at regular intervals, say every 10 mins., 15 mins., 30 mins., and so on, so that at least 25 samples (say) are taken. At this stage, we will be able to detect trends - looked at later in this Workbook - in the data. This can be very important to production engineers who are expected to take corrective action to prevent a production process going out of control. The size of the sample can vary, but in general n ≥ 4. Do not confuse the number of samples taken with the sample size n.

The centre line, denoted by $\bar{\bar{x}}$, is given by the mean of the sample means

$$\bar{\bar{x}} = \frac{\bar{x}_1 + \bar{x}_2 + \ldots + \bar{x}_k}{k}$$

where k is the number of samples. $\bar{\bar{x}}$ is used as an estimator for the population mean µ.

Given that the sample size is n, the upper and lower control limits are calculated as

$$UCL = \bar{\bar{x}} + \frac{3s}{\sqrt{n}} \qquad LCL = \bar{\bar{x}} - \frac{3s}{\sqrt{n}}$$

These limits are also known as action limits; if any sample mean falls outside these limits, action has to be taken. This may involve stopping the production line and checking and/or re-setting the production parameters.
In the initial example, a single sample of size n = 50 was used. In reality, with samples being taken
at short time intervals, e.g. every 10 min., there is not sufficient time to weigh this many packages.
Hence, it is common to use a much smaller sample size, e.g. n = 4 or 5.
Under these circumstances, it is not sensible (very small sample size may give inaccurate results) to
use s as an estimate for σ the standard deviation of the population. In a real manufacturing process,
the value of σ is almost always unknown and must be estimated.
Estimating the population standard deviation
It is worth noting that there is, in practice, a distinction between the initial setting up of a control chart and its routine use after that. Initially, we need a sufficiently large sample to facilitate the accurate estimation of the population standard deviation σ. Three possible methods used to estimate σ are:

(a) Calculate the standard deviations of several samples, and use the mean of these values as an estimate for σ.

(b) Take a single, large size sample from the process when it is believed to be in control. Use the standard deviation s of this sample as an estimate for σ. Note that this was the method adopted in the initial example.

(c) In industry, a widely used method for obtaining an unbiased estimate for σ uses the calculation of the average range R̄ of the sample results. The estimate for σ is then given by the formula:

$$\hat{\sigma} = \frac{\bar{R}}{d_2}$$

where $d_2$ is a constant that depends on the sample size. For example, the Table given at the end of this Workbook shows that for n = 5, $d_2$ = 2.326.

The control limits can now be expressed in the form:

$$\bar{\bar{x}} \pm \frac{3\bar{R}}{d_2\sqrt{n}}$$

and, by a further simplification, as:

$$\bar{\bar{x}} \pm A_2\bar{R}$$

Values for the constant $A_2$ are given at the end of this Workbook (Table 1).
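The whole calculation, from raw samples to action and warning limits, is easily scripted. A Python sketch (the function is our own; the constants $A_2$ and $d_2$ must be looked up in the table for the chosen sample size):

```python
import math

def xbar_chart_limits(samples, a2, d2):
    """Centre line and limits for an x-bar chart from equal-size samples:
    action limits at grand_mean +/- A2*Rbar, and 2-sigma warning limits
    at grand_mean +/- 2*Rbar/(d2*sqrt(n))."""
    n = len(samples[0])
    means = [sum(s) / n for s in samples]
    ranges = [max(s) - min(s) for s in samples]
    grand_mean = sum(means) / len(means)
    rbar = sum(ranges) / len(ranges)
    action = (grand_mean - a2 * rbar, grand_mean + a2 * rbar)
    half_width = 2.0 * rbar / (d2 * math.sqrt(n))
    warning = (grand_mean - half_width, grand_mean + half_width)
    return grand_mean, rbar, action, warning

# For n = 4 the tabulated constants are A2 = 0.729 and d2 = 2.059. Applied to
# the 30 coffee samples of Example 5 below, this gives grand mean 200.01,
# Rbar = 5.56, action limits (195.96, 204.06) and warning limits (197.31, 202.71).
```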

Example 5
The following table shows the data obtained from 30 samples, each of size n = 4, taken from the 'instant coffee' production line considered above. The table has been extended to show the mean x̄ and the range R of each sample. Find the grand mean $\bar{\bar{x}}$, the upper and lower control limits and plot the control chart. You are advised that all of the necessary calculations have been done in Excel and that you should replicate them for practice. You can, of course, use a dedicated statistics package such as MINITAB if you wish.

Sample Weight Weight Weight Weight Mean Range Sample Weight Weight Weight Weight Mean Range
1 202.3 199.8 201.4 200.3 200.95 2.5 16 195.5 202.0 199.2 202.3 199.75 6.8
2 198.6 201.3 199.7 201.8 200.35 3.2 17 199.0 200.7 200.1 199.3 199.78 1.7
3 197.9 199.8 200.1 189.8 196.90 10.3 18 197.6 188.9 201.1 200.3 196.98 12.2
4 202.4 205.1 199.6 189.2 199.08 15.9 19 200.6 199.4 201.8 202.0 200.95 2.6
5 194.7 201.2 197.5 201.1 198.63 6.5 20 199.4 201.0 200.8 197.6 199.70 3.4
6 200.8 199.8 200.3 196.2 199.28 4.6 21 197.8 205.1 203.6 204.3 202.70 7.3
7 198.5 201.4 200.6 199.2 199.93 2.9 22 201.3 196.8 197.6 199.6 198.83 4.5
8 202.3 204.1 199.2 201.3 201.73 4.9 23 200.6 199.4 200.4 201.8 200.55 2.4
9 205.4 198.3 194.9 198.5 199.28 10.5 24 198.9 204.2 202.0 201.1 201.55 5.3
10 199.0 202.2 197.1 202.8 200.28 5.7 25 203.0 201.6 201.4 200.8 201.70 2.2
11 189.7 200.1 202.6 201.9 198.58 12.9 26 200.1 194.8 201.7 198.3 198.73 6.9
12 203.6 197.5 204.5 196.4 200.50 8.1 27 198.2 201.4 197.6 201.1 199.58 3.8
13 198.6 197.8 199.7 200.4 199.13 2.6 28 200.6 202.8 201.0 202.8 201.80 2.2
14 202.6 199.2 199.0 199.2 200.00 3.6 29 200.3 201.3 201.6 201.3 201.13 1.3
15 203.3 203.1 200.8 201.7 202.23 2.5 30 195.9 203.3 196.3 203.4 199.73 7.5

Solution
Using Excel, the following results are obtained:
x̄¯ = 200.01 and R̄ = 5.56
Using these values, the control limits are defined as:
UCL = x̄¯ + A2 R̄ = 200.01 + 0.729 × 5.56 = 204.06
and
LCL = x̄¯ − A2 R̄ = 200.01 − 0.729 × 5.56 = 195.96


Solution (contd.)

[Figure 8: the x̄-chart for the 30 samples - sample means (grams) plotted against sample number with centre line $\bar{\bar{x}}$ = 200.01, UCL = 204.06 and LCL = 195.96.]

Figure 8

Note that if the process is in control, the probability that a sample mean will fall outside the control
limits is very small. From the central limit theorem we know that the sample means follow (or at least
approximately follow) a normal distribution and from tables it is easy to show that the probability of
a sample mean falling outside the region defined by the 3σ control limits is about 0.3%. Hence, if a
sample mean falls outside the control limits, there is an indication that there could be problems with
the production process. As noted before the process is then said to be out of control and process
engineers will have to locate the cause of the unusual value of the sample mean.
Warning limits
In order to help in the monitoring of the process, it is common to find additional warning limits which are included on the control chart. The settings for these warning limits can vary but a common practice involves settings based upon two standard deviations. We may define warning limits as:

$$\bar{\bar{x}} \pm \frac{2\hat{\sigma}}{\sqrt{n}}$$

Using the constant $d_2$ defined earlier, these limits may be written as

$$\bar{\bar{x}} \pm \frac{2\bar{R}}{d_2\sqrt{n}}$$

Example 6
Using the data from Example 5, find the 2σ warning limits and revise the control
chart accordingly.

Solution
We know that $\bar{\bar{x}}$ = 200.01, R̄ = 5.56, and that the sample size n is 4. Hence:

$$\bar{\bar{x}} \pm \frac{2\bar{R}}{d_2\sqrt{n}} = 200.01 \pm \frac{2 \times 5.56}{2.059 \times \sqrt{4}} = 200.01 \pm 2.70$$

The warning limits clearly occur at 202.71 and 197.31. The control chart now appears as shown below. Note that the warning limits in this case have been shown as dotted lines.

[Figure 9: the x̄-chart of Figure 8 with the warning limits at 202.71 and 197.31 added as dotted lines between the centre line and the control limits.]

Figure 9

Detecting trends
Control charts can also be used to detect trends in the production process. Even when all of the
sample means fall within the control limits, the existence of a trend can show the presence of one,
or more, assignable causes of variation which can then be identified and brought under control. For
example, due to wear in a machine, the process mean may have shifted slightly.
Trends are detected by considering runs of points above, or below, the centre line. A run is defined
as a sequence of consecutive points, all of which are on the same side of the centre line.
Using a plus (+) sign to indicate a point above, and a minus (−) sign to indicate a point below, the centre line, the 30 sample means used in the worked examples above would show the following sequence of runs:
++−−−−−+−+−+−−+−−−+−+−+++−−++−


Considerable attention has been paid to the study of runs and theory has been developed. Simple
rules and guidelines, based on the longest run in a chart have been formulated.

Listed below are eight tests that can be carried out on the chart. Each test detects a specific pattern
in the data plotted on the chart. Remember that a process can only be considered to be in control if
the occurrence of points above and below the chart mean is random and within the control limits. The
occurrence of a pattern indicates a special cause for the variation, one that needs to be investigated.

The eight tests

• One point more than 3 sigmas from the centre line.


• Two out of three points in a row more than two sigmas from the centre line (same side)
• Four out of five points in a row more than 1 sigma from centre line (same side)
• Eight points in a row on the same side of the centre line.
• Eight points in a row more than 1 sigma either side of the centre line.
• Six points in a row, all increasing or all decreasing.
• Fourteen points in a row, alternating up and down.
• Fifteen points in a row within 1 sigma of centre line (either side)

The first four of these tests are sometimes referred as the Western Electric Rules. They were first
published in the Western Electric Handbook issued to employees in 1956.
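Run rules like these are straightforward to automate. A sketch of two of the eight tests in Python (the function names are our own; for simplicity, a point exactly on the centre line is counted as below it):

```python
def points_beyond_3_sigma(means, centre, sigma):
    """Test 1: indices of points more than 3 sigmas from the centre line."""
    return [i for i, m in enumerate(means) if abs(m - centre) > 3.0 * sigma]

def eight_in_a_row_same_side(means, centre):
    """Test 4: indices at which a run of 8 points on one side is completed."""
    hits, run, side = [], 0, 0
    for i, m in enumerate(means):
        s = 1 if m > centre else -1   # side of the centre line
        run = run + 1 if s == side else 1
        side = s
        if run >= 8:
            hits.append(i)
    return hits
```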

The R-Chart

The control chart based upon the sample means is only one of a number of charts used in quality control. In particular, to measure the variability in a production process, the R-chart is used. In this chart, the range of a sample is considered to be a random variable. The range has its own mean and standard deviation. The average range R̄ provides an estimate of this mean, while an estimate of the standard deviation is given by the formula:

$$\hat{\sigma}_R = d_3 \times \frac{\bar{R}}{d_2}$$

where $d_2$ and $d_3$ are constants that depend on sample size. The control limits for the R-chart are defined as:

$$\bar{R} \pm 3\hat{\sigma}_R = \bar{R} \pm 3d_3 \times \frac{\bar{R}}{d_2}$$

By the use of further constants, these control limits can be simplified to:

$$UCL = \bar{R} \times D_4 \qquad \text{and} \qquad LCL = \bar{R} \times D_3$$

Values for $D_3$ and $D_4$ are given in the table at the end of this Workbook. For values of n ≤ 6 the value of $D_3$ is zero.
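A corresponding sketch for the R-chart limit calculation ($D_3$ and $D_4$ again come from the table for the chosen sample size):

```python
def r_chart_limits(ranges, d3, d4):
    """R-chart centre line and control limits: LCL = D3*Rbar, UCL = D4*Rbar."""
    rbar = sum(ranges) / len(ranges)
    return d3 * rbar, rbar, d4 * rbar

# With n = 4 (so D3 = 0 and D4 = 2.282) and the 30 sample ranges of Example 5
# (Rbar = 5.56), this gives LCL = 0 and UCL = 12.69, as found below.
```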

Note that if the R-chart indicates that the process is not in control, then there is no point in trying
to interpret the x̄-chart.

Returning to the data we have used so far, the control limits for the R-chart will be given by:

UCL = R̄ × D4 = 5.56 × 2.282 ≈ 12.69
LCL = R̄ × D3 = 0

since the sample size is only 4. Effectively, the LCL is represented by the horizontal axis in the
following chart.

[Chart: sample range (gm) against sample number (1-30); vertical axis from 0.00 to 16.00.
Horizontal lines mark UCL = 12.69 and R̄ = 5.56; the LCL of zero coincides with the
horizontal axis.]

Figure 10: Control Chart


Note that although the x̄-chart indicated that the process was in control, the R-chart shows that
in two samples the ranges exceed the UCL. Hence, no attempt should be made to interpret the
x̄-chart. Since the limits used in the x̄-chart are based upon the value of R̄, these limits have
little meaning unless the process variability is in control. It is often recommended that the R-chart
be constructed first and, only if it indicates that the process is in control, the x̄-chart be
constructed. However, you should note that in practice both charts are used simultaneously.
Further control charts
The x̄-charts and the R-charts considered above are only two of the many charts which are used in
quality control. If the sample is of size n > 10 then it may be better to use a chart based on the
standard deviation, the s-chart, for monitoring process variability. In practice, there is often little
to choose between the s-chart and the R-chart for samples of size n < 10, although it is worth
noting that the R-chart involves simpler calculations.
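
A minimal sketch of the s-chart calculation is given below, assuming hypothetical samples of size
5 and the standard s-chart constants B3 = 0 and B4 = 2.089 for n = 5; these constants come from
published SPC tables and are not tabulated in this Workbook.

import statistics

# Hypothetical samples of size 5; replace with real process data.
samples = [
    [200.1, 199.8, 200.4, 199.9, 200.2],
    [199.5, 200.6, 200.0, 199.7, 200.3],
    [200.8, 199.9, 200.1, 200.5, 199.6],
]

s_values = [statistics.stdev(s) for s in samples]   # per-sample s
s_bar = sum(s_values) / len(s_values)               # centre line of the s-chart

B3, B4 = 0.0, 2.089                  # standard s-chart constants for n = 5
ucl, lcl = B4 * s_bar, B3 * s_bar
print(f"s-bar = {s_bar:.3f}, UCL = {ucl:.3f}, LCL = {lcl:.3f}")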
Many control charts have a common appearance with a central line indicating a mean, a fixed setting,
or perhaps a percentage of defective items, when the process is in control. In addition, the charts
include control limits that indicate the bounds within which the process is deemed to be in control.
A notable exception to this is the CUSUM chart. This chart is not considered here, but see, for
example, Applied Statistics and Probability for Engineers, Montgomery and Runger, Wiley, 2002.


Pareto charts
A Pareto diagram or Pareto chart is a special type of graph combining a line graph and a bar
graph. It provides a visual method for identifying significant problems, which are arranged in a
bar graph by their relative importance. Associated with the work of Pareto is the work of Joseph
Juran, who put forward the view that solving the 'vital few' problems is more important than
solving the 'trivial many'. This is often stated as the '80/20 rule'; i.e. 20% of the problems (the
vital few) result in 80% of the failures to reach the set standards.
In general terms the Pareto chart is not a problem-solving tool, but it is a tool for analysis. Its
function is to determine which problems to solve, and in what order. It aims to identify and eliminate
major problems, leading to cost savings.
Pareto charts are a type of bar chart in which the horizontal axis represents categories of interest,
rather than a continuous scale. The categories are often types of defects. By ordering the bars from
largest to smallest, a Pareto chart can help to determine which of the defects comprise the vital few
and which are the trivial many. A cumulative percentage line helps judge the added contribution of
each category. Pareto charts can help to focus improvement efforts on areas where the largest gains
can be made.
An example of a Pareto chart is given below for 170 faults found in the process considered above
where coffee is dispensed by an automatic machine.

[Pareto chart of cause: the five defect categories on the horizontal axis, ordered from largest to
smallest count; bars show the Number of Defects (left axis, 0 to 180) and a line shows the
cumulative Percent (right axis, 0 to 100). The category labels are not fully recoverable from the
source.]

Count      86     38     23     14     9
Percent    50.6   22.4   13.5   8.2    5.3
Cum %      50.6   72.9   86.5   94.7   100.0

Figure 11: A Pareto chart
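
The arithmetic behind Figure 11 is easily reproduced. In the Python sketch below, only the counts
come from the figure; the category names are hypothetical placeholders.

# Sketch: build the Pareto table of counts, percentages and cumulative
# percentages. Category names are illustrative placeholders.

counts = {
    "Cause A": 86,
    "Cause B": 38,
    "Cause C": 23,
    "Cause D": 14,
    "Cause E": 9,
}

total = sum(counts.values())    # 170 faults in all
ordered = sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

cum = 0.0
for cause, count in ordered:
    pct = 100 * count / total
    cum += pct
    print(f"{cause:10s} {count:4d} {pct:6.1f}% {cum:6.1f}%")
# Output matches the Percent and Cum % rows of Figure 11.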

Task
The following table shows the data obtained from 30 samples, each of size n = 4,
taken from a production line whose output is nominally 200 gram packets of tacks.
Extend the table to show the mean x̄ and the range R of each sample. Find the
grand mean x̄¯ and the average range R̄, find the upper and lower control limits
and plot the x̄- and R-control charts. Discuss briefly whether you agree that the
production process is in control, giving reasons for your answer. You are advised
to do all of the calculations in Excel or to use a suitable statistics package.

Sample Weight Weight Weight Weight Sample Weight Weight Weight Weight
1 201.3 199.8 201.4 200.3 16 196.5 202.0 199.2 202.3
2 198.6 201.3 199.7 201.8 17 199.0 200.7 200.1 199.3
3 198.9 199.8 200.1 199.8 18 196.6 198.9 201.1 200.3
4 202.4 203.1 199.6 199.2 19 200.6 199.4 201.8 201.0
5 194.7 201.2 197.5 201.1 20 199.4 201.0 200.8 197.6
6 200.8 199.8 200.3 196.2 21 199.8 203.1 201.6 204.3
7 199.5 201.4 200.6 199.2 22 201.3 196.8 197.6 199.6
8 202.3 203.1 199.2 201.3 23 200.6 199.4 200.4 200.8
9 205.4 198.3 197.9 198.5 24 197.9 202.2 201.0 200.1
10 199.0 202.2 197.1 202.8 25 203.0 201.6 201.4 200.8
11 189.7 200.1 202.6 201.9 26 200.1 199.8 201.7 199.3
12 201.6 197.5 204.5 196.4 27 198.2 201.4 198.6 201.1
13 198.6 198.8 199.7 200.4 28 200.6 202.8 210.0 202.8
14 202.6 199.2 199.0 199.2 29 200.3 201.3 201.6 201.3
15 203.3 203.1 200.8 201.7 30 197.9 201.3 198.3 201.4

Your solution


Answer
The solution is given in two parts: first the x̄-control chart and secondly the R-control chart.
Part (i)
Using Excel, the following results are obtained: x̄¯ = 200.24 and R̄ = 3.88
Using these values, the control limits are defined as:
UCL = x̄¯ + A2 R̄ = 200.24 + 0.729 × 3.88 = 203.07
and
LCL = x̄¯ − A2 R̄ = 200.24 − 0.729 × 3.88 = 197.41
The x̄-chart is shown below.

[Chart: sample mean weight (gm) against sample number (1-30); vertical axis from 195.00 to
205.00. Horizontal lines mark UCL = 203.07, x̄¯ = 200.24 and LCL = 197.41.]

Answer
Part (ii)
The control limits for the R-chart are given by:
UCL = R̄ × D4 = 3.88 × 2.282 = 8.85
LCL = R̄ × D3 = 0
since the sample size is 4, so that the LCL is represented by the horizontal axis in the following
R-chart.
[Chart: sample range (gm) against sample number (1-30); vertical axis from 0.00 to 14.00.
Horizontal lines mark UCL = 8.85 and R̄ = 3.88; the LCL of zero coincides with the
horizontal axis.]

Since the range of at least one sample exceeds the UCL, we cannot say that the process is in
control, and we conclude that the source of the variation should be investigated and corrected.
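
As a check on these calculations, the Python sketch below computes the grand mean, average range
and control limits for grouped data. Only the first three task samples are typed in here; extend
the list to all 30 rows to reproduce the values above.

# Sketch: x-bar and R control limits from grouped samples of size 4.

samples = [
    [201.3, 199.8, 201.4, 200.3],   # task samples 1-3; extend to all 30
    [198.6, 201.3, 199.7, 201.8],
    [198.9, 199.8, 200.1, 199.8],
]

means = [sum(s) / len(s) for s in samples]     # per-sample mean
ranges = [max(s) - min(s) for s in samples]    # per-sample range

grand_mean = sum(means) / len(means)           # x double-bar
r_bar = sum(ranges) / len(ranges)              # average range

A2, D3, D4 = 0.729, 0.0, 2.282                 # constants for n = 4 (Table 1)
print("x-bar chart:", grand_mean - A2 * r_bar, grand_mean + A2 * r_bar)
print("R chart:    ", D3 * r_bar, D4 * r_bar)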

Table 1: Some constants used in the construction of control limits

Sample size (n)    d2       A2       D3       D4
2                  1.128    1.880    0        3.267
3                  1.693    1.023    0        2.574
4                  2.059    0.729    0        2.282
5                  2.326    0.577    0        2.114
6                  2.543    0.483    0        2.004
7                  2.704    0.419    0.076    1.924
8                  2.847    0.373    0.136    1.864
9                  2.970    0.337    0.184    1.816
10                 3.078    0.308    0.223    1.777
15                 3.472    0.223    0.347    1.653
20                 3.735    0.180    0.415    1.585
25                 3.931    0.153    0.459    1.541
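
For use with a statistics package, Table 1 can be transcribed as a lookup table; the Python
sketch below is one minimal way to do this, with the values copied from Table 1.

# Sketch: Table 1 as a lookup, so control limits can be computed for any
# tabulated sample size.

CONSTANTS = {  # n: (d2, A2, D3, D4), transcribed from Table 1
    2:  (1.128, 1.880, 0.000, 3.267),
    3:  (1.693, 1.023, 0.000, 2.574),
    4:  (2.059, 0.729, 0.000, 2.282),
    5:  (2.326, 0.577, 0.000, 2.114),
    6:  (2.543, 0.483, 0.000, 2.004),
    7:  (2.704, 0.419, 0.076, 1.924),
    8:  (2.847, 0.373, 0.136, 1.864),
    9:  (2.970, 0.337, 0.184, 1.816),
    10: (3.078, 0.308, 0.223, 1.777),
    15: (3.472, 0.223, 0.347, 1.653),
    20: (3.735, 0.180, 0.415, 1.585),
    25: (3.931, 0.153, 0.459, 1.541),
}

d2, A2, D3, D4 = CONSTANTS[4]   # e.g. constants for samples of size 4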

Index for Workbook 46

Action limits 29
Conditional failure rate function: see Hazard function
Control chart construction 27
Control charts 24-38
Control limit constants table 38
Eight tests 33
Electronic component lifetime 6
Estimating standard deviation 29
Exercises 14
Exponential distribution 4, 10
Gamma function 11
Hazard function 3, 5, 10
Instant coffee production 27, 30, 32
Lifetime distributions 3
Light bulb lifetime 5
Lower control limit (LCL) 28
Parallel design of components 7-9
Pareto charts 35
Pattern detection tests 33
Probability of failure 3
Production line data 27, 30, 32, 35, 36
Pumping engine bearing lifetime 12-13
Quality assurance 22
Quality control 21-38
R-chart 33
Reliability 2-20
Reliability function 3
Runs 32
s-chart 34
Series design of components 7-9
Standard deviation estimation 29
Statistical process control 23
Statistical quality control 23
System reliability 7-9
Tolerance limits 24
Total quality management 22
Trend detection 32
Upper control limit (UCL) 28
Warning limits 31
Weibull distribution 10
 - cdf 10
 - mean 11
 - variance 11
x̄-chart 27, 32
