
1. Explain acceptance sampling and its advantages


✓ A form of Inspection applied to lots or batches of items before or after a process to judge conformance
to predetermined standards
✓ Acceptance sampling is a statistical procedure used to determine whether to accept or reject a
production lot of material
✓ It is a decision-making tool by which a conclusion is reached regarding the acceptability of a lot
✓ Acceptance sampling tables supply a set of accepted procedures with known properties and
verified results
✓ Sampling provides a rational means of verifying that a production lot conforms to the requirements of
its technical specifications
✓ Most customers understand that 100% inspection is impractical and are generally willing to accept that
a certain level of defectives will be produced. Two common customer acceptance levels are the AQL and the LTPD
o The Acceptance Quality Level (AQL) is the percentage level of defects at which a customer is
willing to accept the lot as “good”
o The Lot Tolerance Percent Defective (LTPD) is the upper limit on the percentage of defectives that
a customer is willing to accept
o Customers want lots with quality better than or equal to the AQL, are willing to live with some
lots with quality as poor as the LTPD, but prefer not to accept lots with quality levels worse
than the LTPD
o Therefore, the sampling plan must be designed to assure the customer of the required
performance at both the AQL and the LTPD
o The Consumer’s risk is the probability that an unacceptable lot (above the LTPD) will be accepted
o The producer’s risk is the probability that a “Good” lot will be rejected
✓ The result of acceptance sampling (assuming rejected lots are 100% inspected) is that the level of
inspection automatically adjusts to the quality of the lots being inspected
✓ The “Average Outgoing Quality (AOQ)” is the average quality of outgoing lots, covering both accepted
lots and rejected lots that are 100% inspected and rectified. It is given by

AOQ = (Pa × p × (N − n)) / N

Where; Pa = probability of accepting the lot, p = incoming fraction defective, N = lot size, n = sample size
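Under the common rectifying-inspection assumption, AOQ = Pa·p·(N − n)/N. A minimal sketch of the calculation, with purely illustrative numbers:

```python
def aoq(p_defective, prob_accept, lot_size, sample_size):
    """Average Outgoing Quality under rectifying inspection:
    rejected lots are 100% inspected and rectified, so only accepted
    lots pass their remaining defectives on to the customer."""
    return prob_accept * p_defective * (lot_size - sample_size) / lot_size

# Illustrative values: 2% incoming defectives, 90% chance of acceptance,
# lot of 1000 items, sample of 80
print(aoq(0.02, 0.90, 1000, 80))  # ≈ 0.0166, i.e. about 1.66% outgoing
```

Note that when the whole lot is sampled (n = N) the AOQ is zero, since every defective is caught.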

Advantages
• It is usually less expensive because there is less inspection
• There is less handling of the product, hence reduced damage
• It is applicable to destructive testing
• Fewer personnel are involved in inspection activities
• It often greatly reduces the amount of inspection error
• The rejection of entire lots, as opposed to the simple return of defectives, often provides a stronger
motivation to the vendor for quality improvements
2. Explain the development of quality and the Quality Trilogy
Quality
When the expression “quality” is used, we usually think in terms of an excellent product or service that
fulfils or exceeds our expectations. These expectations are based on the intended use and the selling price.
When a product surpasses our expectations, we consider that quality. Thus, it is somewhat of an intangible
based on perception. Quality can be quantified as follows:
Q=P/E
where;
Q = quality
P = performance
E = expectations
1. In 1924, W. A. Shewhart of Bell Telephone Laboratories developed a statistical chart for the control of
product variables
2. In 1950, W. Edwards Deming, gave a series of lectures on statistical methods to Japanese engineers
3. In 1954, Joseph M. Juran made his first trip to Japan and further emphasized management’s responsibility
to achieve quality
4. In 1960, the first quality control circles were formed. Simple statistical techniques were learned and
applied by Japanese workers
5. By the late 1970s and early 1980s, U.S. managers were making frequent trips to Japan to learn about the
Japanese miracle. The trips were not strictly necessary, since the same information was available in the
writings of Deming and Juran. Nevertheless, a quality renaissance began to occur in U.S. products and
services, and by the mid-1980s the concepts of TQM were being publicized
6. In the late 1980s the automotive industry began to emphasize statistical process control (SPC). Suppliers
and their suppliers were required to use these techniques. Other industries and the Department of Defense
also implemented SPC
7. Genichi Taguchi introduced his concepts of parameter and tolerance design and brought about a
resurgence of design of experiments (DOE) as a valuable quality improvement tool
8. In the 1990s, emphasis on quality continued in the auto industry when the Saturn automobile ranked
first in customer satisfaction (1996)
9. In addition, ISO 9000 became the worldwide model for a quality management system
10. ISO 14000 was approved as the worldwide model for environmental management systems

Quality Trilogy
The Quality Trilogy, also referred to as the Juran Trilogy, is one of the best-known approaches to quality
management, developed by Dr. Joseph Juran. It has three components;
• Planning
• Control
• Improvement
It is based loosely on financial processes such as budgeting (planning), expense measurement (control) and
cost reduction (improvement)
Planning
• The planning component begins with external customers. Once quality goals are established, marketing
determines the external customers, and all organizational personnel (managers, members of
multifunctional teams, or work groups) determine the internal customers.
• Once the customers are determined, their needs are discovered. This activity requires the customers to
state needs in their own words and from their own viewpoint; however, real needs may differ from stated
needs
• The next step in the planning process is to develop product and/or service features that respond to
customer needs, meet the needs of the organization and its suppliers, are competitive, and optimize the
costs of all stakeholders. This step typically is performed by a multifunctional team
• The fourth step is to develop the processes able to produce the product and/or service features

Control
Control is used by operating forces to help meet the product, process, and service requirements. It uses the
feedback loop and consists of the following steps:
1. Determine items/subjects to be controlled and their units of measure
2. Set goals for the controls and determine what sensors need to be put in place to measure the product,
process, or service
3. Measure actual performance
4. Compare actual performance to goals
5. Act on the difference
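The measure/compare/act steps of the feedback loop can be sketched in code; the measure and act callbacks below are hypothetical stand-ins for real sensors and corrective actions.

```python
def control_step(goal, measure, act):
    """One pass of the feedback loop: measure actual performance,
    compare it to the goal, and act on the difference."""
    actual = measure()           # 3. measure actual performance
    difference = actual - goal   # 4. compare actual performance to goal
    if difference != 0:
        act(difference)          # 5. act on the difference
    return difference

# Hypothetical example: cycle-time goal of 10 minutes, sensor reads 12
adjustments = []
diff = control_step(goal=10, measure=lambda: 12,
                    act=lambda d: adjustments.append(d))
print(diff, adjustments)  # 2 [2]
```

When actual performance equals the goal, no corrective action is triggered, which is what keeps the loop stable.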

Improvement
• The third part of the trilogy aims to attain levels of performance that are significantly higher than
current levels
• Process improvements begin with the establishment of an effective infrastructure such as the quality
council. Two of the duties of the council are to identify the improvement projects and establish the
project teams with a project owner
• In addition, the quality council needs to provide the teams with the resources to determine the causes,
create solutions, and establish controls to hold the gains
• The problem-solving method may be applied to improve the process, while the quality council is the
driver that ensures that improvement is continuous and never ending
• Process improvement can be incremental or breakthrough
3. Describe the difference between single, double and multiple sampling plans

Sampling Plans specify the lot size, sample size, number of samples and acceptance/rejection criteria

Single Sampling Plan


• A single sampling plan is defined by the sample size (n) and the acceptance number (c). Given N (lot
size) total items in the lot, choose n of the items at random; if more than c of them are defective,
reject the lot, otherwise accept it
• Acceptance/rejection of the lot is based on the results from a single sample, thus a single sampling plan
• A representative sample of n items is drawn from a lot of N items
• Each item in the sample is examined and classified as good/defective
• If the number of defectives exceeds the acceptance number (c = cut-off point), the whole lot is
rejected; otherwise, the whole lot is accepted
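Under the common convention that the lot is accepted when the sample contains at most c defectives, the decision rule is a one-liner:

```python
def single_sampling(defectives_in_sample, c):
    """Single sampling plan: accept the lot if the number of defectives
    found in the sample does not exceed the acceptance number c."""
    return "accept" if defectives_in_sample <= c else "reject"

print(single_sampling(1, c=2))  # accept
print(single_sampling(3, c=2))  # reject
```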

Double Sampling Plan


• A Double sampling plan allows the opportunity to take a second sample if the result of the original sample
is inconclusive
• Specifies the lot size, size of the initial sample, the accept/reject/inconclusive criteria for the initial sample
(CL = lower level of defectives, CU = Upper level of defectives)
• Specifies the size of the second sample and the acceptance/rejection criteria based on the total number
of defective observed in both the first and second sample (CT = Total allowable defective)
• It works as follows: the number of defectives found in the first random sample is compared to CL and
CU, and the appropriate decision (accept, reject, or take a second sample) is made
• If the first sample is inconclusive, the total number of defectives in both samples is compared to CT
and the lot is accepted or rejected accordingly
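A minimal sketch of the double sampling logic, assuming the CL/CU/CT convention above; the sample-drawing function is a hypothetical stand-in for inspecting a second sample.

```python
def double_sampling(d1, c_lower, c_upper, c_total, draw_second=None):
    """First sample: accept if d1 <= CL, reject if d1 > CU; otherwise
    draw a second sample and judge the combined total against CT."""
    if d1 <= c_lower:
        return "accept"
    if d1 > c_upper:
        return "reject"
    d2 = draw_second()  # inconclusive: inspect the second sample
    return "accept" if d1 + d2 <= c_total else "reject"

print(double_sampling(1, 1, 4, 5))                         # accept on first sample
print(double_sampling(6, 1, 4, 5))                         # reject on first sample
print(double_sampling(3, 1, 4, 5, draw_second=lambda: 4))  # reject on combined total
```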

Multiple Sampling Plan


• A multiple sampling plan is similar to a double sampling plan in that successive trials are made, each of
which has acceptance, rejection and inconclusive options
• It involves inspection of one or more successive samples, as required, to reach an ultimate decision
• It is very flexible as compared to other methods of sampling
• It leads to administrative efficiency by permitting the field work to be concentrated and yet covering a
large area
• Choosing the sampling plan depends on the cost and time, number of samples needed and number of
items in each sample
4. Discuss the ISO9000 and its importance.
• The International Organization for Standardization (ISO) is a worldwide federation of national
standards bodies from more than 160 countries, one from each member country
• ISO is a non-governmental organization established in 1947 and based in Geneva
• History - The organization which today is known as ISO began in 1926 as the International Federation
of the National Standardizing Associations (ISA). This organization focused heavily on mechanical
engineering. It was disbanded in 1942 during the second World War but was re-organized under the
current name, ISO, in 1946.
• Purpose - The International Organization for Standardization (ISO) is a global organization that works
to provide standardization across an array of products and companies. Its main goal is to facilitate
trade, but its focus is on process improvement, safety, and quality in several areas
• Standards
o ISO has published over 13,000 standards. The ISO 9000 series of standards, related to quality
management, is perhaps the most widely known and impactful of any standards issued by ISO
o The ISO 9000 definition is a description of a quality management system
o ISO 9000 is often used to refer to a family of three standards:
- ISO 9000:2005 – QMS - Fundamentals and vocabulary
- ISO 9001:2008 – QMS - Requirements
- ISO 9004:2000 – QMS - Guidelines for performance improvement
• ISO 9000 is a set of internationally recognized standards for quality assurance and management.
Published by the International Organization for Standardization, it aims to encourage the production
of goods and services that meet a globally-acceptable level of quality
• The ISO 9000 system is a quality management system that can be adopted by all types of organizations
belonging to government, public, private, (or) joint sectors
• The ISO 9000 system shows the way in creating products by preventing deficiencies, instead of
conducting expensive post product inspections and rework
• Origin - The ISO 9000 series was intended to harmonize the large number of standards concerned
with quality management that had emerged in several countries throughout the world, although its
origins are in the British Standard BS 5750, dating from 1979
• Elements - The ISO 9000 standards are based on three basic principles:
o Processes affecting the quality of products and services must be documented
o Records of important decisions and related data must be archived
o When these first two steps are complied with, the product or service will enjoy continuing
improvement
• Basic requirements
o Procedure to cover all processes in the business
o Monitoring process to ensure effectiveness
o Keeping adequate record
o Defect verification and appropriate correction
o Regular review of individual processes
o Facilitating continual improvement
• Importance
o ISO 9000 is a quality management standard that presents guidelines intended to increase
business efficiency and customer satisfaction.
o The goal of ISO 9000 is to embed a quality management system within an organization, increasing
productivity, reducing unnecessary costs, and ensuring quality of processes and products.
5. Explain the basic tools of quality control
• There are many proposed tools and techniques for achieving the TQM promises. Generally, a technique
can be considered as a number of activities performed in a certain order to reach a desired result.
• On the other hand, tools sometimes have statistical basis to support decision making or facilitate
analysis of data.
• The best tools for this purpose are
o Check sheet
o Pareto chart
o Histogram
o Scatter diagram
o X-Bar and R-chart
o Flow chart
o Control charts
o Cause and Effect Diagram
o Data Collection
o Statistical process control (SPC)
• The data collected by these tools can be used to measure the process
• The TQM tools and techniques can be divided into simple tools for solving a special problem and
complex one that cover all functions within the company.

Check Sheet
• A check sheet can be introduced as the most basic tool for quality. A check sheet is basically used for
gathering and organizing data
• This can be done with the help of software packages such as Microsoft Excel which you can derive
further analysis graphs and automate through macros available
• Types of Check sheets
i) Process distribution check sheets
ii) Defective item check sheets
iii) Defect location check sheet
iv) Defect factor check sheet

Pareto chart
• Pareto charts are used for identifying a set of priorities. You can chart any number of issues/variables
related to a specific concern and record the number of occurrences
• This way you can figure out the parameters that have the highest impact on the specific concern
• This helps you to work on the priority issues in order to get the condition under control
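As an illustration, a Pareto ranking can be built from a simple defect log; the defect names and counts below are hypothetical.

```python
from collections import Counter

# Hypothetical defect log from an inspection station
defects = ["scratch", "dent", "scratch", "misalign", "scratch",
           "dent", "scratch", "scratch", "dent", "misalign"]

counts = Counter(defects).most_common()   # sorted, most frequent first
total = sum(n for _, n in counts)
cumulative = 0
for cause, n in counts:
    cumulative += n
    print(f"{cause}: {n} ({100 * cumulative / total:.0f}% cumulative)")
```

Sorting causes by frequency and tracking the cumulative share is exactly what the Pareto chart visualizes: the few causes that account for most of the occurrences stand out immediately.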

Histogram
• A histogram illustrates the frequency distribution of a measured variable
• It is a column chart in which the height of each column represents how often values fall within an interval
• If the data are normally distributed, the graph takes the shape of a bell curve
• If not, it takes a different shape depending on the condition of the distribution
• A histogram always involves two axes: the measured variable and its frequency of occurrence
Scatter diagram
• When it comes to the values of two variables, scatter diagrams are the best way to present.
• Scatter diagrams present the relationship between two variables and illustrate the results on a
Cartesian plane. Then, further analysis, such as trend analysis can be performed on the values.
• In these diagrams, one variable denotes one axis and another variable denotes the other axis.

X-Bar and R-chart


• X-Bar and R charts are control charts used with processes that have subgroup sizes of 2 or more.
They are standardized charts for variables data and help determine whether a particular process is
predictable and stable
• These are used to monitor the effects of process improvement theories.
• X-Bar and R charts are often used in business applications such as manufacturing, to measure
equipment part sizes; in service industries, to evaluate customer-support call handle times; or in
healthcare, for uses such as measuring blood pressure over time.
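A sketch of the control-limit arithmetic for subgroup size 5, using the standard chart constants A2 = 0.577, D3 = 0 and D4 = 2.114; the measurements are hypothetical.

```python
# Hypothetical measurements: three subgroups of five parts each
subgroups = [[5.1, 5.0, 4.9, 5.2, 5.0],
             [5.0, 5.1, 5.0, 4.8, 5.1],
             [4.9, 5.0, 5.2, 5.0, 4.9]]
A2, D3, D4 = 0.577, 0.0, 2.114   # standard constants for n = 5

xbars = [sum(g) / len(g) for g in subgroups]    # subgroup means
ranges = [max(g) - min(g) for g in subgroups]   # subgroup ranges
xbarbar = sum(xbars) / len(xbars)               # grand mean = X-bar chart centre line
rbar = sum(ranges) / len(ranges)                # average range = R chart centre line

print(f"X-bar chart: CL={xbarbar:.3f} "
      f"UCL={xbarbar + A2 * rbar:.3f} LCL={xbarbar - A2 * rbar:.3f}")
print(f"R chart:     CL={rbar:.3f} UCL={D4 * rbar:.3f} LCL={D3 * rbar:.3f}")
```

Points falling outside these limits on either chart signal that the process is not stable and the cause should be investigated.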

Flow chart
• This is one of the basic quality tools that can be used for analyzing a sequence of events.
• The tool maps out a sequence of events that take place sequentially or in parallel
• The flow chart can be used to understand a complex process in order to find the relationships and
dependencies between events.
• Can get a brief idea about the critical path of the process and the events involved in the critical path.
• Flow charts can be used for any field and to illustrate events involving processes of any complexity
• There are specific software tools developed for drawing flow charts, such as MS Visio.

Control charts
• Control chart is the best tool for monitoring the performance of a process
• These types of charts can be used for monitoring any processes related to function of the
organization.
• These charts allow you to identify the following conditions related to the process that has been
monitored.
o Stability of the process
o Predictability of the process
o Identification of common cause of variation
o Special conditions where the monitoring party needs to react

Cause and Effect Diagram


• Cause and effect diagrams (Ishikawa Diagram) are used for understanding organizational or business
problem causes.
• Used to understand the causes of the problems in the organization in order to solve them effectively
• Building a cause and effect diagram is usually a team exercise.
• A brainstorming session is required in order to come up with an effective cause and effect diagram
• All the main components of a problem area are listed, and possible causes from each area are listed.
Then, the most likely causes of the problems are identified for further analysis.
Data Collection
• A Data Collection Plan is a well thought out approach to collecting both baseline data as well as data
that can provide clues to root cause
• The plan includes where to collect data, how to collect it, when to collect it and who will do the
collecting
• This plan is prepared for each measure and includes helpful details such as the operational definition of
the measure as well as any sampling plans
• Types of Data Collection – Data may be grouped into 4 types based on the methods for collection
o Observational
o Experimental
o Simulation
o Derived

Statistical process control (SPC) – Process Capability


• Process capability is defined as a statistical measure of the inherent process variability of a given
characteristic
• A process-capability study can be used to assess the ability of a process to meet specifications. Cp and
Cpk show how capable a process is of meeting its specification limits; they are used with continuous data.
• We can calculate Cp (Process Capability), Cpk (Process Capability Index), or Pp (Preliminary Process
Capability) and Ppk (Preliminary Process Capability Index), depending on the state of the process and
the method of determining the standard deviation or sigma value
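The Cp and Cpk calculations follow directly from their definitions; the process mean, sigma and specification limits below are hypothetical.

```python
def cp_cpk(mean, sigma, lsl, usl):
    """Cp compares spec width to process spread (6 sigma) and ignores
    centering; Cpk penalizes a mean shifted toward either limit."""
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical process: mean 10.1, sigma 0.1, spec limits 9.7 to 10.3
cp, cpk = cp_cpk(10.1, 0.1, 9.7, 10.3)
print(round(cp, 2), round(cpk, 2))  # 1.0 0.67
```

Here Cp = 1.0 suggests the spread just fits the specification, but Cpk = 0.67 reveals that the off-centre mean leaves the process much closer to the upper limit.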
6. Explain the relation between reliability, MTBF and failure rate.
Reliability
• According to The Advisory Group on the Reliability of Electronic Equipment (AGREE), the classic definition
of reliability is “The probability of a product performing without failure a specified function under given
conditions for a specified period of time."
• Reliability is also an important aspect when dealing with customer satisfaction. Whether the customer is
internal or external
• Customers want a product that will have a relatively long service life, with long times between failures.
However, as products become more complex in nature, traditional design methods are not adequate for
ensuring low rates of failure. This problem gave rise to the concept of designing reliability into the product
itself
• Reliability and life data analysis is performed to estimate measures such as Mean Time Between Failures
(MTBF) or B10 life, which is the time by which 10% of the parts or systems will have failed
• Reliability data are most often incomplete, containing a mixture of failures and survivals (censored
observations). Such data require a different statistical approach to draw conclusions about reliability.
• The most commonly used reliability prediction formula is the exponential distribution, which assumes a
constant failure rate and is given by

Reliability = e^(−failure rate × time)
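The exponential reliability formula translates directly into code; the failure rate and mission time below are illustrative.

```python
import math

def reliability(failure_rate, time):
    """Exponential model: probability of operating without failure
    up to `time`, assuming a constant failure rate."""
    return math.exp(-failure_rate * time)

# Illustrative: 0.0005 failures/hour over a 1000-hour mission
print(round(reliability(0.0005, 1000), 4))  # 0.6065
```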

MTBF
• MTBF stands for Mean Time Between Failures and represents the average time between two failures
for a repairable system
• MTBF is related to failure rate. It assumes a constant random failure rate during the useful life of a piece
of equipment
• History
o The MTBF calculation comes out of the reliability initiatives of the military and commercial aviation
industries
o It was introduced as a way to set specifications and standards for suppliers to improve the quality of
components for use in mission-critical equipment like missiles, rockets and aviation electronics
o Maintenance practitioners first used MTBF as a basis for setting up time-based maintenance
strategies
o Inspection intervals and routine maintenance tasks were set up based on MTBF
o These programs aimed to identify potential failures before they occurred, but time-based systems are
not the most effective strategy
• We calculate MTBF by dividing the total running time by the number of failures during a defined period.
As such, it is the inverse of the failure rate

MTBF = running time / no. of failures


Failure Rate
• The failure rate is the number of failures in a component or piece of equipment over a specified period
• It is important to note that the measurement excludes maintenance-related outages; these outages are
not deemed to be failures and therefore do not form part of the failure count
• In industrial applications, the failure rate represents past performance based on historical data, but in
engineering design the failure rate can also be predicted
• A product typically shows a high rate of infancy failures at the beginning of its life and a high rate of
wear-out failures at the end of its life. In between, during the product’s useful life, its rate of failure is
expected to be reasonably constant
• Manufacturers seek to reduce infancy failures by testing products and removing early failures before they
get to the customer
• The disadvantage of failure rate as an indicator is that it yields a very small number, which is difficult to interpret
• A failure rate does not correlate with online time or availability for operation – it only reflects the rate of
failure and is given by

Failure Rate = No. Of Failures / Time
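The three quantities are tied together as sketched below: the failure rate is failures divided by running time, MTBF is its reciprocal, and reliability follows from the exponential formula. The operating data are hypothetical.

```python
import math

running_time = 4000.0   # hours of operation (hypothetical)
failures = 4            # failures observed in that period

failure_rate = failures / running_time   # 0.001 failures per hour
mtbf = running_time / failures           # 1000 hours
assert math.isclose(mtbf, 1 / failure_rate)   # MTBF is the inverse of the rate

# Reliability over a 500-hour mission, assuming a constant failure rate
t = 500.0
print(round(math.exp(-t / mtbf), 4))  # 0.6065
```

So a system with a 1000-hour MTBF still has only about a 61% chance of surviving a 500-hour mission without failure, which is why MTBF alone can be a misleading indicator of reliability.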
