
Goodness of Measures

• Measuring the concept that we set out to measure.
• Scales can often be imperfect, and errors
are prone to occur in the measurement of
variables.
• A better instrument ensures more accurate results, which in turn enhances the scientific quality of the research.
• We need to assess the “goodness” of the
measures developed.
• How can we ensure that the measures developed are reasonably good?

• Item analysis (clarity & relevancy)
• Reliability
• Validity
Reliability
How consistently a measuring instrument
measures whatever concept it is measuring.

The degree to which measures are free from random error and therefore yield consistent results.
Validity
The ability of a scale to measure what was
intended to be measured.

Validity is a test of how well an instrument that is developed measures the particular concept it is intended to measure.
Forms of Reliability
1. Stability of measures
• The ability of a measure to remain the same
over time.
• Two forms:
I. Test-retest reliability: The reliability
coefficient obtained by repetition of the same
measure on a second occasion.
• Measure the concept with the same set of respondents again several weeks to six months later.
• The higher the correlation between the two administrations, the better the test–retest reliability and, consequently, the stability of the measure across time.
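As an illustration, the test–retest coefficient is simply the Pearson correlation between the two administrations. A minimal Python sketch, using hypothetical scores for six respondents:

```python
import numpy as np

# Hypothetical scores from the same six respondents on two occasions,
# several weeks apart (illustrative values only).
time1 = np.array([4.2, 3.8, 5.0, 2.9, 4.5, 3.1])
time2 = np.array([4.0, 3.9, 4.8, 3.1, 4.6, 3.0])

# The test-retest coefficient is the Pearson correlation between
# the two administrations of the same measure.
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: {r:.2f}")
```

The same computation yields the parallel-form coefficient discussed next, with the two comparable forms in place of the two occasions.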
II. Parallel-form reliability
• Two comparable sets of measures tapping the same construct.
• Both forms have similar items and the same response format.
• The only changes are the wording and the order or sequence of the questions.
• If the two comparable forms are highly correlated (say, .8 and above), we can be fairly certain that the measures are reasonably reliable, with minimal error variance caused by wording, ordering, or other factors.
2. Internal consistency of measures
• It is indicative of the homogeneity/similarity
of the items in the measure that tap the
construct.
• Examines whether the items and the subsets of items in the measuring instrument are highly correlated.
I. Inter-item consistency reliability
II. Split-half reliability tests
I. Inter-item consistency reliability
• A test of the consistency of respondents' answers to all the items in a measure.
• To the degree that items are independent measures of the same concept, they will be correlated with one another.
• The most popular test of inter-item consistency reliability is Cronbach's coefficient alpha (Cronbach, 1951).
• The higher the coefficient, the better the measuring instrument.
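A minimal sketch of how coefficient alpha is computed from a respondents × items score matrix (the Likert-scale data below are hypothetical):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 items.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # ~0.93 here
```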
II. Split-half reliability reflects the correlation between the two halves of an instrument.
• The estimates will vary depending on how the items in the measure are split into two halves.
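A sketch of one common split, odd items versus even items, followed by the Spearman–Brown correction that estimates full-length reliability from the half-test correlation (data again hypothetical):

```python
import numpy as np

# Same hypothetical 6 x 4 score matrix as in the alpha sketch.
scores = np.array([
    [4, 5, 4, 4], [3, 3, 2, 3], [5, 5, 5, 4],
    [2, 2, 3, 2], [4, 4, 4, 5], [3, 2, 3, 3],
])

odd_half = scores[:, 0::2].sum(axis=1)    # items 1 and 3
even_half = scores[:, 1::2].sum(axis=1)   # items 2 and 4
r_halves = np.corrcoef(odd_half, even_half)[0, 1]

# Spearman-Brown correction: estimated reliability of the full-length test.
r_full = 2 * r_halves / (1 + r_halves)
print(f"Half correlation: {r_halves:.2f}, corrected: {r_full:.2f}")
```

A different split of the same items would give a somewhat different estimate, which is exactly the caveat noted above.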
Forms of Validity
I. Content validity ensures that the measure
includes an adequate and representative set of
items that tap the concept.
• A panel of judges can attest to the content
validity of the instrument.
• Face validity is considered by some a basic
and minimum index of content validity.
• Face validity indicates that the items that are intended to measure a concept do, on the face of it, look like they measure the concept.
II. Criterion-related validity is established when the
measure differentiates individuals on a criterion it
is expected to predict.
a) Concurrent validity is established when the scale
discriminates individuals who are known to be
different; that is, they should score differently on
the instrument.
b) Predictive validity indicates the ability of the
measuring instrument to differentiate among
individuals with reference to a future criterion.
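As a hedged illustration of concurrent validity, the scores of two groups known to differ on the construct can be compared; the groups and scores below are hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical instrument scores for two groups known to differ
# on the construct (e.g., experienced vs. novice employees).
group_high = np.array([4.5, 4.2, 4.8, 4.0, 4.6])
group_low = np.array([2.8, 3.1, 2.5, 3.0, 2.7])

# A significant difference in scores supports concurrent validity.
t, p = stats.ttest_ind(group_high, group_low)
print(f"t = {t:.2f}, p = {p:.4f}")
```

For predictive validity, the analogous check is to correlate instrument scores with the future criterion once it is observed.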
III. Construct validity testifies to how well the results obtained from the use of the measure fit the theories around which the test is designed.
a) Convergent validity is established when the scores obtained with two different instruments measuring the same concept are highly correlated.
b) Discriminant validity is established when, based on theory, two variables are predicted to be uncorrelated, and the scores obtained by measuring them are indeed empirically found to be so.
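Both forms can be checked with simple correlations. In the hypothetical sketch below, A1 and A2 are two instruments measuring the same concept, while B measures a theoretically unrelated one:

```python
import numpy as np

# Hypothetical scores for six respondents: A1 and A2 measure the same
# concept with different instruments; B measures an unrelated concept.
A1 = np.array([4.1, 3.2, 4.8, 2.9, 3.9, 3.3])
A2 = np.array([4.0, 3.4, 4.7, 3.0, 4.1, 3.1])
B = np.array([4.1, 3.9, 3.6, 3.7, 2.5, 3.2])

print(f"Convergent  (A1 vs A2): {np.corrcoef(A1, A2)[0, 1]:.2f}")  # high
print(f"Discriminant (A1 vs B): {np.corrcoef(A1, B)[0, 1]:.2f}")   # near zero
```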
Common Method Biases
Remedies suggested by Podsakoff et al. (2003):
• Ensure participants' anonymity
• Use different formats for different sections of the instrument
• Use unique anchors for the various scales
• Avoid a neutral scale point
• Use multiple raters and aggregate their scores for organizations
• Measure the independent and dependent variables from two separate sources
