
RESEARCH METHODS SUMMARY – UMA SEKARAN

CHAPTER 9

- Compare and contrast different types of questionnaires.

Questionnaires are generally designed to collect large amounts of (quantitative) data. They can be
administered personally, mailed to respondents, or distributed electronically. When the survey is
limited to a local area, a good way to collect data is to administer the questionnaires personally. A mail
questionnaire is a self-administered (paper-and-pencil) questionnaire that is sent to respondents via the
mail. This method was long the backbone of business research, but with the arrival of the Internet,
mail questionnaires have become largely obsolete. Instead, online questionnaires are posted on
the Internet or sent via email.

- Design questionnaires to tap different variables.

Sound questionnaire design should focus on three areas. The first concerns the wording of the
questions; the second concerns planning how the variables will be categorized, scaled, and coded after
the responses are received; and the third pertains to the general appearance of the questionnaire. All
three are important because attention to them can minimize bias in the research.

- Discuss the issues related to cross-cultural research.

With the globalization of business, managers are often interested in similarities and differences in
behavioral and attitudinal responses of people (employees, consumers, investors) in different cultures.
Surveys should be tailored to the specific needs and features of different cultures. At least three issues
are important for cross-cultural data collection – response equivalence, timing of data collection, and
the status of the individual collecting the data.

- Discuss the advantages of multisources and multimethods of data collection.

Because almost all data collection methods carry some bias, collecting data through multiple
methods and from multiple sources lends rigor to research. If data obtained from several
sources bear a great degree of similarity, we will have greater confidence in the goodness of the data.

- Demonstrate awareness of the role of the manager in primary data collection.

Managers often engage consultants to do research and may not be collecting data themselves through
interviews, questionnaires, or observation. However, some basic knowledge of the characteristics and
the advantages and disadvantages of primary methods of data collection will help them to evaluate
alternative approaches to primary data collection and/or to understand why a consultant has opted for
a certain method or for a combination of methods.

- Demonstrate awareness of the role of ethics in primary data collection.

Several ethical issues should be addressed while collecting primary data. These pertain to those who
sponsor the research, those who collect the data, and those who provide it (the respondents).

CHAPTER 10

- Describe lab experiments and discuss the internal and external validity of this type of
experiment.

When control and manipulation are introduced to establish cause-and-effect relationships in an artificial
setting, we have lab experiments. The goal of the researcher is to keep every variable constant except
for the independent variable. Manipulation means that we create different levels of the independent
variable to assess the impact on the dependent variable. One way of controlling for contaminating
variables is to match the various groups in an experiment. Another way of controlling the contaminating
variables is to assign participants randomly to groups. In lab experiments internal validity can be said to
be high. On the other hand, the external validity of lab experiments is typically low.
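The random-assignment idea described above can be sketched in a few lines of Python. This is an illustrative sketch only (the function and variable names are not from the text): shuffling the participant pool before splitting it spreads contaminating variables evenly across the groups on average.

```python
import random

def assign_groups(participants, seed=None):
    """Randomly assign participants to a treatment and a control group.

    Random assignment spreads contaminating (nuisance) variables evenly
    across the groups on average, which is what gives lab experiments
    their high internal validity."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)  # seed only to make the sketch reproducible
    half = len(pool) // 2
    return pool[:half], pool[half:]    # (treatment group, control group)

treatment, control = assign_groups(range(20), seed=42)
```

Matching, the other control technique mentioned above, would instead pair participants on known nuisance variables before splitting the pairs between the groups.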

- Describe field experiments and discuss the internal and external validity of this type of
experiment.

A field experiment is an experiment done in the natural environment. In a field experiment it is not
possible to control all the nuisance variables. The treatment, however, can still be manipulated, and
control groups can also be set up. Cause-and-effect relationships found under these
conditions will have wider generalizability to other similar settings (the external validity is typically
high; the internal validity of field experiments is lower than that of lab experiments).

- Describe, discuss, and identify threats to internal and external validity and make a trade-off
between internal and external validity.

External validity refers to the extent of generalizability of the results of a causal study to other settings.
Internal validity refers to the degree of our confidence in the causal effects. Field experiments have
more external validity, but less internal validity. In lab experiments, the internal validity is high but the
external validity is low. There is thus a trade-off between internal and external validity. Even
the best designed lab studies may be influenced by factors affecting the internal validity. The seven
major threats to internal validity are the effects of history, maturation, (main) testing, selection,
mortality, statistical regression, and instrumentation. Two threats to external validity are (interactive)
testing and selection.

- Describe the different types of experimental designs.

A quasi-experimental design is the weakest of all designs, and it does not measure the true cause-and-
effect relationship. Pretest and posttest experimental group designs, posttests only with experimental
and control groups, and time series designs are examples of quasi-experimental designs. True
experimental designs include both treatment and control groups and record information both
before and after the experimental group is exposed to the treatment. Pretest and posttest
experimental and control group designs, Solomon four-group designs, and double-blind studies are
examples of true experimental designs. In an ex post facto experimental design there is no
manipulation of the independent variable in the lab or field setting; instead, subjects who have already
been exposed to a stimulus and those not so exposed are studied.
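The logic of a pretest and posttest experimental and control group design can be made concrete with a small calculation. A sketch (the data and function name are hypothetical): the treatment effect is the gain of the experimental group minus the gain of the control group, since the control group's gain absorbs threats such as history, maturation, and testing that affect both groups alike.

```python
def treatment_effect(exp_pre, exp_post, ctrl_pre, ctrl_post):
    """Estimate the treatment effect in a pretest and posttest
    experimental and control group design: the experimental group's
    gain minus the control group's gain."""
    mean = lambda scores: sum(scores) / len(scores)
    experimental_gain = mean(exp_post) - mean(exp_pre)
    control_gain = mean(ctrl_post) - mean(ctrl_pre)
    return experimental_gain - control_gain

# Hypothetical motivation scores before and after a bonus-system treatment:
# experimental group gains 5 points on average, control group gains 1,
# so the estimated treatment effect is 4.
effect = treatment_effect([10, 12], [15, 17], [11, 13], [12, 14])
```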

- Discuss when and why simulation might be a good alternative to lab and field experiments.

An alternative to lab and field experimentation, simulation uses a model-building technique to
determine the effects of changes.

- Discuss the role of the manager in experimental designs.

Knowledge of experimental designs may help the manager to (engage a consultant to) establish
cause-and-effect relationships. Through the analysis of cause-and-effect relationships, it is possible to
find answers or solutions to a problem. Experiments may help managers to examine whether bonus
systems lead to more motivation, whether piece rates lead to higher productivity, or whether price cuts
lead to more sales.

- Discuss the role of ethics in experimental designs.

Ethics in experiments refers to the correct rules of conduct necessary when carrying out experimental
research. Researchers have a duty to respect the rights and dignity of research participants. This means
that they should take certain rules of conduct into account.

CHAPTER 11

- Explain how variables are measured.

To test hypotheses, the researcher has to measure. Measurement is the assignment of numbers or other
symbols to characteristics (or attributes) of objects according to a prespecified set of rules. There are at
least two types of variables: one lends itself to objective and precise measurement; the other is more
nebulous and does not lend itself to accurate measurement because of its abstract and subjective
nature.

- Explain when operationalization of variables is necessary.

Despite the lack of physical measuring devices to measure the more nebulous variables, there are ways
of tapping these types of variables. One technique is to reduce these abstract notions to observable
behavior and/or characteristics. This is called operationalizing the concepts. A valid measurement scale
includes quantitatively measurable questions or items (or elements) that adequately represent the
domain or universe of the construct; if the construct has more than one domain or dimension, the
researcher has to make sure that questions that adequately represent these domains or dimensions are
included in the measure. An operationalization does not describe the correlates of the concept.

- Operationally define (or operationalize) abstract and subjective variables.

In conducting transnational research, it is important to remember that certain variables have different
meanings and connotations in different cultures.

CHAPTER 12

- Describe the characteristics and power of the four types of scales – nominal, ordinal, interval,
and ratio.

To be able to assign numbers to attributes of objects we need a scale. A scale is a tool or mechanism by
which individuals are distinguished as to how they differ from one another on the variables of interest to
our study. Scaling involves the creation of a continuum on which our objects are located. There are four
basic types of scales: nominal, ordinal, interval, and ratio. The degree of sophistication to which the
scales are fine-tuned increases progressively as we move from the nominal to the ratio scale.
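The increasing power of the four scales can be summarized in a small lookup table. A sketch in Python (the structure and key names are illustrative, not from the text): each level adds a property, and the permissible measure of central tendency grows with it.

```python
# Each scale level adds a property; the statistics it supports grow accordingly.
SCALE_PROPERTIES = {
    "nominal":  {"ordered": False, "equal_intervals": False, "true_zero": False,
                 "central_tendency": "mode"},
    "ordinal":  {"ordered": True,  "equal_intervals": False, "true_zero": False,
                 "central_tendency": "median"},
    "interval": {"ordered": True,  "equal_intervals": True,  "true_zero": False,
                 "central_tendency": "arithmetic mean"},
    "ratio":    {"ordered": True,  "equal_intervals": True,  "true_zero": True,
                 "central_tendency": "arithmetic mean"},
}
```

Reading down the table, the ratio scale possesses every property of the scales above it, which is why it is the most powerful of the four.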

- Describe and know how and when to use different forms of rating scales.

In rating scales each object is scaled independently of the other objects under study. The following
rating scales are often used in business research: dichotomous scale, category scale, semantic
differential scale, numerical scale, itemized rating scale, Likert scale, fixed or constant sum rating scale,
Stapel scale, graphic rating scale, and consensus scale. The Likert scale or some form of numerical scale
is the one most frequently used to measure attitudes and behaviors in business research.
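Since the Likert scale is the workhorse of attitude measurement, a brief sketch of how its responses are coded may help (the mapping and function name are illustrative): verbal anchors are converted to numbers, and negatively worded items are reverse-scored so that a higher score always indicates a more favourable attitude.

```python
# Conventional coding of a five-point Likert scale.
LIKERT_5 = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def score_item(response, reverse=False):
    """Code a verbal response on a five-point Likert scale.

    Negatively worded items are reverse-scored (6 - value) so that a
    higher score always means a more favourable attitude."""
    value = LIKERT_5[response.strip().lower()]
    return 6 - value if reverse else value
```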

- Describe and know how and when to use different forms of ranking scales.

Ranking scales are used to tap preferences between two objects or among several objects or items. The
paired comparison scale is used when, among a small number of objects, respondents are asked to
choose between two objects at a time. The forced choice enables respondents to rank objects relative to
one another, among the alternatives provided. The comparative scale provides a benchmark or a point of
reference to assess attitudes toward the current object, event, or situation under study.

- Discuss international dimensions of scaling.

Different cultures react differently to issues of scaling. What’s more, recent research has shown that
people from different countries differ in both their tendency to use the extremes of the rating scale (for
instance 1 and 5 or 1 and 7) and to respond in a socially desirable way. These findings illustrate that
analyzing and interpreting data that are collected in multiple countries is a challenging task.

- Describe validity and reliability, how they are established, and assess the reliability and
validity of a scale.

Reliability is a test of how consistently a measuring instrument measures whatever concept it is
measuring. Validity is a test of how well an instrument that is developed measures the particular
concept it is intended to measure. Several types of validity test are used to test the goodness of
measures. Content validity ensures that the measure includes an adequate and representative set of
items that tap the concept. Criterion-related validity is established when the measure differentiates
individuals on a criterion it is expected to predict. Construct validity testifies to how well the results
obtained from the use of the measure fit the theories around which the test is designed. Two tests of
stability are test–retest reliability and parallel-form reliability. The internal consistency of measures is
indicative of the homogeneity of the items in the measure that taps the construct.
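Internal consistency is most commonly indexed by Cronbach's coefficient alpha. A minimal sketch of the standard formula, assuming population variances and complete data (in practice a statistics package would be used): alpha rises toward 1 as the items in the measure covary more strongly.

```python
def cronbach_alpha(responses):
    """Cronbach's coefficient alpha, a common index of internal consistency.

    `responses` is a list of respondents, each a list of scores on the
    k items of one multi-item measure (same item order for everyone).
    alpha = k / (k - 1) * (1 - sum of item variances / variance of totals)."""
    k = len(responses[0])  # number of items in the measure

    def variance(xs):      # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_variances = [variance([r[i] for r in responses]) for i in range(k)]
    total_variance = variance([sum(r) for r in responses])
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)
```

For perfectly homogeneous items (every respondent gives the same score on each item of the measure), alpha equals 1; weakly related items push it toward 0.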

- Explain the difference between reflective and formative scales.

The items that measure a concept should not always hang together: this is only true for reflective, but
not for formative, scales. In a reflective scale, the items (all of them!) are expected to correlate. A
formative scale is used when a construct is viewed as an explanatory combination of its indicators.
