Three Types of Research
1. Causal Research
When most people think of scientific experimentation, research on cause and
effect is most often brought to mind. Experiments on causal relationships
investigate the effect of one or more variables on one or more outcome variables.
This type of research also determines if one variable causes another variable to
occur or change. An example of this type of research would be altering the
amount of a treatment and measuring the effect on study participants.
2. Descriptive Research
Descriptive research seeks to depict what already exists in a group or
population. An example of this type of research would be an opinion poll to
determine which Presidential candidate people plan to vote for in the next
election. Descriptive studies do not seek to measure the effect of a variable; they
seek only to describe.
3. Relational Research
A study that investigates the connection between two or more variables is
considered relational research. The variables that are compared are generally
already present in the group or population. For example, a study that looked at
the proportion of males and females who would purchase either a classical CD or
a jazz CD would be studying the relationship between gender and music
preference.
While the terms are sometimes used interchangeably in general practice, the
difference between a theory and a hypothesis is important when studying
experimental design.
Some important distinctions to note include:
• A theory predicts events in general terms, while a hypothesis makes a
specific prediction about a specified set of circumstances.
• A theory has been extensively tested and is generally accepted, while a
hypothesis is a speculative guess that has yet to be tested.
One of the most important distinctions to make when discussing the relationship
between variables is the difference between correlation and causation.
• In a positive correlation, as the amount of one variable goes up, the levels
of another variable also go up.
• In a negative correlation, as the amount of one variable goes up, the levels
of another variable go down.
• In both types of correlation, there is no evidence or proof that changes in one
variable cause changes in the other variable. A correlation simply indicates
that there is a relationship between the two variables.
The most important concept to take from this is that correlation does not equal
causation.
Many popular media sources make the mistake of assuming that simply because
two variables are related, a causal relationship exists.
Q. What is Validity?
A. Validity is the extent to which a test measures what it claims to measure. It is
vital for a test to be valid in order for the results to be accurately applied and
interpreted.
• Concurrent Validity occurs when the criterion measures are obtained at the
same time as the test scores. This indicates the extent to which the test
scores accurately estimate an individual’s current state with regard to the
criterion. For example, on a test that measures levels of depression, the test
would be said to have concurrent validity if it measured the current levels of
depression experienced by the test taker.
Construct Validity:
A test has construct validity if it demonstrates an association between the test
scores and the theoretical trait it is intended to measure. Intelligence tests are one example
of measurement instruments that should have construct validity.
Q. What Is Reliability?
A. Reliability refers to the consistency of a measure. A test is considered reliable
if we get the same result repeatedly. For example, if a test is designed to
measure a trait (such as introversion), then each time the test is administered to
a subject, the results should be approximately the same. Unfortunately, it is
impossible to calculate reliability exactly, but there are several different ways to
estimate it.
Test-Retest Reliability
To gauge test-retest reliability, the test is administered twice at two different
points in time. This kind of reliability is used to assess the consistency of a test
across time. This type of reliability assumes that there will be no change in the
quality or construct being measured. Test-retest reliability is best used for things
that are stable over time, such as intelligence.
Generally, reliability will be higher when little time has passed between tests.
Inter-rater Reliability
This type of reliability is assessed by having two or more independent judges
score the test. The scores are then compared to determine the consistency of the
raters' estimates. One way to test inter-rater reliability is to have each rater assign
each test item a score. For example, each rater might score items on a scale
from 1 to 10. Next, you would calculate the correlation between the two sets of
ratings to determine the level of inter-rater reliability. Another means of testing
inter-rater reliability is to have raters determine which category each observation
falls into and then calculate the percentage of agreement between the raters. So, if
the raters agree 8 out of 10 times, the test has an 80% inter-rater reliability rate.
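The 8-out-of-10 agreement calculation above can be sketched directly; the categories and rater decisions below are invented for illustration.

```python
# Two raters each assign one of two categories to the same ten observations.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "pass"]
rater_b = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "pass", "pass", "pass"]

# Count the observations on which both raters chose the same category.
agreements = sum(a == b for a, b in zip(rater_a, rater_b))
agreement_rate = agreements / len(rater_a)

print(f"inter-rater agreement: {agreement_rate:.0%}")  # 8 of 10 -> 80%
```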
Parallel-Forms Reliability
Parallel-forms reliability is gauged by comparing two different tests that were
created using the same content. This is accomplished by creating a large pool of
test items that measure the same quality and then randomly dividing the items
into two separate tests. The two tests should then be administered to the same
subjects at the same time.
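The random division of an item pool into two parallel forms can be sketched as follows; the item pool itself is hypothetical.

```python
import random

# A hypothetical pool of 20 items, all written to measure the same quality.
item_pool = [f"item_{i:02d}" for i in range(1, 21)]

random.seed(0)           # fixed seed so the split is reproducible
shuffled = item_pool[:]  # copy, so the original pool is untouched
random.shuffle(shuffled)

form_a = sorted(shuffled[:10])  # first half of the shuffled pool becomes Form A
form_b = sorted(shuffled[10:])  # second half becomes Form B

print(len(form_a), len(form_b))   # 10 10
print(set(form_a) & set(form_b))  # set() -- no item appears on both forms
```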
Internal Consistency Reliability
This form of reliability is used to judge the consistency of results across items on
the same test. Essentially, you are comparing test items that measure the same
construct to determine the test's internal consistency. When you see a question
that seems very similar to another test question, it may indicate that the two
questions are being used to gauge reliability. Because the two questions are
similar and designed to measure the same thing, the test taker should answer
both questions the same, which would indicate that the test has internal
consistency.
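One common way to quantify internal consistency, not named in the text, is Cronbach's alpha. The sketch below uses invented item responses and only the standard library.

```python
from statistics import pvariance

# Invented responses of five subjects (rows) to four items (columns),
# all intended to measure the same construct.
responses = [
    [4, 4, 3, 4],
    [2, 2, 2, 1],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 5, 4, 4],
]

k = len(responses[0])          # number of items
items = list(zip(*responses))  # one tuple of scores per item

# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
item_vars = sum(pvariance(item) for item in items)
total_var = pvariance([sum(row) for row in responses])
alpha = (k / (k - 1)) * (1 - item_vars / total_var)

print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 indicate high consistency
```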
The Simple Experiment
Finding Cause-and-Effect Relationships
What is a Simple Experiment?
A simple experiment is used to establish cause and effect, so this type of study is
often used to determine the effect of a treatment. In a simple experiment, study
participants are randomly assigned to one of two groups. Generally, one group is
the control group and receives no treatment, while the other group is the
experimental group and receives the treatment.
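Random assignment to the two groups can be sketched as follows; the participant labels and group sizes are invented for illustration.

```python
import random

# Ten hypothetical participants, to be split evenly between the two groups.
participants = [f"P{i}" for i in range(1, 11)]

random.seed(42)  # fixed seed so the assignment is reproducible
pool = participants[:]
random.shuffle(pool)

control_group = pool[:5]       # receives no treatment
experimental_group = pool[5:]  # receives the treatment

print("control:     ", control_group)
print("experimental:", experimental_group)
```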
Parts of a Simple Experiment
The experimental hypothesis: a statement that predicts that the treatment will
cause an effect. The experimental hypothesis will always be phrased as a cause-
and-effect statement.
The null hypothesis: a hypothesis that the experimental treatment will have no
effect on the participants or dependent variables. It is important to note that
failing to find an effect of the treatment does not mean that there is no effect.
The treatment might impact another variable that the researchers are not
measuring in the current experiment.
representative sample of that population.
1. Naturalistic Observation
Naturalistic observation involves observing and recording the variables of interest
in the natural environment without interference or manipulation by the
experimenter.
2. Surveys and Questionnaires
Surveys and questionnaires are among the most common methods used in
psychological research. In this method, a random sample of participants
completes a survey, test, or questionnaire that relates to the variables of interest.
Random sampling is a vital part of ensuring the generalizability of the survey
results.
• It’s fast, cheap, and easy. Researchers can collect large amounts of data in a
relatively short amount of time.
• More flexible than some other methods.
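The random-sampling step can be sketched with Python's standard library; the population and sample sizes below are invented for illustration.

```python
import random

# A hypothetical sampling frame of 1,000 potential respondents.
population = [f"respondent_{i:04d}" for i in range(1000)]

random.seed(7)                             # fixed seed for reproducibility
sample = random.sample(population, k=100)  # simple random sample, no repeats

print(len(sample))       # 100
print(len(set(sample)))  # 100 -- each respondent is sampled at most once
```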
3. Archival Research
Disadvantages of Archival Research:
• The researchers have no control over how the data was collected.
• Important data may be missing from the records.
• Previous research may be unreliable.