
Compilation for PSY 332

by
Moyosolaoluwa Olowokure

DISCLAIMER: This is NOT an official UNILAG FSS material. This compilation was
curated by Moyosolaoluwa for her revision; please use it with caution and
discretion.
INTRODUCTION
Psychology aims at observing, explaining, predicting & modifying behaviour. This is
done using the scientific method of investigation which has 6 KEY STEPS:
1-Observing, 2-Asking a Question, 3-Forming a Hypothesis,
4-Testing the Hypothesis, 5-Conclusion 6-Presenting the Results
It also has 3 MAJOR TOOLS:
1-Observation, 2-Measurement (Nominal, Ordinal, Interval & Ratio Scales),
3-Experimentation (within-groups design & between-groups design)

TYPES OF SCIENTIFIC INVESTIGATION/RESEARCH DESIGNS


Descriptive (Observational) – seeks to simply describe or analyse data without any
hypothesis or prediction e.g. “What is the attitude of UNILAG students towards
premarital sex?”

Comparative – Collects and compares data from different groups, or from the
same group at different points in time, e.g. “Gender Differences in Recall of
Emotional and Neutral Data”, “Differences in Recall Before and After Interference”.

Correlational/Predictive – Investigates the relationship/association between two
variables e.g. Relationship between Self-Esteem and Anxiety Levels among students
of UNILAG.

Experimental/Causal – The experimenter controls/manipulates an independent
variable while other variables are kept constant, to see the
impact/influence/effect of one variable on another, e.g. Impact of stress
tolerance on suicidal ideation among students of UNILAG. It relies on fundamental
principles: Determinism (there is cause and effect, and the state of an object is
determined by its previous states), Empiricism (we can only study what is observable;
hypotheses are not compared to intuition or revelation. It is positive, not normative),
Parsimony (preferring the simplest adequate explanation) and Testability (hypotheses
and theories should be testable and falsifiable over time).
Types of Experiments:
Controlled/True experiments – done in the laboratory using randomized
assignment of participants to a control group and treatment group. Such
experiments could use a pretest-posttest design (testing outcomes in both groups
before and after an intervention), a posttest-only design (testing outcomes only after
the intervention), or a combination of the two (Solomon four-group design).
e.g. Darley & Latané’s 1968 bystander experiment was conducted in a controlled
laboratory setting.

Quasi-experimental research is very similar to experimental but without the random
assignment of participants to groups, e.g. effect of a landslide on anxiety levels of
residents of Afue village and Titu village. (It would be unethical/impossible to
randomly assign people to locations for them to experience the landslide and then
compare.) It is useful when random assignment is not feasible or ethical. The
experimenter may have some control over the IV, but it can be prone to bias,
making it more difficult to definitively establish causality compared to true
experiments.

Natural Experiments – A type of quasi-experimental research done using natural or
pre-existing groups, e.g. the Great Smog of London (1952): a severe air pollution
event resulted in a significant increase in respiratory illnesses and deaths.
Researchers were able to compare mortality rates during the smog event to previous
periods, highlighting the negative health impacts of air pollution.

Field Experiments – done in a natural setting to preserve external validity. They use
random assignment but are still at risk of extraneous and confounding variables and
lower internal validity compared to true experiments, e.g. the bystander effect social
experiment carried out by 300L psychology students in natural settings on campus
(Relaxation Centre, Moremi, Quadrangle, etc.).
Research is also divided into these categories which are not mutually exclusive:
Basic Research – Has no immediate practical goal; it is the reservoir of data and
explanations the applied researcher will draw from, e.g. what are the causes of
depression?

Applied Research – Research aimed at solving a particular problem, e.g. how to cure
depression.

Longitudinal Research – Monitoring the same event, individual, or group over a
relatively long but defined time. It is detailed and expressive but runs the risk of
attrition, practice effects, and cohort effects. A case study focuses on one
individual/place for a long time; it risks problems with reliability, generalizability,
and unbiased selection in reporting.

Sequential Research – A longitudinal study carried out on multiple cohorts.

Cross-sectional Research – Individuals from different ages/stages/cohorts are tested
at the same point in time.

STEPS IN HYPOTHESIS TESTING (SCRAP)


State Hypothesis – Whether Alternate (Ha) or Null (H0) but in psychology we use
Ha. We also prefer one-tailed/directional hypotheses.
Collection of Data – Using the appropriate method
Run Statistical Test
Accept or Reject Hypothesis
Present findings in current APA style

VARIABLES
These are attributes that take on different values.
TYPES OF VARIABLES
Quantitative – Numbers represent actual amounts. They could be: Discrete (whole
numbers/counts without fractions) or Continuous (able to take any value within a
range, including fractions), e.g. variables measured on an interval or ratio scale.

Qualitative – Numbers do not represent actual amounts; rather, they represent
categories. They could be Dichotomous/Binary (e.g. gender, yes-or-no questions,
etc.). They are variables measured on a nominal or ordinal scale.

Independent Variable – The treatment/control variable in experimental research.


Dependent Variable – The response variable in experimental research

Predictor Variable – The ‘independent variable’ in correlational research, x


Outcome Variable – The ‘dependent variable’ in correlational research, y

Latent Variable – A variable that cannot be directly measured, so it is measured
indirectly using observable variables or “proxies”. For example, “happiness” is a
latent variable that can be measured by “how happy one feels on a scale of one to
ten”.
Composite Variable – A combination of multiple variables used in data analysis.

Confounding Variable – An uncontrolled variable that affects both the IV and DV


Extraneous Variable – An uncontrolled variable that affects the DV

MEASUREMENT
Measurement is the application of rules for assigning numbers to objects. Scales of
measurement refer to features of measurement that help transform qualities of
attributes into numbers.
SCALES/LEVELS OF MEASUREMENT

Nominal – Categorises by labelling. It has no magnitude, no equal intervals between
data points, and no absolute 0. Thus, mode is the only available measure of central
tendency for this level, e.g. location.

Ordinal – Categorises and ranks. It has magnitude but no equal intervals or absolute
0; thus, mode and median are the only available measures of central tendency
for this level, e.g. tax bracket, karate ability level, class rank.

Interval – Categorises, ranks, and infers equal intervals between data points, but has
no absolute (true) 0. Thus, mode, median, and mean are all available measures of
central tendency for this level, e.g. IQ, temperature, time of day.
Ratio – Categorises, ranks, infers equal intervals between data points, and has an
absolute 0: a value of 0 indicates the complete absence of the attribute. Ratio
variables can be continuous. Mode, median, and mean are all available measures of
central tendency for this level, e.g. height, weight, CGPA.

SAMPLING
The population is the group we are trying to make conclusions about while the
sample is the representative sub-group that we actually collect data from. Sampling
is the process of selecting a sample that is representative of the target population.

TYPES OF SAMPLING METHODS:


Probability Sampling – every member of the population has a chance of being
selected. It is mainly used in quantitative research. If you want to produce results
that are representative of the whole population, probability sampling techniques are
the most valid choice.
Types of Probability Sampling:
Simple Random Sampling – every member of the population has an equal chance
of being selected. Your sampling frame should include the whole population.
To conduct this type of sampling, you can use tools like random number generators
or other techniques that are based entirely on chance to select the sample.
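As a rough sketch of that chance-based selection (the 500-student sampling frame and the seed are hypothetical), Python's standard library can draw such a sample:

```python
import random

# Hypothetical sampling frame: ID numbers for a population of 500 students
frame = list(range(1, 501))

random.seed(42)  # fixed seed so the draw is reproducible for revision purposes
sample = random.sample(frame, k=50)  # every member has an equal chance of selection
```

Because `random.sample` draws without replacement, no member can appear twice in the sample.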

Systematic Sampling – every member of the population is listed with a number, but
instead of randomly generating numbers, individuals are chosen at regular intervals.
e.g. choosing every fourth person. One must ensure that there is no hidden
pattern in the list that might skew the sample.
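A minimal sketch of interval-based selection, assuming a hypothetical frame of 500 members and a desired sample of 50:

```python
import random

frame = list(range(1, 501))   # hypothetical numbered frame of 500 members
n = 50
k = len(frame) // n           # sampling interval: take every k-th member
start = random.randrange(k)   # random starting point within the first interval
sample = frame[start::k]      # members chosen at regular intervals
```

The random starting point keeps an element of chance while the fixed interval does the rest of the selection.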

Stratified Sampling – the population is divided into subpopulations/ subgroups


called “strata” that may differ in important ways/characteristics (e.g., gender identity,
age range, income bracket, job role). It allows you to draw more precise conclusions
by ensuring that every subgroup is properly represented in the sample. Based on the
overall proportions of the population, you calculate how many people should be
sampled from each subgroup. Then you use random or systematic sampling to
select a sample from each subgroup.
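The proportional-allocation step described above can be sketched as follows (the strata sizes and overall sample size are hypothetical):

```python
import random

# Hypothetical strata sizes by level of study
strata = {"100L": 300, "200L": 200, "300L": 100}
n_total = 60  # desired overall sample size
pop = sum(strata.values())

# Proportional allocation: each stratum's quota mirrors its population share
quota = {level: round(n_total * size / pop) for level, size in strata.items()}

# Then draw a simple random sample within each stratum
samples = {level: random.sample(range(size), quota[level])
           for level, size in strata.items()}
```

Here the 100L stratum is half the population, so it receives half the sample (30 of 60), and so on for the other strata.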

Cluster Sampling – dividing the population into subgroups, where each subgroup
should have similar characteristics to the whole population. Instead of sampling
individuals from each subgroup, you randomly select entire subgroups.
If it is practically possible, you might include every individual from each sampled
cluster. If the clusters themselves are large, you can also sample individuals from
within each cluster using one of the techniques above. This is called multistage
sampling. This method is good for dealing with large and dispersed populations, but
there is more risk of error in the sample, as there could be substantial differences
between clusters. It’s difficult to guarantee that the sampled clusters are
representative of the whole population.

Non-Probability Sampling – individuals are selected based on non-random criteria,


and not every individual has a chance of being included. This type of sample is
easier and cheaper to access, but it has a higher risk of sampling bias. That means
the inferences you can make about the population are weaker than with probability
samples, and your conclusions may be more limited. If you use a non-probability
sample, you should still aim to make it as representative of the population as
possible. Non-probability sampling techniques are often used in exploratory and
qualitative research. In these types of research, the aim is not to test a hypothesis
about a broad population but to develop an initial understanding of a small or
under-researched population.
Types of Non-probability Sampling:
Convenience Sampling – Lazy. It involves using participants that are closest/most
accessible to the researcher

Voluntary response sampling – Instead of the researcher choosing participants and


directly contacting them, people volunteer themselves (e.g. by responding to a
public online survey). Voluntary response samples are always at least somewhat
biased, as some people will inherently be more likely to volunteer than others,
leading to self-selection bias.
For example, the NAPS PRO sends a questionnaire to the general group chat asking
people to air their opinions on the state of facilities in FSS. Responses may come
mainly from people with strong opinions, since participation is voluntary.

Purposive Sampling/Judgment sampling – the researcher uses their expertise to


select a sample that is most useful to the purposes of the research. It is often used
in qualitative research, where the researcher wants to gain detailed knowledge about
a specific phenomenon rather than make statistical inferences, or where the
population is very small and specific. An effective purposive sample must have clear
criteria and rationale for inclusion and exclusion. These inclusion and exclusion
criteria must be described and the researcher should be aware of observer bias
affecting their arguments. E.g selecting papers/studies for a systematic review

Snowball Sampling – Participants are contacted to recruit other participants and the
number of participants continues to “snowball” (increase in size) as more participants
are recruited. This method is very useful when one does not have easy access to the
target population but it risks sampling bias (when some members of a population
are systematically more likely to be selected in a sample than others) and thus lower
generalizability and external validity.
e.g. a study to be conducted on homeless people where participants are recruited
by other homeless people.

Quota Sampling – The population is divided into mutually exclusive strata and then
sample units are recruited until the quota (predetermined number or proportion of
units) is reached. These units share specific characteristics, determined by the
researcher before forming strata. Quota sampling aims to control what or who
makes up your sample.

DATA COLLECTION METHODS:


Observation – this could be quantitative (measuring and counting) or qualitative
(describing a phenomenon richly).

Surveys – these are commonly used in correlational studies and could be in the
form of questionnaires, structured interviews, semi-structured interviews or
unstructured interviews.

Secondary Data – e.g Systematic Reviews

Discourse Analysis – Analysing speeches and other written/verbal works to uncover


how language reflects social context. This has low psychometric properties (reliability
and validity).
INFERENTIAL STATISTICAL TOOLS & WHEN TO USE THEM:
PARAMETRIC TESTS OF DIFFERENCE

Independent T-test – Compares the means of two separate groups in a sample across
one IV and one DV, e.g. comparing anxiety levels of 400L and 300L psychology
students.
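As a sketch of what the test computes, the pooled-variance t statistic can be written with only Python's standard library (`independent_t` is an illustrative helper, not a named library function):

```python
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance independent-samples t statistic and its degrees of freedom."""
    na, nb = len(a), len(b)
    # Pooled variance: a weighted average of the two group variances
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    t = (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2
```

Identical groups give t = 0; the statistic grows as the two group means move apart relative to the pooled spread.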

Paired/Dependent T-test – Compares means of the same group before and after
an intervention across one IV and one DV. It is used for pretest-posttest
experiments, e.g. comparing happiness levels of 300L psychology students before
and after a PSY 332 class.
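The paired version works on within-person difference scores; a minimal stdlib sketch (the helper name is illustrative):

```python
from statistics import mean, stdev

def paired_t(before, after):
    """Paired-samples t statistic computed on the within-person differences."""
    diffs = [a - b for a, b in zip(after, before)]
    n = len(diffs)
    # t is the mean difference divided by the standard error of the differences
    return mean(diffs) / (stdev(diffs) / n ** 0.5)
```

Because each participant serves as their own control, only the variability of the difference scores matters, not the variability between people.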

One sample T-test – compares a group mean against a preset criterion/cutoff mark
e.g comparing JAMB scores to the cutoff mark
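The same idea against a fixed criterion rather than a second group, sketched with the standard library (helper name illustrative):

```python
from statistics import mean, stdev

def one_sample_t(scores, criterion):
    """One-sample t statistic: a group mean against a preset cutoff."""
    n = len(scores)
    return (mean(scores) - criterion) / (stdev(scores) / n ** 0.5)
```

A group whose mean sits exactly on the cutoff yields t = 0; positive t means the group mean is above the criterion.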

One-way ANOVA – Compares the means of three or more groups with quantitative
data across one IV and one DV, e.g. comparing CGPAs of 300L, 200L and 100L
psychology students.
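The F statistic behind a one-way ANOVA is the ratio of between-group variance to within-group variance; a stdlib sketch (helper name illustrative):

```python
from statistics import mean

def one_way_f(groups):
    """F statistic for a one-way ANOVA across k independent groups."""
    grand = mean(x for g in groups for x in g)  # mean of all scores pooled
    k = len(groups)
    n = sum(len(g) for g in groups)
    # Between-groups: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-groups: spread of scores around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

When all group means are equal, the between-groups sum of squares is zero and F = 0; F grows as group means separate.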

Two-way ANOVA – Compares group means across two IVs and one DV. It shows the
main effect of each IV on the DV and the interaction effect of both IVs on the DV,
e.g. comparing CGPAs of male and female students of various departments in FSS.

PARAMETRIC TESTS OF CORRELATION


Pearson’s Correlation – Used to measure the magnitude and direction of an
association between two variables, e.g. correlation between net worth and self-
esteem.
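Pearson's r is the covariance of the two variables scaled by both standard deviations; a stdlib sketch (helper name illustrative):

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired variables."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))  # scaling bounds r between -1 and +1
```

Perfectly increasing pairs give r = +1, perfectly decreasing pairs give r = -1, and unrelated variables give r near 0.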

PARAMETRIC TESTS OF EFFECT


Linear Regression – Used to predict cause and effect relationships between an IV
and DV. In other words, it is used to predict the DV score from the IV score.
e.g. predict the level of suicidal ideation from perceived social support.
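The fitted prediction line can be sketched with ordinary least squares using only the standard library (`fit_line` is an illustrative helper):

```python
def fit_line(x, y):
    """Least-squares intercept and slope for predicting y (DV) from x (IV)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    return intercept, slope  # predicted DV = intercept + slope * IV
```

For example, the points (1, 3), (2, 5), (3, 7) lie exactly on the line y = 1 + 2x, so the fit recovers intercept 1 and slope 2.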

AN EXAMPLE OF APA REPORTING FOR AN INDEPENDENT T-TEST;


An independent t-test was conducted to determine whether the cognitive test scores
of those who took Red Bull were significantly higher than those who took coffee.
The Red Bull group (N = 34) had a mean test score of 37.6 (SD = 13.5) while the
coffee group (N = 16) had a mean score of 37.4 (SD = 7.0). The results revealed no
significant difference between the two groups, t(48) = 0.07, p = .94; thus the
hypothesis that those who took Red Bull would score higher than those who took
coffee was not supported.

MORE TERMS TO KNOW:



Sampling error – The statistical difference between the population and the sample.
(As a researcher you’d want this to be as low as possible)

Type I error – This is also known as a false positive. Here, the researcher rejects the
null hypothesis (accepts the alternate) when the null is actually true. The probability
of a type I error is alpha (hence why we usually use 0.05, a low alpha level).

Type II error – This is also known as a false negative. Here, the researcher fails to
reject the null hypothesis (rejects the alternate) when the alternate is actually true.
The probability of a type II error is beta.

Confidence interval – A range of values likely to include an unknown parameter, e.g.
the true mean difference. The confidence interval provides an error margin for this
unknown parameter. (The margin of error corresponds to our alpha level, usually
5%, giving a 95% confidence interval.)
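A simple sketch of a 95% confidence interval for a mean, using the z-based approximation (real analyses of small samples would use the t distribution; the helper name is illustrative):

```python
from statistics import mean, stdev

def ci_95(scores, z=1.96):
    """Approximate 95% confidence interval for a sample mean (z-based sketch)."""
    n = len(scores)
    se = stdev(scores) / n ** 0.5  # standard error of the mean
    m = mean(scores)
    return m - z * se, m + z * se  # lower and upper bounds
```

The interval is centred on the sample mean, and its width shrinks as the sample size grows.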

p value – the probability of observing an effect at least as large as the one found,
assuming the null hypothesis is true. If the p value of a test is > .05, the probability
is considered too high and the results are deemed “not statistically significant”.
Statistical significance is also influenced by sample size, effect size, and the
confidence interval.

Statistical Power – (1 - beta). It is the probability of detecting an effect of a
particular size or greater if that effect actually exists in the population. Statistical
power is affected by sample size, standard deviation (SD), alpha, and mean
difference. Furthermore, a directional/one-tailed test has greater statistical power
than a non-directional/two-tailed test.
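Power can be estimated by simulation: repeatedly draw samples under an assumed true effect and count how often the test rejects. This sketch assumes a one-tailed z-test with known SD (the helper name, sample sizes, and critical value of 1.645 for alpha = .05 are illustrative):

```python
import random

def simulated_power(n, mean_diff, sd, trials=2000, z_crit=1.645):
    """Monte Carlo power estimate for a one-tailed z-test with known sd."""
    random.seed(0)  # reproducible simulation
    rejections = 0
    for _ in range(trials):
        xs = [random.gauss(mean_diff, sd) for _ in range(n)]  # sample under the true effect
        z = (sum(xs) / n) / (sd / n ** 0.5)                   # z statistic for this sample
        rejections += z > z_crit                              # did we reject the null?
    return rejections / trials
```

With a true mean difference of zero, the rejection rate settles near alpha (about .05), illustrating why alpha is the type I error rate; with a large true effect and a big sample, power approaches 1.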

Effect Size – A statistical measure that indicates the strength/magnitude of an
observed effect in research. It also strongly determines the practical significance
and ecological validity of results, e.g. Pearson’s r.
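Another common effect size for two-group comparisons is Cohen's d, the mean difference expressed in pooled-standard-deviation units; a stdlib sketch (helper name illustrative):

```python
from statistics import mean, variance

def cohens_d(a, b):
    """Cohen's d: mean difference in pooled-standard-deviation units."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sp2 ** 0.5
```

A d of 1.0 means the group means differ by one pooled standard deviation; identical groups give d = 0.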

SD – How dispersed the data are in relation to the mean.

Variance – The average of the squared deviations of each data point from the mean
(the square of the SD).

Normal Distribution – A symmetric, bell-shaped distribution in which most values
cluster around the mean and frequencies taper off equally in both tails.

Null Hypothesis (H0) – The statement that there is no effect, difference, or
relationship between the variables under study.

Alternate Hypothesis (Ha) – The statement that an effect, difference, or relationship
does exist between the variables under study.

RESEARCH ETHICS: Informed consent, Confidentiality, Debriefing, Beneficence
and Non-maleficence, Declared Conflict of Interest and Integrity.

CONTENTS OF A GOOD PSYCHOLOGY LAB REPORT:


Title (12–15 words)
Abstract (about 150 words, with keywords below)
Introduction (definition of concepts and background of the study)
Aim & Objectives (the general purpose of the study & steps taken to achieve them)
RQ (could be descriptive, correlational, comparative, etc)
RH (clear & concise tentative statement to test the relationship between IV and DV)
Lit Review (objective summary and analysis of relevant research and non-research)
Methodology (Participants, Sample, R.Design, Instruments, Procedure, Data analysis,
Ethical considerations)
Results
Discussion
References
SYSTEMATIC REVIEW
A systematic review aims to find, organize, and critically evaluate existing research
on a specific focused topic. It does this through a rigorous systematic process.
1. Formulating a research question: precisely defining the specific topic and
knowledge gap the review aims to address.

2. Developing a search strategy: identifying relevant keywords, databases, and


search filters to ensure a comprehensive yet focused search for pertinent literature.

3. Selecting studies: screening the identified studies based on pre-defined eligibility


criteria to ensure they align with the research question and methodological quality
standards.

4. Data extraction: This involves extracting relevant data from the selected studies,
such as study characteristics, methods, results, and conclusions, using a standardized
approach.

5. Data synthesis: This stage involves organizing and analyzing the extracted data
to identify patterns, trends, and potential answers to the research question.
Depending on the research question, this may involve qualitative or quantitative
methods or a combination of both.

6. Writing the review: presenting the findings of the review in a clear, concise, and
structured manner, adhering to established reporting guidelines like PRISMA.

CONTENTS OF A GOOD SYSTEMATIC REVIEW (AND META-ANALYSIS):


Title (12-15words)
Abstract (150 words and keywords below)
Introduction - definition and discussion of each concept (IV and DV) separately
and then jointly under “the current review”. Expatiate on how each variable is seen
in various contexts, not just in the context of your topic. After the current review,
clearly state your meta-analysis questions.

Method - This comprises:


Search strategy - a brief description of how you looked for studies and what
resources you used.

Inclusion/exclusion criteria a.k.a Eligibility Criteria - The characteristics or features


that individuals in your target population must have to participate in a
study/research/experiment. In the case of a systematic review, it refers to those
characteristics and features that studies must have to be selected for inclusion in
your review. E.g time of publication, country, research design, psychometric tools
used, outcome evaluation methods, effect reported, sample characteristics, etc

Data Extraction – a description of what key data was taken from the included
studies. This may include authors, the country in which data were collected,
sampling method, psychological tests/measures used, and reported differences,
impacts, correlations, etc between variables of interest.

Quality Appraisal – a description of what instrument was used to appraise the


quality of selected studies. It could be a widely used checklist from recognized
institutions e.g. Joanna Briggs Institute, National Institute for Health and Clinical
Excellence, etc. These checklists often provide a means of assigning quality appraisal
scores to selected studies. The higher the score, the higher the scientific quality of
the papers.

Statistical analysis – the use of statistical tools and meta-analysis software such as
Pearson’s Correlation and “Comprehensive Meta-Analysis” to compare the results of
included studies and determine effect size. Tables and figures can be used to
represent information obtained from statistical analysis.
Results – Begins with a brief statement of how the researcher(s) conducting the
review arrived at a given number of selected/included studies by eliminating
duplicates, screening studies for relevance, and assessing them against the eligibility
criteria. It is followed by a PRISMA flow diagram as a visual representation of the
selection process highlighting: 1- number of papers identified (Identification), 2-


number of papers screened out due to duplicate or irrelevant records (Screening),
3- number of articles assessed for Eligibility and finally 4- The number of
papers/studies Included. Possible exam question: “Importance/Uses of PRISMA”.

Note: PRISMA is an acronym for “Preferred Reporting Items for
Systematic Reviews and Meta-Analyses”. It is a set of guidelines covering
protocol, search methods, selection process, data extraction, and
data synthesis, used by researchers to ensure transparent,
comprehensive, and high-quality systematic reviews and meta-analyses. There
are currently 27 items on the 2020 PRISMA checklist.

Figure 1: A PRISMA flow diagram showing the selection of studies included in a
review.

The results section also includes:


Participants – age range, number, demographics, etc of participants in the included
studies

Instruments and Data Analysis – a description of what psychological


tests/measures/instruments were used by the various studies in collecting and
analyzing relevant data.

Quality Assessment Summary – Outlining the scores/ratings of included papers


(gotten from the quality appraisal). It highlights key items on the checklist that were
or were not checked (fulfilled). It also highlights any potential issues with external
validity, missing data, etc.
e.g. “Of the 9 included studies, 8 checked the use of appropriate statistical analysis.
Only one did not check.”

The Quality Assessment summary also touches on how the studies got their data
(whether self-reported, third-party-reported, computer software, pen and paper, etc)
and a summary of findings (what kind of effects, relationships, or differences were
found in the studies). This could be arranged in a table.

Meta-analysis – A statistical technique for combining the results of multiple studies
investigating the same question. It takes numerical data (e.g. effect sizes) from each
study and pools them together to give a precise estimate of the effect across
multiple pieces of research. In other words, it is a statistical analysis of many
related quantitative findings from many studies, used to draw conclusions and
detect patterns and relationships between them.

· Note: It’s not always possible to have a meta-analysis in a systematic
review. If individual studies are qualitative or are too different in their
design, measures, or populations, a meta-synthesis is used instead to pick out
core elements and themes for new conceptualizations and interpretations.
Discussion – A discussion of findings, research limitations, and recommendations
for future research.

Conclusion – A brief highlight of interesting findings and contributions of the review


to supporting/not supporting certain theories, proposals, or popular assertions. It
also mentions the contribution of the review to improving/refining interventions for
certain populations. It ends with a declaration of conflicting interests, funding
received, and supplemental materials for the review.

References – List of all sources used arranged in the prevailing APA style
(alphabetical order, first line hanging indent, etc)

CREATIVE THINKING EXPERIMENT/ NINE-DOT TEST


The terms “convergent thinking” and “divergent thinking” were coined by J.P.
Guilford in 1956. Convergent thinking is used to solve well-defined, straightforward
problems, while divergent thinking is used to solve problems with many possible
solutions or outcomes. Robert Sternberg propounded his triarchic theory of
intelligence (practical, analytical, and creative intelligence). The 9-dot puzzle is
used to test participants’ ability to think outside the box.

Procedure:
Participants were presented with nine dots arranged in a 3-by-3 square matrix and
asked to connect all the dots with no more than 4 consecutive straight lines within
5 minutes.

Instruments: Timer, Nine-dot puzzle computer application, Writing Materials



Hypotheses for this test could include:


1- Male participants will have a higher frequency of divergent thinking than females.
2- There will be equal proportions of successful and unsuccessful participants in the
9-dot puzzle.

Independent, Dependent, Extraneous, and Confounding variables could be:


IVs – Gender,
DVs – Frequency of divergent thinking, Success level, Time spent solving
EVs – Closeness to the board, Tone of the observer, Lighting/Temperature of the
environment, Emotional state of the participant, Noise, Evaluation apprehension,
Participant Fatigue, etc.

Statistical Instruments: Chi-square goodness of fit, Descriptive Statistics
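The chi-square goodness-of-fit statistic for hypothesis 2 (equal proportions of successful and unsuccessful participants) can be sketched as follows; the counts of 30 solved and 20 unsolved are hypothetical, and the helper name is illustrative:

```python
def chi_square_gof(observed, expected):
    """Chi-square goodness-of-fit statistic: sum of (O - E)^2 / E over categories."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts: 30 solved vs 20 unsolved, against an expected 25/25 split
stat = chi_square_gof([30, 20], [25, 25])  # = 2.0
```

The statistic is 0 when the observed counts match the expected split exactly, and grows as the observed proportions depart from it.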

BYSTANDER EFFECT
Popularized by Darley & Latané in 1968: the finding that individuals are less likely
to take action in an emergency when others are present. The Bystander experiment
is a social experiment often used to observe and compare participants’ reactions
and reaction times during emergencies. The original experiment was done in a
controlled lab setting but it can also be conducted as a quasi-experiment:

Procedure
The confederates, dressed in casual clothing, walked closely together through
various campus locations. One confederate student would pretend to faint (faking a
medical emergency) and the second confederate student would panic and call for
help. The observers recorded the following:
· Response times of participants.
· Number of participants that responded and offered help.
· Gender of confederate actor that fainted
· Gender of participants that responded and offered help.
· Types of non-verbal behaviors of participants that ignored the emergency.

Instruments: Video recording device, timer, recording sheet, and writing materials.

Hypotheses for this test could include:


Ha 1: The mean response time for bystanders will be 6 seconds.
Ha 2: Male participants will have slower response times than female participants
Ha 3: Female confederates will experience faster reaction times from participants
than male confederates

Independent, Dependent, and Extraneous variables could be:


IVs – Gender of participant, Gender of confederate
DV – Response time, Number of participants that responded
EVs - Time of day, Temperature of the environment, Participant Fatigue, etc.
Statistical Instruments: One-sample T-test, Independent T-test, Mann-Whitney U,
Descriptive Statistics
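The Mann-Whitney U statistic listed above (the non-parametric counterpart of the independent t-test) can be sketched with the standard library; `mann_whitney_u` is an illustrative helper:

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic (the smaller of the two possible U values)."""
    # Count pairs where a beats b; ties contribute half a point
    u_a = sum((x > y) + 0.5 * (x == y) for x in a for y in b)
    return min(u_a, len(a) * len(b) - u_a)
```

When every score in one group exceeds every score in the other, U = 0 (maximum separation); heavily overlapping groups give U near half of len(a) * len(b).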

PERSONAL SPACE EXPERIMENT


Robert Sommer in 1969 expatiated on human territoriality and personal space.
This experiment is conducted to observe how far apart or how close together people
prefer to be.

Procedure
Participants were asked to sit beside a confederate on a bench. The distance
between them and the confederate was then measured and recorded.

Instruments: Measuring tape, Bench

Hypotheses for this test could include:


Ha 1: Male participants will have a wider personal space than female participants.
Ha 2: Male Confederate- Male Participant personal space will be greater than Female
Confederate - Female Participant personal space.

Independent, Dependent and Extraneous variables could be:


IVs – Gender of participant, Gender of confederate
DVs – Personal space width
EVs – Tone of the observer, Lighting/Temperature of the environment, Emotional
state of the participant, Evaluation apprehension etc.

Possible Statistical Instruments to use for this test:


Mann Whitney U Test, Independent T-test, Chi-square test of independence

For more info on research designs and sampling methods visit: Scribbr Research
Designs or ask Gemini
I pray we experience ease and peace of mind throughout these exams. May God
crown our efforts and grant us success in Jesus' name.
