
ST. JOHN PAUL II INSTITUTE OF TECHNOLOGY


TRAINING ASSESSMENT AND LEARNING CENTER INC.
FRA Bldg. Carmen West Rosales Pangasinan / Aguila Road, Sevilla, San Fernando City La Union

INQUIRIES,
INVESTIGATION, AND
IMMERSION
– Third Quarter –
Module 1: Finding the Answers to
the Research Questions
(Week 1-2)

Subject: Inquiries, Investigation, and Immersion
Grade & Section: Grade 12
Module No. 1
Week: 1 and 2
Subject Teacher: Ms. Lorena Obedoza

Objectives:

1. interpret the data gathered;
2. analyze data; and
3. present the results of the data gathered.

Lesson 1: Inquiries, Investigations, and Immersion

Data Interpretation & Analysis Method

Data analysis is the process of understanding data, or known facts or assumptions, that serve as the basis of any claims or conclusions about the research problem or investigation. These data may be collected through observations, interviews, documentary analysis, and research instruments such as questionnaires. The primary aim in analyzing the recorded data is to find the appropriate method of data processing to transform the raw data into a finished product. This includes editing the raw material; coding, scoring, and scaling the data; and summarizing the data into statistical tables. These data are processed by either mechanical or electronic equipment.

INTERPRETATION OF DATA

Interpretation of data refers to the implementation of certain procedures through which data gathered from surveys are reviewed and analyzed for the purpose of arriving at valid and evidence-based conclusions. The interpretation of data assigns meaning to the information analyzed and determines its significance and implications for the study.

Guidelines for the Data Interpretation
1. Getting to know the data
2. Focusing the analysis
3. Coding
4. Entering and organizing the data
5. Interpreting the data

DATA ANALYSIS METHODS


Data collection comprises a major area of the research process. Results should be
interpreted in a systematic and logical order, with the data showing relevance to the research
problem and sufficient to answer the research questions.
According to Venson (2010), for qualitative data, the main summary measures are rates
and percentages while for quantitative data measures of central tendency (mean, median and
mode) and measures of variation (range, standard deviation) are used.
The researcher should examine the organized data that will serve as basis for
generalization of what the data say about the particular facet of the problem identified in the
study. The data gathered must be presented in an orderly fashion to show relations. The order of
presentation of results should be systematic and logical. The reported data must be relevant to
the research problem and sufficient to answer the research question.
Statistics appropriate to the research design must be reported. It should be clear from the design which statistics are relevant to each research question. Results should be organized and presented in a way that the reader clearly understands which statistic bears on which research hypothesis.
Honesty should be observed in analyzing and interpreting the data and in finally reporting
them. This means that findings that do not show the relationship among the variables studied
are also reported and included. The results or the analysis should be presented in a verifiable
form.

Major Elements of Data Analysis

According to Venson (2010), the major elements in data analysis include the following:
1. Presentation of the data. This showcases the data for easy
understanding of the reader. They can be displayed using tables,
diagrams, or other figures for easy comprehension.
2. Analysis. In this part, the knowledge and logical understanding of the
researcher is required. The important data are given enough attention
as it will be the basis of the final results of the study.

3. Interpretation. In this part, comprehensible statements are included
after analyzing and synthesizing the patterns and categories that are
derived from the findings.
4. Discussion. In this part, the results of the investigation are compared with the reviewed literature and studies; the transcripts and personal narrations of events that serve as proof of the themes and categories are quoted verbatim. After the analysis and interpretation of the data, discussions and explanations of the results are needed to give a more logical and empirical basis for the conclusion.

Quantitative data analysis is a systematic approach to investigations during which numerical data are collected and/or the researcher transforms what is collected or observed into numerical data. It often describes a situation or event, answering the 'what' and 'how many' questions you may have about something. This research involves measuring or counting attributes (i.e., quantities).

A quantitative approach is often concerned with finding evidence to either support or contradict an idea or hypothesis you might have. A hypothesis is a predicted answer to a research question; for example, you might propose that giving a student training on how to use a search engine will improve their success in finding information on the Internet.

Qualitative Data Analysis is an ongoing and cyclical process that involves the
identification, examination and interpretation of patterns and themes in textual data and
determines how these patterns and themes help answer the research questions at hand.
(Cristobal 2017).
Once you have collected your data you need to make sense of the responses you have
gathered.
Data analysis enables you to make sense of data by:
1. organizing them
2. summarizing them
3. doing exploratory analysis
And to communicate the meaning to others, the data can be presented as:
1. tables
2. graphical displays
3. summary statistics
4. patterns and themes (qualitative research)

Statistical Analysis

Before you proceed to analyze your data, you need to be familiar with some concepts:
1. Population - the whole set of units of analysis that might be investigated; these could be students, cats, house prices, etc.
2. Sample - the actual set of units selected for investigation and who participate in the research.
3. Variable - a characteristic of the units/participants.
4. Value - the score/label/value of a variable, not its frequency of occurrence.
5. Case/subject - the individual unit/participant of the study/research.

Steps in Data Analysis


Baraceros (2016) identified the different steps in quantitative data analysis and noted that "no data organization means no sound data analysis."

1. Coding system - to analyze data means to change verbally expressed data into numerical information. Once the words, images, or pictures are converted into numbers, they become fit for any analytical procedure requiring knowledge of arithmetic and mathematical computations. It is not possible for the researcher to do mathematical operations such as division, multiplication, or subtraction at the word level unless you code the verbal responses and observation categories. For example, for the gender variable, give the number 1 as the code or value for Male and the number 2 for Female.
2. Analyzing the data - data coding and tabulation are both essential in preparing the data analysis. Before interpreting every component of the data, the researcher first decides what kind of quantitative analysis to use: a simple descriptive statistical technique or an advanced analytical method.
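The coding step above can be sketched in a few lines of Python; the list of survey responses is hypothetical, invented only to illustrate the gender example from the text.

```python
# Coding system: convert verbal gender responses into numbers,
# following the module's example (1 = Male, 2 = Female).
# The responses list is hypothetical survey data.
responses = ["Male", "Female", "Female", "Male", "Female"]

codes = {"Male": 1, "Female": 2}           # the coding scheme
coded = [codes[r] for r in responses]      # verbal data -> numerical data

print(coded)  # [1, 2, 2, 1, 2]
```

Once the responses are coded this way, arithmetic operations such as counting, averaging, or cross-tabulating become possible.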

According to Cristobal (2017), the core process of qualitative data analysis to identify
meaningful patterns and themes includes:
1. Content Analysis - carried out by coding the data for certain words or content: going through all the text and labeling words, phrases, and sections of the text, or devising a matrix to group categories of the texts when listening to a recorded interview; identifying their patterns (ideas, concepts, behaviors, interactions, incidents, terminologies, or phrases used); and interpreting their meanings.
2. Thematic Analysis - a process of analyzing the data by grouping them according to themes. Themes either come directly from the research questions (preset themes) or emerge naturally from the resulting data.
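A minimal sketch of the coding idea behind content analysis, using a hypothetical snippet of interview text and a pre-chosen set of code words (both invented for illustration):

```python
from collections import Counter

# Tally occurrences of pre-chosen code words in interview text.
# The transcript and code words are hypothetical.
transcript = ("I feel afraid of my parents. I worry about the baby "
              "and I am afraid of what comes next.")
code_words = {"afraid", "worry", "baby"}

tokens = [word.strip(".,").lower() for word in transcript.split()]
counts = Counter(token for token in tokens if token in code_words)

print(counts)  # Counter({'afraid': 2, 'worry': 1, 'baby': 1})
```

In practice the researcher would read the full transcripts and label phrases and sections by hand or with qualitative software; the tally above only illustrates the "code and count" idea.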

Statistical Methodologies

1. Descriptive Statistics - brief descriptive coefficients that summarize a given data set, which can be either a representation of the entire population or a sample of it. They are broken down into measures of central tendency and measures of variability or spread. Measures of central tendency include the mean, median, and mode, while measures of variability include the standard deviation or variance and the minimum and maximum values.

2. Inferential Statistics - makes inferences about a population using data drawn from that population. Suppose you need to collect data on a very large population. For example, suppose you want to know the average height of all the men in a city with a population of so many million residents. It isn't very practical to try to get the height of each man. This is where inferential statistics comes into play. Instead of using the entire population to gather the data, the statistician will collect a sample or samples from the millions of residents and make inferences about the entire population using the sample.

The sample is a set of data taken from the population to represent the population. Probability distributions, hypothesis testing, correlation testing, and regression analysis all fall under the category of inferential statistics.
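The two methodologies above can be sketched with Python's standard library. The test scores and the simulated heights below are hypothetical data invented for illustration, not figures from the module.

```python
import random
import statistics

# --- Descriptive statistics: summarize a small set of scores ---
scores = [70, 75, 75, 80, 85, 90]
print(statistics.mean(scores))    # measure of central tendency: mean
print(statistics.median(scores))  # median -> 77.5
print(statistics.mode(scores))    # mode -> 75
print(statistics.stdev(scores))   # measure of variability: sample std. dev.
print(min(scores), max(scores))   # minimum and maximum values

# --- Inferential idea: estimate a population mean from a sample ---
random.seed(1)
population = [random.gauss(170, 8) for _ in range(100_000)]  # heights, cm
sample = random.sample(population, 500)   # measure only 500 residents
estimate = statistics.mean(sample)        # inference about all 100,000
print(round(estimate, 1))                 # close to the true mean near 170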

Types of Statistical Data Analysis


1. Univariate Analysis – analysis of one variable.

2. Bivariate Analysis – analysis of two variables (independent

and dependent)

3. Multivariate Analysis – analysis of multiple relations between

multiple variables.

6
Validity and Reliability
Validity refers to the quality of the instrument of being functional only within its specific
purpose. An instrument is valid if it measures what is supposed to measure. Since the
instruments of the study are used by the researcher in the methodology to obtain the data, the
validity of each should be established beforehand. This is to ensure the credibility of the findings,
and the correctness and accuracy of the following data analysis. (Cristobal 2017)

Types of Validity
1. Face Validity – involves an analysis whether the instrument is using a valid
scale including the font size, spacing, the size of the paper used, and other
necessary details that will not distract respondents from answering the
questionnaire.
2. Content Validity – determined by studying the questions to see whether
they can elicit the necessary information. It is measured by subjecting the
instrument to an analysis by a group of experts who have theoretical and
practical knowledge of the subject.
3. Construct Validity – refers to whether the test corresponds with its
theoretical construct. It is concerned with the extent to which a particular
measure related to other measure is and to which it is consistent with the
theoretically derived hypothesis,
4. Criterion- related validity or equivalent test – an expression of how
scores form the test are correlated with an external criterion.

Reliability refers to the consistency of the results of an instrument in repeated trials. A


reliable instrument also can be used to verify the credibility of the subject if the same results in
several tests. It is important to note that, while a valid instrument is always reliable, a reliable
instrument is not always necessarily valid.

Criteria for Assessing the Validity and Reliability


According to Cristobal 2017, there are other forms of criteria that can be used in
assessing the literature which includes:
1. Sensitivity
2. Specificity
3. Comprehensibility
4. Precision
5. Speed
6. Range
7. Linearity
8. Reactivity

7
Basic Quantitative Data Analysis Procedure
In quantitative data analysis, the researcher is expected to make the raw numbers into
a significant data through the application of rational and critical thinking. In this case, the
quantitative data analysis may contain the calculation of differences between variables and
frequencies of variables. Therefore, a quantitative approach is usually related with finding an
evidence to either support or reject the hypotheses you have formulated at the previous stages of
your research process.
It should be noted that visual presentations of data findings are insignificant unless a
sound decision is made regarding scales of measurement.
Before any data analysis can begin, the scale of measurement must be decided for the
data as this will have a long-term impact on data interpretation. The varying scales include:

1. Nominal Scale: non-numeric categories that cannot be ranked or


compared quantitatively. Variables are exclusive and exhaustive.
2. Ordinal Scale: exclusive categories that are exclusive and
exhaustive but with a logical order. Quality ratings and agreement
ratings are examples of ordinal scales (i.e., good, very good, fair, etc.,
or agree, strongly agree, disagree, etc.).
3. Interval: a measurement scale where data is grouped into
categories with orderly and equal distances between the categories.
There is always an arbitrary zero point.
4. Ratio: contains features of the three scale, nominal, ordinal and
interval.

In applying descriptive statistics, it’s important to think about which one is the most
appropriate for your research question and what you want to present. For instance, a percentage
is a good way to present the age distribution of respondents.

Likert Scale
According to McLeod (2019), there are different kinds of a rating scale have been
developed to measure attitudes directly (i.e. the person knows their attitude is being studied). It
is utmost widely used in research study is the Likert scale developed by Rensis Likert in 1932.
Typically, the Likert scale is a five (or seven) point scale which is used to allow the individual to
express how much they agree or disagree with a particular statement.

8
Example: I believe that ecological questions are the most important issues facing
human beings today.

(Source:https://www.simplypsychology.org/Likert-
agree.jpg?ezimgfmt=rs:575x136/rscb1/ng:webp/ngcb1)

Average Weighted Mean


A weighted mean is a kind of average. Instead of each data point contributing equally to
the final mean, some data points contribute more “weight” than others. If all the weights are equal,
then the weighted mean equals the arithmetic mean (the regular “average” you are used to).
Weighted means are very common in statistics, especially when studying populations.

Example: The following ranges of values, statistical limits, describe using the
average weighted mean and it is interpreted with the following:

Mean Scale Range Descriptive Rating


3 2.34- 3.00 Very serious (VS)
2 1.67- 2.33 Moderately Serious (MS)
1 1.00- 1.66 Very Serious (NS)

The Formula will be:

(𝑓1 ) + (𝑓2 ) + (𝑓3 )


𝐴𝑊𝑀 =
𝑁

Where:
f1= number of respondents who answered not serious
f2= number of respondents who answered moderately serious f3=
number of respondents who answered very serious
N= number of total of respondents

9
Percentage Frequency Distribution

A percentage frequency distribution is a display of data that specifies the percentage of


observations that exist for each data point or grouping of data points. It is particularly useful
method of expressing the relative frequency of survey responses and other data. Many times,
percentage frequency distributions are displayed as tables or as bar graphs or pie charts.

The Formula in order for you to get the percentage frequency distribution:
𝐹 × 100
𝑃=
𝑁

Where:
P= percentage F=
frequency
N= total number or respondents

T-test
This will be done using Statistical Program for Social Sciences (SPSS).
If you are trying to test the significant difference between the means of two
groups, T-test is used in this kind of data analysis and the researcher specify the level of
probability (alpha level, level of significance, p) are willing to accept before we collect data (p <
.05 is a common value that is used).
In the general rule of interpreting the t-test result, if the result is lower than the set level
of (0.05) significance therefore, the null hypothesis is accepted means that there is no significant
difference between the two groups and the alternative hypothesis is rejected. But if the result is
higher than the set level of significance (0.05), the interpretation will be there is a significant
difference between the two groups and therefore, the alternative hypothesis is accepted and the
null hypothesis is rejected.

10
I CAME TO KNOW

Data Collection is a vital characteristic of any type of research study. Inaccurate data
collection can affect the results or the findings of a research study and ultimately lead to invalid
or inappropriate findings. Data collection methods for impact evaluation vary along a range. Data
analysis and interpretation are crucial in developing sound conclusions and making better
informed decisions.

TIPS on how to Interpret data.


1. Collect your data and make it as readable as possible.
2. Choose the type of data analysis to perform; qualitative or
quantitative and apply the methods respectively.
3. Qualitative analysis: observe, document and interview notice, collect
and think about things.
4. Quantitative analysis: research with a lot of numerical data to be
analyzed through various statistical methods such as the descriptive –
mean, standard deviation or frequency and inferential statistics – Chi
square, Pearson Product moment correlation and the like.
5. Think: Ponder about your data from various point of views, and what it
means for various respondents.
6. Reflect: Be aware of the risk in analyzing and interpreting data.
Correlation with causation, subjective ideas and bias, wrong information
and inappropriate data.

The importance of data interpretation is evident, and therefore it needs to be done correctly.
Data analysis tends to be extremely subjective. While there are different types of processes that
are implemented based on individual data nature, the two broadest and most common categories
are “quantitative analysis” and “qualitative analysis”.

11
WHAT CAN BE DONE

A researcher conducted a study to one of the companies in Urdaneta City,


Pangasinan to determine the factors affecting career preferences among the
residence one barangays in Urdaneta City ages 22 to 60 years old. The following
data were collected.
Table 1
Distribution of Respondents by Age

Age Frequenc Percent


y
21 – 30 yrs. old 170 45.33
31 – 40 yrs .old 90 24.00
41 – 50 yrs. old 80 21.33
51 – 60 yrs. old 35 9.33
Total 375 100
Interpretation of Data (Table 1)

Table 1 reveals that almost 45.33 percent of the respondents are in the age
bracket of 21- 30 years old compared to only 9.33 percent in ages 51 – 60 years old
and above and 21.33 percent belonged to the 41- 50 age range.

This age profile is important as it also reflects the current age demographic
for the Filipinos according to Philippine Statistics Authority (PSA).

There is a much younger age cohort of teachers entering the workforce.

There is a much younger cohort who has the capacity to purchase product
and services.

12
STOP! Now it`s your turn to answer the following questions below.

Supposed a study is conducted in your barangay to determine the


awareness in the prevention of COVID 19 among the residences ages 21
to 60 years old. The following data were given.

Table 2
Distribution of Respondents by Age

Age Frequency Percent


21 – 30 yrs. old 38 42.22
31 – 40 yrs .old 21 23.33
41 – 50 yrs. old 18 20
51 – 60 yrs. old 13 14.44
Total 90 100

Kindly give your interpretation on the given data in table 3.

_
_
_
_ _
_
_
_

What data analysis method will you use to determine the


awareness in the prevention of COVID 19 among the residences ages 21
to 60 years old? Why?
_
_
_
_
_
_
_

13
ASSESSMENT

A. Directions: Read each statement carefully. Write the letters of the correct answer on a
separate sheet of paper.

1. Which of the following is a process of understanding data or known facts or assumptions


serving as the basis of any claims or conclusions in a research problem or investigation?
a. conclusion c. recommendation
b. data analysis d. summary of findings

2. Which is a systematic approach to investigations during which numerical data is collected


and/or the researcher transforms what is collected or observed into numerical data?
a. qualitative data analysis c. qualitative data statistics
b. quantitative data analysis d. quantitative data statistics

3. Which of the following showcases the data for easy understanding of the reader and they
can be displayed using tables diagrams or other figures for easy comprehension?
a. analysis c. interpretation
b. discussion d. presentation of the data

4. This is the part where the knowledge and logical understanding of the researcher is
required. The important data are given enough attention as it will be the basis of the results
of the study.
a. analysis c. interpretation
b. discussion d. presentation of the data

5. Which is of the following is a comprehensible statement that will be included after


analyzing and synthesizing the patterns and categories that are derived from the findings?
a. analysis c. interpretation
b. discussion d. presentation of the data
6. In this part of data analysis, the research will be compared and contrasted the
results of the investigation with reviewed literature and studies.
a. analysis c. interpretation
b. discussion d. presentation of the data

7. It is a system used to analyze data means to quantify or change the verbally


expressed data into numerical information.
a. analyzing the data c. content analysis
b. coding system d. thematic analysis

14
8. It is a part of data analysis that is carried out by coding the data for certain
words or content by going through all the text?
a. analyzing the data c. content analysis
b. coding system d. thematic analysis

9. It is a process of analyzing the data by grouping them according to themes.


a. analyzing the data c. content analysis
b. coding system d. thematic analysis

10. It is Analysis if one Variable.


a. bivariate analysis c. univariate analysis
b. multivariate analysis d. content analysis
11. It is analysis of two variables (independent and dependent).
a. bivariate analysis c. univariate analysis
b. multivariate analysis d. content analysis
12. It is analysis of multiple relations between multiple variables.
a. bivariate analysis c. univariate analysis
b. multivariate analysis d. content analysis
13. This refers to the quality of the instrument being functional only within its
specific purpose.

a. content validity c. reliability

b. face validity d. validity


14. Which refers to the consistency of the results of an instrument in repeated
trials. It can also be used to verify the credibility if the subject yield the same
results in several tests?

a. content validity c. reliability

b. face validity d. validity

15. This refers to the quality of the instrument of being functional only within its
specific purpose?

a. content validity c. reliability

b. face validity d. validity

16. Which of the following determined by studying the questions to see whether
they can elicit the necessary information?

a. criterion- related validity or equivalent test c. content validity

b. construct validity d. face validity

15
17. Which of the following refers to whether the test corresponds with its
theoretical construct?

a. criterion- related validity or equivalent test c. content validity

b. construct validity d. face validity

18. It involves an analysis whether the instrument is using a valid scale including
the fond size, spacing, the size of the paper used, and other necessary details that
will not distract respondents form answering the questionnaire?

a. criterion- related validity or equivalent test c. content validity

b. construct validity d. face validity

19. It is used to test the significant difference between the means of two groups?

a. average weighted mean c. Likert- scale

b. percentage frequency distribution d. T- test


20. Which of the following display of data that specify the percentage of
observation that exist for each data point or grouping of data points?

a. average weighted mean c. Likert- scale

b. percentage frequency distribution d. T- test

16
B. Suppose your conducting a study entitled “TEENAGE PREGNANCY AND ITS
INTERVENTIONS: MINIMIZING FUTURE RISKS AMONG HIGH SCHOOL
STUDENTS.” The following data were drawn:

Table 1. Experiences knowing that you are


pregnant.

“I’m afraid that my boyfriend won’t carry the responsibility, but I am more
R1
afraid of my parents not accepting me for they don’t like my boyfriend”

“I am so afraid and reached to the point of aborting the baby’s life inside my
R2 tummy, since I don’t have parents to turn to. With that, I planned to commit
suicide.”

Interpretation for Respondent 1:


All the respondents’ responses were about fear, worries, and
apprehensions. Table 1 showed the emotions that respondents felt
knowing that they were pregnant at an early age. The two respondents
directly blurted out the feeling of fear and the rest indirectly said. Fear on
how the parents reacted to the shame they brought up, fear of
hopelessness that the baby shuttered their future dreams, fear on how they
raise the child knowing that they are incapable of supporting themselves.
The fear felt push to worry, apprehended and thought of the worst deed
to abort the child.
Kindly write your interpretation, based on the data given in Table
1. Remember to write first the comparison and contrast the data given,
its implication to the study and connect it with your review of related
literature.

_
_
_
_ _
_
_
_ _
_

17
ANSWER SHEET (FORMAT)
Name: ___________________________ Date: _____________
Strand & Year Level: _________________

INQUIRIES, INVESTIGATION, and IMMERSION


Week 1-2 – Module 1

WHAT CAN BE DONE


(Write your answer)

ASSESSMENT
(Write your answer)

18

You might also like