Chapter 3, Numerical Descriptive Measures
• Data analysis is objective
– Should report the summary measures that best meet the assumptions about the data set
• Data interpretation is subjective
– Should be done in a fair, neutral, and clear manner
Summary Measures
Describing data numerically:
• Central tendency: arithmetic mean, median, mode, geometric mean, quartiles
• Variation: range, interquartile range, variance, standard deviation, coefficient of variation
• Shape: skewness

Arithmetic Mean
• The arithmetic mean (mean) is the most common measure of
central tendency
• Mean = sum of values divided by the number of values
• Affected by extreme values (outliers)

X̄ = (X1 + X2 + … + Xn) / n = (Σ Xi) / n

where n is the sample size and Xi are the observed values
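A minimal Python sketch of this calculation (the values are made up for illustration):

```python
# Sample mean: sum of the observed values divided by the sample size n
values = [4, 7, 1, 9, 5]        # illustrative data
n = len(values)
x_bar = sum(values) / n
print(x_bar)                    # 5.2
```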


Geometric Mean
• Geometric mean
– Used to measure the rate of change of a variable over time

XG = (X1 × X2 × … × Xn)^(1/n)

• Geometric mean rate of return
– Measures the status of an investment over time

RG = [(1 + R1) × (1 + R2) × … × (1 + Rn)]^(1/n) – 1

– where Ri is the rate of return in time period i
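As a sketch, the geometric mean rate of return can be computed like this (the per-period returns are illustrative only):

```python
# Geometric mean rate of return:
# RG = [(1 + R1)(1 + R2)...(1 + Rn)]^(1/n) - 1
returns = [0.10, -0.05, 0.08]        # illustrative per-period rates of return

product = 1.0
for r in returns:
    product *= (1 + r)

r_g = product ** (1 / len(returns)) - 1
print(round(r_g, 4))                 # average compound rate of return per period
```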


Median: Position and Value
• In an ordered array, the median is the “middle”
number (50% above, 50% below)
• The location (position) of the median:

Median position = (n + 1) / 2 position in the ordered data

• The value of the median is NOT affected by extreme values
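A small Python sketch of the median position rule (illustrative data; note how the outlier does not move the median):

```python
# Median: value in position (n + 1) / 2 of the ordered data
data = sorted([12, 3, 7, 9, 30])     # 30 is an outlier; data are illustrative
n = len(data)

pos = (n + 1) / 2                    # = 3 for n = 5
median = data[int(pos) - 1]          # 3rd ordered value (0-based index 2)
print(median)                        # 9 -- unaffected by the outlier 30
# For an even n, average the two middle ordered values instead.
```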
Mode
• A measure of central tendency
• Value that occurs most often
• Not affected by extreme values
• Used for either numerical or categorical data
• There may be no mode
• There may be several modes
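A short sketch that finds every mode (illustrative data; a bimodal set is used on purpose):

```python
from collections import Counter

# Mode: the value(s) occurring most often
data = [2, 5, 5, 7, 7, 9]            # illustrative data
counts = Counter(data)
top = max(counts.values())
modes = [value for value, c in counts.items() if c == top]
print(modes)                         # [5, 7] -- two modes (bimodal)
# If every value occurred only once, the data would have no meaningful mode.
```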
Quartiles
• Quartiles split the ranked data into 4 segments
with an equal number of values per segment
• Find a quartile by determining the value in the
appropriate position in the ranked data, where
First quartile position: Q1 = (n+1)/4

Second quartile position: Q2 = 2(n+1)/4 (the median position)

Third quartile position: Q3 = 3(n+1)/4

where n is the number of observed values
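A sketch of the quartile-position rule above; n = 11 is chosen so all three positions are whole numbers (with fractional positions, a rounding or interpolation rule is needed):

```python
# Quartile positions with the (n + 1)/4 rule, applied to ranked (sorted) data
data = sorted([11, 7, 3, 9, 15, 2, 8, 12, 5, 10, 6])   # illustrative, n = 11
n = len(data)

q1_pos = (n + 1) / 4              # 3.0
q2_pos = 2 * (n + 1) / 4          # 6.0 -- the median position
q3_pos = 3 * (n + 1) / 4          # 9.0

q1, q2, q3 = (data[int(p) - 1] for p in (q1_pos, q2_pos, q3_pos))
print(q1, q2, q3)                 # 5 8 11
```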


Measures of Variation
• Measures of variation give information on the spread or variability of the data values: range, interquartile range, variance, standard deviation, and coefficient of variation
(Figure: two distributions with the same center but different variation)
Range and Interquartile Range
• Range
– Simplest measure of variation
– Difference between the largest and the smallest observations:
Range = Xlargest – Xsmallest
– Ignores the way in which data are distributed
– Sensitive to outliers
• Interquartile Range
– Eliminate some high- and low-valued observations and calculate
the range from the remaining values
– Interquartile range = 3rd quartile – 1st quartile
= Q3 – Q1
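A sketch contrasting the two measures on the same ranked data (illustrative values with one deliberate outlier):

```python
# Range vs. interquartile range
data = sorted([2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 50])   # 50 is an outlier; n = 11
n = len(data)

value_range = data[-1] - data[0]                # sensitive to the outlier
q1 = data[int((n + 1) / 4) - 1]
q3 = data[int(3 * (n + 1) / 4) - 1]
iqr = q3 - q1                                   # ignores the extreme values
print(value_range, iqr)                         # 48 6
```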
Variance
• Average (approximately) of squared
deviations of values from the mean
– Sample variance:

S² = Σ(Xi – X̄)² / (n – 1)

where X̄ = arithmetic mean
n = sample size
Xi = ith value of the variable X
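A minimal sketch of the sample variance formula (illustrative data):

```python
# Sample variance: squared deviations from the mean, divided by n - 1
data = [4, 7, 1, 9, 5]               # illustrative data
n = len(data)
x_bar = sum(data) / n

s_squared = sum((x - x_bar) ** 2 for x in data) / (n - 1)
print(round(s_squared, 2))           # 9.2
```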
Standard Deviation
• Most commonly used measure of variation
• Shows variation about the mean
• Has the same units as the original data
• It is a measure of the “average” spread around the mean
• Sample standard deviation:

S = √[ Σ(Xi – X̄)² / (n – 1) ]
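Continuing the variance sketch, the standard deviation is its square root; the standard-library statistics.stdev (which also divides by n - 1) should give the same value:

```python
import statistics

data = [4, 7, 1, 9, 5]               # same illustrative data as above
n = len(data)
x_bar = sum(data) / n

s = (sum((x - x_bar) ** 2 for x in data) / (n - 1)) ** 0.5
print(round(s, 3))                       # about 3.033, in the units of the data
print(round(statistics.stdev(data), 3))  # same result from the standard library
```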
Coefficient of Variation
• Measures relative variation
• Always in percentage (%)
• Shows variation relative to mean
• Can be used to compare two or more sets of data measured in different units

CV = (S / X̄) × 100%
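A sketch comparing the relative variation of two illustrative data sets measured in different units:

```python
import statistics

# CV = (S / mean) * 100%; unit free, so different scales can be compared
prices_usd = [48, 52, 50, 55, 45]          # illustrative stock prices
weights_kg = [2.1, 2.4, 2.0, 2.3, 2.2]     # illustrative package weights

for data in (prices_usd, weights_kg):
    cv = statistics.stdev(data) / statistics.mean(data) * 100
    print(round(cv, 1), "%")
```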
Shape of a Distribution
• Describes how data are distributed
• Measures of shape
– Symmetric or skewed
– Left-skewed: Mean < Median
– Symmetric: Mean = Median
– Right-skewed: Median < Mean
Using the Five-Number Summary to Explore the Shape
• Box-and-Whisker Plot: a graphical display of data using the 5-number summary:
Minimum, Q1, Median, Q3, Maximum

• The box and central line are centered between the endpoints if the data are symmetric around the median
(Figure: box-and-whisker plot spanning Min, Q1, Median, Q3, Max)
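A sketch that builds the five-number summary with the same (n + 1)/4 positions used earlier (illustrative data, n = 11):

```python
# Five-number summary: Minimum, Q1, Median, Q3, Maximum
data = sorted([2, 3, 5, 6, 7, 8, 9, 10, 11, 12, 15])   # illustrative, n = 11
n = len(data)

minimum, maximum = data[0], data[-1]
q1 = data[int((n + 1) / 4) - 1]
median = data[int((n + 1) / 2) - 1]
q3 = data[int(3 * (n + 1) / 4) - 1]
print(minimum, q1, median, q3, maximum)    # 2 5 8 11 15
```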


Distribution Shape and Box-and-Whisker Plot
(Figure: three box-and-whisker plots marking Q1, Q2, Q3 for left-skewed, symmetric, and right-skewed distributions)
Relationship between Std. Dev. and Shape: The Empirical Rule
• If the data distribution is bell-shaped, then the interval:
– μ ± 1σ contains about 68% of the values in the population or the sample
– μ ± 2σ contains about 95% of the values in the population or the sample
– μ ± 3σ contains about 99.7% of the values in the population or the sample
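A quick simulation sketch of the empirical rule on bell-shaped data (the mean of 100 and standard deviation of 15 are arbitrary choices):

```python
import random
import statistics

# Check what share of bell-shaped data falls within 1, 2, and 3 std. dev. of the mean
random.seed(1)
data = [random.gauss(100, 15) for _ in range(10_000)]   # simulated bell-shaped data
mu = statistics.mean(data)
sigma = statistics.pstdev(data)

for k in (1, 2, 3):
    share = sum(mu - k * sigma <= x <= mu + k * sigma for x in data) / len(data)
    print(f"within {k} std. dev.: {share:.1%}")   # roughly 68%, 95%, 99.7%
```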
Population Mean and Variance

Population mean:

μ = (X1 + X2 + … + XN) / N = (Σ Xi) / N

Population variance:

σ² = Σ(Xi – μ)² / N

where N is the population size and Xi is the ith value of the variable X
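A sketch of the population formulas; note that the divisor is N, not N – 1 (the data are illustrative and treated as a complete population):

```python
import statistics

population = [4, 7, 1, 9, 5, 3, 8]           # illustrative; treated as the whole population
N = len(population)

mu = sum(population) / N
sigma_squared = sum((x - mu) ** 2 for x in population) / N

print(round(mu, 3), round(sigma_squared, 3))
print(round(statistics.pvariance(population), 3))   # library population variance, same result
```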
Covariance and Coefficient of Correlation
• The sample covariance measures the strength of the
linear relationship between two variables (called
bivariate data)
• The sample covariance:
cov(X, Y) = Σ(Xi – X̄)(Yi – Ȳ) / (n – 1)
• Only concerned with the strength of the relationship
• No causal effect is implied
• Covariance between two variables:
– cov(X,Y) > 0: X and Y tend to move in the same direction
– cov(X,Y) < 0: X and Y tend to move in opposite directions
– cov(X,Y) = 0: X and Y have no linear relationship (they are uncorrelated, though not necessarily independent)
• Covariance does not say anything about the relative strength of the relationship.
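A sketch of the sample covariance on a small illustrative bivariate data set:

```python
# Sample covariance: sum of (Xi - x_bar)(Yi - y_bar), divided by n - 1
x = [1, 2, 3, 4, 5]                  # illustrative paired data
y = [2, 4, 5, 4, 6]
n = len(x)

x_bar = sum(x) / n
y_bar = sum(y) / n
cov_xy = sum((a - x_bar) * (b - y_bar) for a, b in zip(x, y)) / (n - 1)
print(round(cov_xy, 3))              # 2.0 -- positive: X and Y tend to move together
```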

• Coefficient of Correlation measures the relative strength of the linear relationship between two variables
r = Σ(Xi – X̄)(Yi – Ȳ) / √[ Σ(Xi – X̄)² × Σ(Yi – Ȳ)² ] = cov(X, Y) / (SX SY)
• Coefficient of Correlation:
– Is unit free
– Ranges between –1 (perfect negative) and +1 (perfect positive)
– The closer to –1, the stronger the negative linear relationship
– The closer to +1, the stronger the positive linear relationship
– The closer to 0, the weaker the linear relationship
– At 0 there is no linear relationship at all
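A sketch that turns the covariance above into the coefficient of correlation (same illustrative data):

```python
# r = cov(X, Y) / (Sx * Sy); unit free, always between -1 and +1
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 6]
n = len(x)

x_bar, y_bar = sum(x) / n, sum(y) / n
cov_xy = sum((a - x_bar) * (b - y_bar) for a, b in zip(x, y)) / (n - 1)
s_x = (sum((a - x_bar) ** 2 for a in x) / (n - 1)) ** 0.5
s_y = (sum((b - y_bar) ** 2 for b in y) / (n - 1)) ** 0.5

r = cov_xy / (s_x * s_y)
print(round(r, 3))                   # about 0.853 -- a fairly strong positive linear relationship
```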
Correlation vs. Regression
• A scatter plot (or scatter diagram) can be used
to show the relationship between two
variables
• Correlation analysis is used to measure
strength of the association (linear
relationship) between two variables
– Correlation is only concerned with strength of the
relationship
– No causal effect is implied with correlation
