
KMB104 Business Statistics and Analysis

UNIT 1 Descriptive Statistics

Statistics: Definition, Importance, Limitations

Statistics is a form of mathematical analysis that uses quantified models, representations and synopses for a given set of experimental data or real-life studies. Statistics studies methodologies to gather, review, analyze and draw conclusions from data. Some statistical measures include the mean, regression analysis, skewness, kurtosis, variance and analysis of variance.

Statistics is also a term used to summarize the process that an analyst uses to characterize a data set. If the data set depends on a sample of a larger population, then the analyst can develop interpretations about the population based primarily on the statistical outcomes from the sample. Statistical analysis involves gathering and evaluating data and then summarizing the data in a mathematical form.

Statistical methods analyze large volumes of data and their properties. Statistics is used in various disciplines such as psychology, business, the physical and social sciences, the humanities, government and manufacturing. Statistical data is gathered using a sampling procedure or some other method. Two types of statistical methods are used in analyzing data: descriptive statistics and inferential statistics. Descriptive statistics summarize data from a sample using measures such as the mean or standard deviation; inferential statistics are used when the data are viewed as a sample from a larger population.

Importance and Scope of Statistics

(i) Statistics in Planning

Statistics is indispensable in planning, be it at the business, economic or government level. The modern age is termed the 'age of planning', and almost all organizations in government, business or management resort to planning for efficient working and for formulating policy decisions. To achieve this end, statistical data relating to production, consumption, births, deaths, investment and income are of paramount importance. Today efficient planning is a must for almost all countries, particularly the developing economies, for their economic development.

(ii) Statistics in Mathematics

Statistics is intimately related to and essentially dependent upon mathematics. The modern theory of Statistics has its foundations in the theory of probability, which in turn is a particular branch of the more advanced mathematical theory of Measure and Integration. The ever-increasing role of mathematics in statistics has led to the development of a new branch of statistics called Mathematical Statistics. Thus Statistics may be considered an important member of the mathematics family. In the words of Connor, "Statistics is a branch of applied mathematics which specializes in data."

(iii) Statistics in Economics

Statistics and Economics are so intermixed with each other that it looks foolish to separate them. The development of modern statistical methods has led to an extensive use of statistics in Economics. All the important branches of Economics—consumption, production, exchange, distribution, public finance—use statistics for the purposes of comparison, presentation and interpretation. Problems such as the spending of income by different sections of the people, the production of national wealth, the adjustment of demand and supply, and the effect of economic policies on the economy simply indicate the importance of statistics in the field of economics and in its different branches.
Statistics of Public Finance enables us to decide how to impose taxes, provide subsidies, spend on various heads, and how much money to borrow or lend. So we cannot think of Statistics without Economics or Economics without Statistics.

(iv) Statistics in Social Sciences

Every social phenomenon is affected to a marked extent by a multiplicity of factors which bring out the variation in observations from time to time, place to place and object to object. The statistical tools of Regression and Correlation Analysis can be used to study and isolate the effect of each of these factors on the given observation.

Sampling Techniques and Estimation Theory are very powerful and indispensable tools for conducting any social survey pertaining to any stratum of society and then analyzing the results and drawing valid inferences. The most important application of statistics in sociology is in the field of Demography, for studying mortality (death rates), fertility (birth rates), marriages, population growth and so on.

(v) Statistics in Trade

As already mentioned, statistics is a body of methods for making wise decisions in the face of uncertainty. Business is full of uncertainties and risks. We have to forecast at every step. Speculation is just gaining or losing by way of forecasting. Can we forecast without taking the past into view? Perhaps not. The future trend of the market can only be anticipated if we make use of statistics. Failure in anticipation will mean failure of the business.

Changes in demand, supply, habits, fashion etc. can be anticipated with the help of statistics. Statistics is of utmost significance in determining the prices of various products and identifying the phases of boom and depression. The use of statistics helps in the smooth running of the business and in reducing uncertainty, and thus contributes towards the success of the business.

(vi) Statistics in Research Work

The job of a research worker is to present the results of his research before the community. The effect of a variable on a particular problem, under differing conditions, can be known by the research worker only if he makes use of statistical methods. Statistics are everywhere basic to research activities. To keep alive his research interests and activities, the researcher is required to lean upon his knowledge of and skills in statistical methods.

Limitations of Statistics

Statistics is a mathematical science pertaining to the collection, analysis, interpretation or explanation, and presentation of data. Statisticians improve the quality of data with the design of experiments and survey sampling. Nevertheless, statistics has the following limitations:

(i) Statistics does not deal with isolated measurements.

(ii) Statistics deals only with quantitative characteristics.

(iii) Statistical laws are true only on average. Statistics are aggregates of facts, so a single observation is not statistics; statistics deals with groups and aggregates only.

(iv) Statistical methods are best applicable to quantitative data.

(v) Statistics cannot be applied to heterogeneous data.

(vi) If sufficient care is not exercised in collecting, analyzing and interpreting the data, statistical results may be misleading.

(vii) Only a person who has expert knowledge of statistics can handle statistical data efficiently.

(viii) Some errors are possible in statistical decisions. In particular, inferential statistics involves certain errors, and we do not know whether an error has been committed or not.
Measures of Central Tendency: Mean, Median, and Mode

A measure of central tendency is a summary statistic that represents the center point or typical value of a dataset. These measures indicate where most values in a distribution fall and are also referred to as the central location of a distribution. You can think of it as the tendency of data to cluster around a middle value. In statistics, the three most common measures of central tendency are the mean, median, and mode. Each of these measures calculates the location of the central point using a different method.

The mean, median and mode are all valid measures of central tendency, but under different conditions, some measures of central tendency become more appropriate to use than others. In the following sections, we will look at the mean, mode and median, and learn how to calculate them and under what conditions they are most appropriate to be used.

MEAN (ARITHMETIC)

The mean (or average) is the most popular and well known measure of central tendency. It can be used with both discrete and continuous data, although its use is most often with continuous data (see our Types of Variable guide for data types). The mean is equal to the sum of all the values in the data set divided by the number of values in the data set. So, if we have n values in a data set with values x1, x2, ..., xn, the sample mean, usually denoted by x̄ (pronounced "x bar"), is:

x̄ = (x1 + x2 + ... + xn) / n

This formula is usually written in a slightly different manner using the Greek capital letter Σ, pronounced "sigma", which means "sum of...":

x̄ = (Σx) / n

You may have noticed that the above formula refers to the sample mean. So, why have we called it a sample mean? This is because, in statistics, samples and populations have very different meanings and these differences are very important, even if, in the case of the mean, they are calculated in the same way. To acknowledge that we are calculating the population mean and not the sample mean, we use the Greek lower case letter "mu", denoted as µ:

µ = (Σx) / N, where N is the number of values in the population.

The mean is essentially a model of your data set. It is the value that is most common. You will notice, however, that the mean is not often one of the actual values that you have observed in your data set. However, one of its important properties is that it minimizes error in the prediction of any one value in your data set. That is, it is the value that produces the lowest amount of error from all other values in the data set.

An important property of the mean is that it includes every value in your data set as part of the calculation. In addition, the mean is the only measure of central tendency where the sum of the deviations of each value from the mean is always zero.

MEDIAN

The median is the middle score for a set of data that has been arranged in order of magnitude. The median is less affected by outliers and skewed data. In order to calculate the median, suppose we have the data below:

65 55 89 56 35 14 56 55 87 45 92

We first need to rearrange that data into order of magnitude (smallest first):

14 35 45 55 55 56 56 65 87 89 92

Our median mark is the middle mark, in this case 56. It is the middle mark because there are 5 scores before it and 5 scores after it. This works fine when you have an odd number of scores, but what happens when you have an even number of scores? What if you had only 10 scores? Well, you simply have to take the middle two scores and average the result. So, if we look at the example below:

65 55 89 56 35 14 56 55 87 45

We again rearrange that data into order of magnitude (smallest first):

14 35 45 55 55 56 56 65 87 89

Now we have to take the 5th and 6th scores in our data set and average them to get a median of 55.5.

MODE

The mode is the most frequent score in our data set. On a histogram or bar chart it is represented by the highest bar. You can, therefore, sometimes consider the mode as being the most popular option.
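As a quick check of the three measures, here is a minimal sketch in Python (standard library only; Python 3.8+ assumed for multimode), using the scores from the median example above:

from statistics import mean, median, multimode

scores = [65, 55, 89, 56, 35, 14, 56, 55, 87, 45, 92]

print(mean(scores))       # 59 -> sum of the 11 values (649) divided by 11
print(median(scores))     # 56 -> the middle value of the sorted scores
print(multimode(scores))  # [55, 56] -> this particular dataset is bimodal

# With an even number of scores the median averages the two middle values:
print(median([65, 55, 89, 56, 35, 14, 56, 55, 87, 45]))  # 55.5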

Partition Values: Quartiles, Deciles, Percentiles

Partition values or fractiles, such as quartiles, deciles, etc., are different sides of the same story. In other words, these are values that divide the same set of observations in different ways. So, we can fragment these observations into several equal parts.

QUARTILE

Whenever we have a set of observations and we wish to divide it, there is a chance to do it in different ways. We use the median when a given set of observations is divided into two equal parts. Likewise, quartiles are values that divide a complete given set of observations into four equal parts.

Basically, there are three quartiles: the first quartile, the second quartile, and the third quartile. The other name for the first quartile is the lower quartile, and it is represented by 'Q1'. The other name for the second quartile is the median, represented by 'Q2'. The other name for the third quartile is the upper quartile, represented by 'Q3'.

The first quartile cuts off the lowest one-fourth of the observations: one-fourth of the values are less than or equal to 'Q1'. Similar statements hold for the values of 'Q2' and 'Q3'.
DECILES

Deciles are values that divide any given set of observations into a total of ten equal parts. Therefore, there are a total of nine deciles, represented as D1, D2, D3, D4, ..., D9. D1 is the typical value for which one-tenth (1/10) of the given observations are less than or equal to D1, while the remaining nine-tenths (9/10) are greater than or equal to the value of D1.

PERCENTILES

Last but not least come the percentiles, also called centiles. A centile or percentile divides a given set of observations into a total of 100 equal parts, and these are represented as P1, P2, P3, P4, ..., P99. P1 is the typical value for which one-hundredth (1/100) of the given observations are less than or equal to P1, while the remaining ninety-nine hundredths (99/100) are greater than or equal to the value of P1. This applies once all the given observations are arranged in a specific manner, i.e., in ascending order.

If the data are not grouped into a frequency distribution, the p-th partition value is the (n + 1)p-th observation. Here,

n = total number of observations;
p = 1/4, 2/4, 3/4 for Q1, Q2 and Q3 respectively;
p = 1/10, 2/10, ..., 9/10 for D1, D2, ..., D9 respectively;
p = 1/100, 2/100, ..., 99/100 for P1, P2, ..., P99 respectively.

Formula

At times the data are grouped into a frequency distribution, in which case we use the following formula during the computation:

Q = l1 + [(Np − Ni) / (Nu − Ni)] × C

Here,

l1 = lower class boundary of the class that contains the partition value;
Ni = 'less than' cumulative frequency corresponding to l1 (the class preceding the one containing the value);
Nu = 'less than' cumulative frequency corresponding to l2, the upper boundary of that class;
C = length of the class;
N = total frequency, so that Np means N × p.

The symbol 'p' has its usual value, which depends on the partition value wanted. There are different ways to find the partition values of a grouped frequency distribution; one convenient way is to draw an ogive for the given frequency distribution. All that we then need to do to find a specific quartile is to locate Np on the vertical axis and draw a horizontal line through it. From the point where this horizontal line intersects the ogive, we drop a perpendicular, and the value of the quartile is read off as the value of 'x' at the foot of that perpendicular.
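All three kinds of partition values are just percentiles at different cut points. A minimal sketch (assuming NumPy is available), plus a hypothetical helper coding the grouped-data formula from the text:

import numpy as np

data = np.array([14, 35, 45, 55, 55, 56, 56, 65, 87, 89, 92])

q1, q2, q3 = np.percentile(data, [25, 50, 75])  # quartiles; Q2 equals the median
d1 = np.percentile(data, 10)                    # first decile D1
p99 = np.percentile(data, 99)                   # 99th percentile P99
print(q1, q2, q3, d1, p99)

# Grouped-data interpolation, Q = l1 + ((N*p - Ni) / (Nu - Ni)) * C
# (argument names follow the text; this helper is illustrative only):
def grouped_partition_value(l1, N, p, Ni, Nu, C):
    return l1 + (N * p - Ni) / (Nu - Ni) * C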
Measures of Variation: Range, IQR

A measure of variation is a summary statistic that represents the amount of dispersion in a dataset. While a measure of central tendency describes the typical value, measures of variability define how far away the data points tend to fall from the center. We talk about variability in the context of a distribution of values. Low dispersion indicates that the data points tend to be clustered tightly around the center; high dispersion signifies that they tend to fall further away.

In statistics, variability, dispersion, and spread are synonyms that denote the width of the distribution. Just as there are multiple measures of central tendency, there are several measures of variability.

RANGE

Let's start with the range because it is the most straightforward measure of variability to calculate and the simplest to understand. The range of a dataset is the difference between the largest and smallest values in that dataset. For example, if dataset 1 runs from 20 to 38 and dataset 2 runs from 11 to 52, dataset 1 has a range of 38 − 20 = 18 while dataset 2 has a range of 52 − 11 = 41. Dataset 2 has a wider range and, hence, more variability than dataset 1.

While the range is easy to understand, it is based on only the two most extreme values in the dataset, which makes it very susceptible to outliers. If one of those numbers is unusually high or low, it affects the entire range even if it is atypical.

Additionally, the size of the dataset affects the range. In general, you are less likely to observe extreme values; however, as you increase the sample size, you have more opportunities to obtain them. Consequently, when you draw random samples from the same population, the range tends to increase as the sample size increases. Therefore, use the range to compare variability only when the sample sizes are similar.

THE INTERQUARTILE RANGE (IQR)

The interquartile range is the middle half of the data. To visualize it, think about the median value that splits the dataset in half. Similarly, you can divide the data into quarters. Statisticians refer to these quarters as quartiles and denote them from low to high as Q1, Q2, Q3, and Q4. The lowest quartile (Q1) contains the quarter of the dataset with the smallest values, and the upper quartile (Q4) contains the quarter of the dataset with the highest values. The interquartile range is the middle half of the data, lying between the upper and lower quartiles; in other words, it includes the 50% of data points that fall in Q2 and Q3. (Figure: the IQR shown as the shaded central region of a distribution.)

The interquartile range is a robust measure of variability, in a similar manner that the median is a robust measure of central tendency. Neither measure is influenced dramatically by outliers because they don't depend on every value. Additionally, the interquartile range is excellent for skewed distributions, just like the median. As you'll learn, when you have a normal distribution, the standard deviation tells you the percentage of observations that fall specific distances from the mean. However, this doesn't work for skewed distributions, and the IQR is a great alternative. For example, for a dataset divided into quartiles in which the middle half of the values lies between 21 and 39, the interquartile range is 39 − 21 = 18.
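A minimal sketch (NumPy assumed) contrasting the two measures on the eleven scores used earlier:

import numpy as np

data = np.array([14, 35, 45, 55, 55, 56, 56, 65, 87, 89, 92])

data_range = data.max() - data.min()   # 92 - 14 = 78; uses only the two extremes
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1                          # width of the middle 50% of the data
print(data_range, iqr)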
Mean Deviation and Standard Deviation

Mean Deviation

To understand the dispersion of data from a measure of central tendency, we can use the mean deviation. It comes as an improvement over the range. It measures the deviations from a value, generally the mean or the median. Hence, although the mean deviation about the mode can be calculated, the mean deviations about the mean and the median are the ones frequently used.

Note that the deviation of an observation x from a value a is d = x − a. To find the mean deviation, we need to take the mean of these deviations. However, when the value of a is taken as the mean, the deviations are both negative and positive, since the mean is the central value. This further means that when we sum up these deviations to find their average, the sum essentially vanishes. To resolve this problem we use absolute values, i.e., the magnitudes of the deviations. The basic formula for the mean deviation is:

Mean deviation = (sum of the absolute values of the deviations from 'a') ÷ (number of observations) = Σ|x − a| / n

Standard Deviation

As the name suggests, this quantity is a standard measure of the deviation of the entire data in any distribution, usually represented by s or σ. It uses the arithmetic mean of the distribution as the reference point and normalizes the deviation of all the data values from this mean. Therefore, we define the standard deviation of the distribution of a variable X with n data points as:

σ = √[ Σ(Xi − X̄)² / n ]
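A minimal sketch (NumPy assumed) of both quantities for a small dataset:

import numpy as np

x = np.array([14, 35, 45, 55, 55, 56, 56, 65, 87, 89, 92])

mean_dev = np.mean(np.abs(x - x.mean()))       # mean deviation about the mean
sigma = np.sqrt(np.mean((x - x.mean()) ** 2))  # population SD, same as np.std(x)
print(mean_dev, sigma, np.std(x))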

Variance, Coefficient of Variation

Variance

Another statistical term that is related to the distribution is the variance, which is the standard deviation squared (variance = SD²). Because the SD is calculated as a square root, which can be taken as either positive or negative, the SD may carry either sign; squaring the SD eliminates the problem of signs. One common application of the variance is its use in the F-test, which compares the variances of two methods to determine whether there is a statistically significant difference in imprecision between the methods.

In many applications, however, the SD is often preferred because it is expressed in the same concentration units as the data. Using the SD, it is possible to predict the range of control values that should be observed if the method remains stable. As discussed in an earlier lesson, laboratorians often use the SD to impose "gates" on the expected normal distribution of control values.

Coefficient of Variation

Another way to describe the variation of a test is to calculate the coefficient of variation, or CV. The CV expresses the variation as a percentage of the mean, and is calculated as follows:

CV% = (SD / Xbar) × 100

In the laboratory, the CV is preferred when the SD increases in proportion to concentration. For example, the data from a replication experiment may show an SD of 4 units at a concentration of 100 units and an SD of 8 units at a concentration of 200 units. The CVs are 4.0% at both levels, and the CV is more useful than the SD for describing method performance at concentrations in between. However, not all tests will demonstrate imprecision that is constant in terms of CV; for some tests, the SD may be constant over the analytical range.

The CV also provides a general "feeling" about the performance of a method. CVs of 5% or less generally give us a feeling of good method performance, whereas CVs of 10% and higher sound bad. However, you should look carefully at the mean value before judging a CV. At very low concentrations the CV may be high, and at high concentrations the CV may be low. For example, a bilirubin test with an SD of 0.1 mg/dL at a mean value of 0.5 mg/dL has a CV of 20%, whereas an SD of 1.0 mg/dL at a concentration of 20 mg/dL corresponds to a CV of 5.0%.
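A minimal sketch (NumPy assumed) of the variance and CV% for a hypothetical set of replicate results:

import numpy as np

x = np.array([96.0, 98.0, 100.0, 102.0, 104.0])  # hypothetical replicates, mean 100

sd = np.std(x, ddof=1)            # sample SD
variance = sd ** 2                # variance = SD squared
cv_percent = sd / x.mean() * 100  # CV% = (SD / mean) * 100
print(sd, variance, cv_percent)   # about 3.16, 10.0 and 3.16% here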

Skewness and Types

Skewness, in statistics, is the degree of distortion from the symmetrical bell curve, or normal distribution, in a set of data. Skewness can be negative, positive, zero or undefined. A normal distribution has a skew of zero, while a lognormal distribution, for example, would exhibit some degree of right skew.

Probability distributions can exhibit varying levels of right (or positive) skewness; distributions can also be left (negative) skewed. Skewness is used along with kurtosis to better judge the likelihood of events falling in the tails of a probability distribution. (Figure: three probability distributions showing increasing right skewness.)

Key Takeaways

• Skewness, in statistics, is the degree of distortion from the symmetrical bell curve in a probability distribution.
• Distributions can exhibit right (positive) skewness or left (negative) skewness to varying degrees.
• Investors note skewness when judging a return distribution because it, like kurtosis, considers the extremes of the data set rather than focusing solely on the average.

Broadly speaking, there are two types of skewness:

(1) Positive skewness, and

(2) Negative skewness.

Positive skewness

A series is said to have positive skewness when the following characteristics are noticed:

• Mean > Median > Mode.
• The right tail of the curve is longer than its left tail when the data are plotted through a histogram or a frequency polygon.
• The formula for skewness and its coefficient give positive figures.

Negative skewness

A series is said to have negative skewness when the following characteristics are noticed:

• Mode > Median > Mean.
• The left tail of the curve is longer than the right tail when the data are plotted through a histogram or a frequency polygon.
• The formula for skewness and its coefficient give negative figures.

Thus, a statistical distribution may be of three types, viz.:

• Symmetric
• Positively skewed
• Negatively skewed
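A minimal sketch (SciPy assumed): the sign of the sample skewness matches the tail direction described above.

from scipy.stats import skew

returns = [2, 3, 3, 4, 4, 4, 5, 5, 6, 15]  # hypothetical data with a long right tail
print(skew(returns))                        # positive, since mean > median here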
Kurtosis

Kurtosis is a statistical measure that defines how heavily the tails of a distribution differ from the tails of a normal distribution. In other words, kurtosis identifies whether the tails of a given distribution contain extreme values.

Along with skewness, kurtosis is an important descriptive statistic of a data distribution. However, the two concepts must not be confused with each other. Skewness essentially measures the symmetry of the distribution, while kurtosis determines the heaviness of the distribution's tails.

In finance, kurtosis is used as a measure of financial risk. A large kurtosis is associated with a high level of risk for an investment because it indicates high probabilities of extremely large and extremely small returns. On the other hand, a small kurtosis signals a moderate level of risk because the probabilities of extreme returns are relatively low.

Excess Kurtosis

Excess kurtosis is a metric that compares the kurtosis of a distribution against the kurtosis of a normal distribution. The kurtosis of a normal distribution equals 3. Therefore, the excess kurtosis is found using the formula below:

Excess Kurtosis = Kurtosis − 3

Types of Kurtosis

The types of kurtosis are determined by the excess kurtosis of a particular distribution. The excess kurtosis can take positive or negative values, as well as values close to zero.

1. Mesokurtic

Data that follow a mesokurtic distribution show an excess kurtosis of zero or close to zero. This means that if the data follow a normal distribution, they follow a mesokurtic distribution.

2. Leptokurtic

Leptokurtic indicates a distribution with positive excess kurtosis. A leptokurtic distribution shows heavy tails on either side, indicating large outliers. In finance, a leptokurtic distribution shows that the investment returns may be prone to extreme values on either side. Therefore, an investment whose returns follow a leptokurtic distribution is considered to be risky.

3. Platykurtic

A platykurtic distribution shows negative excess kurtosis, revealing a distribution with flat tails. The flat tails indicate the small outliers in a distribution. In the finance context, a platykurtic distribution of investment returns is desirable for investors because there is a small probability that the investment would experience extreme returns.
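A minimal sketch (SciPy and NumPy assumed). Note that scipy.stats.kurtosis returns excess kurtosis by default (fisher=True), so a normal sample comes out near 0:

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
print(kurtosis(rng.normal(size=100_000)))            # mesokurtic: close to 0
print(kurtosis(rng.standard_t(df=5, size=100_000)))  # leptokurtic: clearly > 0
print(kurtosis(rng.uniform(size=100_000)))           # platykurtic: about -1.2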
UNIT 2 Time Series and Index Number

Time Series Analysis: Concept, Additive and Multiplicative Models

A time series is a series of data points indexed (or listed or graphed) in time order. Most commonly, a time series is a sequence taken at successive equally spaced points in time; thus it is a sequence of discrete-time data. Examples of time series are heights of ocean tides, counts of sunspots, and the daily closing value of the Dow Jones Industrial Average.

Time series are very frequently plotted via line charts. Time series are used in statistics, signal processing, pattern recognition, econometrics, mathematical finance, weather forecasting, earthquake prediction, electroencephalography, control engineering, astronomy, communications engineering, and largely in any domain of applied science and engineering which involves temporal measurements.

Time series analysis comprises methods for analyzing time series data in order to extract meaningful statistics and other characteristics of the data. Time series forecasting is the use of a model to predict future values based on previously observed values. While regression analysis is often employed in such a way as to test theories that the current values of one or more independent time series affect the current value of another time series, this type of analysis of time series is not called "time series analysis", which focuses on comparing values of a single time series or multiple dependent time series at different points in time. Interrupted time series analysis is the analysis of interventions on a single time series.

Time series data have a natural temporal ordering. This makes time series analysis distinct from cross-sectional studies, in which there is no natural ordering of the observations (e.g. explaining people's wages by reference to their respective education levels, where the individuals' data could be entered in any order). Time series analysis is also distinct from spatial data analysis, where the observations typically relate to geographical locations (e.g. accounting for house prices by the location as well as the intrinsic characteristics of the houses). A stochastic model for a time series will generally reflect the fact that observations close together in time will be more closely related than observations further apart. In addition, time series models will often make use of the natural one-way ordering of time, so that values for a given period will be expressed as deriving in some way from past values rather than from future values.

Additive model:

1. Data is represented in terms of the addition of seasonality, trend, cyclical and residual components
2. Used where change is measured in absolute quantity
3. Data is modeled as-is

The additive model is used when the variance of the time series doesn't change over different values of the time series.
On the other hand, if the variance is higher when the time series is higher, then it often means we should use a multiplicative model. Under the additive model,

return_i = price_i − price_(i−1) = (trend_i − trend_(i−1)) + (seasonal_i − seasonal_(i−1)) + (error_i − error_(i−1))

If the error increments have normal i.i.d. distributions, then return_i also has a normal distribution with constant variance over time.

Multiplicative model:

1. Data is represented in terms of the multiplication of seasonality, trend, cyclical and residual components
2. Used where change is measured in percent (%) change
3. Data is modeled just as in the additive case, but after taking the logarithm (with natural base or base 10)

If the log of the time series is an additive model, then the original time series is a multiplicative model, because:

log(price_i) = log(trend_i · seasonal_i · error_i) = log(trend_i) + log(seasonal_i) + log(error_i)

So the return of the logarithms is:

log(price_i) − log(price_(i−1)) = log(price_i / price_(i−1))
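A minimal sketch of both decompositions, assuming statsmodels and pandas are available; the monthly series here is a placeholder:

import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2015-01-01", periods=48, freq="MS")
sales = pd.Series(range(100, 148), index=idx)           # placeholder data

add = seasonal_decompose(sales, model="additive")       # y = T + S + e
mul = seasonal_decompose(sales, model="multiplicative") # y = T * S * e
print(add.trend.dropna().head())
print(mul.seasonal.head())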
Components of Time Series Viz. Secular Trend, Cyclical, Seasonal and Irregular Variations

When quantitative data are arranged in the order of their occurrence, the resulting statistical series is called a time series. The quantitative values are usually recorded over equal time intervals: daily, weekly, monthly, quarterly, half-yearly, yearly, or any other time measure. Monthly statistics of Industrial Production in India, annual birth-rate figures for the entire world, yields on ordinary shares, weekly wholesale prices of rice, daily records of tea sales and census data are some examples of time series. Each has the common characteristic of recording magnitudes that vary with the passage of time.

Time series are influenced by a variety of forces. Some are continuously effective, others make themselves felt at recurring time intervals, and still others are non-recurring or random in nature. Therefore, the first task is to break down the data and study each of these influences in isolation. This is known as decomposition of the time series. It enables us to understand fully the nature of the forces at work; we can then analyse their combined interactions. Such a study is known as time-series analysis.

Components of Time Series

A time series consists of the following four components or elements:

1. Basic or Secular or Long-time trend;
2. Seasonal variations;
3. Business cycles or cyclical movement; and
4. Erratic or Irregular fluctuations.

These components provide a basis for the explanation of past behaviour and help us to predict future behaviour. The major tendency of each component or constituent is largely due to causal factors. Therefore a brief description of the components and the causal factors associated with each should be given before proceeding further.

1. Basic or secular or long-time trend: The basic trend underlines the tendency to grow or decline over a period of years. It is the movement that the series would have taken had there been no seasonal, cyclical or erratic factors. It is the effect of such factors as are more or less constant for a long time or which change very gradually and slowly. Such factors are gradual growth in population, tastes and habits, or the effect on industrial output of improved methods. An increase in the production of automobiles and a gradual decrease in the production of food grains are examples of increasing and decreasing secular trends.

All basic trends are not of the same nature. Sometimes the predominating tendency will be a constant amount of growth; this type of trend movement takes the form of a straight line when the trend values are plotted on graph paper. Sometimes the trend will be a constant percentage increase or decrease; this type takes the form of a straight line when the trend values are plotted on a semi-logarithmic chart. Other types of trend encountered are "logistic", "S-curves", etc.

Properly recognising and accurately measuring basic trends is one of the most important problems in time series analysis. Trend values are used as the base from which the other three movements are measured. Therefore, any inaccuracy in their measurement may vitiate the entire work. Fortunately, the causal elements controlling trend growth are relatively stable; trends do not commonly change their nature quickly and without warning. It is therefore reasonable to assume that a representative trend which has characterized the data for a past period is prevailing at present, and that it may be projected into the future for a year or so.

2. Seasonal Variations: The two principal factors responsible for seasonal changes are climate or weather and customs. Since the growth of all vegetation depends upon temperature and moisture, agricultural activity is confined largely to warm weather in the temperate zones and to the rainy or post-rainy season in the torrid zone (tropical or sub-tropical countries like India). Winter and the dry season make farming a highly seasonal business. This high irregularity of month-to-month agricultural production largely determines the harvesting, marketing, canning, preserving, storing, financing and pricing of farm products. Manufacturers, bankers and merchants who deal with farmers find their business taking on the same seasonal pattern which characterises the agriculture of their area.

The second cause of seasonal variation is custom, education or tradition. Traditional days such as Diwali, Christmas, Id, etc., produce marked variations in business activity, travel, sales, gifts, finance, accidents, and vacationing.

The successful operation of any business requires that its seasonal variations be known, measured and exploited fully. Frequently, the purchase of seasonal items is made from six months to a year in advance. Departments with opposite seasonal changes are frequently combined in the same firm to avoid dull seasons and to keep sales or production up during the entire year. Seasonal variations are measured as a percentage of the trend rather than in absolute quantities. The seasonal index for any month (week, quarter, etc.) may be defined as the ratio of the normally expected value (excluding the business cycle and erratic movements) to the corresponding trend value. When cyclical movements and erratic fluctuations are absent in a time series, such a series is called normal. Normal values thus consist of the trend and seasonal components, so when normal values are divided by the corresponding trend values, we obtain the seasonal component of the time series.

3. Business Cycle: Because of the persistent tendency for business to prosper, decline, stagnate, recover and prosper again, the third characteristic movement in economic time series is called the business cycle. The business cycle does not recur regularly like seasonal movements, but moves in response to causes which develop intermittently out of complex combinations of economic and other considerations. When the business of a country or a community is above or below normal, the excess or deficiency is usually attributed to the business cycle. Its measurement becomes a process of contrasting occurrences with a normal estimate arrived at by combining the calculated trend and seasonal movements. The measurement of the variations from normal may be made in terms of actual quantities, or it may be made in such terms as percentage deviations, which is generally the more satisfactory method as it places the measure of cyclical tendencies on a comparable base throughout the entire period under analysis.

4. Erratic or Irregular Component: These movements are exceedingly difficult to dissociate quantitatively from the business cycle. Their causes are such irregular and unpredictable happenings as wars, droughts, floods, fires, pestilence, and fads and fashions, which operate as spurs or deterrents upon the progress of the cycle. Examples of such movements are the high activity in the middle forties due to the erratic effects of the Second World War, the depression of the thirties throughout the world, and the export boom associated with the Korean War in 1950.
The common denominator of every random factor is that it does not come about as a result of the ordinary operation of the business system and does not recur in any meaningful manner.

Mathematical Statement of the Composition of Time Series

A time series may not be affected by all types of variations. Some of these types of variations may affect a few time series, while other series may be affected by all of them. Hence, in analysing a time series, these effects are isolated. In classical time series analysis it is assumed that any given observation is made up of trend, seasonal, cyclical and irregular movements, and that these four components have a multiplicative relationship. Symbolically:

O = T × S × C × I

where O refers to the original data,
T refers to the trend,
S refers to seasonal variations,
C refers to cyclical variations and
I refers to irregular variations.

This is the most commonly used model in the decomposition of time series. There is another model, called the Additive model, in which a particular observation in a time series is the sum of these four components:

O = T + S + C + I
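A small numeric sketch of the multiplicative composition (all component values hypothetical):

T, S, C, I = 200.0, 1.10, 0.95, 1.02  # trend in units; seasonal, cyclical, irregular indices
O = T * S * C * I                     # observed value: 200 * 1.10 * 0.95 * 1.02 = 213.18
# Under the additive model the components are in original units and are summed:
# O = T + S + C + I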
Applications in Business Decision Making

The Least Squares Method (LSM) is a reliable and prevalent means of solving prediction problems in applied research, and in econometrics in particular. It is used when a function is represented by its observations. The commonly used statistical form of the LSM is called Regression Analysis (RA). It is necessary to say that RA is only a statistical form for representing the link between the components of the observations; using RA terminology for the LSM solution of a function-estimating problem (and, correspondingly, a prediction problem) is only a form for discussing the problem.

It is opportune to note that the LSM is equivalent to the Maximum Likelihood Method for classical normal regression. This method is widely used in econometric problems. Developments of the technical capabilities of the LSM for the solution of optimization and predictive application tasks have been proposed, and some examples of the least squares method for the identification of macroeconomic model parameters are given in the literature. Linear regression within RA has the advantage of having a closed-form solution to the parameter estimation problem.

Real-valued functions of a vector argument are the object of investigation in RA in general and in linear regression in particular, for both the estimation problem and the prediction problem. Such suppositions are due to the technical capabilities of the technique for solving optimization problems in the LSM; this technique is in essence an investigation of the necessary conditions for an extremum. The same remark is entirely true for yet another widely used assumption, namely the full-column-rank assumption for the appropriate matrix, which ensures uniqueness of the parameter estimate. It is interesting that another technique, the Moore–Penrose pseudoinverse (M-Ppi), provides a comprehensive study and solution of the parameter estimation problem.

And a remark in conclusion: an obvious advantage of the matrix LSM, besides the explicit closed estimation form, is the fact that matrix observations preserve the relationships between the characteristics of the phenomenon under consideration. Examples of the matrix least squares method in macroeconomic and business problems, with different types of relations between input and output data and different degrees of data discretization, are given in the literature.
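A minimal sketch (NumPy assumed) of the LSM fit of a straight line y = a + b·x, which has the closed-form solution mentioned above:

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical observation times
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])  # hypothetical observed values

b, a = np.polyfit(x, y, deg=1)            # least squares slope and intercept
print(a, b)                               # b comes out near 2, a near 0
print(a + b * 6.0)                        # prediction for the next period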
Index Numbers: Definition, Characteristics, Uses, Types

The value of money does not remain constant over time. It rises or falls and is inversely related to changes in the price level. A rise in the price level means a fall in the value of money, and a fall in the price level means a rise in the value of money. Thus, changes in the value of money are reflected by changes in the general level of prices over a period of time. Changes in the general level of prices can be measured by a statistical device known as an 'index number'.

An index number is a technique of measuring changes in a variable or a group of variables with respect to time, geographical location or other characteristics. There can be various types of index numbers, but in the present context we are concerned with price index numbers, which measure changes in the general price level (or in the value of money) over a period of time.

A price index number indicates the average of changes in the prices of representative commodities at one time in comparison with some other time taken as the base period. According to L.V. Lester, "An index number of prices is a figure showing the height of average prices at one time relative to their height at some other time which is taken as the base period."

Features of Index Numbers:

(i) Index numbers are a special type of average. Whereas the mean, median and mode measure absolute changes and are used to compare only those series which are expressed in the same units, the technique of index numbers is used to measure the relative changes in the level of a phenomenon where the measurement of absolute change is not possible and the series are expressed in different types of items.

(ii) Index numbers are meant to study changes in the effects of such factors as cannot be measured directly. For example, the general price level is an imaginary concept and is not capable of direct measurement. But, through the technique of index numbers, it is possible to have an idea of relative changes in the general level of prices by measuring relative changes in the price levels of different commodities.

(iii) The technique of index numbers measures changes in one variable or in a group of related variables. For example, one variable can be the price of wheat, and a group of variables can be the price of sugar, the price of milk and the price of rice.

(iv) The technique of index numbers is used to compare the levels of a phenomenon on a certain date with its level on some previous date (e.g., the price level in 1980 as compared to that in 1960, taken as the base year) or the levels of a phenomenon at different places on the same date (e.g., the price level in India in 1980 in comparison with that in other countries in 1980).

The main uses of index numbers are given below.

• Index numbers are used in the fields of commerce, meteorology, labour, industry, etc.
• Index numbers measure fluctuations during intervals of time, group differences of geographical position or degree, etc.
• They are used to compare the total variations in the prices of different commodities in which the unit of measurement differs with time and price, etc.
• They measure the purchasing power of money.
• They are helpful in forecasting future economic trends.
• They are used in studying the difference between comparable categories of animals, people or items.
• Index numbers of industrial production are used to measure changes in the level of industrial production in the country.
• Index numbers of import prices and export prices are used to measure changes in the trade of a country.
• Index numbers are used to measure seasonal variations and cyclical variations in a time series.

A collection of index numbers for different years, locations, etc., is sometimes called an index series.

• Simple Index Number: A simple index number is a number that measures a relative change in a single variable with respect to a base.
• Composite Index Number: A composite index number is a number that measures an average relative change in a group of variables with respect to a base.

Types of Index Numbers
The following types of index numbers are usually used: price index numbers and quantity index numbers.

• Price Index Numbers: Price index numbers measure the relative changes in the price of a commodity between two periods. Prices can be either retail or wholesale.
• Quantity Index Numbers: These index numbers measure changes in the physical quantity of goods produced, consumed or sold, for an item or a group of items.

Price, Quantity and Value Indices

Price index (PI)

A price index (PI) is a measure of how prices change over a period of time; in other words, it is a way to measure inflation. There are multiple methods of calculating inflation (or deflation), and in this guide we will take a look at a couple of them. Inflation is one of the core metrics monitored by the FED in order to set interest rates.

The general formula for the price index is the following:

PI(1,2) = f(P1, P2, X)

Where:

• PI(1,2): Some PI that measures the change in price from period 1 to period 2
• P1: Price of goods in period 1
• P2: Price of goods in period 2
• X: Weights (the weights are used in conjunction with the prices)

Quantity index numbers

Quantity index numbers measure the change in the quantity or volume of goods sold, consumed or produced during a given time period. Hence they are a measure of relative changes over a period of time in the quantities of a particular set of goods. Just like price index numbers and value index numbers, there are also two types of quantity index numbers, namely

• Unweighted Quantity Indices
• Weighted Quantity Indices

Let us take a look at the various methods, formulas, and examples of both these types of quantity index numbers.

Value Index Number

The value index number compares the value of a commodity in the current year with its value in the base year. What is the value of a commodity? It is nothing but the product of the price of the commodity and its quantity. So the value index number is the sum of the values of the commodities in the current year divided by the sum of their values in the chosen base year. The formula is as follows:

V01 = (∑p1q1 / ∑p0q0) × 100

or alternatively, V01 = (∑V1 / ∑V0) × 100

In the case of a value index number, we do not apply any weights, since these are considered to be inherent in the value of a commodity. Thus we can say that a value index number is an aggregate of values.

The value index number is not a very popular statistical tool. Price and quantity index numbers give a clearer picture of the economy for study and analysis, and they even help in the formulation and implementation of economic policies.
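A minimal sketch (plain Python) of the value index V01 = (∑p1q1 / ∑p0q0) × 100 for a hypothetical three-commodity basket:

p0, q0 = [10, 8, 5], [30, 42, 28]   # base-year prices and quantities
p1, q1 = [12, 7, 6], [50, 25, 30]   # current-year prices and quantities

v01 = sum(p * q for p, q in zip(p1, q1)) / sum(p * q for p, q in zip(p0, q0)) * 100
print(round(v01, 2))                # about 123.07: total value rose roughly 23%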


Chain Index Numbers

In this method there is no fixed base period; the year immediately preceding the one for which the price index has to be calculated is assumed as the base year. Thus, for the year 1994 the base year would be 1993, for 1993 it would be 1992, for 1992 it would be 1991, and so on. In this way there is no fixed base, and it keeps on changing.
The chief advantage of this method is that the price relatives of a year can be compared with the price levels of the immediately preceding year. Businesses mostly interested in comparing this time period, rather than comparing rates related to the distant past, will utilize this method.

Another advantage of the chain base method is that it is possible to include new items in an index number or to delete old items which are no longer important. This is not possible with the fixed base method. But the chain base method has the drawback that comparisons cannot be made over a long period.

In the chain base method,

Link relative of the current year = (Price in the current year / Price in the preceding year) × 100

or

P(n−1,n) = (Pn / Pn−1) × 100

Example: Find the index numbers for the following data, taking 1974 as the base year.

Year:   1974  1975  1976  1977  1978  1979
Price:    18    21    25    23    28    30

Solution:

Year   Price   Link Relative = (Pn / Pn−1) × 100   Chain Index
1974   18      (18/18) × 100 = 100                 100
1975   21      (21/18) × 100 = 116.67              (100 × 116.67) / 100 = 116.67
1976   25      (25/21) × 100 = 119.05              (116.67 × 119.05) / 100 = 138.9
1977   23      (23/25) × 100 = 92                  (138.9 × 92) / 100 = 127.79
1978   28      (28/23) × 100 = 121.74              (127.79 × 121.74) / 100 = 155.57
1979   30      (30/28) × 100 = 107.14              (155.57 × 107.14) / 100 = 166.68
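A minimal sketch (plain Python) reproducing the table's computation; rounding each link relative first, as the table does, gives matching figures:

prices = [18, 21, 25, 23, 28, 30]   # 1974-1979

links = [100.0] + [round(cur / prev * 100, 2) for prev, cur in zip(prices, prices[1:])]
chain = [100.0]
for link in links[1:]:
    chain.append(round(chain[-1] * link / 100, 2))  # previous chain index x link / 100

print(links)  # [100.0, 116.67, 119.05, 92.0, 121.74, 107.14]
print(chain)  # [100.0, 116.67, 138.9, 127.79, 155.57, 166.68]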
Selection of a suitable average

There are different averages which can be used in averaging the price relatives or link relatives of different commodities. Experts have suggested that the geometric mean should be calculated to average these relatives, but as the calculation of the geometric mean is difficult, it is mostly avoided and the arithmetic mean is commonly used. In some cases the median is used to remove the effect of wild observations.

Selection of suitable weights

In the calculation of price index numbers, all commodities are not of equal importance. In order to give them their due importance, commodities are given due weights. Weights are of two kinds: (a) implicit weights and (b) explicit weights.
Implicit weights are not explicitly assigned to any commodity; instead, the commodity to which greater importance is attached is repeated a number of times, and a number of varieties of such a commodity are included in the index number as separate items. Thus, if in an index number wheat is to receive a weight of 3 and rice a weight of 2, three varieties of wheat and two varieties of rice are included. In this method the weights are not apparent, but the items are implicitly weighted.

Explicit weights are explicitly assigned to commodities. Only one variety of the commodity is included in the construction of the index number, but its price relative is multiplied by the figure of the weight assigned to it. Explicit weights are decided on a logical basis. For example, if wheat and rice are to be weighted in accordance with the value of their net output, and if the ratio of their net outputs is 5:2, wheat would receive a weight of five and rice of two.

Sometimes the quantities consumed are used as weights; these are called quantity weights. The amounts spent on different commodities can also be used as their weights; these are called value weights.

UNIT 3 Correlation & Regression Analysis

Measurement of Correlation: Karl Pearson's Method, Spearman Rank Correlation

Karl Pearson's Coefficient of Correlation is a widely used mathematical method wherein a numerical expression is used to calculate the degree and direction of the relationship between linearly related variables.

Pearson's method, popularly known as the Pearsonian Coefficient of Correlation, is the most extensively used quantitative method in practice. The coefficient of correlation is denoted by "r".

If the relationship between two variables X and Y is to be ascertained, then the following formula is used:

r = Σ(X − X̄)(Y − Ȳ) / √[ Σ(X − X̄)² × Σ(Y − Ȳ)² ]

Properties of Coefficient of Correlation

• The value of the coefficient of correlation (r) always lies between ±1, such as:
r = +1: perfect positive correlation
r = −1: perfect negative correlation
r = 0: no correlation
• The coefficient of correlation is independent of origin and scale. By origin, it means that on subtracting any non-zero constant from the given values of X and Y, the value of "r" remains unchanged. By scale, it means that there is no effect on the value of "r" if the values of X and Y are divided or multiplied by any constant.
• The coefficient of correlation is the geometric mean of the two regression coefficients. Symbolically: r = √(bxy × byx).
• The coefficient of correlation is "zero" when the variables X and Y are independent. However, the converse is not true.
Assumptions of Karl Pearson's Coefficient of Correlation

1. The relationship between the variables is "linear", which means that when the two variables are plotted, a straight line is formed by the points plotted.

2. There are a large number of independent causes that affect the variables under study, so as to form a normal distribution. Variables like price, demand, supply, etc. are affected by such factors, so that a normal distribution is formed.

3. The variables are independent of each other.

Note: The coefficient of correlation measures not only the magnitude of correlation but also tells the direction. For example, r = −0.67 shows that the correlation is negative, because the sign is "−", and that the magnitude is 0.67.

SPEARMAN RANK CORRELATION

Spearman rank correlation is a non-parametric test that is used to measure the degree of association between two variables. The Spearman rank correlation test does not carry any assumptions about the distribution of the data and is the appropriate correlation analysis when the variables are measured on a scale that is at least ordinal.

The Spearman correlation between two variables is equal to the Pearson correlation between the rank values of those two variables; while Pearson's correlation assesses linear relationships, Spearman's correlation assesses monotonic relationships (whether linear or not). If there are no repeated data values, a perfect Spearman correlation of +1 or −1 occurs when each of the variables is a perfect monotone function of the other.

Intuitively, the Spearman correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully opposed for a correlation of −1) rank between the two variables.

The following formula is used to calculate the Spearman rank correlation:

ρ = 1 − [6 Σdi²] / [n(n² − 1)]

where
ρ = Spearman rank correlation,
di = the difference between the ranks of corresponding variables,
n = number of observations.

Assumptions

The assumptions of the Spearman correlation are that the data must be at least ordinal and that the scores on one variable must be monotonically related to the other variable.
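A minimal sketch (SciPy assumed) contrasting the two coefficients on data that are monotonic but not linear:

from scipy.stats import pearsonr, spearmanr

x = [1, 2, 3, 4, 5, 6]
y = [1, 4, 9, 16, 25, 36]   # y = x**2: perfectly monotonic, not linear

r, _ = pearsonr(x, y)       # high, but below 1 (relationship is not linear)
rho, _ = spearmanr(x, y)    # exactly 1.0 (the ranks agree perfectly)
print(r, rho)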
Properties of Correlation Coefficient

The following are the main properties of correlation.

1. Coefficient of Correlation lies between −1 and +1: The coefficient of correlation cannot take a value less than −1 or more than +1. Symbolically, −1 ≤ r ≤ +1, or |r| ≤ 1.

2. Coefficients of Correlation are independent of Change of Origin: This property reveals that if we subtract any constant from all the values of X and Y, it will not affect the coefficient of correlation.

3. Coefficients of Correlation possess the property of symmetry: The degree of relationship between two variables is symmetric, as shown below:

rXY = rYX

4. Coefficient of Correlation is independent of Change of Scale: This property reveals that if we divide or multiply all the values of X and Y by a constant, it will not affect the coefficient of correlation.

5. Coefficient of correlation measures only linear correlation between X and Y.

6. If two variables X and Y are independent, the coefficient of correlation between them will be zero.
Regression: Meaning, Assumption, Regression Line

 10 FEB 2019 3

REGRESSION

Regression is a statistical measurement used in finance, investing and other disciplines that attempts to determine the strength of the relationship between one dependent variable (usually denoted by Y) and a series of other changing variables (known as independent variables).

Regression helps investment and financial managers to value assets and understand the relationships between variables, such as commodity prices and the stocks of businesses dealing in those commodities.

Regression Explained

The two basic types of regression are linear regression and multiple linear regression, although there are non-linear regression methods for more complicated data and analysis. Linear regression uses one independent variable to explain or predict the outcome of the dependent variable Y, while multiple regression uses two or more independent variables to predict the outcome.

Regression can help finance and investment professionals as well as professionals in other businesses. Regression can also help predict sales for a company based on weather, previous sales, GDP growth or other types of conditions. The capital asset pricing model (CAPM) is an often-used regression model in finance for pricing assets and discovering costs of capital.

The general form of each type of regression is:

 Linear regression: Y = a + bX + u
 Multiple regression: Y = a + b1X1 + b2X2 + b3X3 + … + btXt + u

Where:

Y = the variable that you are trying to predict (dependent variable).
X = the variable that you are using to predict Y (independent variable).
a = the intercept.
b = the slope.
u = the regression residual.

Regression takes a group of random variables, thought to be predicting Y, and tries to find a mathematical relationship between them. This relationship is typically in the form of a straight line (linear regression) that best approximates all the individual data points. In multiple regression, the separate variables are differentiated by using numbers with subscripts.

ASSUMPTIONS IN REGRESSION

 Independence: The residuals are serially independent (no autocorrelation).
 The residuals are not correlated with any of the independent (predictor) variables.
 Linearity: The relationship between the dependent variable and each of the independent variables is linear.
 Mean of Residuals: The mean of the residuals is zero.
 Homogeneity of Variance: The variance of the residuals at all levels of the independent variables is constant.
 Errors in Variables: The independent (predictor) variables are measured without error.
 Model Specification: All relevant variables are included in the model. No irrelevant variables are included in the model.
 Normality: The residuals are normally distributed. This assumption is needed for valid tests of significance but not for estimation of the regression coefficients.
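As a small illustration of the linear form above, the sketch below (all numbers are hypothetical, not from the text) separates a prediction into its systematic part a + bX and the residual u:

a, b = 2.0, 0.5                         # hypothetical intercept and slope
observations = [(10, 7.2), (20, 11.9)]  # hypothetical (X, observed Y) pairs

for x, y in observations:
    predicted = a + b * x      # the systematic part of Y = a + bX + u
    residual = y - predicted   # u: the part the line does not explain
    print(x, predicted, round(residual, 2))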
REGRESSION LINE

Definition: The Regression Line is the line that best fits the data, such that the overall distance from the line to the points (variable values) plotted on a graph is the smallest. In other words, a line used to minimize the squared deviations of predictions is called the regression line.

There are as many regression lines as variables. Suppose we take two variables, say X and Y; then there will be two regression lines:

 Regression line of Y on X: This gives the most probable values of Y from the given values of X.
 Regression line of X on Y: This gives the most probable values of X from the given values of Y.

The algebraic expression of these regression lines is called the Regression Equations. There will be two regression equations for the two regression lines.

The correlation between the variables depends on the distance between these two regression lines: the nearer the regression lines are to each other, the higher is the degree of correlation; the farther the regression lines are from each other, the lesser is the degree of correlation.

The correlation is said to be either perfect positive or perfect negative when the two regression lines coincide, i.e. only one line exists. In case the variables are independent, the correlation will be zero, and the lines of regression will be at right angles, i.e. parallel to the X axis and Y axis.

Note: The regression lines cut each other at the point of the averages of X and Y. This means that if, from the point where the lines intersect each other, a perpendicular is drawn to the X axis, we will get the mean value of X. Similarly, if a horizontal line is drawn to the Y axis, we will get the mean value of Y.

Properties of Regression Coefficients

 27 AUG 2019 1 COMMENT

The constant 'b' in the regression equation (Y = a + bX) is called the Regression Coefficient. It determines the slope of the line, i.e. the change in the value of Y corresponding to a unit change in X, and therefore it is also called a "Slope Coefficient."

Properties of Regression Coefficient

1. The correlation coefficient is the geometric mean of the two regression coefficients. Symbolically, it can be expressed as:

r = √(bxy × byx)

2. The value of the coefficient of correlation cannot exceed unity, i.e. 1. Therefore, if one of the regression coefficients is greater than unity, the other must be less than unity.
3. The sign of both the regression coefficients will be the same, i.e. they will be either positive or negative. Thus, it is not possible that one regression coefficient is negative while the other is positive.
4. The coefficient of correlation will have the same sign as that of the regression coefficients, such as if the regression coefficients have a positive sign, then "r" will be positive and vice-versa.
5. The average value of the two regression coefficients will be greater than the value of the correlation. Symbolically, it can be represented as

(bxy + byx) / 2 ≥ r

6. The regression coefficients are independent of the change of origin, but not of the scale. By origin, we mean that there will be no effect on the regression coefficients if any constant is subtracted from the values of X and Y. By scale, we mean that if the values of X and Y are either multiplied or divided by some constant, then the regression coefficients will also change.
Thus, all these properties should be kept in mind while solving for the regression coefficients.
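The sketch below, using illustrative data (not from the text), computes both regression coefficients and confirms the geometric-mean relationship r² = byx × bxy from property 1:

import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((b - my) ** 2 for b in y)

byx = sxy / sxx                    # regression coefficient of Y on X
bxy = sxy / syy                    # regression coefficient of X on Y
r = sxy / math.sqrt(sxx * syy)     # correlation coefficient

print(round(byx, 2), round(bxy, 2))    # 0.6 1.0
print(math.isclose(r * r, byx * bxy))  # True: r^2 = byx * bxy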
Correlation and regression analysis

 11 APR 2018 5

Correlation Analysis

Correlation is a measure of association between two variables. The variables are not designated as dependent or independent. The two most popular correlation coefficients are: Spearman's correlation coefficient rho and Pearson's product-moment correlation coefficient.

When calculating a correlation coefficient for ordinal data, select Spearman's technique. For interval or ratio-type data, use Pearson's technique.

The value of a correlation coefficient can vary from minus one to plus one. A minus one indicates a perfect negative correlation, while a plus one indicates a perfect positive correlation. A correlation of zero means there is no relationship between the two variables. When there is a negative correlation between two variables, as the value of one variable increases, the value of the other variable decreases, and vice versa. In other words, for a negative correlation, the variables work opposite each other. When there is a positive correlation between two variables, as the value of one variable increases, the value of the other variable also increases. The variables move together.

The standard error of a correlation coefficient is used to determine the confidence intervals around a true correlation of zero. If your correlation coefficient falls outside of this range, then it is significantly different from zero. The standard error can be calculated for interval or ratio-type data (i.e., only for Pearson's product-moment correlation).

The significance (probability) of the correlation coefficient is determined from the t-statistic. The probability of the t-statistic indicates whether the observed correlation coefficient occurred by chance if the true correlation is zero. In other words, it asks if the correlation is significantly different from zero. When the t-statistic is calculated for Spearman's rank-difference correlation coefficient, there must be at least 30 cases before the t-distribution can be used to determine the probability. If there are fewer than 30 cases, you must refer to a special table to find the probability of the correlation coefficient.

Example

A company wanted to know if there is a significant relationship between the total number of salespeople and the total number of sales. They collect data for five months.

Variable 1   Variable 2
207          6907
180          5991
220          6810
205          6553
190          6190
——————————–
Correlation coefficient = .921
Standard error of the coefficient = .068
t-test for the significance of the coefficient = 4.100
Degrees of freedom = 3
Two-tailed probability = .0263
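The reported statistics can be reproduced from the five data pairs; a minimal pure-Python sketch:

import math

salespeople = [207, 180, 220, 205, 190]
sales = [6907, 5991, 6810, 6553, 6190]

n = len(salespeople)
mx = sum(salespeople) / n
my = sum(sales) / n

sxy = sum((x - mx) * (y - my) for x, y in zip(salespeople, sales))
sxx = sum((x - mx) ** 2 for x in salespeople)
syy = sum((y - my) ** 2 for y in sales)

r = sxy / math.sqrt(sxx * syy)                    # Pearson's r
t = r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)  # t-test, df = n - 2 = 3

print(round(r, 3))  # 0.921
print(round(t, 2))  # 4.1, matching the 4.100 reported above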
Another Example

Respondents to a survey were asked to judge the quality of a product on a four-point Likert scale (excellent, good, fair, poor). They were also asked to judge the reputation of the company that made the product on a three-point scale (good, fair, poor). Is there a significant relationship between respondents' perceptions of the company and their perceptions of the quality of the product?

Since both variables are ordinal, Spearman's method is chosen. The first variable is the rating for the quality of the product. Responses are coded as 4=excellent, 3=good, 2=fair, and 1=poor. The second variable is the perceived reputation of the company and is coded 3=good, 2=fair, and 1=poor.

Variable 1   Variable 2
4            3
2            2
1            2
3            3
4            3
1            1
2            1
——————————————-
Correlation coefficient rho = .830
t-test for the significance of the coefficient = 3.332
Number of data pairs = 7

Probability must be determined from a table because of the small sample size.

Regression Analysis

Simple regression is used to examine the relationship between one dependent and one independent variable. After performing an analysis, the regression statistics can be used to predict the dependent variable when the independent variable is known. Regression goes beyond correlation by adding prediction capabilities.

People use regression on an intuitive level every day. In business, a well-dressed man is thought to be financially successful. A mother knows that more sugar in her children's diet results in higher energy levels. The ease of waking up in the morning often depends on how late you went to bed the night before. Quantitative regression adds precision by developing a mathematical formula that can be used for predictive purposes.
For example, a medical researcher might want to use body weight (independent variable) to predict the most appropriate dose for a new drug (dependent variable). The purpose of running the regression is to find a formula that fits the relationship between the two variables. Then you can use that formula to predict values for the dependent variable when only the independent variable is known. A doctor could prescribe the proper dose based on a person's body weight.

The regression line (known as the least squares line) is a plot of the expected value of the dependent variable for all values of the independent variable. Technically, it is the line that "minimizes the squared residuals". The regression line is the one that best fits the data on a scatterplot.

Using the regression equation, the dependent variable may be predicted from the independent variable. The slope of the regression line (b) is defined as the rise divided by the run. The y intercept (a) is the point on the y axis where the regression line would intercept the y axis. The slope and y intercept are incorporated into the regression equation. The intercept is usually called the constant, and the slope is referred to as the coefficient. Since the regression model is usually not a perfect predictor, there is also an error term in the equation.

In the regression equation, y is always the dependent variable and x is always the independent variable. Here are three equivalent ways to mathematically describe a linear regression model.

y = intercept + (slope × x) + error

y = constant + (coefficient × x) + error

y = a + bx + e

The significance of the slope of the regression line is determined from the t-statistic. It is the probability that the observed correlation coefficient occurred by chance if the true correlation is zero. Some researchers prefer to report the F-ratio instead of the t-statistic. The F-ratio is equal to the t-statistic squared.

The t-statistic for the significance of the slope is essentially a test to determine if the regression model (equation) is usable. If the slope is significantly different from zero, then we can use the regression model to predict the dependent variable for any value of the independent variable.

On the other hand, take an example where the slope is zero. It has no prediction ability because for every value of the independent variable, the prediction for the dependent variable would be the same. Knowing the value of the independent variable would not improve our ability to predict the dependent variable. Thus, if the slope is not significantly different from zero, don't use the model to make predictions.

The coefficient of determination (r-squared) is the square of the correlation coefficient. Its value may vary from zero to one. It has the advantage over the correlation coefficient in that it may be interpreted directly as the proportion of variance in the dependent variable that can be accounted for by the regression equation. For example, an r-squared value of .49 means that 49% of the variance in the dependent variable can be explained by the regression equation. The other 51% is unexplained.

The standard error of the estimate for regression measures the amount of variability in the points around the regression line. It is the standard deviation of the data points as they are distributed around the regression line. The standard error of the estimate can be used to develop confidence intervals around a prediction.

Example

A company wants to know if there is a significant relationship between its advertising expenditures and its sales volume. The independent variable is advertising budget and the dependent variable is sales volume. A lag time of one month will be used because sales are expected to lag behind actual advertising expenditures. Data was collected for a six month period. All figures are in thousands of dollars. Is there a significant relationship between advertising budget and sales volume?

Indep. Var.   Depen. Var.
4.2           27.1
6.1           30.4
3.9           25.0
5.7           29.7
7.3           40.1
5.9           28.8
————————————————–
Model: y = 9.873 + (3.682x) + error
Standard error of the estimate = 2.637
t-test for the significance of the slope = 3.961
Degrees of freedom = 4
Two-tailed probability = .0149
r-squared = .807

You might make a statement in a report like this: A simple linear regression was performed on six months of data to determine if there was a significant relationship between advertising expenditures and sales volume. The t-statistic for the slope was significant at the .05 critical alpha level, t(4)=3.96, p=.015. Thus, we reject the null hypothesis and conclude that there was a positive significant relationship between advertising expenditures and sales volume. Furthermore, 80.7% of the variability in sales volume could be explained by advertising expenditures.
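The fitted model and test statistic can be reproduced from the six data pairs with ordinary least squares; a minimal pure-Python sketch:

import math

# Six months of data from the example above (thousands of dollars).
ad_budget = [4.2, 6.1, 3.9, 5.7, 7.3, 5.9]    # independent variable x
sales = [27.1, 30.4, 25.0, 29.7, 40.1, 28.8]  # dependent variable y

n = len(ad_budget)
mx = sum(ad_budget) / n
my = sum(sales) / n

sxx = sum((x - mx) ** 2 for x in ad_budget)
sxy = sum((x - mx) * (y - my) for x, y in zip(ad_budget, sales))

b = sxy / sxx    # slope
a = my - b * mx  # intercept

# Standard error of the estimate: residual variation around the line,
# with n - 2 degrees of freedom.
sse = sum((y - (a + b * x)) ** 2 for x, y in zip(ad_budget, sales))
se = math.sqrt(sse / (n - 2))

# t-statistic for the significance of the slope.
t = b / (se / math.sqrt(sxx))

print(round(a, 3), round(b, 3))  # 9.873 3.682
print(round(se, 3))              # 2.637
print(round(t, 3))               # 3.961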
UNIT 4 Probability Theory & Distribution

Probability Meaning and Approaches of Probability Theory

 THESTREAK24 MAY 2018 3

In our day to day life "probability" or "chance" is a very commonly used term. Sometimes we say "Probably it may rain tomorrow", "Probably Mr. X may come for taking his class today", "Probably you are right". All these terms, possibility and probability, convey the same meaning. But in statistics, probability has a certain special connotation, unlike in the layman's view.

The theory of probability was developed in the 17th century. It has its origin in games: tossing coins, throwing a dice, drawing a card from a pack. In 1654 Antoine Gombaud had taken an initiative and an interest in this area.

After him many authors in statistics tried to remodel the idea given by the former. "Probability" has become one of the basic tools of statistics. Sometimes statistical analysis becomes paralyzed without the theorem of probability. "Probability of a given event is defined as the expected frequency of occurrence of the event among events of a like sort." (Garrett)

The probability theory provides a means of getting an idea of the likelihood of occurrence of different events resulting from a random experiment in terms of quantitative measures ranging between zero and one. The probability is zero for an impossible event and one for an event which is certain to occur.

Approaches of Probability Theory

1. Classical Probability
The classical approach to probability is one of the oldest and simplest schools of thought. It originated in the 18th century and explains probability concerning games of chance such as tossing a coin, throwing dice, drawing cards, etc.

The definition of probability was given by a French mathematician named "Laplace". According to him, probability is the ratio of the number of favourable cases to the number of equally likely cases.

Or in other words, the ratio suggested by the classical approach is:

Pr. = Number of favourable cases / Number of equally likely cases

For example, if a coin is tossed, and it is asked what is the probability of the occurrence of the head, then the number of favourable cases = 1 and the number of equally likely cases = 2.

Pr. of head = 1/2

Symbolically it can be expressed as:

p = Pr. (A) = a/n, q = Pr. (B) or (not A) = b/n

q = 1 – a/n = b/n, so that a + b = n and p + q = 1

p = 1 – q, and q = 1 – p; since a + b = n, it also follows that a/n + b/n = 1

In this approach the probability varies from 0 to 1. When the probability is zero, it denotes that the event is impossible to occur.

If the probability is 1, then there is certainty of occurrence, i.e. the event is bound to occur.

Example:

From a bag containing 20 black and 25 white balls, a ball is drawn randomly. What is the probability that it is black?

Pr. of a black ball = 20/45 = 4/9 = p, Pr. of a white ball = 25/45 = 5/9 = q

p = 4/9 and q = 5/9 (p + q = 4/9 + 5/9 = 1)

2. Relative Frequency Theory of Probability

This approach to probability is a protest against the classical approach. It indicates the fact that if n is increased up to ∞, we can find out the probability of p or q.

Example: If n is ∞, then Pr. of A = a/n = .5 and Pr. of B = b/n = .5.

If an event occurs a times out of n, its relative frequency is a/n. The value approached by a/n when n becomes ∞ is called the limit of relative frequency.

Pr. (A) = limit a/n, where n → ∞

Similarly, Pr. (B) = limit b/n, where n → ∞.

Axiomatic approach

An axiomatic approach is taken to define probability as a set function where the elements of the domain are the sets and the elements of the range are real numbers. If event A is an element in the domain of this function, P(A) is the customary notation used to designate the corresponding element in the range.

Probability Function

A probability function P(A) is a function mapping the event space A of a random experiment into the interval [0,1] according to the following axioms:

Axiom 1. For any event A, 0 ≤ P(A) ≤ 1
Axiom 2. P(Ω) = 1

Axiom 3. If A and B are any two mutually exclusive events, then

P(A ∪ B) = P(A) + P(B)

As given in the third axiom, the addition property of the probability can be extended to any number of events as long as the events are mutually exclusive. If the events are not mutually exclusive, then:

P(A ∪ B) = P(A) + P(B) – P(A∩B)

P(A∩B) is Φ if both the events are mutually exclusive.

If there are two types of objects among the objects of similar or other natures, then the probability of one object is, for example, Pr. of A = .5, and then Pr. of B = .5.

Addition and Multiplication Theorems

 THESTREAK24 MAY 2018 3

Addition theorem on probability:

If A and B are any two events, then the probability of the happening of at least one of the events is defined as P(AUB) = P(A) + P(B) - P(A∩B).

Proof:

Since events are nothing but sets, from set theory we have

n(AUB) = n(A) + n(B) - n(A∩B).

Dividing the above equation by n(S) (where S is the sample space),

n(AUB)/n(S) = n(A)/n(S) + n(B)/n(S) - n(A∩B)/n(S)

Then by the definition of probability,

P(AUB) = P(A) + P(B) - P(A∩B).

Example:

If the probabilities of solving a problem by two students George and James are 1/2 and 1/3 respectively, then what is the probability of the problem being solved?

Solution:

Let A and B be the events of solving the problem by George and James respectively.

Then P(A) = 1/2 and P(B) = 1/3.

The problem will be solved if it is solved by at least one of them. So, we need to find P(AUB).

By the addition theorem on probability, we have

P(AUB) = P(A) + P(B) - P(A∩B).

Since the two students work independently, P(A∩B) = P(A) * P(B) = 1/6, so

P(AUB) = 1/2 + 1/3 – 1/2 * 1/3 = 1/2 + 1/3 - 1/6 = (3+2-1)/6 = 4/6 = 2/3

Note:

If A and B are any two mutually exclusive events, then P(A∩B) = 0. Then P(AUB) = P(A) + P(B).
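A one-line check of the example (assuming, as the solution does, that the two events are independent so that P(A∩B) = P(A) * P(B)):

from fractions import Fraction

p_a = Fraction(1, 2)           # George solves the problem
p_b = Fraction(1, 3)           # James solves the problem
p_both = p_a * p_b             # 1/6, by independence
p_either = p_a + p_b - p_both  # addition theorem: P(A U B)
print(p_either)                # 2/3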
Multiplication theorem on probability

If A and B are any two events of a sample space such that P(A) ≠ 0 and P(B) ≠ 0, then
P(A∩B) = P(A) * P(B|A) = P(B) * P(A|B).

Example: If P(A) = 1/5 and P(B|A) = 1/3, then what is P(A∩B)?

Solution: P(A∩B) = P(A) * P(B|A) = 1/5 * 1/3 = 1/15

INDEPENDENT EVENTS:

Two events A and B are said to be independent if there is no change in the happening of one event with the happening of the other event.

i.e. Two events A and B are said to be independent if

P(A|B) = P(A) where P(B) ≠ 0.

P(B|A) = P(B) where P(A) ≠ 0.

i.e. Two events A and B are said to be independent if

P(A∩B) = P(A) * P(B).

Example:

While laying out the pack of cards, let A be the event of drawing a diamond and B be the event of drawing an ace.

Then P(A) = 13/52 = 1/4 and P(B) = 4/52 = 1/13

Now, A∩B = drawing the ace of diamonds.

Then P(A∩B) = 1/52

Now, P(A|B) = P(A∩B)/P(B) = (1/52)/(1/13) = 1/4 = P(A).

So, A and B are independent.

[Here, P(A∩B) = 1/52 = 1/4 * 1/13 = P(A) * P(B)]

Note:

(1) If 3 events A, B and C are independent, then

P(A∩B∩C) = P(A)*P(B)*P(C).

(2) If A and B are independent events, then P(AUB) = 1 - P(A')P(B').

Bayes' Rule

 THESTREAK24 MAY 2018 3

Bayes' theorem is a way to figure out conditional probability. Conditional probability is the probability of an event happening, given that it has some relationship to one or more other events. For example, your probability of getting a parking space is connected to the time of day you park, where you park, and what conventions are going on at any time. Bayes' theorem is slightly more nuanced. In a nutshell, it gives you the actual probability of an event given information about tests.

"Events" are different from "tests." For example, there is a test for liver disease, but that's separate from the event of actually having liver disease.

Tests are flawed: just because you have a positive test does not mean you actually have the disease. Many tests have a high false positive rate. Rare events tend to have higher false positive rates than more common events. We're not just talking about medical tests here. For example, spam filtering can have high false positive rates. Bayes' theorem takes the test results and calculates your real probability that the test has identified the event.

Bayes' Theorem (also known as Bayes' rule) is a deceptively simple formula used to calculate conditional probability. The Theorem was named after English mathematician Thomas Bayes (1701-1761). The formal definition for the rule is:

P(A|B) = [P(B|A) × P(A)] / P(B)
In most cases, you can’t just plug numbers into an equation; You have
to figure out what your “tests” and “events” are first. For two events, Probability Distribution:
A and B, Bayes’ theorem allows you to figure out p(A|B) (the
probability that event A happened, given that test B was positive) Binominal Poisson,
from p(B|A) (the probability that test B happened, given that event A
happened). It can be a little tricky to wrap your head around as Normal Distribution
technically you’re working backwards; you may have to switch your  THESTREAK24 MAY 2018 3
tests and events around, which can get confusing. An example should
clarify what I mean by “switch the tests and events around.” Probability Distributions

Probability theory is the foundation for statistical inference.  A


Bayes’ Theorem Example probability distribution is a device for indicating the values that a
random variable may have.  There are two categories of random
You might be interested in finding out a patient’s probability of
variables.  These are discrete random variables and continuous
having liver disease if they are an alcoholic. “Being an alcoholic” is
random variables.
the test (kind of like a litmus test) for liver disease.
Discrete random variable
A could mean the event “Patient has liver disease.” Past data tells you
that 10% of patients entering your clinic have liver disease. P(A) =
The probability distribution of a discrete random variable specifies all
0.10.
possible values of a discrete random variable along with their
respective probabilities.
B could mean the litmus test that “Patient is an alcoholic.” Five
percent of the clinic’s patients are alcoholics. P(B) = 0.05.
Examples can be
You might also know that among those patients diagnosed with liver
 Frequency distribution
disease, 7% are alcoholics. This is your B|A: the probability that a
 Probability distribution (relative frequency distribution)
patient is alcoholic, given that they have liver disease, is 7%.
 Cumulative frequency
Bayes’ theorem tells you:
Examples of discrete probability distributions are the binomial
P(A|B) = (0.07 * 0.1)/0.05 = 0.14
distribution and the Poisson distribution.
In other words, if the patient is an alcoholic, their chances of having
liver disease is 0.14 (14%). This is a large increase from the 10%
Binomial Distribution
suggested by past data. But it’s still unlikely that any particular patient
has liver disease. A binomial experiment is a probability experiment with the following
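The computation is a single application of the formula; a minimal sketch using the numbers from the example:

def bayes(p_a, p_b, p_b_given_a):
    # P(A|B) = P(B|A) * P(A) / P(B)
    return p_b_given_a * p_a / p_b

p_disease = 0.10            # P(A): patient has liver disease
p_alcoholic = 0.05          # P(B): patient is an alcoholic
p_alc_given_disease = 0.07  # P(B|A): alcoholic, given liver disease

print(round(bayes(p_disease, p_alcoholic, p_alc_given_disease), 2))  # 0.14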
properties.

Probability Distribution: Binomial, Poisson, Normal Distribution

 THESTREAK24 MAY 2018 3

Probability Distributions

Probability theory is the foundation for statistical inference. A probability distribution is a device for indicating the values that a random variable may have. There are two categories of random variables. These are discrete random variables and continuous random variables.

Discrete random variable

The probability distribution of a discrete random variable specifies all possible values of a discrete random variable along with their respective probabilities.

Examples can be

 Frequency distribution
 Probability distribution (relative frequency distribution)
 Cumulative frequency

Examples of discrete probability distributions are the binomial distribution and the Poisson distribution.

Binomial Distribution

A binomial experiment is a probability experiment with the following properties.

1. Each trial can have only two outcomes, which can be considered success or failure.
2. There must be a fixed number of trials.
3. The outcomes of each trial must be independent of each other.
4. The probability of success must remain the same in each trial.

The outcomes of a binomial experiment are called a binomial distribution.

Poisson Distribution

The Poisson distribution is based on the Poisson process.

1. The occurrences of the events are independent in an interval.
2. An infinite number of occurrences of the event are possible in the interval.
3. The probability of a single event in the interval is proportional to the length of the interval.
4. In an infinitely small portion of the interval, the probability of more than one occurrence of the event is negligible.

Continuous probability distributions

A continuous variable can assume any value within a specified interval of values assumed by the variable. In a general case, with a large number of class intervals, the frequency polygon begins to resemble a smooth curve.

A continuous probability distribution is a probability density function. The area under the smooth curve is equal to 1, and the frequency of occurrence of values between any two points equals the total area under the curve between the two points and the x-axis.

The Normal Distribution

The normal distribution is the most important distribution in biostatistics. It is frequently called the Gaussian distribution. The two parameters of the normal distribution are the mean (m) and the standard deviation (s). The graph has a familiar bell-shaped curve.

(Figure: Graph of a Normal Distribution)

Characteristics of the normal distribution

1. It is symmetrical about m.
2. The mean, median and mode are all equal.
3. The total area under the curve above the x-axis is 1 square unit. Therefore 50% is to the right of m and 50% is to the left of m.
4. Perpendiculars at:
    ± s contain about 68%;
    ±2 s contain about 95%;
    ±3 s contain about 99.7%
of the area under the curve.

The standard normal distribution

A normal distribution is determined by m and s. This creates a family of distributions depending on whatever the values of m and s are. The standard normal distribution has m = 0 and s = 1.

Finding normal curve areas

1. The table gives areas between −∞ and the value of z.
2. Find the z value in tenths in the column at the left margin and locate its row. Find the hundredths place in the appropriate column.

3. Read the value of the area (P) from the body of the table where the row and column intersect. Note that P is the probability that a given value of z is as large as it is in its location. Values of P are in the form of a decimal point and four places.

Finding probabilities

We find probabilities using the table and a four-step procedure as illustrated below.

a) What is the probability that z < -1.96?

    (1) Sketch a normal curve
    (2) Draw a line for z = -1.96
    (3) Find the area in the table
    (4) The answer is the area to the left of the line: P(z < -1.96) = .0250

b) What is the probability that -1.96 < z < 1.96?

    (1) Sketch a normal curve
    (2) Draw lines for lower z = -1.96 and upper z = 1.96
    (3) Find the area in the table corresponding to each value
    (4) The answer is the area between the values; subtract the lower from the upper: P(-1.96 < z < 1.96) = .9750 – .0250 = .9500

c) What is the probability that z > 1.96?

    (1) Sketch a normal curve
    (2) Draw a line for z = 1.96
    (3) Find the area in the table
    (4) The answer is the area to the right of the line, found by subtracting the table value from 1.0000: P(z > 1.96) = 1.0000 – .9750 = .0250
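Instead of a printed z table, the same areas can be computed from the standard normal cumulative distribution function, Φ(z) = ½(1 + erf(z/√2)); a minimal sketch:

from math import erf, sqrt

def phi(z):
    # Cumulative area under the standard normal curve from -infinity to z,
    # i.e. the quantity looked up in the z table.
    return 0.5 * (1 + erf(z / sqrt(2)))

print(round(phi(-1.96), 4))              # a) P(z < -1.96)        = 0.025
print(round(phi(1.96) - phi(-1.96), 4))  # b) P(-1.96 < z < 1.96) = 0.95
print(round(1 - phi(1.96), 4))           # c) P(z > 1.96)         = 0.025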

Applications of the Normal distribution

The normal distribution is used as a model to study many different variables. We can use the normal distribution to answer probability questions about random variables. Some examples of variables that are normally distributed are human height and intelligence.

Solving normal distribution application problems

In this explanation we add an additional step. Following the model of the normal distribution, a given value of x must be converted to a z score before it can be looked up in the z table.

(1) Write the given information
(2) Sketch a normal curve
(3) Convert x to a z score
(4) Find the appropriate value(s) in the table
(5) Complete the answer

Illustrative Example: Total fingerprint ridge count in humans is approximately normally distributed with a mean of 140 and a standard deviation of 50. Find the probability that an individual picked at random will have a ridge count less than 100. We follow the steps to find the solution.

(1) Write the given information
    m = 140
    s = 50
    x = 100

(2) Sketch a normal curve

(3) Convert x to a z score

    z = (x – m) / s = (100 – 140) / 50 = -0.8

(4) Find the appropriate value(s) in the table

    A value of z = -0.8 gives an area of .2119, which corresponds to the probability P(z < -0.8).

(5) Complete the answer

    The probability that x is less than 100 is .2119.

Application of Binomial, Poisson and Normal distributions

The binomial distribution has its applications in experiments in probability subject to certain constraints. These are:
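The fingerprint example can be checked the same way, converting x to a z score and computing the area to its left with the erf-based Φ from the earlier sketch:

from math import erf, sqrt

m, s, x = 140, 50, 100
z = (x - m) / s                   # step (3): z = (100 - 140) / 50 = -0.8
p = 0.5 * (1 + erf(z / sqrt(2)))  # area to the left of z
print(z, round(p, 4))             # -0.8 0.2119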
1. There is a fixed number of trials – for example, toss a coin 20 times.
2. The outcomes are independent and there are just two possible outcomes – in the example I will use, these are head and tail.
3. The probability of a head plus the probability of a tail must equal 1.
4. The probability of 8 heads and 12 tails would be 20C8 × P(H)^8 × P(T)^12 (a check of this arithmetic follows this list).
5. Now any experiment in which the outcomes are of just two kinds and whose probabilities combined equal 1 can be regarded as binomial.

Data from the analyses of reference samples often must be used to determine the quality of the data being produced by laboratories that routinely make chemical analyses of environmental samples. When a laboratory analyzes many reference samples, binomial distributions can be used in evaluating laboratory performance. The number of standard deviations (that is, the difference between the reported value and the most probable value, divided by the theoretical standard deviation) is calculated for each analysis. Individual values exceeding two standard deviations are considered unacceptable, and a binomial distribution is used to determine if overall performance is satisfactory or unsatisfactory. Similarly, analytical bias is examined by applying a binomial distribution to the number of positive and negative standard deviations.
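Point 4 of the coin example is a one-line computation; a minimal sketch (math.comb requires Python 3.8+):

from math import comb

n, k, p = 20, 8, 0.5
prob = comb(n, k) * p ** k * (1 - p) ** (n - k)  # 20C8 * P(H)^8 * P(T)^12
print(round(prob, 4))                            # 0.1201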
UNIT 5 Decision Making Environments

Decision under Certainty, Uncertainty, and Risk

Decision-making under Certainty

A condition of certainty exists when the decision-maker knows with reasonable certainty what the alternatives are, what conditions are associated with each alternative, and the outcome of each alternative.

Under conditions of certainty, accurate, measurable, and reliable information on which to base decisions is available.

The cause and effect relationships are known and the future is highly predictable under conditions of certainty. Such conditions exist in the case of routine and repetitive decisions concerning the day-to-day operations of the business.

Decision-making under Risk:

When a manager lacks perfect information or whenever an information asymmetry exists, risk arises. Under a state of risk, the decision maker has incomplete information about available alternatives but has a good idea of the probability of outcomes for each alternative.

While making decisions under a state of risk, managers must determine the probability associated with each alternative on the basis of the available information and their experience.

Decision-making under Uncertainty:

Most significant decisions made in today's complex environment are formulated under a state of uncertainty. Conditions of uncertainty exist when the future environment is unpredictable and everything is in a state of flux. The decision-maker is not aware of all available alternatives, the risks associated with each, and the consequences of each alternative or their probabilities.

The manager does not possess complete information about the alternatives, and whatever information is available may not be completely reliable. In the face of such uncertainty, managers need to make certain assumptions about the situation in order to provide a reasonable framework for decision-making. They have to depend upon their judgment and experience for making decisions.

Modern Approaches to Decision-making under Uncertainty:
There are several modern techniques to improve the quality of decision-making under conditions of uncertainty.

The most important among these are:

(1) Risk analysis,

(2) Decision trees and

(3) Preference theory.

Risk Analysis:

Managers who follow this approach analyze the size and nature of the risk involved in choosing a particular course of action.

For instance, while launching a new product, a manager has to carefully analyze each of the following variables: the cost of launching the product, its production cost, the capital investment required, the price that can be set for the product, the potential market size and what percent of the total market it will represent.

Risk analysis involves quantitative and qualitative risk assessment, risk management and risk communication, and provides managers with a better understanding of the risk and the benefits associated with a proposed course of action. The decision represents a trade-off between the risks and the benefits associated with a particular course of action under conditions of uncertainty.

Decision Trees:

These are considered to be one of the best ways to analyze a decision. A decision-tree approach involves a graphic representation of alternative courses of action and the possible outcomes and risks associated with each action.

By means of a "tree" diagram depicting the decision points, chance events and probabilities involved in various courses of action, this technique of decision-making allows the decision-maker to trace the optimum path or course of action.

Preference or Utility Theory:

This is another approach to decision-making under conditions of uncertainty. This approach is based on the notion that individual attitudes towards risk vary. Some individuals are willing to take only smaller risks ("risk averters"), while others are willing to take greater risks ("gamblers"). Statistical probabilities associated with the various courses of action are based on the assumption that decision-makers will follow them.

For instance, if there were a 60 percent chance of a decision being right, it might seem reasonable that a person would take the risk. This may not be necessarily true, as the individual might not wish to take the risk, since the chances of the decision being wrong are 40 percent. Attitudes towards risk vary with events, with people and with positions.

Top-level managers usually take the largest amount of risk. However, the same managers who make a decision that risks millions of rupees of the company in a given program with a 75 percent chance of success are not likely to do the same with their own money.

Moreover, a manager willing to take a 75 percent risk in one situation may not be willing to do so in another. Similarly, a top executive might launch an advertising campaign having a 70 percent chance of success but might decide against investing in plant and machinery unless it involves a higher probability of success.

Though personal attitudes towards risk vary, two things are certain.

Firstly, attitudes towards risk vary with situations, i.e. some people are risk averters in some situations and gamblers in others.

Secondly, some people have a high aversion to risk, while others have a low aversion.
Most managers prefer to be risk averters to a certain extent, and may thus also forego opportunities. When the stakes are high, most managers tend to be risk averters; when the stakes are small, they tend to be gamblers.

Decision Tree Analysis

 THESTREAK24 MAY 2018 3

A Decision Tree may be understood as a logical tree: a range of conditions (premises) and actions (conclusions), which are depicted as nodes, with the branches of the tree linking the premises with the conclusions. It is a decision support tool, having a tree-like representation of decisions and the consequences thereof. It uses 'AND' and 'OR' operators to recreate the structure of if-then rules.

A decision tree is helpful in reaching the ideal decision for intricate processes, especially when the decision problems are interconnected and chronological in nature.

A decision tree does not constitute a decision but assists in making one, by graphically representing the material information related to the given problem, in the form of a tree. It diagrammatically depicts various courses of action, likely outcomes, states of nature, etc., as nodes, branches or sub-branches of a horizontal tree.

Nodes

There are two types of Nodes:

 Decision Node: Represented as a square, wherein different courses of action arise from the decision node in main branches.
 Chance Node: Symbolised as a circle, present at the terminal point of a decision node, where outcomes emerge as sub-branches. These depict probabilities and outcomes.

For instance: Think of a situation where a firm introduces a new product. The decision tree presented below gives a clear idea of the managerial problem.

 Key A is a decision node, wherein the decision is taken, i.e. to test the product or drop the same.
 Key B is an outcome node, which shows all possible outcomes that can occur. As per the given situation, there are only two outcomes, i.e. favorable or not.
 Key C is again a decision node, which describes that if the market test is positive, the firm's management will decide whether to go further with complete marketing or drop the product.
 Key D is one more decision node, but it does not show any choice, which depicts that if the market test is unfavorable then the decision is to drop the product.
 Key E is again an outcome node.

The decision tree can be applied to various areas where decisions are pending, such as a make or buy decision, an investment decision, marketing strategy, or the introduction of a new project. The decision maker will go for the alternative that increases the anticipated profit or the one which reduces the overall expected cost at each decision point.

Types of Decision Tree
In a single-stage decision tree, the decision maker can find only one solution, which is the best course of action, on the basis of the information gathered. On the other hand, a multi-stage decision tree involves a series of decisions to be taken.

Decision Tree Analysis

The Decision Tree Analysis is a schematic representation of several decisions followed by different chances of occurrence. Simply, a tree-shaped graphical representation of decisions related to the investments and the chance points that helps to investigate the possible outcomes is called a decision tree analysis.

The decision tree shows Decision Points, represented by squares, which are the alternative actions along with the investment outlays that can be undertaken for the experimentation. These decisions are followed by the chance points, represented by circles, which are the uncertain points where the outcomes are dependent on the chance process. Thus, the probability of occurrence is assigned to each chance point.

Once the decision tree is described precisely, and the data about outcomes along with their probabilities is gathered, the decision alternatives can be evaluated as follows:

1. Start from the extreme right-hand end of the tree and start calculating the NPV for each chance point as you proceed leftward.
2. Once the NPVs are calculated for each chance point, evaluate the alternatives at the final-stage decision points in terms of their NPV.
3. Select the alternative which has the highest NPV and cut the branch of the inferior decision alternative. Assign a value to each decision point equivalent to the NPV of the alternative selected.
4. Again, repeat the process: proceed leftward, recalculate the NPV for each chance point, select the decision alternative which has the highest NPV value and then cut the branch of the inferior decision alternative. Assign the value to each point equivalent to the NPV of the selected alternative and repeat this process again and again until a final decision point is reached.

Thus, decision tree analysis helps the decision maker to take all the possible outcomes into consideration before reaching a final investment decision.

A decision tree is a decision support tool that uses a tree-like model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm that only contains conditional control statements.
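A minimal sketch of the backward "rollback" evaluation described above, with made-up payoffs and probabilities standing in for the NPVs (none of the numbers come from the text):

def evaluate(node):
    kind = node["kind"]
    if kind == "payoff":
        return node["value"]
    if kind == "chance":
        # Expected value: probability-weighted average of the branches.
        return sum(p * evaluate(child) for p, child in node["branches"])
    if kind == "decision":
        # Keep the branch with the highest expected value; "cut" the others.
        return max(evaluate(child) for child in node["options"])

tree = {
    "kind": "decision",
    "options": [
        {"kind": "chance", "branches": [     # launch the product
            (0.6, {"kind": "payoff", "value": 100}),
            (0.4, {"kind": "payoff", "value": -40}),
        ]},
        {"kind": "payoff", "value": 0},      # drop the product
    ],
}
print(evaluate(tree))  # 44.0: launching beats dropping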
Decision trees are commonly used in operations research, specifically in decision analysis, to help identify a strategy most likely to reach a goal, but they are also a popular tool in machine learning.

Concept of Business Analytics: Meaning, Types

 27 AUG 2019 1 COMMENT

The word analytics has come into the foreground in the last decade or so. The proliferation of the internet and information technology has made analytics very relevant in the current age. Analytics is a field which combines data, information technology, statistical analysis, quantitative methods and computer-based models into one. All these are combined to provide decision makers all the possible scenarios to make a well-thought and researched decision. The computer-based model ensures that decision makers are able to see the performance of a decision under various scenarios.

Importance of Business Analytics

 Business analytics is a methodology or tool to make a sound commercial decision. Hence it impacts the functioning of the whole organization. Therefore, business analytics can help improve the profitability of the business, increase market share and revenue and provide better returns to shareholders.
 Facilitates better understanding of available primary and secondary data, which again affects the operational efficiency of several departments.
 Provides a competitive advantage to companies. In this digital age the flow of information is almost equal for all the players. It is how this information is utilized that makes a company competitive. Business analytics combines available data with various well-thought models to improve business decisions.
 Converts available data into valuable information. This information can be presented in any required format, comfortable to the decision maker.

Evolution of Business Analytics

Business analytics has been in existence for a very long time and has evolved with the availability of newer and better technologies. It has its roots in operations research, which was extensively used during World War II. Operations research was an analytical way to look at data to conduct military operations. Over a period of time, this technique started getting utilized for business. Here operations research evolved into management science. Again, the basis for management science remained the same as operations research: data, decision-making models, etc.

As the economies started developing and companies became more and more competitive, management science evolved into business intelligence, decision support systems and PC software.

Scope of Business Analytics

Business analytics has a wide range of applications and usages. It can be used for descriptive analysis, in which data is utilized to understand the past and present situation. This kind of descriptive analysis is used to assess the current market position of the company and the effectiveness of previous business decisions.

It is used for predictive analysis, which is typically used to forecast future business performance on the basis of past data.

Business analytics is also used for prescriptive analysis, which is utilized to formulate optimization techniques for stronger business performance.

For example, business analytics is used to determine the pricing of various products in a departmental store based on past and present sets of information.

Data for Analytics

Business analytics uses data from three sources for the construction of the business model. It uses business data such as annual reports, financial ratios, marketing research, etc. It uses the database which contains various computer files and information coming from data analysis.
Challenges

Business analytics is possible only with a large volume of data. It is sometimes difficult to obtain a large volume of data and to be sure of its integrity.

Different Types of Data Analytics

Let me take you through the main types of analytics and the scenarios under which they are normally employed.

1. Descriptive Analytics

As the name implies, descriptive analysis or statistics can summarize raw data and convert it into a form that can be easily understood by humans. It can describe in detail an event that has occurred in the past. This type of analytics is helpful in deriving any pattern from past events, or drawing interpretations from them, so that better strategies for the future can be framed.

This is the most frequently used type of analytics across organizations. It's crucial in revealing the key metrics and measures within any business.

2. Diagnostic Analytics

The obvious successor to descriptive analytics is diagnostic analytics. Diagnostic analytical tools aid an analyst to dig deeper into an issue at hand so that they can arrive at the source of a problem.

In a structured business environment, tools for both descriptive and diagnostic analytics go hand-in-hand!

3. Predictive Analytics

Any business that is pursuing success should have foresight. Predictive analytics helps businesses to forecast trends based on current events. Whether it's predicting the probability of an event happening in the future or estimating the accurate time it will happen, all of this can be determined with the help of predictive analytical models.

Usually, many different but co-dependent variables are analyzed to predict a trend in this type of analysis. For example, in the healthcare domain, prospective health risks can be predicted based on an individual's habits/diet/genetic composition. Therefore, these models are most important across various fields.

4. Prescriptive Analytics

This type of analytics explains the step-by-step process in a situation. For instance, a prescriptive analysis is what comes into play when your Uber driver gets the easier route from Gmaps. The best route was chosen by considering the distance of every available route from your pick-up point to the destination and the traffic constraints on each road.

Application of Business Analytics

 27 AUG 2019 1 COMMENT

Companies use Business Analytics (BA) to make data-driven decisions. The insight gained by BA enables these companies to automate and optimize their business processes. In fact, data-driven companies that utilize Business Analytics achieve a competitive advantage because they are able to use the insights to:

 Conduct data mining (explore data to find new patterns and relationships)
 Complete statistical analysis and quantitative analysis to explain why certain results occur
 Test previous decisions using A/B testing and multivariate testing
 Make use of predictive modeling and predictive analytics to forecast future results

Business Analytics also provides support for companies in the process of making proactive tactical decisions, and BA makes it possible for those companies to automate decision making in order to support real-time responses.

Challenges with Business Analytics
Penn State University’s John Jordan described the challenges with
Business Analytics: there is “a greater potential for privacy invasion,
greater financial exposure in fast-moving markets, greater potential
for mistaking noise for true insight, and a greater risk of spending lots
of money and time chasing poorly defined problems or
opportunities.” Other challenges with developing and implementing
Business Analytics include…

 Executive Ownership: Business Analytics requires buy-in from


senior leadership and a clear corporate strategy for integrating
predictive models
 IT Involvement: Technology infrastructure and tools must be able to
handle the data and Business Analytics processes
 Available Production Data vs. Cleansed Modeling Data: Watch for
technology infrastructure that restricts available data for historical
modeling, and know the difference between historical data for model
development and real-time data in production
 Project Management Office (PMO): The correct project
management structure must be in place in order to implement
predictive models and adopt an agile approach
 End user Involvement and Buy-In: End users should be involved in
adopting Business Analytics and have a stake in the predictive model
 Change Management: Organizations should be prepared for the
changes that Business Analytics bring to current business and
technology operations
 Explainability vs. the “Perfect Lift”: Balance building precise
statistical models with being able to explain the model and how it will
produce results