Lecture Notes On Linear Programming
Convex sets, hyperplanes, extreme points, convex polyhedra, basic solutions and basic feasible solutions.
The set of all feasible solutions of an L.P.P. is a convex set. The objective function of an L.P.P. assumes its optimal value at an extreme point of the convex set of feasible solutions. A b.f.s. to an L.P.P. corresponds to an extreme point of the convex set of all feasible solutions.
Duality theory: the dual of the dual is the primal; the relation between the objective values of the dual and the primal problems; dual problems with at most one unrestricted variable and one equality constraint.
Inventory Control.
CHAPTER I
Examples
2. (Diet problem) A patient needs daily 5mg, 20mg and 15mg of vitamins A, B
and C respectively. The vitamins available from a mango are 0.5mg of A,
1mg of B, 1mg of C, that from an orange is 2mg of B, 3mg of C and that
from an apple is 0.5mg of A, 3mg of B, 1mg of C. If the costs of a mango, an
orange and an apple are Rs 0.50, Rs 0.25 and Rs 0.40 respectively, find the
minimum cost of buying the fruits so that the daily requirement of the
patient is met. Formulate the problem mathematically.
Solution: The problem is to find the minimum cost of buying the fruits. Let z be
the objective function. Let the number of mangoes, oranges and apples to be
bought so that the cost is minimum and the daily requirement of the vitamins
is met be x1, x2, x3 respectively. Then the problem is
Minimize z = 0.50x1 + 0.25x2 + 0.40x3
Subject to 0.5x1 + 0.5x3 ≥ 5   (vitamin A)
x1 + 2x2 + 3x3 ≥ 20   (vitamin B)
x1 + 3x2 + x3 ≥ 15   (vitamin C)
And x1 , x2 , x3 ≥ 0 .
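A quick numerical cross-check of this formulation (an addition to the notes, assuming SciPy is available): scipy.optimize.linprog minimizes by default, so each ≥ constraint is multiplied by −1 to put it in ≤ form.

```python
from scipy.optimize import linprog

# Minimize z = 0.50 x1 + 0.25 x2 + 0.40 x3 (cost of mangoes, oranges, apples)
c = [0.50, 0.25, 0.40]
# linprog expects A_ub @ x <= b_ub, so each ">=" row is negated.
A_ub = [[-0.5,  0.0, -0.5],   # vitamin A: 0.5 x1 + 0.5 x3 >= 5
        [-1.0, -2.0, -3.0],   # vitamin B: x1 + 2 x2 + 3 x3 >= 20
        [-1.0, -3.0, -1.0]]   # vitamin C: x1 + 3 x2 + x3 >= 15
b_ub = [-5, -20, -15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(res.x, res.fun)   # optimal purchase quantities and minimum cost
```

The solver buys no mangoes: apples supply vitamin A more cheaply, and the minimum cost works out to Rs 53/12 ≈ 4.42.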
Solution: Let the 3:3:4 mixture be called A and 2:4:2 mixture be called B.
Let x1 , x2 tons of these two mixtures be produced to get maximum profit.
Thus the objective function is z = 15x1 + 12x2 which is to be maximized.
Let us denote the amounts of Nitrogen, Phosphate and Potash by N, Ph and P respectively.
Then in the mixture A, N/3 = Ph/3 = P/4 = k1 (say),
so N = 3k1 , Ph = 3k1 , P = 4k1 , and x1 = 10k1 .
Similarly for the mixture B, N = 2k2 , Ph = 4k2 , P = 2k2 , and x2 = 8k2 .
Thus the first constraint is (3/10)x1 + (1/4)x2 ≤ 180 [since the amount of
nitrogen in A is 3k1 = (3/10)x1 and in B is 2k2 = (1/4)x2].
Similarly (3/10)x1 + (1/2)x2 ≤ 250 and (2/5)x1 + (1/4)x2 ≤ 220 .
Hence the problem is
Maximize z = 15x1 + 12x2
Subject to (3/10)x1 + (1/4)x2 ≤ 180
(3/10)x1 + (1/2)x2 ≤ 250
(2/5)x1 + (1/4)x2 ≤ 220
And x1 , x2 ≥ 0.
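Since the optimum of a two-variable L.P.P. is attained at an extreme point of the feasible region, the formulation can be sanity-checked by enumerating all intersections of constraint boundaries (a brute-force sketch added here, not part of the notes):

```python
from itertools import combinations

# Constraints in the form a*x1 + b*x2 <= rhs; the axes x1 >= 0, x2 >= 0
# are included as -x1 <= 0 and -x2 <= 0.
cons = [(3/10, 1/4, 180), (3/10, 1/2, 250), (2/5, 1/4, 220),
        (-1, 0, 0), (0, -1, 0)]

def feasible(x1, x2, eps=1e-9):
    return all(a*x1 + b*x2 <= r + eps for a, b, r in cons)

best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:          # parallel boundary lines: no crossing
        continue
    x1 = (r1*b2 - r2*b1) / det    # Cramer's rule for the crossing point
    x2 = (a1*r2 - a2*r1) / det
    if feasible(x1, x2):
        z = 15*x1 + 12*x2
        if best is None or z > best[0]:
            best = (z, x1, x2)

print(best)   # (max z, x1, x2)
```

The maximum profit is 8880 at x1 = 400, x2 = 240, attained where the nitrogen and potash constraints are both binding.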
6. A coin to be minted contains 40% silver, 50% copper, 10% nickel. The mint
has available alloys A, B, C and D having the following composition and
costs, and availability of alloys:
Alloy   % silver   % copper   % nickel   Cost per Kg
A       30         60         10         Rs 11
B       35         35         30         Rs 12
C       50         50         0          Rs 16
D       40         45         15         Rs 14
Availability of alloys: 1000 Kgs in total.
Nurses report to the hospital wards at the beginning of each period and work for
eight consecutive hours. The hospital wants to determine the minimum number of
nurses so that there may be sufficient number of nurses available for each period.
Formulate this as a L.P.P.
x1 + x2 ≥ 70.
x2 + x3 ≥ 60
x3 + x4 ≥ 50
x4 + x5 ≥ 20
x5 + x6 ≥ 30
x6 + x1 ≥ 60 , xj ≥ 0, j = 1, 2, … … , 6
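The formulation can be checked numerically with scipy.optimize.linprog (an assumed solver, not part of the notes); note that if the xj were required to be integers, an integer-programming solver would be needed instead.

```python
from scipy.optimize import linprog

demand = [70, 60, 50, 20, 30, 60]   # requirement for each of the 6 periods
n = 6
# The i-th constraint above is x_i + x_{i+1} >= demand_i (indices cyclic),
# written as <= after multiplying through by -1.
A_ub = [[0.0] * n for _ in range(n)]
for i in range(n):
    A_ub[i][i] = -1.0
    A_ub[i][(i + 1) % n] = -1.0
b_ub = [-d for d in demand]

res = linprog([1.0] * n, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print(res.fun)   # minimum total number of nurses
```

The LP minimum is 150 nurses; a matching lower bound comes from adding the 1st, 3rd and 5th constraints, 70 + 50 + 30 = 150.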
From the discussion above, now we can mathematically formulate a general Linear
Programming Problem which can be stated as follows.
……………………………………………..
By using the matrix and vector notation the problem can be expressed in a compact
form as
In all practical discussions, bi ≥ 0 for all i. If some of them are negative, we make them
non-negative by multiplying both sides of the corresponding inequality by (−1).
Optimize z = c T x subject to Ax = b, x ≥ 0 .
Feasible solution to a L.P.P: A set of values of the variables, which satisfy all the
constraints and all the non-negative restrictions of the variables, is known as the
feasible solution (F.S.) to the L.P.P.
There are two ways of solving a linear programming problem: (1) Geometrical
method and (2) Algebraic method.
Examples
[Graph: the lines 8x+5y=60 and 4x+5y=40 in the first quadrant, with the profit lines Z=450 and Z=1150.]
The constraints are treated as equations along with the non negativity relation. We
confine ourselves to the first quadrant of the xy plane and draw the lines given by
those equations. Then the directions of the inequalities indicate that the striped
region in the graph is the feasible region. For any particular value of z, the graph of
the objective function regarded as an equation is a straight line (called the profit
line in a maximization problem) and as z varies, a family of parallel lines is
generated. We have drawn the line corresponding to z=450. We see that the profit
z is proportional to the perpendicular distance of this straight line from the origin.
Hence the profit increases as this line moves away from the origin. Our aim is to
find a point in the feasible region which will give the maximum value of z. In order
to find that point we move the profit line away from origin keeping it parallel to
itself. By doing this we find that (5,4) is the last point in the feasible region which
the moving line encounters. Hence we get the optimal solution zmax = 1150 for
x = 5, y = 4 .
2. Solve graphically:
Minimize z = 3x + 5y
Subject to 2x + 3y ≥ 12
−x + y ≤ 3
x ≤ 4
y ≤ 3
[Graph: the lines 2x+3y=12, −x+y=3, x=4 and y=3, with the cost line Z=30.]
Here the striped area is the feasible region. We have drawn the cost line
corresponding to z=30. As this is a minimization problem the cost line is moved
towards the origin; the last point of the feasible region it touches is the vertex
(4, 4/3), where 2x + 3y = 12 meets x = 4. Hence zmin = 3(4) + 5(4/3) = 56/3 for
x = 4, y = 4/3 .
3. Solve graphically:
Minimize z = x + y
Subject to 5x + 9y ≤ 45
x+ y ≥ 2
y ≤ 4 , x, y ≥ 0
Here the striped area is the feasible region. We have drawn the cost line
corresponding to z=4. As this is a minimization problem the cost line
when moved towards the origin coincides with the boundary line
x + y = 2 and the optimum value is attained at all points lying on the
line segment joining (2,0) and (0,2) including the end points. Hence there
are an infinite number of solutions. In this case we say that alternative
optimal solution exists.
[Graph: the lines 5x+9y=45, y=4 and x+y=2, with the cost line z=4 parallel to x+y=2.]
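The graphical reasoning can be imitated in code (an illustrative addition, not from the notes): enumerate the intersection points of the boundary lines, keep the feasible ones, and evaluate z = x + y at each. Two distinct vertices attain the minimum, confirming the alternative optima.

```python
from itertools import combinations

# All constraints as a*x + b*y <= r (including x >= 0, y >= 0).
cons = [(5, 9, 45), (-1, -1, -2), (0, 1, 4), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a*x + b*y <= r + eps for a, b, r in cons)

verts = []
for (a1, b1, r1), (a2, b2, r2) in combinations(cons, 2):
    det = a1*b2 - a2*b1
    if det == 0:
        continue                      # parallel boundary lines
    x = (r1*b2 - r2*b1) / det         # Cramer's rule
    y = (a1*r2 - a2*r1) / det
    if feasible(x, y):
        verts.append((x + y, x, y))

zmin = min(v[0] for v in verts)
optimal = [v for v in verts if abs(v[0] - zmin) < 1e-9]
print(zmin, optimal)   # two optimal vertices -> alternative optima
```

Both (2, 0) and (0, 2) give zmin = 2, and so does every point of the segment joining them.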
4.
5.
6. Solve graphically
Maximize z = 3x + 4y
Subject to x − y ≥ 0
x + y ≥ 1
−x + 3y ≤ 3 , x, y ≥ 0
[Graph: the lines x−y=0, x+y=1 and −x+3y=3, with the profit line z=12; the feasible region is unbounded.]
The striped region in the graph is the feasible region, which is unbounded. For any
particular value of z, the graph of the objective function regarded as an equation is
a straight line (called the profit line in a maximization problem) and as z varies, a
family of parallel lines is generated. We have drawn the line corresponding to
z=12. We see that the profit z is proportional to the perpendicular distance of this
straight line from the origin. Hence the profit increases as this line moves away
from the origin. As we move the profit line away from origin keeping it parallel to
itself we see that there is no finite maximum value of z.
Ex: Keeping everything else unaltered try solving the problem as a minimization
problem.
7. Solve graphically
Maximize z = 2x − 3y
Subject to x + y ≤ 2
x + y ≥ 4
x, y ≥ 0
[Graph: the parallel lines x+y=2 and x+y=4; the regions x+y ≤ 2 and x+y ≥ 4 do not intersect in the first quadrant.]
It is clear that there is no feasible region.
In algebraic method, the problem can be solved only when all constraints are
equations. We now show how the constraints can be converted into equations.
Suppose a constraint such as
x1 − 2x2 + 7x3 ≤ 4
involves the sign ≤ . Then a non-negative variable x4, called a slack variable, is added to the left hand side and the constraint
is converted into the equation
x1 − 2x2 + 7x3 + x4 = 4 .
From the above it is clear that the slack variables are non-negative quantities.
Similarly a constraint such as
x1 − 2x2 + 7x3 ≥ 4
is converted into an equation by subtracting a non-negative variable x4, called a surplus variable, from the left hand side:
x1 − 2x2 + 7x3 − x4 = 4 .
zad = c1x1 + c2x2 + ⋯ … + crxr + 0xr+1 + 0xr+2 + ⋯ … + 0xn , and then the
problem can be written as
where aj = (a1j, a2j, … … , amj)T is a column vector associated with the vector
xj , j = 1,2, … … , n .
x = (x1, x2, … … , xr, xr+1 , xr+2 , … … , xn)T is a n-component column vector, and
It is worth noting that the column vectors associated with the slack variables are all
unit vectors. As the cost components of the slack and surplus variables are all zero,
it can be verified easily that the solution set which optimizes zad also optimizes z.
Hence to solve the original L.P.P it is sufficient to solve the standard form of the
L.P.P. So, for further discussions we shall use the same notation for zad and z.
Problems
Subject to 4x1 + x2 − x3 + x4 = 7
2x1 − 3x2 + x3 + x5 = 12
x1 + x2 + x3 = 8
4x1 + 7x2 − x3 − x6 = 16 , xj ≥ 0, j = 1, 2, … , 6 .
Subject to 4x1 + x2 + x3 ≥ 4
7x1 + 4x2 − x3 ≤ 25 , xj ≥ 0, j = 1, 3 , x2 unrestricted in sign .
Solution: Introducing slack and surplus variables and writing x2 = x2′ − x2″ ,
where x2′ ≥ 0, x2″ ≥ 0, the problem in the standard form is
Maximize z = 2x1 + 3x2′ − 3x2″ − x3 + 0x4 + 0x5
CHAPTER II
Let us consider m linear equations with n variables (n > m) and let the set of
equations be
…………………………………………
Ax = b, where,
We further assume that R(A) = m, which indicates that all equations are linearly
independent and none of them are redundant.
BxB = b,
Where B is the basis matrix and xB is the m component column vector consisting
of the basic variables. Using the matrix inversion method of finding the solution of
a set of equations
Non-Degenerate Basic Solution: If the values of all the basic variables are non-
zero then the basic solution is known as a Non-Degenerate Basic Solution.
Degenerate Basic Solution: If the value of at least one basic variable is zero then
the basic solution is known as a Degenerate Basic Solution.
Basic Feasible Solution (B.F.S): The solution set of a L.P.P. which is feasible as
well as basic is known as a Basic Feasible Solution.
Degenerate B.F.S: The solution to a L.P.P. where the value of at least one basic
variable is zero is called a Degenerate B.F.S.
Examples
1. Find the basic solutions of the system of equations given below and identify
the nature of the solution.
2x1 + 4x2 − 2x3 = 10
10x1 + 3x2 + 7x3 = 33
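A small enumeration sketch (an addition, not part of the notes) makes Example 1 concrete: with n = 3 variables and m = 2 equations, each basic solution is obtained by setting one variable to zero and solving the remaining 2×2 system.

```python
from fractions import Fraction
from itertools import combinations

#  2 x1 + 4 x2 - 2 x3 = 10
# 10 x1 + 3 x2 + 7 x3 = 33
A = [[2, 4, -2], [10, 3, 7]]
b = [10, 33]

solutions = {}
for i, j in combinations(range(3), 2):          # choose two basic variables
    det = A[0][i]*A[1][j] - A[0][j]*A[1][i]
    if det == 0:
        continue    # the chosen columns are L.D: no basic solution here
    xi = Fraction(b[0]*A[1][j] - A[0][j]*b[1], det)   # Cramer's rule
    xj = Fraction(A[0][i]*b[1] - b[0]*A[1][i], det)
    x = [Fraction(0)] * 3
    x[i], x[j] = xi, xj
    solutions[(i, j)] = x
    kind = "feasible" if min(x) >= 0 else "not feasible"
    print((i + 1, j + 1), x, kind)
```

The three basic solutions are (3, 1, 0) and (0, 4, 3), which are basic feasible and non-degenerate, and (4, 0, −1), which is basic but not feasible.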
2. Given that x1 = 2, x2 = −1, x3 = 0 is a solution of a system of equations
3x1 − 2x2 + x3 = 8
9x1 − 6x2 + 4x3 = 24
Is this solution basic? Justify.
CHAPTER III
Point Set: Point sets are sets whose elements are all points in En .
Line: If x1 = (x11 , x12 , … … , x1n)T and x2 = (x21 , x22 , … … , x2n)T be two points in
En , then the line joining the points x1 and x2 , (x1 ≠ x2) is a set X of points given
by
Line segment: If x1 = (x11 , x12 , … … , x1n)T and x2 = (x21 , x22 , … … , x2n)T be two
points in En , then the line segment joining the points x1 and x2 , (x1 ≠ x2) is a set
X of points given by
Hyperplane: A hyperplane in En is a set of points
X = {x: c T x = k} ,
i.e. the set of points which satisfy c1x1 + c2x2 + ⋯ + cnxn = k .
A hyperplane divides the space En into three mutually disjoint sets: the hyperplane
itself and the two open half-spaces {x: c T x < k} and {x: c T x > k} .
In a L.P.P. , the objective function and the constraints with equality sign are all
hyperplanes.
Hypersphere: A hypersphere in En with centre at a = (a1 , a2 , … … , an)T and
radius s > 0 is defined to be the set X of points given by X = {x: |x − a| = s} ,
where x = (x1 , x2 , … … , xn)T .
An interior point of a set: A point a is an interior point of the set S if there exists an
s- neighbourhood about the point a which contains only points of the same set.
From the definition it is clear that an interior point of a set S is an element of the
set S.
Closed set: A set S is said to be closed if it contains all its boundary points.
Bounded set: A set S is said to be strictly bounded if there exists a positive
number r such that |x| ≤ r for every point x belonging to S. If there exists an r
such that x ≥ r for every x belonging to S, then the set is bounded from below.
The set of all convex combinations of the points x1, x2, … , xk is
X = {x : x = λ1x1 + λ2x2 + ⋯ + λkxk , λ1 + λ2 + ⋯ + λk = 1 and λi ≥ 0 for all i}.
From the above definition it is also clear that a line segment is a convex
combination of two distinct points in the same vector space.
Convex Set: A point set is said to be a convex set if the convex combination of any
two points of the set is in the set. In other words, if the line segment joining any
two distinct points of the set is in the set then the set is known as a convex set.
Extreme points of a convex set: A point x is an extreme point of the convex set C if
it cannot be expressed as a convex combination of two other distinct points x1, x2
of the set C, i.e., x cannot be expressed as x = λx1 + (1 − λ)x2 with 0 < λ < 1 .
From the definition, it is clear that all extreme points of a convex set are boundary
points but all boundary points are not necessarily extreme points. Every point of
the boundary of a circle is an extreme point of the convex set which includes the
boundary and interior of the circle. The extreme points of a square are its four
vertices.
Convex hull: If X be a point set, then Convex hull of X which is denoted by C(X),
is the set of all convex combinations of set of points from X. If the set X consists of
a finite number of points then the convex hull C(X) is called a convex polyhedron.
For a convex polyhedron, any point in the set can be expressed as a convex
combination of its extreme points.
Theorem: The intersection X = X1 ∩ X2 of two convex sets X1 and X2 is a convex set.
Proof: Let x1, x2 be two distinct points of X. Then x1, x2 ∈ X1 and x1, x2 ∈ X2. Let x3 be a
point given by
x3 = λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1 .
As x3 is a convex combination of the points x1, x2 and X1, X2 are convex sets, then
x3 is a point of X1 as well as X2. Hence x3 is a point of X = X1 ∩ X2 . Hence X is a
convex set.
Theorem: A hyperplane is a convex set.
Proof: Let the point set X be a hyperplane given by X = {x: c T x = k} . Let x1, x2
be two distinct points of X. Then cTx1 = k and cTx2 = k . Let x3 be a point
given by x3 = λx1 + (1 − λ)x2, 0 ≤ λ ≤ 1 . Then
cTx3 = λcTx1 + (1 − λ)cTx2 = λk + (1 − λ)k = k ,
so x3 ∈ X and X is a convex set.
Theorem: The convex hull C(S) of a finite point set S is a convex set.
Proof: Let S be a point set consisting of a finite number of points x1, x2, … , xk
in Rn. If S has only one point, then there is nothing to prove. Otherwise, any two
points u, v of C(S) can be written as u = a1x1 + ⋯ + akxk and v = b1x1 + ⋯ + bkxk ,
where ai ≥ 0, bi ≥ 0 and Σ ai = Σ bi = 1. For 0 ≤ λ ≤ 1,
λu + (1 − λ)v = c1x1 + ⋯ + ckxk , where ci = λai + (1 − λ)bi .
Now, Σ ci = λ Σ ai + (1 − λ) Σ bi = 1 and ci ≥ 0 as ai ≥ 0, bi ≥ 0 and
0 ≤ λ ≤ 1 . So λu + (1 − λ)v, a convex combination of two distinct points of C(S),
again belongs to C(S). Thus C(S) is a convex set.
Now each of the finite number of constraints represented by Ax = b defines a closed
set, and the finite number of inequations represented by x ≥ 0 also define closed
sets; therefore the set of all feasible solutions, being the intersection of a finite
number of closed sets, is a closed set.
Note: If the L.P.P has at least two feasible solutions then it has an infinite number
of feasible solutions, since every convex combination of two feasible solutions is
again feasible.
Proof: Let A = (a1, a2, … … , an) be the coefficient matrix of order m x n, n > m
and let us assume that B be the basis matrix B = (a1, a2, … … , am) where
a1, a2, … … , am are the column vectors corresponding to the first m variables
x1, x2, … … , xm.
Let x be not an extreme point of the convex set X. Then there exist two points
x′, x″ (x′ ≠ x″) in X such that x can be expressed as a convex combination of them. Write
x′ = [u1, v1], x″ = [u2, v2] , where u1 contains the m components of x′ corresponding
to the variables x1, x2, … , xm and v1 contains the remaining (n − m)
components of x′. Similarly u2 and v2 contain the first m and the remaining (n − m)
components of x″ respectively.
Thus x′ = [u1, 0], x″ = [u2, 0] . Hence u1 and u2 are the m components of the
solution set corresponding to the basic variables x1, x2, … , xm for which the basis
matrix is B. Then u1 = B–1b and u2 = B–1b . Hence, xB = u1 = u2 . So the three
points x, x′, x″ are not different and therefore x cannot be expressed as a convex
combination of two distinct points. So a B.F.S is an extreme point.
Conversely, let us assume that x is an extreme point of the convex set X of
feasible solutions of the equation Ax = b, x ≥ 0.
Let x = [x1, x2, … , xk, 0, … ,0] , number of zero components are n − k, xj ≥ 0 for
j = 1,2, … , k.
If the column vectors a1, a2, … … , ak associated with the variables x1, x2, … , xk
respectively are L.I (which is possible only for k ≤ m) then x, the extreme point
of the convex set, is a B.F.S and we have nothing to prove.
Let δ > 0 . Then from the above two equations we get
(x1 ± δλ1)a1 + (x2 ± δλ2)a2 + ⋯ + (xk ± δλk)ak = b .
Consider δ such that 0 < δ < l , where l = minj { xj / |λj| : λj ≠ 0 } .
Note: There is a one to one correspondence between the extreme points and B.F.S
in case of non-degenerate B.F.S.
Examples
Proof: Let x = (x1, x2, … … , xn)T be a feasible solution to the set of equations
Ax = b, x ≥ 0 . Out of the n components of the feasible solution, let k components be
positive and the remaining n − k components be zero (1 ≤ k ≤ n) and we also
make an assumption that the first k components are positive and the last n − k
components are zero.
(i) k ≤ m and the column vectors a1, a2, … … , ak are linearly independent (L.I)
(ii) k > m
(iii) k ≤ m and the column vectors a1, a2, … … , ak are linearly dependent (L.D)
Case (i): If k ≤ m and the column vectors a1, a2, … … , ak are L.I , then by definition
the F.S. is a B.F.S. If k = m , the solution is a non-degenerate B.F.S and if k < m ,
the solution is a degenerate B.F.S.
Case(ii)If k > m and the columns a1, a2, … … , ak are L.D, the solution is not basic.
By applying a technique given below, the number of positive components in the
solution can be reduced one by one till the corresponding column vectors are L.I.
( This will be possible as a set of one non-null vector is L.I.)
Procedure: As the column vectors a1, a2, … … , ak are L.D, there exist scalars
λj, j = 1, 2, … , k , not all zero, such that
λ1a1 + λ2a2 + ⋯ + λkak = 0 ..................(1)
Now at least one λj is positive (if not, multiply both sides of equation (1) by −1 ).
Let u = maxj ( λj / xj ) , j = 1, 2, … , k , which is positive, and set
x′ = [ x1 − λ1/u , x2 − λ2/u , … … , xk − λk/u , 0, … … , 0 ] .
Then xj′ = xj − λj/u ≥ 0 , j = 1, 2, … … , k , and at least one of them is equal to zero.
(iii)In this case, as the vectors are L.D, we use the above procedure to get a B.F.S.
Theorem (statement only) The necessary and sufficient condition that all basic
solutions will exist and will be non-degenerate is that , every set of m column
vectors of the augmented matrix [A b] is linearly independent.
Problems
By cross multiplication,
λ1/(−34) = λ2/34 = λ3/34 = k = 1/34 (say).
Then λ1 = −1, λ2 = 1, λ3 = 1 . Hence −a1 + a2 + a3 = 0 .
Therefore u = maxj ( λj/xj : λj > 0 ) = max ( 1/3 , 1/2 ) = 1/2 .
x′ = [ x1 − λ1/u , x2 − λ2/u , x3 − λ3/u ] = [1 + 2, 3 − 2, 2 − 2] = [3, 1, 0] , which is a
B.F.S.
A = [a1 a2 a3] = [ 1  2  3
                   2 −1  1 ] and R(A) = 2 . The equations Ax = b can be
written as x1a1 + x2a2 + x3a3 = b . As (1, 1, 2) is a solution of Ax = b, we
have
a1 + a2 + 2a3 = b ...................(1)
Hence the two equations are L.I. , but a1 , a2 , a3 are L.D. So there exist three
constants λ1, λ2, λ3 (not all zero) such that λ1a1 + λ2a2 + λ3a3 = 0 ,
or, λ1 [1, 2]T + λ2 [2, −1]T + λ3 [3, 1]T = 0 , which gives
λ1 + 2λ2 + 3λ3 = 0 and 2λ1 − λ2 + λ3 = 0 .
By cross multiplication,
λ1/5 = λ2/5 = λ3/(−5) = k = 1/5 (say).
Then we get λ1 = 1, λ2 = 1, λ3 = −1 .
Hence a1 + a2 − a3 = 0................................(2)
Therefore u = maxj ( λj/xj : λj > 0 ) = max ( 1/1 , 1/1 ) = 1 , which occurs at j = 1, 2.
Then x′ = [1 − 1, 1 − 1, 2 + 1] = [0, 0, 3] , i.e.
0a1 + 0a2 + 3a3 = b , which shows that (0, 0, 3) is a feasible solution, and as
a1 , a3 and also a2 , a3 are L.I , the solution is a (degenerate) B.F.S.
Again taking λ1/5 = λ2/5 = λ3/(−5) = k′ = −1/5 , we get λ1 = −1, λ2 = −1, λ3 = 1 ,
i.e. −a1 − a2 + a3 = 0 , which gives
another B.F.S. as (3, 3, 0).
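A direct check (code added for illustration, not from the notes) that the reduction stayed inside the solution set of Ax = b, with b = a1 + a2 + 2a3 = (9, 3)T:

```python
A = [[1, 2, 3], [2, -1, 1]]
b = [9, 3]    # b = a1 + a2 + 2 a3, since (1, 1, 2) is the given solution

def residual(x):
    """A x - b, computed componentwise; should be [0, 0] for a solution."""
    return [sum(aij * xj for aij, xj in zip(row, x)) - bi
            for row, bi in zip(A, b)]

for x in ([1, 1, 2], [0, 0, 3], [3, 3, 0]):
    print(x, residual(x))
```

All three points, the original feasible solution and the two B.F.S. obtained from it, satisfy the same system.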
Let x = [x1, x2, … … , xn] T be an optimal feasible solution to the problem which
makes the objective function maximum. Out of x1, x2, … … , xn let k components
(1 ≤ k ≤ n) are positive and the remaining (n − k) components are zero. We
further make an assumption that the first k components are positive. Thus the
optimal solution is x = [x1, x2, … , xk, 0, … ,0]T , (n − k) zero components, and
z = c T x = c1x1 + c2x2 + ⋯ + ckxk .
We know x1a1 + x2a2 + ⋯ + xkak = b , xj ≥ 0, j = 1, 2, … , k ..................(1)
If the column vectors a1, a2, … , ak are L.D, then there exist scalars λj such that
λ1a1 + λ2a2 + ⋯ + λkak = 0 with at least one λj > 0 ..................(2)
Taking u = maxj ( λj / xj ) , which is a positive quantity, and the solution set
x′ = [x1′, x2′, … , xk′, 0, … , 0]T , where xj′ = xj − λj/u ≥ 0 , j = 1, 2, … , k ,
we get
z1 = c1x1′ + c2x2′ + ⋯ + ckxk′ = Σ cj ( xj − λj/u ) = Σ cjxj − (1/u) Σ cjλj = z − (1/u) Σ cjλj ,
all sums running over j = 1, 2, … , k .
If Σ cjλj = 0 (which will be proved at the end), then z1 = z and the solution set
x′ is also an optimal solution. If the column vectors corresponding to the positive
components of x′ are L.I, then the optimal solution x′ is a B.F.S. If they are L.D,
then repeating the above procedure at most a finite number of times
we will finally get a B.F.S (as a single non-null vector is L.I.) which is also an
optimal solution.
To prove that Σ cjλj = 0 , if possible let Σ cjλj ≠ 0 .
Then Σ cjλj > 0 or < 0 . We multiply Σ cjλj by a quantity δ (≠ 0) chosen such that
δ Σ cjλj > 0 .
Hence Σ cjxj + δ Σ cjλj > Σ cjxj , or, Σ cj ( xj + δλj ) > z . …. (3)
For sufficiently small values of |δ| it is always possible to get xj + δλj ≥ 0 for all
j. So the solution set (xj + δλj), j = 1, 2, … , k is a feasible solution of the system
Ax = b. From (3) it is clear that this solution set gives a value of the objective
function greater than z, which contradicts the fact that z is the maximum value of
the objective function.
Hence Σ cjλj = 0 .
CHAPTER V
Simplex Method
After introduction of the slack and surplus variables and by proper adjustment of z,
let us consider the L.P.P. as
Maximize z = c T x subject to Ax = b, x ≥ 0,
where aj = (a1j, a2j, … … , amj)T is a column vector associated with the vector
xj , j = 1,2, … … , n .
x = (x1, x2, … … , xr, xr+1 , xr+2 , … … , xn)T is a n-component column vector, where
xr+1 , xr+2 , … … , xn are either slack or surplus variables and
As none of the m converted equations is redundant, there exists at least
one set of m column vectors, say β1, β2, … , βm , of the coefficient matrix A which are
linearly independent. Then one basis matrix B, which is a submatrix of A, is
given by B = [β1 β2 … βm] .
Let xB1, xB2, … , xBm be the variables associated with the basic vectors
β1, β2, … , βm respectively. Then the basic variable vector is
xB = [xB1 xB2 … xBm]T.
zB is the value of the objective function corresponding to the B.F.S, where the
basis matrix is B .
Now, β1, β2, … , βm are L.I and so form a basis of Em. Therefore all the vectors aj can
be expressed as a linear combination of β1, β2, … , βm .
Let aj = β1y1j + β2y2j + ⋯ + βmymj = Byj , where yj = [y1j y2j … ymj]T .
Therefore yj = B–1aj .
If the coefficient matrix A contains m unit column vectors which are L.I, then
this set of vectors constitute a basis matrix. Let e1, e2, … , ei, … , em be m
independent unit vectors of the coefficient matrix, all of which may not be
placed in the ascending order of i (i = 1,2, … , m). For example, e1, e2, e3 may
occur at the 5th ,7th , 3rd column of A respectively. But the basis matrix B is the
identity matrix. Hence the components of the solution set corresponding to the
basic variables are xBi = bi , i = 1,2, … , m and yj = B–1aj = aj , that is the vectors
yj are nothing but the column vectors aj due to this transformation.
Note: In the simplex method all equations are adjusted so that the basis matrix is
the identity matrix and bi ≥ 0 for all i.
If mini { xBi / yik : yik > 0 } = xBr / yrk , then the vector in the rth position of the current basis
will be replaced by ak . The rth row of the table is called the key row and yrk is
called the key element. If the value of r is not unique, then again the choice is
arbitrary.
Theorem: Minimum value of z is the negative of the maximum value of (−z) with
the same solution set. In other words, min z = −max(−z) with the same solution
set.
Examples
x1 + 2x2 − 2x3 + x4 = 30
x1 + 3x2 + x3 + x5 = 36 , x1, x2, x3, x4, x5 ≥ 0 .
zB = cTBxB = cB1 xB1 + cB2xB2 = 0, yj = B–1aj = I2aj = aj that is yij = aij . With
the above information we now proceed to construct the initial simplex table.
c                 5     2     2     0       0
Basis  cB   b     a1    a2    a3    a4(e1)  a5(e2)   Min ratio = xBi/yi1 , yi1 > 0
a4*    0    30    1*    2    −2     1       0        30/1 = 30* →
a5     0    36    1     3     1     0       1        36/1 = 36
zj−cj  z0=0      −5*   −2    −2     0       0
                  ↑
Rule of construction of the second table: The new basis is {a1, a5} and therefore
these must become the columns of the identity matrix. We make the necessary row
operations as follows:
R1 is the key row, a1 is the key column, y11 is the key element.
R1′ = new key row = (1/y11) R1
R2′ = R2 − y21 R1′
The same notations will be used in all the tables but the entries will keep changing.
c                 5     2     2     0     0
Basis  cB   b     a1    a2    a3    a4    a5    Min ratio = xBi/yi3 , yi3 > 0
a1     5    30    1     2    −2     1     0     ---
a5*    0    6     0     1     3*   −1     1     6/3 = 2* →
zj−cj  z=150      0     8    −12*   5     0
                              ↑
R2 is the key row, a3 is the key column, y23 is the key element.
R2′ = new key row = (1/y23) R2
R1′ = R1 − y13 R2′
c                 5     2     2     0      0
Basis  cB   b     a1    a2    a3    a4     a5
a1     5    34    1     8/3   0     1/3    2/3
a3     2    2     0     1/3   1    −1/3    1/3
zj−cj  z=174      0     12    0     1      4
Here all zj − cj ≥ 0, so the solution is optimal: max z = 174 at x1 = 34, x2 = 0, x3 = 2.
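The key-row operations used above can be sketched as a small pivot routine (an illustrative addition, not from the notes; exact arithmetic via fractions avoids rounding):

```python
from fractions import Fraction

def pivot(rows, r, c):
    """One simplex pivot: R_r' = (1/y_rc) R_r, and R_i' = R_i - y_ic R_r'."""
    key = Fraction(rows[r][c])
    new_key_row = [Fraction(v) / key for v in rows[r]]
    return [new_key_row if i == r
            else [Fraction(v) - row[c] * w for v, w in zip(row, new_key_row)]
            for i, row in enumerate(rows)]

# Rows of the first table, columns ordered [b, a1, a2, a3, a4, a5].
R1 = [30, 1, 2, -2, 1, 0]   # key row of the first pivot, key element y11 = 1
R2 = [36, 1, 3, 1, 0, 1]
table2 = pivot([R1, R2], r=0, c=1)   # a1 enters the basis
print(table2)

table3 = pivot(table2, r=1, c=3)     # a3 enters, key element y23 = 3
print(table3[0][0], table3[1][0])    # basic values 34 and 2, as in the third table
```

Running the same routine twice reproduces the b-columns of the second and third tables.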
xB = [xB1, xB2, xB3]T = [x3, x4, x5]T = [b1, b2, b3]T = [1000, 600, 1000]T ,
cB = [cB1, cB2, cB3]T = [c3, c4, c5]T = [0, 0, 0]T ,
zB = cBTxB = cB1xB1 + cB2xB2 + cB3xB3 = 0 , yj = B–1aj = I3aj = aj , that is
yij = aij . With the above information we now proceed to construct the
initial simplex table.
c                  4     7     0     0     0
Basis  cB   b      a1    a2    a3    a4    a5    Min ratio = xBi/yi2 , yi2 > 0
a3     0    1000   2     1     1     0     0     1000/1 = 1000
a4     0    600    1     1     0     1     0     600/1 = 600
a5*    0    1000   1     2*    0     0     1     1000/2 = 500* →
zj−cj  z=0        −4    −7*    0     0     0
y32 is the key element. R3′ = new key row = (1/y32) R3 , Ri′ = Ri − yi2 R3′ , i = 1, 2.
Basis  cB   b     a1     a2    a3    a4    a5     Min ratio = xBi/yi1 , yi1 > 0
a3     0    500   3/2    0     1     0    −1/2    500/(3/2) = 1000/3
a4*    0    100   1/2*   0     0     1    −1/2    100/(1/2) = 200* →
a2     7    500   1/2    1     0     0     1/2    500/(1/2) = 1000
zj−cj  z=3500    −1/2*   0     0     0     7/2
y21 is the key element. R2′ = new key row = (1/y21) R2 , Ri′ = Ri − yi1 R2′ , i = 1, 3.
Basis  cB   b     a1    a2    a3    a4    a5
a3     0    200   0     0     1    −3     1
a1     4    200   1     0     0     2    −1
a2     7    400   0     1     0    −1     1
zj−cj  z=3600     0     0     0     1     3
All zj − cj ≥ 0, so the solution is optimal: max z = 3600 at x1 = 200, x2 = 400.
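A numerical cross-check of the optimum found in the third table, using scipy.optimize.linprog (an assumed solver, not part of the notes); maximization is done by minimizing the negated objective.

```python
from scipy.optimize import linprog

# Maximize z = 4 x1 + 7 x2 subject to the three <= constraints read off
# from the initial table above.
res = linprog(c=[-4, -7],
              A_ub=[[2, 1], [1, 1], [1, 2]],
              b_ub=[1000, 600, 1000],
              bounds=[(0, None)] * 2)
print(res.x, -res.fun)   # optimal extreme point and max z
```

The solver agrees with the simplex tables: max z = 3600 at (x1, x2) = (200, 400).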
c                 2    −3     0     0     0
Basis  cB   b     a1    a2    a3    a4    a5    Min ratio = xBi/yi1 , yi1 > 0
a3     0    7     2    −5     1     0     0     7/2
a4*    0    8     4*    1     0     1     0     8/4 = 2* →
a5     0    16    7     2     0     0     1     16/7
zj−cj  z′=0      −2*    3     0     0     0
y21 is the key element. R2′ = new key row = (1/y21) R2 , Ri′ = Ri − yi1 R2′ , i = 1, 3.
Basis  cB   b     a1    a2      a3    a4     a5
a3     0    3     0    −11/2    1    −1/2    0
a1     2    2     1     1/4     0     1/4    0
a5     0    2     0     1/4     0    −7/4    1
zj−cj  z′=4       0     7/2     0     1/2    0
All zj − cj ≥ 0, so max z′ = 4 at x1 = 2, x2 = 0.
c                 0     2     1     0       0
Basis  cB   b     a1    a2    a3    a4(e1)  a5(e2)   Min ratio = xBi/yi2 , yi2 > 0
a4     0    7     1     1    −2     1       0        7/1 = 7
a5*    0    3    −3     1*    2     0       1        3/1 = 3* →
zj−cj  z=0        0    −2*   −1     0       0
y22 is the key element. R2′ = new key row = (1/y22) R2 , R1′ = R1 − y12 R2′ .
Basis  cB   b     a1    a2    a3    a4    a5    Min ratio = xBi/yi1 , yi1 > 0
a4*    0    4     4*    0    −4     1    −1     4/4 = 1* →
a2     2    3    −3     1     2     0     1     ---
zj−cj  z=6       −6*    0     3     0     2
y11 is the key element. R1′ = new key row = (1/y11) R1 , R2′ = R2 − y21 R1′ .
Basis  cB   b     a1    a2    a3    a4     a5
a1     0    1     1     0    −1     1/4   −1/4
a2     2    6     0     1    −1     3/4    1/4
zj−cj  z=12       0     0    −3     3/2    1/2
In the third table zj − cj < 0 for j = 3, but yi3 < 0 for all i. Hence the problem has
an unbounded solution.
Show that the solution is not unique. Write down a general form of all the optimal
solutions.
Solution: Adding two slack variables x3, x4, one to each constraint, the converted
equations are
6x1 + 10x2 + x3 = 30
10x1 + 4x2 + x4 = 20
Simplex tables:
c                 5     2     0     0
Basis  cB   b     a1    a2    a3    a4(e1)   Min ratio = xBi/yi1 , yi1 > 0
a3     0    30    6     10    1     0        30/6 = 5
a4*    0    20    10*   4     0     1        20/10 = 2* →
zj−cj  z=0       −5*   −2     0     0
y21 is the key element. R2′ = new key row = (1/y21) R2 , R1′ = R1 − y11 R2′ .
Basis  cB   b     a1    a2      a3    a4      Min ratio = xBi/yi2 , yi2 > 0
a3*    0    18    0     38/5*   1    −3/5     18/(38/5) = 45/19* →
a1     5    2     1     2/5     0     1/10    2/(2/5) = 5
zj−cj  z=10       0     0*      0     1/2
y12 is the key element. R1′ = new key row = (1/y12) R1 , R2′ = R2 − y22 R1′ .
Basis  cB   b       a1    a2    a3      a4
a2     2    45/19   0     1     5/38   −3/38
a1     5    20/19   1     0    −1/19    5/38
zj−cj  z=10         0     0     0       1/2
In the second table, zj − cj ≥ 0 for all j. Therefore the solution is optimal and
max z = 10 at x1 = 2, x2 = 0. But z2 − c2 = 0 corresponding to a non-basic
vector a2 . Thus the solution is not unique. Using a2 to enter the next basis, the
third table gives the same value of z but for x1 = 20/19, x2 = 45/19. We know
that if there exists more than one optimal solution, then there exist an infinite
number of optimal solutions, given by the convex combinations of the optimal
solutions x′ = [2, 0]T and x″ = [20/19, 45/19]T . Hence all the optimal solutions
are given by
λx′ + (1 − λ)x″ = λ [2, 0]T + (1 − λ) [20/19, 45/19]T , 0 ≤ λ ≤ 1 .
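The claim can be verified directly (code added for illustration, not from the notes): by linearity every convex combination of the two optimal extreme points attains z = 10, and by convexity it stays feasible.

```python
def z(x1, x2):
    return 5 * x1 + 2 * x2

def feasible(x1, x2, eps=1e-9):
    # 6 x1 + 10 x2 <= 30 and 10 x1 + 4 x2 <= 20, with x1, x2 >= 0
    return (6*x1 + 10*x2 <= 30 + eps and 10*x1 + 4*x2 <= 20 + eps
            and x1 >= -eps and x2 >= -eps)

x_opt1 = (2.0, 0.0)
x_opt2 = (20/19, 45/19)
points = []
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    x1 = lam * x_opt1[0] + (1 - lam) * x_opt2[0]
    x2 = lam * x_opt1[1] + (1 - lam) * x_opt2[1]
    points.append((lam, x1, x2, z(x1, x2), feasible(x1, x2)))
    print(points[-1])
```

Every sampled combination is feasible with z = 10, illustrating the infinite family of optimal solutions.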
Artificial variables
x1 + x2 − 3x3 = 8 , xj ≥ 0, j = 1,2,3 .
First constraint is ≤ type and the second one is a ≥ type, so adding a slack and a
surplus variable respectively, the two constraints are converted into equations.
Hence the transformed problem can be written as
To get the initial B.F.S for using the simplex method, we require an identity matrix
as a sub-matrix of the coefficient matrix. To get that, we need to introduce some
more variables which will be called the artificial variables. Even if a constraint
is given as an equation, we still add an artificial variable (A.V) to get an initial
B.F.S. So after introducing artificial variables the above problem is written as
4x1 + 2x2 − x3 + x4 = 4
x1 + x2 − 3x3 + x7 = 8 , xj ≥ 0, j = 1, … , 7 .
1. No artificial variables are present in the basis at the optimal stage indicates
that all A.V.s are at the zero level and hence the solution obtained is optimal.
2. At the optimal stage, some artificial variables are present in the basis at the
positive level indicates that there does not exist a F.S to the problem.
3. At the optimal stage, some artificial variables are present in the basis but at
zero level indicates that some constraints are redundant.
In this method, after rewriting the constraints by introducing slack, surplus and
artificial variables, we adjust the objective function by assigning a large negative
cost, say −M, to each artificial variable. In the example given above, the objective
function becomes Maximize z = 2x1 + 3x2 − 4x3 + 0x4 + 0x5 − Mx6 − Mx7.
We then solve the problem using the simplex method as explained earlier; the only
point to remember is that once an artificial variable leaves the basis we drop the
column corresponding to the vector associated with that A.V.
c                    2       −3      0     0     0    −M
Basis  cB   b        a1      a2      a3    a4    a5    a6    Min ratio = xBi/yi1 , yi1 > 0
a3*    0    2        1*     −1       1     0     0     0     2/1 = 2* →
a4     0    46       5       4       0     1     0     0     46/5 = 9 1/5
a6     −M   32       7       2       0     0    −1     1     32/7 = 4 4/7
zj−cj  z=−32M       −7M−2*  −2M+3    0     0     M     0
c                    2      −3       0      0     0    −M
Basis  cB   b        a1     a2       a3     a4    a5    a6    Min ratio = xBi/yi2 , yi2 > 0
a1     2    2        1     −1        1      0     0     0     ---
a4     0    36       0      9       −5      1     0     0     36/9 = 4
a6*    −M   18       0      9*      −7      0    −1     1     18/9 = 2* →
zj−cj  z=4−18M       0     −9M+1*    7M+2   0     M     0
y32 is the key element. R3′ = new key row = (1/y32) R3 , Ri′ = Ri − yi2 R3′ , i = 1, 2.
Table 3 (the column of the artificial vector a6 is dropped):
c                 2    −3     0      0     0
Basis  cB   b     a1    a2    a3     a4    a5
a1     2    4     1     0     2/9    0    −1/9
a4     0    18    0     0     2      1     1
a2     −3   2     0     1    −7/9    0    −1/9
zj−cj  z=2        0     0     25/9   0     1/9
All zj − cj ≥ 0 and no artificial vector is in the basis, so max z = 2 at x1 = 4, x2 = 2, x3 = 0.
Solution: Introducing slack, surplus and artificial variables, the converted problem is
Maximize z = x1 + 2x2 + 0x3 + 0x4 − Mx5 − Mx6
subject to
x1 − 5x2 + x3 = 10
2x1 − x2 − x4 + x5 = 2
x1 + x2 + x6 = 10 , xj ≥ 0, j = 1, … , 6
c                    1       2     0     0    −M    −M
Basis  cB   b        a1      a2    a3    a4    a5    a6    Min ratio = xBi/yi1 , yi1 > 0
a3     0    10       1      −5     1     0     0     0     10/1 = 10
a5*    −M   2        2*     −1     0    −1     1     0     2/2 = 1* →
a6     −M   10       1       1     0     0     0     1     10/1 = 10
zj−cj  z=−12M       −3M−1*  −2     0     M     0     0
y21 is the key element. R2′ = new key row = (1/y21) R2 , Ri′ = Ri − yi1 R2′ , i = 1, 3.
Table 2 (the column of a5 is dropped):
Basis  cB   b        a1    a2            a3    a4           a6    Min ratio = xBi/yi2 , yi2 > 0
a3     0    9        0    −9/2           1     1/2          0     ---
a1     1    1        1    −1/2           0    −1/2          0     ---
a6*    −M   9        0     3/2*          0     1/2          1     9/(3/2) = 6* →
zj−cj  z=1−9M        0    −(3/2)M−5/2*   0    −(1/2)M−1/2   0
y32 is the key element. After one more pivot (a2 enters, a6 leaves and its column is dropped) we reach the final table:
Basis  cB   b     a1    a2    a3    a4
a3     0    36    0     0     1     2
a1     1    4     1     0     0    −1/3
a2     2    6     0     1     0     1/3
zj−cj  z=16       0     0     0     1/3
All zj − cj ≥ 0 and no artificial vector remains in the basis, so max z = 16 at x1 = 4, x2 = 6.
8. Solving by the Big M method prove that the following L.P.P. has no F.S.
Maximize z = 2x1 − x2 + 5x3
Subject to x1 + 2x2 + 2x3 ≤ 2
(5/2)x1 + 3x2 + 4x3 = 12
4x1 + 3x2 + 2x3 ≥ 24 , xj ≥ 0, j = 1, 2, 3 .
Solution: Introducing slack, surplus and artificial variables, the converted equations
and the adjusted objective function are
Maximize z = 2x1 − x2 + 5x3 + 0x4 − Mx5 + 0x6 − Mx7 subject to
x1 + 2x2 + 2x3 + x4 = 2
(5/2)x1 + 3x2 + 4x3 + x5 = 12
4x1 + 3x2 + 2x3 − x6 + x7 = 24 , xj ≥ 0, j = 1, … , 7 .
c                     2           −1      5      0    −M    0    −M
Basis  cB   b         a1          a2      a3     a4    a5    a6    a7    Min ratio = xBi/yi1 , yi1 > 0
a4*    0    2         1*          2       2      1     0     0     0     2/1 = 2* →
a5     −M   12        5/2         3       4      0     1     0     0     12/(5/2) = 24/5
a7     −M   24        4           3       2      0     0    −1     1     24/4 = 6
zj−cj  z=−36M        −(13/2)M−2* −6M+1   −6M−5   0     0     M     0
y11 is the key element. R1′ = new key row = (1/y11) R1 , Ri′ = Ri − yi1 R1′ , i = 2, 3.
Basis  cB   b         a1    a2      a3      a4           a5    a6    a7
a1     2    2         1     2       2       1            0     0     0
a5     −M   7         0    −2      −1      −5/2          1     0     0
a7     −M   16        0    −5      −6      −4            0    −1     1
zj−cj  z=4−23M        0     7M+5    7M−1    (13/2)M+2    0     M     0
All zj − cj ≥ 0, so the optimality condition is reached, but the artificial vectors a5
and a7 are in the basis at a positive level. Hence the given L.P.P. has no feasible solution.
Note: We need not complete the table if the optimality condition is reached.
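The same conclusion can be reached numerically (an added sketch, assuming scipy.optimize.linprog is available): the solver reports status 2, its code for an infeasible problem.

```python
from scipy.optimize import linprog

# Maximize z = 2 x1 - x2 + 5 x3, i.e. minimize -z, under the original
# constraints; the ">= 24" row is negated into "<=" form.
res = linprog(c=[-2, 1, -5],
              A_ub=[[1, 2, 2], [-4, -3, -2]],
              b_ub=[2, -24],
              A_eq=[[5/2, 3, 4]], b_eq=[12],
              bounds=[(0, None)] * 3)
print(res.status)   # 2: no feasible solution
```

Indeed the first constraint forces x1 ≤ 2, x2 ≤ 1, x3 ≤ 1, so 4x1 + 3x2 + 2x3 ≤ 13 < 24.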
CHAPTER VI
Duality Theory
Associated with every L.P.P there exists another corresponding L.P.P. The original
problem is called the primal problem and the corresponding problem is called the
dual problem.
We will first introduce the concept of duality through an example.
……………………………………
……………………………………
Maximize zx = c T x subject to Ax ≤ b, x ≥ 0.
Now the corresponding problem is
Minimize zw = bTw subject to ATw ≥ c, w ≥ 0.
Proof: From the previous theorem, for any two F.S. x0 and w0 of the primal and
the dual, cTx0 ≤ bTw0 . In particular, cTx0 ≤ bTw∗ [as w∗ is a F.S. of the dual].
In the same way we can prove that min zw = min b T w = b T w ∗ , that is, w∗ is an
optimal feasible solution of the dual.
Proof: We first assume that the primal has an optimal feasible solution which has
been obtained by simplex method. Let us convert the constraints of the primal in
the following form
Ax + Imxs = b, x ≥ 0, xs ≥ 0,
or, cBTB–1aj ≥ cj
(as e1, e2, … , em are slack vectors and cost component of each one is 0)
Now we have to show that w0 is also an optimal solution to the dual problem.
Similarly, starting with the finite optimal value of the dual problem, if it exists, we
can prove that primal also has an optimal value of the objective function and
max zx = min zw .
Theorem: (b) If either of the primal or the dual has unbounded solution then the
other will have no feasible solution.
Proof: Let us assume that the primal has an unbounded solution. If the dual
problem had a finite optimal solution, then the primal would also have a finite
optimal solution, which is a contradiction. We now prove that the dual has no
feasible solution.
Since the primal is unbounded, max zx = max cTx → ∞, and for any feasible
solution w of the dual, bTw ≥ cTx for every feasible x, so bTw → ∞, which is
impossible for a w with finite components. Hence we can conclude that the dual
has no feasible solution.
Examples
1. Solve the following problem by solving its dual using simplex method.
Min z = 3x1 + x2
Subject to 2x1 + x2 ≥ 14
x1 − x2 ≥ 4 , x1 , x2 ≥ 0
The dual problem is
Maximize zw = 14w1 + 4w2
Subject to 2w1 + w2 ≤ 3
w1 − w2 ≤ 1 , w1, w2 ≥ 0
Now we solve the dual problem by the simplex method.
Simplex tables
c                 14      4      0      0
Basis  cB    b    a1     a2     a3     a4     min ratio
a3     0     3     2      1      1      0     3/2
a4∗    0     1     1∗    −1      0      1     1/1 = 1∗
zj − cj      0   −14∗    −4      0      0
y21 is the key element. R2′ = new key row = (1/y21)R2 , R1′ = R1 − y12R2′

a3∗    0     1     0      3∗     1     −2     1/3
a1    14     1     1     −1      0      1     ---
zj − cj     14     0    −18∗     0     14
y12 is the key element. R1′ = new key row = (1/y12)R1 , R2′ = R2 − y22R1′

a2     4    1/3    0      1     1/3   −2/3
a1    14    4/3    1      0     1/3    1/3
zj − cj     20     0      0      6      2

Here all zj − cj ≥ 0. So the solution is optimal.
Max zw = 20 at w1 = 4/3 , w2 = 1/3. Hence min z = max zw = 20, attained at x1 = 6 , x2 = 2 (the zj − cj entries under the slack columns a3 , a4 give the optimal values of the primal variables).
Note: Advantage of solving the dual problem is that we are able to solve the primal
without using artificial variables.
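The duality relationship used in this example can also be checked numerically. A minimal sketch (the dual optimum w = (4/3, 1/3) is taken from the final table; the primal optimum x = (6, 2) is obtained by solving the two primal constraints as equations, both being tight by complementary slackness):

```python
from fractions import Fraction as F

# Primal: min 3x1 + x2   s.t. 2x1 + x2 >= 14, x1 - x2 >= 4, x >= 0
# Dual:   max 14w1 + 4w2 s.t. 2w1 + w2 <= 3,  w1 - w2 <= 1,  w >= 0
def primal_feasible(x1, x2):
    return x1 >= 0 and x2 >= 0 and 2*x1 + x2 >= 14 and x1 - x2 >= 4

def dual_feasible(w1, w2):
    return w1 >= 0 and w2 >= 0 and 2*w1 + w2 <= 3 and w1 - w2 <= 1

x_opt = (F(6), F(2))        # both primal constraints tight
w_opt = (F(4, 3), F(1, 3))  # both dual constraints tight

assert primal_feasible(*x_opt) and dual_feasible(*w_opt)
zx = 3*x_opt[0] + x_opt[1]     # primal objective value
zw = 14*w_opt[0] + 4*w_opt[1]  # dual objective value
print(zx, zw)  # both 20, as the duality theorem asserts: min zx = max zw
```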
After introducing surplus variables (as the constraints are of ≥ type) we see that an
identity matrix is already present in the coefficient matrix. So we do not need to
add artificial variables. Converting the problem to a maximization problem, the
constraints become
w1 + 2w2 + w3 − w5 = 3
w1 + 3w2 + w4 − w6 = 4 , w1, … , w6 ≥ 0
Simplex tables
c                −10    −18     −8     −6     0      0
Basis  cB    b    a1     a2     a3     a4     a5     a6     min ratio
a3    −8     3     1      2      1      0     −1      0     3/2
a4∗   −6     4     1      3∗     0      1      0     −1     4/3∗
zj − cj    −48    −4   −16∗      0      0      8      6
y22 is the key element. R2′ = new key row = (1/y22)R2 , R1′ = R1 − y12R2′

a3    −8    1/3   1/3     0      1    −2/3    −1     2/3
a2   −18    4/3   1/3     1      0     1/3     0    −1/3
Maximize zw = w1 − w2 + 3w3
Subject to w1 + w2 + w3 ≤ 10
2w1 − w3 ≤ 2
2w1 − 2w2 + 3w3 ≤ 0 , w1, w2, w3 ≥ 0
Introducing slack variables w4, w5, w6 the problem becomes
Max zw = w1 − w2 + 3w3 + 0. w4 + 0. w5 + 0. w6
Subject to w1 + w2 + w3 + w4 = 10
2w1 − w3 + w5 =2
2w1 − 2w2 + 3w3 + w6 = 0 , wj ≥ 0, j = 1, … ,6
Simplex tables
c                  1     −1      3      0      0      0
Basis  cB    b    a1     a2     a3     a4     a5     a6     min ratio
a4     0    10     1      1      1      1      0      0     10/1 = 10
a5     0     2     2      0     −1      0      1      0     ---
a6∗    0     0     2     −2      3∗     0      0      1     0/3 = 0∗
zj − cj      0    −1      1     −3∗     0      0      0
y33 is the key element. R3′ = new key row = (1/y33)R3 , Ri′ = Ri − yi3R3′ , i = 1,2

a4∗    0    10    1/3    5/3∗    0      1      0    −1/3    10/(5/3) = 6∗
a5     0     2    8/3   −2/3     0      0      1     1/3    ---
a3     3     0    2/3   −2/3     1      0      0     1/3    ---
zj − cj      0     1     −1∗     0      0      0      1
y12 is the key element. R1′ = new key row = (1/y12)R1 , Ri′ = Ri − yi2R1′ , i = 2,3

a2    −1     6    1/5     1      0     3/5     0    −1/5
a5     0     6   14/5     0      0     2/5     1     1/5
a3     3     4    4/5     0      1     2/5     0     1/5
zj − cj      6    6/5     0      0     3/5     0     4/5

Here all zj − cj ≥ 0, so the solution is optimal: max zw = 6 at w1 = 0 , w2 = 6 , w3 = 4 .
Let us consider an example where there are m origins Oi with the quantity
available at each Oi be ai , i = 1,2, … , m and n destinations Dj with the quantities
required, i.e. the demand at each Dj be bj , j = 1,2, … , n.
We make the assumption Σ_{i=1}^m ai = Σ_{j=1}^n bj = M . This assumption is not
restrictive.
Destinations
D1 D2 … Dj … Dn
O1 x11 x12 … x1j … x1n a1
O2 x21 x22 … x2j … x2n a2
… … … … … … … …
origin Oi x i1 x i2 … x ij … x in ai capacities
… … … … … … … …
Om xm1 xm2 … xmj … xmn am
b1 b2 … bj … bn
Demands
In the above table, xij denotes the number of units transported from the ith origin
to the jth destination.
Let cij denote the cost of transporting each unit from the ith origin to the jth
destination. In general, cij ≥ 0 for all i, j . The problem is
min z = Σ_{i=1}^m Σ_{j=1}^n cij xij
subject to Σ_{j=1}^n xij = ai , i = 1,2, … , m
Σ_{i=1}^m xij = bj , j = 1,2, … , n
and Σ_{i=1}^m ai = Σ_{j=1}^n bj .
Theorem: There exists a feasible solution to each T.P. which is given by
xij = aibj/M , i = 1,2, … , m , j = 1,2, … , n , where M = Σ_{i=1}^m ai = Σ_{j=1}^n bj .
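This feasible solution can be verified directly on balanced data; a small sketch (using the capacities 16, 12, 15 and demands 12, 14, 9, 8 that appear in an example later in these notes):

```python
from fractions import Fraction

a = [16, 12, 15]    # origin capacities
b = [12, 14, 9, 8]  # destination demands
M = sum(a)
assert M == sum(b)  # balanced: sum a_i = sum b_j = 43

# The theorem's feasible solution: x_ij = a_i * b_j / M
x = [[Fraction(ai * bj, M) for bj in b] for ai in a]

row_sums = [sum(row) for row in x]
col_sums = [sum(col) for col in zip(*x)]
print(row_sums == a)  # True: every row sum equals a_i
print(col_sums == b)  # True: every column sum equals b_j
```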
Theorem: In each T.P. there exists at least one B.F.S. which makes the objective
function a minimum.
In this table there are mn squares or rectangles arranged in m rows and n columns.
Each square or rectangle is called a cell. The cell which is in the ith row and the
jth column is called the (i, j)tℎ cell. Each cost component cij is displayed at the
bottom right corner of the corresponding cell. A component xij (if ≠ 0) of a
feasible solution is displayed inside a square at the top left hand corner of the cell
(i, j) . The capacities of the origins and demands of the destinations are listed in
the outer column and the outer row respectively as given in the table below.
Transportation Table
Destinations
D1 D2 … Dj … Dn
O1          c11   c12   …   c1j   …   c1n    a1
O2          c21   c22   …   c2j   …   c2n    a2
…           …     …     …   …     …   …      …
Origin Oi   ci1   ci2   …   cij   …   cin    ai   Capacities
… … … … … … … …
Om cm1 cm2 … cmj … cmn am
b1 b2 … bj … bn
Demands
We will discuss two methods of obtaining an initial B.F.S , (i) North-West corner
rule and (ii) Vogel’s Approximation method (VAM) .
Step 1: Compute min(a1, b1) . Select x11 = min(a1, b1) and allocate the value of
x11 in the cell (1,1), i.e. the cell in the North-West corner of the transportation
table.
Step 2: If a1 < b1, the capacity of the origin O1 will be exhausted , so all other
cells in the first row will be empty, but some demand remains in the destination D1.
Compute min(a2, b1 − a1) . Select x21 = min(a2, b1 − a1) and allocate the value
of x21 in the cell (2,1).
                    destinations
             D1    D2    D3    D4
       O1     4     6     9     5    16
origin O2     2     6     4     1    12   capacities
       O3     5     7     2     9    15
             12    14     9     8
                    demands
Solution: By the North-West corner rule, x11 = 12 , x12 = 4 , x22 = 10 , x23 = 2 , x33 = 7 , x34 = 8 (allocations shown in brackets at the top left of each cell):
          D1         D2        D3        D4
O1     [12] 4      [4] 6         9         5    16
O2          2     [10] 6     [2] 4         1    12
O3          5          7     [7] 2     [8] 9    15
           12         14         9         8
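The steps of the North-West corner rule can be sketched as follows (the function name is mine; the data are those of this example):

```python
def north_west_corner(supply, demand):
    """Initial B.F.S. of a balanced T.P. by the North-West corner rule."""
    supply, demand = list(supply), list(demand)
    m, n = len(supply), len(demand)
    alloc = [[0] * n for _ in range(m)]
    i = j = 0
    while i < m and j < n:
        q = min(supply[i], demand[j])  # x_ij = min(a_i, b_j)
        alloc[i][j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:             # row exhausted: move down
            i += 1
        else:                          # column satisfied: move right
            j += 1
    return alloc

# Capacities 16, 12, 15 and demands 12, 14, 9, 8 from the example above.
print(north_west_corner([16, 12, 15], [12, 14, 9, 8]))
# [[12, 4, 0, 0], [0, 10, 2, 0], [0, 0, 7, 8]]
```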
Step 1: Determine the difference between the lowest and next to lowest cost for
each row and each column and display them within first brackets against the
respective rows and columns.
Step 2: Find the row or column for which this difference is maximum. Let this
occur at the ith row. Select the lowest cost in the ith row. Let it be cij . Allocate
xij = min(ai, bj) in the cell (i, j). If the maximum difference is not unique then
select arbitrarily.
Step 3: If ai < bj , cross out the ith row and diminish bj by ai . If bj < ai , cross
out the jth column and diminish ai by bj . If ai = bj , delete only one of the ith
row or the jth column.
Step 4: Compute the row and the column differences for the reduced transportation
table and repeat the procedure discussed above till the capacities of all the origins
are exhausted and the demands of all the destinations are satisfied.
                    destinations
             D1    D2    D3    D4
       O1    19    30    50    10     7
origin O2    70    30    40    60     9   capacities
       O3    40     8    70    20    18
              5     8     7    14
                    demands
Solution: Applying VAM (the stepwise row and column differences are shown in parentheses in the working) we obtain the initial B.F.S.
x11 = 5 , x14 = 2 , x23 = 7 , x24 = 2 , x32 = 8 , x34 = 10 ,
with transportation cost 5·19 + 2·10 + 7·40 + 2·60 + 8·8 + 10·20 = 779 .
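The VAM steps can be sketched as follows (the helper `vam` and its tie-breaking by first occurrence are my assumptions; the notes allow ties to be broken arbitrarily). On this example it reproduces the B.F.S. above with cost 779:

```python
def vam(cost, supply, demand):
    """Initial B.F.S. by Vogel's Approximation Method.

    Ties in the maximum penalty are broken by first occurrence."""
    supply, demand = list(supply), list(demand)
    rows, cols = list(range(len(supply))), list(range(len(demand)))
    alloc = {}

    def penalty(costs):
        s = sorted(costs)              # lowest and next-to-lowest cost
        return s[1] - s[0] if len(s) > 1 else s[0]

    while rows and cols:
        # Step 1: row and column differences on the reduced table
        cand = [(penalty([cost[i][j] for j in cols]), 'row', i) for i in rows]
        cand += [(penalty([cost[i][j] for i in rows]), 'col', j) for j in cols]
        _, kind, k = max(cand, key=lambda t: t[0])  # Step 2: largest difference
        if kind == 'row':
            i, j = k, min(cols, key=lambda j: cost[k][j])  # cheapest in row
        else:
            i, j = min(rows, key=lambda i: cost[i][k]), k  # cheapest in column
        q = min(supply[i], demand[j])
        alloc[(i, j)] = q
        supply[i] -= q
        demand[j] -= q
        # Step 3: cross out only one line
        if supply[i] == 0 and len(rows) > 1:
            rows.remove(i)
        else:
            cols.remove(j)
    return alloc

costs = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
alloc = vam(costs, [7, 9, 18], [5, 8, 7, 14])
z = sum(q * costs[i][j] for (i, j), q in alloc.items())
print(alloc, z)  # x11=5, x14=2, x23=7, x24=2, x32=8, x34=10; cost 779
```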
In a transportation problem, an ordered set of four or more cells are said to form a
loop (i) if and only if two consecutive cells in the ordered set lie either in the same
row or in the same column and (ii) the first and the last cells in the ordered set lie
either in the same row or in the same column.
Optimality conditions
After determining the initial B.F.S. we need to test whether the solution is optimal.
Find the values of zij − cij corresponding to all the non basic variables. If
zij − cij ≤ 0 for all cells corresponding to the non basic variables, the solution is
optimal.
If the condition zij − cij ≤ 0 is not satisfied for all the non basic cells, the
solution is not optimal.
The net evaluations for the non basic cells are calculated using duality theory. We
give below the procedure for finding the net evaluations without proof.
If at least one zij − cij > 0 , the solution is not optimal. As in the simplex method,
our problem now is to get an optimal solution. First we have to select an entering
vector and a departing vector which will move the solution towards optimality.
Determination of the departing cell, the entering cell and the value of the basic
variable in the entering cell-------
If maxi,j{zij − cij , zij − cij > 0} = zpk − cpk , then (p, k) is the entering cell. If the
maximum is not unique then select any one cell corresponding to the maximum
value of the net evaluations.
Allocate a value θ > 0 in the cell (p, k) and readjust the basic variables in the
ordered set of cells containing the simple loop by adding and subtracting the value
θ alternately from the corresponding quantities such that all rim requirements are
satisfied. Select the maximum value of θ in such a way that the readjusted values
of the variables vanish at least in one cell {excluding the cell (p, k)} of the ordered
set and all other variables remain non negative. The cell where the variable
vanishes is the departing cell. If there are more than one such cell, select one
arbitrarily as the departing cell, the remaining cells with variable values zero are
kept in the basis with allocation zero. In this case the solution at the next iteration
will be degenerate. The method of solving for this type of problems will be
discussed later.
Construct a new transportation table with the new B.F.S. and test for optimality.
Repeat the process till an optimal solution is obtained.
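The optimality test described above can be sketched as follows (the function name is mine; u1 is fixed to 0 and ui + vj = cij is solved over the basic cells, which form a tree for a non degenerate B.F.S.):

```python
def net_evaluations(cost, basic):
    """Compute ui, vj from ui + vj = cij on the basic cells (taking u1 = 0),
    then the net evaluation ui + vj - cij for every non basic cell."""
    m, n = len(cost), len(cost[0])
    u, v = [None] * m, [None] * n
    u[0] = 0
    # Repeated substitution determines every ui and vj when the basic
    # cells are independent (they contain no loop).
    while any(x is None for x in u) or any(x is None for x in v):
        for (i, j) in basic:
            if u[i] is not None and v[j] is None:
                v[j] = cost[i][j] - u[i]
            elif v[j] is not None and u[i] is None:
                u[i] = cost[i][j] - v[j]
    return {(i, j): u[i] + v[j] - cost[i][j]
            for i in range(m) for j in range(n) if (i, j) not in basic}

cost = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
basic = {(0, 0), (0, 3), (1, 2), (1, 3), (2, 1), (2, 3)}  # the VAM B.F.S.
ne = net_evaluations(cost, basic)
print(ne[(1, 1)])  # 18 > 0: the solution is not optimal, cell (2,2) enters
```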
Problem: Determine the minimal cost of transportation for the problem given
earlier where a B.F.S is obtained by VAM.
First we have to test whether the initial B.F.S. is optimal. For that we calculate the
net evaluations for all the non basic cells.
So the net evaluations for the non basic cells are ui + vj − cij . We indicate the net
evaluations at the bottom left hand corner of the non basic cells.
As the net evaluation of the cell (2,2) is 18 > 0, the solution is not optimal.
To find the entering cell, calculate maxi,j(ui + vj − cij) for the cells with
positive net evaluation. Since only one cell (2,2) is with positive net evaluation, the
entering cell is (2,2).
To find the departing cell, construct a loop with one vertex as (2,2) and all other
vertices as basic cells.
The loop is (2,2) → (2,4) → (3,4) → (3,2) : θ is added at (2,2) and (3,4) and subtracted at (2,4) and (3,2).
         D1          D2          D3          D4
O1      [5]                                 [2]
O2                  +θ          [7]       [2−θ]
O3                [8−θ]                  [10+θ]
To get the departing cell, we have to make one of the allocated cells empty. That is
possible if we subtract minimum allocation from the cells marked with negative
sign, 2 in this case, and add the same amount to allocations in the cells marked
with a positive sign so that the row and column requirements are satisfied.
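The θ-adjustment just described can be sketched as follows (the helper name and the 0-based cell indices are mine; the loop alternates +θ, −θ starting from the entering cell):

```python
def adjust_along_loop(alloc, loop):
    """Add theta at even positions of the loop (starting with the entering
    cell) and subtract it at odd positions; theta is the smallest allocation
    among the minus positions, whose cell then departs."""
    alloc = dict(alloc)
    minus = loop[1::2]                         # cells losing theta
    theta = min(alloc[c] for c in minus)
    departing = next(c for c in minus if alloc[c] == theta)
    for k, cell in enumerate(loop):
        alloc[cell] = alloc.get(cell, 0) + (theta if k % 2 == 0 else -theta)
    del alloc[departing]                       # departing cell leaves the basis
    return alloc

cost = [[19, 30, 50, 10], [70, 30, 40, 60], [40, 8, 70, 20]]
bfs = {(0, 0): 5, (0, 3): 2, (1, 2): 7, (1, 3): 2, (2, 1): 8, (2, 3): 10}
# Entering cell (2,2) of the notes is (1,1) with 0-based indices.
new = adjust_along_loop(bfs, [(1, 1), (1, 3), (2, 3), (2, 1)])
z = sum(q * cost[i][j] for (i, j), q in new.items())
print(z)  # 743: the transportation cost after the adjustment
```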
So the next transportation table is
         D1          D2          D3          D4
O1    [5] 19         30          50       [2] 10     u1 = 0
                   (−32)       (−42)
O2       70       [2] 30      [7] 40         60      u2 = 32
      (−19)                                (−18)
O3       40       [6] 8          70      [12] 20     u3 = 10
      (−11)                   (−52)
      v1 = 19     v2 = −2     v3 = 8     v4 = 10
(allocations in brackets; net evaluations ui + vj − cij in parentheses)
Here all net evaluations zij − cij ≤ 0 for all the non basic cells. Hence the
solution is optimal. The minimum cost of transportation is
x11c11 + x14c14 + x22c22 + x23c23 + x32c32 + x34c34
= 5·19 + 2·10 + 2·30 + 7·40 + 6·8 + 12·20 = 743 .
(i) Σ_{i=1}^m ai > Σ_{j=1}^n bj
This problem can be converted into a balanced transportation problem by
introducing a fictitious destination Dn+1 with demand bn+1 = Σ_{i=1}^m ai − Σ_{j=1}^n bj
and with the cost of transportation to it from every origin taken as 0.
With these assumptions, the T.P. will be a balanced one having m origins and n+1
destinations. This problem can now be solved by the previous methods.
(ii) Σ_{i=1}^m ai < Σ_{j=1}^n bj
Here we introduce a fictitious origin Om+1 with capacity am+1 = Σ_{j=1}^n bj − Σ_{i=1}^m ai
and with the cost of transportation from it to every destination taken as 0.
With these assumptions, the T.P. will be a balanced one having m + 1 origins and
n destinations. This problem can also be solved by the previous methods.
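The balancing step for both cases can be sketched as follows (the function name is mine):

```python
def balance(cost, supply, demand):
    """Pad an unbalanced T.P. with a fictitious origin or destination
    whose transportation costs are all 0."""
    cost = [row[:] for row in cost]
    supply, demand = list(supply), list(demand)
    gap = sum(supply) - sum(demand)
    if gap > 0:                    # capacity exceeds demand: dummy destination
        for row in cost:
            row.append(0)
        demand.append(gap)
    elif gap < 0:                  # demand exceeds capacity: dummy origin
        cost.append([0] * len(demand))
        supply.append(-gap)
    return cost, supply, demand

# Example from these notes: total demand 50 exceeds total capacity 37.
c, s, d = balance([[14, 19, 11, 20], [19, 12, 14, 17], [14, 16, 11, 18]],
                  [10, 15, 12], [8, 12, 16, 14])
print(s)  # [10, 15, 12, 13]: a fictitious origin O4 with capacity 13
```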
Problem: Solve the following unbalanced T.P. after obtaining a B.F.S. by VAM .
D1 D2 D3 D4
O1 14 19 11 20 10
O2 19 12 14 17 15
O3 14 16 11 18 12
8 12 16 14
Solution: This is an unbalanced T.P. as Σ_{j=1}^n bj = 50 > Σ_{i=1}^m ai = 37. So,
introducing a fictitious origin O4 with capacity 50 − 37 = 13 and assigning the cost
of transportation from this origin to any destination as 0, we rewrite the problem as
follows:
       D1    D2    D3    D4
O1     14    19    11    20    10
O2     19    12    14    17    15
O3     14    16    11    18    12
O4      0     0     0     0    13
        8    12    16    14
The initial B.F.S. obtained by using VAM (the stepwise differences are shown in parentheses in the working) is
x13 = 10 ; x22 = 12 , x23 = 2 , x24 = 1 ; x31 = 8 , x33 = 4 ; x44 = 13 .
         D1          D2          D3          D4
O1       14          19      [10] 11         20      u1 = −3
        (0)        (−10)                    (−6)
O2       19      [12] 12      [2] 14      [1] 17     u2 = 0
       (−2)
O3    [8] 14         16       [4] 11         18      u3 = −3
                   (−7)                    (−4)
O4        0           0           0      [13] 0      u4 = −17
        (0)        (−5)        (−3)
      v1 = 17     v2 = 12     v3 = 14     v4 = 17
Here all cell evaluations are less than or equal to zero. Hence the solution is
optimal. As some cell evaluations are zero, alternative optimal solutions exist.
One optimal solution is
x13 = 10 , x22 = 12 , x23 = 2 , x24 = 1 , x31 = 8 , x33 = 4 (the allocation
x44 = 13 to the fictitious origin is ignored), with minimum cost
10·11 + 12·12 + 2·14 + 1·17 + 8·14 + 4·11 = 455 .
Degeneracy may occur at any stage of the problem. Here we will discuss
degeneracy at the initial B.F.S. and only one basic variable is zero. Even if more
than one basic variable is zero, the problem can be solved similarly.
Allocate a small positive quantity ε in the cell where the basic variable is zero and
readjust all basic variables in the cells such that Σ_{i=1}^m ai = Σ_{j=1}^n bj = M is
satisfied. Now solve the problem as we solve a non degenerate problem and put
ε = 0 in the final solution.
       D1    D2    D3
O1      8     7     3    60
O2      3     8     9    70
O3     11     3     5    80
       50    80    80
We find the initial B.F.S. by VAM which is given below.
          D1         D2         D3
O1         8          7     [60] 3
O2    [50] 3          8     [20] 9
O3        11     [80] 3          5
Here only 4 cells are allocated while m + n − 1 = 5, so the B.F.S. is degenerate.
To resolve the degeneracy we allocate a small positive quantity ε to a cell such that
a loop is not formed among some or all of the allocated cells including this newly
allocated cell, i.e. the allocated cells remain independent. For a dependent set of
cells, unique determination of ui and vj would not be possible. We allocate a
positive quantity ε to the cell (1,2), construct the new table and then compute ui
and vj and the cell evaluations.
          D1          D2          D3
O1         8       [ε] 7      [60] 3     u1 = 0
       (−11)
O2    [50] 3           8      [20] 9     u2 = 6
                     (5)
O3        11      [80] 3           5     u3 = −4
       (−18)                    (−6)
       v1 = −3     v2 = 7      v3 = 3
The cell evaluation for the cell (2,2) is positive, so the solution is not optimal. We
allocate the maximum possible amount to the cell (2,2) and adjust the allocations
in the other cells along the loop (2,2) → (2,3) → (1,3) → (1,2) so that the
allocated cells remain independent, the total number of allocations remains 5 and
the rim requirements are satisfied. Here θ = min(ε, 20) = ε , so the cell (1,2)
becomes empty and the new allocations are
x13 = 60 + ε , x21 = 50 , x22 = ε , x23 = 20 − ε , x32 = 80 .
          D1          D2            D3
O1         8           7      [60+ε] 3     u1 = −6
       (−11)        (−5)
O2    [50] 3       [ε] 8      [20−ε] 9     u2 = 0
O3        11      [80] 3            5      u3 = −5
       (−13)                     (−1)
       v1 = 3      v2 = 8      v3 = 9
Here all the cell evaluations are negative. Hence the solution is optimal. Letting
ε → 0 we get the optimal solution as
x13 = 60 , x21 = 50 , x23 = 20 , x32 = 80 , with minimum cost
60·3 + 50·3 + 20·9 + 80·3 = 750 .
Assignment Problem
Optimize z = Σ_{i=1}^n Σ_{j=1}^n cij xij
Subject to Σ_{j=1}^n xij = ai = 1 , i = 1,2, … , n
Σ_{i=1}^n xij = bj = 1 , j = 1,2, … , n
and Σ_{i=1}^n ai = Σ_{j=1}^n bj = n .
From the above it is clear that the problem is to select n cells in an n × n
transportation table, only one cell in each row and each column, such that the sum
of the corresponding costs (profits) is minimum (maximum). Obviously the
solution obtained is a degenerate solution. To get the solution we first state a
theorem.
Theorem: Given a cost or profit matrix C = (cij)n×n , if we form another
matrix C∗ = (c∗ij)n×n with c∗ij = cij − ui − vj , where ui and vj are arbitrarily
chosen constants, the solution of C will be identical with that of C∗ .
     A   B   C   D   E
1    9   8   7   6   4
2    5   7   5   6   8
3    8   7   6   3   5
4    8   5   4   9   3
5    6   7   6   8   5
Subtract the minimum 4, 5, 3, 3, 5 of the first, second, third, fourth,
fifth rows from each element of the respective row; then repeat the same
for the columns.
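The row and column reduction just described can be sketched as follows (the function name is mine):

```python
def reduce_matrix(c):
    """Subtract each row's minimum, then each column's minimum: the first
    step of the assignment method described above. By the theorem, the
    reduced matrix has the same optimal assignment as the original."""
    c = [[x - min(row) for x in row] for row in c]        # row reduction
    col_min = [min(col) for col in zip(*c)]               # column minima
    return [[x - m for x, m in zip(row, col_min)] for row in c]

C = [[9, 8, 7, 6, 4],
     [5, 7, 5, 6, 8],
     [8, 7, 6, 3, 5],
     [8, 5, 4, 9, 3],
     [6, 7, 6, 8, 5]]
for row in reduce_matrix(C):
    print(row)
# Every row and every column of the reduced matrix now contains a zero.
```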