
LPM Class Presentation


    Linear Programming Problem

    Optimization Problem

    Problems which seek to maximize or minimize an

    objective function of a finite number of variables

    subject to certain constraints are called optimization

problems.

    Example

Maximize z = c_1 x_1 + c_2 x_2 + ... + c_n x_n

Subject to:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n ≤ b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n ≤ b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n ≤ b_m

x_j ≥ 0 for all j (j = 1, 2, ..., n)

    Feasible Solution

    Any solution of a linear programming problem that


    satisfies all the constraints of the model is called a

    feasible solution.

Maximize z = c_1 x_1 + c_2 x_2 + ... + c_n x_n

Subject to:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n ≤ b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n ≤ b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n ≤ b_m

x_j ≥ 0 for all j (j = 1, 2, ..., n)

The solution x_1 = s_1, x_2 = s_2, ..., x_n = s_n will be a

    feasible solution of the given problem if it does not

    violate any of the constraints of the given problem.

    Programming Problem

    Programming problems always deal with determining

    optimal allocations of limited resources to meet given

    objectives. The constraints or limited resources are


    given by linear or non-linear inequalities or equations.

    The given objective may be to maximize or minimize

    certain function of a finite number of variables.

    Linear Programming and Linear Programming

    Problem

Suppose we are given m linear inequalities or
equations in n unknown variables x_1, x_2, ..., x_n,

    and we wish to find non-negative values of these

    variables which will satisfy the constraints and

    maximize or minimize some linear functions of these

    variables (objective functions), then this procedure is

    known as linear programming and the problem which is

    described is known as linear programming problem.

    Mathematically it can be described as, suppose we have

    m linear inequalities or equations in n unknown

variables x_1, x_2, ..., x_n of the form

Σ_{j=1}^{n} a_{ij} x_j {≤, =, ≥} b_i   (i = 1, 2, ..., m),

where for each constraint one and only one of the signs ≤, =, ≥ holds. Now we wish to
find the non-negative values of x_j, j = 1, 2, ..., n, which will satisfy the constraints
and maximize or minimize a linear function z = Σ_{j=1}^{n} c_j x_j. Here a_{ij}, b_i and
c_j are known constants.

Using the short-cut (summation) notation, the linear programming problem can be written
mathematically as

Optimize (maximize or minimize) z = Σ_{j=1}^{n} c_j x_j

Subject to

Σ_{j=1}^{n} a_{ij} x_j {≤, =, ≥} b_i   (i = 1, 2, ..., m)

x_j ≥ 0 for all j (j = 1, 2, ..., n)
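As a quick illustration of this compact form, the sketch below solves a small made-up instance numerically. It assumes NumPy and SciPy are available and that all constraints are of the ≤ type; the coefficient values are invented for illustration only.

```python
# A minimal sketch: maximize z = c'x subject to A x <= b, x >= 0.
import numpy as np
from scipy.optimize import linprog

c = np.array([5.0, 4.0])            # hypothetical cost coefficients c_j
A = np.array([[6.0, 4.0],           # hypothetical structural coefficients a_ij
              [1.0, 2.0]])
b = np.array([24.0, 6.0])           # hypothetical stipulations b_i

# linprog minimizes, so maximizing c'x is done by minimizing -c'x.
res = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c))
print("optimal decision variables x:", res.x)
print("maximum value of z:", -res.fun)
```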

    Application of LPM

    (i) Linear programming problem is widely applicable in

    business and economic activities

    (ii) It is also applicable in government, military and

    industrial operations

(iii) It is also extensively used in development and distribution planning.


    Objective Function

In a linear programming problem, a linear function of the type z = Σ_{j=1}^{n} c_j x_j of
the variables x_j, j = 1, 2, ..., n, which is to be optimized is called the objective
function. In an objective function no constant term may appear, i.e. we cannot write the
objective function in the form

z = Σ_{j=1}^{n} c_j x_j + k

    Example of Linear Programming Problem:

Suppose m types of machines A_1, A_2, ..., A_m are producing n products, namely
P_1, P_2, ..., P_n. Let (i) a_{ij} be the hours required on the ith machine
(i = 1, 2, ..., m) to produce one unit of the jth product (j = 1, 2, ..., n); (ii) b_i
(i = 1, 2, ..., m) be the total available hours per week for machine i; and (iii) c_j be
the per-unit profit on sale of the jth product.

Machines       P_1      P_2      ...   P_n      Total Available Time
A_1            a_{11}   a_{12}   ...   a_{1n}   b_1
A_2            a_{21}   a_{22}   ...   a_{2n}   b_2
...            ...      ...      ...   ...      ...
A_m            a_{m1}   a_{m2}   ...   a_{mn}   b_m
Unit Profits   c_1      c_2      ...   c_n

    Construct the LPP

Suppose x_j (j = 1, 2, ..., n) is the no. of units of the jth product produced per week.
The objective function is given by

z = c_1 x_1 + c_2 x_2 + ... + c_n x_n

    The constraints are given by;

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n ≤ b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n ≤ b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n ≤ b_m

Since the amount of production cannot be negative, x_j ≥ 0 (j = 1, 2, ..., n).

The weekly profit is given by z = c_1 x_1 + c_2 x_2 + ... + c_n x_n.

Now we wish to determine the values of the variables x_j for which all the constraints
are satisfied and the objective function attains its maximum. That is,

Maximize z = c_1 x_1 + c_2 x_2 + ... + c_n x_n

Subject to

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n ≤ b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n ≤ b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n ≤ b_m

and

x_j ≥ 0 (j = 1, 2, ..., n)
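To make the construction concrete, the following sketch builds and solves one hypothetical instance of this machine/product model with m = 2 machines and n = 3 products; the hours a_{ij}, capacities b_i and profits c_j are all invented numbers, and SciPy is assumed.

```python
# Hypothetical machine/product data: a[i][j] = hours of machine i per unit of product j.
import numpy as np
from scipy.optimize import linprog

a = np.array([[2.0, 3.0, 1.0],      # machine A_1
              [4.0, 1.0, 2.0]])     # machine A_2
b = np.array([40.0, 60.0])          # available hours per week b_i
c = np.array([3.0, 5.0, 4.0])       # unit profits c_j

# Maximize weekly profit c'x subject to a x <= b and x >= 0.
res = linprog(c=-c, A_ub=a, b_ub=b, bounds=[(0, None)] * 3)
print("units of each product per week:", res.x)
print("maximum weekly profit:", -res.fun)
```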

    Formulation of Linear Programming Problem

    (i) Transportation Problem

Suppose given amounts of a uniform product are available at each of a no. of origins,
say warehouses. We wish to send specified amounts of the product to each of a no. of
different destinations, say retail stores. We are interested in determining the
minimum-cost routing from the warehouses to the retail stores.

Let us define:

m = no. of warehouses
n = no. of retail stores

x_{ij} = the amount of product shipped from the ith warehouse to the jth retail store.

Since negative amounts cannot be shipped, we have x_{ij} ≥ 0 for all i, j.

a_i = the total no. of units of the product available for shipment at the ith warehouse
(i = 1, 2, ..., m).

b_j = the no. of units of the product required at the jth retail store.

Since we cannot supply more than the available amount of the product from the ith
warehouse to the different retail stores, we have

[Diagram: m origins with supplies a_1, ..., a_m linked to n destinations with demands
b_1, ..., b_n; each arc from origin i to destination j carries a flow x_{ij} at unit
cost c_{ij}.]

x_{i1} + x_{i2} + ... + x_{in} ≤ a_i,   i = 1, 2, ..., m

We must supply each retail store with the no. of units desired. The total amount
received at any retail store is the sum of the amounts received from each warehouse.
That is,

x_{1j} + x_{2j} + ... + x_{mj} = b_j;   j = 1, 2, ..., n

The needs of the retail stores can be satisfied provided that

Σ_{i=1}^{m} a_i ≥ Σ_{j=1}^{n} b_j

Let c_{ij} be the per-unit cost of shipping from the ith warehouse to the jth retail
store; then the total cost of shipping is given by

z = Σ_{i=1}^{m} Σ_{j=1}^{n} c_{ij} x_{ij}

We wish to determine the x_{ij}'s which minimize the cost z = Σ_{i=1}^{m} Σ_{j=1}^{n} c_{ij} x_{ij}
subject to the constraints

x_{i1} + x_{i2} + ... + x_{in} ≤ a_i;   i = 1, 2, ..., m

x_{1j} + x_{2j} + ... + x_{mj} = b_j;   j = 1, 2, ..., n

    It is a linear programming problem in mn variables with

    (m+n) constraints.
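The sketch below assembles this transportation formulation for a hypothetical instance with m = 2 warehouses and n = 3 retail stores (supplies, demands and unit costs are invented); the flows x_{ij} are flattened row by row into a single vector so that an off-the-shelf LP solver (SciPy is assumed) can handle the mn variables and (m + n) constraints.

```python
# Transportation LP: minimize sum of c_ij x_ij subject to supply (<=) and demand (=) rows.
import numpy as np
from scipy.optimize import linprog

m, n = 2, 3                                    # warehouses, retail stores
cost = np.array([[4.0, 6.0, 8.0],              # hypothetical unit shipping costs c_ij
                 [5.0, 3.0, 7.0]])
supply = np.array([50.0, 40.0])                # a_i
demand = np.array([30.0, 25.0, 20.0])          # b_j (total supply >= total demand)

A_supply = np.zeros((m, m * n))                # x_i1 + ... + x_in <= a_i
for i in range(m):
    A_supply[i, i * n:(i + 1) * n] = 1.0
A_demand = np.zeros((n, m * n))                # x_1j + ... + x_mj = b_j
for j in range(n):
    A_demand[j, j::n] = 1.0                    # variables are flattened as index i*n + j

res = linprog(c=cost.ravel(), A_ub=A_supply, b_ub=supply,
              A_eq=A_demand, b_eq=demand, bounds=[(0, None)] * (m * n))
print(res.x.reshape(m, n))                     # optimal shipping plan x_ij
print("minimum total cost:", res.fun)
```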

    (2) The Diet Problem

Suppose we are given the nutrient content of a no. of different foods. We are also given
the minimum daily requirement for each nutrient and the quantities of each nutrient
contained in one ounce of each food being considered. Since we know the cost per ounce
of each food, the problem is to determine the diet that satisfies the minimum daily
requirements of the nutrients at minimum cost.

    Let us define

    m = the no. of nutrients

    n = the no. of foods

a_{ij} = the quantity (mg) of the ith nutrient per ounce (oz) of the jth food
b_i = the minimum daily quantity of the ith nutrient required
c_j = the cost per oz of the jth food

x_j = the quantity of the jth food to be purchased

The total amount of the ith nutrient contained in all the purchased foods cannot be less
than the minimum daily requirement. Therefore we have

a_{i1} x_1 + a_{i2} x_2 + ... + a_{in} x_n = Σ_{j=1}^{n} a_{ij} x_j ≥ b_i

The total cost of all the purchased foods is given by

z = Σ_{j=1}^{n} c_j x_j

Now our problem is to minimize the cost z = Σ_{j=1}^{n} c_j x_j subject to the constraints

a_{i1} x_1 + a_{i2} x_2 + ... + a_{in} x_n = Σ_{j=1}^{n} a_{ij} x_j ≥ b_i   and   x_j ≥ 0.

This is called a linear programming problem.
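A hypothetical two-nutrient, three-food instance of this diet model can be solved the same way; the mg-per-oz figures, requirements and prices below are invented, and the ≥ constraints are passed to the solver as negated ≤ constraints (SciPy assumed).

```python
# Diet LP: minimize food cost while meeting minimum daily nutrient requirements.
import numpy as np
from scipy.optimize import linprog

a = np.array([[2.0, 1.0, 4.0],      # mg of nutrient i per oz of food j (hypothetical)
              [1.0, 3.0, 2.0]])
b_min = np.array([20.0, 18.0])      # minimum daily requirement of each nutrient
c = np.array([0.30, 0.25, 0.50])    # cost per oz of each food

# a x >= b_min is rewritten as -a x <= -b_min for the solver.
res = linprog(c=c, A_ub=-a, b_ub=-b_min, bounds=[(0, None)] * 3)
print("ounces of each food to purchase:", res.x)
print("minimum cost of the diet:", res.fun)
```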

    Feasible Solution

Any set of values of the variables x_j which satisfies the constraints

Σ_{j=1}^{n} a_{ij} x_j {≤, =, ≥} b_i,

where the a_{ij} and b_i are constants, is called a solution to the linear programming
problem, and any solution which also satisfies the non-negativity restrictions, i.e.
x_j ≥ 0, is called a feasible solution.

    Optimal Feasible Solution

In a linear programming problem there is, in general, an infinite no. of feasible
solutions, and out of all these solutions we must find one feasible solution which
optimizes the objective function z = Σ_{j=1}^{n} c_j x_j; such a solution is called an
optimal feasible solution.

In other words, any feasible solution which satisfies the following conditions:

(i) Σ_{j=1}^{n} a_{ij} x_j {≤, =, ≥} b_i
(ii) x_j ≥ 0
(iii) optimizes the objective function z = Σ_{j=1}^{n} c_j x_j,

is called an optimal feasible solution.


    Slack and Surplus Variables

In LP problems, generally the constraints are not all equations. Since equations are
easier to handle than inequalities, a simple conversion is needed to turn the
inequalities into equalities. Let us consider first the constraints having
less-than-or-equal signs (≤). Any constraint of this category can be written as

a_{h1} x_1 + a_{h2} x_2 + ... + a_{hn} x_n ≤ b_h        (1)

Let us introduce a new variable x_{n+h} which satisfies x_{n+h} ≥ 0, where
x_{n+h} = b_h − Σ_{j=1}^{n} a_{hj} x_j ≥ 0, to convert the inequality into the equality

a_{h1} x_1 + a_{h2} x_2 + ... + a_{hn} x_n + x_{n+h} = b_h        (2)

The new variable x_{n+h} is the difference between the amount of the resource available
and the amount actually used, and it is called a slack variable.

Next we consider the constraints having greater-than-or-equal signs (≥). A typical
inequality in this set can be written as

a_{k1} x_1 + a_{k2} x_2 + ... + a_{kn} x_n ≥ b_k        (3)

Introducing a new variable x_{n+k} ≥ 0, the inequality can be written as the equality

a_{k1} x_1 + a_{k2} x_2 + ... + a_{kn} x_n − x_{n+k} = b_k        (4)

Here the variable x_{n+k} is called the surplus variable, because the difference between
the resources used and the minimum amount to be produced is called the surplus.

Therefore, when using the algebraic method for solving a linear programming problem, the
linear programming problem with the original constraints can be transformed into an LP
problem whose constraints are simultaneous linear equations by using slack and surplus
variables.

Example: Consider the LP problem

Min: −x_1 − 3x_2

St:

x_1 − 2x_2 ≤ 4
−x_1 + x_2 ≥ 3
x_1, x_2 ≥ 0

Now, introducing two new variables x_3 and x_4, the problem can be written as

Min: −x_1 − 3x_2 + 0·x_3 + 0·x_4

St:

x_1 − 2x_2 + x_3 = 4
−x_1 + x_2 − x_4 = 3
x_1, x_2, x_3, x_4 ≥ 0

Here x_3 is the slack variable and x_4 is the surplus variable.
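A quick numeric check of this conversion (a sketch; the point x_1 = 1, x_2 = 4 is just one arbitrarily chosen point satisfying both original constraints):

```python
# Verifying the slack/surplus conversion at the feasible point (x1, x2) = (1, 4).
x1, x2 = 1.0, 4.0

x3 = 4 - (x1 - 2 * x2)     # slack:   x1 - 2*x2 + x3 = 4  requires x3 >= 0
x4 = (-x1 + x2) - 3        # surplus: -x1 + x2 - x4 = 3  requires x4 >= 0

assert x3 >= 0 and x4 >= 0
assert abs((x1 - 2 * x2) + x3 - 4) < 1e-12
assert abs((-x1 + x2) - x4 - 3) < 1e-12
print("slack x3 =", x3, " surplus x4 =", x4)
```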

    Effect of Introducing Slack and Surplus Variables

Suppose we have a linear programming problem P_1 such that

Optimize Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n        (1)

Subject to the conditions

a_{h1} x_1 + a_{h2} x_2 + ... + a_{hn} x_n {≤, =, ≥} b_h        (2)

where one and only one of the signs in the bracket holds for each constraint.

The problem is converted to another linear programming problem P_2 such that

Z = c_1 x_1 + c_2 x_2 + ... + c_n x_n + 0·x_{n+1} + ... + 0·x_m        (3)

Subject to the conditions

Ax = a_{h1} x_1 + a_{h2} x_2 + ... + a_{hn} x_n + a_{h,n+1} x_{n+1} + ... + a_{hm} x_m = b_h        (4)

where A = (a_{ij}) and a_j (j = 1, 2, ..., m) is the jth column of A.

We claim that optimizing (3) subject to (4) with x_j ≥ 0 is completely equivalent to
optimizing (1) subject to (2) with x_j ≥ 0.

To prove this, we first note that if we have any feasible solution to the original
constraints, then our method of introducing slack or surplus variables will yield a set
of non-negative slack or surplus variables such that equation (4) is satisfied with all
variables non-negative. Conversely, if we have a feasible solution to (4) with all
variables non-negative, then its first n components will yield a feasible solution
to (2). Thus there exists a one-to-one correspondence between the feasible solutions to
the original set of constraints and the feasible solutions to the set of simultaneous
linear equations.

Now if X* = (x_1*, x_2*, ..., x_m*) ≥ 0 is a feasible optimal solution to the linear
programming problem P_2, then the first n components of X*, that is
(x_1*, x_2*, ..., x_n*), form an optimal solution to P_1; and by annexing the slack and
surplus variables to any optimal solution to P_1 we obtain an optimal solution to P_2.

Therefore, we may conclude that if slack and surplus variables having zero cost are
introduced to convert the original set of constraints into a set of simultaneous linear
equations, then the resulting problem is equivalent to the original problem.
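This equivalence can be checked numerically; the sketch below (SciPy assumed) solves the data of the example that follows once in the original inequality form and once in the equality form with zero-cost slacks, and confirms that the optimal value and the first n components agree.

```python
# The original (inequality) LP and the augmented (equality) LP with zero-cost
# slack variables yield the same optimum and the same x_1, ..., x_n.
import numpy as np
from scipy.optimize import linprog

c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
b = np.array([6.0, 3.0])

p1 = linprog(c=-c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)

c_aug = np.concatenate([-c, np.zeros(2)])      # slack variables carry zero cost
A_aug = np.hstack([A, np.eye(2)])              # A x + I s = b
p2 = linprog(c=c_aug, A_eq=A_aug, b_eq=b, bounds=[(0, None)] * 4)

assert np.isclose(p1.fun, p2.fun)              # same optimal objective value
assert np.allclose(p1.x, p2.x[:2])             # first n components agree
print("optimal value:", -p1.fun, " slack values at the optimum:", p2.x[2:])
```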

    Example: Consider the following inequalities

Maximize: z = 3x_1 + 2x_2

Subject to the constraints

x_1 + x_2 ≤ 6
x_2 ≤ 3
x_1, x_2 ≥ 0

Find the basic solutions and the optimal feasible solution.

Solution. By introducing slack variables x_3 and x_4, the problem is put into the
following standard format:

x_1 + x_2 + x_3 = 6
x_2 + x_4 = 3
x_1, x_2, x_3, x_4 ≥ 0

So the constraint matrix A is given by

A = [ 1  1  1  0 ; 0  1  0  1 ] = (a_1, a_2, a_3, a_4),    b = (6, 3)^T

    Rank(A) = 2

Therefore, the basic solutions correspond to finding a 2 × 2 basis B. The following are
the possible ways of extracting B out of A:

(i) B = (a_1, a_2) = [ 1  1 ; 0  1 ],  B^{-1} = [ 1  -1 ; 0  1 ],
    x_B = (x_1, x_2)^T = B^{-1} b = (3, 3)^T,   x_N = (x_3, x_4)^T = (0, 0)^T

(ii) B = (a_1, a_3) = [ 1  1 ; 0  0 ].  Since |B| = 0, it is not possible to find B^{-1}
     and hence x_B.

(iii) B = (a_1, a_4) = [ 1  0 ; 0  1 ],  B^{-1} = [ 1  0 ; 0  1 ],
      x_B = (x_1, x_4)^T = B^{-1} b = (6, 3)^T,   x_N = (x_2, x_3)^T = (0, 0)^T

(iv) B = (a_2, a_3) = [ 1  1 ; 1  0 ],  B^{-1} = [ 0  1 ; 1  -1 ],
     x_B = (x_2, x_3)^T = B^{-1} b = (3, 3)^T,   x_N = (x_1, x_4)^T = (0, 0)^T

(v) B = (a_2, a_4) = [ 1  0 ; 1  1 ],  B^{-1} = [ 1  0 ; -1  1 ],
    x_B = (x_2, x_4)^T = B^{-1} b = (6, -3)^T,   x_N = (x_1, x_3)^T = (0, 0)^T

(vi) B = (a_3, a_4) = [ 1  0 ; 0  1 ],  B^{-1} = [ 1  0 ; 0  1 ],
     x_B = (x_3, x_4)^T = B^{-1} b = (6, 3)^T,   x_N = (x_1, x_2)^T = (0, 0)^T

Hence we have the following five basic solutions:

x^1 = (3, 3, 0, 0)^T;  x^2 = (6, 0, 0, 3)^T;  x^3 = (0, 3, 3, 0)^T;  x^4 = (0, 6, 0, -3)^T;
x^5 = (0, 0, 6, 3)^T

Of these, all except x^4 are BFS, because x^4 violates the non-negativity restrictions.
The BFS belong to a four-dimensional space. These basic feasible solutions, projected
into the (x_1, x_2) space, give rise to the following four points:

(3, 3),  (6, 0),  (0, 3),  (0, 0)

From the graphical representation the extreme points are (0, 0), (0, 3), (3, 3) and
(6, 0), which are the same as the BFSs. Therefore the extreme points are precisely the
BFS. The no. of BFS is 4, which is less than 6, the number of possible 2 × 2 bases.

The optimal BFS is (6, 0)^T, i.e. x_1 = 6, x_2 = 0 (with z = 18).
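The whole enumeration above can be reproduced programmatically; the sketch below (NumPy assumed) tries every 2 × 2 basis, skips the singular one, and keeps the basic feasible solutions, finally reporting the one with the largest value of z = 3x_1 + 2x_2.

```python
# Enumerating all 2x2 bases of A, computing x_B = B^{-1} b, and keeping the BFS,
# exactly as in cases (i)-(vi) above.
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
c = np.array([3.0, 2.0, 0.0, 0.0])          # slacks have zero cost

best = None
for cols in combinations(range(4), 2):
    B = A[:, cols]
    if abs(np.linalg.det(B)) < 1e-12:       # case (ii): singular basis, skip
        continue
    x = np.zeros(4)
    x[list(cols)] = np.linalg.solve(B, b)
    feasible = bool(np.all(x >= -1e-12))
    print(cols, x, "BFS" if feasible else "not feasible")
    if feasible and (best is None or c @ x > c @ best):
        best = x
print("optimal BFS:", best, " z =", c @ best)
```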

    Corner Point Feasible Solution

A feasible solution which does not lie on the line segment connecting any other two
feasible solutions is called a corner point feasible solution.

In the example above, (3, 3) is a corner point feasible solution.

    Properties of Corner Point Feasible Solution

(i) If there is exactly one optimal solution of the linear programming problem, then it
is a corner point feasible solution.

(ii) If there is more than one optimal solution of the given problem, then at least two
of them are adjacent corner point feasible solutions.

    (iii) In a linear programming problem there are a finite

    number of corner points

(iv) If a corner point feasible solution is better than all of its adjacent corner point
feasible solutions, then it is better than all other feasible solutions.

    Methods for Solving Linear Programming Problems

    (1) Graphical Method

    (2) Algebraic Method

    (3) Simplex Method

    Graphical Method

    The graphical method to solve a linear programming


    problem involves two basic steps

    (1) At the first step we have to determine the feasible

    solution space.

We represent the values of the variable x_1 on the X axis and the corresponding values
of the variable x_2 on the Y axis. Any point lying in the first quadrant satisfies
x_1 ≥ 0 and x_2 ≥ 0. The easiest way of

accounting for the remaining constraints is to replace the inequalities with equations
and then plot the resulting straight lines.

    Next we consider the effect of the inequality. All the

inequality does is to divide the (x_1, x_2)-plane into two half-spaces, one on each side
of the plotted line: one side satisfies the inequality and the other one does not.

    Any point lying on or below the line satisfies the

    inequality. A procedure to determine the feasible side is

    to use the origin (0, 0) as a reference point.

    Step 2: At the second step we have to determine the

    optimal solution.


Problem: Find the non-negative values of the variables x_1 and x_2 which satisfy the
constraints

3x_1 + 5x_2 ≤ 15
5x_1 + 2x_2 ≤ 10

and which maximize the objective function z = 5x_1 + 3x_2.

Solution: We introduce an x_1 x_2 co-ordinate system. Any point lying in the first
quadrant has x_1, x_2 ≥ 0. Now we show the straight lines 3x_1 + 5x_2 = 15 and
5x_1 + 2x_2 = 10 on the graph. Any point lying on or below the line 3x_1 + 5x_2 = 15
satisfies 3x_1 + 5x_2 ≤ 15. Similarly, any point lying on or below the line
5x_1 + 2x_2 = 10 satisfies the constraint 5x_1 + 2x_2 ≤ 10.

[Graph: the lines 3x_1 + 5x_2 = 15 and 5x_1 + 2x_2 = 10 plotted against x_1 and x_2,
with labelled points C(0, 5), A and F(1.053, 2.3684); the feasible region FDOA lies
between the two lines and the axes, and the objective line z = 5x_1 + 3x_2 is moved
outward until it last touches the region.]

So the region FDOA contains the set of points satisfying both the constraints and the
non-negativity restrictions, and the points in this region are the feasible solutions.
Now we wish to find the line with the largest value of z = 5x_1 + 3x_2 which has at
least one point in common with the region of feasible solutions. This line is drawn in
the graph above. It shows that the values of x_1 and x_2 at the point F are the required
solution. Here x_1 = 1.053 and x_2 = 2.368, approximately.

Now from the objective function we get the maximum value of z, which is given by
z = 5 × 1.053 + 3 × 2.368 = 12.37.
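The corner point F and the maximum value of z can be checked by solving the two boundary equations simultaneously (a small sketch, NumPy assumed):

```python
# F is the intersection of 3x1 + 5x2 = 15 and 5x1 + 2x2 = 10.
import numpy as np

coeff = np.array([[3.0, 5.0],
                  [5.0, 2.0]])
rhs = np.array([15.0, 10.0])
x1, x2 = np.linalg.solve(coeff, rhs)
z = 5 * x1 + 3 * x2
print(round(x1, 3), round(x2, 3), round(z, 2))   # approximately 1.053, 2.368, 12.37
```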


    Existence of Extreme Basic Feasible Solution:

    Reduction of any feasible solution to a basic feasible

    solution

Let us consider a linear programming problem with m linear equations in n unknowns such
that

AX = b,  X ≥ 0,

which has at least one basic feasible solution. Without loss of generality suppose that
Rank(A) = m, and let X = (x_1, x_2, ..., x_n) be a feasible solution. Further suppose
that x_1, x_2, ..., x_p > 0 and that x_{p+1}, x_{p+2}, ..., x_n = 0, and let
a_1, a_2, ..., a_p be the respective columns of A corresponding to the variables
x_1, x_2, ..., x_p. If a_1, a_2, ..., a_p are linearly independent, then X is a basic
feasible solution; in such a case p ≤ m. If p = m then, from the theory of systems of
linear equations, the solution is a non-degenerate basic feasible solution. If p < m,
the solution is a degenerate basic feasible solution, since some of the basic variables
are

    zero.

If a_1, a_2, ..., a_p are dependent, then there exist scalars λ_1, λ_2, ..., λ_p with at
least one positive λ_j such that

Σ_{j=1}^{p} λ_j a_j = 0

Consider the following point X′ with components

x′_j = x_j − θ_0 λ_j   for j = 1, 2, ..., p,
x′_j = 0               for j = p+1, p+2, ..., n,

where

θ_0 = min_{j=1,...,p; λ_j>0} (x_j / λ_j) = x_k / λ_k > 0.

If λ_j ≤ 0, then x′_j > 0, since x_j and θ_0 are positive and λ_j ≤ 0. If λ_j > 0, then
by the definition of θ_0 we have θ_0 ≤ x_j / λ_j, and thus x′_j ≥ 0.

Furthermore,

x′_k = x_k − θ_0 λ_k = x_k − (x_k / λ_k) λ_k = 0.

Hence X′ has at most (p − 1) positive components.

Also,

AX′ = Σ_{j=1}^{n} a_j x′_j
    = Σ_{j=1}^{n} a_j (x_j − θ_0 λ_j)
    = Σ_{j=1}^{n} a_j x_j − θ_0 Σ_{j=1}^{n} λ_j a_j
    = b

Thus we have constructed a feasible solution X′, since AX′ = b and X′ ≥ 0, with at most
(p − 1) positive components. If the columns of A corresponding to these positive
components are linearly independent, then X′ is a basic feasible solution. Otherwise the
process is repeated. Eventually a basic feasible solution (BFS) will be obtained.
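One step of this reduction can be sketched in code (NumPy assumed). Given a feasible x whose positive columns are dependent, the routine below finds a null-space vector λ of those columns, flips its sign if necessary so that at least one λ_j is positive, computes θ_0, and forms the new point with fewer positive components; the starting point x = (2, 1, 3, 2) is an invented feasible but non-basic solution for the constraint data of the example that follows.

```python
# One reduction step: drive at least one positive component of a feasible
# solution x to zero while keeping A x = b and x >= 0.
import numpy as np

def reduce_once(A, b, x, tol=1e-10):
    pos = np.where(x > tol)[0]                  # indices of the positive components
    Ap = A[:, pos]
    _, s, Vt = np.linalg.svd(Ap)                # null-space vector via the SVD
    if Ap.shape[1] <= Ap.shape[0] and s.min() > tol:
        return x                                # columns independent: already a BFS
    lam_pos = Vt[-1]                            # satisfies Ap @ lam_pos ~ 0
    if lam_pos.max() <= 0:
        lam_pos = -lam_pos                      # ensure at least one positive lambda_j
    lam = np.zeros_like(x)
    lam[pos] = lam_pos
    ratios = np.where(lam > tol, x / np.where(lam > tol, lam, 1.0), np.inf)
    theta0 = ratios.min()                       # theta_0 = min x_j / lambda_j over lambda_j > 0
    x_new = x - theta0 * lam
    x_new[np.abs(x_new) < tol] = 0.0
    return x_new

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 3.0])
x = np.array([2.0, 1.0, 3.0, 2.0])              # feasible, four positive components
x_new = reduce_once(A, b, x)
print(x_new, np.allclose(A @ x_new, b))         # fewer positives, still feasible
```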

    Example: Consider the following inequalities

Maximize: z = 3x_1 + 2x_2

Subject to the constraints

x_1 + x_2 ≤ 6
x_2 ≤ 3
x_1, x_2 ≥ 0

Find the basic solutions, the BFS and the extreme points.

Solution. By introducing slack variables x_3 and x_4, the problem is put into the
following standard format:

x_1 + x_2 + x_3 = 6
x_2 + x_4 = 3
x_1, x_2, x_3, x_4 ≥ 0

So the constraint matrix A is given by

A = [ 1  1  1  0 ; 0  1  0  1 ] = (a_1, a_2, a_3, a_4),    b = (6, 3)^T

    Rank(A) = 2

Therefore, the basic solutions correspond to finding a 2 × 2 basis B. The following are
the possible ways of extracting B out of A:

(i) B = (a_1, a_2) = [ 1  1 ; 0  1 ],  B^{-1} = [ 1  -1 ; 0  1 ],
    x_B = (x_1, x_2)^T = B^{-1} b = (3, 3)^T,   x_N = (x_3, x_4)^T = (0, 0)^T

(ii) B = (a_1, a_3) = [ 1  1 ; 0  0 ].  Since |B| = 0, it is not possible to find B^{-1}
     and hence x_B.

(iii) B = (a_1, a_4) = [ 1  0 ; 0  1 ],  B^{-1} = [ 1  0 ; 0  1 ],
      x_B = (x_1, x_4)^T = B^{-1} b = (6, 3)^T,   x_N = (x_2, x_3)^T = (0, 0)^T

(iv) B = (a_2, a_3) = [ 1  1 ; 1  0 ],  B^{-1} = [ 0  1 ; 1  -1 ],
     x_B = (x_2, x_3)^T = B^{-1} b = (3, 3)^T,   x_N = (x_1, x_4)^T = (0, 0)^T

(v) B = (a_2, a_4) = [ 1  0 ; 1  1 ],  B^{-1} = [ 1  0 ; -1  1 ],
    x_B = (x_2, x_4)^T = B^{-1} b = (6, -3)^T,   x_N = (x_1, x_3)^T = (0, 0)^T

(vi) B = (a_3, a_4) = [ 1  0 ; 0  1 ],  B^{-1} = [ 1  0 ; 0  1 ],
     x_B = (x_3, x_4)^T = B^{-1} b = (6, 3)^T,   x_N = (x_1, x_2)^T = (0, 0)^T

Hence we have the following five basic solutions:

x^1 = (3, 3, 0, 0)^T;  x^2 = (6, 0, 0, 3)^T;  x^3 = (0, 3, 3, 0)^T;  x^4 = (0, 6, 0, -3)^T;
x^5 = (0, 0, 6, 3)^T

Of these, all except x^4 are BFS, because x^4 violates the non-negativity restrictions.
The BFS belong to a four-dimensional space. These basic feasible solutions, projected
into the (x_1, x_2) space, give rise to the following four points:

(3, 3),  (6, 0),  (0, 3),  (0, 0)

From the graphical representation the extreme points are (0, 0), (0, 3), (3, 3) and
(6, 0), which are the same as the BFSs. Therefore the extreme points are precisely the
BFS. The no. of BFS is 4, which is less than 6, the number of possible 2 × 2 bases.

    General Mathematical Formulation for Linear

    Programming

Let us define the objective function which is to be optimized:

z = c_1 x_1 + c_2 x_2 + ... + c_n x_n

We have to find the values of the decision variables x_1, x_2, ..., x_n on the basis of
the following m

constraints:

a_{11} x_1 + a_{12} x_2 + ... + a_{1n} x_n (≤, =, ≥) b_1
a_{21} x_1 + a_{22} x_2 + ... + a_{2n} x_n (≤, =, ≥) b_2
...
a_{m1} x_1 + a_{m2} x_2 + ... + a_{mn} x_n (≤, =, ≥) b_m

and

x_j ≥ 0;  j = 1, 2, ..., n

The above formulation can be written in the following compact form by using the
summation sign:

Optimize (maximize or minimize) z = Σ_{j=1}^{n} c_j x_j

Subject to the conditions

Σ_{j=1}^{n} a_{ij} x_j (≤, =, ≥) b_i;  i = 1, 2, ..., m

and

x_j ≥ 0;  j = 1, 2, ..., n

The constants c_j; j = 1, 2, ..., n are called the cost coefficients; the constants
b_i; i = 1, 2, ..., m are called the stipulations; and the constants
a_{ij}; i = 1, 2, ..., m; j = 1, 2, ..., n are called the structural coefficients. In
matrix notation the

above equations can be written as:

Optimize z = CX

Subject to the conditions

AX (≤, =, ≥) B

where

C = (c_1, c_2, ..., c_n)        (1 × n);

X = (x_1, x_2, ..., x_n)^T      (n × 1);

A = [ a_{11}  a_{12}  ...  a_{1n} ]
    [ a_{21}  a_{22}  ...  a_{2n} ]
    [   .       .            .    ]
    [ a_{m1}  a_{m2}  ...  a_{mn} ]   (m × n);

B = (b_1, b_2, ..., b_m)^T      (m × 1)

Here A is called the coefficient matrix, X is called the decision vector, B is called
the requirement vector and C is called the cost vector of the linear programming
problem.

    The Standard Form of LP Problem

    The use of basic solutions to solve the general LP

    models requires putting the problem in standard form.

The following are the characteristics of the standard form:

    (i) All the constraints are expressed in the form of


    equations except the non-negative restrictions on the

    decision variables which remain inequalities

    (ii) The right hand side of each constraint equation is

    non-negative

    (iii) All the decision variables are non-negative

    (iv) The objective function may be of the maximization

    or the minimization type

    Conversion of Inequalities into Equations:

An inequality constraint of the type (≤, ≥) can be converted to an equation by adding a
variable to, or subtracting a variable from, the left-hand side of the constraint. These
new variables are called slack variables or simply slacks. They are added if the
constraints are of the ≤ type and subtracted if the constraints are of the ≥ type. Since
in the case of the ≥ type the subtracted variable represents the surplus of the
left-hand side over the right-hand side, it is commonly known as a surplus variable and
is in fact a negative slack.

    For example

x_1 + x_2 ≤ b_1

is equivalent to

x_1 + x_2 + s_1 = b_1.

If x_1 + x_2 ≥ b_2, it

is equivalent to

x_1 + x_2 − s_2 = b_2.

The general LP problem discussed above can be expressed in the following standard form:

Optimize z = Σ_{j=1}^{n} c_j x_j

Subject to the conditions

Σ_{j=1}^{n} a_{ij} x_j ± s_i = b_i;  i = 1, 2, ..., m

x_j ≥ 0;  j = 1, 2, ..., n

and

s_i ≥ 0;  i = 1, 2, ..., m

In matrix notation, the general LP problem can be written in the following standard
form:

Optimize z = CX

Subject to the conditions

AX ± S = B

X ≥ 0,  S ≥ 0
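In matrix terms this conversion simply appends a signed identity column per constraint: +1 for a ≤ row (slack) and −1 for a ≥ row (surplus). A small sketch, assuming NumPy and invented coefficients:

```python
# Building the standard (equality) form: <= rows get a +slack, >= rows get a -surplus.
import numpy as np

A = np.array([[1.0, 2.0],          # hypothetical constraint rows
              [3.0, 1.0]])
senses = ["<=", ">="]              # type of each constraint

cols = []
for i, sense in enumerate(senses):
    e = np.zeros(len(senses))
    e[i] = 1.0 if sense == "<=" else -1.0
    cols.append(e)
A_std = np.hstack([A, np.column_stack(cols)])
print(A_std)                       # [[ 1.  2.  1.  0.]
                                   #  [ 3.  1.  0. -1.]]
```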

Example: Express the following LP problem in standard form:

Maximize z = 3x_1 + 2x_2

Subject to the conditions

2x_1 + x_2 ≤ 2
3x_1 + 4x_2 ≥ 12
x_1, x_2 ≥ 0

Solution: Introducing a slack and a surplus variable, the problem can be expressed in
standard form as given below:

Maximize z = 3x_1 + 2x_2

Subject to the conditions

2x_1 + x_2 + s_1 = 2
3x_1 + 4x_2 − s_2 = 12
x_1, x_2, s_1, s_2 ≥ 0

    Conversion of Unrestricted Variable into

    Non-negative Variables

An unrestricted variable x_j can be expressed in terms of two non-negative variables by
using a substitution such that

x_j = x_j^+ − x_j^−;   x_j^+, x_j^− ≥ 0

For example, if x_j = −10, then x_j^+ = 0 and x_j^− = 10. If x_j = 10, then x_j^+ = 10
and x_j^− = 0.

The substitution is effected in all constraints and in the objective function. After
solving the problem in terms of x_j^+ and x_j^−, the value of the original variable x_j
is then determined through back substitution.
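A tiny sketch of this substitution and of the back substitution x_j = x_j^+ − x_j^−:

```python
# Splitting an unrestricted variable into two non-negative parts, and going back.
def split_free(x):
    """Return (x_plus, x_minus) with x = x_plus - x_minus and both parts >= 0."""
    return (x, 0.0) if x >= 0 else (0.0, -x)

def back_substitute(x_plus, x_minus):
    return x_plus - x_minus

print(split_free(-10.0))              # (0.0, 10.0)
print(split_free(10.0))               # (10.0, 0.0)
print(back_substitute(0.0, 10.0))     # -10.0
```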

    Example: Express the following linear programming

    problem in the standard form;

Maximize z = 3x_1 + 2x_2 + 5x_3

Subject to

2x_1 − 3x_2 ≤ 3
x_1 + 2x_2 + 3x_3 ≥ 5
3x_1 + 2x_3 ≤ 2
x_1, x_2 ≥ 0

Solution: Here x_1 and x_2 are restricted to be non-negative while x_3 is unrestricted.
Let us express x_3 as x_3 = x_3^+ − x_3^−, where x_3^+ ≥ 0 and x_3^− ≥ 0. Now,
introducing slack and surplus variables, the problem can be written in

    the standard form which is given by;

Maximize z = 3x_1 + 2x_2 + 5(x_3^+ − x_3^−)

Subject to the conditions

2x_1 − 3x_2 + s_1 = 3
x_1 + 2x_2 + 3x_3^+ − 3x_3^− − s_2 = 5
3x_1 + 2x_3^+ − 2x_3^− + s_3 = 2
x_1, x_2, x_3^+, x_3^−, s_1, s_2, s_3 ≥ 0

    Conversion of Maximization to Minimization:

The maximization of a function f(x_1, x_2, ..., x_n) is equivalent to the minimization
of −f(x_1, x_2, ..., x_n), in the sense that both problems yield the same optimal values
of x_1, x_2, ..., and x_n.
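A one-line check of this equivalence on the corner points of the earlier example (the four points and the objective 3x_1 + 2x_2 are taken from the basic-solutions example above):

```python
# The point that maximizes f also minimizes -f; only the optimal value changes sign.
f = lambda p: 3 * p[0] + 2 * p[1]
corners = [(0, 0), (0, 3), (3, 3), (6, 0)]

x_max = max(corners, key=f)
x_min_of_neg = min(corners, key=lambda p: -f(p))
assert x_max == x_min_of_neg
print(x_max, f(x_max))                # (6, 0) 18
```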