Authors

Nak Je Kim

Document Type

Report

Publication Date

1-1-1968

Abstract

Linear programming was first developed by George B. Dantzig, Marshall Wood, and associates of the U.S. Air Force in 1947. At that time, the Air Force organized a research group under the title of Project SCOOP (Scientific Computation of Optimum Programs). This project contributed to the development of a general interindustry model based on the Leontief input-output model, the Air Force programming and budgeting problem, and problems involving the relationship between two-person zero-sum games and linear programming. The result was the formal development and application of the linear programming model. This project also developed the simplex computational method for finding the optimum feasible program. Early applications of linear programming were made in the military, in economics, and in the theory of games. During the last decade, however, linear programming applications have been extended to such other fields as management, engineering, and agriculture.

As the application of linear programming has extended to many other fields, Dantzig (1955), Tintner (1955), Beale (1955), Madansky (1960), and others have been responsible for the formulation and development of stochastic linear programming. The stochastic linear programming problem occurs when some of the coefficients, in the objective function and/or in the constraint system of the linear programming model, are subject to random variation.

In the literature, several methods are indicated for formulating and solving the linear programming problem with random requirements. The intention of this study is to review some of these methods and to compare one with another in terms of the optimum value of the objective function which results from each method. Three methods will be considered. The first method is to replace the random element with its expected value and solve the resulting linear programming problem (Hadley, 1964).
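
To make the first method concrete, the following sketch (written in Python with scipy.optimize.linprog, which is not part of the report; the data are purely illustrative) replaces the random requirement vector b with its expected value and solves the resulting deterministic linear program.

import numpy as np
from scipy.optimize import linprog

C = np.array([3.0, 2.0])                # objective coefficients C (assumed)
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])              # constraint matrix A (assumed)

# Hypothetical, equally likely realizations of the random requirement b.
b_scenarios = np.array([[ 8.0, 10.0],
                        [10.0, 14.0],
                        [12.0, 12.0]])
b_expected = b_scenarios.mean(axis=0)   # replace the random b by E(b)

# Maximize C'X subject to AX <= E(b), X >= 0 (linprog minimizes, so negate C).
res = linprog(-C, A_ub=A, b_ub=b_expected, bounds=[(0, None)] * len(C))
print("x* =", res.x, " optimal C'x =", -res.fun)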

The second method is Dantzig’s two-stage linear programming problem with a random requirement (Dantzig, 1955). Suppose the following linear programming problem is considered:

Min. (or max.) C’X

Subject to: AX ≤ b, X ≥ 0,

where C and X are n by 1 vectors, b is an m by 1 vector, A is an m by n matrix, and C’ is the transpose of C. If the vector b is random and the matrix A is known, then in the first stage a decision is made on X, the random vector b is observed, and AX is compared with b. In the second stage, inaccuracies in the first decision are compensated for by a new decision variable Y with some penalty cost F. The problem then becomes,

Min. (or max.) E(C’X + F’Y), X ≥ 0, Y ≥ 0,

Subject to: AX + BY = b,

where B is an m by 2n matrix with elements of ones, minus ones, and zeroes; Y is a 2n by 1 vector with elements yi and y-i; and E denotes expectation.
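
If the random requirement b is assumed to take only finitely many equally likely values, the two-stage problem above has a deterministic-equivalent linear program with one copy of the second-stage variables for each realization. The following sketch (Python with scipy.optimize.linprog; the data, the penalty costs, and the choice B = [I, -I], which gives 2m rather than 2n second-stage variables, are illustrative assumptions, not taken from the report) builds and solves that equivalent problem.

import numpy as np
from scipy.optimize import linprog

C = np.array([3.0, 2.0])                     # first-stage costs C (assumed)
F = np.array([10.0, 10.0, 10.0, 10.0])       # penalty costs F on (y+, y-) (assumed)
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
B = np.hstack([np.eye(2), -np.eye(2)])       # B = [I, -I]: ones, minus ones, zeroes

# Hypothetical, equally likely realizations of the random requirement b.
b_scenarios = np.array([[ 8.0, 10.0],
                        [10.0, 14.0],
                        [12.0, 12.0]])
S, (m, n) = len(b_scenarios), A.shape
p = np.full(S, 1.0 / S)                      # scenario probabilities
n_y = B.shape[1]

# Deterministic equivalent: min C'X + sum_s p_s F'Y_s
# subject to AX + B Y_s = b_s for every realization s, X >= 0, Y_s >= 0.
c_full = np.concatenate([C] + [p[s] * F for s in range(S)])
A_eq = np.zeros((S * m, n + S * n_y))
for s in range(S):
    A_eq[s * m:(s + 1) * m, :n] = A
    A_eq[s * m:(s + 1) * m, n + s * n_y:n + (s + 1) * n_y] = B
b_eq = b_scenarios.reshape(-1)
res = linprog(c_full, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * (n + S * n_y))
print("first-stage x* =", res.x[:n], " expected cost =", res.fun)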

In the third method, the constraints with random requirements are required to hold at a given probability level. The problem then is to find values of the decision variables which optimize the expected objective function without violating the given probability measure (Charnes and Cooper, 1962).
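
As an illustration of the third method, suppose each random requirement bi is assumed to follow a known normal distribution and each constraint must hold with probability at least alpha. The chance constraint P(ai’X ≤ bi) ≥ alpha then has the deterministic equivalent ai’X ≤ (1 − alpha)-quantile of bi, which is again an ordinary linear program. The sketch below (Python with scipy; the normality assumption, the distribution parameters, and alpha are illustrative, not from the report) shows this reduction.

import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

C = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0],
              [2.0, 1.0]])
b_mean = np.array([10.0, 12.0])   # assumed means of the random requirements
b_std = np.array([1.5, 2.0])      # assumed standard deviations
alpha = 0.95                      # required probability level (assumed)

# Deterministic equivalent: replace each b_i by its (1 - alpha)-quantile, so
# that each row a_i'X <= b_i holds with probability at least alpha.
b_quantile = norm.ppf(1.0 - alpha, loc=b_mean, scale=b_std)

# Maximize C'X subject to AX <= b_quantile, X >= 0.
res = linprog(-C, A_ub=A, b_ub=b_quantile, bounds=[(0, None)] * len(C))
print("x* =", res.x, " optimal C'x =", -res.fun)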

This report surveys the literature on basic linear programming and the simplex method of solution, describes random requirements, and illustrates three methods of solution. Finally, the optimal value of the objective function of each method is compared with the others.
