Learn Linear Programming with G. Hadley's Classic Book: PDF Version Available
Linear Programming by G. Hadley: A Classic Book Review
Linear programming is one of the most widely used mathematical techniques for optimizing complex systems. It has applications in various fields, such as operations research, management science, economics, engineering, computer science, biology, ecology, and more. In this article, we will review a classic book on linear programming by G. Hadley, who was a prominent mathematician and professor at Stanford University. We will discuss what linear programming is, why it is important, who G. Hadley was, what his book is about, and how to get a free PDF copy of it.
What is linear programming and why is it important?
Linear programming is a branch of mathematics that deals with finding the optimal solution to a problem that involves minimizing or maximizing a linear function subject to a set of linear constraints. A linear function is a function that can be written as a sum of products of constants and variables, such as f(x, y) = ax + by + c. A linear constraint is an equation or inequality that involves a linear function, such as ax + by ≤ c.
The problem of finding the optimal solution to a linear programming problem can be formulated as follows:
minimize or maximize f(x) = c1x1 + c2x2 + ... + cnxn
subject to a11x1 + a12x2 + ... + a1nxn ≤ b1
a21x1 + a22x2 + ... + a2nxn ≤ b2
...
am1x1 + am2x2 + ... + amnxn ≤ bm
x1, x2, ..., xn ≥ 0
where x1, x2, ..., xn are the decision variables, c1, c2, ..., cn are the coefficients of the objective function, aij and bi are the coefficients and constants of the constraints, and m and n are the number of constraints and variables, respectively.
The optimal solution to a linear programming problem is the set of values of the decision variables that satisfy all the constraints and make the objective function reach its minimum or maximum value. The optimal solution may be unique, multiple, or nonexistent, depending on the nature of the problem. The optimal solution, if it exists, can be found by using various methods, such as the simplex method, duality theory, sensitivity analysis, and more.
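To make the formulation above concrete, here is a small hypothetical two-variable example (not taken from Hadley's book) solved with SciPy's linprog, which expects a minimization problem with constraints in the form Ax ≤ b:

```python
from scipy.optimize import linprog

# Maximize 3x1 + 5x2 subject to x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0.
# linprog minimizes, so we negate the objective coefficients.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)      # optimal decision variables
print(-res.fun)   # optimal (maximized) objective value
```

The solver returns the optimal corner point of the feasible region; for this data the optimum is x1 = 2, x2 = 6 with objective value 36.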
Linear programming is important because it can model and solve many real-world problems that involve optimizing resources, costs, profits, production, transportation, scheduling, allocation, and more. Linear programming can also be used as a tool for analyzing and understanding other mathematical models and concepts, such as game theory, network flows, convex sets, and more.
The history and development of linear programming
The origins of linear programming can be traced back to the 18th century, when mathematicians such as Leonhard Euler and Joseph-Louis Lagrange studied problems involving linear equations and inequalities. However, the modern formulation and theory of linear programming emerged in the 20th century, mainly due to the efforts of George Dantzig, John von Neumann, Leonid Kantorovich, Harold Kuhn, Albert Tucker, and others.
The simplex method
The simplex method is one of the most famous and widely used algorithms for solving linear programming problems. It was developed by George Dantzig in 1947, when he was working as a mathematician for the US Air Force. Dantzig was inspired by a lecture given by John von Neumann on game theory and linear inequalities. He realized that he could use a similar approach to find the optimal solution to a linear programming problem by moving from one extreme point (or vertex) of the feasible region (the set of all points that satisfy the constraints) to another along the edges of the region until reaching the optimal point.
The simplex method works by transforming the original problem into an equivalent problem in standard form (where all constraints are equalities and all variables are nonnegative), then constructing an initial basic feasible solution (a solution that satisfies all the constraints and has at most m positive variables, where m is the number of constraints), then iteratively improving the solution by replacing one basic variable with a nonbasic variable that improves the objective function value (a pivot, or exchange), until no further improvement is possible or until the problem is found to be unbounded (infeasibility is detected while constructing the initial solution).
The simplex method is efficient and reliable for most practical problems, but it has some drawbacks. For example, it may encounter degeneracy (when a basic variable takes the value zero, so a pivot may fail to improve the objective), cycling (when it repeats the same sequence of bases indefinitely on a degenerate problem), or exponential complexity (in the worst case it can require an exponential number of pivots, as the Klee-Minty examples show).
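To make the pivoting steps concrete, here is a minimal tableau implementation of the simplex method for maximization problems with ≤ constraints and nonnegative right-hand sides. This is an illustrative sketch only: it uses Dantzig's most-negative-coefficient entering rule and no anti-cycling safeguard, so it can stall on degenerate problems.

```python
import numpy as np

def simplex(c, A, b):
    """Maximize c @ x subject to A @ x <= b, x >= 0 (assumes b >= 0)."""
    m, n = A.shape
    # Build the tableau with slack variables appended.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -c                       # objective row (maximization)
    basis = list(range(n, n + m))        # slacks start in the basis
    while True:
        col = int(np.argmin(T[-1, :-1]))
        if T[-1, col] >= -1e-12:         # no improving column: optimal
            break
        ratios = [T[i, -1] / T[i, col] if T[i, col] > 1e-12 else np.inf
                  for i in range(m)]     # minimum-ratio test
        row = int(np.argmin(ratios))
        if ratios[row] == np.inf:
            raise ValueError("problem is unbounded")
        T[row] /= T[row, col]            # pivot: normalize the pivot row...
        for i in range(m + 1):
            if i != row:
                T[i] -= T[i, col] * T[row]   # ...and eliminate the column
        basis[row] = col
    x = np.zeros(n + m)
    for i, var in enumerate(basis):
        x[var] = T[i, -1]
    return x[:n], T[-1, -1]              # optimal point and objective value

x, z = simplex(np.array([3.0, 5.0]),
               np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]]),
               np.array([4.0, 12.0, 18.0]))
print(x, z)
```

Each pass through the loop is one exchange: a nonbasic variable enters the basis, a basic variable leaves, and the solution moves to an adjacent vertex of the feasible region.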
Duality theory
Duality theory is another important concept in linear programming that provides a way of analyzing and solving linear programming problems from a different perspective. It was developed by John von Neumann in 1947, when he was working on game theory and matrix analysis. He showed that every linear programming problem has a corresponding dual problem that involves maximizing or minimizing a different objective function subject to different constraints. The dual problem can be obtained by applying a simple transformation to the original problem.
The dual problem has some remarkable properties that relate it to the original problem. For example, if one problem has an optimal solution, then so does the other, and the two optimal values are equal (strong duality); and if one problem is unbounded, then the other must be infeasible. Moreover, duality theory relates optimal solutions of the two problems through the complementary slackness conditions, which make it possible to recover an optimal solution to one problem from an optimal solution to the other.
Duality theory is useful because it can simplify and improve the solution process of linear programming problems. For example, it can help to choose between the primal and dual problems based on their complexity and data availability; it can provide a way of checking the optimality and feasibility of a solution; it can give insight into the economic interpretation and sensitivity of a solution; and it can facilitate the development of new algorithms and extensions of linear programming.
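Strong duality is easy to verify numerically. The sketch below (hypothetical data) solves a primal problem of the form min c·x subject to Ax ≥ b, x ≥ 0, builds its dual max b·y subject to Aᵀy ≤ c, y ≥ 0 by the standard transformation, and solves both with linprog:

```python
import numpy as np
from scipy.optimize import linprog

# Primal (hypothetical data): minimize c @ x subject to A @ x >= b, x >= 0.
c = np.array([2.0, 3.0])
A = np.array([[1.0, 1.0], [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog wants A_ub @ x <= b_ub, so multiply the >= constraints by -1.
primal = linprog(c, A_ub=-A, b_ub=-b)

# Dual: maximize b @ y subject to A.T @ y <= c, y >= 0
# (negate the objective because linprog minimizes).
dual = linprog(-b, A_ub=A.T, b_ub=c)

print(primal.fun)   # primal optimum
print(-dual.fun)    # dual optimum -- equal by strong duality
```

For this data both problems attain the same optimal value, 9, as strong duality predicts.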
Sensitivity analysis
Sensitivity analysis is another useful concept in linear programming that deals with studying how the optimal solution and value of a linear programming problem change when some parameters of the problem are perturbed or modified. It can answer questions such as: How much would the optimal value increase or decrease if the objective function coefficient of a variable changes by one unit? How much can a constraint constant increase or decrease without affecting the optimal solution? How much slack or surplus is there in each constraint at the optimal solution? How much would the right-hand side of a constraint have to change for a nonbasic variable to become basic?
Sensitivity analysis can be performed by using various methods, such as graphical analysis, algebraic analysis, matrix analysis, and more. One of the most common methods is to use the information provided by the final simplex tableau, which contains the optimal values of the variables, the reduced costs of the nonbasic variables, the shadow prices of the constraints, and the ranges of feasibility and optimality for the parameters. Sensitivity analysis can also be done by using software tools that can automatically generate sensitivity reports for linear programming problems.
Sensitivity analysis is useful because it can help to assess the robustness and reliability of a linear programming solution. For example, it can help to identify the most critical and influential parameters of the problem; it can help to evaluate the impact and trade-offs of different scenarios and alternatives; it can help to measure the efficiency and productivity of the resources and activities; and it can help to improve and refine the model formulation and data collection.
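One simple way to perform sensitivity analysis in code is to perturb a parameter and re-solve. The sketch below (hypothetical production-style data) estimates the shadow price of a constraint by increasing its right-hand side by one unit and measuring the change in the optimum:

```python
from scipy.optimize import linprog

# Maximize 3x1 + 5x2 (minimize the negation) subject to three constraints.
c = [-3, -5]
A_ub = [[1, 0], [0, 2], [3, 2]]
b_ub = [4, 12, 18]

base = linprog(c, A_ub=A_ub, b_ub=b_ub)

# Shadow price of constraint 3: raise its right-hand side by one unit
# and re-solve; the change in the optimum is the shadow price (valid
# while the change stays within the constraint's feasibility range).
pert = linprog(c, A_ub=A_ub, b_ub=[4, 12, 19])
shadow_price_3 = -pert.fun - (-base.fun)
print(shadow_price_3)
```

For this data the third constraint has a shadow price of 1: each extra unit of that resource raises the optimal objective value by one, until the basis changes. Solvers can report the same quantities directly as dual values, which avoids the extra solve.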
Transportation and assignment problems
Transportation and assignment problems are special types of linear programming problems that involve finding the optimal way of allocating or distributing a certain quantity of goods or resources from a set of sources or origins to a set of destinations or demands. A transportation problem can be formulated as follows:
minimize Z = Σ(i=1 to m) Σ(j=1 to n) cijxij
subject to Σ(j=1 to n) xij = ai, i = 1, 2, ..., m
Σ(i=1 to m) xij = bj, j = 1, 2, ..., n
xij ≥ 0, i = 1, 2, ..., m; j = 1, 2, ..., n
where xij is the amount of goods or resources transported from source i to destination j, cij is the cost per unit of transporting from source i to destination j, ai is the supply available at source i, bj is the demand required at destination j, m is the number of sources, and n is the number of destinations.
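A balanced transportation problem (total supply equals total demand) can be solved directly as a linear program. The sketch below uses hypothetical data with two sources and three destinations and builds the equality constraints of the formulation above:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 sources, 3 destinations (balanced: supply == demand).
cost = np.array([[4.0, 6.0, 8.0],
                 [5.0, 3.0, 7.0]])
supply = np.array([30.0, 40.0])
demand = np.array([20.0, 25.0, 25.0])

m, n = cost.shape
c = cost.ravel()                       # x is laid out row by row

A_eq = np.zeros((m + n, m * n))
for i in range(m):                     # supply rows: sum over j of x[i, j] = a_i
    A_eq[i, i * n:(i + 1) * n] = 1
for j in range(n):                     # demand rows: sum over i of x[i, j] = b_j
    A_eq[m + j, j::n] = 1
b_eq = np.concatenate([supply, demand])

res = linprog(c, A_eq=A_eq, b_eq=b_eq)  # bounds default to x >= 0
plan = res.x.reshape(m, n)
print(plan)
print(res.fun)                          # minimum total shipping cost
```

For this data the minimum cost is 340. A dedicated transportation-simplex code would exploit the constraint structure instead of storing the dense matrix, but the general solver works fine at this size.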
An assignment problem can be seen as a special case of a transportation problem where m = n and ai = bj = 1 for all i and j. An assignment problem can be formulated as follows:
minimize Z = Σ(i=1 to n) Σ(j=1 to n) cijxij
subject to Σ(j=1 to n) xij = 1, i = 1, 2, ..., n
Σ(i=1 to n) xij = 1, j = 1, 2, ..., n
xij = 0 or 1, i = 1, 2, ..., n; j = 1, 2, ..., n
where xij is a binary variable that indicates whether source i is assigned to destination j or not, cij is the cost or benefit of assigning source i to destination j, and n is the number of sources and destinations.
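SciPy ships a solver for exactly this problem, linear_sum_assignment, which implements a Hungarian-style algorithm. The cost matrix below is hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical cost matrix: cost[i, j] = cost of assigning worker i to task j.
cost = np.array([[9, 2, 7],
                 [6, 4, 3],
                 [5, 8, 1]])

row_ind, col_ind = linear_sum_assignment(cost)   # optimal one-to-one matching
total = cost[row_ind, col_ind].sum()
print(list(zip(row_ind, col_ind)))
print(total)
```

For this matrix the optimal assignment is worker 0 to task 1, worker 1 to task 0, and worker 2 to task 2, at total cost 9. Note that even though the formulation restricts xij to 0 or 1, the LP relaxation of an assignment problem always has an integral optimal solution, which is why LP methods apply.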
Transportation and assignment problems can be solved by using general linear programming methods, such as the simplex method, but they can also be solved by using specific methods that exploit their special structure and properties, such as the transportation simplex method, the northwest corner rule, Vogel's approximation method, the Hungarian method, and more.
Transportation and assignment problems are important because they can model and solve many practical problems that involve allocating or matching resources efficiently, such as shipping goods from factories to warehouses, assigning workers to tasks, scheduling flights or trains, matching students to schools, and more.
Network flow problems
Network flow problems are another special type of linear programming problems that involve finding the optimal way of sending a certain quantity of flow (such as water, gas, electricity, traffic, data, etc.) through a network of nodes and arcs. A network flow problem can be formulated as follows:
minimize or maximize Z = Σ((i,j) ∈ A) cijxij
subject to Σ(j: (i,j) ∈ A) xij − Σ(j: (j,i) ∈ A) xji = bi, i ∈ N
lij ≤ xij ≤ uij, (i,j) ∈ A
xij ≥ 0, (i,j) ∈ A
where xij is the amount of flow sent from node i to node j along arc (i,j), cij is the cost or benefit per unit of flow sent along arc (i,j), bi is the net supply or demand of flow at node i (positive for supply, negative for demand, zero for intermediate), lij and uij are the lower and upper bounds on the flow along arc (i,j), N is the set of nodes, and A is the set of arcs.
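The formulation translates directly into code: one equality row per node (flow out minus flow in equals net supply) and one bounded variable per arc. The tiny min-cost flow instance below is hypothetical:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical min-cost flow instance: send 4 units from node 0 to node 2.
nodes = [0, 1, 2]
arcs = [(0, 1), (0, 2), (1, 2)]         # arc list A
cost = [1.0, 3.0, 1.0]                  # c_ij per unit of flow
upper = [3.0, 4.0, 3.0]                 # u_ij capacities (l_ij = 0)
b = [4.0, 0.0, -4.0]                    # net supply at each node (sums to 0)

# Flow-conservation matrix: +1 for flow leaving node i, -1 for flow entering.
A_eq = np.zeros((len(nodes), len(arcs)))
for k, (i, j) in enumerate(arcs):
    A_eq[i, k] = 1
    A_eq[j, k] = -1

res = linprog(cost, A_eq=A_eq, b_eq=b, bounds=[(0, u) for u in upper])
print(res.x)     # optimal flow on each arc
print(res.fun)   # minimum total cost
```

Here the cheap path 0→1→2 carries its full capacity of 3 units and the remaining unit takes the direct arc 0→2, for a total cost of 9.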
A network flow problem can have different variants depending on the objective function and the constraints. For example, a minimum cost flow problem aims to minimize the total cost of sending a given amount of flow through the network; a maximum flow problem aims to maximize the total amount of flow that can be sent from a source node to a sink node in the network; a shortest path problem aims to find the path with the minimum cost or distance from a source node to a sink node in the network; and a maximum matching problem aims to find the largest set of arcs that do not share any common nodes in the network.
Network flow problems can be solved by using general linear programming methods, such as the simplex method, but they can also be solved by using specific methods that exploit their special structure and properties, such as the network simplex method, the Ford-Fulkerson algorithm, Dijkstra's algorithm, the Edmonds-Karp algorithm, and more.
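For the maximum flow variant, SciPy provides a dedicated combinatorial solver that is far faster than a general LP on large graphs. The capacity matrix below is hypothetical (SciPy requires integer capacities):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Hypothetical capacities: arcs 0->1 (3), 0->2 (2), 1->2 (1), 1->3 (2), 2->3 (3).
cap = np.array([[0, 3, 2, 0],
                [0, 0, 1, 2],
                [0, 0, 0, 3],
                [0, 0, 0, 0]])
graph = csr_matrix(cap, dtype=np.int32)   # integer capacities are required

res = maximum_flow(graph, 0, 3)           # source node 0, sink node 3
print(res.flow_value)                     # value of the maximum flow
```

For this network the maximum flow is 5, matching the capacity of the cut around the source (3 + 2), which illustrates the max-flow min-cut theorem in passing.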
Network flow problems are important because they can model and solve many practical problems that involve routing or distributing flow efficiently, such as water supply systems, gas pipelines, power grids, traffic networks, communication networks, social networks, and more.
The strengths and weaknesses of linear programming

