What is the advantage of adding a $\log$ barrier to solve a linear program?
Let $A \in \mathbb{R}^{m \times n}$, $b \in \mathbb{R}^{m}$, and $x \in \mathbb{R}^{n}$, and write the system of linear inequalities $Ax \leq b$ row-wise as $a_i^T x - b_i \leq 0$ for all $i = 1, \ldots, m$. Consider the following linear program:
$$
\min_{Ax \leq b} c^T x \tag{1}
$$
where $c \in \mathbb{R}^{n}$ is an arbitrary vector. If this problem has a solution, then by the representation theorem (Weyl-Minkowski), an optimal solution is attained at a vertex of the constraint set.
*Convex Optimization* (Boyd and Vandenberghe), page 499, states that $f(x) = -\sum_{i=1}^m \log(b_i - a_i^T x)$ is a self-concordant $\log$-barrier function for linear inequalities. So the constrained problem above can be turned into the unconstrained problem
$$
\min_{x \in \mathbb{R}^{n}} c^T x - \sum_{i=1}^m \log(b_i - a_i^T x) \tag{2}
$$
Let the solutions of (1) and (2) be $x_1^\ast$ and $x_2^\ast$ respectively.
1- Are the solutions the same ($x_1^\ast = x_2^\ast$)? (The barrier prevents an algorithm from approaching the boundary of the constraint set, since it blows up to infinity there; yet the solution of (1) lies on the boundary. How can (2) recover it?)
2- In terms of iteration count and algorithmic complexity, which one is superior? I know that (2) can be solved with Newton's method, but how does using Newton's method accelerate the algorithm?
3- Can we show analytically that the two problems are equivalent, or that one is superior to the other?
Please address my questions carefully, providing rigorous proofs where possible.
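To make question 1 concrete, here is a small numerical sketch (NumPy assumed; the instance, starting point, and tolerances are my own choices for illustration). It applies Newton's method to (2) for a tiny box-constrained LP whose LP solution is the vertex $(0,0)$:

```python
import numpy as np

# Toy instance of (1): minimize x1 + x2 over the box 0 <= x <= 1.
# The LP optimum is the vertex (0, 0); we check where (2) ends up instead.
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])
c = np.array([1.0, 1.0])

def grad_hess(x):
    s = 1.0 / (b - A @ x)            # reciprocal slacks, 1 / (b_i - a_i^T x)
    g = c + A.T @ s                  # gradient of c^T x - sum_i log(b_i - a_i^T x)
    H = A.T @ (s[:, None] ** 2 * A)  # Hessian: A^T diag(s^2) A
    return g, H

x = np.array([0.5, 0.5])             # strictly feasible starting point
for _ in range(50):
    g, H = grad_hess(x)
    if np.linalg.norm(g) < 1e-10:
        break
    dx = -np.linalg.solve(H, g)      # Newton step
    step = 1.0
    while np.any(b - A @ (x + step * dx) <= 0):
        step *= 0.5                  # backtrack until strictly feasible
    x = x + step * dx

print(x)  # approximately [0.382, 0.382]: interior, not the LP vertex (0, 0)
```

By symmetry and separability, the exact minimizer of (2) here is $\big((3-\sqrt{5})/2,\ (3-\sqrt{5})/2\big) \approx (0.382, 0.382)$, strictly inside the box, so in general $x_2^\ast \neq x_1^\ast$.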
optimization convex-analysis convex-optimization linear-programming
Ah, thanks, yes!
– jjjjjj, Jan 17 at 0:02
asked Jan 16 at 23:25 by Saeed, edited Jan 17 at 0:13 by El borito
1 Answer
No, the solutions are not the same. On p. 499, the book is merely presenting an example of a self-concordant function.
When you ask about algorithm complexity for problem (1), which algorithm do you have in mind? At this point in the book's development, it is not clear how to solve problem (1) efficiently. The book will present a strategy that utilizes log-barrier functions in chapter 11.
No, the problems are not equivalent.
You should read chapter 11 (Interior-point methods), and specifically section 11.2, which presents a strategy to minimize $f_0(x)$ subject to the constraints $f_i(x) \leq 0$, $Ax = b$ by solving a sequence of subproblems of the form
\begin{align}
\text{minimize} \quad & f_0(x) - \frac{1}{t} \sum_{i=1}^m \log(-f_i(x)) \\
\text{subject to} \quad & Ax = b
\end{align}
with increasing values of $t$. Newton's method is used to solve each subproblem. As $t$ increases, the solution of the subproblem becomes a better approximation to the solution of the original problem. That is how log-barrier functions are used to solve problems with inequality constraints.
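As a rough illustration of this outer loop (my own toy instance and parameters, NumPy assumed; a box-constrained LP with no equality constraint $Ax = b$, and a feasibility-only backtracking step rather than a proper damped Newton line search):

```python
import numpy as np

# Toy LP: minimize x1 + x2 over the box 0 <= x <= 1;
# the true LP optimum is the vertex (0, 0).
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([0.0, 0.0, 1.0, 1.0])
c = np.array([1.0, 1.0])
m = len(b)

def newton_center(x, t, iters=50):
    """Approximately minimize t * c^T x - sum_i log(b_i - a_i^T x)."""
    for _ in range(iters):
        s = 1.0 / (b - A @ x)                # reciprocal slacks
        g = t * c + A.T @ s                  # gradient
        H = A.T @ (s[:, None] ** 2 * A)      # Hessian
        dx = -np.linalg.solve(H, g)          # Newton step
        step = 1.0
        while np.any(b - A @ (x + step * dx) <= 0):
            step *= 0.5                      # backtrack to stay interior
        x = x + step * dx
        if np.linalg.norm(dx) < 1e-14:
            break
    return x

x = np.array([0.5, 0.5])                     # strictly feasible start
t = 1.0
for _ in range(30):                          # outer loop: center, then raise t
    x = newton_center(x, t)
    if m / t < 1e-8:                         # m/t bounds the suboptimality
        break
    t *= 10.0

print(x)  # driven toward the LP vertex (0, 0) as t grows
```

Each tenfold increase in $t$ moves the minimizer of the subproblem closer to the vertex $(0,0)$; the bound $m/t$ on the suboptimality is the standard stopping criterion for the barrier method.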
I appreciate your patience, but I am a little bit confused. A linear program is a convex problem, so it can be solved using an interior-point method; is that true? To understand my question better: are the solutions $x_1^{\ast}$ and $x_2^{\ast}$ close? Can (2) be seen as a subproblem of (1) that gives an approximate solution? How many subproblems do we need in order to solve (1) using an interior-point method? What is the optimal value of $t$ for getting a solution close to $x_1^{\ast}$?
– Saeed
Jan 17 at 0:21
Yes, a linear program is a convex problem, and a linear program can be solved using an interior point method. In fact, interior point methods often give state of the art performance for solving LPs. Interior point methods were originally developed for linear programming problems, and only later were extended to more general convex problems.
– littleO
Jan 17 at 0:26
No, the solution to problem (2) is not necessarily very close to the solution to problem (1). However, if you had written problem (2) as minimizing $c^T x - (1/t) \sum_{i=1}^m \log(b_i - a_i^T x)$, then we could say that for large $t$ the solution to problem (2) is close to the solution to problem (1).
– littleO
Jan 17 at 0:29
When the factor of $1/t$ is included, and $t$ is large, then the solution to (2) gives a good approximation to the solution of (1). Regarding how many subproblems you need to solve, and how large $t$ needs to be, I would say just read chapter 11 for more info about that.
– littleO
Jan 17 at 0:31
Thank you so much.
– Saeed
Jan 17 at 1:02
answered Jan 16 at 23:53 by littleO, edited Jan 17 at 0:09