
Let us consider a very simple convex program of the form $$\mathsf{opt} = \inf_{\substack{Ax = b\\Cx\leq d}} f(x)$$ where $x\in\mathbb{R}^n$, $f$ is a (smooth enough) convex function, and $A$, $C$ are constraint matrices and $b, d$ vectors of appropriate dimensions, all with rational entries (that way, they can be represented on a computer; we also assume $f$ can somehow be represented, e.g., if it is a convex quadratic function with rational coefficients, etc.). We also assume the problem is strictly feasible, so pretty much the best-case scenario for interior-point methods (see Boyd & Vandenberghe, Chapter 11).

By construction, interior-point methods generate solutions that carry an additive error term, i.e., for $\delta > 0$ small enough they generate a point $x_{\delta}$ such that $$f(x_{\delta}) - \mathsf{opt} \leq \delta, \qquad Ax_{\delta}=b, \qquad (Cx_{\delta})_i \leq d_i + \delta \;\text{ for all } i,$$ see also the summary on Wikipedia.

However, I am wondering whether such problems also admit an $\mathsf{FPTAS}$, as claimed on this post? For an algorithm $\verb|algo|$ to be a Fully Polynomial-Time Approximation Scheme for $\mathsf{opt}$, it basically needs to satisfy $$\mathsf{opt} \leq \verb|algo| \leq (1+\varepsilon)\mathsf{opt}$$ for any $\varepsilon > 0$, where the runtime of $\verb|algo|$ must be polynomial in the representation size of the instance and in $1/\varepsilon$.

Surely, this must be wrong unless we assume that $\mathsf{opt} \geq 0$: if the optimal value is negative, the multiplicative guarantee in the definition of an $\mathsf{FPTAS}$ is ill-defined.

Did someone else already have to distinguish between a 'good enough in practice' additive error term, and an actual $\mathsf{FPTAS}$? Thank you very much in advance!


1 Answer


You are correct that the definition of an FPTAS is ill-defined unless $\mathsf{opt} \geq 0$. Usually, when people discuss an FPTAS it is in the context of a combinatorial problem where $\mathsf{opt} \geq 0$ either holds, or can be made to hold without loss of generality. If you have a lower bound $L$ for $f$ (at least on the feasible region), then you can minimize the function $f(x) - L$. If you know that $\mathsf{opt} < 0$, then you can maximize the function $-f(x)$. Otherwise, you could ask that $|\mathsf{algo}-\mathsf{opt}| \leq \varepsilon|\mathsf{opt}|$, which aligns with the usual definition when $\mathsf{opt} \geq 0$.
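The lower-bound shift described above can be sketched in a few lines; the function $f$ and the bound $L$ below are hypothetical examples chosen for illustration:

```python
# Sketch: given a lower bound L on f over the feasible region, minimize
# g(x) = f(x) - L instead; its optimum is opt - L >= 0, so the usual
# (1 + eps)-style FPTAS guarantee is well-defined for g.
def shifted(f, L):
    """Return g with g(x) = f(x) - L."""
    return lambda x: f(x) - L

# Hypothetical example: opt = f(3) = -5 < 0 breaks the FPTAS definition.
f = lambda x: (x - 3.0) ** 2 - 5.0
L = -5.0                     # a valid lower bound on f
g = shifted(f, L)

assert g(3.0) == 0.0         # shifted optimum is nonnegative
assert g(0.0) == f(0.0) - L  # shift is uniform, so minimizers coincide
```

Since the shift is uniform, minimizers of $g$ and $f$ coincide; only the value of the multiplicative guarantee changes.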

Now let us address the distinction/connection between the additive error term seen in the analysis of interior-point methods, versus an FPTAS. First, I want to state for completeness that an algorithm with an additive error of $\varepsilon$ for any $\varepsilon > 0$ is stronger than an FPTAS (assuming the algorithm also has a running time that is polynomial in the input size and $1/\varepsilon$). Assuming $\mathsf{opt} \geq 1$, when given $\varepsilon > 0$ let $\delta = \varepsilon\mathsf{opt}$. Then find $x_\delta$ such that $f(x_\delta) - \mathsf{opt} \leq \delta = \varepsilon \mathsf{opt}$. Rearranging we get $f(x_\delta) \leq (1+\varepsilon) \mathsf{opt}$ as desired.
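The reduction in this paragraph can be sketched as follows; `additive_oracle` is a hypothetical stand-in for an interior-point solve with additive error $\delta$, and the particular numbers are illustrative:

```python
# Reduction from an additive-error solver to a (1 + eps)-approximation,
# assuming opt >= 1: call the solver with delta = eps * opt. Any positive
# lower bound on opt works in place of opt itself (see the comments below).
def additive_oracle(delta, opt):
    """Hypothetical stand-in: returns a value v with opt <= v <= opt + delta."""
    return opt + 0.5 * delta

def approximate(eps, opt_lower_bound, opt):
    delta = eps * opt_lower_bound       # opt_lower_bound <= opt, both >= 1
    return additive_oracle(delta, opt)

opt, eps = 4.0, 0.1
val = approximate(eps, opt_lower_bound=1.0, opt=opt)
assert opt <= val <= (1 + eps) * opt    # multiplicative guarantee
```

Using a lower bound in place of $\mathsf{opt}$ only makes $\delta$ smaller, so the guarantee $f(x_\delta) \leq \mathsf{opt} + \delta \leq (1+\varepsilon)\mathsf{opt}$ still holds.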

The reason I would not consider all convergent solvers to be FPTAS's (using the Wikipedia page's definition) is that the solutions returned by the solvers aren't necessarily feasible solutions. The constraints are allowed to be violated by $\varepsilon$. I imagine that under some "reasonable" assumptions, one could devise an algorithm to take the nearly feasible solution, and produce a feasible solution of similar cost. But I also suspect there are some pathological examples where you cannot.

  • Thank you for confirming my fears. However, two comments: 1. Choosing $\delta = \varepsilon \mathsf{opt}$ is nice... if you know $\mathsf{opt}$ a priori, but that is rarely the case. The only alternative I could find is to use primal-dual methods to find bounds on $\mathsf{opt}$ that become smaller and smaller, but I suppose I need some additional assumptions so that the initial bound is not too high? 2. The constraints are not that big of an issue, I think, since we get the confirmation $f(x_{\delta}) - \mathsf{opt} \leq \delta$. If we were looking for a feasible point, this would be different. Commented Apr 5 at 5:53
  • Knowing any positive lower bound on $\mathsf{opt}$ is good enough, so long as the resulting $\delta$ has polynomial size, but I understand the issue. Using duality to find such a lower bound would be a reasonable idea. Commented Apr 6 at 0:57
  • Also, I am curious about your motivation. The initial question made me think you were interested from a theoretical perspective, but your comment makes me feel differently. Commented Apr 6 at 0:59
  • Well, a lot of people usually write that their convex optimization problem is solvable in polynomial time - I did too. But that just doesn't seem to be the case in general (only linear programming is actually in $\mathsf{P}$), so I was hoping that one could at least argue these problems are in $\mathsf{FPTAS}$, but even that seems wrong... You were right, the fact that interior-point methods don't necessarily find a feasible point is actually quite critical, and I don't see any way to get around it... I'll therefore mark the question as answered, unless someone else has another idea... Commented Apr 6 at 17:33
  • I think you would need to ensure that an approximate solution to the KKT equations is still close to optimal, but that will be true if $f$ is "smooth enough", as you said in the original question. Commented Apr 6 at 18:20
