Let us consider a very simple convex program of the form $$\mathsf{opt} = \inf_{\substack{Ax = b\\Cx\leq d}} f(x)$$ where $x\in\mathbb{R}^n$, $f$ is a (sufficiently smooth) convex function, and $A$, $C$ are constraint matrices and $b$, $d$ vectors of appropriate dimensions, all with rational entries (so that they can be represented on a computer; we also assume $f$ admits a finite representation, e.g., that it is a convex quadratic function with rational coefficients). We also assume the problem is strictly feasible, which is essentially the best-case scenario for interior-point methods (see Boyd & Vandenberghe, Chapter 11).
By construction, interior-point methods produce solutions with an additive error guarantee: for any $\delta > 0$, they generate a point $x_{\delta}$ such that $$f(x_{\delta}) - \mathsf{opt} \leq \delta, \qquad Ax_{\delta}=b, \qquad (Cx_{\delta})_i \leq d_i + \delta \;\text{ for all } i,$$ see also the summary on Wikipedia.
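To make the additive guarantee concrete, here is a minimal log-barrier sketch on a toy problem of my own choosing (not from any specific solver): minimize $f(x) = (x-2)^2$ subject to $x \leq 1$, so $\mathsf{opt} = 1$ at $x = 1$. Standard barrier-method theory gives $f(x^*(t)) - \mathsf{opt} \leq m/t$ for the central-path point $x^*(t)$, where $m$ is the number of inequality constraints ($m = 1$ here); driving $t \to \infty$ drives the additive gap $\delta$ to zero.

```python
# Toy log-barrier illustration: minimize f(x) = (x-2)^2 subject to x <= 1.
# Exact optimum: opt = 1 at x = 1. The central-path point x*(t) minimizes
# t*f(x) - log(1 - x), and satisfies the additive bound f(x*(t)) - opt <= 1/t.

def central_path_point(t, lo=-10.0, hi=1.0 - 1e-12, iters=200):
    """Minimize t*(x-2)^2 - log(1-x) by bisecting its derivative.

    The derivative g'(x) = 2*t*(x-2) + 1/(1-x) is strictly increasing on
    (-inf, 1), negative at lo and huge positive near 1, so bisection on
    the sign of g' converges to its unique root, i.e., the minimizer.
    """
    def dg(x):
        return 2.0 * t * (x - 2.0) + 1.0 / (1.0 - x)

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if dg(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

opt = 1.0  # exact optimum of the toy problem
for t in (10.0, 100.0, 1000.0):
    x_t = central_path_point(t)
    gap = (x_t - 2.0) ** 2 - opt
    # additive guarantee: 0 <= gap <= m/t, with m = 1 inequality constraint
    assert 0.0 <= gap <= 1.0 / t
    print(f"t = {t:6.0f}: x*(t) = {x_t:.6f}, additive gap = {gap:.2e}")
```

Note that the gap shrinks like $1/t$ regardless of the sign or magnitude of $\mathsf{opt}$, which is exactly what distinguishes this guarantee from a multiplicative one.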
However, I am wondering whether such problems also admit an $\mathsf{FPTAS}$, as claimed in this post. For an algorithm $\verb|algo|$ to be a Fully Polynomial Time Approximation Scheme for this minimization problem, its output must satisfy $$\mathsf{opt} \leq \verb|algo| \leq (1+\varepsilon)\,\mathsf{opt}$$ for every $\varepsilon > 0$, with runtime polynomial in both the representation size of the instance and $1/\varepsilon$.
Surely this must be wrong unless we assume $\mathsf{opt} > 0$: if $\mathsf{opt}$ is negative, then $(1+\varepsilon)\,\mathsf{opt} < \mathsf{opt}$ and the guarantee can never hold, and if $\mathsf{opt} = 0$, it degenerates into requiring the exact optimum.
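The issue is easy to see numerically; the toy numbers below are my own illustration. Shifting the objective by a constant shifts $\mathsf{opt}$ and the solver's output identically, so the additive gap is invariant, while the multiplicative $\mathsf{FPTAS}$ guarantee behaves completely differently once the shift makes $\mathsf{opt} \leq 0$:

```python
# Additive vs. multiplicative guarantees under a constant objective shift.
# opt = 1.0 and an additive-error answer algo = 1.01 (gap 0.01), shifted
# by 0, -1, -2 so that opt becomes positive, zero, and negative in turn.
eps = 0.05

for shift in (0.0, -1.0, -2.0):
    opt = 1.0 + shift        # shifting f by a constant shifts opt identically
    algo = 1.01 + shift      # ...and the additive gap stays 0.01 throughout
    assert abs((algo - opt) - 0.01) < 1e-12

    if opt > 0:
        # the multiplicative guarantee opt <= algo <= (1+eps)*opt is meaningful
        assert opt <= algo <= (1.0 + eps) * opt
    elif opt < 0:
        # (1+eps)*opt < opt: the sandwich can never hold for any algo >= opt
        assert (1.0 + eps) * opt < opt
    else:
        # opt == 0: the guarantee collapses to algo == 0, i.e. exactness
        assert (1.0 + eps) * opt == opt
```

So the same additive-error answer is a perfectly good $(1+\varepsilon)$-approximation for one instance and violates the $\mathsf{FPTAS}$ definition for a trivially shifted copy of it.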
Has anyone else already had to distinguish between a 'good enough in practice' additive error guarantee and an actual $\mathsf{FPTAS}$? Thank you very much in advance!