There are numerous techniques for fitting a sphere (with unknown centre and radius) to points in $\mathbb{R}^3$, so that the fitted sphere passes as close as possible to the points (in the least-squares sense, for example). My problem is slightly different: instead of points on the sphere, I have points, each of which lies at a known (measured) distance from the surface of the (unknown) sphere.
To formulate the problem precisely, call the four unknowns $cx, cy, cz$ and $r$ (the x, y and z coordinates of the sphere's centre, and the sphere's radius), and let the $i$th input point be $P[i]$, with coordinates $(P[i]_x, P[i]_y, P[i]_z)$ and measured distance $P[i]_d$ from the sphere's surface.
Then I believe that a minimiser of the following would be a good solution to the problem:

$$\min_{cx,\,cy,\,cz,\,r}\ \sum_{i=0}^{n-1} \left(\sqrt{(P[i]_x - cx)^2 + (P[i]_y - cy)^2 + (P[i]_z - cz)^2} - r - P[i]_d\right)^2$$
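In code, the objective reads as follows (a minimal sketch in Python; the names `points`, a list of $(x, y, z)$ tuples, and `dists`, the corresponding measured distances, are my own):

```python
import math

def objective(points, dists, cx, cy, cz, r):
    """Sum over all points of the squared difference between the measured
    distance to the sphere surface and the distance implied by the
    candidate centre (cx, cy, cz) and radius r."""
    total = 0.0
    for (px, py, pz), d in zip(points, dists):
        # e is the signed error for this point: ||P - c|| - r - d
        e = math.dist((px, py, pz), (cx, cy, cz)) - r - d
        total += e * e
    return total
```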
What I am wondering is:
Is this belief correct? Can minimising the above sum of squared errors be expected to yield a reasonable fit?
If so, what is a reasonable way to solve this computationally, for cases where there are $\approx 100$ points?
I can compute the derivatives of the above expression with respect to the four unknowns $cx, cy, cz$ and $r$, and could attempt e.g. gradient descent, but is there a more efficient way to solve this problem? For example, can this be recast as a linear or quadratic program?
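For what it's worth, here is a sketch of the gradient-descent route in plain Python, using the derivatives of the objective above together with a backtracking line search; the centroid-based initial guess and the step-size schedule are assumptions of mine, not part of any standard recipe:

```python
import math
import random

def fit_sphere(points, dists, iters=2000):
    """Minimise sum_i (||P[i] - c|| - r - P[i]_d)^2 by gradient descent
    with a simple backtracking line search."""
    n = len(points)
    # Assumed initial guess: centroid of the points for the centre, and
    # mean point-to-centroid distance minus mean offset for the radius.
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    r = (sum(math.dist(p, (cx, cy, cz)) for p in points) - sum(dists)) / n

    def value_and_grad(cx, cy, cz, r):
        f = gx = gy = gz = gr = 0.0
        for (px, py, pz), d in zip(points, dists):
            rho = max(math.dist((px, py, pz), (cx, cy, cz)), 1e-12)
            e = rho - r - d                  # signed error for this point
            f += e * e
            gx += 2.0 * e * (cx - px) / rho  # d(e^2)/d(cx)
            gy += 2.0 * e * (cy - py) / rho
            gz += 2.0 * e * (cz - pz) / rho
            gr -= 2.0 * e                    # d(e^2)/d(r)
        return f, gx, gy, gz, gr

    step = 1.0
    f, gx, gy, gz, gr = value_and_grad(cx, cy, cz, r)
    for _ in range(iters):
        # Backtrack: halve the step until the objective actually decreases.
        while step > 1e-12:
            trial = (cx - step * gx, cy - step * gy,
                     cz - step * gz, r - step * gr)
            ft, *gt = value_and_grad(*trial)
            if ft < f:
                (cx, cy, cz, r), f = trial, ft
                gx, gy, gz, gr = gt
                step *= 2.0  # be more ambitious next iteration
                break
            step *= 0.5
        else:
            break  # no descent possible at the smallest step: converged
    return cx, cy, cz, r

# Synthetic check: 100 points, each a known distance outside a known sphere.
random.seed(0)
c_true, r_true = (1.0, -2.0, 3.0), 5.0
points, dists = [], []
while len(points) < 100:
    v = [random.uniform(-1.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(x * x for x in v))
    if not 1e-6 < norm <= 1.0:
        continue  # rejection-sample a uniform direction
    d = random.uniform(0.5, 2.0)
    points.append(tuple(c + (r_true + d) * x / norm
                        for c, x in zip(c_true, v)))
    dists.append(d)
cx, cy, cz, r = fit_sphere(points, dists)
```

On this noise-free synthetic data the recovered centre and radius come out close to the true values, and with ~100 points each iteration is cheap; that said, a Gauss-Newton or Levenberg-Marquardt solver applied to the same residuals would typically need far fewer iterations than plain gradient descent.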