34
$\begingroup$

Let $f : [0,1] \to [-1,1]$ be an integrable function such that $$\displaystyle\int_{0}^{1} x f(x) \, {\rm d} x = 0$$ What is the maximum possible value of $\displaystyle\int_{0}^{1} f(x) \, {\rm d} x$?

Originally, this was a physics problem about the maximum force that can be applied to an object without it sliding, which reduces to this optimization problem. Using physical arguments, I tried several lines of reasoning and obtained the maximal value $\frac{2-\sqrt{2}}{\sqrt{2}} = \sqrt{2}-1$ for the integral, corresponding to the function

$$ f(x) = \begin{cases} \,\,\, 1 & \text{if} & 0 \le x \le \frac{1}{\sqrt{2}} \\ -1 & \text{if} & \frac{1}{\sqrt{2}} < x \le 1 \end{cases} $$
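For reference, this candidate does satisfy the constraint, and the claimed value simplifies to $\sqrt{2}-1$: $$ \int_{0}^{1} x f(x) \, {\rm d} x = \int_{0}^{1/\sqrt{2}} x \, {\rm d} x - \int_{1/\sqrt{2}}^{1} x \, {\rm d} x = \frac14 - \frac14 = 0, \qquad \int_{0}^{1} f(x) \, {\rm d} x = \frac{1}{\sqrt{2}} - \left(1 - \frac{1}{\sqrt{2}}\right) = \sqrt{2} - 1 \, . $$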

However, I would like to know a slightly more rigorous proof.

$\endgroup$
4
  • 2
    $\begingroup$ Can you show what you tried? $\endgroup$ Commented Oct 25 at 10:59
  • $\begingroup$ I want to use Holder's (integral) inequality (or another integral inequality) to show that OP's maximum value cannot be improved upon for any choice of $f,$ but I don't think this approach works due to $f$ having a negative and positive part. But I could be wrong about this. $\endgroup$ Commented Oct 25 at 12:29
  • $\begingroup$ If we take $f(x)=-1$ for $x\in [0,a]$ and $f(x)=1$ for $x\in [a,1]$ and find 'a' then something good may happen. A similar question was asked some years back in Putnam Contest and this approach was used. $\endgroup$ Commented Oct 25 at 13:31
  • $\begingroup$ Funnily enough, I fed a discrete version of this to google AI mode and it essentially came up with the $\frac{1}{\sqrt{2}}$ split! $\endgroup$ Commented Oct 27 at 12:58

10 Answers

29
$\begingroup$

Define $F:[0, 1] \to \Bbb R$ as $$ F(x) = \int_0^x t f(t) \, dt = -\int_x^1 t f(t) \, dt \, . $$ The idea is to determine a sharp upper bound for $F(x)$ and then express $\int_0^1 f(x) \, dx$ in terms of $F$ to get a sharp upper bound for that integral.

From $-1 \le f(x) \le 1$ we get $$ |F(x)| \le \int_0^x t |f(t)| \, dt \le \frac 12 x^2 $$ and $$ |F(x)| \le \int_x^1 t |f(t)| \, dt \le \frac 12 (1-x^2) \, , $$ so that $$ \tag{$*$} |F(x)| \le \frac 12 \min(x^2, 1-x^2) $$ for all $x \in [0, 1]$.

The function $F$ is absolutely continuous with $F'(x) = x f(x)$ almost everywhere, therefore we can integrate by parts. For $0 < \epsilon < 1/\sqrt 2$ we have $$ \int_\epsilon^1 f(x) \,dx = \int_\epsilon^1 \frac{F'(x)}{x} \,dx = - \frac{F(\epsilon)}{\epsilon} + \int_\epsilon^1 \frac{F(x)}{x^2} \, dx \\ \underset{(*)}{\le} \frac 12 \epsilon + \int_\epsilon^{1/\sqrt 2} \frac 12 \,dx + \int_{1/\sqrt 2}^1 \frac{1-x^2}{2x^2} \, dx \, , $$ where the boundary term at $x = 1$ vanishes because $F(1) = 0$. Taking the limit $\epsilon \to 0$ gives $$ \int_0^1 f(x) \, dx \le \int_0^{1/\sqrt 2} \frac 12 \,dx + \int_{1/\sqrt 2}^1 \frac{1-x^2}{2x^2} \, dx = \boxed{\sqrt 2 - 1} \, . $$
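For completeness, the two integrals in the last line evaluate to $$ \int_0^{1/\sqrt 2} \frac 12 \,dx = \frac{1}{2\sqrt 2} \quad \text{and} \quad \int_{1/\sqrt 2}^1 \frac{1-x^2}{2x^2} \, dx = \frac 12 \left[ -\frac 1x - x \right]_{1/\sqrt 2}^1 = \frac{3}{2\sqrt 2} - 1 \, , $$ whose sum is $\frac{2}{\sqrt 2} - 1 = \sqrt 2 - 1$.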

The bound is sharp. Equality holds if $f(x) = 1$ for almost all $x \in [0, 1/\sqrt 2)$ and $f(x) = -1$ for almost all $x \in (1/\sqrt 2, 1]$.

$\endgroup$
4
  • $\begingroup$ What was the motivation to split the interval $(0,1)$ into the union of $(0,1/\sqrt{2})$ and $(1/\sqrt{2},1)$? $\endgroup$ Commented Oct 26 at 18:30
  • $\begingroup$ @MarkViola: The extremal function (which OP correctly guessed) is $+1$ in the first interval and $-1$ in the second interval. For that function equality needs to hold in all estimates, in particular $F(x) = \frac 12\min(x^2, 1-x^2)$, which is $x^2/2$ in the first interval and $(1-x^2)/2$ in the second interval. $\endgroup$ Commented Oct 26 at 18:39
  • 1
    $\begingroup$ This is a nice solution - are you expecting to see a "better" answer somehow ? @MartinR $\endgroup$ Commented Oct 27 at 12:46
  • 1
    $\begingroup$ @dezdichado: There is one answer which I really like, and I want to reward it with a bounty. $\endgroup$ Commented Oct 27 at 12:50
13
$\begingroup$

I'm sure someone can convert this argument into a more mathematical one (currently it's essentially a manipulation of means)

Let $g(x) = f(x)+1$ for convenience. Then maximising $\int_0^1 f(x)dx = \int_0^1 (g(x) -1)dx$ is the same as maximising $\int_0^1 g(x)dx$.

Consider a rod of length 1 unit and mass $m$ units. Let $g(x)$ be the mass per unit length at distance $x$ from the left end. The given conditions imply:

  1. mass per unit length cannot exceed $2$
  2. $0=\int_0^1 xf(x)dx = \int_0^1 x(g(x)-1)dx \implies \int_0^1 xg(x) = 0.5$, i.e., the centre of mass of the rod lies at $0.5/m$.

Note that the $x$-coordinate of the centre of mass is at least $\frac 12 \left(\frac m2 \right) = \frac m4$: at the maximum density $2$, a mass $m$ occupies a length of at least $m/2$, and the centre of mass is lowest when all the mass is packed as far left as possible, i.e., uniformly on $[0, m/2]$.

This gives $$0.5/m \ge m/4 \iff m \le \sqrt 2 \, .$$ Since $\int_0^1 f(x)dx = \int_0^1 g(x)dx - 1 = m - 1$, the maximum value of $\int_0^1 f(x)dx$ is $\sqrt2 - 1$.
The configuration with all the mass packed to the left as far as possible corresponds exactly to the value of $f$ in the OP; an explicit check is below.
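Explicitly, the extremal configuration packs mass at the maximum density $2$ on the left: $g(x) = 2$ for $x\in[0,1/\sqrt 2]$ and $g(x)=0$ otherwise, giving $$ m = \int_0^{1/\sqrt 2} 2\,dx = \sqrt 2 \qquad \text{and} \qquad \bar x = \frac 1m \int_0^{1/\sqrt 2} 2x\,dx = \frac{0.5}{\sqrt 2} = \frac m4 \, , $$ so both the mass bound and the centre-of-mass condition hold with equality.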

$\endgroup$
5
  • 5
    $\begingroup$ tables have turned- physics is the one rescuing math! (+1) $\endgroup$ Commented Oct 25 at 15:31
  • 4
    $\begingroup$ The claim that 1. the smallest $x$-coordinate of the center of mass is achieved if all the mass is concentrated on the left and 2. (less obvious from a mathematical perspective) there are no other mass distributions that also result in an $x$-coordinate of the center of mass of $\frac{m}{4}$ are the crucial parts that need proof, if we want to "convert the argument into a more mathematical one". It is intuitive but not necessarily easy to show. $\endgroup$ Commented Oct 25 at 15:56
  • $\begingroup$ @ReinhardMeier for 2. Note that there could be many such functions (having pointwise discontinuities), I just noted that the OP's solution corresponded to a physical analogue. $\endgroup$ Commented Oct 26 at 6:16
  • $\begingroup$ For 1. Again, I don't claim the minima only happens when the mass is all concentrated towards the left, but if it is so. I think a proof is possible using something like $I_0$ defined as the integral from OP's function, then $I$ as the integral from general function, then proving $I-I_0 \ge 0$. $\endgroup$ Commented Oct 26 at 7:14
  • 2
    $\begingroup$ @DS Maybe I was not clear enough in my comment: I did not mean to criticize the answer, I just wanted to provide some guidance for anyone who wants to take up the challenge to "convert this argument into a more mathematical one". The two things I mentioned are the only places I spotted that could use more mathematical rigor, everything else looks perfectly valid to me. $\endgroup$ Commented Oct 26 at 15:26
9
$\begingroup$

This is Just a user's excellent argument, only written a bit differently.

Let $s = 1/\sqrt 2$ and $$ f(x) = \begin{cases} 1 & 0 \le x \le s \\ -1 & s < x \le 1 \end{cases} $$ be the suspected extremal function. It satisfies $\int_0^1 xf(x) \, dx = 0$ and $\int_0^1 f(x) \, dx = \sqrt 2 - 1$.

Any function $g:[0, 1] \to [-1, 1]$ satisfies $$ (s-x) \cdot (f(x)-g(x)) \ge 0 $$ for all $x \in [0, 1]$. If $g$ is also integrable with $\int_0^1 xg(x) \, dx = 0$ then $$ 0 \le \int_0^1 (s-x) \cdot (f(x)-g(x)) \, dx = s \left(\int_0^1 f(x) \,dx - \int_0^1 g(x) \, dx \right) $$ and therefore $\int_0^1 g(x) \, dx \le \int_0^1 f(x) \, dx = \sqrt 2 -1$. Equality holds if and only if $g(x) = f(x)$ for almost all $x \in [0, 1]$.
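To spell out the middle equality (this is where both moment constraints enter): $$ \int_0^1 (s-x)(f(x)-g(x)) \, dx = s \int_0^1 (f(x)-g(x)) \, dx - \underbrace{\int_0^1 x f(x) \, dx}_{=0} + \underbrace{\int_0^1 x g(x) \, dx}_{=0} = s \left(\int_0^1 f(x) \,dx - \int_0^1 g(x) \, dx \right) . $$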

$\endgroup$
2
  • $\begingroup$ Hi, where does this argument require that $s=1/\sqrt{2}$ rather than any other number in $[0,1]$? $\endgroup$ Commented Nov 10 at 1:07
  • 1
    $\begingroup$ @Ben: At $\int_0^1 (s-x) \cdot (f(x)-g(x)) \, dx = s \left(\int_0^1 f(x) \,dx - \int_0^1 g(x) \, dx \right)$ it is used that $\int_0^1 xf(x) \, dx = \int_0^1 xg(x) \, dx = 0$. The first integral is zero because of the choice of $s$ (and the second integral is zero by the assumptions on $g$). $\endgroup$ Commented Nov 10 at 5:50
8
+100
$\begingroup$

Here is an intuitive geometric explanation:

Illustration of the proof

If $\int_0^1 f<\int_0^1 g$, we have $A<B$; but then, weighting by the density $x$ (or any other increasing density $\delta$), the total mass of region $A$ would be even smaller than that of $B$, hence the two cannot be balanced.


Now we write down a rigorous proof based on the above observation for the natural generalization:

Let $\delta(x)$ be an arbitrary density function that is positive almost everywhere and increasing on $[a, b]$. Then there exists a unique $s\in [a,b]$ such that $\int_a^s \delta(x)dx=\int_s^b \delta(x)dx$. We show that

$$\max \int_a^b f(x)dx \text{ subject to } \begin{cases} \int_a^bf(x)\delta(x)dx=0 \\ |f(x)|\le 1 , \forall x\in [a,b] \end{cases}$$

is achieved by $$f(x)=\begin{cases} 1 & a\le x \le s \\ -1 & s<x\le b \end{cases}$$

Note that since $|f|$ is bounded, we no longer have to assume $f$ is integrable, but only measurable. We can generalize even further by replacing $dx$ with another measure, with $\delta(x)$ the Radon–Nikodym derivative of one measure with respect to the other.


Suppose there exists an admissible $g$ such that $F[g]>F[f]$. Let $h:=f-g$. Since $f = 1$ on $[a,s]$ and $f = -1$ on $(s,b]$, while $|g| \le 1$, we have $$F[h]<0, \qquad \int_a^b h(x)\delta(x)dx=0, \qquad \begin{cases} h(x)\ge 0 & a\le x \le s \\ h(x)\le 0 & s<x\le b \end{cases}$$

Now we have a contradiction: $$\begin{align}0 = \int_a^b h(x)\delta(x)dx &= \int_a^s |h|\delta(x)dx - \int_s^b |h|\delta(x)dx \\ &\le \delta(s)\int_a^s|h|dx-\delta(s)\int_s^b|h|dx \\ &=\delta(s)F[h]<0\end{align}$$ using that $\delta$ is increasing (so $\delta\le\delta(s)$ on $[a,s]$ and $\delta\ge\delta(s)$ on $[s,b]$) and $\delta(s)>0$.

Remark: I started with the observation that the constraint domain is actually convex, and the functional $F[f]:=\int_a^b f(x)dx$ to be optimized is linear/convex, then drew the above image to prove $f(x)$ achieves a local maximum (for the $L^\infty$ norm, or whatever norm that's convenient). But the solution turned out to be simple and doesn't depend on any result from convex optimization or norm, especially the modified version given by Martin R in the comment.

$\endgroup$
4
  • 3
    $\begingroup$ This is a really nice argument. It can perhaps be a bit simplified: If $f$ is the extremal function and $g$ is any admissable function then $ (\delta(s)-\delta(x))(f(x)-g(x))\ge 0$ for all $x$, and therefore $$ \delta(s) \left(\int_0^1 f(x) \,dx - \int_0^1 g(x) \, dx \right) = \int_0^1 (\delta(s)-\delta(x))(f(x)-g(x)) \, dx \ge 0 \, . $$ $\endgroup$ Commented Oct 27 at 4:38
  • $\begingroup$ @MartinR That's fantastic! $\endgroup$ Commented Oct 27 at 5:21
  • $\begingroup$ A minor point: You need that $\delta(s) > 0$, which is true because $s \in (a, b)$ (and not only $s \in [a, b]$, as you said). $\endgroup$ Commented Oct 27 at 7:41
  • $\begingroup$ @MartinR Thanks! I required $\delta$ to be positive a.e. and increasing. This actually forces $\delta$ to be positive for all $x>a$, and since $\int_a^b \delta(x)>0$, $s$ cannot be $a$. I could also have said $\delta(x)>0$ except possibly at $x=a$. $\endgroup$ Commented Oct 27 at 8:05
5
$\begingroup$

We prove the existence of a maximum for your functional, and we show that the function you found is indeed the only point of maximum.

Existence of a maximum.

To each measurable $f\colon [0,1]\rightarrow [-1,1]$ we associate the Lipschitz function $F\colon [0,1]\rightarrow [-1,1]$ given by

$$F(x):=\int_0^x f(t) \ dt$$

Note that

  1. $F(0)=0$,
  2. $|F(x)-F(y)|\leq |x-y|$,

Moreover, every function $F$ satisfying 1 and 2 is associated to a unique function $f\in L^\infty$ given by $f=F'$, with $f(x)\in [-1,1]$ almost everywhere. One can see that

$$\int_0^1 f(t) \ dt=F(1),$$

$$\int_0^1 t f(t) \ dt = \int_0^1 t F'(t) \ dt = F(1) - \int_0^1 F(t) \ dt.$$

Consider the condition

  3. $F(1) - \int_0^1 F(t) \ dt =0$.

Let $\mathcal{F}$ be the family of functions that satisfy conditions $1$–$3$. By the Arzelà–Ascoli theorem this is a compact subset of $C^0[0,1]$ (it is bounded, equicontinuous by condition $2$, and closed under uniform convergence). So the continuous linear functional

$$F\in C^0[0,1]\mapsto F(1)$$

has points of maximum in $\mathcal{F}$. Since $F\in \mathcal{F}\implies -F \in \mathcal{F}$, it is easy to see that the maximum value is non-negative.

Note that $F_0$ is a point of maximum of this problem if and only if $f_0=F_0'$ is a point of maximum of your original problem.

Finding the points of maximum: there is only one point of maximum.

Let $F_0$, with $f_0=F_0'$, be a point of maximum.

Claim 1. We claim that $|f_0|=1$ almost everywhere.

Otherwise there is $\epsilon\in (0,1]$ such that $$m(\Omega_\epsilon)> 0,$$ where $$\Omega_\epsilon:=\{x\in [0,1]\colon f_0(x)\in [-1+\epsilon,1-\epsilon] \}.$$

Then there is $c\in (0,1)$ such that

$$\int_0^c t 1_{\Omega_\epsilon}(t) \ dt = \int_c^1 t 1_{\Omega_\epsilon}(t) \ dt$$
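One way to see that such a $c$ exists: the function $$ \varphi(c) := \int_0^c t 1_{\Omega_\epsilon}(t) \ dt - \int_c^1 t 1_{\Omega_\epsilon}(t) \ dt $$ is continuous in $c$ with $\varphi(0) < 0 < \varphi(1)$, because $m(\Omega_\epsilon)>0$ forces $\int_0^1 t 1_{\Omega_\epsilon}(t) \, dt > 0$; the intermediate value theorem then applies.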

Define

$$h = \frac{\epsilon}{2} \left( 1_{\Omega_\epsilon \cap [0,c]}- 1_{\Omega_\epsilon \cap [c,1]}\right).$$

Then

$$\int_0^1 t h(t) dt =0,$$

$$\int_0^1 h(t) dt > 0$$ (the second inequality holds because $m(\Omega_\epsilon \cap [0,c]) > m(\Omega_\epsilon \cap [c,1])$, which follows from $c\, m(\Omega_\epsilon \cap [0,c]) > \int_0^c t 1_{\Omega_\epsilon}(t) \ dt = \int_c^1 t 1_{\Omega_\epsilon}(t) \ dt > c\, m(\Omega_\epsilon \cap [c,1])$, both moments being positive),

$g(x):=f_0(x)+h(x)\in[-1,1]$ and

$$\int_0^1 g(t) dt > \int_0^1 f_0(t) dt,$$

which contradicts the maximality of $f_0$. This proves the claim.

Claim 2. There is $c\in (0,1)$ such that

$$\{x\in [0,1]\colon f_0(x)=1\}=[0,c]$$

and

$$\{x\in [0,1]\colon f_0(x)=-1\}=[c,1]$$

up to subsets of zero Lebesgue measure.

Indeed, suppose this does not occur. Then there are two disjoint subsets $A$ and $B$ with positive Lebesgue measure and $d\in (0,1)$ such that

$A\subset [0,d]$, $B\subset [d,1]$, $f_0(x)=-1$ on $A$, and $f_0(x)=1$ on $B$. Reducing $A$ and $B$ if necessary we can assume that

$$\int t 1_A(t) \ dt = \int t 1_B(t) \ dt > 0 \, .$$

Note that $m(A) > m(B)$: since $A\subset [0,d]$ and $B\subset [d,1]$, $$ d\, m(B) \leq \int t 1_B(t) \ dt = \int t 1_A(t) \ dt < d\, m(A) \, , $$ where the last inequality is strict because $A$ has positive measure and $t < d$ on $[0,d)$. Then

$$h = \frac{1}{2} \left(1_A- 1_B\right)$$

satisfies

$$\int_0^1 t h(t) \ dt =0,$$

$$\int_0^1 h(t) \ dt > 0,$$

$g(x) = f_0(x)+ h(x)\in [-1,1]$ and we have

$$\int_0^1 g(t) \ dt > \int_0^1 f_0(t) \ dt,$$

which contradicts the maximality of $f_0$.

This concludes the proof of the second claim. Now it is easy to show that the only possible choice for $c$ in the second claim is $c=1/\sqrt{2}.$ So there is only one point of maximum, and the maximum value is $\sqrt{2}-1$.
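For completeness: for $f_c = 1_{[0,c]} - 1_{(c,1]}$ the constraint forces $$ 0 = \int_0^1 t f_c(t) \ dt = \int_0^c t \ dt - \int_c^1 t \ dt = c^2 - \frac12 \, , $$ so $c = 1/\sqrt{2}$, and then $\int_0^1 f_c(t) \ dt = c - (1-c) = 2c - 1 = \sqrt{2}-1$.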

Final Remark. The argument seems to be fairly general. For instance we could replace the condition

$$\int_0^1 t f(t) \ dt=0$$

by the condition

$$\int_0^1 \alpha(t) f(t) \ dt=0,$$

where $\alpha\colon [0,1]\rightarrow \mathbb{R}$ is continuous, positive and strictly monotone increasing. In this case

$$\int_0^1 \alpha(t) f(t) \ dt = F(1)\alpha(1)- F(0)\alpha(0)- \int_0^1 F(t) \ d\alpha,$$

where the last integral is a Riemann–Stieltjes integral.

Then we can prove the existence of a maximum, Claim 1, and Claim 2 using analogous arguments. The uniqueness and the nature of the point of maximum follow; that is, the unique point of maximum is

$$f_c= 1_{[0,c]}- 1_{(c,1]},$$

where $c$ is the only $c\in (0,1)$ satisfying

$$\int \alpha(t)f_c(t) \ dt =0.$$

$\endgroup$
5
$\begingroup$

$$ \begin{array}{ll} \underset{f : [0,1] \to [-1,1]}{\text{maximize}} & \displaystyle\int_0^1 f(x) \, {\rm d} x \\ \text{subject to} & \displaystyle\int_0^1 x f(x) \, {\rm d} x = 0 \end{array} $$

Introducing a Lagrange multiplier $\mu$,

$$ \int_0^1 f(x) \, {\rm d} x + \mu \int_0^1 x f(x) \, {\rm d} x = \int_0^1 (1 + \mu x) f(x) \, {\rm d} x $$

Since $f (x) \in [-1,1]$, maximizing the integrand pointwise, we obtain the bang-bang solution$^\color{magenta}\star$

$$ f(x) = \begin{cases} +1 & \text{if} & 1 + \mu x > 0 \\ -1 & \text{if} & 1 + \mu x < 0 \\ \text{any value} \in [-1,1] & \text{if} & 1 + \mu x = 0 \end{cases} $$

with at most one switch, at $x = -\frac{1}{\mu}$, if $x_{\star} := -\frac{1}{\mu} \in (0,1)$, i.e., if $\mu < -1$. Whoever happens to be acquainted with optimal control is (almost) surely acquainted with such solutions. There are two cases to consider:

  • If $\mu \geq -1$, there is no switch: $1 + \mu x > 0$ on $[0,1)$, so $f = 1$ almost everywhere, and the equality constraint fails, $$ \int_0^1 x f(x) \, {\rm d} x = \int_0^1 x \, {\rm d} x = \frac12 \neq 0 \, . $$ Thus, the switchless case is an impossibility.

  • If $\mu < -1$, there is a single switch at $x_{\star} := -\dfrac{1}{\mu} \in [0,1]$. From the equality constraint, $$0 = \int_0^1 x f(x) \, {\rm d} x = \int_0^{x_\star} x \, {\rm d} x - \int_{x_\star}^1 x \, {\rm d} x = x_\star^2 - \frac12 $$ Thus, the single switch occurs at $\color{blue}{x_\star := \dfrac{1}{\sqrt{2}}}$. Also, $\mu = -\sqrt{2}$.

Thus, the maximum is

$$ \int_0^1 f(x) \, {\rm d} x = \int_0^{x_\star} {\rm d} x - \int_{x_\star}^1 {\rm d} x = 2 x_\star - 1 = \frac{2}{\sqrt{2}} - 1 = \color{blue}{\sqrt{2} - 1} $$

and the maximizing bang-bang solution is

$$ f(x) = \begin{cases} +1 & \text{if} & x \in [0, x_\star) \\ \,\,\,\xi & \text{if} & x = x_\star \\ -1 & \text{if} & x \in (x_\star, 1] \end{cases} $$

where $\xi \in [-1,1]$ is arbitrary.
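As a sanity check, the multiplier value also certifies optimality via a weak-duality bound (this step is not part of the derivation above): for any admissible $f$ and any $\mu$, the constraint gives $$ \int_0^1 f(x) \, {\rm d} x = \int_0^1 (1 + \mu x) f(x) \, {\rm d} x \le \int_0^1 |1 + \mu x| \, {\rm d} x \, , $$ and at $\mu = -\sqrt{2}$ the right-hand side is $$ \int_0^{1/\sqrt{2}} (1 - \sqrt{2}\, x) \, {\rm d} x + \int_{1/\sqrt{2}}^1 (\sqrt{2}\, x - 1) \, {\rm d} x = \sqrt{2} - 1 \, , $$ matching the value attained by the bang-bang solution.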


$\color{magenta}\star$ Lawrence C. Evans, An Introduction to Mathematical Optimal Control Theory, UC Berkeley.

$\endgroup$
4
$\begingroup$

In the following, we use $$ \mu\left(T\right) = \int_T 1 dx $$ for a measurable set $T\subseteq [0,1].$

In order to reduce typing effort, let $s=\frac{\sqrt{2}}{2}.$ Let $$ f(x) = \begin{cases} \phantom{-}1 & \text{if} & x\in[0,s] \\ -1 & \text{if} & x\in(s,1] \end{cases} $$ that is, the function proposed by the OP.

Now let $g$ be an admissible function that deviates from $f$ in $[0,s]$ in a way that affects the integrals. Since $f(x)=1$ there, this means: there is a set $S_1\subset [0,s]$ with $\mu(S_1)>0$ and an $\varepsilon_1 >0$ such that $g(x)\leq 1-\varepsilon_1$ for $x\in S_1.$

From $\int_0^1 xg(x)dx =0$ it follows that $$ \int_0^s xg(x)dx = - \int_s^1 xg(x)dx \, , $$ which means that there must also be a measurable deviation between $g$ and $f$ in $(s,1].$ (A $g$ that deviates from $f$ only in $(s,1]$ would make $\int_0^1 xg(x)dx$ strictly positive, so starting with a deviation in $[0,s]$ loses no generality.)

That is, there is a set $S_2\subset (s,1]$ with $\mu(S_2)>0$ and an $\varepsilon_2 >0$ such that $g(x)\geq -1+\varepsilon_2$ for $x\in S_2.$

Now we select subsets $T_1\subseteq S_1$ and $T_2\subseteq S_2$ with $$ \int_{T_1}xdx = \int_{T_2}xdx =: d > 0. $$ As all elements in $T_1$ are smaller than or equal to $s$, we have $$ d = \int_{T_1}xdx \leq \int_{s-\mu(T_1)}^s xdx = \mu(T_1)s-\frac{1}{2}\mu^2(T_1) $$ As all elements in $T_2$ are greater than or equal to $s$, we have $$ d = \int_{T_2}xdx \geq \int_s^{s+\mu(T_2)} xdx = \mu(T_2)s+\frac{1}{2}\mu^2(T_2) $$ From that, it follows $$ \mu(T_1) - \mu(T_2) \geq \frac{\mu^2(T_1)+\mu^2(T_2)}{2s} =: m $$ with $m >0.$

We set $\varepsilon =\min(\varepsilon_1,\,\varepsilon_2).$

We create a new function as follows: $$ h(x)= \begin{cases} g(x)+\varepsilon & \text{if} & x\in T_1 \\ g(x)-\varepsilon & \text{if} & x\in T_2 \\ g(x) & \text{if} & x\in [0,1]\setminus (T_1 \cup T_2) \end{cases} $$ Due to the choice of $\varepsilon$, the image of this function is still in $[-1,1].$

Then $$ \int_0^1 h(x) dx= \int_0^1 g(x) dx+ \int_{T_1} \varepsilon dx -\int_{T_2} \varepsilon dx \\ \geq \int_0^1 g(x) dx + \varepsilon m \, , $$ which is strictly greater than $\int_0^1 g(x)dx$, and $$ \int_0^1 xh(x) dx= \int_0^1 xg(x) dx+ \int_{T_1} x\varepsilon dx -\int_{T_2} x\varepsilon dx \\ = \int_0^1 xg(x) dx + \varepsilon d- \varepsilon d = \int_0^1 xg(x) dx = 0 \, . $$ This shows: whenever a function $g$ deviates from $f$ in a measurable way, we can construct an admissible function $h$ that yields a strictly better objective value. Hence no such $g$ can be optimal, and $f$ must be optimal.

$\endgroup$
4
$\begingroup$

Regarding this answer as an experimental verification...

After discretizing, this problem can be handled via linear programming. Consider the discretized version

$$ \max_{a_k}\ \frac{1}{N}\sum_{k=0}^{N}a_k, \ \ \text{s.t.}\ \ \cases{\sum_{k=0}^{N}\frac kN a_k = 0\\ -1\le a_k \le 1,\ k = 0,\cdots,N} $$

The formulation is straightforward in Mathematica:

(* grid points x_k = k/n *)
n = 200;
m0 = Table[k/n, {k, 0, n}];
(* unit vector e_k, used to build the box constraints *)
m[k_] := Table[If[j == k, 1, 0], {j, 0, n}];
(* stacked rows (e_k; -e_k) and offsets, so M.A + b >= 0 encodes -1 <= a_k <= 1 *)
M = Flatten[Table[{m[k], -m[k]}, {k, 0, n}], 1];
b = Flatten[Table[{1, 1}, {k, 0, n}], 1];
(* decision variables a_0, ..., a_n *)
A = Table[a[k], {k, 0, n}];
(* moment constraint A.m0 == 0 plus the box constraints *)
constr = Join[{A . m0 == 0}, Thread[M . A + b >= 0]];
(* maximize the discretized integral (1/n) Total[A] *)
sol = Maximize[{Total[A]/n, constr}, A]
(* plot the optimal a_k against x_k *)
ListLinePlot[Transpose[Join[{m0}, {A /. sol[[2]]}]]]

(Plot: the computed $a_k$ against $k/n$. The solution sits at $+1$ up to $k/n \approx 0.707$ and at $-1$ beyond, in agreement with the $1/\sqrt{2}$ split found analytically.)

$\endgroup$
1
  • $\begingroup$ This is cool and I was really waiting for the keyword "linear programming" to pop up in an answer. But can you please explain what exactly does that code do? I suppose it produces that plot. So what do we infer from that? $\endgroup$ Commented Oct 27 at 16:07
3
$\begingroup$

We reformulate the problem a little: first a change of variables $y=x^2$ in the integration variable, then a symmetrization around the origin. We end up with the following problem: "among the integrable functions $f:[-1,1]\to [-1,1]$ for which $\int_{-1}^{+1}dy\,f(y)=0$, maximize $\int_{-1}^{+1}dy\,|y|^{-1/2}f(y)$."
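In detail, substituting $x=\sqrt{y}$ (so that $dx = \frac{dy}{2\sqrt{y}}$) gives $$\int_0^1 x f(x)\,dx = \frac{1}{2}\int_0^1 f(\sqrt{y})\,dy \quad\text{and}\quad \int_0^1 f(x)\,dx = \frac{1}{2}\int_0^1 y^{-1/2}f(\sqrt{y})\,dy,$$ and extending $y\mapsto f(\sqrt{|y|})$ evenly to $[-1,1]$ yields the constraint and objective stated above (the constant factors picked up along the way do not affect the maximizer).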

We now reformulate the problem once more to redirect the range of $f$ to the interval $[0,1]$ instead of $[-1,1]$, via the substitution $f_{new}=\frac{f_{old}+1}{2}$. The problem then becomes: "among the integrable functions $f:[-1,1]\to [0,1]$ for which $\int_{-1}^{+1}dy\,f(y)=1$, maximize $\int_{-1}^{+1}dy\,|y|^{-1/2}f(y)$."

From the Hardy-Littlewood inequality, we have $$\int_{-1}^{+1}dy\,|y|^{-1/2}f(y)\leq \int_{-1}^{+1}dy\,\{|.|^{-1/2}\}^*(y)f^*(y)=\int_{-1}^{+1}dy\,|y|^{-1/2}f^*(y) \, ,$$ so the maximum is to be found among the symmetrically non-increasing functions. Assuming $f:[-1,1]\to[0,1]$ is symmetrically non-increasing, one can test its performance against that of $\chi_{[-\frac{1}{2},\frac{1}{2}]}$ and check that the latter candidate "wins"; see the comparison below.
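One concrete way to run that final comparison, under the reformulation above (so $0\le f\le 1$ and $\int_{-1}^{+1}dy\,f(y)=1$): with $\chi:=\chi_{[-\frac{1}{2},\frac{1}{2}]}$ we have $\chi-f\geq 0$ exactly where $|y|\leq\frac{1}{2}$, which is exactly where $|y|^{-1/2}\geq\sqrt{2}$, hence $$\int_{-1}^{+1}dy\,|y|^{-1/2}\left(\chi(y)-f(y)\right)=\int_{-1}^{+1}dy\,\left(|y|^{-1/2}-\sqrt{2}\right)\left(\chi(y)-f(y)\right)\geq 0,$$ using $\int_{-1}^{+1}dy\,(\chi(y)-f(y))=0$. Undoing the substitutions, $\chi_{[-\frac{1}{2},\frac{1}{2}]}$ corresponds exactly to the OP's function ($y\le \frac12 \iff x \le \frac{1}{\sqrt{2}}$), and the maximum value works out to $\sqrt{2}-1$.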

Remark: the use of Hardy-Littlewood is not pedantic showing off of math skills. Rearrangement lies at the core of this problem (I think) and is a worthy subject of study, so better not re-invent the wheel with some "more elementary" approach.

$\endgroup$
0
$\begingroup$

Although the existing answers are pretty clever and mathematically correct, if I were a professor teaching a physics course and I gave out this problem, I would expect (hope?) the students would attack it using the Euler-Lagrange equations. (That is, use the more general technique of the calculus of variations.)

Surely the professor wants to guide you into a working understanding of the use of Lagrangians, and has provided this somewhat contrived problem as a gentle way of letting you see how this works.

$\endgroup$
2
  • $\begingroup$ How would the Euler-Lagrange equation handle the constraint $f (x) \in [-1,1]$? $\endgroup$ Commented Oct 31 at 9:17
  • $\begingroup$ @RodrigodeAzevedo There are ways of doing that. $\endgroup$ Commented Oct 31 at 13:21
