## Minimal Distance to Pi

Here is a problem from Week of Code 29, hosted by HackerRank.

Problem Given two integers $$q_1$$ and $$q_2$$ ($$1\le q_1 \le q_2 \le 10^{15}$$), find and print a common fraction $$p/q$$ such that $$q_1 \le q \le q_2$$ and $$\left|p/q-\pi\right|$$ is minimal. If there are several fractions having minimal distance to $$\pi$$, choose the one with the smallest denominator.

Note that checking all possible denominators does not work as iterating for $$10^{15}$$ times would exceed the time limit (2 seconds for C or 10 seconds for Ruby).

The problem setter suggested the following algorithm in the editorial of the problem:

1. Given $$q$$, it is easy to compute $$p$$ such that $$r(q) := p/q$$ is the closest rational to $$\pi$$ among all rationals with denominator $$q$$.
2. Find the semiconvergents of the continued fraction of $$\pi$$ with denominators $$\le 10^{15}$$.
3. Start from $$q = q_1$$, and at each step increase $$q$$ by the smallest denominator $$d$$ of a semiconvergent such that $$r(q+d)$$ is closer to $$\pi$$ than $$r(q)$$. Repeat until $$q$$ exceeds $$q_2$$.
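Step 1 is a one-liner once $$\pi$$ is represented precisely enough. A minimal Ruby sketch (the rational constant below, a high-precision approximation of $$\pi$$, is the one used in the solution later in this post):

```ruby
# A high-precision rational approximation of pi, good enough for
# denominators up to 10**15.
PI_APPROX = Rational(5706674932067741, 1816491048114374)

# Step 1: for a fixed denominator q, the numerator of the rational p/q
# closest to pi is simply round(pi * q).
def closest_numerator(q)
  (PI_APPROX * q).round
end

closest_numerator(7)   # => 22, i.e. r(7) = 22/7
closest_numerator(113) # => 355, i.e. r(113) = 355/113
```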

Given $$q$$, let $$d = d(q)$$ be the smallest increment to the denominator $$q$$ such that $$r(q+d)$$ is closer to $$\pi$$ than $$r(q)$$. To justify the algorithm, one needs to prove that $$d$$ is the denominator of one of the semiconvergents. The problem setter admits that he does not have a formal proof.

Inspired by the problem setter’s approach, here is a complete solution to the problem. Note that $$\pi$$ is not special in this problem and can be replaced by any other irrational number $$\theta$$. Without loss of generality, we may assume that $$\theta\in(0,1)$$.

Let me first introduce the Farey intervals of $$\theta$$.

1. Start with the interval $$(0/1, 1/1)$$.
2. Suppose the last interval is $$(a/b, c/d)$$. Cut it by the mediant of $$a/b$$ and $$c/d$$ and choose one of the intervals $$(a/b, (a+c)/(b+d)), ((a+c)/(b+d), c/d)$$ that contains $$\theta$$ as the next interval.

We call the intervals that appear in the above process the Farey intervals of $$\theta$$. For example, take $$\theta = \pi - 3 = 0.1415926\dots$$. The Farey intervals are:

$$(0/1, 1/1), (0/1, 1/2), (0/1, 1/3), (0/1, 1/4), (0/1, 1/5), (0/1, 1/6), (0/1, 1/7), (1/8, 1/7), (2/15, 1/7), \cdots$$
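The cutting process is easy to carry out in exact arithmetic. A minimal Ruby sketch (the rational constant approximating $$\pi - 3$$ is precise enough for the few steps shown):

```ruby
# theta = pi - 3, approximated by a high-precision rational.
THETA = Rational(5706674932067741, 1816491048114374) - 3

# Return the first n Farey intervals of THETA, starting from (0/1, 1/1)
# and repeatedly cutting at the mediant.
def farey_intervals(n)
  a, b, c, d = 0, 1, 1, 1                 # current interval (a/b, c/d)
  intervals = [[Rational(a, b), Rational(c, d)]]
  (n - 1).times do
    e, f = a + c, b + d                   # mediant (a+c)/(b+d)
    if THETA < Rational(e, f)
      c, d = e, f                         # theta lies in the left half
    else
      a, b = e, f                         # theta lies in the right half
    end
    intervals << [Rational(a, b), Rational(c, d)]
  end
  intervals
end

farey_intervals(9).last # the 9th interval, (2/15, 1/7)
```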

The Farey sequence of order $$n$$, denoted by $$F_n$$, is the sequence of completely reduced fractions between 0 and 1 whose denominators do not exceed $$n$$, arranged in order of increasing size. Fractions that are neighbouring terms in a Farey sequence are known as a Farey pair. For example, the Farey sequence of order 5 is

$$F_5 = (0/1, 1/5, 1/4, 1/3, 2/5, 1/2, 3/5, 2/3, 3/4, 4/5, 1/1).$$
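For small orders, the sequence can be generated by brute force; a sketch (fine for illustration, not for large $$n$$):

```ruby
# Farey sequence of order n: all reduced fractions in [0, 1] with
# denominator <= n, in increasing order. Rational normalizes p/q
# automatically, so uniq discards duplicates such as 2/4 == 1/2.
def farey_sequence(n)
  (1..n).flat_map { |q| (0..q).map { |p| Rational(p, q) } }.uniq.sort
end

farey_sequence(5).length # => 11
```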

Using the Stern–Brocot tree, one can prove that

Lemma 1 For every Farey interval $$(a/b, c/d)$$ of $$\theta$$, the pair $$(a/b, c/d)$$ is a Farey pair. Conversely, for every Farey pair $$(a/b, c/d)$$, if $$\theta \in (a/b, c/d)$$, then $$(a/b, c/d)$$ is a Farey interval.

We say $$p/q$$ is a good rational approximation of $$\theta$$ if every rational between $$p/q$$ and $$\theta$$ (exclusive) has a denominator greater than $$q$$.

By the definition of the Farey sequence, combined with Lemma 1, we know that

Lemma 2 A rational is an endpoint of a Farey interval of $$\theta$$ if and only if it is a good rational approximation of $$\theta$$.

In fact, one can show that the endpoints of Farey intervals and the semiconvergents of the continued fraction are the same thing! Therefore, the problem setter’s claim follows immediately from:

Proposition Given $$q$$, let $$r(q) = p / q$$ be the rational closest to $$\theta$$ with denominator $$q$$. If $$d = d(q)$$ is the minimal increment to $$q$$ such that $$r(q + d) = (p + c) / (q + d)$$ is closer to $$\theta$$ than $$r(q)$$, then $$c/d$$ is a good rational approximation.

Remark The proposition states that the increment to $$p/q$$ always comes from a good rational approximation $$c/d$$. It is stronger than the problem setter’s statement, which only asserts that the increment to $$q$$ is the denominator of a good rational approximation.

Proof In $$(x, y)$$-plane, plot the trapezoid defined by

$$\left| y/x - \theta \right| < \left|p/q - \theta\right|, \quad q < x < q + d.$$

Also we interpret rational numbers $$p/q, (p+c)/(q+d)$$ as points $$A = (q, p), B = (q+d, p+c)$$. Let the line through $$(q, p)$$ parallel to $$y=\theta x$$ intersect the vertical line $$x = q+d$$ at $$C = (q+d, p+\theta d)$$. By the definition of $$d$$, we know that the trapezoid does not contain lattice points. In particular, there is no lattice point in the interior of the triangle $$ABC$$. In the coordinate system with origin at $$A$$, $$B$$ has coordinate $$(d, c)$$ and the line through $$A, C$$ is $$y = \theta x$$. Since triangle $$ABC$$ contains no lattice points, there is no $$(b, a)$$ with $$b < d$$ such that $$a/b$$ is between $$\theta$$ and $$c/d$$. In other words, $$c/d$$ is a good rational approximation. QED.

Here is the fine print of the algorithm. Because floats may not have enough precision for the computation, we use a convergent of the continued fraction of $$\pi$$ instead. All the computations then happen in $$\mathbb{Q}$$. Finally, we present the algorithm.

P = Rational(5706674932067741, 1816491048114374) - 3
min, max = gets.split.map(&:to_i)

# find endpoints of Farey intervals
a, b, c, d = 0, 1, 1, 1
farey = [[a, b], [c, d]]
while (f = b + d) <= max - min
  farey << [e = a + c, f]
  if P < Rational(e, f)
    c, d = e, f
  else
    a, b = e, f
  end
end

p_min = (P * min).round

# increase p_min/min by fractions in farey
while min <= max
  c, d = nil, nil
  farey.each do |a, b|
    break if min + b > max
    if (Rational(p_min + a, min + b) - P).abs < (Rational(p_min, min) - P).abs
      c, d = a, b
      break
    end
  end
  break if d.nil?
  p_min += c
  min += d
end

puts "#{p_min + 3 * min}/#{min}"
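As a side note, a convergent such as the one used for `P` above can be computed from the continued-fraction coefficients of $$\pi$$; a sketch, assuming the standard initial coefficients $$[3; 7, 15, 1, 292, \dots]$$:

```ruby
# The initial continued-fraction coefficients of pi: [3; 7, 15, 1, 292, ...].
PI_CF = [3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14]

# Fold the coefficients from the right: a_0 + 1/(a_1 + 1/(a_2 + ...)).
def convergent(coeffs)
  coeffs.reverse.reduce { |acc, a| a + Rational(1, acc) }
end

convergent(PI_CF[0, 2]) # == Rational(22, 7)
convergent(PI_CF[0, 4]) # == Rational(355, 113)
```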


## Shalom to Ning

I had never expected that Feb 19, 2017 would be the day we said our last farewell to each other.

We used to talk about math puzzles, from blue-eyed islander puzzle to the hardest logic puzzle ever, every time on the bus from or to Shuk. We joked about the possibility that the apple cores we throw in the Carmel national park would one day become apple trees. We had plans to host a hot-pot party and introduce Mahjong to our Israeli friends…

I realized that all these can only be expressed in the past tense when I saw you forever asleep in Eilat.

We were so close, yet we are so far apart.


## First week in Israel

Here is a list of things I have experienced during the first week in Israel and some tips for those of you who plan to visit me in Haifa!

Hainan Airlines has a direct flight from Beijing to Tel Aviv, and it has a resting area for transit passengers in Beijing.

If this is your first time in Israel, make sure your arrival day is not Saturday or any holiday. Saturday (or a holiday) means almost no public transportation, almost no restaurants, and fewer people on the street to ask for help.

Most Israelis are quite friendly and speak decent English. Even if someone does not speak English at all, he or she can always grab someone nearby to help.

You can exchange all major currencies (e.g., US dollars, Chinese yuan) for new Israeli shekels at the airport in Tel Aviv at Bank Hapoalim. As expected, the rate is not as good as what you can get outside the airport. A lot of places accept major credit cards, such as Visa and Mastercard. Some places require a purchase of 20 shekels or more if you pay by credit card.

The vending machines for train tickets did not accept my Mastercard. The ticket from the airport to Haifa (Hof Hacarmel station) costs 35 shekels. The train is on platform 2, and it has a WiFi connection. Use Google Maps to tell which station you are arriving at if you, like me, do not understand enough Hebrew.

The train from Tel Aviv to Haifa goes along the coastline of the Mediterranean Sea. Highway No. 2 lies between the rails and the coast, which resembles Highway No. 1 in California.

Bus No. 11 goes from Hof Hacarmel to the Technion. The bus fare is slightly less than 6 shekels. Moovit is a must for public transportation planning, and it can send you a notification when you are approaching your destination.

Uber and Lyft do not operate in Haifa (but Uber does in Tel Aviv). People use Gett (aka Get Taxi) to call cabs. Since tipping is not required for taxis, remember to turn off automatic tipping in the app. You can use my code GTMLIYK for 15 shekels off your first ride.

The most popular messaging app is WhatsApp. I was asked for my WhatsApp contact quite a few times.

Google Fi (global cellular service including data) works quite well in Israel. For the first week, I’ve relied heavily on my smartphone for navigation, transportation and information retrieval. It’s probably a good idea to carry a power bank. The outlet sockets in Israel are Types C and H.

Other random observations:

On Sunday, the train is full of soldiers. A few were probably carrying M4 carbines when I was on the train.

The rent for properties listed online (e.g., yod2.co.il or Facebook groups) usually does not include the city tax and the building management fee.

Some apartments in Haifa have a separate room for the toilet only.

You give all the rent checks to the landlord when you sign the contract. The Technion can write a guarantee letter to the landlord saying that it will freeze your last payment as the security deposit.

You will not be able to change the PIN of your debit card, which is set by the bank, at least at Bank Leumi, the other major bank in Israel besides Bank Hapoalim.

The Hebrew calendar and the Chinese calendar are both lunisolar, so they are similar in many ways. Friday and Saturday are the weekend in Israel.

High schoolers can choose how difficult a math curriculum they want to take. For example, calculus (including formulas like $$e^{i\theta} = \cos \theta + i\sin\theta$$) is offered in high schools.

However, most high school graduates need to serve in the army for 2-3 years, and it is hard for them to recall what they learned once they finish their service. Top students might have the option to delay their military service.

Pork and shellfish are not kosher, so you will not find them in most supermarkets. It is also not kosher to eat meat with milk.


## A Short Proof of the Nash-Williams Partition Theorem

Notations

1. $$\mathbb{N}$$ – the set of natural numbers;
2. $$\binom{M}{k}$$ – the family of all subsets of $$M$$ of size $$k$$;
3. $$\binom{M}{<\omega}$$ – the family of all finite subsets of $$M$$;
4. $$\binom{M}{\omega}$$ – the family of all infinite subsets of $$M$$;

The infinite Ramsey theorem, in its simplest form, states that for every partition $$\binom{\mathbb{N}}{k} = \mathcal{F}_1 \sqcup \dots \sqcup \mathcal{F}_r$$, there exists an infinite set $$M\subset \mathbb{N}$$ such that $$\binom{M}{k}\subset \mathcal{F}_i$$ for some $$i\in [r]$$. The Nash-Williams partition theorem can be seen as a strengthening of the infinite Ramsey theorem, which considers a partition of a subset of $$\binom{\mathbb{N}}{<\omega}$$.

Notations

1. $$\mathcal{F}\restriction M$$ – $$\mathcal{F}\cap 2^M$$, that is, the set $$\{s\in\mathcal{F} : s\subset M\}$$.
2. $$s \sqsubset t$$, where $$s,t$$ are subsets of $$\mathbb{N}$$ – $$s$$ is an initial segment of $$t$$, that is $$s = \{n\in t : n \le \max s\}$$.

Definition Let $$\mathcal{F} \subset \binom{\mathbb{N}}{<\omega}$$.

1. $$\mathcal{F}$$ is Ramsey if for every partition $$\mathcal{F}=\mathcal{F}_1\sqcup \dots\sqcup\mathcal{F}_r$$ and every $$M\in\binom{\mathbb{N}}{\omega}$$, there is $$N\in\binom{M}{\omega}$$ such that $$\mathcal{F}_i\restriction N = \emptyset$$ for all but at most one $$i\in[r]$$.
2. $$\mathcal{F}$$ is a Nash-Williams family if for all $$s, t\in\mathcal{F}, s\sqsubset t \implies s = t$$.

Theorem [NASH-WILLIAMS 1965] Every Nash-Williams family is Ramsey.

The proof presented here is based on the proof given by Prof. James Cummings in his Infinite Ramsey Theory class. The purpose of this rewrite is to have a proof that resembles the one of the infinite Ramsey theorem.

Notation Let $$s\in\binom{\mathbb{N}}{<\omega}$$ and $$M\in\binom{\mathbb{N}}{\omega}$$. Denote $$[s, M] = \left\{t \in \binom{\mathbb{N}}{<\omega} : t \sqsubset s \text{ or } (s \sqsubset t \text{ and } t\setminus s \subset M)\right\}.$$

Definition Fix $$\mathcal{F}\subset \binom{\mathbb{N}}{<\omega}$$ and $$s\in \binom{\mathbb{N}}{<\omega}$$.

1. $$M$$ accepts $$s$$ if $$[s, M]\cap \mathcal{F}\neq \emptyset$$ and $$M$$ rejects $$s$$ otherwise;
2. $$M$$ strongly accepts $$s$$ if every infinite subset of $$M$$ accepts $$s$$;
3. $$M$$ decides $$s$$ if $$M$$ either rejects $$s$$ or strongly accepts it.

We list some properties that encapsulate the combinatorial characteristics of the definitions above.

Properties

1. If $$M$$ decides (or strongly accepts, or rejects) $$s$$ and $$N\subset M$$, then $$N$$ decides (respectively strongly accepts, rejects) $$s$$ as well.
2. For every $$M\in\binom{\mathbb{N}}{\omega}$$ and $$s\in\binom{\mathbb{N}}{<\omega}$$, there is $$N_1\in\binom{M}{\omega}$$ deciding $$s$$. Consequently, there is $$N_2\in\binom{M}{\omega}$$ deciding every subset of $$s$$.

Proof of Theorem It is enough to show that if $$\mathcal{F} = \mathcal{F}_1\sqcup \mathcal{F}_2$$, then for every $$M\in\binom{\mathbb{N}}{\omega}$$, there is $$N\in \binom{M}{\omega}$$ such that $$\mathcal{F}_i \restriction N = \emptyset$$ for some $$i\in[2]$$.

We are going to use $$\mathcal{F}_1$$ instead of $$\mathcal{F}$$ in the definitions of “accept”, “reject”, “strongly accept” and “decide”. Find $$N\in \binom{M}{\omega}$$ that decides $$\emptyset$$. If $$N$$ rejects $$\emptyset$$, by definition $$\mathcal{F}_1\restriction N = [\emptyset, N]\cap \mathcal{F}_1 = \emptyset$$. Otherwise $$N$$ strongly accepts $$\emptyset$$.

Inductively, we build a decreasing sequence of infinite sets $$N \supset N_1 \supset N_2\supset \dots$$, an increasing sequence of natural numbers $$n_1, n_2, \dots$$, and maintain that $$n_i\in N_i, n_i < \min N_{i+1}$$ and that $$N_i$$ strongly accepts every $$s\subset \{n_j : j < i\}$$. Initially, we take $$N_1 = N$$ as $$N$$ strongly accepts $$\emptyset$$.

Suppose $$N_1 \supset \dots \supset N_i$$ and $$n_1 < \dots < n_{i-1}$$ have been constructed. In the following lemma, when taking $$M = N_i$$ and $$s = \{n_j : j < i\}$$, it spits out $$m$$ and $$N$$, which are exactly what we need for $$n_i$$ and $$N_{i+1}$$ to finish the inductive step.

Lemma Suppose $$M\in\binom{\mathbb{N}}{\omega}$$, $$s\in\binom{\mathbb{N}}{<\omega}$$ and $$\max s < \min M$$. If $$M$$ strongly accepts every subset of $$s$$, then there are $$m \in M$$ and $$N \in \binom{M}{\omega}$$ such that $$m < \min N$$ and $$N$$ strongly accepts every subset of $$s\cup \{m\}$$.

Proof of lemma We can build $$M = M_0 \supset M_1\supset M_2 \supset \dots$$ such that for every $$i$$, $$m_i := \min M_i < \min M_{i+1}$$ and $$M_{i+1}$$ decides every subset of $$s\cup \{m_i\}$$. It might happen that $$M_{i+1}$$ rejects a subset of $$s\cup \{m_i\}$$. However, we claim that this cannot happen infinitely many times.

Otherwise, by the pigeonhole principle, there is $$t\subset s$$ such that $$I = \{i : M_{i+1} \text{ rejects }t\cup\{m_{i}\}\}$$ is infinite. Let $$M' = \{m_i : i\in I\}$$. Note that $$[t, M'] \subset \cup_i [t\cup\{m_i\}, M_{i+1}]$$, and so $$[t,M']\cap \mathcal{F}_1\subset \cup_i \left([t\cup\{m_i\}, M_{i+1}]\cap\mathcal{F}_1\right) = \emptyset$$. Hence $$M'\subset M$$ rejects $$t\subset s$$, which is a contradiction.

Now we pick one $$i$$ such that $$M_{i+1}$$ strongly accepts every subset of $$s\cup\{m_i\}$$, and it is easy to check that $$m = m_i$$ and $$N = M_{i+1}$$ suffice. QED for lemma.

Finally, we take $$N_\infty = \{n_1, n_2, \dots\}$$. For any $$s\in\binom{N_\infty}{<\omega}$$, there is $$i$$ such that $$s\subset \{n_1, \dots, n_{i-1}\}$$. Note that $$N_i$$ strongly accepts $$s$$ and $$N_\infty\subset N_i$$. Therefore $$N_\infty$$ (strongly) accepts $$s$$, that is, $$[s, N_\infty]\cap \mathcal{F}_1 \neq \emptyset$$; say $$t\in [s, N_\infty]\cap \mathcal{F}_1$$. Since $$t\in [s, N_\infty]$$, either $$t\sqsubset s$$ or $$s\sqsubset t$$. Because $$t\in\mathcal{F}_1$$ and $$\mathcal{F} = \mathcal{F}_1 \sqcup \mathcal{F}_2$$ is a Nash-Williams family, $$s\notin \mathcal{F}_2$$, and so $$\mathcal{F}_2\restriction N_\infty = \emptyset$$. QED.


## Alternative to Beamer for Math Presentation

Although blackboard and chalk are the best option for a math talk for various reasons, sometimes, due to time limits, one has to make slides to save time on writing. The most common tool to create slides nowadays is LaTeX with Beamer.

When I was preparing for my talk at Vancouver for Connections in Discrete Mathematics, a conference in honor of the work of Ron Graham, as it was my first ever conference talk, I decided to ditch Beamer due to my lack of experience. I ended up using HTML+CSS+JavaScript to leverage my knowledge of web design.

The JavaScript framework I used is reveal.js. Though there are other options such as impress.js, reveal.js fits a math talk better. One can easily create a text-based presentation with static images / charts. The framework also incorporates MathJax as an optional dependency, which can be enabled with a few lines of code. What I really like about reveal.js, as well as impress.js, is that they provide smooth spatial transitions between slides. However, one has to use another JavaScript library to draw and animate diagrams. For that, I chose raphael.js, a JavaScript library that uses SVG and VML for creating graphics, so that users can easily, for example, create their own specific charts. The source code of the examples on the official website is a really good place to start.

To integrate reveal.js and raphael.js to achieve a step-by-step animation of a diagram, I hacked it by adding a dummy fragment element in my HTML document so that reveal.js can listen to the fragmentshown event and hence trigger raphael.js to animate the diagram. In cases where the diagrams are made of HTML elements, I used jQuery to control the animation. Here is my favorite animation in the slides, generated by jQuery.

However, one has to make more effort to reverse the animation made by raphael.js or jQuery if one wants to go backwards in slides. I did not implement any reverse animation since I did not plan to go back in slides at all.

In case there is no internet access during the presentation, one has to have local copies of all external JavaScript libraries (sometimes also fonts), which, in my case, are MathJax, raphael.js and jQuery. In order to use MathJax offline, one needs to configure reveal.js. Here is what my final HTML document looks like.

<!doctype html>
<html lang="en">

<head>
<meta charset="utf-8">
<title>A Bound on Turán Number for Cycles of Even Length</title>
<meta name="description" content="A contributed talk at Connections in Discrete Mathematics">
<meta name="author" content="Zilin Jiang">
<link rel="stylesheet" href="css/custom.css"> <!-- your customized css stylesheet can go here -->
<!--[if lt IE 9]>
<script src="lib/js/html5shiv.js"></script>
<![endif]-->
</head>

<body>
  <div class="reveal">
    <div class="slides">
      <section>
        ...
      </section>
    </div>
  </div>
  <script src="lib/js/jquery.min.js"></script> <!-- local jQuery library -->
  <script src="js/raphael-min.js"></script> <!-- local raphael.js -->
  <script src="js/reveal.js"></script>
  <script>
    Reveal.initialize({
      controls: false,
      slideNumber: true,
      progress: true,
      fragments: true,
      // Optional libraries used to extend on reveal.js
      dependencies: [
        { src: 'plugin/math/math.js', async: true } // to enable MathJax
      ],
      math: {
        mathjax: 'lib/js/MathJax/MathJax.js' // local MathJax
      }
    });
  </script>
</body>

</html>


Currently, my slides only work correctly on Chrome. There is another bug that I have not figured out yet: if I start afresh from the first slide, my second diagram generated by Raphael is not rendered correctly. I got around it by refreshing the slide where the second diagram lives. This is still annoying, and I would like to resolve it.

All in all, I really like this alternative approach to making slides for math presentations because it enables me to implement whatever I imagine.


## 十一年 (Eleven Years)


## A Short Proof for Hausdorff Moment Problem

The Hausdorff moment problem asks for necessary and sufficient conditions that a given sequence $$(m_n)$$ with $$m_0=1$$ be the sequence of moments of a random variable $$X$$ supported on $$[0,1]$$, i.e., $$\operatorname{E}X^n=m_n$$ for all $$n$$.

In 1921, Hausdorff showed that $$(m_n)$$ is such a moment sequence if and only if the sequence is completely monotonic, i.e., its difference sequences satisfy $$(D^r m)_s \ge 0$$ for all $$r, s \ge 0$$. Here $$D$$ is the difference operator on the space of real sequences $$(a_n)$$ given by $$Da = (a_n - a_{n+1})$$.

The proof under the fold follows the outline given in (E18.5–E18.6) of Probability with Martingales by David Williams.

Proof of Necessity Suppose $$(m_n)$$ is the moment sequence of a random variable $$X$$ supported on $$[0,1]$$. By induction, one can show that $$(D^r m)_s = \operatorname{E}(1-X)^rX^s$$. Clearly, as $$X$$ is supported on $$[0,1]$$, the moment sequence is completely monotonic.
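The induction is easy to sanity-check in exact arithmetic. A minimal sketch, assuming $$X$$ uniform on $$[0,1]$$ as a test case, so that $$m_n = 1/(n+1)$$ and $$(D^r m)_s = \operatorname{E}(1-X)^rX^s = r!\,s!/(r+s+1)!$$:

```ruby
# One application of the difference operator D: (Da)_n = a_n - a_{n+1}.
def diff(seq)
  seq.each_cons(2).map { |a, b| a - b }
end

def factorial(n)
  (1..n).reduce(1, :*)
end

m = (0..20).map { |n| Rational(1, n + 1) } # moments of Uniform[0, 1]
d3 = diff(diff(diff(m)))                   # the sequence (D^3 m)_s

# (D^3 m)_5 equals E (1-X)^3 X^5 = 3! * 5! / 9!
d3[5] == Rational(factorial(3) * factorial(5), factorial(9)) # => true
```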

Proof of Sufficiency Suppose $$(m_n)$$ is a completely monotonic sequence with $$m_0 = 1$$.

Define $$F_n(x) := \sum_{i \le nx}{n\choose i}(D^{n-i}m)_i$$. Clearly, $$F_n$$ is right-continuous and non-decreasing, and $$F_n(0^-) = 0$$. To prove $$F_n(1) = 1$$, one has to prove the identity $$\sum_{i}{n\choose i}(D^{n-i}m)_i = m_0.$$

Classical Trick Since the identity above is about vectors in the linear space (over the reals) spanned by $$(m_n)$$ and the linear space spanned by $$(m_n)$$ is isomorphic to the one spanned by $$(\theta^n)$$, the identity is equivalent to $$\sum_{i}{n\choose i}(D^{n-i}\theta)_i = \theta^0,$$ where $$\theta_n = \theta^n$$. Now, we take advantage of the ring structure of $$\mathbb{R}[\theta]$$. Notice that $$(D^{r}\theta)_s = (1-\theta)^r\theta^s$$. Using the binomial theorem, we obtain $$\sum_{i}{n\choose i}(D^{n-i}\theta)_i = \sum_{i}{n\choose i}(1-\theta)^{n-i}\theta^i = (1-\theta + \theta)^n = \theta^0.$$
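The identity can also be checked numerically for a concrete moment sequence; a sketch in exact rational arithmetic, using the moments $$m_n = 1/(n+1)$$ of the uniform distribution on $$[0,1]$$ as an assumed test case:

```ruby
# Binomial coefficient C(n, k) in exact arithmetic.
def binom(n, k)
  (1..k).reduce(Rational(1)) { |acc, j| acc * (n - j + 1) / j }
end

# Apply the difference operator D r times: (Da)_n = a_n - a_{n+1}.
def d_power(seq, r)
  r.times { seq = seq.each_cons(2).map { |a, b| a - b } }
  seq
end

m = (0..12).map { |n| Rational(1, n + 1) } # moments of Uniform[0, 1]
n = 6
total = (0..n).sum { |i| binom(n, i) * d_power(m, n - i)[i] }
total == m[0] # => true, the sum collapses to m_0 = 1
```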

Therefore $$F_n$$ is a bona fide distribution function. Define $$m_{n, k} := \int_{[0,1]} x^kdF_n(x)$$, i.e., $$m_{n,k}$$ is the $$k$$th moment of $$F_n$$. We now find an explicit formula for $$m_{n,k}$$.

Noticing that $$F_n$$ is constant, say $$c_{n,i}$$, on $$[\frac{i}{n}, \frac{i+1}{n})$$, for all $$i=0, \dots, n-1$$ and $$c_{n,i}$$ is a linear combination of $$m_0, \dots, m_n$$, we know that $$m_{n,k} = \sum_{i=0}^n a_{n,k,i}m_i$$.

Just like what we did in proving the identity, we use the special case $$m_n = \theta^n$$ to compute the coefficients $$a_i = a_{n,k,i}$$, where $$0 \le \theta \le 1$$. In this case, $$F_n(x) = \sum_{i \le nx}{n\choose i}(D^{n-i}\theta)_i = \sum_{i\le nx}{n\choose i}(1-\theta)^{n-i}\theta^i, m_{n,k} = \sum_{i=0}^n a_{i}\theta^i.$$

Now consider the situation in which a coin with heads probability $$\theta$$ is tossed at times $$1,2,\dots$$. The random variable $$H_k$$ is $$1$$ if the $$k$$th toss produces heads, and $$0$$ otherwise. Define $$A_n := (H_1 + \dots + H_n)/n$$. It is immediate from the formula of $$F_n$$ that $$F_n$$ is the distribution function of $$A_n$$, and so $$m_{n,k}$$ is the $$k$$th moment of $$A_n$$. However, one can calculate the $$k$$th moment of $$A_n$$ explicitly. Let $$f\colon [k] \to [n]$$ be chosen uniformly at random, let $$Im_f$$ be the cardinality of the image of $$f$$, and denote $$p_i = p_{n,k,i} := \operatorname{Pr}(Im_f = i)$$. Using $$f, Im_f$$ and $$p_i$$, we obtain $$\operatorname{E}A_n^k = \operatorname{E}\left(\frac{H_1 + \dots + H_n}{n}\right)^k = \operatorname{E}H_{f(1)}\dots H_{f(k)} = \operatorname{E}\operatorname{E}[H_{f(1)}\dots H_{f(k)}\mid Im_f] = \operatorname{E}\theta^{Im_f} = \sum_{i=0}^n p_{i}\theta^i.$$ Therefore, for all $$\theta\in [0,1]$$, we know that $$\sum_{i=0}^n a_i\theta^i = \sum_{i=0}^n p_i\theta^i$$, and so $$a_i = p_i$$ for all $$i=0,\dots, n$$.

As both $$(a_i)$$ and $$(p_i)$$ do not depend on $$m_i$$, $$a_i = p_i$$ holds in general. Since $$p_k = p_{n, k, k} = \prod_{i=0}^{k-1}(1-i/n)\to 1$$ as $$n\to\infty$$  and $$p_i = 0$$ for all $$i > k$$, we know that $$\lim_n m_{n,k}= m_k$$.

Using the Helly–Bray Theorem, since $$(F_n)$$ is tight, there exists a distribution function $$F$$ and a subsequence $$(F_{k_n})$$ such that $$F_{k_n}$$ converges weakly to $$F$$. The definition of weak convergence implies that $$\int_{[0,1]} x^k dF(x) = \lim_n \int_{[0,1]}x^k dF_{k_n}(x) = \lim_n m_{k_n,k} = m_k.$$ Therefore, the random variable $$X$$ with distribution function $$F$$ is supported on $$[0,1]$$ and its $$k$$th moment is $$m_k$$. QED

There are two other classical moment problems: the Hamburger moment problem and the Stieltjes moment problem.


## 欺诈猜数游戏（下）(The Number-Guessing Game with Lies, Part 2)


## An Upper Bound on Stirling Number of the Second Kind

We shall show an upper bound on the Stirling number of the second kind, a byproduct of a homework exercise of Probabilistic Combinatorics offered by Prof. Tom Bohman.

Definition A Stirling number of the second kind (or Stirling partition number) is the number of ways to partition a set of $$n$$ objects into $$k$$ non-empty subsets and is denoted by $$S(n,k)$$.

Proposition For all $$n, k$$, we have $$S(n,k) \leq \frac{k^n}{k!}\left(1-(1-1/k)^n\right)^k.$$

Proof Consider a random bipartite graph with partite sets $$U:=[n], V:=[k]$$. For each vertex $$u\in U$$, it (independently) connects to exactly one of the vertices in $$V$$ uniformly at random. Suppose $$X$$ is the set of non-isolated vertices in $$V$$. It is easy to see that $$\operatorname{Pr}\left(X=V\right) = \frac{\text{number of surjections from }U\text{ to }V}{k^n} = \frac{k!S(n,k)}{k^n}.$$

On the other hand, we claim that for any $$\emptyset \neq A \subset [k]$$ and $$i \in [k]\setminus A$$, $$\operatorname{Pr}\left(i\in X \mid A\subset X\right) \leq \operatorname{Pr}\left(i\in X\right).$$ Note that the claim is equivalent to $$\operatorname{Pr}\left(A\subset X \mid i\notin X\right) \geq \operatorname{Pr}\left(A\subset X\right).$$ Consider the same random bipartite graph with $$V$$ replaced by $$V':=[k]\setminus \{i\}$$ and let $$X'$$ be the set of non-isolated vertices in $$V'$$. The claim is justified since $$\operatorname{Pr}\left(A\subset X\mid i\notin X\right) = \operatorname{Pr}\left(A\subset X'\right) \geq \operatorname{Pr}\left(A\subset X\right).$$

Set $$A:=[i-1]$$ in the above for $$i = 2, \ldots, k$$. Using the multiplication rule and telescoping the conditional probabilities, we obtain $$\begin{eqnarray}\operatorname{Pr}\left(X=V\right) &=& \operatorname{Pr}\left(1\in X\right)\operatorname{Pr}\left(2\in X \mid [1]\subset X\right)\ldots \operatorname{Pr}\left(k\in X\mid [k-1]\subset X\right)\\ & \leq & \operatorname{Pr}\left(1\in X\right)\operatorname{Pr}\left(2\in X\right)\ldots\operatorname{Pr}\left(k\in X\right) \\ & = & \left(1-(1-1/k)^n\right)^k.\end{eqnarray}$$ QED.
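As a sanity check (not part of the proof), one can compare the bound against exact values of $$S(n,k)$$ computed from the standard recurrence $$S(n,k) = kS(n-1,k) + S(n-1,k-1)$$; a sketch:

```ruby
# Stirling numbers of the second kind via the standard recurrence.
def stirling2(n, k)
  return 1 if n == 0 && k == 0
  return 0 if n == 0 || k == 0
  k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
end

def factorial(n)
  (1..n).reduce(1, :*)
end

n, k = 10, 4
exact = stirling2(n, k) # 34105
bound = Rational(k**n, factorial(k)) * (1 - Rational(k - 1, k)**n)**k
exact <= bound # => true
```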


## A Probabilistic Proof of Isoperimetric Inequality

This note is based on Nicolas Garcia Trillos’ talk, Some Problems and Techniques in Geometric Probability, at Carnegie Mellon University on January 29, 2015.

In particular, we demonstrate a probabilistic proof of the isoperimetric inequality. The proof can also be found in Integral Geometry and Geometric Probability by Luis A. Santaló.

Theorem For a convex set with perimeter $$L$$ and area $$A$$, the isoperimetric inequality states that $$4\pi A\leq L^2$$, and that equality holds if and only if the convex set is a disk. (Assume that the boundary is a closed convex curve of class $$C^1$$.)

Proof Let $$P(s)$$ parametrize the boundary by its arc length $$s$$. Given $$s$$ and $$\theta$$, suppose the line through $$P(s)$$ whose angle to the tangent line equals $$\theta$$ intersects the boundary at another point $$Q(s)$$. Let $$\sigma(s, \theta)$$ be the length of the chord between the two intersections $$P(s)$$ and $$Q(s)$$. Consider the integral $$\int (\sigma_1\sin\theta_2 - \sigma_2\sin\theta_1)^2 \mathrm{d}s_1\mathrm{d}\theta_1\mathrm{d}s_2\mathrm{d}\theta_2,$$ where $$\sigma_i = \sigma(s_i, \theta_i)$$ and the integration extends over $$0 \leq s_1, s_2 \leq L$$ and $$0 \leq \theta_1, \theta_2 \leq \pi$$.

Expanding the square in the integrand, we obtain that the integral is equal to $$\pi L \int \sigma^2\mathrm{d}s\mathrm{d}\theta - 2\left(\int \sigma\sin\theta\mathrm{d}s\mathrm{d}\theta\right)^2.$$

On one hand, we have $$\int \sigma^2\mathrm{d}s\mathrm{d}\theta = \int_0^L\int_0^\pi \sigma^2\mathrm{d}\theta\mathrm{d}s = \int_0^L 2A\mathrm{d}s = 2LA.$$

On the other hand, let $$p$$ be the distance from the chord to the origin and $$\phi$$ the angle from the $$x$$-axis to the chord. Suppose the angle from the $$x$$-axis to the tangent line is $$\tau$$. We have $$p = \langle x, y\rangle\cdot\langle \sin\phi, -\cos\phi \rangle = x\sin\phi - y\cos\phi.$$ Differentiating the latter, we get $$\mathrm{d}p = \sin\phi\mathrm{d}x - \cos\phi\mathrm{d}y + (x\cos\phi + y\sin\phi)\mathrm{d}\phi.$$ Moreover, we know that $$\mathrm{d}x = \cos\tau\mathrm{d}s, \mathrm{d}y = \sin\tau\mathrm{d}s.$$ Therefore $$\mathrm{d}p = \sin\phi\cos\tau\mathrm{d}s - \cos\phi\sin\tau\mathrm{d}s + (x\cos\phi + y\sin\phi)\mathrm{d}\phi,$$ and so $$\mathrm{d}p\mathrm{d}\phi = \sin(\phi - \tau)\mathrm{d}s\mathrm{d}\phi.$$ Since $$\theta + \phi = \tau$$ and $$\mathrm{d}\theta + \mathrm{d}\phi = \tau'\mathrm{d}s$$, we have $$\mathrm{d}p\mathrm{d}\phi = -\sin\theta\mathrm{d}s\mathrm{d}\theta,$$ and so $$\int\sigma\sin\theta\mathrm{d}s\mathrm{d}\theta = \int_0^{2\pi}\int_{-\infty}^\infty \sigma\mathrm{d}p\mathrm{d}\phi = 2\pi A.$$

Since the integral is non-negative, we have that $$2\pi A(L^2 - 4\pi A)\geq 0$$, and so $$4\pi A \leq L^2$$. The equality is achieved if and only if $$\sigma / \sin\theta$$ is a constant, in which case the boundary is a circle. QED.
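As a quick check of the equality case, a disk of radius $$r$$ attains equality:

```latex
L = 2\pi r, \qquad A = \pi r^2
\quad\Longrightarrow\quad
L^2 = 4\pi^2 r^2 = 4\pi \cdot \pi r^2 = 4\pi A.
```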

Remark The proof is considered a probabilistic proof because the differential form $$\mathrm{d}p\mathrm{d}\phi$$ is the measure (invariant under rigid motions) of a random line.
