Especially in the **analysis of randomized algorithms**, probabilistic bounds play a pivotal role in proving probabilistic theorems. This blog aims to be a **reference** for the key results that will be used in the randomized analyses conducted in upcoming blogs. First, we cover the all-important **Markov and Chebyshev bounds**, complete with their simple proofs. Then, we move on to state the **Chernoff and Hoeffding concentration bounds**; the proofs of these bounds are quite involved probabilistically and will not be stressed in detail here. Lastly, we conclude with other probabilistic bounds such as the **union bound** for future reference.

For the foundational probability theory background needed to internalize the material here, look into a previous post. The content presented here is inspired by and adapted from Mitzenmacher and Upfal's *Probability and Computing*, and the notes of C. Seshadhri's CSE 290A course; the course webpage can be found here.

## Markov’s Bound

The Russian mathematician **Andrey Markov** was instrumental in the fields of probability theory and statistics, and he is most famously known for introducing the stochastic processes known as **Markov chains**. Here we present one of the fundamental inequalities in probability theory, known as **Markov's inequality**, along with its proof.

**Theorem (Markov’s Bound):** Let $X$ be a nonnegative random variable (i.e. all possible values of $X$ are nonnegative). Then,

$$\Pr[X \ge a] \le \frac{\mathbb{E}[X]}{a}$$

or equivalently written as

$$\Pr[X \ge a \cdot \mathbb{E}[X]] \le \frac{1}{a}$$

for all constant $a > 0$.

**Proof:** Consider some random variable $X$ such that all its possible values are nonnegative, and also let $a > 0$ be some constant. Now define the following indicator:

$$I = \begin{cases} 1 & \text{if } X \ge a \\ 0 & \text{otherwise} \end{cases}$$

and thus we can write that

$$I \le \frac{X}{a}$$

at all times because of how $I$ is defined and since $X$ is nonnegative. By definition of expectation, we get that

$$\mathbb{E}[I] = \Pr[I = 1] = \Pr[X \ge a]$$

since $I = 1$ is equivalent to $X \ge a$. Thus by substitution we see that

$$\Pr[X \ge a] = \mathbb{E}[I] \le \mathbb{E}\left[\frac{X}{a}\right] = \frac{\mathbb{E}[X]}{a}$$

since $a$ is a constant and by the use of our knowledge of the relation between $I$ and $X/a$. Thus we have completed the derivation of the claim as desired.

The only assumption we need to apply Markov’s inequality is that the **random variable given is nonnegative**. Other than that, we don’t require any other information about the random variable (e.g. variance, etc.), only the expectation. This inequality, although crude and simplistic, is extremely useful due to its **convenient generality**. Further, it is **tight** under the assumptions made to use Markov’s inequality, i.e. there exists a nonnegative random variable $X$ and constant $a$ such that we have equality.
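As a quick sanity check, here is a small simulation (a sketch using only Python's standard library; the choice of distribution and threshold are illustrative, not from the text) comparing the empirical tail probability of a nonnegative random variable against the Markov bound:

```python
import random

# Sanity check of Markov's inequality on a toy nonnegative random
# variable: X = number of heads in 10 fair coin flips, so E[X] = 5.
# Markov's bound then gives Pr[X >= 8] <= E[X] / 8 = 5/8.
random.seed(0)

def sample_x():
    # One draw of X: count heads among 10 fair flips.
    return sum(random.randint(0, 1) for _ in range(10))

trials = 100_000
count = sum(1 for _ in range(trials) if sample_x() >= 8)
empirical = count / trials
markov_bound = 5 / 8

# The empirical tail probability respects the (quite loose) bound.
assert empirical <= markov_bound
print(f"Pr[X >= 8] ~= {empirical:.4f}, Markov bound = {markov_bound:.4f}")
```

The gap between the two numbers illustrates how crude the bound is when we use only the expectation.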

## Chebyshev’s Bound

Before diving into the next bound, recall the definition of **variance**, denoted by $\mathrm{Var}[X]$, for a random variable $X$:

$$\mathrm{Var}[X] = \mathbb{E}\left[(X - \mathbb{E}[X])^2\right]$$

The variance intuitively quantifies how much the random variable **deviates from its expected value**. The variance can sometimes be computed for a given random variable, and in these cases, the Chebyshev bound is applicable as we will see further on.

Another renowned Russian mathematician is **Pafnuty Chebyshev**, whose students in fact included Markov himself along with many other prominent Russian mathematicians. His most famous contribution to the field of probability theory is the inequality we introduce here, and from this result followed many other important results in the early days of probability theory. Let’s now discuss **Chebyshev’s inequality** below.

**Theorem (Chebyshev’s Bound):** Let $X$ be a random variable. Then,

$$\Pr\left[\,|X - \mathbb{E}[X]| \ge a\,\right] \le \frac{\mathrm{Var}[X]}{a^2}$$

for all constant $a > 0$.

**Proof:** Notice that since $a > 0$, the inequalities $|X - \mathbb{E}[X]| \ge a$ and $(X - \mathbb{E}[X])^2 \ge a^2$ are equivalent, so we can thus write

$$\Pr\left[\,|X - \mathbb{E}[X]| \ge a\,\right] = \Pr\left[(X - \mathbb{E}[X])^2 \ge a^2\right].$$

Furthermore recall that $\mathrm{Var}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2]$, so simply by applying Markov’s inequality to the nonnegative random variable $(X - \mathbb{E}[X])^2$, we get that

$$\Pr\left[(X - \mathbb{E}[X])^2 \ge a^2\right] \le \frac{\mathbb{E}\left[(X - \mathbb{E}[X])^2\right]}{a^2} = \frac{\mathrm{Var}[X]}{a^2}$$

and thus by substitution we get that $\Pr[\,|X - \mathbb{E}[X]| \ge a\,] \le \mathrm{Var}[X]/a^2$ as claimed.

As with Markov’s bound, there are a few more **useful expressions** of the same inequality; we list them below. Recall that the **standard deviation** of a random variable $X$, denoted by $\sigma[X]$, is defined as $\sigma[X] = \sqrt{\mathrm{Var}[X]}$.

**Corollary:** Consider a random variable $X$. Then,

$$\Pr\left[\,|X - \mathbb{E}[X]| \ge a \cdot \sigma[X]\,\right] \le \frac{1}{a^2}$$

and

$$\Pr\left[\,|X - \mathbb{E}[X]| \ge a \cdot \mathbb{E}[X]\,\right] \le \frac{\mathrm{Var}[X]}{a^2 \, \mathbb{E}[X]^2}$$

for any constant $a > 0$.

The proof of the above corollary follows immediately by substitution into the Chebyshev bound, and so we don’t state the contents of the derivation here.

As we can see, the Chebyshev bound (and its corollaries) is in fact **more general** than the Markov bound, as we make no assumptions on the sign of the random variable $X$. However, we do **require the use of the variance**, and this quantity may be difficult to compute, so at the same time, the Chebyshev inequality isn’t as general as we might want it to be. Nevertheless, this bound is **extremely useful** in many steps of probabilistic analyses. Moreover, similar to Markov’s bound, this bound is also **tight** under the assumptions made.
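To see the variance paying off, here is a small simulation (again a standard-library sketch with illustrative numbers) checking Chebyshev's bound on the same kind of coin-flip variable used above:

```python
import random

# Check Chebyshev's bound for X = number of heads in 10 fair flips:
# E[X] = 5 and Var[X] = 10 * (1/2) * (1/2) = 2.5, so Chebyshev gives
# Pr[|X - 5| >= 3] <= 2.5 / 3**2 ~= 0.278.
random.seed(0)

def sample_x():
    # One draw of X: count heads among 10 fair flips.
    return sum(random.randint(0, 1) for _ in range(10))

trials = 100_000
count = sum(1 for _ in range(trials) if abs(sample_x() - 5) >= 3)
empirical = count / trials
chebyshev_bound = 2.5 / 9

assert empirical <= chebyshev_bound
print(f"Pr[|X - 5| >= 3] ~= {empirical:.4f}, Chebyshev bound = {chebyshev_bound:.4f}")
```

Note that the bound here is on a two-sided deviation, something Markov's inequality alone cannot express.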

## Poisson Trials

Though the general formulations of the Chernoff and Hoeffding bounds only require the use of the moment generating function (which essentially provides a method of understanding the “moments” of a random variable), the **most compelling renditions of the bounds** are for the case of analyzing **Poisson trials**, which we first define here.

Recall that a **0-1 random variable** is a random variable $X$ such that $X = 0$ or $X = 1$, where its “probability” is defined by $p = \Pr[X = 1]$, i.e. the probability that the 0-1 random variable is 1; the definition is quite evident in the nomenclature for this class of random variables. Now let’s define Poisson trials formally.

**Definition (Poisson Trials):** Consider mutually independent 0-1 random variables $X_1, X_2, \dots, X_n$ with corresponding probabilities $p_1, p_2, \dots, p_n$ (not necessarily identical); this sequence is known as a sequence of Poisson trials. Then the random variable defined by $X = \sum_{i=1}^{n} X_i$ is known as the sum of Poisson trials.

Now it’s not daft that this might seem similar to the definition of the binomial random variable. In fact, the **binomial is simply a special case** of the sum of Poisson trials: the case where the corresponding probabilities are identical (such identical trials are known as Bernoulli trials). Further, the sum of random variables does remind us of the **Central Limit Theorem**, which asserts that the (appropriately normalized) sum of independent random variables from a fixed distribution approaches a Gaussian distribution. Note that in Poisson trials, each trial can come from its own distribution, though the **sum of Poisson trials can be deemed sub-Gaussian**, i.e. it retains certain properties of the Gaussian distribution, especially the tail properties. Keeping this in the back of our minds, it is easier to understand the intuition behind the Chernoff bounds.

The lenience of allowing varying probabilities for the 0-1 random variables is what makes Poisson trials interesting, and in fact quite **ubiquitous** in the world of probabilistic analysis. The following bounds have very particular results for sums of Poisson trials primarily, and we delve into these in detail.
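A sum of Poisson trials is simple to simulate (a standard-library sketch; the particular probabilities below are arbitrary illustrative choices), and by linearity of expectation its mean is just the sum of the individual probabilities:

```python
import random

# A sequence of Poisson trials: independent 0-1 variables whose success
# probabilities p_i need NOT be identical. X is their sum, and by
# linearity of expectation E[X] = p_1 + ... + p_n.
random.seed(0)

probs = [0.1, 0.5, 0.9, 0.3, 0.7]    # heterogeneous p_i
expected = sum(probs)                # E[X] = 2.5

def sample_sum(ps):
    # One draw of X = X_1 + ... + X_n.
    return sum(1 for p in ps if random.random() < p)

trials = 100_000
mean = sum(sample_sum(probs) for _ in range(trials)) / trials
assert abs(mean - expected) < 0.05
print(f"E[X] = {expected}, empirical mean ~= {mean:.3f}")
```

With identical probabilities the same code samples a binomial random variable, matching the special case noted above.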

## The Chernoff Bounds

By the **general derivation of the Chernoff bound** (via moment generating functions) we reach the following conclusions for some given random variable $X$:

$$\Pr[X \ge a] \le \frac{\mathbb{E}[e^{tX}]}{e^{ta}} \quad \text{for all } t > 0$$

and

$$\Pr[X \le a] \le \frac{\mathbb{E}[e^{tX}]}{e^{ta}} \quad \text{for all } t < 0$$

where we must choose our $t$ wisely to obtain beneficial bounds. Quite underwhelming for the most famous set of concentration inequalities in probability theory.

However, by this process, there are numerous **well-established results for Poisson trials** specifically, and this is quite useful since many analyses can be transformed into an equivalent problem involving the sum of Poisson trials. To introduce some “probability theory slang,” the terms **“upper tail”** and **“lower tail”** refer to the ends (right and left, respectively) of the somewhat-Gaussian distribution that a sum of Poisson trials generally follows.

So we first take a look at the **multiplicative Chernoff upper tail inequalities**.

**Theorem (Multiplicative Chernoff Bounds for the Upper Tail):** Let $X_1, \dots, X_n$ be independent Poisson trials with corresponding probabilities $p_1, \dots, p_n$, respectively. Also let $X = \sum_{i=1}^{n} X_i$ and $\mu = \mathbb{E}[X] = \sum_{i=1}^{n} p_i$. Then we have the following bounds:

- for $\delta > 0$,

$$\Pr[X \ge (1+\delta)\mu] \le \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}}\right)^{\mu}$$

- for $0 < \delta \le 1$,

$$\Pr[X \ge (1+\delta)\mu] \le e^{-\mu\delta^2/3}$$

- for $R \ge 6\mu$,

$$\Pr[X \ge R] \le 2^{-R}$$

which we call the upper tail multiplicative Chernoff bounds.

On the other hand, for the **lower tail**, we have similar bounds as follows.

**Theorem (Multiplicative Chernoff Bounds for the Lower Tail):** Let $X_1, \dots, X_n$ be independent Poisson trials with corresponding probabilities $p_1, \dots, p_n$, respectively. Also let $X = \sum_{i=1}^{n} X_i$ and $\mu = \mathbb{E}[X] = \sum_{i=1}^{n} p_i$. Then we have the following bounds:

- for $0 < \delta < 1$,

$$\Pr[X \le (1-\delta)\mu] \le \left(\frac{e^{-\delta}}{(1-\delta)^{1-\delta}}\right)^{\mu}$$

- for $0 < \delta < 1$,

$$\Pr[X \le (1-\delta)\mu] \le e^{-\mu\delta^2/2}$$

which we call the lower tail multiplicative Chernoff bounds.

Note that we write **“multiplicative”** since the probabilities we are bounding concern deviations that are some multiplicative factor away from the expectation. Further, for the lower tail, we only have cases for $0 < \delta < 1$ since $X \ge 0$, as it is a sum of Poisson trials.

Further, although the distinctive applications of these bounds may seem innocuous now, these bounds do come into play in upcoming blog posts and in many randomized analyses; one example is the high-probability analysis of the run-time of **Randomized QuickSort**.

## Hoeffding’s Bound

The **Hoeffding bound** is derived from the Chernoff bound, but it is in an **additive form** rather than the multiplicative form of the above bounds. Firstly, let’s dive into the **generalized theorem**.

**Theorem (Hoeffding’s Bound):** Let $X_1, \dots, X_n$ be independent random variables with identical expectation $\mu = \mathbb{E}[X_i]$ and such that for all $i$, we have $\Pr[a_i \le X_i \le b_i] = 1$, i.e. $X_i \in [a_i, b_i]$. Also let $X = \frac{1}{n}\sum_{i=1}^{n} X_i$ and note that $\mathbb{E}[X] = \mu$ for any $n$. Then,

$$\Pr\left[\,|X - \mu| \ge \epsilon\,\right] \le 2\exp\left(-\frac{2n^2\epsilon^2}{\sum_{i=1}^{n}(b_i - a_i)^2}\right)$$

for all $\epsilon > 0$.

Notice the type of probability we wish to compute: it concerns a very **specific additive range away from the expectation** of $X$. Thus we use Hoeffding bounds when we wish to analyze such additive deviations rather than multiplicative ones.

Now let’s introduce the same Hoeffding bound but for a more **specific case**, and this case tends to be very crucial and ubiquitous in itself, so it’s worth presenting here.

**Corollary:** Let $X_1, \dots, X_n$ be independent random variables with identical expectation $\mu$ and such that for all $i$ we have $\Pr[a \le X_i \le b] = 1$. Also let $X = \frac{1}{n}\sum_{i=1}^{n} X_i$. Then,

$$\Pr\left[\,|X - \mu| \ge \epsilon\,\right] \le 2\exp\left(-\frac{2n\epsilon^2}{(b-a)^2}\right)$$

for all $\epsilon > 0$.

As with the Chernoff bounds, the results here may seem quite arbitrary without the provision of examples, but they are key to understand and remember for future posts and of course for all kinds of randomized analyses out there. Both the **Hoeffding and Chernoff bounds are the principal concentration bounds** employed in probabilistic analysis.
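The corollary is easy to check empirically (a standard-library sketch; the uniform distribution, sample size, and deviation are illustrative choices) for the sample mean of bounded variables:

```python
import math
import random

# Check the Hoeffding corollary for i.i.d. bounded variables:
# X_i uniform on [a, b] = [0, 1], so mu = 1/2, and with
# X = (1/n) * sum X_i we have
#   Pr[|X - mu| >= eps] <= 2 * exp(-2 * n * eps**2 / (b - a)**2).
random.seed(0)

n, eps, mu = 50, 0.15, 0.5
hoeffding_bound = 2 * math.exp(-2 * n * eps**2)   # (b - a) = 1

trials = 50_000
count = sum(
    1
    for _ in range(trials)
    if abs(sum(random.random() for _ in range(n)) / n - mu) >= eps
)
empirical = count / trials
assert empirical <= hoeffding_bound
print(f"Pr[|X - mu| >= {eps}] ~= {empirical:.5f}, Hoeffding bound = {hoeffding_bound:.5f}")
```

Notice the additive form: the event is about the sample mean landing a fixed distance $\epsilon$ from $\mu$, not a multiplicative factor away.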

## The Union Bound

Taking a break from the very intense results presented above in the form of the Chernoff and Hoeffding bounds, let’s take a look at a very key and **useful bound** in probability theory. Recall that for any events $A$ and $B$, we have that

$$\Pr[A \cup B] = \Pr[A] + \Pr[B] - \Pr[A \cap B]$$

by basic set theory and the linearity of the probability function over disjoint events. Also recall that the union of events signifies an “or,” and similarly an intersection of events signifies an “and,” which can also be denoted simply by a comma (e.g. $\Pr[A, B] = \Pr[A \cap B]$). From the above expression, it is easy to see that

$$\Pr[A \cup B] \le \Pr[A] + \Pr[B]$$

for any events $A$ and $B$, since probability is always nonnegative. With this knowledge in hand, let’s dive into the bound and its proof.

**Theorem (Union Bound):** Consider the events $A_1, A_2, \dots, A_n$. Then,

$$\Pr\left[\bigcup_{i=1}^{n} A_i\right] \le \sum_{i=1}^{n} \Pr[A_i]$$

with equality when the events are mutually disjoint.

**Proof:** We prove this by induction on $n$; the base case $n = 1$ holds trivially with equality. Our inductive hypothesis assumes that

$$\Pr\left[\bigcup_{i=1}^{n-1} A_i\right] \le \sum_{i=1}^{n-1} \Pr[A_i]$$

and so by substitution we get that

$$\Pr\left[\bigcup_{i=1}^{n} A_i\right] = \Pr\left[\left(\bigcup_{i=1}^{n-1} A_i\right) \cup A_n\right] \le \Pr\left[\bigcup_{i=1}^{n-1} A_i\right] + \Pr[A_n] \le \sum_{i=1}^{n} \Pr[A_i]$$

where the steps of the above derivation follow from the observations stated previously in this section. Thus we have achieved our bound as desired. Similarly, when the events are mutually disjoint, equality follows from the fact that $\Pr[A \cup B] = \Pr[A] + \Pr[B]$ when events $A$ and $B$ are disjoint, since then $\Pr[A \cap B] = 0$.

This bound is **universal**: in almost any situation involving events or random variables that are not well understood, the union bound is extremely beneficial in coming to a **clean answer**. Personally, in my experience, this bound is brought into play almost every time an upper bound on the probability of a union of events is computed, so it’s definitely a **key one to remember**.
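Since the union bound is exact for small finite spaces, we can verify both the inequality and the disjoint equality case directly (a sketch with an illustrative fair-die example, computed with exact fractions rather than simulation):

```python
from fractions import Fraction

# Exact illustration of the union bound on one roll of a fair die.
# A = "roll is even" = {2, 4, 6};  B = "roll is at least 4" = {4, 5, 6}.
outcomes = range(1, 7)

def prob(event):
    # Probability of an event (a set of outcomes) under the uniform die.
    return Fraction(len(event), 6)

A = {x for x in outcomes if x % 2 == 0}
B = {x for x in outcomes if x >= 4}

# Pr[A u B] = 4/6 exactly, while the union bound gives Pr[A] + Pr[B] = 1.
assert prob(A | B) == Fraction(4, 6)
assert prob(A | B) <= prob(A) + prob(B)

# With disjoint events, the union bound holds with equality.
C = {1}                               # disjoint from A
assert prob(A | C) == prob(A) + prob(C)
```

The overlap $A \cap B = \{4, 6\}$ is exactly what the union bound discards, which is why it can only overestimate.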

## Looking Ahead

All the bounds presented above are **key to all kinds of probabilistic analysis**, and especially in the **world of randomized algorithms**. However, the material is definitely quite dry and seemingly arbitrary without examples. In upcoming posts, we will employ many of the results here in our randomized algorithm analyses; in the meantime, the books *Probability and Computing* and *Randomized Algorithms* are definitely great mines for examples of the bounds in use. Again, this post should act as a reference for other posts or for personal archiving.
