## Organic and Inorganic Chemistry Lesson of the Day – Optical Rotation is a Bulk Property

It is important to note that optical rotation is usually discussed as a bulk property, because it is usually measured as a bulk property by a polarimeter.  Any individual chiral molecule rotates linearly polarized light.  However, in a bulk sample of a chiral substance, there is usually another molecule that rotates the light in the opposite direction, because the stereochemistry of a random sample of one compound is usually uniformly distributed.  (In other words, the substance consists of different stereoisomers of one compound, and the proportions of those stereoisomers are roughly equal.)  Because one molecule’s rotation of the light can be cancelled by another molecule’s rotation in the opposite direction, such a random sample of the compound has no net optical rotation.  This cancellation is guaranteed in a racemic mixture.  However, if a substance is enantiomerically pure, then all of the molecules in that substance rotate linearly polarized light in the same direction – this substance is optically active.

## Organic and Inorganic Chemistry Lesson of the Day – The Difference Between (+)/(-) and (R)/(S) in Stereochemical Notation

In a previous Chemistry Lesson of the Day, I introduced the concept of optical rotation (a.k.a. optical activity).  You may also be familiar with the Cahn-Ingold-Prelog priority rules for designating stereogenic centres as either (R) or (S).   There is no direct association between the (+)/(-) designation and the (R)/(S) designation.  In other words, an (R)-enantiomer can be dextrorotary or levorotary – it must be determined on a case-by-case basis.  The same holds true for an (S)-enantiomer.

There is one exception in which (R)/(S) can be used to distinguish between enantiomers: if the molecule has only 1 stereogenic centre, then this designation also serves to distinguish between the 2 enantiomers.

Furthermore, note that the designation of optical rotation applies to a molecule, whereas the (R)/(S) designation applies to a particular stereogenic centre within a molecule.  Thus, a molecule with 2 stereogenic centres may have one (R) stereogenic centre and one (S) stereogenic centre.  However, a chiral compound consisting purely of one enantiomer can rotate linearly polarized light in only one direction, and that direction must be determined on a case-by-case basis by a polarimeter.

## University of Toronto Alumni Reception with Meric Gertler – Tuesday, September 16, 2014 @ Sheraton Vancouver Wall Centre

I will attend the upcoming University of Toronto Alumni Reception in Vancouver to meet the new President of the University of Toronto, Meric Gertler.  If you will attend, please feel free to come up and say “Hello”!

Date: Tuesday, September 16, 2014

Time: 6:30 PM to 8:30 PM

Location:

Sheraton Vancouver Wall Centre
1088 Burrard St.
Vancouver, BC
V6Z 2R9

## Mathematics and Mathematical Statistics Lesson of the Day – Convex Functions and Jensen’s Inequality

Consider a real-valued function $f(x)$ that is continuous on the interval $[x_1, x_2]$, where $x_1$ and $x_2$ are any 2 points in the domain of $f(x)$.  Let

$x_m = 0.5x_1 + 0.5x_2$

be the midpoint of $x_1$ and $x_2$.  If

$f(x_m) \leq 0.5f(x_1) + 0.5f(x_2)$

for all such $x_1$ and $x_2$, then $f(x)$ is defined to be midpoint convex.

More generally, let’s consider any point within the interval $[x_1, x_2]$.  We can denote this arbitrary point as

$x_\lambda = \lambda x_1 + (1 - \lambda)x_2,$ where $0 < \lambda < 1$.

If

$f(x_\lambda) \leq \lambda f(x_1) + (1 - \lambda) f(x_2)$

for all such $x_1$, $x_2$, and $\lambda$, then $f(x)$ is defined to be convex.  If the inequality is strict, i.e.

$f(x_\lambda) < \lambda f(x_1) + (1 - \lambda) f(x_2)$

for all $x_1 \neq x_2$ and $0 < \lambda < 1$, then $f(x)$ is defined to be strictly convex.
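These definitions can be spot-checked numerically.  Below is a minimal Python sketch of my own (not part of the original lesson – the test functions, intervals, and tolerance are arbitrary choices) that samples random values of $\lambda$ and tests the defining inequality:

```python
import math
import random

def is_convex_on_sample(f, x1, x2, trials=1000):
    """Spot-check the convexity inequality
    f(lam*x1 + (1 - lam)*x2) <= lam*f(x1) + (1 - lam)*f(x2)
    for randomly sampled lam in (0, 1).  A True result is only evidence
    of convexity on [x1, x2], not a proof."""
    random.seed(123)
    for _ in range(trials):
        lam = random.uniform(0.0, 1.0)
        x_lam = lam * x1 + (1 - lam) * x2
        # A small tolerance guards against floating-point round-off.
        if f(x_lam) > lam * f(x1) + (1 - lam) * f(x2) + 1e-12:
            return False
    return True

print(is_convex_on_sample(math.exp, -2.0, 3.0))      # True: exp is convex
print(is_convex_on_sample(math.sin, 0.0, math.pi))   # False: sin is concave on [0, pi]
```

Note that a numerical check like this can only refute convexity (by finding a violating $\lambda$); it cannot prove it.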

There is a very elegant and powerful result about convex functions in mathematics and in mathematical statistics called Jensen’s inequality.  It states that, for any random variable $Y$ with a finite expected value and for any convex function $g(y)$,

$E[g(Y)] \geq g[E(Y)]$.

A function $f(x)$ is defined to be concave if $-f(x)$ is convex.  Thus, Jensen’s inequality can also be stated for concave functions.  For any random variable $Z$ with a finite expected value and for any concave function $h(z)$,

$E[h(Z)] \leq h[E(Z)]$.
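As a quick sanity check, here is a short simulation of my own (not part of the original lesson; the exponential distribution is an arbitrary choice) that verifies both directions of the inequality, using the convex function $g(y) = y^2$ and the concave function $h(z) = \sqrt{z}$:

```python
import math
import random
import statistics

random.seed(42)
# Sample from an exponential distribution; the sample mean stands in for E(Y).
y = [random.expovariate(1.0) for _ in range(100_000)]
mean_y = statistics.fmean(y)

# Convex g(y) = y^2: Jensen's inequality says E[g(Y)] >= g(E[Y]).
e_g = statistics.fmean([v * v for v in y])
print(e_g >= mean_y ** 2)           # True

# Concave h(z) = sqrt(z): the inequality flips, E[h(Z)] <= h(E[Z]).
e_h = statistics.fmean([math.sqrt(v) for v in y])
print(e_h <= math.sqrt(mean_y))     # True
```

For the exponential distribution with rate 1, the gap is substantial: $E(Y^2) = 2$ versus $[E(Y)]^2 = 1$, so the simulation confirms the inequality with a wide margin.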

In future Statistics Lessons of the Day, I will prove Jensen’s inequality and discuss some of its implications in mathematical statistics.

## Organic and Inorganic Chemistry Lesson of the Day – DO NOT USE THE PREFIXES (d-) and (l-) TO CLASSIFY ENANTIOMERS

In a recent Chemistry Lesson of the Day, I introduced the concept of optical rotation, and I mentioned the use of (+) and (-) to denote dextrorotary and levorotary compounds, respectively.

Some people use d- and l- instead of (+) and (-), respectively.  I strongly discourage this, because there is an old system of classifying stereogenic centres that uses the prefixes D- and L-, and the obvious similarity between the prefixes of the 2 systems causes much confusion.

This old system classifies stereogenic centres based on the similarities of their configurations to the 2 enantiomers of glyceraldehyde.  It is confusing, non-intuitive, and outdated, so I will not discuss its rationale or details on my blog.  (If you are interested, here is a good explanation from the University of Maine’s chemistry department.)

Also, note that D- and L- classify stereogenic centres, whereas d- and l- classify enantiomers – this just adds more confusion.

In short,

• DO NOT use d- and l- to classify enantiomers; use (+) and (-) instead.
• DO NOT use D- and L- to classify stereogenic centres; use the Cahn-Ingold-Prelog priority rules (R/S) instead.

## Mathematical Statistics Lesson of the Day – The Glivenko-Cantelli Theorem

In 2 earlier tutorials that focused on exploratory data analysis in statistics, I introduced the concept of the empirical cumulative distribution function (the empirical CDF).

There is actually an elegant theorem that provides a rigorous basis for using empirical CDFs to estimate the true CDF – and this is true for any probability distribution.  It is called the Glivenko-Cantelli theorem, and here is what it states:

Given a sequence of independent and identically distributed random variables, $X_1, X_2, X_3, ...$, with common CDF $F_X(x)$, let $\hat{F}_n(x)$ denote the empirical CDF of the first $n$ variables.  Then

$P[\lim_{n \to \infty} \sup_{x \in \mathbb{R}} |\hat{F}_n(x) - F_X(x)| = 0] = 1.$

In other words, the empirical CDF of $X_1, X_2, ..., X_n$ converges uniformly to the true CDF with probability 1 – that is, almost surely.
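Here is a short Python simulation of my own (a sketch, not part of the original lesson; the standard normal distribution and sample sizes are arbitrary choices) that illustrates the convergence: as the sample size grows, the largest gap between the empirical CDF and the true CDF shrinks toward 0.

```python
import math
import random

def normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def sup_distance(sample):
    """Largest gap between the empirical CDF of the sample and the true
    standard normal CDF.  The gap is maximized at the jump points of the
    empirical CDF, so only the sorted sample points need checking."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = normal_cdf(x)
        # The empirical CDF jumps from i/n to (i + 1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - f), abs(i / n - f))
    return d

random.seed(0)
distances = {}
for n in (100, 10_000):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    distances[n] = sup_distance(sample)
    print(n, round(distances[n], 4))
```

The printed sup-distance drops by roughly a factor of 10 when the sample size grows by a factor of 100, consistent with the typical $1/\sqrt{n}$ rate.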

My mathematical statistics professor at the University of Toronto, Keith Knight, told my class that this is often referred to as “The First Theorem of Statistics” or “The Fundamental Theorem of Statistics”.  I think that this is a rather subjective title – the central limit theorem is likely more useful and important – but Page 261 of John Taylor’s An Introduction to Measure and Probability (Springer, 1997) recognizes this attribution to the Glivenko-Cantelli theorem, too.

## Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Chebyshev’s Inequality

In 2 recent Statistics Lessons of the Day, I introduced Markov’s inequality and used it to derive Chebyshev’s inequality.

Chebyshev’s inequality is just a special version of Markov’s inequality; thus, their motivations and intuitions are similar.  Recall its statement: for a random variable $X$ with expected value $\mu$ and standard deviation $\sigma$,

$P[|X - \mu| \geq k \sigma] \leq 1 \div k^2$

Markov’s inequality roughly says that a random variable $X$ is most frequently observed near its expected value, $\mu$.  Remarkably, it quantifies just how often $X$ is far away from $\mu$.  Chebyshev’s inequality goes one step further and quantifies that distance between $X$ and $\mu$ in terms of the number of standard deviations away from $\mu$.  It roughly says that the probability of $X$ being $k$ standard deviations away from $\mu$ is at most $k^{-2}$.  Notice that this upper bound decreases as $k$ increases – confirming our intuition that it is highly improbable for $X$ to be far away from $\mu$.

As with Markov’s inequality, Chebyshev’s inequality applies to any random variable $X$, as long as $E(X)$ and $V(X)$ are finite.  (Markov’s inequality requires only $E(X)$ to be finite.)  This is quite a marvelous result!
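To see just how general the bound is, here is a short simulation of my own (not part of the original lesson; the exponential distribution is an arbitrary choice of an asymmetric, non-normal distribution) that compares the empirical tail probability with Chebyshev’s bound for several values of $k$:

```python
import random
import statistics

random.seed(1)
# Chebyshev's inequality holds for ANY distribution with a finite mean and
# variance; try an asymmetric one, the exponential distribution with rate 1.
x = [random.expovariate(1.0) for _ in range(200_000)]
mu = statistics.fmean(x)
sigma = statistics.pstdev(x)

results = []
for k in (2, 3, 4):
    # Empirical estimate of P[|X - mu| >= k * sigma].
    tail = sum(abs(v - mu) >= k * sigma for v in x) / len(x)
    bound = 1 / k ** 2
    results.append((k, tail, bound))
    print(f"k={k}: P[|X - mu| >= {k} sigma] ~ {tail:.4f} <= {bound:.4f}")
```

The empirical tail probabilities sit well below the bounds, which is typical: Chebyshev’s inequality trades tightness for universality.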

## Organic and Inorganic Chemistry Lesson of the Day – Optical Rotation (a.k.a. Optical Activity)

A substance consisting of a chiral compound can rotate linearly polarized light – this property is known as optical rotation (more commonly called optical activity).  The direction in which the light is rotated is one way to distinguish between a pair of enantiomers, as they rotate linearly polarized light in opposite directions.

Imagine that you are an enantiomer and that linearly polarized light approaches you.

• If the light is rotated clockwise from your perspective, then you are a dextrorotary enantiomer.
• Otherwise, if the light is rotated counterclockwise from your perspective, then you are a levorotary enantiomer.

In a previous Chemistry Lesson of the Day, I introduced the concept of diastereomers, and I used threose as an example.  Let’s use threose to illustrate some notation about optical activity.

(-)-Threose

• Levorotary compounds are denoted by the prefix (-), followed by a hyphen, then followed by the name of the compound.  The above molecule is (-)-threose.
• Dextrorotary compounds are denoted by the prefix (+), followed by a hyphen, then followed by the name of the compound.  The enantiomer of (-)-threose is (+)-threose.

A compound’s optical rotation is determined by a polarimeter.

I strongly discourage the use of the prefixes d- and l- to distinguish between enantiomers.  Use (+) and (-) instead.

Beware of the difference between designating enantiomers as (+) or (-) and designating stereogenic centres as either (R) or (S).

It is important to note that optical rotation is usually referred to as a bulk property.

## Mathematical Statistics Lesson of the Day – Chebyshev’s Inequality

The variance of a random variable $X$ is just an expected value of a function of $X$.  Specifically,

$V(X) = E[(X - \mu)^2], \ \text{where} \ \mu = E(X)$.

Let’s substitute $(X - \mu)^2$ into Markov’s inequality and see what happens.  For convenience and without loss of generality, I will replace the constant $c$ with another constant, $b^2$.

$\text{Let} \ b^2 = c, \ b > 0. \ \ \text{Then,}$

$P[(X - \mu)^2 \geq b^2] \leq E[(X - \mu)^2] \div b^2$

$P[ (X - \mu) \leq -b \ \ \text{or} \ \ (X - \mu) \geq b] \leq V(X) \div b^2$

$P[|X - \mu| \geq b] \leq V(X) \div b^2$

Now, let’s substitute $b$ with $k \sigma$, where $\sigma$ is the standard deviation of $X$.  (I can make this substitution, because $\sigma$ is just another constant.)

$\text{Let} \ k \sigma = b. \ \ \text{Then,}$

$P[|X - \mu| \geq k \sigma] \leq V(X) \div k^2 \sigma^2$

$P[|X - \mu| \geq k \sigma] \leq 1 \div k^2$

This last inequality is known as Chebyshev’s inequality, and it is just a special version of Markov’s inequality.  In a later Statistics Lesson of the Day, I will discuss the motivation and intuition behind it.  (Hint: Read my earlier lesson on the motivation and intuition behind Markov’s inequality.)
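The key substitution above – applying Markov’s inequality to the non-negative random variable $(X - \mu)^2$ – can be illustrated numerically.  Here is a Python sketch of my own (not part of the original lesson; the normal distribution and the value of $b$ are arbitrary choices):

```python
import random
import statistics

random.seed(7)
x = [random.gauss(5.0, 2.0) for _ in range(100_000)]
mu = statistics.fmean(x)

# Markov's inequality applied to the non-negative variable (X - mu)^2:
# P[(X - mu)^2 >= b^2] <= E[(X - mu)^2] / b^2
b = 3.0
sq = [(v - mu) ** 2 for v in x]
lhs = sum(s >= b ** 2 for s in sq) / len(sq)
rhs = statistics.fmean(sq) / b ** 2

# The same event, written as |X - mu| >= b, gives Chebyshev's form.
lhs_cheb = sum(abs(v - mu) >= b for v in x) / len(x)

print(round(lhs, 4), round(lhs_cheb, 4), round(rhs, 4))
```

Both forms of the left-hand side estimate the same probability, and both sit below the bound $E[(X - \mu)^2] \div b^2$, as the derivation requires.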

## Getting Ready for Mathematical Classes in the New Semester – Guest-Blogging on SFU’s Career Services Informer

The following blog post was slightly condensed for editorial brevity and then published on the Career Services Informer, the official blog of the Career Services Centre at my undergraduate alma mater, Simon Fraser University.

As a new Fall semester begins, many students start courses such as math, physics, computing science, engineering and statistics.  These can be tough classes with a rapid progression in workload and difficulty, but steady preparation can mount a strong defense to the inevitable pressure and stress.  Here are some tips to help you to get ready for those classes.