## Organic and Inorganic Chemistry Lesson of the Day – Cis/Trans Isomers Are Diastereomers

Recall that the definition of diastereomers is simply 2 stereoisomers that are NOT enantiomers.  Diastereomers often have at least 2 stereogenic centres, and my previous lesson showed an example of how such diastereomers can arise.

However, while an enantiomer must have at least 1 stereogenic centre, there is nothing in the definition of a diastereomer that requires it to have any stereogenic centres.  In fact, a diastereomer does not have to be chiral.  A pair of cis/trans isomers are also diastereomers.  Recall the example of trans-1,2-dibromoethylene and cis-1,2-dibromoethylene:

Image courtesy of Roland1952 on Wikimedia.

These 2 molecules are stereoisomers – they have the same atoms and sequence/connectivity of bonds, but they differ in their spatial orientations.  They are NOT mirror images of each other, let alone non-superimposable mirror images.  Thus, by definition, they are diastereomers, even though they are not chiral.

## Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Markov’s Inequality

Markov’s inequality may seem like a rather arbitrary pair of mathematical expressions that are coincidentally related to each other by an inequality sign:

$P(X \geq c) \leq E(X) \div c,$ where $X$ is a non-negative random variable and $c > 0$.

However, there is a practical motivation behind Markov’s inequality, and it can be posed in the form of a simple question: How often is the random variable $X$ “far” away from its “centre” or “central value”?

Intuitively, the “central value” of $X$ is the value of $X$ that is most commonly (or most frequently) observed.  Thus, as $X$ deviates farther and farther from its “central value”, we would expect those distant-from-the-centre values to be less frequently observed.

Recall that the expected value, $E(X)$, is a measure of the “centre” of $X$.  Thus, we would expect that the probability of $X$ being very far away from $E(X)$ is very low.  Indeed, Markov’s inequality rigorously confirms this intuition; here is its rough translation:

As $c$ moves farther and farther above $E(X)$, the event $X \geq c$ becomes less and less probable.

You can confirm this by substituting several key values of $c$.

• If $c = E(X)$, then $P[X \geq E(X)] \leq 1$; this is the highest upper bound that $P(X \geq c)$ can get.  This makes intuitive sense; $X$ is going to be frequently observed near its own expected value.

• If $c \rightarrow \infty$, then $P(X \geq \infty) \leq 0$.  By Kolmogorov’s axioms of probability, any probability must be inclusively between $0$ and $1$, so $P(X \geq \infty) = 0$.  This makes intuitive sense; there is no possible way that $X$ can be bigger than positive infinity.
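The intuition above can also be checked empirically.  Here is a minimal simulation sketch; the choice of $X \sim \text{Exponential}(1)$, so that $E(X) = 1$, is just an illustrative assumption and not part of the lesson.

```python
import random

# Empirical check of Markov's inequality, P(X >= c) <= E(X)/c, for a
# non-negative random variable.  X ~ Exponential(1) is an illustrative choice.
random.seed(0)
n = 100_000
samples = [random.expovariate(1.0) for _ in range(n)]
mean_x = sum(samples) / n  # estimate of E(X), which is 1 here

results = []
for c in (0.5, 1.0, 2.0, 5.0):
    tail_prob = sum(x >= c for x in samples) / n  # estimate of P(X >= c)
    bound = mean_x / c                            # Markov's upper bound
    results.append((c, tail_prob, bound))
    print(f"c = {c}: P(X >= c) ~ {tail_prob:.4f}  <=  E(X)/c = {bound:.4f}")
```

Notice how the bound is loose for small $c$ (it even exceeds 1) but shrinks toward 0 as $c$ grows, exactly as the substitution argument above suggests.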

## Organic and Inorganic Chemistry Lesson of the Day – Meso Isomers

A molecule is a meso isomer if it contains stereogenic centres yet is achiral – that is, it is superimposable on its own mirror image.

Meso isomers have an internal plane of symmetry, which arises from 2 identically substituted but oppositely oriented stereogenic centres.  (By “oppositely oriented”, I mean the stereochemical orientation as defined by the Cahn-Ingold-Prelog priority system.  For example, in a meso isomer with 2 tetrahedral stereogenic centres, one stereogenic centre needs to be “R”, and the other stereogenic centre needs to be “S”.)  This symmetry results in the superimposability of a meso isomer’s mirror image.

By definition, a meso isomer and either enantiomer of the same compound are a pair of diastereomers.

Having at least 2 stereogenic centres is a necessary but not sufficient condition for a molecule to have meso isomers.  Recall that a molecule with $n$ tetrahedral stereogenic centres has at most $2^n$ stereoisomers; such a molecule would have fewer than $2^n$ stereoisomers if it has meso isomers.

Meso isomers are also called meso compounds.

Here is an example of a meso isomer; notice the internal plane of symmetry – the horizontal line that divides the 2 stereogenic carbons:

(2R,3S)-tartaric acid

Image courtesy of Project Osprey from Wikimedia (with a slight modification).

## The Chi-Squared Test of Independence – An Example in Both R and SAS

#### Introduction

The chi-squared test of independence is one of the most basic and common hypothesis tests in the statistical analysis of categorical data.  Given 2 categorical random variables, $X$ and $Y$, the chi-squared test of independence determines whether or not there exists a statistical dependence between them.  Formally, it is a hypothesis test with the following null and alternative hypotheses:

$H_0: X \perp Y \ \ \ \ \ \text{vs.} \ \ \ \ \ H_a: X \not \perp Y$

If you’re not familiar with probabilistic independence and how it manifests in categorical random variables, watch my video on calculating expected counts in contingency tables using joint and marginal probabilities.  For your convenience, here is another video that gives a gentler and more practical understanding of calculating expected counts using marginal proportions and marginal totals.

Today, I will continue from those 2 videos and illustrate how the chi-squared test of independence can be implemented in both R and SAS with the same example.
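The post implements the test in R and SAS, but the arithmetic behind the test statistic can be sketched in a few lines of plain Python.  The 2×2 table below is hypothetical illustrative data, not the example from the videos.

```python
# Hypothetical 2x2 contingency table of observed counts.
observed = [[20, 30],
            [25, 25]]

row_totals = [sum(row) for row in observed]            # [50, 50]
col_totals = [sum(col) for col in zip(*observed)]      # [45, 55]
grand_total = sum(row_totals)                          # 100

# Expected count for each cell under H0 (independence):
# (row total) * (column total) / (grand total)
expected = [[r * c / grand_total for c in col_totals] for r in row_totals]

# Pearson chi-squared statistic: sum of (O - E)^2 / E over all cells
chi2 = sum((o - e) ** 2 / e
           for o_row, e_row in zip(observed, expected)
           for o, e in zip(o_row, e_row))
print(f"chi-squared statistic = {chi2:.4f} on {(2 - 1) * (2 - 1)} degree of freedom")
```

The statistic would then be compared against the chi-squared distribution with $(r - 1)(c - 1)$ degrees of freedom, which is what the R and SAS procedures do for you.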

## Mathematical Statistics Lesson of the Day – Markov’s Inequality

Markov’s inequality is an elegant and very useful inequality that relates the probability of an event concerning a non-negative random variable, $X$, with the expected value of $X$.  It states that

$P(X \geq c) \leq E(X) \div c,$

where $c > 0$.

I find Markov’s inequality to be beautiful for 2 reasons:

1. It applies to both continuous and discrete random variables.
2. It applies to any non-negative random variable from any distribution with a finite expected value.
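To illustrate the first point, here is an exact (no simulation) check of the inequality for a discrete non-negative random variable; the fair six-sided die, with $E(X) = 7/2$, is just an illustrative example.

```python
from fractions import Fraction

# X = outcome of a fair six-sided die, a discrete non-negative random variable.
pmf = {k: Fraction(1, 6) for k in range(1, 7)}
e_x = sum(k * p for k, p in pmf.items())  # E(X) = 7/2

checks = []
for c in range(1, 7):
    tail = sum(p for k, p in pmf.items() if k >= c)  # P(X >= c), exact
    bound = e_x / c                                   # Markov's bound E(X)/c
    checks.append(tail <= bound)
    print(f"c = {c}: P(X >= c) = {tail} <= {bound}")
```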

In a later lesson, I will discuss the motivation and intuition behind Markov’s inequality, which has useful implications for understanding a data set.

## Organic and Inorganic Chemistry Lesson of the Day – Racemic Mixtures

A racemic mixture is a mixture that contains equal amounts of both enantiomers of a chiral molecule.  (By amount, I mean the usual unit of quantity in chemistry – the mole.  Of course, since enantiomers are isomers, their molar masses are equal, so a racemic mixture would contain equal masses of both enantiomers, too.)

In synthesizing enantiomers, if a set of reactants combine to form a racemic mixture, then the reactants are called non-stereoselective or non-stereospecific.

In 1895, Otto Wallach proposed that a racemic crystal is more dense than a crystal consisting purely of one of the enantiomers; this is known as Wallach’s rule.  Brock et al. (1991) substantiated this rule with crystallographic data.

Reference:

Brock, C. P., Schweizer, W. B., & Dunitz, J. D. (1991). On the validity of Wallach’s rule: on the density and stability of racemic crystals compared with their chiral counterparts. Journal of the American Chemical Society, 113(26), 9811-9820.

## Applied Statistics Lesson of the Day – The Coefficient of Variation

In my statistics classes, I learned to use the variance or the standard deviation to measure the variability or dispersion of a data set.  However, consider the following 2 hypothetical cases:

1. the standard deviation for the incomes of households in Canada is $2,000
2. the standard deviation for the incomes of the 5 major banks in Canada is $2,000

Even though this measure of dispersion has the same value for both sets of income data, $2,000 is a significant amount for a household, whereas $2,000 is not a lot of money for one of the “Big Five” banks.  Thus, the standard deviation alone does not give a fully accurate sense of the relative variability between the 2 data sets.  One way to overcome this limitation is to take the mean of the data sets into account.

A useful statistic for measuring the variability of a data set while scaling by the mean is the sample coefficient of variation:

$\text{Sample Coefficient of Variation (} \bar{c_v} \text{)} \ = \ s \ \div \ \bar{x},$

where $s$ is the sample standard deviation and $\bar{x}$ is the sample mean.

Analogously, the coefficient of variation for a random variable is

$\text{Coefficient of Variation} \ (c_v) \ = \ \sigma \div \ \mu,$

where $\sigma$ is the random variable’s standard deviation and $\mu$ is the random variable’s expected value.
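Here is a minimal sketch of the sample version of this statistic in Python; the income values below are made up for illustration and are not the households/banks example.

```python
import statistics

# Hypothetical sample of household incomes (illustrative values only).
incomes = [40_000, 52_000, 48_000, 61_000, 45_000]

s = statistics.stdev(incomes)      # sample standard deviation (n - 1 divisor)
x_bar = statistics.mean(incomes)   # sample mean
cv = s / x_bar                     # sample coefficient of variation
print(f"sample coefficient of variation = {cv:.4f}")
```

Because the standard deviation is divided by the mean, the coefficient of variation is unitless, which is what makes it suitable for comparing dispersion across data sets on very different scales.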

The coefficient of variation is a very useful statistic that I, unfortunately, never learned in my introductory statistics classes.  I hope that all new statistics students get to learn this alternative measure of dispersion.

## Using Your Vacation to Develop Your Career – Guest Blogging on Simon Fraser University’s Career Services Informer

The following post was originally published on the Career Services Informer.

I recently took a vacation from my former role as a statistician at the BC Centre for Excellence in HIV/AIDS. I did not plan a trip out of town – the spring weather was beautiful in Vancouver, and I wanted to spend time on the things that I like to do in this city. Many obvious things came to mind – walking along beaches, practicing Python programming and catching up with friends – just to name a few.

Yes, Python programming was one of the obvious things on my vacation to-do list, and I understand how ridiculous this may seem to some people. Why tax my brain during a time that is meant for mental relaxation, especially when the weather is great?

## Machine Learning and Applied Statistics Lesson of the Day – Positive Predictive Value and Negative Predictive Value

For a binary classifier,

• its positive predictive value (PPV) is the proportion of positively classified cases that were truly positive.

$\text{PPV} = \text{(Number of True Positives)} \ \div \ \text{(Number of True Positives} \ + \ \text{Number of False Positives)}$

• its negative predictive value (NPV) is the proportion of negatively classified cases that were truly negative.

$\text{NPV} = \text{(Number of True Negatives)} \ \div \ \text{(Number of True Negatives} \ + \ \text{Number of False Negatives)}$
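The two formulas above reduce to simple ratios of confusion-matrix counts.  Here is a short sketch; the counts are hypothetical, chosen only to make the arithmetic concrete.

```python
# Hypothetical confusion-matrix counts for a binary classifier.
tp, fp = 80, 20    # true positives, false positives
tn, fn = 90, 10    # true negatives, false negatives

ppv = tp / (tp + fp)   # proportion of positive calls that are truly positive
npv = tn / (tn + fn)   # proportion of negative calls that are truly negative
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.80, NPV = 0.90
```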

In a later Statistics and Machine Learning Lesson of the Day, I will discuss the differences between PPV/NPV and sensitivity/specificity in assessing the predictive accuracy of a binary classifier.

(Recall that sensitivity and specificity can also be used to evaluate the performance of a binary classifier.  Based on those 2 statistics, we can construct receiver operating characteristic (ROC) curves to assess the predictive accuracy of the classifier, and a minimum standard for a good ROC curve is being better than the line of no discrimination.)

## Organic and Inorganic Chemistry Lesson of the Day – Diastereomers

I previously introduced the concept of chirality and how it is a property of any molecule with only 1 stereogenic centre.  (A molecule with $n$ stereogenic centres may or may not be chiral, depending on its stereochemistry.)  I also defined 2 stereoisomers as enantiomers if they are non-superimposable mirror images of each other.  (Recall that chirality in inorganic chemistry can arise in 2 different ways.)

It is possible for 2 stereoisomers to NOT be enantiomers; in fact, such stereoisomers are called diastereomers.  Yes, I recognize that defining something as the negation of something else is unusual.  If you have learned set theory or probability (as I did in my mathematical statistics classes) then consider the set of all pairs of the stereoisomers of one compound – this is the sample space.  The enantiomers form a set within this sample space, and the diastereomers are the complement of the enantiomers.

It is important to note that, while diastereomers are not mirror images of each other, they are still non-superimposable.  Diastereomers often (but not always) arise from stereoisomers with 2 or more stereogenic centres; here is an example of how they can arise.  (A pair of cis/trans-isomers are also diastereomers, despite not having any stereogenic centres.)

1) Consider a compound with 2 tetrahedral stereogenic centres and no meso isomers*.  This compound has $2^n = 4$ stereoisomers, where $n = 2$ denotes the number of stereogenic centres.

2) Find one pair of enantiomers based on one of the stereogenic centres.

3) Find the other pair of enantiomers based on the other stereogenic centre.

4) Take any one molecule from Step #2 and any one molecule from Step #3.  These cannot be mirror images of each other.  (One molecule cannot have 2 different mirror images of itself.)  These 2 molecules are diastereomers.

Think back to my above description of enantiomers as a proper subset within the sample space of the pairs of one set of stereoisomers.  You can now see why I emphasized that the sample space consists of pairs, since multiple different pairs of stereoisomers can form enantiomers.  In my example above, Steps #2 and #3 produced 2 subsets of enantiomers.  It should be clear by now that enantiomers and diastereomers are defined as pairs.  To further illustrate this point,

a) call the 2 molecules in Step #2 A and B.

b) call the 2 molecules in Step #3 C and D.

A and B are enantiomers.  A and C are diastereomers.  Thus, it is entirely possible for one molecule to be an enantiomer with a second molecule and a diastereomer with a third molecule.
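The pairing logic above can be sketched with a small enumeration in Python.  The only assumptions are the R/S labels at each centre and the rule that a mirror image inverts the configuration at every centre; consistent with Step #1, this toy model ignores meso isomers.

```python
from itertools import product

# Each stereoisomer of a compound with 2 tetrahedral stereogenic centres
# (and no meso isomer) is labelled by its (R/S) configuration at each centre.
stereoisomers = [''.join(cfg) for cfg in product('RS', repeat=2)]
# 2^n = 4 stereoisomers for n = 2: ['RR', 'RS', 'SR', 'SS']

def mirror(isomer):
    """The mirror image inverts the configuration at every stereogenic centre."""
    return ''.join('S' if c == 'R' else 'R' for c in isomer)

enantiomer_pairs = []
diastereomer_pairs = []
for i, a in enumerate(stereoisomers):
    for b in stereoisomers[i + 1:]:
        (enantiomer_pairs if mirror(a) == b else diastereomer_pairs).append((a, b))

print("enantiomer pairs:  ", enantiomer_pairs)    # 2 pairs
print("diastereomer pairs:", diastereomer_pairs)  # the other 4 pairs
```

The enumeration shows exactly the situation in the text: each stereoisomer has exactly one enantiomer, and every other stereoisomer pairing is a pair of diastereomers.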

Here is an example of 2 diastereomers.  Notice that they have the same chemical formula but different 3-dimensional orientations – i.e. they are stereoisomers.  These stereoisomers are not mirror images of each other, but they are non-superimposable – i.e. they are diastereomers.

(-)-Threose

(-)-Erythrose

Images courtesy of Popnose, DMacks and Edgar181 on Wikimedia.  For brevity, I direct you to the Wikipedia entry for diastereomers showing these 4 images in one panel.

In a later Chemistry Lesson of the Day on optical rotation (a.k.a. optical activity), I will explain what the (-) symbol means in the names of those 2 diastereomers.

*I will discuss meso isomers in a separate lesson.

## New Job as Biostatistical Analyst at the British Columbia Cancer Agency

Dear Readers and Followers of The Chemical Statistician:

My apologies for the slower than usual posting frequency in the last few months, but I have been very busy preparing for a big transition – after a long and intense selection process that started in March, I was offered a new job as a biostatistical analyst at the British Columbia Cancer Agency (BCCA)!

I was sad to leave many of the kind and friendly co-workers whom I met at the British Columbia Centre for Excellence in HIV/AIDS during my 10 months of working there, but I was very excited to accept this offer and begin working for the BCCA – specifically, in the Cancer Surveillance and Outcomes (CSO) Unit.  I had already met several of my new co-workers from past meetings in the Vancouver SAS User Group, and I also know 2 people who worked for long periods in this same group in the past.  From all of these interactions, I got a very positive impression about the professionalism, expertise, and collegiality of this new group, so I was delighted to join this team.

I started my new job 3 weeks ago, and was plunged into 3 projects immediately.  I have been swamped with work right from the start, so I’m still adjusting to my new schedule and surroundings.  Nonetheless, I hope to resume blogging at my usual pace as I settle into my new job.  (I just posted a new video on calculating expected counts in contingency tables using joint and marginal probabilities.)  I also hope to use my work as inspiration for blogging topics here at The Chemical Statistician.

Thank you all for your patience and continued readership.  It has been a pleasure to learn from you, and I hope to continue a successful expansion of The Chemical Statistician for the rest of 2014 and beyond!

Eric

## Video Tutorial – Calculating Expected Counts in a Contingency Table Using Joint Probabilities

In an earlier video, I showed how to calculate expected counts in a contingency table using marginal proportions and totals.  (Recall that expected counts are needed to conduct hypothesis tests of independence between categorical random variables.)  Today, I want to share a second video of calculating expected counts – this time, using joint probabilities.  This method uses the definition of independence between 2 random variables to form estimators of the joint probabilities for each cell in the contingency table.  Once the joint probabilities are estimated, the expected counts are simply the joint probabilities multiplied by the grand total of the entire sample.  This method gives a more direct and deeper connection between the null hypothesis of a test of independence and the calculation of expected counts.
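The joint-probability method can be sketched in a few lines of Python.  The 2×3 table below is hypothetical illustrative data, not the example from the video.

```python
# Hypothetical 2x3 contingency table of observed counts.
observed = [[10, 20, 30],
            [20, 30, 40]]
n = sum(sum(row) for row in observed)  # grand total = 150

row_margins = [sum(row) / n for row in observed]        # marginal P(X = i) estimates
col_margins = [sum(col) / n for col in zip(*observed)]  # marginal P(Y = j) estimates

# Under H0 (independence), the joint probability of cell (i, j) is estimated
# by the product of the marginal probabilities; the expected count is that
# joint probability multiplied by the grand total.
expected = [[p_row * p_col * n for p_col in col_margins] for p_row in row_margins]

for row in expected:
    print([round(e, 2) for e in row])
```

Note that multiplying the two marginal proportions by $n$ algebraically reduces to the familiar (row total × column total) ÷ (grand total) formula from the first video, which is why the two methods agree up to round-off error.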

I encourage you to watch both of my videos on expected counts in my YouTube channel to gain a deeper understanding of how and why they can be calculated.  Please note that the expected counts are slightly different in the 2 videos due to round-off error; if you want to be convinced about this, I encourage you to do the calculations in the 2 different orders as I presented in the 2 videos – you will eventually see where the differences arise.