SFU Statistics and Actuarial Science Gala – Wednesday, September 16, 2015

I look forward to attending the #SFU50 Gala at the Department of Statistics and Actuarial Science at Simon Fraser University on Wednesday, September 16, 2015.  There will be a poster presentation of undergraduate case studies, a short awards ceremony, and many opportunities to network with current and former students, professors and staff from that department.  If you plan to attend this event, please come and say “Hello”!


Time: 5:00 – 7:30 pm

Date: Wednesday, September 16, 2015

Place: Applied Sciences Building Atrium, Simon Fraser University, Burnaby, British Columbia, Canada

Odds and Probability: Commonly Misused Terms in Statistics – An Illustrative Example in Baseball

Yesterday, all 15 home teams in Major League Baseball won on the same day – the first such occurrence in history.  CTV News published an article written by Mike Fitzpatrick from The Associated Press that reported on this event.  The article states, “Viewing every game as a 50-50 proposition independent of all others, STATS figured the odds of a home sweep on a night with a full major league schedule was 1 in 32,768.”  (Emphases added)

[Screenshot of the CTV News article on the odds of all 15 home teams winning on the same day, captured at 5:35 pm Vancouver time on Wednesday, August 12, 2015.]

Out of curiosity, I wanted to reproduce this result.  This event is the intersection of 15 independent events; each game can be modeled as a Bernoulli random variable with probability 0.5 of the home team winning.

P[(\text{Winner}_1 = \text{Home Team}_1) \cap (\text{Winner}_2 = \text{Home Team}_2) \cap \ldots \cap (\text{Winner}_{15}= \text{Home Team}_{15})]

Since all 15 games are assumed to be mutually independent, the probability of all 15 home teams winning is just

P(\text{All 15 Home Teams Win}) = \prod_{i = 1}^{15} P(\text{Winner}_i = \text{Home Team}_i)

P(\text{All 15 Home Teams Win}) = 0.5^{15} \approx 0.00003051757

Now, let’s connect this probability to odds.

It is important to note that

  • odds apply only to Bernoulli random variables (i.e. binary events)
  • odds are the ratio of the probability of success to the probability of failure

For our example,

\text{Odds}(\text{All 15 Home Teams Win}) = P(\text{All 15 Home Teams Win}) \ \div \ P(\text{At least 1 Home Team Loses})

\text{Odds}(\text{All 15 Home Teams Win}) = 0.00003051757 \div (1 - 0.00003051757)

\text{Odds}(\text{All 15 Home Teams Win}) \approx 0.0000305185

The above article states that the odds are 1 in 32,768.  The fraction 1/32768 equals 0.00003051757 (rounded), which is NOT the odds as I just calculated; rather, it is the probability of all 15 home teams winning.  Stated correctly, the odds would be 1 to 32,767.  Thus, the article reports the probability while calling it the odds.
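Here is a quick sketch in base R to verify the arithmetic above (the variable names are my own):

p <- 0.5^15          # probability that all 15 home teams win
odds <- p / (1 - p)  # odds = P(success) / P(failure)
p                    # 3.051758e-05, i.e. 1/32768
odds                 # 3.051851e-05, i.e. 1/32767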

This is an example of a common confusion between probability and odds that the media and the general public often make.  Probability and odds are two different concepts and are calculated differently, and my calculations above illustrate their differences.  Thus, exercise caution when reading statements about probability and odds, and make sure that the communicator of such statements knows exactly how they are calculated and which one is more applicable.

Analytical Chemistry Lesson of the Day – Linearity in Method Validation

In analytical chemistry, the quantity of interest is often estimated from a calibration line.  A technique or instrument generates the analytical response for the quantity of interest, so a calibration line is constructed by generating multiple responses from multiple standard samples of known quantities.  Linearity refers to how well a plot of the analytical response versus the quantity of interest follows a straight line.  If this relationship holds, then an analytical response can be generated from a sample containing an unknown quantity, and the calibration line can be used to estimate the unknown quantity with a confidence interval.
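As a minimal sketch in R – with made-up standards data, so all of the numbers are assumptions for illustration only – a calibration line can be fitted by least squares and then inverted to give a point estimate of an unknown quantity (a proper inverse-prediction confidence interval requires more care than this):

quantity <- c(1, 2, 5, 10, 20)             # known quantities in the standard samples
response <- c(2.1, 4.0, 10.3, 19.8, 40.5)  # measured analytical responses (made up)

fit <- lm(response ~ quantity)             # fit the calibration line
summary(fit)$r.squared                     # a quick numerical check of linearity

y0 <- 15.2                                 # response from a sample of unknown quantity
x0 <- (y0 - coef(fit)[1]) / coef(fit)[2]   # invert the line to estimate that quantity
x0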

Note that this concept of “linear” is different from the “linear” in “linear regression” in statistics.

This is the second blog post in a series of Analytical Chemistry Lessons of the Day on method validation.  Read the previous post on specificity, and stay tuned for future posts!

Analytical Chemistry Lesson of the Day – Specificity in Method Validation and Quality Assurance

In pharmaceutical chemistry, one of the requirements for method validation is specificity, the ability of an analytical method to distinguish the analyte from other chemicals in the sample.  The specificity of the method may be assessed by deliberately adding impurities into a sample containing the analyte and testing how well the method can identify the analyte.

Statistics is an important tool in analytical chemistry, so, ideally, the two fields would not use the same vocabulary for different concepts.  Unfortunately, the above definition of specificity is different from the one in statistics.  In a previous Machine Learning Lesson and Applied Statistics Lesson of the Day, I introduced the concepts of sensitivity and specificity in binary classification.  In the context of assessing the predictive accuracy of a binary classifier, its specificity is the proportion of truly negative cases that it correctly classifies as negative.
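To make the statistical definition concrete, here is a small sketch in R with an assumed confusion matrix (the counts are invented for illustration):

tn <- 85; fp <- 15  # truly negative cases: classified negative vs. positive
fn <- 10; tp <- 90  # truly positive cases: classified negative vs. positive

specificity <- tn / (tn + fp)  # proportion of actual negatives classified negative
sensitivity <- tp / (tp + fn)  # proportion of actual positives classified positive
specificity  # 0.85
sensitivity  # 0.9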

Mathematical Statistics Lesson of the Day – An Example of An Ancillary Statistic

Consider 2 independent random variables, X_1 and X_2, from the normal distribution \text{Normal}(\mu, \sigma^2), where \mu is unknown.  Then the statistic

D = X_1 - X_2

has the distribution

\text{Normal}(0, 2\sigma^2).

The distribution of D does not depend on \mu, so D is an ancillary statistic for \mu.

Note that, if \sigma^2 is unknown, then D is not ancillary for \sigma^2.
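A quick simulation in R – my own sketch, not part of the original lesson – illustrates that the distribution of D is unaffected by \mu:

set.seed(1)
sigma <- 3
d_mu0   <- rnorm(1e5, mean = 0, sd = sigma)   - rnorm(1e5, mean = 0, sd = sigma)
d_mu100 <- rnorm(1e5, mean = 100, sd = sigma) - rnorm(1e5, mean = 100, sd = sigma)

c(mean(d_mu0), mean(d_mu100))  # both near 0, regardless of mu
c(var(d_mu0), var(d_mu100))    # both near 2 * sigma^2 = 18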

Data Science Seminar by David Campbell on Approximate Bayesian Computation and the Earthworm Invasion in Canada

My colleague, David Campbell, will be the featured speaker at the next Vancouver Data Science Meetup on Thursday, June 25.  (This is a jointly organized event with the Vancouver Machine Learning Meetup and the Vancouver R Users Meetup.)  He will present his research on approximate Bayesian computation and Markov Chain Monte Carlo, and he will highlight how he has used these tools to study the invasion of European earthworms in Canada, especially their drastic effects on the boreal forests in Alberta.

Dave is a statistics professor at Simon Fraser University, and I have found him to be very smart and articulate in my communications with him.  This seminar promises to be both entertaining and educational.  If you plan to attend, then I look forward to seeing you there!  Check out Dave on Twitter and LinkedIn.

Title: The great Canadian worm invasion (from an approximate Bayesian computation perspective)

Speaker: David Campbell

Date: Thursday, June 25


Place: HootSuite (Headquarters)

5 East 8th Avenue

Vancouver, BC


• 6:00 pm: Doors are open – feel free to mingle!
• 6:30 pm: Presentation begins.
• ~7:45 pm: Off to a nearby restaurant for food, drinks, and breakout discussions.


After being brought in by pioneers for agricultural reasons, European earthworms have been taking North America by storm and are starting to change the Alberta Boreal forests. This talk uses an invasive species model to introduce the basic ideas behind estimating the rate of new worm introductions and how quickly they spread, with the goal of predicting the future extent of the great Canadian worm invasion. To take on the earthworm invaders, we turn to Approximate Bayesian Computation methods. Bayesian statistics are used to gather and update knowledge as new information becomes available, owing to their success in predicting and estimating ongoing and evolving processes. Approximate Bayesian Computation is a step in the right direction when it’s just not possible to actually do the right thing – in this case, using the exact invasive species model is infeasible. These tools will be used within a Markov Chain Monte Carlo framework.

About Dave Campbell:

Dave Campbell is an Associate Professor in the Department of Statistics and Actuarial Science at Simon Fraser University and Director of the Management and Systems Science Program. Dave’s main research area is at the intersections of statistics with computer science, applied math, and numerical analysis. Dave has published papers on Bayesian algorithms, adaptive time-frequency estimation, and dealing with lack of identifiability. His students have gone on to faculty positions and to jobs in industry, working at video game companies and on predicting behaviour in malls, chat rooms, and online sales.

Mathematical Statistics Lesson of the Day – Ancillary Statistics

The set-up for today’s post mirrors my earlier Statistics Lessons of the Day on sufficient statistics and complete statistics.

Suppose that you collected data

\mathbf{X} = X_1, X_2, ..., X_n

in order to estimate a parameter \theta.  Let f_\theta(x) be the probability density function (PDF) or probability mass function (PMF) for X_1, X_2, ..., X_n.


Let

a = A(\mathbf{X})

be a statistic based on \mathbf{X}.

If the distribution of A(\mathbf{X}) does NOT depend on \theta, then A(\mathbf{X}) is called an ancillary statistic.

An ancillary statistic contains no information about \theta; its distribution is fixed and known without any relation to \theta.  Why, then, would we care about A(\mathbf{X})?  I will address this question in later Statistics Lessons of the Day, and I will connect ancillary statistics to sufficient statistics, minimally sufficient statistics and complete statistics.
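As a brief illustration in R (my own sketch, using a normal location family with \theta = \mu), the sample range is ancillary for \mu, and a simulation shows that its distribution does not move when \mu changes:

set.seed(1)
ranges_mu0  <- replicate(1e4, diff(range(rnorm(10, mean = 0))))   # mu = 0
ranges_mu50 <- replicate(1e4, diff(range(rnorm(10, mean = 50))))  # mu = 50
summary(ranges_mu0)   # the two summaries are essentially identical,
summary(ranges_mu50)  # because the sample range is ancillary for mu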

Eric’s Enlightenment for Wednesday, June 3, 2015

  1. Jodi Beggs uses the Rule of 70 to explain why small differences in GDP growth rates have large ramifications.
  2. Rick Wicklin illustrates the importance of choosing bin widths carefully when plotting histograms.
  3. Shana Kelley et al. have developed an electrochemical sensor for detecting selected mutated nucleic acids (i.e. cancer markers in DNA!).  “The sensor comprises gold electrical leads deposited on a silicon wafer, with palladium nano-electrodes.”
  4. Rhett Allain provides a very detailed and analytical critique of Mjölnir (Thor’s hammer) – specifically, its unrealistic centre of mass.  This is an impressive exercise in physics!
  5. Congratulations to the Career Services Centre at Simon Fraser University for winning TalentEgg’s Special Award for Innovation by a Career Centre!  I was fortunate to volunteer there as a career advisor for 5 years, and it was a wonderful place to learn, grow and give back to the community. My career has benefited greatly from that experience, and it is a pleasure to continue my involvement as a guest blogger for its official blog, The Career Services Informer. Way to go, everyone!

Eric’s Enlightenment for Monday, June 1, 2015

  1. A comprehensive graphic of public perceptions about chemistry in the United Kingdom – compiled by the Royal Society of Chemistry.  (Hat Tip: Neil Smithers)
  2. Qing Ke et al. compiled a list of “sleeping beauties” in science – articles that were not appreciated at the time of publication and required much passage of time before becoming popular in the scientific community.  (Unfortunately, that original article is gated by subscription.)  As reported in Nature.com, “the longest sleeper in the top 15 is a statistics paper from Karl Pearson, entitled, ‘On lines and planes of closest fit to systems of points in space‘.  Published in Philosophical Magazine in 1901, this paper awoke only in 2002.”  Out of those top 15 sleeping beauties, 7 were in chemistry.  A full preprint of Ke et al.’s paper can be found on arXiv.
  3. What would the Earth’s stratospheric ozone layer look like if the Montreal Protocol had never been enacted to ban halocarbon refrigerants, solvents, and aerosol-can propellants?  Using simulations, Martyn Chipperfield et al. “found that the Antarctic ozone hole would have grown by an additional 40% by 2013.”
  4. Jan Hoffman on new challenges in mental health for university students: “Anxiety has now surpassed depression as the most common mental health diagnosis among college students, though depression, too, is on the rise. More than half of students visiting campus clinics cite anxiety as a health concern, according to a recent study of more than 100,000 students nationwide by the Center for Collegiate Mental Health at Penn State.”

Eric’s Enlightenment for Wednesday, May 27, 2015

  1. Why do humans get schizophrenia, but other animals don’t?
  2. At Marginal Revolution, Ramez Naam recently argued in two blog posts – Part 1 and Part 2 – that CRISPR (with all of the limitations in some recent research) should not be feared.
  3. Ecological fallacies and exception fallacies – two common mistakes in reasoning, statistics and scientific research.
  4. Intrauterine devices (IUDs) are the most effective contraceptives, so why is their usage so low?  Shefali Luthra reports that – at least for teenage girls – one reason is that pediatricians were not trained during their education to insert them.  Maddie Oatman finds more complicated reasons for women in general.

Eric’s Enlightenment for Friday, May 22, 2015

  1. John Urschel (academically published mathematician and NFL football player) uses logistic regression, expected value and variance to predict that the new, longer distance for the extra-point conversion will not reduce its use in the NFL.
  2. John Ioannidis is widely known for his 2005 paper “Why most published research findings are false“.  In 2014, he wrote another paper on the same topic called “How to Make More Published Research True“.
  3. Yoshitaka Fujii holds the record for the number of retractions of academic publications for a single author: 183 papers, or “roughly 7 percent of all retracted papers between 1980 and 2011”.
  4. The chemistry of why bread stales, and how to slow retrogradation.

Eric’s Enlightenment for Wednesday, May 20, 2015

  1. A common but bad criticism of basketball analytics is that statistics cannot capture the effect of teamwork when assessing the value of a player.  Dan Rosenbaum wrote a great article on how adjusted plus/minus accomplishes this goal.
  2. Citing Dan’s work above, Neil Paine used adjusted plus/minus (APM) to show why Jason Collins was one of the top defensive centres in the NBA and the most underrated player of the last 15 years of his career.  When Neil mentions regularized APM (RAPM) in the third-to-last paragraph, he calls it a Bayesian version of APM.  Most statisticians are more familiar with the term ridge regression, a type of penalized regression that shrinks coefficient estimates to cope with many correlated or redundant predictors.  Make sure to check out that great plot of actual RAPM vs. expected PER at the bottom of the article.
  3. In a 33-page article that was published on 2015-05-14 in Physical Review Letters, only the first 9 pages describe the research done for the article; the other 24 pages were used to list its 5,514 authors – setting a record for the largest known number of authors for a single research article.  Hyperauthorship is common in physics, but not – apparently – in biology.  (Hat Tip: Tyler Cowen)
  4. Brandon Findlay explains why methanol/water mixtures make great cooling baths.  He wrote a very thorough follow-up blog post on how to make them, and he includes photos to aid the demonstration.

Eric’s Enlightenment for Friday, May 15, 2015

  1. An infographic compares R and Python for statistics, data analysis, and data visualization – in a lot of detail!
  2. Psychologist Brian Nosek tackles human biases in science – including motivated reasoning and confirmation bias – long but very worthwhile to read.
  3. Scott Sumner’s wife documents her observations of Beijing during her current trip – very interesting comparisons of how normal life has changed rapidly over the past 10 years.
  4. Is hot air or hot water more effective at melting a frozen pipe?  A good answer based on heat capacity and heat resistivity ensues.

Eric’s Enlightenment for Tuesday, May 12, 2015

  1. A great list of public data sets on GitHub – most are free.
  2. Is the 4% withdrawal rule still effective for determining how much you can spend to attain perpetual retirement?
  3. Jeff Leek compiled a great list of awesome things that people did in statistics in 2014.  Here is his list for 2013.  (Hat Tip: Cici Chen and R-Bloggers)
  4. A video demonstration of the triple point of tert-butyl alcohol.

Eric’s Enlightenment for Wednesday, May 6, 2015

  1. Moldova has mysteriously lost one-eighth of its GDP, possibly to fraudulent loans.
  2. Kai Brothers was diagnosed with HIV in 1989, but did not show any symptoms for 25 years.  Does he have a natural defense against HIV?  Now that he is starting to show symptoms, should he start taking anti-retroviral drugs and deny scientists the chance to look for that natural defense in his blood?
  3. Use the VVALUE function in SAS to convert formatted values of a variable into new values of a character variable.
  4. Alex Reinhart diligently compiled and explained a list of major “egregious statistical fallacies regularly committed in the name of science”.  Check them out on his web site and in his book entitled “Statistics Done Wrong“.  I highly recommend reading the section entitled “The p value and the base rate fallacy“.

Eric’s Enlightenment for Thursday, April 30, 2015

  1. Simon Jackman from Stanford University provides some simple examples of obtaining the posterior distribution using conjugate priors.  If you are new to Bayesian statistics and need to develop the intuition for the basic ideas, then work through the math in these examples with pen and paper.
  2. Did you know that there are plastics that conduct electricity?  In fact, Alan J. Heeger, Alan G. MacDiarmid and Hideki Shirakawa won the 2000 Nobel Prize in Chemistry for their work on this fascinating subject.
  3. Jared Niemi provides a nice video introduction to mixed-effects models.  I highly encourage you to work through the math with pen and paper.
  4. Alberto Cairo adds a healthy dose of caution about the recent advent of data-driven journalism.  He emphasizes problems like confusing correlation with causation, ecological fallacies, and drawing conclusions based on small sample sizes or unrepresentative samples.

Eric’s Enlightenment for Wednesday, April 29, 2015

  1. Anscombe’s quartet is a collection of 4 data sets that have almost identical summary statistics but look very different when plotted.  They illustrate the importance of visualizing your data before plugging them into a statistical model.
  2. A potential geochemical explanation for the existence of Blood Falls, an outflow of saltwater tainted with iron (III) oxide at the snout of the Taylor Glacier in Antarctica.  Here is the original Nature paper by Jill Mikucki et al.
  3. Jonathan Rothwell and Siddharth Kulkarni from the Brookings Institution use a value-added approach to rank 2-year and 4-year post-secondary institutions in the USA.  Some of the top-ranked universities by this measure are lesser known schools like Colgate University, Rose-Hulman Institute of Technology, and Carleton College.  I would love to see something similar for Canada!
  4. Heather Krause from Datassist provides tips on how to avoid (accidentally) lying with your data.  Do read the linked sources of further information!

Career Seminar at Department of Statistics and Actuarial Science, Simon Fraser University: 1:30 – 2:20 pm, Friday, February 20, 2015

I am very pleased to be invited to speak to the faculty and students in the Department of Statistics and Actuarial Science at Simon Fraser University on this upcoming Friday.  I look forward to sharing my career advice and answering questions from the students about how to succeed in a career in statistics.  If you plan to attend this seminar, please feel free to come and say “Hello”!


The advantages of using count() to get N-way frequency tables as data frames in R


I recently introduced how to use the count() function in the “plyr” package in R to produce 1-way frequency tables.  Several commenters provided alternative ways of doing so, and they are all appreciated.  Today, I want to extend that tutorial by demonstrating how count() can be used to produce N-way frequency tables in the list format – this will highlight the superiority of this function over other functions like table() and xtabs().


2-Way Frequencies: The Cross-Tabulated Format vs. The List Format

To get a 2-way frequency table (i.e. a frequency table of the counts of a data set as divided by 2 categorical variables), you can display it in a cross-tabulated format or in a list format.

In R, the xtabs() function is good for cross-tabulation.  Let’s use the “mtcars” data set again; recall that it is a built-in data set in Base R.

> y = xtabs(~ cyl + gear, mtcars)
> y
   gear
cyl  3  4  5
  4  1  8  2
  6  2  4  1
  8 12  0  2
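For comparison, here is a sketch of the list-format output from count() in “plyr” (my own illustration; note that count() drops combinations with zero counts, such as 8 cylinders and 4 gears):

> library(plyr)
> count(mtcars, c('cyl', 'gear'))
  cyl gear freq
1   4    3    1
2   4    4    8
3   4    5    2
4   6    3    2
5   6    4    4
6   6    5    1
7   8    3   12
8   8    5    2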


