Tips for presenting at a scientific conference

Introduction

I served as a judge for some of the student presentations at the 2016 Canadian Statistics Student Conference (CSSC).  The conference was both a learning opportunity and a networking opportunity for statistics students in Canada.  The presentations allowed the students to share their research and course projects with their peers, and it was a chance for them to get feedback about their work and learn new ideas from other students.

Unfortunately, I found most of the presentations to be weak – not necessarily in terms of the content, but in terms of the delivery.  Although the students were earnest and eager in sharing their work with others, most of them struggled with public speaking.

Public speaking is an important skill in knowledge-based industries, so these opportunities are valuable experiences for anybody to strengthen this skill.  You can only learn it by doing it many times, making mistakes, and learning from those mistakes.  Having delivered many presentations, learned from my share of mistakes, and received much praise for my seminars, I hope that the following tips will help anyone who presents at scientific conferences to improve their public-speaking skills.  In fact, most of these tips apply to public speaking in general.

I spoke at the 2016 Canadian Statistics Student Conference on career advice for students and new graduates in statistics.

Image courtesy of Peter Macdonald on Flickr.


Odds and Probability: Commonly Misused Terms in Statistics – An Illustrative Example in Baseball

Yesterday, all 15 home teams in Major League Baseball won on the same day – the first such occurrence in history.  CTV News published an article written by Mike Fitzpatrick from The Associated Press that reported on this event.  The article states, “Viewing every game as a 50-50 proposition independent of all others, STATS figured the odds of a home sweep on a night with a full major league schedule was 1 in 32,768.”  (Emphases added)

odds of all 15 home teams winning on same day

Screenshot captured at 5:35 pm Vancouver time on Wednesday, August 12, 2015.

Out of curiosity, I wanted to reproduce this result.  This event is an intersection of 15 independent Bernoulli random variables, all with the probability of the home team winning being 0.5.

P[(\text{Winner}_1 = \text{Home Team}_1) \cap (\text{Winner}_2 = \text{Home Team}_2) \cap \ldots \cap (\text{Winner}_{15}= \text{Home Team}_{15})]

Since all 15 games are assumed to be mutually independent, the probability of all 15 home teams winning is just

P(\text{All 15 Home Teams Win}) = \prod_{i = 1}^{15} P(\text{Winner}_i = \text{Home Team}_i)

P(\text{All 15 Home Teams Win}) = 0.5^{15} = 0.00003051757

Now, let’s connect this probability to odds.

It is important to note that

  • odds apply only to Bernoulli random variables (i.e. binary events)
  • odds are the ratio of the probability of success to the probability of failure

For our example,

\text{Odds}(\text{All 15 Home Teams Win}) = P(\text{All 15 Home Teams Win}) \ \div \ P(\text{At least 1 Home Team Loses})

\text{Odds}(\text{All 15 Home Teams Win}) = 0.00003051757 \div (1 - 0.00003051757)

\text{Odds}(\text{All 15 Home Teams Win}) = 0.0000305185

The above article states that the odds are 1 in 32,768.  The fraction 1/32768 is equal to 0.00003051757, which is NOT the odds that I just calculated.  Instead, 0.00003051757 is the probability of all 15 home teams winning.  Thus, the article incorrectly reports 0.00003051757 as the odds rather than the probability.
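This arithmetic is easy to reproduce; here is a short Python sketch of my own (not from the article or the STATS calculation):

```python
# Probability that all 15 independent 50-50 games go to the home team,
# and the corresponding odds.
p = 0.5 ** 15        # probability = 1/32768
odds = p / (1 - p)   # odds = P(success) / P(failure)

print(p)     # ~0.0000305176 -- the PROBABILITY, i.e. 1 in 32,768
print(odds)  # ~0.0000305185 -- the odds, slightly larger
```

For an event this rare, the probability and the odds are numerically close, which is exactly why the two are so easy to conflate.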

This is an example of a common confusion between probability and odds that the media and the general public often make.  Probability and odds are two different concepts and are calculated differently, and my calculations above illustrate their differences.  Thus, exercise caution when reading statements about probability and odds, and make sure that the communicator of such statements knows exactly how they are calculated and which one is more applicable.

Mathematical Statistics Lesson of the Day – Basu’s Theorem

Today’s Statistics Lesson of the Day will discuss Basu’s theorem, which connects the previously discussed concepts of minimally sufficient statistics, complete statistics and ancillary statistics.  As before, I will begin with the following set-up.

Suppose that you collected data

\mathbf{X} = X_1, X_2, ..., X_n

in order to estimate a parameter \theta.  Let f_\theta(x) be the probability density function (PDF) or probability mass function (PMF) for X_1, X_2, ..., X_n.

Let

t = T(\mathbf{X})

be a statistic based on \textbf{X}.

Basu’s theorem states that, if T(\textbf{X}) is a complete and minimal sufficient statistic, then T(\textbf{X}) is independent of every ancillary statistic.

Establishing the independence between 2 random variables can be very difficult if their joint distribution is hard to obtain.  Basu’s theorem allows the independence between a complete, minimally sufficient statistic and every ancillary statistic to be established without deriving their joint distribution – and this is its great utility.

However, establishing that a statistic is complete can be a difficult task.  In a later lesson, I will discuss another theorem that will make this task easier for certain cases.

Mathematical Statistics Lesson of the Day – An Example of An Ancillary Statistic

Consider 2 random variables, X_1 and X_2, from the normal distribution \text{Normal}(\mu, \sigma^2), where \mu is unknown.  Then the statistic

D = X_1 - X_2

has the distribution

\text{Normal}(0, 2\sigma^2).

The distribution of D does not depend on \mu, so D is an ancillary statistic for \mu.

Note that, if \sigma^2 is unknown, then D is not ancillary for \sigma^2.
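A quick simulation sketch of my own (not from the lesson) makes the ancillarity visible: the distribution of D should be \text{Normal}(0, 2\sigma^2) no matter what \mu is.

```python
# D = X1 - X2 for X1, X2 ~ Normal(mu, sigma^2): its distribution should be
# Normal(0, 2*sigma^2), regardless of mu.
import random
import statistics

random.seed(42)
sigma = 3.0

def simulate_D(mu, reps=50000):
    """Draw D = X1 - X2 with X1, X2 ~ Normal(mu, sigma^2)."""
    return [random.gauss(mu, sigma) - random.gauss(mu, sigma)
            for _ in range(reps)]

d_small = simulate_D(mu=0.0)
d_large = simulate_D(mu=100.0)

# Both samples should have mean ~0 and variance ~2*sigma^2 = 18,
# whether mu = 0 or mu = 100.
print(round(statistics.mean(d_small), 1), round(statistics.variance(d_small)))
print(round(statistics.mean(d_large), 1), round(statistics.variance(d_large)))
```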

Eric’s Enlightenment for Friday, May 8, 2015

  1. A nice set of tutorials on Microsoft Excel at OfficeTuts by Tomasz Decker.
  2. “We had proved that an assertion was indeed true in all of the difficult cases, but it turned out to be false in the simple case. We never bothered to check.”  Are mistakes in academic mathematics being effectively identified and corrected?  Vladimir Voevodsky (2002 Fields Medalist) published a major theorem in 1990, but Carlos Simpson found an error with the theorem in 1998.  It wasn’t until 2013 that Voevodsky finally became convinced that his theorem was wrong.  This motivated him to develop “proof assistants” – computer programs that help to prove mathematical theorems.
  3. Synthesizing artificial muscles from gold-plated onion skins
  4. Andrew Gelman debriefs his presentation to Princeton’s economics department about unbiasedness and econometrics.

Eric’s Enlightenment for Monday, April 20, 2015

  1. John D. Cook explains why 0! is defined to be equal to 1.  This is also an excellent post on how definitions are created in mathematics.
  2. Why are GDP estimates often unreliable?  (Jonathan Jones wrote this report for Britain, but it’s likely applicable to all countries.)
  3. Rick Wicklin shares useful code for counting the number of missing and non-missing observations in a data set in SAS.
  4. A potentially game-changing breakthrough in artificial photosynthesis may be able to solve the world’s carbon emission problem…

Mathematical Statistics Lesson of the Day – Minimally Sufficient Statistics

In using a statistic to estimate a parameter in a probability distribution, it is important to remember that there can be multiple sufficient statistics for the same parameter.  Indeed, the entire data set, X_1, X_2, ..., X_n, can be a sufficient statistic – it certainly contains all of the information that is needed to estimate the parameter.  However, using all n variables is not very satisfying as a sufficient statistic, because it doesn’t reduce the information in any meaningful way – and a more compact, concise statistic is better than a complicated, multi-dimensional statistic.  If we can use a lower-dimensional statistic that still contains all necessary information for estimating the parameter, then we have truly reduced our data set without stripping any value from it.

Our saviour for this problem is a minimally sufficient statistic.  This is defined as a statistic, T(\textbf{X}), such that

  1. T(\textbf{X}) is a sufficient statistic
  2. if U(\textbf{X}) is any other sufficient statistic, then there exists a function g such that

T(\textbf{X}) = g[U(\textbf{X})].

Note that, if there exists a one-to-one function h such that

T(\textbf{X}) = h[U(\textbf{X})],

then T(\textbf{X}) and U(\textbf{X}) are equivalent.

Mathematics and Mathematical Statistics Lesson of the Day – Convex Functions and Jensen’s Inequality

Consider a real-valued function f(x) that is continuous on the interval [x_1, x_2], where x_1 and x_2 are any 2 points in the domain of f(x).  Let

x_m = 0.5x_1 + 0.5x_2

be the midpoint of x_1 and x_2.  If

f(x_m) \leq 0.5f(x_1) + 0.5f(x_2),

then f(x) is defined to be midpoint convex.

More generally, let’s consider any point within the interval [x_1, x_2].  We can denote this arbitrary point as

x_\lambda = \lambda x_1 + (1 - \lambda)x_2, where 0 < \lambda < 1.

If

f(x_\lambda) \leq \lambda f(x_1) + (1 - \lambda) f(x_2),

then f(x) is defined to be convex.  If

f(x_\lambda) < \lambda f(x_1) + (1 - \lambda) f(x_2) \ \ \text{for all} \ x_1 \neq x_2,

then f(x) is defined to be strictly convex.

There is a very elegant and powerful result about convex functions in mathematics and in mathematical statistics called Jensen’s inequality.  It states that, for any random variable Y with a finite expected value and for any convex function g(y),

E[g(Y)] \geq g[E(Y)].

A function f(x) is defined to be concave if -f(x) is convex.  Thus, Jensen’s inequality can also be stated for concave functions.  For any random variable Z with a finite expected value and for any concave function h(z),

E[h(Z)] \leq h[E(Z)].
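Here is a quick numeric sketch of my own (not from the lesson) of the convex case, using g(y) = y^2 and a uniform random variable:

```python
# Jensen's inequality for the convex function g(y) = y^2: E[g(Y)] >= g(E[Y]).
import random

random.seed(0)
y = [random.uniform(0, 10) for _ in range(100000)]

e_y = sum(y) / len(y)                 # sample estimate of E(Y)
e_g = sum(v * v for v in y) / len(y)  # sample estimate of E[g(Y)] = E(Y^2)

# The gap e_g - e_y**2 equals the (population-style) sample variance,
# which is non-negative, so the inequality holds in the sample as well.
print(e_g >= e_y ** 2)  # True
```

In this case the gap E(Y^2) - [E(Y)]^2 is exactly the variance of Y, so Jensen’s inequality for g(y) = y^2 is just the familiar fact that variances are non-negative.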

In future Statistics Lessons of the Day, I will prove Jensen’s inequality and discuss some of its implications in mathematical statistics.

Mathematical Statistics Lesson of the Day – The Glivenko-Cantelli Theorem

In 2 earlier tutorials that focused on exploratory data analysis in statistics, I introduced

There is actually an elegant theorem that provides a rigorous basis for using empirical CDFs to estimate the true CDF – and this is true for any probability distribution.  It is called the Glivenko-Cantelli theorem, and here is what it states:

Given a sequence of n independent and identically distributed random variables, X_1, X_2, ..., X_n,

P[\lim_{n \to \infty} \sup_{x \in \mathbb{R}} |\hat{F}_n(x) - F_X(x)| = 0] = 1.

In other words, the empirical CDF of X_1, X_2, ..., X_n converges uniformly to the true CDF with probability 1.
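You can watch this convergence happen numerically.  Here is a sketch of my own (not from the lesson) for the Uniform(0, 1) distribution, whose true CDF is F(x) = x on [0, 1]:

```python
# Glivenko-Cantelli in action: sup |F_n(x) - F(x)| shrinks as n grows,
# illustrated with Uniform(0, 1) data, where F(x) = x.
import random

random.seed(2024)

def sup_distance(n):
    """sup over x of |F_n(x) - F(x)|, attained at the jumps of F_n."""
    xs = sorted(random.random() for _ in range(n))
    return max(max(abs((i + 1) / n - x), abs(i / n - x))
               for i, x in enumerate(xs))

for n in (100, 10000, 1000000):
    print(n, round(sup_distance(n), 4))  # the distances shrink as n grows
```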

My mathematical statistics professor at the University of Toronto, Keith Knight, told my class that this is often referred to as “The First Theorem of Statistics” or the “The Fundamental Theorem of Statistics”.  I think that this is a rather subjective title – the central limit theorem is likely more useful and important – but Page 261 of John Taylor’s An introduction to measure and probability (Springer, 1997) recognizes this attribution to the Glivenko-Cantelli theorem, too.

Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Chebyshev’s Inequality

In 2 recent Statistics Lessons of the Day, I

Chebyshev’s inequality is just a special version of Markov’s inequality; thus, their motivations and intuitions are similar.  Recall its statement:

P[|X - \mu| \geq k \sigma] \leq 1 \div k^2

Markov’s inequality roughly says that a random variable X is most frequently observed near its expected value, \mu, and it quantifies just how often X is far away from \mu.  Chebyshev’s inequality goes one step further and measures that distance between X and \mu in terms of the number of standard deviations away from \mu.  It roughly says that the probability of X being at least k standard deviations away from \mu is at most k^{-2}.  Notice that this upper bound decreases as k increases – confirming our intuition that it is highly improbable for X to be far away from \mu.

As with Markov’s inequality, Chebyshev’s inequality applies to any random variable X, as long as E(X) and V(X) are finite.  (Markov’s inequality requires only E(X) to be finite.)  This is quite a marvelous result!
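To see how generous but universal the bound is, here is an empirical sketch of my own (not from the lesson), using an exponential distribution – a deliberately non-normal choice:

```python
# Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k^2 for ANY X with
# finite mean and variance.  Exponential(1) has mu = 1 and sigma = 1.
import random

random.seed(7)
mu, sigma = 1.0, 1.0
x = [random.expovariate(1.0) for _ in range(200000)]

for k in (1.5, 2.0, 3.0):
    tail = sum(abs(v - mu) >= k * sigma for v in x) / len(x)
    print(k, round(tail, 4), "<=", round(1 / k**2, 4))
```

The empirical tail probabilities sit well below the 1/k^2 bound – Chebyshev’s inequality trades tightness for complete generality.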

Mathematical Statistics Lesson of the Day – Chebyshev’s Inequality

The variance of a random variable X is just an expected value of a function of X.  Specifically,

V(X) = E[(X - \mu)^2], \ \text{where} \ \mu = E(X).

Let’s substitute (X - \mu)^2 into Markov’s inequality and see what happens.  For convenience and without loss of generality, I will replace the constant c with another constant, b^2.

\text{Let} \ b^2 = c, \ b > 0. \ \ \text{Then,}

P[(X - \mu)^2 \geq b^2] \leq E[(X - \mu)^2] \div b^2

P[ (X - \mu) \leq -b \ \ \text{or} \ \ (X - \mu) \geq b] \leq V(X) \div b^2

P[|X - \mu| \geq b] \leq V(X) \div b^2

Now, let’s substitute b with k \sigma, where \sigma is the standard deviation of X.  (I can make this substitution, because \sigma is just another constant.)

\text{Let} \ k \sigma = b. \ \ \text{Then,}

P[|X - \mu| \geq k \sigma] \leq V(X) \div k^2 \sigma^2

P[|X - \mu| \geq k \sigma] \leq 1 \div k^2

This last inequality is known as Chebyshev’s inequality, and it is just a special version of Markov’s inequality.  In a later Statistics Lesson of the Day, I will discuss the motivation and intuition behind it.  (Hint: Read my earlier lesson on the motivation and intuition behind Markov’s inequality.)

Getting Ready for Mathematical Classes in the New Semester – Guest-Blogging on SFU’s Career Services Informer

The following blog post was slightly condensed for editorial brevity and then published on the Career Services Informer, the official blog of the Career Services Centre at my undergraduate alma mater, Simon Fraser University


As a new Fall semester begins, many students start courses such as math, physics, computing science, engineering and statistics.  These can be tough classes with a rapid progression in workload and difficulty, but steady preparation can mount a strong defense to the inevitable pressure and stress.  Here are some tips to help you to get ready for those classes.


Mathematical and Applied Statistics Lesson of the Day – The Motivation and Intuition Behind Markov’s Inequality

Markov’s inequality may seem like a rather arbitrary pair of mathematical expressions that are coincidentally related to each other by an inequality sign:

P(X \geq c) \leq E(X) \div c, where c > 0.

However, there is a practical motivation behind Markov’s inequality, and it can be posed in the form of a simple question: How often is the random variable X “far” away from its “centre” or “central value”?

Intuitively, the “central value” of X is the value of X that is most commonly (or most frequently) observed.  Thus, as X deviates further and further from its “central value”, we would expect those distant-from-the-centre values to be less frequently observed.

Recall that the expected value, E(X), is a measure of the “centre” of X.  Thus, we would expect that the probability of X being very far away from E(X) is very low.  Indeed, Markov’s inequality rigorously confirms this intuition; here is its rough translation:

As c becomes really far away from E(X), the event X \geq c becomes less probable.

You can confirm this by substituting several key values of c.

 

  • If c = E(X), then P[X \geq E(X)] \leq 1; this is the highest upper bound that P(X \geq c) can get.  This makes intuitive sense; X is going to be frequently observed near its own expected value.

 

  • If c \rightarrow \infty, then the upper bound E(X) \div c \rightarrow 0, and since Kolmogorov’s axioms of probability require any probability to be inclusively between 0 and 1, P(X \geq c) \rightarrow 0 as well.  This makes intuitive sense; there is no possible way that X can be bigger than positive infinity.
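The intuition above is easy to check empirically.  Here is a sketch of my own (not from the lesson) using an exponential random variable with E(X) = 2:

```python
# Markov's inequality: P(X >= c) <= E(X) / c for a non-negative X.
import random

random.seed(123)
x = [random.expovariate(0.5) for _ in range(100000)]  # rate 0.5 => E(X) = 2
e_x = sum(x) / len(x)

for c in (1.0, 2.0, 5.0, 10.0):
    tail = sum(v >= c for v in x) / len(x)  # empirical P(X >= c)
    print(c, round(tail, 4), "<=", round(e_x / c, 4))
```

As c grows, both the empirical tail probability and the bound E(X) \div c shrink toward 0, just as the rough translation above describes.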

Mathematical Statistics Lesson of the Day – Markov’s Inequality

Markov’s inequality is an elegant and very useful inequality that relates the probability of an event concerning a non-negative random variable, X, with the expected value of X.  It states that

P(X \geq c) \leq E(X) \div c,

where c > 0.

I find Markov’s inequality to be beautiful for 2 reasons:

  1. It applies to both continuous and discrete random variables.
  2. It applies to any non-negative random variable from any distribution with a finite expected value.

In a later lesson, I will discuss the motivation and intuition behind Markov’s inequality, which has useful implications for understanding a data set.

Organic and Inorganic Chemistry Lesson of the Day – Diastereomers

I previously introduced the concept of chirality and how it is a property of any molecule with only 1 stereogenic centre.  (A molecule with n stereogenic centres may or may not be chiral, depending on its stereochemistry.)  I also defined 2 stereoisomers as enantiomers if they are non-superimposable mirror images of each other.  (Recall that chirality in inorganic chemistry can arise in 2 different ways.)

It is possible for 2 stereoisomers to NOT be enantiomers; in fact, such stereoisomers are called diastereomers.  Yes, I recognize that defining something as the negation of something else is unusual.  If you have learned set theory or probability (as I did in my mathematical statistics classes) then consider the set of all pairs of the stereoisomers of one compound – this is the sample space.  The enantiomers form a set within this sample space, and the diastereomers are the complement of the enantiomers.

It is important to note that, while diastereomers are not mirror images of each other, they are still non-superimposable.  Diastereomers often (but not always) arise from stereoisomers with 2 or more stereogenic centres; here is an example of how they can arise.  (A pair of cis/trans-isomers are also diastereomers, despite not having any stereogenic centres.)

1) Consider a compound with 2 tetrahedral stereogenic centres and no meso isomers*.  This compound has 2^n = 4 stereoisomers, where n = 2 denotes the number of stereogenic centres.

2) Find one pair of enantiomers based on one of the stereogenic centres.

3) Find the other pair of enantiomers based on the other stereogenic centre.

4) Take any one molecule from Step #2 and any one molecule from Step #3.  These cannot be mirror images of each other.  (One molecule cannot have 2 different mirror images of itself.)  These 2 molecules are diastereomers.

Think back to my above description of enantiomers as a proper subset within the sample space of the pairs of one set of stereoisomers.  You can now see why I emphasized that the sample space consists of pairs, since multiple different pairs of stereoisomers can form enantiomers.  In my example above, Steps #2 and #3 produced 2 subsets of enantiomers.  It should be clear by now that enantiomers and diastereomers are defined as pairs.  To further illustrate this point,

a) call the 2 molecules in Step #2 A and B.

b) call the 2 molecules in Step #3 C and D.

A and B are enantiomers.  A and C are diastereomers.  Thus, it is entirely possible for one molecule to be an enantiomer with a second molecule and a diastereomer with a third molecule.

Here is an example of 2 diastereomers.  Notice that they have the same chemical formula but different 3-dimensional orientations – i.e. they are stereoisomers.  These stereoisomers are not mirror images of each other, but they are non-superimposable – i.e. they are diastereomers.


(-)-Threose


(-)-Erythrose


Images courtesy of Popnose, DMacks and Edgar181 on Wikimedia.  For brevity, I direct you to the Wikipedia entry for diastereomers showing these 4 images in one panel.

In a later Chemistry Lesson of the Day on optical rotation (a.k.a. optical activity), I will explain what the (-) symbol means in the names of those 2 diastereomers.

*I will discuss meso isomers in a separate lesson.

Video Tutorial – Calculating Expected Counts in a Contingency Table Using Joint Probabilities

In an earlier video, I showed how to calculate expected counts in a contingency table using marginal proportions and totals.  (Recall that expected counts are needed to conduct hypothesis tests of independence between categorical random variables.)  Today, I want to share a second video of calculating expected counts – this time, using joint probabilities.  This method uses the definition of independence between 2 random variables to form estimators of the joint probabilities for each cell in the contingency table.  Once the joint probabilities are estimated, the expected counts are simply the joint probabilities multiplied by the grand total of the entire sample.  This method gives a more direct and deeper connection between the null hypothesis of a test of independence and the calculation of expected counts.

I encourage you to watch both of my videos on expected counts in my YouTube channel to gain a deeper understanding of how and why they can be calculated.  Please note that the expected counts are slightly different in the 2 videos due to round-off error; if you want to be convinced about this, I encourage you to do the calculations in the 2 different orders as I presented in the 2 videos – you will eventually see where the differences arise.
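Here is a sketch of the joint-probability method in Python, using a made-up 2 \times 2 table of my own (not the worked example from the videos):

```python
# Expected counts via estimated joint probabilities, under the null
# hypothesis of independence between the row and column variables.
observed = [[30, 10],
            [20, 40]]

grand_total = sum(sum(row) for row in observed)
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]

expected = []
for i in range(2):
    row = []
    for j in range(2):
        # Under independence: P(row i AND col j) = P(row i) * P(col j)
        joint_prob = (row_totals[i] / grand_total) * (col_totals[j] / grand_total)
        row.append(joint_prob * grand_total)  # expected count = joint prob * n
    expected.append(row)

print([[round(e, 1) for e in row] for row in expected])  # [[20.0, 20.0], [30.0, 30.0]]
```

Algebraically this reduces to the familiar (row total \times column total) \div grand total, but writing it through the joint probabilities makes the link to the null hypothesis of independence explicit.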

You can also watch the video below the fold!


Mathematics and Applied Statistics Lesson of the Day – The Geometric Mean

Suppose that you invested in a stock 3 years ago, and the annual rates of return for each of the 3 years were

  • 5% in the 1st year
  • 10% in the 2nd year
  • 15% in the 3rd year

What is the average rate of return in those 3 years?

It’s tempting to use the arithmetic mean, since we are so used to using it when trying to estimate the “centre” of our data.  However, the arithmetic mean is not appropriate in this case, because the annual rate of return implies a multiplicative growth of your investment by a factor of 1 + r, where r is the rate of return in each year.  In contrast, the arithmetic mean is appropriate for quantities that are additive in nature; for example, your average annual salary from the past 3 years is the sum of your last 3 annual salaries divided by 3.

If the arithmetic mean is not appropriate, then what can we use instead?  Our saviour is the geometric mean, G.  The average factor of growth from the 3 years is

G = [(1 + r_1)(1 + r_2) ... (1 + r_n)]^{1/n},

where r_i is the rate of return in year i, i = 1, 2, 3, ..., n.  The average annual rate of return is G - 1.  Note that the geometric mean is NOT applied to the annual rates of return, but to the annual factors of growth.

 

Returning to our example, our average factor of growth is

G = [(1 + 0.05) \times (1 + 0.10) \times (1 + 0.15)]^{1/3} = 1.099242.

Thus, our average annual rate of return is G - 1 = 1.099242 - 1 = 0.099242 = 9.9242\%.
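The worked example above is a one-liner to reproduce; here is a Python sketch of my own:

```python
# Geometric mean of the annual growth factors 1 + r_i.
import math

growth_factors = [1.05, 1.10, 1.15]
G = math.prod(growth_factors) ** (1 / len(growth_factors))

print(round(G, 6))      # 1.099242 -- average factor of growth
print(round(G - 1, 6))  # 0.099242 -- average annual rate of return, ~9.9242%
```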

 

Here is a good way to think about the difference between the arithmetic mean and the geometric mean.  Suppose that there are 2 sets of numbers.

  1. The first set, S_1, consists of your data x_1, x_2, ..., x_n, and this set has a sample size of n.
  2. The second set, S_2, also has a sample size of n, but all n values are the same – let’s call this common value y.
  • What number must y be such that the sums in S_1 and S_2 are equal?  This value of y is the arithmetic mean of the first set.
  • What number must y be such that the products in S_1 and S_2 are equal?  This value of y is the geometric mean of the first set.

Note that the geometric mean is only applicable to positive numbers.

Mathematics and Applied Statistics Lesson of the Day – The Weighted Harmonic Mean

In a previous Statistics Lesson of the Day on the harmonic mean, I used an example of a car travelling at 2 different speeds – 60 km/hr and 40 km/hr.  In that example, the car travelled 120 km at both speeds, so the 2 speeds had equal weight in calculating the harmonic mean of the speeds.

What if the cars travelled different distances at those speeds?  In that case, we can modify the calculation to allow the weight of each datum to be different.  This results in the weighted harmonic mean, which has the formula

H = \sum_{i = 1}^{n} w_i \ \ \div \ \ \sum_{i = 1}^{n}(w_i \ \div \ x_i).

 

For example, consider a car travelling for 240 kilometres at 2 different speeds and for 2 different distances:

  1. 60 km/hr for 100 km
  2. 40 km/hr for another 140 km

Then the weighted harmonic mean of the speeds (i.e. the average speed of the whole trip) is

(100 \text{ km} \ + \ 140 \text{ km}) \ \div \ [(100 \text{ km} \ \div \ 60 \text{ km/hr}) \ + \ (140 \text{ km} \ \div \ 40 \text{ km/hr})]

= 46.45 \text{ km/hr}

 

Notice that this is exactly the same calculation that we would use if we wanted to calculate the average speed of the whole trip by the formula from kinematics:

\text{Average Speed} = \Delta \text{Distance} \div \Delta \text{Time}
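Here is a short Python sketch of my own reproducing the weighted-harmonic-mean example above, with the distances serving as the weights w_i:

```python
# Weighted harmonic mean of speeds, weighted by distance travelled:
# H = sum(w_i) / sum(w_i / x_i).
speeds = [60.0, 40.0]       # km/hr
distances = [100.0, 140.0]  # km, used as the weights w_i

numerator = sum(distances)                                   # sum of the weights
denominator = sum(w / x for w, x in zip(distances, speeds))  # sum of w_i / x_i

H = numerator / denominator
print(round(H, 2))  # 46.45 km/hr
```

The denominator is exactly the total travel time, so H is literally total distance divided by total time – the kinematics formula in disguise.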

Mathematics and Applied Statistics Lesson of the Day – The Harmonic Mean

The harmonic mean, H, for n positive real numbers x_1, x_2, ..., x_n is defined as

H = n \div (1/x_1 + 1/x_2 + \ldots + 1/x_n) = n \div \sum_{i = 1}^{n}x_i^{-1}.

This type of mean is useful for measuring the average of rates.  For example, consider a car travelling for 240 kilometres at 2 different speeds:

  1. 60 km/hr for 120 km
  2. 40 km/hr for another 120 km

Then its average speed for this trip is

S_{avg} = 2 \div (1/60 + 1/40) = 48 \text{ km/hr}

Notice that the speeds for the 2 legs have equal weight in the calculation of the harmonic mean – this is valid because of the equal distance travelled at the 2 speeds.  If the distances were not equal, then use a weighted harmonic mean instead – I will cover this in a later lesson.

To confirm the formulaic calculation above, let’s use the definition of average speed from physics.  The average speed is defined as

S_{avg} = \Delta \text{distance} \div \Delta \text{time}

We already have the elapsed distance – it’s 240 km.  Let’s find the time elapsed for this trip.

\Delta \text{time} = 120 \text{ km} \times (1 \text{ hr}/60 \text{ km}) + 120 \text{ km} \times (1 \text{ hr}/40 \text{ km})

\Delta \text{time} = 5 \text{ hours}

Thus,

S_{avg} = 240 \text{ km} \div 5 \text{ hours} = 48 \text{ km/hr}

Notice that this explicit calculation of the average speed by the definition from kinematics is the same as the average speed that we calculated from the harmonic mean!
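The harmonic-mean formula is just as easy to check in code; here is a sketch of my own:

```python
# Harmonic mean of the two speeds, each travelled for an equal 120-km leg:
# H = n / (1/x_1 + ... + 1/x_n).
speeds = [60.0, 40.0]  # km/hr

H = len(speeds) / sum(1 / s for s in speeds)
print(round(H, 6))  # 48.0 km/hr, matching the kinematics calculation
```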

 

Video Tutorial – Useful Relationships Between Any Pair of h(t), f(t) and S(t)

I first started my video tutorial series on survival analysis by defining the hazard function.  I then explained how this definition leads to the elegant relationship of

h(t) = f(t) \div S(t).

In my new video, I derive 6 useful mathematical relationships that exist between any 2 of the 3 quantities in the above equation.  Each relationship allows one quantity to be written as a function of the other.
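As a simple sanity check of my own (not from the video), consider the exponential distribution, where f(t) = \lambda e^{-\lambda t} and S(t) = e^{-\lambda t}, so the hazard h(t) = f(t) \div S(t) = \lambda is constant:

```python
# Checking h(t) = f(t) / S(t) for an Exponential(lam) lifetime,
# whose hazard is constant and equal to lam.
import math

lam = 0.5
for t in (0.5, 1.0, 2.0, 4.0):
    f = lam * math.exp(-lam * t)  # density f(t)
    S = math.exp(-lam * t)        # survival function S(t) = 1 - F(t)
    print(t, f / S)               # h(t) = lam = 0.5 at every t
```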

I am excited to continue adding to my YouTube channel's collection of video tutorials.  Please stay tuned for more!

You can also watch this new video below the fold!
