Applied Statistics Lesson of the Day – What “Linear” in Linear Regression Really Means

Linear regression is one of the most commonly used tools in statistics, yet one of its fundamental features is frequently misunderstood by non-statisticians.  I have witnessed this misunderstanding on numerous occasions in my work experience, statistical consulting, and statistical education, and it is important for all statisticians to be aware of it, to anticipate it when someone is about to make this mistake, and to educate that person about the correct meaning.

Consider the simple linear regression model:

Y = \beta_0 + \beta_1 x + \varepsilon.

The “linear” in linear regression refers to the linearity between the response variable (Y) and the regression coefficients (\beta_0 and \beta_1).  It DOES NOT refer to the linearity between the response variable (Y) and the explanatory variable (x).  This is contrary to mathematical descriptions of linear relationships; for example, when high school students learn about the equation of a line,

y = mx + b

the relationship is called “linear” because of the linearity between y and x.  This is the source of the mistaken understanding about the meaning of “linear” in linear regression; I am grateful that my applied statistics professor, Dr. Boxin Tang, emphasized the statistical meaning of “linear” when he taught linear regression to me.

Why is this difference in terminology important?  A casual observer may be puzzled by this apparent nit-picking of the semantics.  This terminology is important because the estimation of the regression coefficients in a regression model depends on the relationship between the response variable and the regression coefficients.  If this relationship is linear, then the estimation is very simple and can be done analytically by linear algebra.  If not, then the estimation can be very difficult and often cannot be done analytically – numerical methods must be used instead.
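To make the linear-algebra claim concrete, a model that is linear in its coefficients can be written in matrix form as Y = X\beta + \varepsilon, where X is the design matrix of the explanatory variables; assuming X^TX is invertible, the least-squares estimates then have the closed-form solution below.  (This is a standard result, not something specific to this post.)

\hat{\beta} = (X^TX)^{-1}X^TY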

Now, one of the assumptions of linear regression is the linearity between the response variable (Y) and the explanatory variable (x).  However, what if the scatter plot of Y versus x reveals a non-linear relationship, such as a quadratic relationship?  In that case, the solution is simple – just replace x with x^2 (or, more generally, add an x^2 term to the model).  (Admittedly, if the interpretation of the regression coefficient is important, then such interpretation becomes more difficult with this transformation.  However, if prediction of the response is the key goal, then such interpretation is not necessary, and this is not a problem.)  The important point is that the estimation of the regression coefficients can still be done by linear algebra after the transformation of the explanatory variable.
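As a minimal sketch with simulated data (not from any real data set), the following R code shows that fitting such a transformed model still uses lm() and ordinary linear least squares, because the model remains linear in the coefficients:

set.seed(1)
x <- 1:20
y <- 3 + 0.5 * x^2 + rnorm(20, sd = 5)    # simulated response with a quadratic pattern

fit <- lm(y ~ I(x^2))                     # transformed predictor; still solved by linear algebra
summary(fit)

# By contrast, a model such as y = beta0 * exp(beta1 * x) + error is NOT linear in the
# coefficients; estimating it would require a numerical method such as nls() instead.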

Determining chemical concentration with standard addition: An application of linear regression in JMP – A Guest Blog Post for the JMP Blog

I am very excited to announce that I have been invited by JMP to be a guest blogger for its official blog!  My thanks to Arati Mejdal, Global Social Media Manager for the JMP Division of SAS, for welcoming me into the JMP blogging community with so much support and encouragement, and I am pleased to publish my first post on the JMP Blog!  Mark Bailey and Byron Wingerd from JMP provided valuable feedback on this blog post, and I am fortunate to have the chance to work with and learn from them!

Following the tradition of The Chemical Statistician, this post combines my passions for statistics and chemistry by illustrating how simple linear regression can be used for the method of standard addition in analytical chemistry.  In particular, I highlight the useful capability of the “Inverse Prediction” function under the “Fit Model” platform in JMP to estimate the predictor given an observed response value (i.e., estimating the value of x_i given y_i).  Check it out!

[Link: JMP blog post on standard addition]
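For readers who prefer R, here is a minimal sketch of the standard-addition idea with hypothetical numbers (it does not reproduce the JMP post or its data); the unknown concentration is recovered from the x-intercept of the fitted line.

added <- c(0, 1, 2, 3, 4)                    # hypothetical added concentrations of the analyte
signal <- c(0.21, 0.40, 0.59, 0.80, 1.01)    # hypothetical instrument responses

fit <- lm(signal ~ added)
b0 <- coef(fit)[1]                           # intercept
b1 <- coef(fit)[2]                           # slope

# The fitted line crosses signal = 0 at added = -b0/b1, so the concentration of the
# analyte already present in the sample is estimated by b0/b1.
unknown.concentration <- b0 / b1
unknown.concentration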

Machine Learning Lesson of the Day – Supervised Learning: Classification and Regression

Supervised learning has 2 categories:

  • In classification, the target variable is categorical.
  • In regression, the target variable is continuous.

Thus, regression in statistics is different from regression in supervised learning.

In statistics,

  • regression is used to model relationships between predictors and targets, and the targets could be continuous or categorical.  
  • a regression model usually includes 2 components to describe such relationships:
    • a systematic component
    • a random component, which is mathematically described by some probability distribution.  
  • most regression models in statistics also have assumptions about the statistical independence or dependence between the predictors and/or between the observations.  
  • many statistical models also aim to provide interpretable relationships between the predictors and targets.  
    • For example, in simple linear regression, the slope parameter, \beta_1, is interpreted as the change in the mean of the target, Y, for every one-unit increase in the predictor, X.

In supervised learning,

  • target variables in regression must be continuous
    • categorical target variables are modelled in classification
  • regression places less (or even no) emphasis on using probability to describe the random variation in the relationship between the predictors and the target
    • Random forests are powerful tools for both classification and regression, but they do not use probability to describe the relationship between the predictors and the target.
  • regression places less (or even no) emphasis on providing interpretable relationships between the predictors and the targets.  
    • Neural networks are powerful tools for both classification and regression, but they do not provide interpretable relationships between the predictors and the target.

***The last 2 points are applicable to classification, too.

In general, supervised learning puts much more emphasis on accurate prediction than statistics does.

Because regression in supervised learning includes only continuous targets, some terminology can be confusing between the 2 fields.  For example, logistic regression is a commonly used technique in both statistics and supervised learning.  However, despite its name, it is a classification technique in supervised learning, because the response variable in logistic regression is categorical.
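As a minimal sketch with simulated data, the following R code fits a logistic regression with glm() and then uses it as a classifier by thresholding the predicted probabilities:

set.seed(1)
x <- rnorm(100)
p <- 1 / (1 + exp(-(0.5 + 2 * x)))             # true probability of class 1
y <- rbinom(100, size = 1, prob = p)           # binary (categorical) target

fit <- glm(y ~ x, family = binomial)           # "regression" in the statistical sense
predicted.class <- as.numeric(predict(fit, type = "response") > 0.5)
table(predicted.class, y)                      # used as a classifier in supervised learning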

Estimating the Decay Rate and the Half-Life of DDT in Trout – Applying Simple Linear Regression with Logarithmic Transformation

This blog post uses a function and a script written in R that were displayed in an earlier blog post.

Introduction

This is the second of a series of blog posts about simple linear regression; the first was written recently on some conceptual nuances and subtleties about this model.  In this blog post, I will use simple linear regression to analyze a data set with a logarithmic transformation and discuss how to make inferences about the regression coefficients and the means of the target on the original scale.  The data document the decay of dichlorodiphenyltrichloroethane (DDT) in trout in Lake Michigan; I found them on page 49 of the book “Elements of Environmental Chemistry” by Ronald A. Hites.  Future posts will also be written on the chemical aspects of this topic, including the environmental chemistry of DDT and exponential decay in chemistry and, in particular, radiochemistry.
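To preview the key algebra (a standard result for first-order decay, assuming natural logarithms): if the concentration decays exponentially, then taking logarithms turns the decay curve into a straight line, so simple linear regression applies.

C(t) = C_0e^{-kt} \ \ \Rightarrow \ \ \ln C(t) = \ln C_0 - kt

Regressing \ln C(t) on t therefore yields an intercept that estimates \ln C_0 and a slope that estimates -k, and the half-life follows from the estimated slope:

t_{1/2} = \ln(2)/k = -\ln(2)/\hat{\beta}_1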

[Figure: Dichlorodiphenyltrichloroethane (DDT).  Source: Wikimedia Commons]

A serious student of statistics, or a statistician re-learning the fundamentals like myself, should always try to understand the math and the statistics behind a software package’s built-in functions rather than treating them like a black box.  This is especially worthwhile for a basic yet powerful tool like simple linear regression.  Thus, instead of simply using the lm() function in R, I will reproduce the calculations done by lm() with my own function and script (posted earlier on my blog) to obtain inferential statistics on the regression coefficients.  However, I will not write out or explain the math behind the calculations; they are shown in my own function with very self-evident variable names, in case you are interested.  The calculations are arguably the most straightforward aspects of linear regression, and you can easily find the derivations and formulas on the web, in introductory or applied statistics textbooks, and in regression textbooks.
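The function itself is posted separately; as a minimal sketch (not the actual posted function), the following R code reproduces the 4 columns of the coefficient table from summary(lm()) for simple linear regression.

simple.lm.table <- function(x, y) {
  n <- length(y)
  x.bar <- mean(x); y.bar <- mean(y)
  Sxx <- sum((x - x.bar)^2)
  Sxy <- sum((x - x.bar) * (y - y.bar))
  beta1.hat <- Sxy / Sxx                                 # slope estimate
  beta0.hat <- y.bar - beta1.hat * x.bar                 # intercept estimate
  residuals <- y - (beta0.hat + beta1.hat * x)
  mse <- sum(residuals^2) / (n - 2)                      # estimate of sigma^2
  se.beta1 <- sqrt(mse / Sxx)
  se.beta0 <- sqrt(mse * (1/n + x.bar^2 / Sxx))
  estimates <- c(beta0.hat, beta1.hat)
  std.errors <- c(se.beta0, se.beta1)
  t.values <- estimates / std.errors
  p.values <- 2 * pt(abs(t.values), df = n - 2, lower.tail = FALSE)
  cbind(Estimate = estimates, `Std. Error` = std.errors,
        `t value` = t.values, `Pr(>|t|)` = p.values)
}

Its output can be checked against coef(summary(lm(y ~ x))).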


My Own R Function and Script for Simple Linear Regression – An Illustration with Exponential Decay of DDT in Trout

Here is the function that I wrote for doing simple linear regression, as alluded to in my blog post about simple linear regression on log-transformed data on the decay of DDT concentration in trout in Lake Michigan.  My goal was to replicate the 4 columns of the output from applying summary() to the output of lm().

To use this function and the accompanying script, I followed these steps (a minimal sketch of the driver script follows the steps below):

1) I saved this file as “simple linear regression.r”.

2) In the same folder, I saved a script called “DDT trout regression.r” that used this function to implement simple linear regression on the log-transformed DDT data.

3) I used setwd() to change the working directory to the folder containing the function and the script.

4) I made sure “DDT trout regression.r” used the source() function to call my user-defined function for simple linear regression.

5) I ran “DDT trout regression.r”.
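Here is a minimal sketch of what such a driver script could look like; the folder path, the data values, and the name of the user-defined function are hypothetical placeholders, not the actual code or DDT data.

# DDT trout regression.r - hypothetical sketch of the driver script

setwd("C:/path/to/regression/folder")        # hypothetical working directory containing both files
source("simple linear regression.r")         # load the user-defined regression function

year <- c(1970, 1975, 1980, 1985, 1990)      # hypothetical measurement years
ddt  <- c(19.2, 9.8, 5.1, 2.6, 1.3)          # hypothetical DDT concentrations in trout

log.ddt <- log(ddt)                          # logarithmic transformation of the target

# Call the user-defined function; its actual name and arguments are in the posted file:
# simple.linear.regression(x = year, y = log.ddt)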


Some Subtle and Nuanced Concepts about Simple Linear Regression

Introduction

This blog post will focus on some conceptual foundations of simple linear regression, a very common technique in statistics and a precursor for understanding multiple linear regression.  I will expose and clarify many nuances and subtleties that I did not fully absorb until my Master’s degree in statistics at the University of Toronto.

What is Simple Linear Regression?

Simple linear regression is a predictive model that uses a predictor variable (x) to predict a continuous target variable (Y).  It is a formal and rigorous way to express 2 fundamental components of a statistical predictive model.

1) For each value of x, there is a probability distribution of Y.

2) The means of these probability distributions vary with x in a systematic way.

Mathematically, the first component is reflected in a random error variable, and the second component is reflected in the constant that expresses the linear relationship between x and Y.  These two components add together to give the following mathematical model.

Y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \ \ \ i = 1,...,n

\varepsilon_i \sim Normal(0, \sigma^2)

\varepsilon_i \perp \varepsilon_j, \ \ \ \ \ i \neq j

The last mathematical expression states that two different error terms are statistically independent.

Essentially, this model captures the tendency for Y to vary systematically with x.  The systematic part is the constant term, \beta_0 + \beta_1 x_i.  The tendency (rather than a direct relation) is reflected in the probability distribution of the error component.

Note that I capitalized the target Y because it is a random variable.  (It is the sum of a constant and the random error, so it is also a random variable.)  I used lower-case for the predictor because it is a constant in the model.
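As a minimal sketch with simulated data, the following R code illustrates both components: for each fixed value of x, the values of Y scatter around a mean that shifts systematically with x.

set.seed(1)
beta0 <- 2; beta1 <- 0.8; sigma <- 1.5
x <- rep(c(1, 5, 10), each = 200)            # three fixed values of the predictor
Y <- beta0 + beta1 * x + rnorm(length(x), mean = 0, sd = sigma)

# The sample mean of Y at each value of x tracks the systematic component beta0 + beta1 * x.
tapply(Y, x, mean)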

What are the Assumptions of Simple Linear Regression?

1) The predictor variable is a fixed constant with no random variation.  If you want to model the predictor as a random variable, use the errors-in-variables model (a.k.a. measurement errors model).

2) The target variable is a linear combination of the regression coefficients and the predictor.

3) The variance of the random error component is constant.  This assumption is called homoscedasticity.

4) The random errors are independent of each other.

5) The regression coefficients are constants.  If you want to model the regression coefficients as random variables, use the random effects model.  If you want to include both fixed and random coefficients in your model, use the mixed effects model.  The documentation for PROC MIXED in SAS/STAT has a nice explanation of the mixed effects model.  I also recommend the documentation for PROC GLM for more about the random effects model.

***6) The random errors are normally distributed with an expected value of 0 and a variance of \sigma^2 .  As Assumption #3 states, this variance is constant for all \varepsilon_i, \ i = 1,...,n .

***This last assumption is not needed for the least-squares estimation of the regression coefficients.  However, it is needed for conducting statistical inference for the regression coefficients, such as testing hypotheses and constructing confidence intervals.

Important Clarifications about the Terminology

Let me clarify some common confusion about the 2 key terms in the name “simple linear regression”.

– It is called “simple” because it uses only one predictor, whereas multiple linear regression uses multiple predictors.  While it is relatively simple to understand, and while it is a simple model compared to other predictive models, there are many concepts and nuances behind linear regression that still make it difficult for many people to understand.  (I hope that this blog post will make it easier to understand this model!)

– It is called “linear” because the target variable is linear with respect to the parameters \beta_0 and \beta_1 (the regression coefficients), not because it is linear with respect to the predictor; this is a very common misunderstanding, and I did not learn this until the second course in which I learned about linear regression.  This is more than just a naming custom; it implies that the regression coefficients can be estimated using linear algebra, which has many benefits that will be described in a later post.

Simple linear regression does assume that the target variable has a linear relationship with the predictor variable.  However, if it does not, the problem can often be resolved – the predictor and/or the target can be transformed to make the relationship linear.  If, however, the target variable cannot be written as a linear combination of the parameters \beta_0 and \beta_1, then the model is no longer linear regression, even if the target is linear with respect to the predictor.

How are the Regression Coefficients Estimated?

The regression coefficients are estimated by finding the values of \beta_0 and \beta_1 that minimize the sum of the squares of the deviations from the regression line to the data.  My first linear regression textbook, “Applied Linear Statistical Models” by Kutner, Nachtsheim, Neter, and Li, uses the letter “Q” to denote this quantity.  This is called the method of least squares.  The word “minimize” should prompt us to find the global minimizers using differential calculus.

Q = \sum_{i=1}^n(y_i - \beta_0 - \beta_1 x_i)^2

Differentiate Q with respect to \beta_0 and \beta_1; set the 2 derivatives to zero to get the normal equations.  The estimates are obtained by solving this system of 2 equations.
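For reference, the resulting normal equations and their closed-form solutions are (a standard result):

n\hat{\beta}_0 + \hat{\beta}_1\sum_{i=1}^n x_i = \sum_{i=1}^n y_i

\hat{\beta}_0\sum_{i=1}^n x_i + \hat{\beta}_1\sum_{i=1}^n x_i^2 = \sum_{i=1}^n x_iy_i

\hat{\beta}_1 = \sum_{i=1}^n (x_i - \bar{x})(y_i - \bar{y}) \big/ \sum_{i=1}^n (x_i - \bar{x})^2, \ \ \ \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x}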

Why is the Least-Squares Method Used to Estimate the Regression Coefficients?

A natural question arises: Why minimize the sum of the squares of the errors?  Why not minimize some other measure of the distances from the regression line to the data, like the sum of the absolute values of the errors?

Q' = \sum_{i=1}^n |y_i - \beta_0 - \beta_1 x_i|

The answer lies within the Gauss-Markov theorem, which guarantees some very attractive properties for the least-squares estimators of the regression coefficients:

– these estimators are unbiased

– out of all linear unbiased estimators, the least-squares estimators have the minimum variance

Thus, the least-squares estimators are both accurate and very precise.

Note that the Gauss-Markov theorem holds without Assumption #6 above, which states that the errors have a normal distribution with an expected value of zero and a variance of \sigma^2 .