
Confidence Intervals for prediction in GLMMs

With LM and GLM the predict function can return the standard errors of the predicted values, on either the observed data or on new data. These are then used to draw confidence or prediction intervals around the fitted regression lines. The confidence interval (CI) focuses on the regression line and can be interpreted as (assuming we draw a 95% CI): “If we repeated our sampling X times, the regression line would fall within this interval 95% of the time”. The prediction interval (PI), on the other hand, focuses on single data points and can be interpreted as (again for a 95% interval): “If we sampled X times at these particular values of the explanatory variables, the response values would fall within this interval 95% of the time”. The wikipedia page has some nice explanations of the meaning of confidence intervals.
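
As a quick illustration (with made-up data, so the names dat, m_lm and newd are mine), here is what predict offers for a simple LM:

#a quick sketch of predict() for a simple LM, on invented data
dat<-data.frame(x=runif(50,0,10))
dat$y<-rnorm(50,1+0.5*dat$x,1)
m_lm<-lm(y~x,dat)
newd<-data.frame(x=seq(0,10,length=20))
head(predict(m_lm,newd,interval="confidence")) #CI around the regression line
head(predict(m_lm,newd,interval="prediction")) #PI around new observations
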
For GLMMs the predict function does not return standard errors, the reason being (from the help page of predict.merMod): “There is no option for computing standard errors of predictions because it is difficult to define an efficient method that incorporates uncertainty in the variance parameters”. This means that there is for now no way to include, in the computation of the standard errors of predicted values, the fact that the fitted random effect standard deviations are just estimates and may be more or less well estimated. We can however still derive confidence or prediction intervals, keeping in mind that we might underestimate the uncertainty around the estimates.


library(lme4)

#first case simple lmer, simulate 100 data points from 10 groups with one continuous fixed effect variable
x<-runif(100,0,10)
f<-gl(n = 10,k = 10)
data<-data.frame(x=x,f=f)
modmat<-model.matrix(~x,data)
#the fixed effect coefficient
fixed<-c(1,0.5)
#the random effect
rnd<-rnorm(10,0,0.7)
#the simulated response values
data$y<-rnorm(100,modmat%*%fixed+rnd[f],0.3)

#model
m<-lmer(y~x+(1|f),data)

#first CI and PI using predict-like method, using code posted here: http://glmm.wikidot.com/faq
newdat<-data.frame(x=seq(0,10,length=20))
mm<-model.matrix(~x,newdat)
newdat$y<-mm%*%fixef(m) 
#predict(m,newdat,re.form=NA) would give the same results
pvar1 <- diag(mm %*% tcrossprod(vcov(m),mm))
tvar1 <- pvar1+VarCorr(m)$f[1] # must be adapted for more complex models
newdat <- data.frame(
  newdat
  , plo = newdat$y-1.96*sqrt(pvar1)
  , phi = newdat$y+1.96*sqrt(pvar1)
  , tlo = newdat$y-1.96*sqrt(tvar1)
  , thi = newdat$y+1.96*sqrt(tvar1)
)

#second version with bootMer
#we have to define a function that will be applied to the nsim simulations
#here we basically get a merMod object and return the fitted values
predFun<-function(.) mm%*%fixef(.) 
bb<-bootMer(m,FUN=predFun,nsim=200) #do this 200 times
#as we did this 200 times the 95% CI will be bordered by the 5th and 195th value
bb_se<-apply(bb$t,2,function(x) x[order(x)][c(5,195)])
newdat$blo<-bb_se[1,]
newdat$bhi<-bb_se[2,]

plot(y~x,data)
lines(newdat$x,newdat$y,col="red",lty=2,lwd=3)
lines(newdat$x,newdat$plo,col="blue",lty=2,lwd=2)
lines(newdat$x,newdat$phi,col="blue",lty=2,lwd=2)
lines(newdat$x,newdat$tlo,col="orange",lty=2,lwd=2)
lines(newdat$x,newdat$thi,col="orange",lty=2,lwd=2)
lines(newdat$x,newdat$bhi,col="darkgreen",lty=2,lwd=2)
lines(newdat$x,newdat$blo,col="darkgreen",lty=2,lwd=2)
legend("topleft",legend=c("Fitted line","Confidence interval","Prediction interval","Bootstrapped CI"),col=c("red","blue","orange","darkgreen"),lty=2,lwd=2,bty="n")

[Figure bootGLMM1: simulated data with the fitted line, confidence, prediction and bootstrapped intervals]

This looks pretty familiar, with the prediction interval always being wider than the confidence interval.
Now, in the help page for the predict.merMod function, the authors of the lme4 package write that bootMer should be the preferred method to derive confidence intervals from GLMMs. The idea there is to simulate new data from the model N times and compute some statistic of interest on each simulation. In our case we are interested in the bootstrapped fitted values, to get a confidence interval for the regression line. bb$t is a matrix with the observations in the columns and the different bootstrapped samples in the rows. To get the 95% CI for the fitted line we then need to take the [0.025*N, 0.975*N] values of the sorted bootstrapped values.
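
An equivalent and slightly more direct way to get these bounds is to let quantile() do the sorting:

#equivalent to the manual sorting above: take the 2.5% and 97.5% quantiles per column
bb_ci<-apply(bb$t,2,quantile,probs=c(0.025,0.975))
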

The bootstrapped CI falls pretty close to the “normal” CI, even though for each bootstrapped sample new random effect values were drawn (because use.u=FALSE by default in bootMer).
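
As a side note, if you instead want intervals conditional on the estimated random effect values, bootMer has a use.u argument; a quick sketch of this variant (not in the original code):

#variant: use.u=TRUE keeps the fitted random effect values in every simulation
bb_c<-bootMer(m,FUN=predFun,nsim=200,use.u=TRUE)
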

Now let’s turn to a more complex example, a Poisson GLMM with two crossed random effects:


#second case more complex design with two crossed RE and a poisson response
x<-runif(100,0,10)
f1<-gl(n = 10,k = 10)
f2<-as.factor(rep(1:10,10))
data<-data.frame(x=x,f1=f1,f2=f2)
modmat<-model.matrix(~x,data)

fixed<-c(-0.12,0.35)
rnd1<-rnorm(10,0,0.7)
rnd2<-rnorm(10,0,0.2)

mus<-modmat%*%fixed+rnd1[f1]+rnd2[f2]
data$y<-rpois(100,exp(mus))

m<-glmer(y~x+(1|f1)+(1|f2),data,family="poisson")

#for GLMMs we have to back-transform the prediction after adding/removing the SE
newdat<-data.frame(x=seq(0,10,length=20))
mm<-model.matrix(~x,newdat)
y<-mm%*%fixef(m)
pvar1 <- diag(mm %*% tcrossprod(vcov(m),mm))
tvar1 <- pvar1+VarCorr(m)$f1[1]+VarCorr(m)$f2[1]  ## must be adapted for more complex models
newdat <- data.frame(
  x=newdat$x,
  y=exp(y),
   plo = exp(y-1.96*sqrt(pvar1))
  , phi = exp(y+1.96*sqrt(pvar1))
  , tlo = exp(y-1.96*sqrt(tvar1))
  , thi = exp(y+1.96*sqrt(tvar1))
)

#second version with bootMer
predFun<-function(.) exp(mm%*%fixef(.)) 
bb<-bootMer(m,FUN=predFun,nsim=200)
bb_se<-apply(bb$t,2,function(x) x[order(x)][c(5,195)])
newdat$blo<-bb_se[1,]
newdat$bhi<-bb_se[2,]

#plot
plot(y~x,data)
lines(newdat$x,newdat$y,col="red",lty=2,lwd=3)
lines(newdat$x,newdat$plo,col="blue",lty=2,lwd=2)
lines(newdat$x,newdat$phi,col="blue",lty=2,lwd=2)
lines(newdat$x,newdat$tlo,col="orange",lty=2,lwd=2)
lines(newdat$x,newdat$thi,col="orange",lty=2,lwd=2)
lines(newdat$x,newdat$bhi,col="darkgreen",lty=2,lwd=2)
lines(newdat$x,newdat$blo,col="darkgreen",lty=2,lwd=2)
legend("topleft",legend=c("Fitted line","Confidence interval","Prediction interval","Bootstrapped CI"),col=c("red","blue","orange","darkgreen"),lty=2,lwd=2,bty="n")

[Figure bootGLMM2: Poisson GLMM data with the fitted line and the three intervals]

Again in this case the bootstrapped CI fell pretty close to the “normal” CI. We have seen here three different ways to derive intervals representing the uncertainty around the regression line (CI) and around the response points (PI). Choosing among them depends on what you want to see (what is the level of uncertainty around my fitted line vs. if I sample new observations, which values will they take), but also, for complex models, on computing power, as bootMer can take some time to run for GLMMs with many observations and a complex model structure.

Exploration of Functional Diversity indices using Shiny

[Figure: FD_pict]

Biological diversity (or biodiversity) is a complex concept with many different aspects to it, like species richness, evenness or functional redundancy. My field of research focuses on understanding the effects of changing plant diversity on higher trophic level communities but also on ecosystem function. Even if the founding papers of this area of research already hypothesized that all components of biodiversity could influence ecosystem function (see Fig. 1 in Chapin et al 2000), the first experimental results focused on taxonomic diversity (i.e. species richness, Shannon diversity, Shannon evenness …). More recently the importance of functional diversity as the main driver of changes in ecosystem function has been emphasized by several authors (e.g. Reiss et al 2009). The idea behind functional diversity is basically to measure a characteristic (a trait) of the organisms under study, for example the height of a plant or the body mass of an insect, and then derive an index of how diverse these trait values are at a particular site. A nice introduction to the topic is the chapter by Evan Weiher in Biological Diversity.

Just as taxonomic diversity has many different indices, so does functional diversity. Recent developments of a new multidimensional framework (Villéger et al 2008) and of an R package (FD) allow researchers to easily derive functional diversity indices from their datasets. But finding the right index for one's system can be rather daunting, and as several studies have shown there is no single best index (see Weiher, Ch. 13 in Biological Diversity) but rather a set of different indices, each showing a different facet of functional diversity, like functional richness, functional evenness or functional divergence.
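
As a small, purely hypothetical sketch (the trait and abundance values below are invented for illustration), deriving these indices with the FD package looks like this:

#a minimal sketch with invented traits (5 species) and abundances (3 sites)
library(FD)
traits<-data.frame(height=c(10,25,5,40,15),mass=c(0.2,1.1,0.1,2.5,0.6),
                   row.names=paste0("sp",1:5))
abund<-matrix(c(5,3,8,1,2,
                1,9,2,4,3,
                6,2,1,7,5),nrow=3,byrow=TRUE,
              dimnames=list(paste0("site",1:3),paste0("sp",1:5)))
fd<-dbFD(traits,abund)
fd$FRic;fd$FEve;fd$FDiv #functional richness, evenness and divergence per site
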

Here I show a little Shiny App to graphically explore, in a 2D trait space, the meaning of a set of functional diversity indices. The App is still in its infancy and many things could be added (e.g. variation in the trait distributions …), but here it is:

#load libraries
library(shiny)
library(MASS)
library(geometry)
library(plotrix)
library(FD)

#launch App
runGitHub("JenaExp",username = "Lionel68",subdir = "FD/Shiny_20150426")

All the code is available here: https://github.com/Lionel68/JenaExp/tree/master/FD

Literature:

Chapin III, F. Stuart, et al. “Consequences of changing biodiversity.” Nature 405.6783 (2000): 234-242.

Reiss, Julia, et al. “Emerging horizons in biodiversity and ecosystem functioning research.” Trends in ecology & evolution 24.9 (2009): 505-514.

Weiher, E. “A primer of trait and functional diversity.” Biological diversity, frontiers in measurement and assessment (2011): 175-193.

Villéger, Sébastien, Norman WH Mason, and David Mouillot. “New multidimensional functional diversity indices for a multifaceted framework in functional ecology.” Ecology 89.8 (2008): 2290-2301.

A function to help graphical model checks of lm and ANOVA

As always a more colourful version of this post is available on rpubs.

Even if LMs are very simple models at the basis of many more complex ones, LMs still have assumptions that, if not met, would render any interpretation of the models plainly wrong. In my field of research most people were taught to check ANOVA assumptions using tests like Levene & co. This is however not the best way to check if a model meets its assumptions, as p-values depend on sample size: with small sample sizes we will almost never reject the null hypothesis, while with big samples even small deviations will lead to significant p-values (discussion). As ANOVA and linear models are two different ways to look at the same model (explanation), we can check ANOVA assumptions using graphical checks on a linear model. In R this is easily done using plot(model), but people often ask me what amount of deviation makes me reject a model. One easy way to see if the model-checking graphs are off the charts is to simulate data from the model, fit the model to these newly simulated data and compare the graphical checks of the simulated data with those of the real data. If you cannot differentiate between the simulated and the real data then your model is fine; if you can, then try again!

EDIT: You can also make the conclusions from your visual inspection more rigorous following the protocols outlined in this article. (Thanks to Tim for the comment)

Below is a little function that implement this idea:

 

lm.test<-function(m){
  require(plyr)
  #the model frame
  dat<-model.frame(m)
  #the model matrix
  f<-formula(m)
  modmat<-model.matrix(f,dat)
  #the standard deviation of the residuals
  sd.resid<-sd(resid(m))
  #sample size
  n<-dim(dat)[1]
  #get the right-hand side of the formula  
  #rhs<-all.vars(update(f, 0~.))
  #simulate 8 response vectors from model
  ys<-lapply(1:8,function(x) rnorm(n,modmat%*%coef(m),sd.resid))
  #refit the models
  ms<-llply(ys,function(y) lm(y~modmat[,-1]))
  #put the residuals and fitted values in a list
  df<-llply(ms,function(x) data.frame(Fitted=fitted(x),Resid=resid(x)))
  #select a random number from 2 to 8
  rnd<-sample(2:8,1)
  #put the original data into the list
  df<-c(df[1:(rnd-1)],list(data.frame(Fitted=fitted(m),Resid=resid(m))),df[rnd:8])

  #plot 
  par(mfrow=c(3,3))
  l_ply(df,function(x){
    plot(Resid~Fitted,x,xlab="Fitted",ylab="Residuals")
    abline(h=0,lwd=2,lty=2)
  })

  l_ply(df,function(x){
    qqnorm(x$Resid)
    qqline(x$Resid)
  })

  out<-list(Position=rnd)
  return(out)
}

 

This function prints the two basic plots: one looking at the spread of the residuals around the fitted values, the other looking at the normality of the residuals. The function returns the position of the real model in the 3×3 window, counting from left to right and from top to bottom (i.e. position 1 is the upper left graph).

Let’s try the function:
 

#a simulated data frame of independent variables
dat<-data.frame(Temp=runif(100,0,20),Treatment=gl(n = 5,k = 20))
contrasts(dat$Treatment)<-"contr.sum"
#the model matrix
modmat<-model.matrix(~Temp*Treatment,data=dat)
#the coefficient
coeff<-rnorm(10,0,4)
#simulate response data
dat$Biomass<-rnorm(100,modmat%*%coeff,1)
#the model
m<-lm(Biomass~Temp*Treatment,dat)
#model check
chk<-lm.test(m)

 

[Figure lm.test1: residuals vs fitted values for the nine models]

[Figure lm.test2: normal QQ plots for the nine models]

Can you find which one is the real one? I could not; here is the answer:

 

chk
$Position
[1] 4

Happy and safe modelling!

Generating ANOVA-like table from GLMM using parametric bootstrap

[Figure: bootGLMM]

This article may also be found on RPubs: http://rpubs.com/hughes/63269

In the list of worst to best ways to test for effects in GLMMs, the FAQ at http://glmm.wikidot.com/faq states that parametric bootstrapping is among the best options. PBmodcomp in the pbkrtest package implements such parametric bootstrapping by comparing a full model to a null one. The function simulates data (the response vector) from the null model, fits these data with both the null and the full model, and derives a likelihood ratio test (LRT) statistic for each simulated dataset. We can then compare the observed LRT statistic to the null distribution generated from the many simulations and derive a p-value. The advantage of using such a method over the classical p-values derived from a chi-square test on the LRT statistic is that in the parametric bootstrap we do not assume any null distribution (like the chi-square) but instead derive our own null distribution from the model and the data at hand; we therefore do not assume that the LRT statistic is chi-square distributed. I have made a little function that wraps around the PBmodcomp function to compute bootstrapped p-values for each term in a model by adding them sequentially. This leads to the ANOVA-like tables that are typically obtained when one uses the command anova on a glm object.
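
To see what a single comparison looks like before wrapping it, here is a minimal sketch with PBmodcomp, using the grouseticks data from the example further below:

#a single full vs null comparison, testing cHEIGHT
library(lme4)
library(pbkrtest)
data(grouseticks)
full<-lmer(TICKS~cHEIGHT+(1|BROOD),grouseticks,REML=FALSE)
null<-lmer(TICKS~1+(1|BROOD),grouseticks,REML=FALSE)
PBmodcomp(full,null,nsim=200) #observed LRT and its bootstrapped p-value
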

#the libraries used
library(lme4)
library(arm)
library(pbkrtest)
#the function
anova_merMod<-function(model,rand,w=NULL,seed=round(runif(1,0,100),0),nsim=50){
  data<-model@frame
  if(!is.null(w)){
    data<-data[,-grep("(weights)",names(data))]
  }
  
  resp<-names(model.frame(model))[1]
  #generate a list of reduced model formula
  fs<-list()
  fs[[1]]<-as.formula(paste(resp,"~ 1 +",rand))
  nb_terms<-length(attr(terms(model),"term.labels"))
  if(nb_terms>1){
    for(i in 1:nb_terms){
      tmp<-c(attr(terms(model),"term.labels")[1:i],rand)
      fs[[i+1]]<-reformulate(tmp,response=resp)
    }      
  }

  #fit the reduced model to the data
  
  #get the family name, stripping the theta value appended by glmer.nb (e.g. "Negative Binomial(1.2)")
  fam<-family(model)[1]$family
  fam<-gsub("\\([0-9]*\\.[0-9]*\\)","",fam)
  if(fam=="gaussian"){
    m_fit<-lapply(fs,function(x) lmer(x,data,REML=FALSE))
  } else if(fam=="binomial"){
    m_fit<-lapply(fs,function(x) glmer(x,data,family=fam,weights=w))
  } else if(fam=="Negative Binomial"){
    m_fit<-lapply(fs,function(x) glmer.nb(x,data))
  } else if(fam=="poisson"){
    m_fit<-lapply(fs,function(x) glmer(x,data,family=fam))
  }
  
  if(nb_terms==1){
   m_fit[[2]]<-model
  }
  #compare nested model with one another and get LRT values (ie increase in the likelihood of the models as parameters are added)
  tab_out<-NULL
  
  for(i in 1:(length(m_fit)-1)){
    comp<-PBmodcomp(m_fit[[i+1]],m_fit[[i]],seed=seed,nsim=nsim)    
    term_added<-attr(terms(m_fit[[i+1]]),"term.labels")[length(attr(terms(m_fit[[i+1]]),"term.labels"))]
    #here are reported the bootstrapped p-values, ie not assuming any parametric distribution like chi-square to the LRT values generated under the null model
    #these p-values represent the number of time the simulated LRT value (under null model) are larger than the observe one
    tmp<-data.frame(term=term_added,LRT=comp$test$stat[1],p_value=comp$test$p.value[2])
    tab_out<-rbind(tab_out,tmp)
    print(paste("Variable ",term_added," tested",sep=""))
  }  
  print(paste("Seed set to:",seed))
  return(tab_out)  
}

You pass your GLMM model to the function together with the random part as a character string (see example below). If you fitted a binomial GLMM you also need to provide the weights as a vector, and you can then set a seed. The last argument is the number of simulations to do; it is set by default to 50 for rapid checking purposes, but if you want to report these results larger values (e.g. 1000 or 10000) should be used.

Let’s look at a simple LMM example:

data(grouseticks)
m<-lmer(TICKS~cHEIGHT+YEAR+(1|BROOD),grouseticks)
summary(m)
## Linear mixed model fit by REML ['lmerMod']
## Formula: TICKS ~ cHEIGHT + YEAR + (1 | BROOD)
##    Data: grouseticks
## 
## REML criterion at convergence: 2755
## 
## Scaled residuals: 
##    Min     1Q Median     3Q    Max 
## -3.406 -0.246 -0.036  0.146  5.807 
## 
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  BROOD    (Intercept) 87.3     9.34    
##  Residual             28.1     5.30    
## Number of obs: 403, groups:  BROOD, 118
## 
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   5.4947     1.6238    3.38
## cHEIGHT      -0.1045     0.0264   -3.95
## YEAR96        4.1910     2.2424    1.87
## YEAR97       -4.3304     2.2708   -1.91
## 
## Correlation of Fixed Effects:
##         (Intr) cHEIGH YEAR96
## cHEIGHT -0.091              
## YEAR96  -0.726  0.088       
## YEAR97  -0.714  0.052  0.518
anova_merMod(model=m,rand="(1|BROOD)")
## [1] "Variable cHEIGHT tested"
## [1] "Variable YEAR tested"
## [1] "Seed set to: 63"
##      term   LRT p_value
## 1 cHEIGHT 14.55 0.01961
## 2    YEAR 14.40 0.01961

The resulting table shows, for each term in the model, the likelihood ratio test statistic, which is basically the decrease in deviance when going from the null to the full model, and the p-value; you may look at the PBtest line in the details of ?PBmodcomp to see how it is computed.

Now let’s see how to use the function with binomial GLMM:

#simulate some binomial data
x1<-runif(100,-2,2)
x2<-runif(100,-2,2)
group<-gl(n = 20,k = 5)
rnd.eff<-rnorm(20,mean=0,sd=1.5)
p<-1+0.5*x1-2*x2+rnd.eff[group]+rnorm(100,0,0.3)
y<-rbinom(n = 100,size = 10,prob = invlogit(p))
prop<-y/10
#fit a model
m<-glmer(prop~x1+x2+(1|group),family="binomial",weights = rep(10,100))
summary(m)
## Generalized linear mixed model fit by maximum likelihood (Laplace
##   Approximation) [glmerMod]
##  Family: binomial  ( logit )
## Formula: prop ~ x1 + x2 + (1 | group)
## Weights: rep(10, 100)
## 
##      AIC      BIC   logLik deviance df.resid 
##    288.6    299.1   -140.3    280.6       96 
## 
## Scaled residuals: 
##    Min     1Q Median     3Q    Max 
## -2.334 -0.503  0.181  0.580  2.466 
## 
## Random effects:
##  Groups Name        Variance Std.Dev.
##  group  (Intercept) 1.38     1.18    
## Number of obs: 100, groups:  group, 20
## 
## Fixed effects:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept)    0.748      0.287    2.61   0.0092 ** 
## x1             0.524      0.104    5.02  5.3e-07 ***
## x2            -2.083      0.143  -14.56  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Correlation of Fixed Effects:
##    (Intr) x1    
## x1  0.090       
## x2 -0.205 -0.345
#takes some time
anova_merMod(m,rand = "(1|group)",w = rep(10,100))
## [1] "Variable x1 tested"
## [1] "Variable x2 tested"
## [1] "Seed set to: 98"
##   term      LRT p_value
## 1   x1   0.0429 0.80392
## 2   x2 502.0921 0.01961

For binomial models, the model must be fitted with proportion data, and a vector of weights (i.e. the number of binomial trials) must be passed to the 'w' argument. Some warning messages may pop up at the end of the function; these come from convergence failures in PBmodcomp and do not affect the results. You may read the article on the pbkrtest package, http://www.jstatsoft.org/v59/i09/, to better understand where they come from.
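
As a side note (not used by the function above), glmer also accepts a two-column (successes, failures) response, which encodes the number of trials directly:

#the same model as above with a two-column response instead of proportions + weights
m2<-glmer(cbind(y,10-y)~x1+x2+(1|group),family="binomial")
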

Happy modeling, and as Ben Bolker says: “When all else fails, don’t forget to keep p-values in perspective: http://www.phdcomics.com/comics/archive.php?comicid=905”

Evenness effect on ecosystem functioning

[Figure: eve_ef]

Most biodiversity ecosystem function (BEF) experiments focus on one aspect of diversity, species richness, with some functional composition flavour added to it: the treatments manipulate the number of species and the number of broad functional groups occurring. The issue there is that diversity is not only species richness; the classical definition of diversity tells us that it is the combination of richness and evenness that makes diversity (Tuomisto 2012). Diversity is therefore the number of biological units present at a given location at a given time, taking into account how the individuals are distributed amongst the species.

Some definitions:

A community with 10 species where 99% of the individuals belong to one species is thought to be less diverse than a community with 10 species and an equal repartition of the individuals between the species. The difference in the distribution of the total abundance between the different species in a community is called evenness. A community where all species have the same abundance will get the highest possible evenness value, while a community where most of the individuals belong to one species will have the lowest evenness value. There is a whole bunch of indices that can be used to represent these three different components: species richness, evenness and diversity; people interested in knowing how to measure them should read Magurran and McGill 2011.
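
As a small sketch of one such index, here is Pielou's evenness J = H'/ln(S) computed on two invented communities:

#Pielou's evenness: Shannon diversity divided by its maximum ln(S)
pielou_J<-function(n){
  p<-n/sum(n)
  -sum(p*log(p))/log(length(n))
}
pielou_J(rep(10,10)) #10 species, equal abundances: J = 1
pielou_J(c(91,rep(1,9))) #10 species, one strongly dominant: J much lower
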

In this post I will talk about the ways in which changes in a community's evenness can affect its ecosystem function.

The early literature on BEF already highlighted the potential impacts of changing evenness on the function of the ecosystem (Chapin et al 2000), as changes in evenness will impact the distribution of the functional trait values (see figure 1 in Hillebrand et al 2008).

A community's evenness will shift much earlier than its species richness: before a species is lost, its abundance is likely to decline gradually, leading to variation in evenness if this decline does not affect all species in the system uniformly (i.e. if all species lose 5 individuals between time 1 and time 2, evenness will be unchanged). Tracking community evenness therefore gives us some kind of early warning signal of what is going on in the community.

We can sketch three different scenarios for how changes in evenness will affect ecosystem function:

– if the function arises from synergies between the species in the community (i.e. complementarity, mutualism …), a reduction in evenness will lead to a decline in the function rate as the beneficial interactions between the species decline.

– if the function rate is dominated by the output of one species (i.e. one dominant species) and this species performs below average for the focal function, then a decrease in evenness due to the increased dominance of this species will lead to a decreasing function rate.

– if the function rate is dominated by the output of one species performing above average (a superproductive species), then the decrease in evenness caused by the increased dominance of this species will lead to an increase in the function.

Empirical results on the relation between evenness and ecosystem function are still rather rare; Wilsey and Potvin (2000) found in a grassland system that plots with higher evenness showed higher variation in the light-capturing strategies used by the plants and thus higher biomass production.

Evenness and dominance are two closely linked concepts that often correlate with one another. Hillebrand et al (2008) reviewed some of the literature investigating links between evenness or dominance and ecosystem function; all kinds of relations have already been found, i.e. negative, positive or none.

So evenness effects on ecosystem function may take different forms and will be sensitive to the spatial and temporal scale under study, but knowledge of these relations will help us predict how ecosystem functions will respond to the continuous shifts in species abundances under anthropogenic pressures.

Literature:

Chapin III, F. Stuart, et al. “Consequences of changing biodiversity.” Nature 405.6783 (2000): 234-242.

Hillebrand, Helmut, Danuta M. Bennett, and Marc W. Cadotte. “Consequences of dominance: a review of evenness effects on local and regional ecosystem processes.” Ecology 89.6 (2008): 1510-1520.

Magurran, Anne E., and Brian J. McGill, eds. Biological diversity: frontiers in measurement and assessment. Vol. 12. Oxford: Oxford University Press, 2011.

Tuomisto, Hanna. “An updated consumer’s guide to evenness and related indices.” Oikos 121.8 (2012): 1203-1218.

Wilsey, Brian J., and Catherine Potvin. “Biodiversity and ecosystem functioning: importance of species evenness in an old field.” Ecology 81.4 (2000): 887-892.

Using and interpreting different contrasts in linear models in R

When building a regression model with categorical variables with more than two levels (e.g. “Cold”, “Freezing”, “Warm”), R internally does some transformation to be able to compute regression coefficients: it turns your categorical variable into a set of contrasts, and the number of contrasts is the number of levels in the variable (3 in the example above) minus 1. Here I will present three ways to set the contrasts; depending on your research question and your variables, one might be more appropriate than the others.
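
The contrast matrices themselves can be inspected directly, which helps to see what each coding does before we use it in a model:

 # the three contrast matrices for a factor with three levels
 contr.treatment(3)  # default: each level against a baseline
 contr.helmert(3)  # each level against the mean of the previous ones
 contr.sum(3)  # each level against the overall mean (the last level gets no column of its own)
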

 # setting seed so that numerical results stay stable
 set.seed(25)
 # let's imagine an experiment which measure plant biomass based on various
 # levels of nutrient added to the medium first case one treatments three
 # levels
 f <- gl(n = 3, k = 20, labels = c("control", "low", "high"))
 # with treatments contrasts (default)
 mat <- model.matrix(~f, data = data.frame(f = f))
 # this tell us which contrast has been used to generate the model matrix
 attr(mat, "contrasts")
 
## $f
 ## [1] "contr.treatment"
# simulate some ys
 beta <- c(12, 3, 6)  #these are the simulated regression coefficient
 y <- rnorm(n = 60, mean = mat %*% beta, sd = 2)
 m <- lm(y ~ f)
 summary(m)
## 
## Call:
## lm(formula = y ~ f)
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -4.350 -1.611  0.161  1.162  5.201 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)   11.498      0.463   24.84  < 2e-16 ***
## flow           3.037      0.655    4.64  2.1e-05 ***
## fhigh          6.163      0.655    9.41  3.3e-13 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 2.07 on 57 degrees of freedom
## Multiple R-squared:  0.609, Adjusted R-squared:  0.595 
## F-statistic: 44.3 on 2 and 57 DF,  p-value: 2.45e-12

The first coefficient, the intercept, is the estimated average for the baseline group, which in our case is the control group (in the control group the biomass is estimated to be 11.49 on average). The second coefficient, “flow”, is the estimated difference between the average in the baseline group and the average in the “low” group (this group has an increase of 3.03 in biomass compared to the control group). Similarly, the third coefficient is the difference between the average in the baseline and the “high” group (this group has an increase of 6.16 in biomass compared to the control group).
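
A quick sanity check of this reading against the raw group means (my addition, not in the original post):

 # the treatment-contrast coefficients line up with the raw group means
 grp <- tapply(y, f, mean)
 grp[1]  # close to the intercept
 grp[2] - grp[1]  # close to flow
 grp[3] - grp[1]  # close to fhigh
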

 # plot the results
 plot(y ~ rep(1:3, each = 20), xaxt = "n", xlab = "Treatment")
 axis(1, at = c(1, 2, 3), labels = levels(f))
 points(c(1, 2, 3), c(coef(m)[1], coef(m)[2:3] + coef(m)[1]), pch = 16, cex = 2)
 

[Figure c1: biomass per treatment level with the estimated group means]

These are the default contrasts; now let’s turn to other contrast options:

# now with helmert contrasts
mat <- model.matrix(~f, data = data.frame(f = f), contrasts.arg = list(f = "contr.helmert"))
attr(mat, "contrasts")
## $f
## [1] "contr.helmert"
# simulate the ys
beta <- c(5, 3, 2)
y <- rnorm(n = 60, mean = mat %*% beta, sd = 1.5)
# model
m <- lm(y ~ f, contrasts = list(f = "contr.helmert"))  #there we tell the model to use helmert contrast to build the model
summary(m)
## 
## Call:
## lm(formula = y ~ f, contrasts = list(f = "contr.helmert"))
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -2.281 -0.973 -0.202  0.752  2.986 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    4.756      0.184    25.9   <2e-16 ***
## f1             3.116      0.225    13.9   <2e-16 ***
## f2             1.949      0.130    15.0   <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.42 on 57 degrees of freedom
## Multiple R-squared:  0.88,   Adjusted R-squared:  0.876 
## F-statistic:  209 on 2 and 57 DF,  p-value: <2e-16

Now the meaning of the various coefficients is different. The intercept is the average biomass over all the levels (across control, low and high). f1 compares the average of the first level (control) with the average of the second one (low), and f2 compares the average of control and low taken together with the average of the high treatment. To put this differently, if we pool the data from the control and low treatments and compare their average value to the average value of the plants in the high treatment, we get the comparison encoded by f2. Note that with R’s contr.helmert the coefficients are scaled versions of these differences: f1 is half the low-minus-control difference, and f2 is a third of the difference between high and the control/low average, so the raw differences in group means here are 2*3.12 ≈ 6.2 and 3*1.95 ≈ 5.8. This type of contrast is a bit harder to interpret but is well suited for variables where the levels have an order, e.g. (“0”, “>0 and <5”, “>5 and <10” …), where we can gradually compare the successive levels.
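
A quick check of this against the raw group means makes the scaling visible:

 # the Helmert coefficients against the raw group means (balanced design)
 grp <- tapply(y, f, mean)
 mean(grp)  # close to the intercept
 (grp[2] - grp[1])/2  # close to f1
 (grp[3] - mean(grp[1:2]))/3  # close to f2
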

As a note, in most applied cases we can set the contrasts that will be used by the model with contrasts(factor) <- "some_contrasts". Let’s turn to the last contrast type, the effect (or deviation) coding:

# now with sum contrasts, let's spice up a little bit and introduce an
# interaction with a continuous variables
x <- runif(60, -2, 2)
levels(f) <- c("North", "South", "Center")  #let's make different level which cannot easily be given an order or a baseline
mat <- model.matrix(~x * f, data = data.frame(f = f), contrasts.arg = list(f = "contr.sum"))
attr(mat, "contrasts")
 # simulate the ys
beta <- c(5, -2, 3, 2, 1, -4)
y <- rnorm(n = 60, mean = mat %*% beta, sd = 1.5)
# model
m <- lm(y ~ x * f, contrasts = list(f = "contr.sum"))
summary(m)
## 
## Call:
## lm(formula = y ~ x * f, contrasts = list(f = "contr.sum"))
## 
## Residuals:
##    Min     1Q Median     3Q    Max 
## -3.271 -1.021 -0.165  0.867  3.914 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)    5.057      0.207   24.42  < 2e-16 ***
## x             -1.869      0.185  -10.13  4.4e-14 ***
## f1             2.699      0.295    9.16  1.4e-12 ***
## f2             2.351      0.293    8.03  8.8e-11 ***
## x:f1           0.954      0.264    3.61  0.00067 ***
## x:f2          -4.269      0.268  -15.92  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.58 on 54 degrees of freedom
## Multiple R-squared:  0.922,  Adjusted R-squared:  0.914 
## F-statistic:  127 on 5 and 54 DF,  p-value: <2e-16

This type of contrast is useful when there is no natural way to set a baseline or an ordering in the different levels of the variable. I changed the names of the levels to illustrate what I mean: let’s imagine that we had sampled our plants in three different localities; it is hard to determine in this context what the baseline should be, and the deviation coding is a nice way to model this type of data. The intercept in this context is the overall mean across the levels (as in the helmert contrasts), so overall the plant biomass was 5.05. The second coefficient is the average slope between the biomass and the x variable: if we increase x by one, the plant biomass decreases by 1.87 across the geographical gradient. f1 is the difference between the overall mean and the mean in the north locality; similarly f2 is the difference between the overall mean and the south locality. To get the estimated average value at the center locality we have to do:

coef(m)[1] - coef(m)[3] - coef(m)[4]
## (Intercept)
## 0.00678

The interaction coefficients are the deviations of the slope within a group from the overall slope. Therefore in the north, if we increase x by 1, we decrease the biomass by 1.87-0.95 = 0.92; similarly the slope in the south is -1.87+(-4.27) = -6.14, and in the center: -1.87-0.95-(-4.27) = +1.45. Around each of these coefficients we get some assessment of the significance of the difference between the overall mean and the various groups. So far I could not find a way to assess the significance of the difference between the overall mean and the last group …
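
One possible workaround (my suggestion, not from the original post) is to reorder the factor levels so that the group of interest is no longer the one dropped by contr.sum, and refit:

 # reorder the levels so Center gets its own coefficient, then refit
 f_re <- factor(f, levels = c("Center", "North", "South"))
 m_re <- lm(y ~ x * f_re, contrasts = list(f_re = "contr.sum"))
 summary(m_re)  # f_re1 is now the Center deviation from the overall mean, with its SE
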

Let’s do a nice figure of this:

 # a nice plot
 plot(y ~ x, xlab = "Temperature", ylab = "Biomass", pch = 16, col = rep(c("orange",
 "violetred", "coral"), each = 20))
 abline(a = coef(m)[1], b = coef(m)[2], lwd = 2, lty = 1, col = "blue")
 abline(a = coef(m)[1] + coef(m)[3], b = coef(m)[2] + coef(m)[5], lwd = 2, lty = 2,
 col = "orange")
 abline(a = coef(m)[1] + coef(m)[4], b = coef(m)[2] + coef(m)[6], lwd = 2, lty = 2,
 col = "violetred")
 abline(a = coef(m)[1] - coef(m)[3] - coef(m)[4], b = coef(m)[2] - coef(m)[5] -
 coef(m)[6], lwd = 2, lty = 2, col = "coral")
 legend("topright", legend = c("Average", "North", "South", "Center"), col = c("blue",
 "orange", "violetred", "coral"), pch = 16, lty = c(1, 2, 2, 2), bty = "n")
 

[Figure c2: biomass vs temperature with the average and per-locality regression lines]

That’s it for today. This page was greatly helpful to me in understanding the meaning of the various contrasts: http://www.unc.edu/courses/2006spring/ecol/145/001/docs/lectures/lecture26.htm, and the wikipedia page is also rather nice on this: https://en.wikipedia.org/wiki/Categorical_variable

 

What roles should ecologists have in society?

[Figure: advocacy]

This post is a reflection on the roles that ecologists should have in society; it was sparked by some discussions I had at the BES/SFE meeting in December. This is one of my first tries at essay-like blogging, away from my beloved R code.

First off, I’d like to start with a definition of “ecologist”, a concept that might not be so clear in people’s minds. At least in French (and maybe in other languages) the word for someone doing ecology as a science and the word for political/militant ecology is the same (écologistes). I would like to stress here the difference between the two: the first refers to people doing ecology as a science, trying to answer questions using repeatable protocols and interpreting the results in the light of previous work. The second refers to people fighting for values that are dear to them, trying to get onto the political agenda policies that help reach these goals. Throughout this post the word ecologist will refer to the first meaning of the word (the scientist).

In recent years there have been more and more calls for ecologists to do policy-relevant science. The creation of the IPBES is also a formidable opportunity for ecologists to engage in assessments that will hopefully guide policy-making at various levels (international to local). What are the different roles that an ecologist could take at this interface between science and policy, but also between science and society?

The first role, which most ecologists feel confident about, is being an expert: your knowledge in a certain area is acknowledged by your peers, and when policy-makers need some information on a particular topic (e.g. a government would like to protect a certain bird species) they call you to provide them with some knowledge. In this context the ecologist gives some information on the possible outcomes of the various alternatives on the table and then goes back to his/her research. Under this scheme the ecologist does not rank the various alternatives, but just outlines their likely consequences and benefits; it is then for the policy-makers and the other members of civil society present at the discussion to decide what the best course of action would be, based on this newly-acquired knowledge and some set of values.

A first note aside about using words: some words carry values and should be avoided by scientists talking to society or policy-makers as experts (e.g. poor, good …). Values come from a certain perspective, and if the perspective is clearly defined (e.g. we want to maintain 60% of bird species in this area at any cost) then using value-laden words may be ok (e.g. removing wind farms will be beneficial for this policy). However in a general context it can be misleading to use such words (e.g. biodiversity is good) because they call for a desired outcome that may or may not be explicitly stated. In a broader context the use of neutral words like increase or vary is to be preferred.

The second role goes one step further: not only outlining the likely consequences of the various policy alternatives but also stating which of these alternatives is the best; this is called advocating. Here a serious issue arises: what one person defines as the best alternative does not depend on scientific knowledge but on personal, normative values (e.g. forests are inherently better than urban habitats for biodiversity). This role can still make sense: ecologists who have worked on a certain system for 20-30 years have developed a deep understanding of it, so why should they not state what they feel is best? Even more provocatively, we could say that it is some kind of duty for ecologists to state what is best, as they are the ones who know best, and if they remain silent suboptimal options could be taken up as other voices make a stronger case: “It would be irresponsible not to share views based on deep experience with the public and decision makers” (Mooney and Ehrlich, see literature below).

A second note aside, on the scientist as a perfectly neutral person: the idea that scientists in general are neutral and able to give information without their opinions transpiring into it is most certainly untrue. We are human beings, with constraints affecting how we work and how we communicate and interpret our results. More importantly, we all have values and must be able, as any citizen, to fight for them. As a note within a note, this is actually the nice thing about turning ecology from a qualitative science into a quantitative one: numbers will tend not to reflect one’s opinions and feelings (even if these may be very important in understanding your system/data).

My point of view is to be both expert and advocate, but in different settings, making clear which role we are endorsing when making a statement: engage in panel discussions with policy-makers as an expert and provide information on the various alternatives, but also engage with society, and when advocating for a particular outcome clearly state it and acknowledge that you are no longer an expert but an advocate, whose values are not inherently better than others’. The trick here would be to still be invited to the discussions with the policy-makers; I do not know how to solve this …

Finding the right balance between neutral expert and engaged citizen will be one of the challenges of future ecologists. A quote to finish: “Why should I study science if I can’t personally influence the direction of events?” (students asking Roughgarden a tricky question).

Some food for thought:
https://www.nceas.ucsb.edu/files/projects/resources/ecoessay/wagner/index.html
http://bigthink.com/age-of-engagement/do-scientists-have-a-special-responsibility-to-engage-in-political-advocacy
http://bioscience.oxfordjournals.org/content/53/2/170.full
http://onlinelibrary.wiley.com/doi/10.1111/j.1523-1739.2006.00639.x/
http://www.esajournals.org/doi/abs/10.1890/12-1695.1
http://link.springer.com/article/10.1007/s10584-014-1108-1
