Crossed and Nested hierarchical models with STAN and R

Below I will expand on previous posts on Bayesian regression modelling using STAN (see previous instalments here, here, and here). Today's topic is modelling crossed and nested designs in hierarchical models using STAN in R.

Crossed designs appear when we have more than one grouping variable and when data are recorded for each combination of the grouping variables. For example, say that we measured the growth of a fungus on different Petri dishes and that we took several samples from each dish. In this example we have two grouping variables: the Petri dish and the sample. Since we have observations for each combination of the two grouping variables, we are in a crossed design. We can model this using a hierarchical model with an intercept representing the average growth, a parameter representing the deviation from this average for each Petri dish, and an additional parameter representing the deviation from the average for each sample. Below is the corresponding model in STAN code:

/*A simple example of a crossed hierarchical model
*based on the Penicillin data from the lme4 package
*/
data {
  int<lower=0> N;//number of observations
  int<lower=0> n_sample;//number of samples
  int<lower=0> n_plate;//number of plates
  int<lower=1,upper=n_sample> sample_id[N];//vector of sample indices
  int<lower=1,upper=n_plate> plate_id[N];//vector of plate indices
  vector[N] y;//the response variable
}
parameters {
  vector[n_sample] gamma;//vector of sample deviations from the average
  vector[n_plate] delta;//vector of plate deviations from the average
  real<lower=0> mu;//average diameter value
  real<lower=0> sigma_gamma;//standard deviation of the gamma coeffs
  real<lower=0> sigma_delta;//standard deviation of the delta coeffs
  real<lower=0> sigma_y;//standard deviation of the observations
}
transformed parameters {
  vector[N] y_hat;

  for (i in 1:N)
    y_hat[i] = mu + gamma[sample_id[i]] + delta[plate_id[i]];
}
model {
  //priors on the scale coefficients
  //weakly informative priors, see section 6.9 in the STAN user guide
  sigma_gamma ~ cauchy(0,2.5);
  sigma_delta ~ cauchy(0,2.5);
  sigma_y ~ gamma(2,0.1);
  //get the sample and plate level deviations
  gamma ~ normal(0, sigma_gamma);
  delta ~ normal(0, sigma_delta);
  y ~ normal(y_hat, sigma_y);
}
generated quantities {
  //sample predicted values from the model for posterior predictive checks
  real y_rep[N];
  for(n in 1:N)
    y_rep[n] = normal_rng(y_hat[n],sigma_y);
}
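Before fitting, it can help to simulate what such crossed data look like; below is a minimal base-R sketch (the dish and sample counts and the standard deviations are made up):

```r
set.seed(42)
n_plate <- 6; n_sample <- 4            # hypothetical counts
mu <- 23                               # average growth
delta <- rnorm(n_plate, 0, 0.8)        # per-dish deviations
gamma <- rnorm(n_sample, 0, 0.6)       # per-sample deviations
# one observation for every combination of the two grouping variables
d <- expand.grid(plate = 1:n_plate, sample = 1:n_sample)
d$y <- mu + delta[d$plate] + gamma[d$sample] + rnorm(nrow(d), 0, 0.3)
# crossed design: every sample appears on every plate
table(d$plate, d$sample)
```

Each row pairs one dish with one sample, which is exactly the crossed structure that the gamma and delta vectors in the model capture.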

After saving the STAN model above into a .stan file, we turn to R, using the Penicillin dataset from the lme4 package as a (real-life) example:

library(rstan)#for model fitting
library(shinystan)#for great model viz
library(ggplot2)#for great viz in general

data("Penicillin",package="lme4")
#look if we have a sample for each combination
xtabs(~sample+plate,Penicillin)
#create the plate and sample index
plate_id<-as.numeric(Penicillin$plate)
sample_id<-as.numeric(Penicillin$sample)

#fit the model
m_peni<-stan(file = "crossed_penicillin.stan",
             data = list(N=nrow(Penicillin),n_sample=max(sample_id),
                         n_plate=max(plate_id),sample_id=sample_id,
                         plate_id=plate_id,y=Penicillin$diameter))


The model seems to fit pretty nicely: all chains converged for all parameters (Rhat around 1), we have decent posterior distributions (top panel in the figure below), and also a good correlation between observed and fitted data (bottom panel of the figure below).


In a next step we can look at the deviation from the average diameter for each sample and each plate (Petri dish):

#make caterpillar plot of the sample and plate deviations
#(sketch; df_coef is assumed to hold posterior medians and 95% CrI per coefficient)
ggplot(df_coef,aes(x=Coef,y=Median,ymin=LCI,ymax=UCI))+
 geom_point()+geom_linerange()+coord_flip()+labs(y="Regression parameters")


We can compare this figure to Figure 2.2 here, where the same model was fitted to the data using lmer.

I now turn to nested designs. Nested designs occur when there is more than one grouping variable and when there is a hierarchy in these variables, with each category of a lower-level variable being present in only one category of the higher-level variable. For example, if we measured student scores within classes within schools, we would have a nested hierarchical design. In the following I will use the Arabidopsis dataset from the lme4 package. Arabidopsis plants from different regions (Netherlands, Spain and Sweden) and from different populations within these regions (nested design) were collected, and the researchers looked at the effect of herbivory and nutrient addition on the number of fruits produced per plant. Below is the corresponding STAN code:

/*Nested regression example
*Three-levels with varying-intercept
*based on:
*and applied to the Arabidopsis data from lme4
*/
data {
  int<lower=1> N; //number of observations
  int<lower=1> P; //number of populations
  int<lower=1> R; //number of regions

  int<lower=1,upper=P> PopID[N]; //population ID
  int<lower=1,upper=R> PopWithinReg[P]; //index of the region each population belongs to

  int<lower=0> Fruit[N]; //the response variable
  real AMD[N]; //predictor variable, whether the apical meristem was unclipped (0) or clipped (1)
  real nutrient[N]; //predictor variable, whether nutrient levels were control (0) or higher (1)
}
parameters {
  //regression slopes
  real beta_0; //intercept
  real beta_1; //effect of clipping the apical meristem on the number of fruits
  real beta_2; //effect of increasing nutrient levels on the number of fruits

  //the deviations from the intercept at the different levels
  real dev_pop[P]; //deviation between the populations within a region
  real dev_reg[R]; //deviation between the regions

  //the standard deviations of the deviations
  real<lower=0> sigma_pop;
  real<lower=0> sigma_reg;
}
transformed parameters {
  //varying intercepts
  real beta_0pop[P];
  real beta_0reg[R];

  //the linear predictor for the observations (log scale, so no lower bound)
  real lambda[N];

  //compute the varying intercept at the region level
  for(r in 1:R){
    beta_0reg[r] = beta_0 + dev_reg[r];}

  //compute the varying intercept at the population-within-region level
  for(p in 1:P){
     beta_0pop[p] = beta_0reg[PopWithinReg[p]] + dev_pop[p];}

  //the linear predictor
  for(n in 1:N){
     lambda[n] = beta_0pop[PopID[n]] + beta_1 * AMD[n] + beta_2 * nutrient[n];}
}
model {
  //weakly informative priors on the slopes
  beta_0 ~ cauchy(0,5);
  beta_1 ~ cauchy(0,5);
  beta_2 ~ cauchy(0,5);

  //weakly informative priors on the standard deviations
  sigma_pop ~ cauchy(0,2.5);
  sigma_reg ~ cauchy(0,2.5);

  //distribution of the varying intercepts
  dev_pop ~ normal(0,sigma_pop);
  dev_reg ~ normal(0,sigma_reg);

  Fruit ~ poisson_log(lambda);
}
generated quantities {
  //sample predicted values from the model for posterior predictive checks
  int<lower=0> fruit_rep[N];
  for(n in 1:N)
    fruit_rep[n] = poisson_log_rng(lambda[n]);
}

I decided to use a Poisson distribution as the response is a count variable. The only “tricky” part is the index linking a particular population to its specific region (PopWithinReg). In this model we assume that variation between populations within regions affects only the average number of fruits, not the plants' responses to the simulated herbivory (AMD) or to increased nutrient levels. In other words, population within region is an intercept-only “random effect”. We turn back to R:

#generate the IDs
pop_id <- as.numeric(Arabidopsis$popu)
pop_to_reg <- as.numeric(factor(substr(levels(Arabidopsis$popu),3,4)))
#create the predictor variables
amd <- ifelse(Arabidopsis$amd=="unclipped",0,1)
nutrient <- ifelse(Arabidopsis$nutrient==1,0,1)

#fit the model
m_arab <- stan("nested_3lvl.stan",
               data=list(N=625,P=9,R=3,PopID=pop_id,PopWithinReg=pop_to_reg,
                         Fruit=Arabidopsis$total.fruits,AMD=amd,nutrient=nutrient))
#check the model
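The pop_to_reg line deserves a note: it builds the nesting index, mapping each population (lower level) to its region (higher level) via the region code embedded in the population label. A standalone toy example with made-up labels patterned after the Arabidopsis popu coding:

```r
# hypothetical population labels: number, dot, two-letter region code
popu <- factor(c("1.SP", "1.SP", "2.SP", "3.NL", "3.NL", "8.SW"))
pop_id <- as.numeric(popu)                  # population index, one per observation
reg_code <- substr(levels(popu), 3, 4)      # region code embedded in each label
pop_to_reg <- as.numeric(factor(reg_code))  # one entry per population: its region
pop_to_reg
```

The resulting vector has one entry per population level, which is exactly what the STAN data block expects for PopWithinReg.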

RStan warns us that we had some divergent iterations; we could correct this using a non-centered re-parametrization (see this post and the STAN user guide). More worrisome is the discrepancy between the posterior predictive data and the observed ones:


We can explore these errors for each population within each region:

mcmc_arab <- extract(m_arab)
#get the replicated data
fit_arab <- mcmc_arab$fruit_rep
#average across MCMC samples
Arabidopsis$Fit <- apply(fit_arab,2,mean)
#plot obs vs fitted data across groups (sketch)
ggplot(Arabidopsis,aes(x=total.fruits,y=Fit))+geom_point()+
 geom_abline(intercept=0,slope=1)+facet_wrap(~reg)


The model predicts basically four values, one for each combination of the two treatment variables. The original data are far more dispersed than the fitted ones; one could try a negative binomial distribution, while also letting the treatment effects vary between the populations within the regions …
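To see why a negative binomial could help: with Fruit ~ poisson_log(lambda) the rate is exp(lambda) and the variance is forced to equal the mean, while a negative binomial with the same mean has variance mu + mu^2/size. A base-R sketch (the lambda and size values are arbitrary):

```r
set.seed(9)
lambda <- 1.6                            # an arbitrary log-scale linear predictor
mu <- exp(lambda)                        # the model's expected count
y_pois <- rpois(1e4, mu)                 # what poisson_log implies
y_nb <- rnbinom(1e4, mu = mu, size = 1)  # same mean, overdispersed alternative
c(mean(y_pois), mean(y_nb))              # both close to mu
c(var(y_pois), var(y_nb))                # roughly mu versus mu + mu^2/size
```

The size parameter controls the extra dispersion; as size grows, the negative binomial converges back to the Poisson.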

That’s it for this post; for further examples, the STAN wiki is a great source of regression models.




Hierarchical models with RStan (Part 1)

Real-world data sometimes show complex structure that calls for the use of special models. When data are organized in more than one level, hierarchical models are the most relevant tool for analysis. One classic example is when you record student performance from different schools: you might record student-level variables (age, ethnicity, social background) as well as school-level variables (number of students, budget). In this post I will show how to fit such models using RStan. As there is much to say and try on such models, I restrict myself here to a rather simple example and will extend this to more complex situations in later posts.

A few words about RStan:

If you don’t know anything about STAN and RStan, make sure to check out this webpage. In a few words, RStan is an R interface to the STAN programming language that lets you fit Bayesian models. A classical workflow looks like this:

  1. Write a STAN model and save it in a .stan file
  2. In R fit the model using the RStan package passing the model file and the data to the stan function
  3. Check model fit, a great way to do it is to use the shinystan package

First example with simulated data:

Say that we recorded the response of 10 different plant species to rising temperature and nitrogen concentration. We measured the biomass of 10 individuals per species along a gradient of both temperature and nitrogen concentration, and we would like to know how these two variables affect plant biomass. In a hierarchical model we let the regression parameters vary between the species; this means that, for example, species A might have a more positive slope between temperature and biomass than species B. Note however that we do not fit a separate regression to each species; rather, the regression parameters for the different species are themselves fitted to a statistical distribution.

In mathematical terms this example can be written:

\mu_{ij} = \beta_{0j} + \beta_{1j} * Temperature_{ij} + \beta_{2j} * Nitrogen_{ij}

\beta_{0j} \sim N(\gamma_{0},\tau_{0})

… (same for all regression coefficients) …

y_{ij} \sim N(\mu_{ij},\sigma)

For observations i: 1 … N and species j: 1 … J.

This is how such a model looks in STAN:

/*A simple example of a hierarchical model*/
data {
  int<lower=1> N; //the number of observations
  int<lower=1> J; //the number of groups
  int<lower=1> K; //number of columns in the model matrix
  int<lower=1,upper=J> id[N]; //vector of group indices
  matrix[N,K] X; //the model matrix
  vector[N] y; //the response variable
}
parameters {
  vector[K] gamma; //population-level regression coefficients
  vector<lower=0>[K] tau; //the standard deviation of the regression coefficients

  vector[K] beta[J]; //matrix of group-level regression coefficients
  real<lower=0> sigma; //standard deviation of the individual observations
}
model {
  vector[N] mu; //linear predictor
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in STAN user guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in STAN user guide
  for(j in 1:J){
   beta[j] ~ normal(gamma,tau); //fill the matrix of group-level regression coefficients
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //compute the linear predictor using the relevant group-level regression coefficients
  }
  y ~ normal(mu,sigma);
}

You can copy/paste the code into an empty text editor and save it as a .stan file.

Now we turn to R:

#load the libraries
library(rstan)
#simulate some data
set.seed(20161110)
N<-100 #sample size
J<-10 #number of plant species
id<-rep(1:J,each=10) #index of plant species
K<-3 #number of regression coefficients
gamma<-c(2,-1,3) #population-level regression coefficients (values assumed, consistent with the estimates below)
tau<-c(0.3,2,1) #standard deviation of the group-level coefficients (values assumed)
sigma<-1 #standard deviation of individual observations
#group-level regression coefficients
beta<-mapply(function(g,t) rnorm(J,g,t),g=gamma,t=tau)
#the model matrix: intercept, temperature, nitrogen
X<-cbind(1,runif(N,-2,2),runif(N,-2,2))
y<-vector(length = N)
for(n in 1:N){
  #simulate response data
  y[n]<-rnorm(1,X[n,]%*%beta[id[n],],sigma)
}
#run the model
m_hier<-stan(file="hierarchical1.stan",data=list(N=N,J=J,K=K,id=id,X=X,y=y))

The MCMC sampling takes place (it took about 90 sec per chain on my computer), and then I got this warning message:

“Warning messages:
1: There were 61 divergent transitions after warmup. Increasing adapt_delta above 0.8 may help.
2: Examine the pairs() plot to diagnose sampling problems”

Here is an explanation for this warning: “For some intuition, imagine walking down a steep mountain. If you take too big of a step you will fall, but if you can take very tiny steps you might be able to make your way down the mountain, albeit very slowly. Similarly, we can tell Stan to take smaller steps around the posterior distribution, which (in some but not all cases) can help avoid these divergences.” from here. This issue occurs quite often in hierarchical models with limited sample sizes; the simplest solution is to re-parameterize the model, in other words to re-write the equations so that the MCMC sampler has an easier time sampling the posterior distribution.
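The reparameterization does not change the model, only its coordinates: if beta_raw ~ N(0,1), then gamma + tau * beta_raw has exactly the N(gamma, tau) distribution that the centered version declares directly. A quick base-R check (gamma and tau values are arbitrary):

```r
set.seed(123)
gamma <- 2; tau <- 0.5                     # arbitrary values
beta_centered <- rnorm(1e5, gamma, tau)    # centered draws
beta_ncp <- gamma + tau * rnorm(1e5)       # non-centered: shifted, scaled std normals
c(mean(beta_centered), mean(beta_ncp))     # both close to gamma
c(sd(beta_centered), sd(beta_ncp))         # both close to tau
```

The sampler benefits because the raw coefficients are a priori independent standard normals, removing the funnel-shaped dependence between tau and the group-level coefficients.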

Below is a new STAN model with a non-centered parameterization (See Section 22.6 in STAN user guide):

parameters {
  vector[K] gamma; //population-level regression coefficients
  vector<lower=0>[K] tau; //the standard deviation of the regression coefficients
  //implementing Matt's trick
  vector[K] beta_raw[J];
  real<lower=0> sigma; //standard deviation of the individual observations
}
transformed parameters {
  vector[K] beta[J]; //matrix of group-level regression coefficients
  //compute the group-level coefficients, non-centered parametrization, section 22.6 of the STAN (v2.12) user's guide
  for(j in 1:J){
    beta[j] = gamma + tau .* beta_raw[j];
  }
}
model {
  vector[N] mu; //linear predictor
  gamma ~ normal(0,5); //weakly informative priors on the regression coefficients
  tau ~ cauchy(0,2.5); //weakly informative priors, see section 6.9 in STAN user guide
  sigma ~ gamma(2,0.1); //weakly informative priors, see section 6.9 in STAN user guide
  for(j in 1:J){
   beta_raw[j] ~ normal(0,1); //implies beta[j] ~ normal(gamma,tau)
  }
  for(n in 1:N){
    mu[n] = X[n] * beta[id[n]]; //compute the linear predictor using the relevant group-level regression coefficients
  }
  y ~ normal(mu,sigma);
}

Note that the data block is identical in the two models (and is therefore omitted here).

We turn back to R:

#fit the re-parametrized model
m_hier<-stan(file="hierarchical1_reparam.stan",data=list(N=N,J=J,K=K,id=id,X=X,y=y))
#no more divergent iterations, we can start exploring the model
#a great way to start is to use the shinystan library
launch_shinystan(m_hier)
#model inference
print(m_hier,pars=c("gamma","tau","sigma"))
Inference for Stan model: hierarchical1_reparam.
4 chains, each with iter=2000; warmup=1000; thin=1; 
post-warmup draws per chain=1000, total post-warmup draws=4000.

          mean se_mean   sd  2.5%   25%   50%  75% 97.5% n_eff Rhat
gamma[1]  1.96    0.00 0.17  1.61  1.86  1.96 2.07  2.29  2075    1
gamma[2] -0.03    0.02 0.77 -1.53 -0.49 -0.04 0.43  1.55  1047    1
gamma[3]  2.81    0.02 0.49  1.84  2.52  2.80 3.12  3.79   926    1
tau[1]    0.34    0.01 0.21  0.02  0.19  0.33 0.46  0.79  1135    1
tau[2]    2.39    0.02 0.66  1.47  1.94  2.26 2.69  4.04  1234    1
tau[3]    1.44    0.01 0.41  0.87  1.16  1.37 1.65  2.43  1317    1
sigma     1.04    0.00 0.09  0.89  0.98  1.04 1.10  1.23  2392    1

Samples were drawn using NUTS(diag_e) at Thu Nov 10 14:11:41 2016.
For each parameter, n_eff is a crude measure of effective sample size,
and Rhat is the potential scale reduction factor on split chains (at 
convergence, Rhat=1).

The regression parameters were all decently estimated except for the second slope coefficient (gamma[2]); its simulated value was -1.

All MCMC samples for all coefficients can easily be extracted and used to compute whatever your interest is:

#extract the MCMC samples
mcmc_hier<-extract(m_hier)

#plot the average response to the explanatory variables
#model matrix where only temperature varies, nitrogen held at its mean (0)
X_new<-cbind(1,seq(-2,2,length.out=20),0)
#get predicted values for each MCMC sample
pred_x1<-apply(mcmc_hier$gamma,1,function(beta) X_new %*% beta)
#now get the median and 95% credible intervals
cri_x1<-apply(pred_x1,1,quantile,probs=c(0.025,0.5,0.975))
#same stuff for the second explanatory variable
X_new<-cbind(1,0,seq(-2,2,length.out=20))
pred_x2<-apply(mcmc_hier$gamma,1,function(beta) X_new %*% beta)
cri_x2<-apply(pred_x2,1,quantile,probs=c(0.025,0.5,0.975))

Here we basically generated new model matrices where only one variable varied at a time; this allows us to get the model prediction for the effect of, say, temperature on plant biomass under average nutrient conditions. These predictions were obtained by multiplying the model matrix by the coefficients for each MCMC sample (the first apply command), and then we get from these samples the median with 95% credible intervals (the second apply command).
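This two-step recipe (one predicted curve per MCMC draw, then row-wise quantiles) works on any matrix of posterior draws; here it is in a self-contained form with a fake draw matrix (all names are illustrative):

```r
set.seed(7)
# fake posterior draws of K = 3 coefficients (rows = MCMC samples)
draws <- cbind(rnorm(500, 2, 0.1), rnorm(500, -1, 0.1), rnorm(500, 3, 0.1))
# new model matrix: the focal variable varies, the other is held at its mean (0)
X_new <- cbind(1, seq(-2, 2, length.out = 20), 0)
# step 1: one predicted curve per MCMC sample (20 x 500)
pred <- apply(draws, 1, function(b) X_new %*% b)
# step 2: 2.5%, 50% and 97.5% quantiles at each X_new row
cri <- apply(pred, 1, quantile, probs = c(0.025, 0.5, 0.975))
dim(cri)
```

Row 2 of cri is the median curve; rows 1 and 3 bound the 95% credible ribbon.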

Now we can plot this (the code for the plots is at the end of the post):


Another important plot is the variation in the regression parameters between the species, again this is easily done using the MCMC samples:

#now we can look at the variation in the regression coefficients between the groups using caterpillar plots
#we may also add the population-level median estimate
  labs(y="Regression parameters")


The cool thing about using STAN is that we can extend or modify the model in many ways. This will be the topic of future posts, which will include crossed and nested designs, multilevel modelling, non-normal distributions and much more, stay tuned!

Code for the first plot:

plot(y~X[,2],pch=16,xlab="Temperature",ylab="Response variable",col=cols[id])
plot(y~X[,3],pch=16,xlab="Nitrogen concentration",ylab="Response variable",col=cols[id])
mtext(text = "Population-level response to the two\nexplanatory variables with 95% CrI",side = 3,line = 0,outer=TRUE)
legend(x=2.1,y=10,legend=paste("Gr",1:10),ncol = 1,col=cols,pch=16,bty="n",xpd=NA,title = "Group\nID")


Shiny and Leaflet for Visualization Awards

Next week is the meeting of the German (and Swiss and Austrian) ecologists in Marburg, and the organizing team launched a visualization contest based on spatial data of the stores present in the city. Nadja Simons and I decided to enter the contest; our idea was to link the store data to the city bus network, promoting a sustainable and safe way of moving through the town (we are ecologists after all).

We used leaflet to prepare the map and plot the bus lines, bus stops and stores. On top of that, because all this information is rather overwhelming to grasp (more than 200 stores and 50 bus stops), we implemented a shiny App to allow users to restrict their search for the best bars in Marburg.

All the code is on GitHub and the App can be accessed by clicking here.

Go there, enjoy, and if you want to learn a bit about the process of developing the App come back here.


First a few words on what is so cool about leaflet:

  • There are a lot of base maps available
  • With the AwesomeMarker plugin, available in the GitHub repo of the package, you can directly tap into three awesome icon libraries: Font Awesome, Glyphicons and Ionicons
  • Leaflet understands HTML code; we used this to provide a clickable link to the bus timetables
  • Using groups makes it easy to interactively add or remove groups of objects from the map

Having this was already nice, but putting it into a shiny App was even more exciting. Here are some of the main points:

  • A very important concept in shiny is reactivity; the way I understand it, a reactive object is a small function that gets updated every time the user inputs something, see this nice tutorial for more on this.
  • Our idea for the App was that stores should appear when the mouse passes over their nearest bus stop. The issue is that the stores must then disappear when the mouse moves out. The trick is to create a set of reactiveValues that are NULL as long as the event (mouse passes over a bus stop) does not occur AND return to NULL when the event is finished (mouse moves out); helped by this post, we were able to implement what we wanted.
  • Shiny gives you a lot of freedom to create and customize the design of the App; what I found very cool was the possibility to have tabs.

Be sure to check the leaflet tutorial page, where 90% of my questions were answered, and also check out the many articles on shiny.

See you in Marburg if you’ll be around!


Simulating local community dynamics under ecological drift

In 2001, Stephen Hubbell's book on the neutral theory of biodiversity marked a major shift from classical community ecology. Before this book the niche-assembly framework dominated the study of community dynamics. Very briefly, under this framework local species composition is the result of the resources available at a particular site, and species presence or absence depends on species niches (as defined in Chase 2003). As a result community assembly was seen as a deterministic process: a specific set of species should emerge from a given set of resources, and species composition should be stable as long as the system is not disturbed. Hubbell's theory, based on a dispersal-assembly framework (historically developed by MacArthur and Wilson), was a departure from this mainstream view of community dynamics, assuming that local species composition is transient and never at equilibrium. Species abundances follow a random walk under ecological drift: species continuously go locally extinct while a constant rain of immigrants from the metacommunity brings new recruits. The interested reader is kindly encouraged to read: this, that or that.

Being an R geek, and after reading Chapter 4 of Hubbell's book, I decided to implement his model of local community dynamics under ecological drift. Do not expect any advanced analysis of the model behavior; Hubbell actually derives analytical solutions for this model in that chapter. This post is mostly educational, a glimpse into the first part of Hubbell's neutral model.

Before diving into the code it is helpful to verbally outline the main characteristics of the model:

  • The model is individual-based and neutral, assuming that all individuals are equal in their probability of dying or persisting.
  • The model assumes a zero-sum dynamic: the total number of individuals (J) is constant. Imagine, for example, a forest: when a tree dies it frees space for a new individual to grow.
  • At each time step a number of individuals (D) die and are replaced by new recruits coming either from the metacommunity (with probability m) or from the local community (with probability 1-m)
  • The species identity of a new recruit is drawn proportionally to the species relative abundances either in the metacommunity (assumed to be fixed, with a log-normal distribution) or in the local community (which obviously vary)
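A single time step of these rules can be sketched in a few lines of base R (the variable names are mine, and a flat metacommunity SAD is used for brevity):

```r
set.seed(11)
R <- 5; J <- 50; D <- 5; m <- 0.1
metaSAD <- rep(1/R, R)                                   # a flat metacommunity SAD
comm <- sample(1:R, J, replace = TRUE, prob = metaSAD)   # individuals as species IDs
survivors <- comm[-sample(seq_along(comm), D)]           # D individuals die
# each recruit comes from the metacommunity with probability m, otherwise locally
recruits <- sapply(runif(D) < m,
                   function(from_meta) if (from_meta) sample(1:R, 1, prob = metaSAD)
                                       else sample(survivors, 1))
comm_next <- c(survivors, recruits)
length(comm_next)  # zero-sum: still J individuals
```

Sampling recruits directly from the survivor vector is equivalent to sampling species IDs proportionally to local relative abundances.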

We are now equipped to immerse ourselves into R code:

#implement the zero-sum dynamic with ecological drift from Hubbell's theory (Chp 4)
untb<-function(R,J,D,m,Time){
  #the metacommunity SAD, fixed and log-normally distributed
  metaSAD<-rlnorm(R); metaSAD<-metaSAD/sum(metaSAD)
  mat<-matrix(0,nrow=Time+1,ncol=R,dimnames=list(NULL,1:R))
  #initialize the local community
  begin<-table(sample(x=1:R,size = J,replace = TRUE,prob = metaSAD))
  mat[1,names(begin)]<-begin
  #loop through the time and get community composition
  for(i in 2:(Time+1)){
    newC<-helper_untb(vec = mat[(i-1),],R = R,D = D,m = m,metaSAD=metaSAD)
    mat[i,names(newC)]<-newC
  }
  return(mat)
}
#the function that computes individual deaths and
#replacement by new recruits
helper_untb<-function(vec,R,D,m,metaSAD){
  ind<-rep(1:R,times=vec)
  #remove D individuals from the community
  ind_new<-ind[-sample(x = 1:length(ind),size = D,replace=FALSE)]
  #are the new recruits from the local community or the metacommunity?
  recruit<-runif(D)>=m #TRUE: local recruit
  #sample species IDs according to the relevant SAD
  ind_recruit<-c(ind_new,sapply(recruit,function(isLocal){
    if(isLocal) sample(ind_new,1) else sample(1:R,1,prob=metaSAD)}))
  return(table(factor(ind_recruit,levels=1:R)))
}

I always like simple theories that can be implemented with a few lines of code and do not require fancy computations. Let's put the function into action and get, for a closed community (m=0) of 10 species, the species abundance and rank-abundance distribution dynamics:

library(reshape2) #for data manipulation I
library(plyr) #for data manipulation II
library(ggplot2) #for the nicest plots in the world
#look at population dynamics over time
comm1<-untb(R = 10,J = 100,D = 10,m = 0,Time = 200)
#quick and dirty first graph
comm1m<-melt(comm1,varnames=c("Time","Species"),value.name="Abundance")
ggplot(comm1m,aes(x=Time,y=Abundance,color=factor(Species)))+
 geom_path()+labs(y="Species abundance")+
 scale_color_discrete(name="Species ID")
#let's plot the RAD curve over time
pre_rad<-apply(comm1,1,function(x) sort(x,decreasing = TRUE))
pre_radm<-melt(pre_rad/100,varnames=c("Rank","Time"),value.name="Abundance")
ggplot(pre_radm,aes(x=Rank,y=Abundance,color=factor(Time)))+
 geom_line()+labs(x="Species rank",y="Species relative abundance")


We see here an important feature of the zero-sum dynamic: with no immigration the model will always (eventually) lead to the dominance of a single species. This is because with no immigration there are so-called absorbing states; species abundances are attracted to these states and, once reached, stay constant. In this case there are only two absorbing states for a species' relative abundance: 0 or 1. Either you go extinct or you dominate the community.
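The absorbing-state argument can be checked numerically: with m = 0 every recruit is a copy of a surviving local individual, so no species can ever (re)appear and richness can only decrease. A self-contained sketch (not the untb() function above):

```r
set.seed(5)
R <- 10; J <- 100; D <- 10
comm <- sample(1:R, J, replace = TRUE)
richness <- length(unique(comm))
for (t in 1:500) {
  comm <- comm[-sample(seq_along(comm), D)]          # D deaths
  comm <- c(comm, sample(comm, D, replace = TRUE))   # recruits are local copies (m = 0)
  richness <- c(richness, length(unique(comm)))
}
all(diff(richness) <= 0)  # richness never increases without immigration
```

Run long enough, this walk ends with a single dominant species, the absorbing state described above.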

In the next step we open up the community while varying the death rate, and we track the species richness dynamics:

#apply this over a range of parameters
pars<-expand.grid(D=c(1,5,10,20), #death rates (values assumed)
                  m=c(0,0.01,0.05,0.1,0.5),Time=200)
comm_list<-mlply(pars,function(D,m,Time) untb(R=10,J=100,D=D,m=m,Time=Time))
#get species richness for each parameter set
rich<-llply(comm_list,function(x) apply(x,1,function(y) length(y[y!=0])))
rich<-llply(rich,function(x) data.frame(Time=0:200,Rich=x))


As we increase the probability that new recruits come from the metacommunity, the impact of increasing the number of deaths per time step vanishes. This is because the metacommunity is static in this model: even if species are wiped out by important local disturbances, there will always be new immigrants coming in, maintaining species richness at rather constant levels.

There is an R package for everything these days, and the neutral theory is no exception: check out the untb package, which implements the full neutral model (ie with speciation and dynamic metacommunities) but also allows fitting the model to empirical data and obtaining parameter estimates.

Exploring the diversity of Life using Rvest and the Catalog of Life

I am writing the general introduction for my thesis and wanted a nice illustration of the diversity of Arthropods compared to other phyla (my work focuses on Arthropods, so this is a nice motivation). As the literature I have had access to so far uses pie charts to represent these diversities graphically, and knowing that pie charts are bad, I decided to create my own illustration.

Fortunately I came across the Catalogue of Life website, which provides (among other things) an overview of the number of species in each phylum. So I decided to have a go at directly importing the data from the website into R using the rvest package.

Let’s go:


#load the packages
library(rvest)
library(ggplot2)
library(scales)#for comma separator in ggplot2 axis

#read the data
#some minor data re-formatting
#re-format the data frame keeping only animals, plants and
#add a kingdom column
#remove the nasty commas and turn into numeric

Now we are ready to make the first plot:

 labs(y="Number of Species",x="",title="The diversity of life")



This is a bit overwhelming; half of the phyla have fewer than 1,000 species, so let's make a second graph with only the phyla comprising more than 1,000 species. And just to make things nicer, I sort the phyla by their number of species:

 labs(y="Number of Species",x="",
 title="The diversity of multicellular organisms from the Catalog of Life")+



That’s it for a first exploration of the powers of rvest. This was actually super easy; I expected to spend much more time trying to decipher XML code, but rvest seems to know its way around …

This graph is also a nice reminder that most of the described multicellular species out there are small crawling beetles, and that we still know alarmingly little about their ecology and threat status. As a comparison, all the vertebrates (all birds, mammals, reptiles, fishes and some other taxa) fall within the Chordata, totalling around 50,000 described species. An even more sobering thought is that the total number of described species is only a fraction of what remains undescribed.

Exploring Spatial Patterns and Coexistence

Today is a rainy day and I had to drop my plans for going out hiking, instead I continued reading “Self-Organization in Complex Ecosystems” from Richard Solé and Jordi Bascompte.

As I will be busy in the coming weeks with spatial models at the iDiv summer school I was closely reading chapter three on spatial self-organization and decided to try and implement in R one of the Coupled Map Lattice Models that is described in the book.

The model is based on a discrete time version of the Lotka-Volterra competition model; for example, for two competing species we model the normalized population abundances (the population abundance divided by the carrying capacity), N1 and N2, using the following equations:

N_{1,t+1} = \phi_{N_{1}}(N_{1,t},N_{2,t}) = N_{1,t} * exp[r_{N_{1}} * (1 - N_{1} - \alpha_{12} * N_{2})]

N_{2,t+1} = \phi_{N_{2}}(N_{1,t},N_{2,t}) = N_{2,t} * exp[r_{N_{2}} * (1 - N_{2} - \alpha_{21} * N_{1})]

Where r is the growth rate and the \alpha are the interspecific competition coefficients. Note that since we use normalized abundance the intraspecific competition coefficient is equal to one.
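The two update equations collapse into one small function (the name lv_step is mine); note that with the competitor absent, N = 1 is a fixed point, since exp(r * (1 - 1 - 0)) = 1:

```r
# one step of the discrete-time Lotka-Volterra competition dynamics
lv_step <- function(N, r, alpha) {
  # alpha: 2x2 competition matrix with 1s on the diagonal (normalized abundances)
  as.vector(N * exp(r * (1 - alpha %*% N)))
}
alpha <- matrix(c(1, 0.8, 0.8, 1), 2, 2)
lv_step(c(1, 0), r = c(1.5, 1.5), alpha = alpha)  # species 1 alone stays at its fixed point
```

Iterating lv_step from any positive starting point gives the within-cell dynamics that the spatial model couples through dispersal.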

Space can be introduced in this model by considering dispersion between neighboring cells. Say we have a 50 x 50 lattice map and that individuals disperse between neighboring cells with a probability D, as a result the normalized population abundances in cell k at time t+1 will be:

N_{1,t+1,k} = (1 - D) * \phi_{N_{1}}(N_{1,t,k},N_{2,t,k}) + \frac{D}{8} * \sum_{j=1}^{8} \phi_{N_{1}}(N_{1,t,j},N_{2,t,j})

Where j is an index of all 8 neighboring cells around cell k. We have a similar equation for the competing species.
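For a single interior cell the dispersal update is easy to verify by hand; the sketch below sets r = 0 so that phi is the identity and only the dispersal term acts (abundance values are made up):

```r
# dispersal update for the center cell of a 3x3 grid (single species, phi = identity)
N <- matrix(1:9 / 10, 3, 3)   # made-up normalized abundances
D <- 0.2
center_next <- (1 - D) * N[2, 2] + (D / 8) * (sum(N) - N[2, 2])
center_next
```

With this symmetric grid the center cell keeps its abundance: what it loses to its neighbours is exactly replaced by what flows in.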

Now let’s implement this in R (you may find a nicer version of the code here).

I start by defining two functions: one that computes the population dynamics based on the first two equations, and one that returns, for each cell in the lattice, the row numbers of the neighboring cells:

#Lotka-Volterra discrete time competition function
#@Ns are the population abundances
#@rs are the growth rates
#@as is the competition matrix
lv_comp<-function(Ns,rs,as){ #(function name assumed)
  #return population abundances at time t+1
  as.vector(Ns*exp(rs*(1-as%*%Ns)))
}

#a helper function returning line numbers of all neighbouring cells
find_neighbour<-function(vec,dat){
  lnb<-which(dat[,"x"]%in%(vec["x"]+c(-1,-1,-1,0,1,1,1,0)) &
             dat[,"y"]%in%(vec["y"]+c(-1,0,1,1,1,0,-1,-1)))
  #remove own line
  return(lnb[lnb!=vec["lnb"]])
}

Then I create a lattice, fill it with the two species and some random noise.


The next step is to initialize all the parameters; in the example given in the book the two species have equal growth rates, equal dispersal, and equal interspecific competition.

#competition parameters
#growth rate
#dispersal rate
#infos on neighbouring cells
neigh<-apply(dat,1,function(x) find_neighbour(x,dat))

Now we are ready to run the model (the code is rather ugly, sorry for that; Saturday coding makes me rather lazy …)

#model the dynamics assuming absorbing boundaries (ie individuals that disperse out of the grid are lost to the system)
for(i in generation){  
    #population dynamics within one grid cell
    #grab the neighbouring cells
    #the number of immigrant coming into the focal grid cell
    imm<-(ds/8)*rowSums(apply(neigh_cell,1,function(y) {
    + +imm[2],lnb=x["lnb"])

First let’s reproduce figure 3.8 from the book:

#look at coexistence between the two species within one cell
cell525<-ldply(list_dat,function(x) c(x[525,"N1"],x[525,"N2"]))
+ measure.vars=1:2,"Species","Pop")

#look at coexistence across the system
all<-ldply(list_dat,function(x) c(sum(x[,"N1"]),sum(x[,"N2"])))



And now figure 3.9 plus an extra figure looking at the correlation in population abundance between the two species at the beginning and the end of the simulation:

#compare species-species abundance correlation at the beginning and at the end of the simulation
exmpl$Gen<-rep(c("Gen : 1","Gen : 60"),each=2500)


#look at variation in spatial patterns at the beginning and the end of the simulation
+ measure.vars=c("N1","N2"),"Species","Pop")



Neat stuff, I am looking forward to implementing other models from this book!


Plotting regression curves with confidence intervals for LM, GLM and GLMM in R

Once models have been fitted and checked and re-checked comes the time to interpret them. The easiest way to do so is to plot the response variable versus the explanatory variables (I call them predictors), adding to this plot the fitted regression curve together (if you are feeling fancy) with a confidence interval around it. This approach is preferable to partial residual plots because it averages out any other, potentially confounding, predictors and so focuses only on the effect of one focal predictor on the response. In my work I have done this hundreds of times and finally decided to put it all into a function to clean up my code a little bit. As always a cleaner version of this post is available here.
Let’s dive into some code (the function is at the end of the post just copy/paste into your R environment):

#####LM example######
#we measured plant biomass for 120 pots under 3 nutrient treatments and across a gradient of CO2
#due to limited space in our greenhouse chambers we had to use 4 of them, so we established a blocking design
data<-data.frame(C=runif(120,-2,2),N=gl(n = 3,k = 40,labels = c("Few","Medium","A_lot")),Block=rep(rep(paste0("B",1:4),each=10),times=3))

##         Block
## N        B1 B2 B3 B4
##   Few    10 10 10 10
##   Medium 10 10 10 10
##   A_lot  10 10 10 10

#the parameters of the model
#simulate a response vector
#fit the model
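The simulation and fitting code itself did not survive the formatting; a sketch consistent with the summary below could look like this. The values in `params` are my reconstruction, read off the coefficient estimates in the output, and the seed is my own choice, so the printed estimates will not be reproduced digit-for-digit (the `data` frame is repeated here so the chunk stands alone):

```r
set.seed(20150830)
#the design from above, repeated so this chunk is self-contained
data <- data.frame(C = runif(120, -2, 2),
                   N = gl(n = 3, k = 40, labels = c("Few", "Medium", "A_lot")),
                   Block = rep(rep(paste0("B", 1:4), each = 10), times = 3))
#parameters roughly matching the estimates reported below (my own reconstruction)
params <- c(10, -0.3, 2.3, -1.5, 0.75, 0.5, 2.4, 0.7, 3.3)
#simulate a response vector from the model matrix plus normal noise
modmat <- model.matrix(~ Block + C * N, data)
data$Biom <- rnorm(120, mean = modmat %*% params, sd = 1)
#fit the model
m <- lm(Biom ~ Block + C * N, data)
```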

## Call:
## lm(formula = Biom ~ Block + C * N, data = data)
## Residuals:
##      Min       1Q   Median       3Q      Max
## -2.11758 -0.68801 -0.01582  0.75057  2.55953
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)
## (Intercept)  10.0768     0.2370  42.521  < 2e-16 ***
## BlockB2      -0.3011     0.2690  -1.119 0.265364
## BlockB3       2.3322     0.2682   8.695 3.54e-14 ***
## BlockB4      -1.4505     0.2688  -5.396 3.91e-07 ***
## C             0.7585     0.1637   4.633 9.89e-06 ***
## NMedium       0.4842     0.2371   2.042 0.043489 *
## NA_lot        2.4011     0.2335  10.285  < 2e-16 ***
## C:NMedium     0.7287     0.2123   3.432 0.000844 ***
## C:NA_lot      3.2536     0.2246  14.489  < 2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## Residual standard error: 1.028 on 111 degrees of freedom
## Multiple R-squared:  0.9201, Adjusted R-squared:  0.9144
## F-statistic: 159.8 on 8 and 111 DF,  p-value: < 2.2e-16

Here we would normally continue and make some model checks. As output from the model we would like to plot the effect of CO2 on plant biomass for each level of N addition. Of course we want to average out the Block effect (otherwise we would have to plot one separate line for each Block). This is how it works:

pred<-plot_fit(m,focal_var = "C",inter_var = "N")

##            C   N       LC     Pred       UC
## 1 -1.9984087 Few 8.004927 8.706142 9.407358
## 2 -1.7895749 Few 8.213104 8.864545 9.515986
## 3 -1.5807411 Few 8.417943 9.022948 9.627952
## 4 -1.3719073 Few 8.618617 9.181350 9.744084
## 5 -1.1630735 Few 8.814119 9.339753 9.865387
## 6 -0.9542397 Few 9.003286 9.498156 9.993026

#the function outputs a data frame with columns for the varying predictors,
#a column of predicted values (Pred), a lower bound (LC) and an upper one (UC)
#let's plot this
plot(Biom~C,data,col=c("red","green","blue")[data$N],pch=16,xlab="CO2 concentration",ylab="Plant biomass")
legend("topleft",legend = c("Few","Medium","A lot"),col=c("red","green","blue"),pch=16,lwd=3,title = "N addition",bty="n")
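The code adding the fitted curves to the plot was stripped; a small helper along these lines (my own sketch, assuming the `pred` data frame returned by `plot_fit` above, with columns `C`, `N`, `LC`, `Pred`, `UC`) would overlay the curves and their confidence bands:

```r
#overlay the fitted curve and its confidence band for each level of N
#(assumes a data frame shaped like the plot_fit output: columns C, N, LC, Pred, UC)
add_fit_lines <- function(pred, cols = c("red", "green", "blue")) {
  for (i in seq_along(levels(pred$N))) {
    sub <- pred[pred$N == levels(pred$N)[i], ]
    lines(Pred ~ C, sub, col = cols[i], lwd = 3)  #fitted curve
    lines(LC ~ C, sub, col = cols[i], lty = 2)    #lower bound
    lines(UC ~ C, sub, col = cols[i], lty = 2)    #upper bound
  }
}
```

Calling `add_fit_lines(pred)` right after the `plot()` call above draws one curve per nitrogen level in the matching colour.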


The cool thing is that the function will also work for GLM, LMM and GLMM. For mixed effect models the confidence interval is computed from parametric bootstrapping:

######LMM example#######
#now let's say that we took 5 measurements per pot and we don't want to aggregate them
data<-data.frame(Pots=rep(paste0("P",1:120),each=5),Block=rep(rep(paste0("B",1:4),each=5*10),times=3),N=gl(n = 3,k = 40*5,labels=c("Few","Medium","A_lot")),C=rep(runif(120,-2,2),each=5))
#a random intercept term for each pot
m<-lmer(Biom~Block+C*N+(1|Pots),data)

## Linear mixed model fit by REML ['lmerMod']
## Formula: Biom ~ Block + C * N + (1 | Pots)
##    Data: data
## REML criterion at convergence: 1765.1
## Scaled residuals:
##     Min      1Q  Median      3Q     Max
## -2.6608 -0.6446 -0.0340  0.6077  3.2002
## Random effects:
##  Groups   Name        Variance Std.Dev.
##  Pots     (Intercept) 0.1486   0.3855
##  Residual             0.9639   0.9818
## Number of obs: 600, groups:  Pots, 120
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)  9.93917    0.13225   75.15
## BlockB2     -0.42019    0.15153   -2.77
## BlockB3      2.35993    0.15364   15.36
## BlockB4     -1.36188    0.15111   -9.01
## C            0.97208    0.07099   13.69
## NMedium      0.36236    0.13272    2.73
## NA_lot       2.25624    0.13189   17.11
## C:NMedium    0.70157    0.11815    5.94
## C:NA_lot     2.78150    0.10764   25.84
## Correlation of Fixed Effects:
##           (Intr) BlckB2 BlckB3 BlckB4 C      NMedim NA_lot C:NMdm
## BlockB2   -0.572
## BlockB3   -0.575  0.493
## BlockB4   -0.576  0.495  0.493
## C          0.140 -0.012 -0.018 -0.045
## NMedium   -0.511  0.003  0.027  0.007 -0.118
## NA_lot    -0.507  0.008  0.003  0.003 -0.119  0.502
## C:NMedium -0.134  0.019  0.161  0.038 -0.601  0.175  0.071
## C:NA_lot  -0.107  0.077  0.020  0.006 -0.657  0.078  0.129  0.394

#again model checks should come here
#for LMM and GLMM we also need to pass the name(s) of the random effects as a character (vector)
pred<-plot_fit(m,focal_var = "C",inter_var = "N",RE = "Pots")
#let's plot this
plot(Biom~C,data,col=c("red","green","blue")[data$N],pch=16,xlab="CO2 concentration",ylab="Plant biomass")
legend("topleft",legend = c("Few","Medium","A lot"),col=c("red","green","blue"),pch=16,lwd=3,title = "N addition",bty="n")

Please note a few elements:
– so far the function only returns 95% confidence intervals
– I have tested it on the various types of models that I usually build but there are most certainly still some bugs hanging around, so if the function returns an error please let me know which model you fitted and what error was returned
– the bootstrap computation can take some time for GLMMs, so be ready to wait a few minutes if you have a big, complex model
– the function accepts a vector of variable names for the inter_var argument; it should also work for the RE argument, though I have not tried that yet

Happy plotting!

Here is the code for the function:

#works for lm, glm, glmer and betareg models
#only correct for betareg models with a logit link
#@m : the fitted lm, glm or merMod object (needs to be provided)
#@focal_var : a character, the name of the variable of interest that will be plotted on the x axis, ie the varying variable (needs to be provided)
#@inter_var : a character or character vector, the name of the variable(s) interacting with the focal variable, ie categorical variables from which predictions will be drawn for each level across the focal_var gradient
#@RE : if giving a merMod object, a character or character vector with the name(s) of the random effect variable(s) (so far I have only tried it with one RE)
#@n : a numeric, the number of data points that will form the gradient of the focal variable
#@n_core : the number of cores used to compute the bootstrapped CI for GLMM models
#@bootCImerMod : should the function compute bootstrapped confidence intervals for merMod objects

#turn all character variable to factor
#make a sequence from the focal variable
#grab the names and unique values of the interacting variables
#add the focal variable to this list
#expand.grid on it

#remove varying variables and non-predictors
fixed<-lapply(dat_red,function(x) if(is.numeric(x)) mean(x) else factor(levels(x)[1],levels = levels(x)))
#the number of rows in the new_dat frame
#create the new_dat frame starting with the varying focal variable and potential interactions
#get the names of the variables to average over; debug for conditions where no variables are to be averaged over
name_f<-names(dat_red)[sapply(dat_red,function(x) ifelse(is.factor(x),TRUE,FALSE))]

#get the predicted values
pred<-predict(m,newdata = new_dat,se.fit = TRUE)

if(cl=="glm" | cl=="negbin"){
#predicted values on the link scale
if(cl=="glmerMod" | cl=="lmerMod"){
#for bootstrapped CI
#average over potential categorical variables
if(cl=="glmerMod" | cl=="lmerMod"){
coef_f<-lapply(name_f,function(x) fixef(m)[grep(paste0("^",x,"\\w+$"),names(fixef(m)))])
coef_f<-lapply(name_f,function(x) coef(m)[grep(paste0("^",x,"\\w+$"),names(coef(m)))])
pred$fit<-pred$fit+sum(unlist(lapply(coef_f,function(x) mean(c(0,x)))))
#to get the back-transform values get the inverse link function
#for now betareg only work with logit link
#get the back transformed prediction together with the 95% CI for LM and GLM
if(cl=="glm" | cl=="lm" | cl=="negbin"){
#for GLMM need to use bootstrapped CI, see ?predict.merMod
if(cl=="glmerMod" | cl=="lmerMod"){
predFun<-function(.) mm%*%fixef(.)
bb<-bootMer(m,FUN=predFun,nsim=200,parallel="multicore",ncpus=n_core) #do this 200 times
bb$t<-apply(bb$t,1,function(x) linkinv(x))
#as we did this 200 times the 95% CI will be bordered by the 5th and 195th value
bb_se<-apply(bb$t,1,function(x) x[order(x)][c(5,195)])

#the output

Two little annoying stats details

UPDATED: Thanks to Ben's and Florian's comments I've updated the first part of the post


A very brief post at the end of the field season on two little “details” that annoy me in papers/analyses that I (sometimes) see being done around me.

The first one concerns mixed effect models where the model contains a grouping factor (say month or season) fitted as both a fixed effect term and a random effect term (on the right side of the | in lme4 model syntax). I don't really understand why anyone would want to do this, so instead of spending time writing equations let's just build a simple simulation example and see what the consequences of doing this are:

#an example of a situation measuring plant biomass in four different months along a gradient of temperature
#the coefficients
#the simulated coefficients for months 2-4 are 0.5, 1.2 and -0.9
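The simulation code was stripped in formatting; a reconstruction could look like this. The intercept of 1 and temperature slope of 2 are read off the estimates printed below, while the sample size of 100 and the seed are my own assumptions, so the estimates will not match the output digit-for-digit:

```r
set.seed(20150830)
#biomass measured in four months along a temperature gradient
data <- data.frame(temp = runif(100, -2, 2), month = gl(n = 4, k = 25))
modmat <- model.matrix(~ temp + month, data)
#intercept, temperature slope, and the three month deviations (0.5, 1.2, -0.9)
eff <- c(1, 2, 0.5, 1.2, -0.9)
data$biom <- rnorm(100, mean = modmat %*% eff, sd = 1)
```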
#a simple lm
m_fixed<-lm(biom~temp+month,data)
coef(m_fixed) #not too bad
## (Intercept)        temp      month2      month3      month4 
##   0.9567796   2.0654349   0.4307483   1.2649599  -0.8925088

#a lmm with month ONLY as a random term
m_rand<-lmer(biom~temp+(1|month),data)
fixef(m_rand)

## (Intercept)        temp 
##    1.157095    2.063714

ranef(m_rand)
## $month
##   (Intercept)
## 1  -0.1916665
## 2   0.2197100
## 3   1.0131908
## 4  -1.0412343

VarCorr(m_rand) #the estimated sd for the month coeff

##  Groups   Name        Std.Dev.
##  month    (Intercept) 0.87720 
##  Residual             0.98016

sd(c(0,0.5,1.2,-0.9)) #the simulated one, not too bad!

## [1] 0.8831761

#now a lmm with month as both fixed and random term
m_fixedrand<-lmer(biom~temp+month+(1|month),data)
fixef(m_fixedrand)

## (Intercept)        temp      month2      month3      month4 
##   0.9567796   2.0654349   0.4307483   1.2649599  -0.8925088

ranef(m_fixedrand) #very, VERY small

## $month
##     (Intercept)
## 1  0.000000e+00
## 2  1.118685e-15
## 3 -9.588729e-16
## 4  5.193895e-16

VarCorr(m_fixedrand)

##  Groups   Name        Std.Dev.
##  month    (Intercept) 0.40397
##  Residual             0.98018

#how does it affect the estimation of the fixed effect coefficient?
summary(m_fixed)$coefficients

##               Estimate Std. Error   t value     Pr(>|t|)
## (Intercept)  0.9567796  0.2039313  4.691676 9.080522e-06
## temp         2.0654349  0.1048368 19.701440 2.549792e-35
## month2       0.4307483  0.2862849  1.504614 1.357408e-01
## month3       1.2649599  0.2772677  4.562233 1.511379e-05
## month4      -0.8925088  0.2789932 -3.199035 1.874375e-03


summary(m_fixedrand)$coefficients

##               Estimate Std. Error    t value
## (Intercept)  0.9567796  0.4525224  2.1143256
## temp         2.0654349  0.1048368 19.7014396
## month2       0.4307483  0.6390118  0.6740851
## month3       1.2649599  0.6350232  1.9919901
## month4      -0.8925088  0.6357784 -1.4038048

#the point estimates are not affected but the standard errors around the intercept and the month coefficients are doubled, making it less likely that a significant p-value will be given for these effects, ie a higher chance to infer that there is no month effect when there is one

#and what if we simulate data as assumed by the model, ie a fixed effect of month with a random component added on top of it
#an lmm model
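The simulation itself was stripped; here is a reconstruction under stated assumptions (same design and fixed effects as before, a month-level random deviation whose sd of 1.5 is my own choice, and my own seed), ending with the model the post discusses:

```r
library(lme4)

set.seed(20150830)
#same design as before: biomass in four months along a temperature gradient
data <- data.frame(temp = runif(100, -2, 2), month = gl(n = 4, k = 25))
modmat <- model.matrix(~ temp + month, data)
eff <- c(1, 2, 0.5, 1.2, -0.9)   #fixed-effect coefficients
rnd.eff <- rnorm(4, 0, 1.5)      #random month-level deviations (assumed sd)
data$biom <- rnorm(100, mean = modmat %*% eff + rnd.eff[data$month], sd = 1)
#month as both a fixed and a random term
m_fixedrand2 <- lmer(biom ~ temp + month + (1 | month), data)
```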
fixef(m_fixedrand2) #weird coeff values for the fixed effect for month

## (Intercept)        temp      month2      month3      month4 
##   -2.064083    2.141428    1.644968    4.590429    3.064715

c(0,eff[3:5])+rnd.eff #looking at the intervals between the intercept and the different levels we realize that the fixed-effect part of the model absorbed the added random part

## [1] -2.66714133 -1.26677658  1.47977624  0.02506236


##  Groups   Name        Std.Dev.
##  month    (Intercept) 0.74327 
##  Residual             0.93435

ranef(m_fixedrand2) #again very VERY small

## $month
##     (Intercept)
## 1  1.378195e-15
## 2  7.386264e-15
## 3 -2.118975e-14
## 4 -7.752347e-15

#so this is basically not working, it does not make sense to have a grouping factor as both a fixed effect term and a random term (ie on the right-hand side of the |)

Take-home message: don't put a grouping factor as both a fixed and a random term in your mixed effect model. lmer is not able to separate the fixed from the random part of the effect (and I don't know how it could be done) and basically gives everything to the fixed effects, leaving very small random effects. The issue is a bit pernicious, because if you only looked at the standard deviation of the random term in the merMod summary output you could not have guessed that something is wrong. You need to actually look at the random effects to realize that they are incredibly small. So when building complex models with many fixed and random terms, beware, and always check the estimated random effects.

Now this is only true for factorial variables; if you have a continuous variable (say year) that affects your response through both a long-term trend and some between-level (between-year) variation, it actually makes sense (provided you have enough data points) to fit a model with this variable as both a fixed and a random term. Let's look into this:

#an example of a situation measuring plant biomass in 10 different years along a gradient of temperature
#the coefficient
#a simple lm
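A reconstruction of the stripped simulation code could be the following. The true coefficients (intercept 1.5, temperature slope 2, year trend -1.5) are approximated from the estimates below, and the between-year sd of 0.6 and the seed are my own assumptions:

```r
set.seed(20150830)
#10 measurements per year over 10 years along a temperature gradient
data <- data.frame(temp = runif(100, -2, 2), year = rep(1:10, each = 10))
#a long-term trend (slope -1.5) plus between-year random deviations
rnd.eff <- rnorm(10, 0, 0.6)
data$y <- rnorm(100, mean = 1.5 + 2 * data$temp - 1.5 * data$year +
                  rnd.eff[data$year], sd = 1)
#a simple lm ignoring the between-year variation
m_fixed <- lm(y ~ temp + year, data)
```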

summary(m_fixed)

Call:
lm(formula = y ~ temp + year, data = data)

Residuals:
     Min       1Q   Median       3Q      Max
-2.27455 -0.83566 -0.03557  0.92881  2.74613

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.41802    0.25085   5.653 1.59e-07 ***
temp         2.11359    0.11230  18.820  < 2e-16 ***
year        -1.54711    0.04036 -38.336  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.159 on 97 degrees of freedom
Multiple R-squared:  0.9508,    Adjusted R-squared:  0.9498
F-statistic: 937.1 on 2 and 97 DF,  p-value: < 2.2e-16
coef(m_fixed) #not too bad

(Intercept)        temp        year
1.418019    2.113591   -1.547111
#a lmm with year as both a fixed (continuous trend) and a random (grouping) term
m_rand<-lmer(y~temp+year+(1|factor(year)),data)
summary(m_rand)

Linear mixed model fit by REML ['lmerMod']
Formula: y ~ temp + year + (1 | factor(year))
Data: data

REML criterion at convergence: 304.8

Scaled residuals:
Min      1Q  Median      3Q     Max
-2.0194 -0.7775  0.1780  0.6733  1.9903

Random effects:
Groups       Name        Variance Std.Dev.
factor(year) (Intercept) 0.425    0.6519
Residual                 1.004    1.0019
Number of obs: 100, groups:  factor(year), 10

Fixed effects:
Estimate Std. Error t value
(Intercept)  1.40266    0.49539   2.831
temp         2.01298    0.10191  19.752
year        -1.54832    0.07981 -19.400

Correlation of Fixed Effects:
(Intr) temp
temp  0.031
year -0.885  0.015
fixef(m_rand) #very close to the lm estimation

(Intercept)        temp        year
1.402660    2.012976   -1.548319
abline(0,1) #not too bad
VarCorr(m_rand) #the estimated sd for the within-year variation

Groups       Name        Std.Dev.
factor(year) (Intercept) 0.65191
Residual                 1.00189

Interestingly, we see in this case that the standard errors of the intercept and the year slope are about twice as large in the LMM as in the LM (with correspondingly smaller t-values). This means that not taking the between-year random variation into account leads us to over-estimate the precision of the long-term temporal trend (we might conclude that there are significant effects when there is a lot of unaccounted-for noise). I still don't fully grasp how this works, but thanks to the commenter for pointing this out.

The second issue is maybe a bit older, but I saw it appear in a recent paper (which is a cool one except for this stats detail). After fitting a model with several predictors one wants to plot their effects on the response, and some people use partial residual plots to do this (wiki). The issue with these plots is that when two variables have a high covariance, the partial residual plot will tend to be over-optimistic concerning the effect of variable X on Y (ie the plot will look much nicer than it should). Again let's do a little simulation on this:

#say we measure plant biomass in relation to measured temperature and the number of sunny hours per week
#the variance-covariance matrix between temperature and sunny hours
#simulate some data
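The simulation code was lost; a sketch of what it could look like follows. The covariance of 0.8, the effect sizes (strong temperature effect, weak sun effect) and the seed are all my own assumptions:

```r
library(MASS)  #for mvrnorm

set.seed(20150830)
#temperature and sunny hours drawn from a bivariate normal with strong covariance
Sigma <- matrix(c(1, 0.8, 0.8, 1), ncol = 2)
xs <- mvrnorm(100, mu = c(0, 0), Sigma = Sigma)
data <- data.frame(temp = xs[, 1], sun = xs[, 2])
#temperature has a strong effect on biomass, sun only a weak one
data$biom <- rnorm(100, mean = 1 + 2 * data$temp + 0.3 * data$sun, sd = 1)
m <- lm(biom ~ temp + sun, data)
#partial residuals of sun: the residuals plus the sun component of the fit
sun_res <- residuals(m) + coef(m)["sun"] * data$sun
```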

#partial residual plot of sun
plot(data$sun,sun_res,xlab="Number of sunny hours",ylab="Partial residuals of Sun")


#plot of sun effect while controlling for temp
plot(biom~sun,data,xlab="Number of sunny hours",ylab="Plant biomass")


#same stuff for temp
plot(biom~temp,data,xlab="Temperature",ylab="Plant biomass")


The first graph is a partial residual plot; from this graph alone we would be tempted to say that the number of sunny hours has a large influence on the biomass. This conclusion is biased by the fact that the number of sunny hours covaries with temperature, and temperature has a large influence on plant biomass. So which is more important, temperature or sun? The way to resolve this is to plot the actual observations and to add a fitted regression line computed from a new dataset (sun_new in the example) where one variable is allowed to vary while all others are fixed at their means. This way we see how an increase in the number of sunny hours at the average temperature affects the biomass (the second figure). The final graph then shows the effect of temperature while controlling for the effect of the number of sunny hours.
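The `sun_new` construction itself was not shown; a minimal sketch of the idea could be (re-simulating a small dataset here so the chunk stands alone; all coefficient values are my assumptions):

```r
set.seed(20150830)
#a small stand-in dataset: correlated-ish predictors and a known relationship
data <- data.frame(temp = rnorm(100), sun = rnorm(100))
data$biom <- rnorm(100, mean = 1 + 2 * data$temp + 0.3 * data$sun, sd = 1)
m <- lm(biom ~ temp + sun, data)
#let sun vary across its range while temperature is held at its mean
sun_new <- data.frame(sun = seq(min(data$sun), max(data$sun), length.out = 20),
                      temp = mean(data$temp))
sun_new$biom <- predict(m, newdata = sun_new)
```

A call like `lines(biom ~ sun, sun_new, lwd = 2)` then adds the curve to the biomass-versus-sun plot.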

Happy modelling!

Count data: To Log or Not To Log

Count data are widely collected in ecology, for example when one counts the number of birds or the number of flowers. These data naturally follow a Poisson or negative binomial distribution and are therefore sometimes tricky to fit with standard LMs. A traditional approach has been to log-transform such data and then fit LMs to the transformed values. Recently a paper advocated against the use of such transformations since they led to high bias in the estimated coefficients. More recently another paper argued that log-transformation of count data followed by an LM led to a lower type I error rate (ie saying that an effect is significant when it is not) than GLMs. What should we do then?

Using a slightly changed version of the code published in the Ives 2015 paper let’s explore the impact of using these different modelling strategies on the rejection of the null hypothesis and the bias of the estimated coefficients.

#load the libraries

#download and load the functions that will be used

Following the paper from Ives and the code therein, I simulated a predictor (x) and a response (y) that follows a negative binomial distribution and is linearly related to x.
In the first case I look at the impact of varying the sample size on the rejection of the null hypothesis and on the bias in the estimated coefficient between y and x.
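To make the setup concrete, here is a minimal standalone version of one replicate of the comparison (not the `compute.stats` function used below): simulate negative binomial counts linearly related to x on the log scale and fit two of the strategies. The coefficient values, dispersion and seed are my own choices:

```r
set.seed(20150830)
n <- 100
x <- runif(n, -1, 1)
#counts with log-linear mean exp(b0 + b1*x); b0 = 1, b1 = 0.5 (my choice)
y <- rnbinom(n, mu = exp(1 + 0.5 * x), size = 1)
m_log1 <- lm(log(y + 1) ~ x)               #the "Log1" strategy
m_qp <- glm(y ~ x, family = quasipoisson)  #a GLM alternative
```

Repeating this many times while recording the slope p-values and the difference between estimate and true value gives the rejection rates and biases plotted below.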

######univariate NB case############
#range over n
output<-compute.stats(NRep=500,n.range = c(10,20,40,80,160,320,640,1280))
ggplot(output,aes(x=n,y=Reject,color=Model))+geom_path(size=2)+scale_x_log10()+labs(x="Sample Size (log-scale)",y="Proportion of Rejected H0")+base_theme
ggplot(output,aes(x=n,y=Bias,color=Model))+geom_path(size=2)+scale_x_log10()+labs(x="Sample Size (log-scale)",y="Bias in the slope coefficient")+base_theme

Figure 1

For this simulation round the slope coefficient (b1) was set to 0 (no effect of x on y), and only the sample size varied. The top panel shows the average proportion of times that the p-value of the slope coefficient was lower than 0.05 (H0: b1=0 rejected). We see that for low sample sizes (<40) the negative binomial model has a higher proportion of rejected H0 (type I error rate), but this difference between the models disappears once we reach sample sizes bigger than 100. The bottom panel shows the bias (estimated value – true value) in the estimated coefficient. For very low sample sizes (n=10), Log0001, Negative Binomial and Quasipoisson have a higher bias than Log1 and LogHalf. For larger sample sizes the difference between the GLM team (NB and QuasiP) and the LM one (Log1 and LogHalf) gradually decreases, and both teams converge to a bias around 0; only Log0001 behaves very badly. From what we see here it seems that Log1 and LogHalf are good choices for count data: they have low type I error and low bias along the whole sample size gradient.

The issue is that an effect of exactly 0 never exists in real life, where most effects are small (but non-zero), so the null hypothesis will never be true. Let's now look at how the different models behave when we vary b1, first alone and then crossed with sample size variation.

#range over b1
output2<-compute.stats(NRep=500,b1.range = seq(-2,2,length=17))
ggplot(output2,aes(x=b1,y=Reject,color=Model))+geom_path(size=2)+base_theme+labs(y="Proportion of Rejected H0")
ggplot(output2,aes(x=b1,y=Bias,color=Model))+geom_path(size=2)+base_theme+labs(y="Bias in the slope coefficient")


Here the sample size was set to 100. What we see in the top graph is that for a slope of exactly 0 all models have a similar average proportion of rejection of the null hypothesis. As b1 becomes smaller or bigger, the average proportion of rejection shows a very similar increase for all models except Log0001, which increases more slowly. These curves basically represent the power of the models to detect an effect and are very similar to Fig. 2 in the Ives 2015 paper. Now the bottom panel shows that all the LM models behave badly concerning their bias: they have only a small bias for coefficients very close to 0, and as the coefficient gets bigger the absolute bias increases. This means that even if the LM models are able to detect an effect with similar power, the estimated coefficient is wrong. This is due to the value added to the untransformed counts in order to avoid -Inf for 0s. I have no idea how one might arithmetically take these added values into account and then remove their effect …

Next let’s cross variation in the coefficient with sample size variation:

#range over n and b1
ggplot(output3,aes(x=n,y=Reject,color=Model))+geom_path(size=2)+scale_x_log10()+facet_wrap(~b1)+base_theme+labs(x="Sample size (log-scale)",y="Proportion of rejected H0")
ggplot(output3,aes(x=n,y=Bias,color=Model))+geom_path(size=2)+scale_x_log10()+facet_wrap(~b1)+base_theme+labs(x="Sample size (log-scale)",y="Bias in the slope coefficient")

Figure 3

The top panel shows one big issue of focusing only on the significance level: rejection of H0 depends not only on the size of the effect but also on the sample size. For example for b1=0.75 (a rather large value since we work on the exponential scale) fewer than 50% of all models rejected the null hypothesis for a sample size of 10. Of course as the effect size gets larger the impact of the sample size on the rejection of the null hypothesis is reduced. However most effects around the world are small, so we need big sample sizes to be able to “detect” them using null hypothesis testing. The top graph also shows that NB is slightly better than the other models and that Log0001 again has the worst performance. The bottom graphs show something interesting: the bias is quasi-constant over the sample size gradient (maybe if we looked closer we would see some variation). Irrespective of how many data points you collect, the LMs will always have a bigger bias than the GLMs (except in the artificial case of b1=0).

To finish, in Ives 2015 the big surprise was the explosion of type I error with increasing variation in individual-level random error (adding a random, normally distributed value to the linear predictor of each data point and varying the standard deviation of these random values), as can be seen in Fig. 3 of the paper.

#range over b1 and sd.eps
ggplot(output4,aes(x=sd.eps,y=Reject,color=Model))+geom_path(size=2)+facet_wrap(~b1)+base_theme+labs(x="",y="Proportion of rejected H0")
ggplot(output4,aes(x=sd.eps,y=Bias,color=Model))+geom_path(size=2)+facet_wrap(~b1)+base_theme+labs(x="Standard Deviation of the random error",y="Bias of the slope coefficient")

Figure 4

Before looking at the figure in detail please note that a standard deviation of 2 in this context is very high; remember that these values are added to the linear predictor, which is exponentiated, so we end up with very large deviations. In the top panel there are two surprising results: the sign of the coefficient affects the pattern of null hypothesis rejection, and I do not see the explosion of rejection rates for NB or QuasiP that is presented in Ives 2015. In his paper Ives reported the LRT test for the NB models, while I report the p-values from the model summary directly (Wald test). If some people around have computing power it would be interesting to see if changing the seed and/or increasing the number of replications leads to different patterns … The bottom panel shows again that the LM biases are large; the NB and QuasiP models show an increase in bias with the standard deviation, but only when the coefficient is negative (I suspect some issue with exponentiating large positive random errors); as expected the GLMM performs best in this context.

Pre-conclusion: in real life we would of course rarely have a model with only one predictor; most of the time we will build larger models with complex interaction structures between the predictors. This will of course affect both H0 rejection and bias, but this is material for a next post 🙂

Let’s wrap it up: we’ve seen that even if log-transformation seems a good choice for keeping a lower type I error rate than GLMs, this advantage will be rather minimal when using empirical data (no effects are exactly 0) and potentially dangerous (large bias). Ecologists sometimes have the bad habit of turning their analysis into a star hunt (R's standard model output gives stars to significant effects), and focusing only on models that behave best towards significance (but with large bias) does not seem a good strategy to me. More and more people call for increasing the predictive power of ecological models; we then need modelling techniques that are able to precisely (with low bias) estimate the effects. In this context transforming the data to make them somehow fit normal assumptions is sub-optimal; it is much more natural to think about what type of process generated the data (normal, Poisson, negative binomial, with or without hierarchical structure) and then model it accordingly. There are extensive discussions nowadays about the use and abuse of p-values in science, and I think that in our analyses we should slowly but surely shift our focus from “significance/p-values<0.05/star hunting” alone to a more balanced mix of effect sizes (or standardized slopes), p-values and R-squared.

Confidence Intervals for prediction in GLMMs

With LMs and GLMs the predict function can return the standard error for the predicted values on either the observed data or on new data. This is then used to draw confidence or prediction intervals around the fitted regression lines. The confidence interval (CI) focuses on the regression line and can be interpreted as (assuming that we draw a 95% CI): “If we repeated our sampling X times, the regression line would fall within this interval 95% of the time”. The prediction interval, on the other hand, focuses on single data points and could be interpreted as (again assuming a 95% interval): “If we sampled X times at these particular values of the explanatory variables, the response value would fall within this interval 95% of the time”. The wikipedia page has some nice explanations about the meaning of confidence intervals.
For GLMMs the predict function does not allow one to derive standard errors, the reason being (from the help page of predict.merMod): “There is no option for computing standard errors of predictions because it is difficult to define an efficient method that incorporates uncertainty in the variance parameters”. This means there is, for now, no way for the computation of the standard errors of predicted values to account for the fact that the fitted random effect standard deviations are just estimates that may be more or less well estimated. We can however still derive confidence or prediction intervals, keeping in mind that we might underestimate the uncertainty around the estimates.


#first case simple lmer, simulate 100 data points from 10 groups with one continuous fixed effect variable
f<-gl(n = 10,k = 10)
#the fixed effect coefficient
#the random effect
#the simulated response values
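The simulation sketched by the comments above was stripped; a reconstruction could be the following. The coefficient values, the random-effect sd and the seed are my own assumptions (`f` is repeated so the chunk stands alone):

```r
set.seed(42)
#10 groups of 10 observations
f <- gl(n = 10, k = 10)
x <- runif(100, 0, 10)          #one continuous fixed effect variable
fixed.eff <- c(1, 2)            #intercept and slope (assumed values)
rnd.eff <- rnorm(10, 0, 1.2)    #group-level random intercepts (assumed sd)
#the simulated response values
y <- rnorm(100, mean = fixed.eff[1] + fixed.eff[2] * x + rnd.eff[f], sd = 1)
```

The model is then fitted with something like `m <- lmer(y ~ x + (1 | f))`, and `mm` below is the model matrix of the new data used for prediction.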


#first CI and PI using predict-like method, using code posted here:
#predict(m,newdat,re.form=NA) would give the same results
pvar1 <- diag(mm %*% tcrossprod(vcov(m),mm))
tvar1 <- pvar1+VarCorr(m)$f[1] # must be adapted for more complex models
newdat <- data.frame(
  newdat
  , plo = newdat$y-1.96*sqrt(pvar1)
  , phi = newdat$y+1.96*sqrt(pvar1)
  , tlo = newdat$y-1.96*sqrt(tvar1)
  , thi = newdat$y+1.96*sqrt(tvar1)
)

#second version with bootMer
#we have to define a function that will be applied to the nsim simulations
#here we basically get a merMod object and return the fitted values
predFun<-function(.) mm%*%fixef(.) 
bb<-bootMer(m,FUN=predFun,nsim=200) #do this 200 times
#as we did this 200 times the 95% CI will be bordered by the 5th and 195th value
bb_se<-apply(bb$t,2,function(x) x[order(x)][c(5,195)])

legend("topleft",legend=c("Fitted line","Confidence interval","Prediction interval","Bootstrapped CI"),col=c("red","blue","orange","darkgreen"),lty=2,lwd=2,bty="n")


This looks pretty familiar, the prediction interval being always bigger than the confidence interval.
Now, in the help page for the predict.merMod function, the authors of the lme4 package wrote that bootMer should be the preferred method to derive confidence intervals from GLMMs. The idea is to simulate new data from the model N times and compute some statistic of interest. In our case we are interested in the bootstrapped fitted values, to get a confidence interval for the regression line. bb$t is a matrix with the observations in the columns and the different bootstrapped samples in the rows. To get the 95% CI for the fitted line we then need to take the [0.025*N, 0.975*N] values of the sorted bootstrapped values.

The bootstrapped CI falls pretty close to the “normal” CI, even though new random effect values were drawn for each bootstrapped sample (because use.u=FALSE by default in bootMer).

Now let’s turn to a more complex example, a Poisson GLMM with two crossed random effects:

#second case more complex design with two crossed RE and a poisson response
f1<-gl(n = 10,k = 10)




#for GLMMs we have to back-transform the prediction after adding/removing the SE
pvar1 <- diag(mm %*% tcrossprod(vcov(m),mm))
tvar1 <- pvar1+VarCorr(m)$f1[1]+VarCorr(m)$f2[1]  ## must be adapted for more complex models
newdat <- data.frame(
    plo = exp(y-1.96*sqrt(pvar1))
  , phi = exp(y+1.96*sqrt(pvar1))
  , tlo = exp(y-1.96*sqrt(tvar1))
  , thi = exp(y+1.96*sqrt(tvar1))
)

#second version with bootMer
predFun<-function(.) exp(mm%*%fixef(.)) 
bb_se<-apply(bb$t,2,function(x) x[order(x)][c(5,195)])

legend("topleft",legend=c("Fitted line","Confidence interval","Prediction interval","Bootstrapped CI"),col=c("red","blue","orange","darkgreen"),lty=2,lwd=2,bty="n")


Again in this case the bootstrapped CI fell pretty close to the “normal” CI. We have now seen three different ways to derive intervals representing the uncertainty around the regression lines (CI) and the response points (PI). Choosing among them depends on what you want to see (what is the level of uncertainty around my fitted line, versus: if I sample new observations, which values will they take), but also, for complex models, on computing power, as bootMer can take some time to run for GLMMs with many observations and a complex model structure.