Virtual ecologist approach: using simulations to evaluate experimental designs

This blog is the first in a series of modules that I will present on using simulations to evaluate experimental designs. The series will mostly consist of fairly simple simulations that I have built over the last decade to answer questions about experimental design and analysis for my own research and the research of my colleagues. I have found simulation exercises to be a tremendous tool for learning about methods in applied ecology, and my hope is that whoever reads this will enjoy it and perhaps take away some useful ideas and skills to improve their own research.

Often in ecological research we start with a question and then propose an experimental design to address it. The experimental design is usually made up of a sampling design and a statistical analysis; however, it is typically unknown, a priori, how the design will perform for answering the question of interest. This is a minor problem when the experimental design is simple and we do not expect the statistical assumptions of the analysis to be violated. Under those circumstances, simple closed-form calculations and approximations of statistical power are available, and canned programs exist to perform the power analyses for us. In contrast, when sampling designs and analyses are complex and violations of some assumptions are expected, there are no simple canned programs or closed-form calculations to evaluate design performance. Under these circumstances, a common method for evaluating experimental designs is stochastic simulation, where the data collection process and the analysis are repeatedly simulated to determine whether the design can return the correct answer. This approach has been termed the virtual ecologist approach in the literature because it simulates the ecologist collecting data, analyzing them, and drawing inference.

The advantages of engaging in a modeling exercise such as the virtual ecologist are tremendous. The researcher can evaluate various sampling designs and analyses to determine which will generate the appropriate statistical power. The design can be evaluated for how well it performs when statistical assumptions are violated. Furthermore, simulating the experiment before conducting it forces the researcher to clarify objectives and to practice interpreting results, which, in practice, is often overlooked until after data are collected. If these exercises were adopted more often by ecologists, greater success of ecological experiments would be realized.

A simulation exercise to evaluate an experimental design usually contains four components, which are depicted in Figure 1 (a conceptual model of the approach). These components include some sort of representation of the true system, a model of data collection, a statistical analysis, and an inference. The inference that is drawn is then compared to the true system to determine how the experimental design performed. When the simulation is repeated for many iterations (1,000-10,000), the probability of a successful or unsuccessful inference can be estimated.
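To make these four components concrete, here is a minimal sketch in R of a virtual ecologist simulation for a hypothetical two-group comparison. The sample size, effect size, and t-test analysis are assumptions chosen for illustration, not part of the example developed below.

#Minimal sketch of the four components (hypothetical two-group design)
one_iteration = function(n, effect){
 control = rnorm(n, 0, 1) #true system + data collection, group one
 impact = rnorm(n, effect, 1) #true system + data collection, group two
 p = t.test(control, impact)$p.value #statistical analysis
 p < 0.05 #inference: was the (truly present) effect detected?
}
#repeating many iterations estimates the probability of a successful inference
mean(replicate(5000, one_iteration(n = 20, effect = 0.5)))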

There are many examples of the application of the virtual ecologist approach in the literature; however, they can be difficult to find because the focus of the work is typically the question that is being addressed by the research as opposed to the methodology used. Zurell et al. (2010) reviewed the virtual ecologist approach and reported many studies that have applied the method, making it a good paper to read as an introduction to the topic. My work has often included the use of stochastic simulations for evaluating ecological monitoring designs and I report a list of some of the key publications that I am aware of at the bottom of this blog.

An example with multiple regression:

The first example of the virtual ecologist approach that I want to present is a simulation of a multiple regression where AIC is used as a means of hypothesis testing to investigate which independent variables are important for determining the dependent variable. The simulation can be used to evaluate the type-I error rates and statistical power across a range of sample sizes, levels of residual error, effect sizes, AIC thresholds, and more (with some small modifications). For this module, we will focus on estimating the type-I error rate and statistical power for various sample sizes and levels of residual error. In the simulation I evaluate three independent variables where the first two are important (i.e., simulated with non-zero coefficients) and the third is not (simulated with a coefficient of zero). The statistical model is a multiple regression and the inference is drawn by comparing multiple models with AIC. For a model to be selected by AIC, two criteria must be met: (i) the model must have a delta AIC < 2, and (ii) it must have the lowest AIC of the least parameterized models with a delta AIC < 2. The simulation occurs in four steps. First, the data are simulated; second, the data are fit with each of the models from the model set; third, AIC is used to determine the best model (in terms of the criteria stated above); and fourth, a logical test is used to indicate whether the chosen model is the correct model (sometimes referred to as the data-generating model). We then reiterate the simulation a number of times to determine the proportion of iterations where the correct model was chosen. This proportion can be thought of as an analog to statistical power. The proportion of times that a model is selected that includes a variable simulated with a value of zero can be thought of as an analog to the type-I error rate.
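To illustrate the selection rule, here is a small worked example with four hypothetical candidate models; the AIC values and parameter counts are made up for illustration.

aic = c(100.5, 101.0, 103.2, 100.1) #AIC of four candidate models
params = c(4, 3, 3, 2) #number of parameters in each model
delta = aic - min(aic) #delta AIC: 0.4, 0.9, 3.1, 0.0
cand = which(delta < 2) #criterion (i): models 1, 2, and 4
least = cand[params[cand] == min(params[cand])] #fewest parameters: model 4
least[which.min(delta[least])] #criterion (ii): model 4 is selected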

Results

The code posted below took about 1.5 hours to run on my laptop, so be patient (a parallel variant is sketched after the code). If you would like to run a faster preliminary analysis, reduce the input parameter ‘iter’ to a lower value such as 100. If you run the code as is, the results should look something like the two contour plots produced at the end of the script. In the first plot, you can see that the probability of choosing the correct model increases with sample size and decreases with increasing residual error. You can also see that the type-I error rate is approximately 0.05 and varies little with different sample sizes and levels of residual error (second plot). This behavior of the type-I error rate reflects two aspects of the simulation. First, we do not expect the type-I error rate to vary systematically with different sample sizes and levels of residual error because we simulated the data from the exact model with which we analyzed them; in other words, all statistical assumptions were met by our simulated data. Second, the AIC threshold of 2 determined the magnitude of the type-I error rate. Had we chosen a lower or higher threshold, we would expect to see different type-I error rates.

Additional exercises

Although this simulation is simple, it can be used to learn quite a bit about the statistical properties of the analysis. For example, with minor modifications to the code you could evaluate how correlation among variables, various AIC thresholds, and violations of the assumption of normal error structure influence the statistical power and the type-I error rate; sketches of a few such modifications follow. The possibilities are nearly endless. I hope you enjoy playing around with this simulation. As this is my first post on this topic, I am sure to find better ways to present the methods. If you have any comments or suggestions, please let me know.
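One way to implement these exercises is sketched below. These snippets would replace the corresponding lines inside fun(); the mixing construction for correlated predictors and the value of ‘rho’ are my additions for illustration, not part of the original code.

#Correlated predictors: mix a shared component z into x1 and x2 so that
#cor(x1, x2) is approximately rho
rho = 0.7
z = rnorm(n,0,1)
x1 = sqrt(rho)*z + sqrt(1-rho)*rnorm(n,0,1) #variable one
x2 = sqrt(rho)*z + sqrt(1-rho)*rnorm(n,0,1) #variable two

#Non-normal (skewed) errors with mean 0 and standard deviation sig
error = sig*(rexp(n,1)-1) #residual error

#A different AIC threshold: add a 'threshold' argument to fun() and
#replace each 'delta < 2' with 'delta < threshold'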

#------------------------------------------------------------------------------#
# Simulates data
#------------------------------------------------------------------------------#
fun=function(n,sig,int,b1,b2,b3){
x1 = rnorm(n,0,1) #variable one
x2 = rnorm(n,0,1) #variable two
x3 = rnorm(n,0,1) #variable three
error = rnorm(n,0,sig) #residual error

y = int + b1*x1 + b2*x2 + b3*x3 + error #dependent variable

models = list( #fits all possible models and stores them in a list
fit1 = lm(y ~ x1 + x2 + x3),
fit2 = lm(y ~ x1 + x2),
fit3 = lm(y ~ x1 + x3),
fit4 = lm(y ~ x3 + x2),
fit5 = lm(y ~ x1),
fit6 = lm(y ~ x2),
fit7 = lm(y ~ x3),
fit8 = lm(y ~ 1) #intercept-only model
)

aic = sapply(models, AIC) #Calculates AIC for each model in the list

delta = aic-min(aic) #Calculates delta aic for all models
params = c(4,3,3,3,2,2,2,1) #number of regression coefficients in each model (including intercept)

cand = which(delta < 2) #models meeting criterion (i): delta AIC < 2
least = cand[params[cand] == min(params[cand])] #the least parameterized of these
chosen = least[which.min(delta[least])] #criterion (ii): lowest AIC among those
correct = chosen == 2 #TRUE when the correct (data-generating) model is chosen
#TRUE when the full model (which adds the unimportant x3) is the only model
#with delta AIC < 2, i.e. when it is selected under the rule above
wrong = (which.min(delta) == 1) & (sum(delta < 2) == 1)

#output of function
c(correct,wrong)
}
#------------------------------------------------------------------------------#
# Simulation inputs
#------------------------------------------------------------------------------#

int = 3 #The intercept of the data-generating model
b1 = 3 #The first parameter value of the data-generating model
b2 = -1 #The second parameter value of the data-generating model
b3 = 0 #The third parameter value of the data-generating model

n = seq(10,200,length=20) #vector of sample sizes
sig = seq(.0001,5,length=20) #vector of residual error values (standard deviations)

#------------------------------------------------------------------------------#
# This part runs the simulation
#------------------------------------------------------------------------------#

iter = 1000 #number of iterations of the simulation

power = array(NA,dim = c(length(n),length(sig),2)) #array to hold power (slice 1) and type-I error (slice 2) results

ptm = proc.time()
for(i in 1:length(n)){
 for(j in 1:length(sig)){
 power[i,j,] = rowSums(replicate(iter,fun(n[i],sig[j],int,b1,b2,b3),simplify=T),
 na.rm=T)/iter
 }}
endtime = proc.time()-ptm
endtime[3]/60 #elapsed run time in minutes

dev.new(height=8,width=4) #opens a new plot window; use windows() on Windows if preferred
par(mfrow=c(2,1))
contour(n,sig,power[,,1],xlab='Sample size',ylab='Residual error',
 main='Statistical power')
contour(n,sig,power[,,2],xlab='Sample size',ylab='Residual error',
 main='Type-I error rate')
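If the run time is an issue, the grid of sample sizes and error levels is embarrassingly parallel. The following is a sketch (my addition, not part of the original code) of how the double loop could be spread across cores with R’s built-in parallel package.

library(parallel)
cl = makeCluster(detectCores() - 1) #one worker per core, minus one
clusterExport(cl, c('fun','iter','int','b1','b2','b3','n','sig'))
grid = expand.grid(i = seq_along(n), j = seq_along(sig)) #all (n, sig) pairs
res = parApply(cl, grid, 1, function(g)
 rowSums(replicate(iter, fun(n[g[1]], sig[g[2]], int, b1, b2, b3)))/iter)
stopCluster(cl)
power = array(t(res), dim = c(length(n), length(sig), 2)) #same layout as above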

Reading list

Boyce et al. 2012

Holt et al. 2011

Courbois et al. 2007

Jackson et al. 2008

Joseph et al. 2006

Kershner and Marschall 1998

Korman and Higgins 1997

Mansson et al. 2011

Maynard and Kaplan 2013

Nielsen et al. 2009

Nuno et al. 2013

Renner et al. 2011

Reynolds et al. 2011

Rhodes and Jonzen 2011

Enough for now…
