APPENDIX 2. Power Simulation Methodology

Monte Carlo simulations were performed using routines from Numerical Recipes (Press et al. 1992) and route-regression code by Thomas (1997). We simulated 20-year trajectories for each site following an exponential model, which required specification of site-specific initial abundance and trend magnitude. Exponential trends were based on the assumption that populations experience constant rates of decline (Caughley and Sinclair 1994). Because achieving consistent timing of surveys from year to year is logistically difficult, randomly timed surveys were assumed. Initial abundance was determined by randomly selecting a deviate from N(N₀, τ₂²), truncated at zero. Similarly, trend magnitude was determined by randomly selecting a deviate from N(-3%, τ₃²) when generating population trajectories, and from N(-1%, τ₃²) when generating community metric trajectories. Population trends of -3% per year were simulated because this is the effect-size goal of the Alberta Forest Biodiversity Monitoring Program (Farr et al. 1999); over 20 years, this effect size equates to a 45% decline. Simulated community trends were reduced to -1% per year to account for the reduced sensitivity of community metrics to disturbance; over 20 years, this effect size equates to an 18% decline. Negative trends were simulated to achieve conservative power estimates and because they are of greater conservation interest: for a given effect size, positive exponential trends produce larger overall changes and are therefore easier to detect.
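
The sketch below illustrates how a single site trajectory could be generated under these assumptions. It is not the original implementation (which used Numerical Recipes routines); the names n0_mean, tau2, and tau3 simply stand in for the initial-abundance mean, its standard deviation, and the trend standard deviation, and the (1 + r)^t parameterization is one plausible reading of a constant annual rate of change.

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_normal(mean, sd, rng, max_tries=1000):
    """Draw one normal deviate, rejecting values below zero (zero-truncation)."""
    for _ in range(max_tries):
        x = rng.normal(mean, sd)
        if x >= 0.0:
            return x
    return 0.0  # fallback; only reached if the mean is far below zero

def site_trajectory(n0_mean, tau2, trend_mean, tau3, n_years=20, rng=rng):
    """Expected abundance in each of n_years under an exponential trend.

    n0_mean, tau2   : mean and SD of initial abundance (zero-truncated normal)
    trend_mean, tau3: mean and SD of the annual trend, e.g. -0.03 for -3%/yr
    """
    n0 = truncated_normal(n0_mean, tau2, rng)  # site-specific starting abundance
    r = rng.normal(trend_mean, tau3)           # site-specific annual rate of change
    years = np.arange(n_years)
    return n0 * (1.0 + r) ** years             # exponential (constant-rate) model

# A -3%/yr trend over 20 years implies roughly a 45% decline: 1 - 0.97**20 ≈ 0.46
```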

Monitoring data collected from a site within a given year were simulated as Gaussian deviates (truncated at zero) with mean equal to the point on the site-specific trajectory for that year and variance τ₁². Although a discrete distribution, such as the Poisson, is often used to generate count data, a continuous distribution was required to accommodate counts from multiple surveys that were averaged prior to analysis. Both the lognormal and zero-truncated normal distributions were considered for generating abundances; the zero-truncated normal was selected because the lognormal underrepresented zero counts. The zero-truncated normal was also used to generate community metrics because they are calculated by summing random variables and, by the central limit theorem (Hilborn and Mangel 1997), should therefore be approximately normally distributed.
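
Continuing the sketch above (and reusing its truncated_normal helper), one year of monitoring data for a site could be drawn as a zero-truncated normal deviate centred on the trajectory, with tau1 standing in for the within-site standard deviation; the names and parameter values are illustrative, not the original code.

```python
def simulate_counts(trajectory, tau1, rng):
    """One simulated observation per year: a zero-truncated normal deviate
    with mean equal to that year's expected abundance and SD tau1.
    (Real monitoring values may be averages of several surveys, which is
    why a continuous distribution is used rather than a Poisson.)"""
    return np.array([truncated_normal(mu, tau1, rng) for mu in trajectory])

# Illustrative parameter values only:
traj = site_trajectory(n0_mean=50.0, tau2=10.0, trend_mean=-0.03, tau3=0.01)
obs = simulate_counts(traj, tau1=5.0, rng=rng)
```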

To analyze the simulated data, we applied route regression (Geissler and Sauer 1990). Route regression is a two-stage estimator: trends are first estimated at each site by fitting the exponential model to the data, and the site trends are then combined to estimate the regional trend. Trend significance was estimated following the method used by the U.S. National Biological Service when analyzing Breeding Bird Survey (BBS) data, as described in Thomas (1997). This involved taking 400 bootstrap samples of the site trends and using the bootstrap mean and variance in a two-sided z test of the null hypothesis that the regional trend equals zero. Significance was set at 80% (α = 0.20), following the recommendation of Gibbs et al. (1998) that power and significance be set at 90% and 80%, respectively, for ecological monitoring programs. The simulation process was repeated 1000 times (Link and Hatfield 1990), and power was estimated as the proportion of simulations for which the trend was significant and its direction was correctly identified.
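
A simplified sketch of this analysis stage follows, under stated assumptions: site trends are estimated as a log-linear least-squares slope rather than with the weighted estimating equations of Geissler and Sauer (1990), sites receive equal weight in the bootstrap, and simulate_one_replicate is a hypothetical callable that runs the generation steps above and returns one replicate's estimated regional trend together with its significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def site_trend(counts):
    """Stage 1: estimate a site's exponential trend as the slope of log(count)
    on year, back-transformed to a proportional annual rate of change.
    (A simple stand-in for the route-regression site-level fit.)"""
    years = np.arange(len(counts))
    logged = np.log(np.maximum(counts, 1e-6))  # guard against zero counts
    slope = np.polyfit(years, logged, 1)[0]
    return np.expm1(slope)

def regional_test(site_trends, n_boot=400, alpha=0.20, rng=rng):
    """Stage 2: bootstrap the site trends, then use the bootstrap mean and SD
    in a two-sided z test of H0: regional trend = 0 at the 80% level."""
    t = np.asarray(site_trends)
    boot_means = np.array([
        rng.choice(t, size=t.size, replace=True).mean() for _ in range(n_boot)
    ])
    z = boot_means.mean() / boot_means.std(ddof=1)
    p = 2.0 * (1.0 - stats.norm.cdf(abs(z)))
    return boot_means.mean(), p < alpha

def estimate_power(simulate_one_replicate, n_reps=1000):
    """Power = proportion of replicates in which the regional trend is judged
    significant AND its estimated direction is negative (the true direction)."""
    hits = 0
    for _ in range(n_reps):
        estimate, significant = simulate_one_replicate()
        if significant and estimate < 0:
            hits += 1
    return hits / n_reps
```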