Copyright © 2013 by the author(s). Published here under license by The Resilience Alliance.
The following is the established format for referencing this article:
Margoluis, R., C. Stem, V. Swaminathan, M. Brown, A. Johnson, G. Placci, N. Salafsky, and I. Tilders. 2013. Results chains: a tool for conservation action design, management, and evaluation. Ecology and Society 18(3): 22.
http://dx.doi.org/10.5751/ES-05610-180322
Insight, part of a special feature on Exploring Opportunities for Advancing Collaborative Adaptive Management (CAM): Integrating Experience and Practice
Results Chains: a Tool for Conservation Action Design, Management, and Evaluation
Foundations of Success
ABSTRACT
Every day, the challenges to achieving conservation grow. Threats to species, habitats, and ecosystems multiply and intensify. The conservation community has invested decades of resources and hard work to reduce or eliminate these threats. However, it struggles to demonstrate that its efforts are having an impact. In recent years, conservation project managers, teams, and organizations have found themselves under increasing pressure to demonstrate measurable impacts that can be attributed to their actions. To do so, they need to answer three important questions: (1) Are we achieving our desired impact? (2) Have we selected the best interventions to achieve our desired impact? (3) Are we executing our interventions in the best possible manner?
We describe results chains, an important tool for helping teams clearly specify the theory of change behind the actions they are implementing. Results chains help teams make their assumptions behind an action explicit and position them to develop relevant objectives and indicators to monitor and evaluate whether their actions are having the intended impact. We describe this tool and how it is designed to tackle the three main questions above. We also discuss the purposes for which results chains have been used and the implications of their use. By using results chains, the conservation community can learn, adapt, and improve at a faster pace and, consequently, better address the ongoing threats to species, habitats, and ecosystems.
Key words: adaptive management; assumption; effectiveness; evaluation; impact; measure; monitoring; outcome; planning; results chains; theory of change
INTRODUCTION
Conservation project managers, and the organizations for which they work, are under increasing pressure to demonstrate measurable and attributable impact of their actions (Ferraro and Pattanayak 2006). No longer does the concept: “We’re good people doing good work – trust us...” satisfy donors and the public at large (Sutherland et al. 2004). Constituencies want results, and they want proof that the results were, in fact, achieved by the organization they supported (Pullin and Knight 2003). Therefore, to determine the effectiveness and relative success of conservation interventions and for the conservation community to advance as a whole, managers need to be able to answer three important questions (Salafsky et al. 2002): (1) What should our goals be, and how do we measure progress in reaching them? (2) How can we most effectively take action to achieve conservation? (3) How can we do conservation better?
With respect to the first question, for many years, if managers and researchers did monitor and evaluate impact, they typically measured only variables that reflect the current or trending status of the biodiversity they were trying to conserve. However, measuring solely the status of biodiversity (species, habitats, ecosystems) is usually insufficient to gauge the efficacy of the interventions an organization is implementing or how well it is implementing them (Salzer and Salafsky 2006). In addition, conservation teams have often struggled with what indicators they should use to measure success and have not been systematic, strategic, or focused in their choices. Fortunately, over the last decade, notable strides have been made toward developing, adopting, and implementing standards for systematic project and program management and monitoring, e.g., the Open Standards for the Practice of Conservation (Conservation Measures Partnership 2007), The Nature Conservancy’s Conservation Action Planning, and WWF’s Project and Program Management Standards.
In terms of the second question, the conservation community has a long tradition of selecting interventions without much evidence that these interventions work under the conditions in which they are executed (Pullin and Knight 2003, Pullin et al. 2004, Sutherland et al. 2004). Historically, project managers have selected an intervention because they think it will work or because it is what they believe their organization does best. In the conservation community, we often make huge assumptions without testing them in any systematic way. We tend to indiscriminately accept or reject interventions based on limited examples of success or failure. The utility of a conservation intervention, however, is usually not so absolute. Rarely are there interventions that work or do not work under all conditions. The challenge for conservation practitioners is to determine which conservation interventions will be most successful in their context (Salafsky et al. 2002). Moreover, managers need to be aware of whether they are implementing their interventions in the best possible manner. This requires information systems that track not only project outcomes and impacts, but also financial inputs and resultant outputs to make sure that teams are on the right track and are taking the shortest and easiest path to achieving desired results.
Implicit in the third question is how can we learn what works, what does not work, and why and how can we learn from one another. As such, it takes the second question a step further. In conservation, we have often worked in isolation, and historically, if we did any monitoring, we did it primarily for reporting and accountability purposes (Stem et al. 2005). Though more teams are using monitoring for learning purposes, it is still fairly rare, and the learning remains primarily within the project team. The conservation community needs to get better at monitoring for learning purposes and finding ways to share that learning with others working under similar situations (Salafsky et al. 2002).
By not creating a system, culture, and process for asking and answering these three questions, the conservation community is vulnerable to the following:
- Compromised ability to demonstrate effectiveness,
- Diminished capacity to systematically learn from experience and across projects,
- Reduced power to avoid duplicating efforts and reinventing the wheel,
- Inability to gauge the extent to which funds are well spent, and ultimately
- Less faith from society that there is utility in supporting the work of conservation organizations.
To address these challenges, a group of international conservation organizations came together in 2002 to form the Conservation Measures Partnership (CMP), which currently consists of 23 nongovernmental and donor organizations (http://conservationmeasures.org/). One of the most significant products of CMP is a set of conservation project design, management, and monitoring standards that help teams practice adaptive management and improve their conservation efforts (Conservation Measures Partnership 2007). In particular, these Open Standards for the Practice of Conservation define a general approach and specific tools required to implement quality conservation interventions. Governmental and nongovernmental institutions across the world are increasingly adopting these open standards and the Miradi software that helps practitioners apply them.
One of the primary tools of the open standards is the results chain. We describe this tool and how it helps tackle the three questions described above and provides a framework for being explicit about assumptions and doing effectiveness monitoring. We also discuss the purposes and implications of the use of results chains, including the benefits of the tool as a foundation for focused project and program monitoring and evaluation for effectiveness.
WHAT IS A “RESULTS CHAIN”?
A results chain is a diagram that depicts the assumed causal linkage between an intervention and desired impacts through a series of expected intermediate results (Foundations of Success 2009). In the conservation community, Foundations of Success (FOS) has pioneered the development and use of results chains since the late 1990s (Margoluis and Salafsky 1998, Foundations of Success 2009, Margoluis et al. 2009a). The results chain tool, however, traces its roots to the field of evaluation long before the conservation community started using it. In addition, related tools for testing assumptions, “theory of change” tools, have been used in various fields for over thirty years (Stem et al. 2005). These include decision trees, conceptual models (Margoluis and Salafsky 1998, Margoluis et al. 2009a), and logic models. The term “theory of change” is used to describe the sequence of outcomes that is expected to occur as a result of an intervention (Weiss 1995; http://www.theoryofchange.org/). Theory of change evaluations examine whether these expected outcomes actually materialize and to what extent they can be attributed to interventions. Before describing theory of change tools, it is helpful to review definitions of commonly used terms related to results (Fig. 1).
The most common representations of theories of change are logic models and results chains. Logic models are a general, yet systematic and visual way to present the perceived relationships among the resources used to operate a program (inputs), the activities undertaken (outputs), and the intended changes or results (outcomes; den Heyer 2001, W. K. Kellogg Foundation 2001). A good logic model will show detailed information about each of these components. For example, the logic model in Figure 2 includes a comprehensive list of all the inputs provided by the project team, the outputs for which it will be directly responsible, and the outcomes it expects the inputs and outputs will produce.
The logic model in Figure 2 is among the more comprehensive examples of a logic model, yet it has several shortcomings that are typical of logic models in general. First, it lists the inputs, outputs, and outcomes in columns that do not explicitly link one result to another. Consequently, one cannot precisely trace the connection or logic horizontally across columns. Nor is it clear if everything in one column, e.g., inputs, equally influences everything in the next column, e.g., outputs. Also, the figure does not indicate how two outcomes listed in the same column might influence or be a necessary condition for one another. Another major shortcoming of logic models is that the results in each column tend to be very general, e.g., “skills,” “motivations,” and “economic conditions.” Such general wording does not help project teams specify their exact expectations and determine what their goals, objectives, and associated indicators should be. For these reasons, logic models fall short as a planning or evaluation tool.
Results chains are often equated to logic models, but in reality, they are much more specific and show direct assumed relationships among discrete actions, intermediate outcomes, and the desired final impact. In conservation terms, they show how a project team believes a certain conservation action will influence indirect threats, opportunities, and direct threats to have a positive impact on species, ecosystems, and/or natural resources. In particular, a results chain shows a series of “if...then” relationships that define how project team members believe an intervention is going to contribute to a specific impact (Foundations of Success 2009). For example, the bottom half of Figure 3 shows a very simple results chain with the following theory of change:
If the team implements a strategy to substitute other wood, for example, plantation grown ‘Melina,’ for mangrove wood in construction projects → Then there will be reduced use of mangrove wood for construction;
If there is reduced use of mangrove wood for construction → Then demand for mangrove wood will decline;
If demand for mangrove wood declines → Then mangrove harvesting will be reduced;
If mangrove harvesting is reduced → Then mangrove habitat will improve.
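The if...then structure above is simply an ordered sequence of expected results, each linked to the next. A minimal sketch (purely illustrative, not part of the Open Standards or Miradi tooling) of how such a chain can be represented and rendered as theory-of-change statements:

```python
# A results chain as an ordered sequence of expected results.
# The result names follow the mangrove example in the text; the data
# structure itself is an illustrative assumption, not a standard format.

MANGROVE_CHAIN = [
    "the team implements a wood-substitution strategy",
    "there is reduced use of mangrove wood for construction",
    "demand for mangrove wood declines",
    "mangrove harvesting is reduced",
    "mangrove habitat improves",
]

def theory_of_change(chain):
    """Render each adjacent pair of results as an 'if...then' statement."""
    return [f"If {a} -> then {b}" for a, b in zip(chain, chain[1:])]

for statement in theory_of_change(MANGROVE_CHAIN):
    print(statement)
```

A chain of five results yields four if...then links, one per causal step, which is exactly the set of assumptions the team will later test.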
In the conservation community, teams have typically derived their results chains from general conceptual models of the situation at their project site prior to any intervention (Margoluis et al. 2009a). These conceptual models can help teams identify strategies and the chain of threats, opportunities, and conservation targets they could influence. Results chains build off of and elaborate upon these initial chains from the conceptual model to show how an intervention is assumed to change or influence the state of the site (Fig. 3).
HOW ARE RESULTS CHAINS USED IN ADAPTIVE MANAGEMENT?
Planning: results chains clarify implicit assumptions
Results chains lay out the assumptions that project teams hold regarding the effects of the actions they implement. As an example, a project team may assume that if it can implement a good media campaign, then it will reduce harvesting of caviar, and thus conserve populations of sturgeon. In its simplest form, we can depict these assumptions in a rudimentary results chain (Fig. 4).
Clearly, however, much is missing. A reduction in caviar harvesting does not magically happen once the media campaign is initiated. In fact, there is a series of cause-and-effect consequences that must occur for the media campaign to reduce caviar harvesting. Most media campaigns aim to change knowledge about a subject, increase awareness, and ultimately promote a change in attitudes and behaviors in a target population. Figure 5 presents a plausible depiction of these assumed intermediate (and consequential) results.
The opportunity to make these assumptions explicit as a team allows project managers to harmonize unspoken assumptions about how team members think their project is going to unfold during implementation. At the same time, the project team can systematically test whether these assumptions hold as they implement the project. Rather than waiting to see whether caviar harvesting is declining, the only option if the rudimentary chain in Figure 4 were used, the team can more quickly gauge progress by looking at results that are more proximate to the implementation of the media campaign. For example, they can test whether their target audience has increased knowledge about the importance of sturgeon or whether interest in sturgeon conservation is on the rise. This ability to test assumptions quickly and early-on in the life of the project so that a team can reflect and adapt is a basic tenet of adaptive management.
Management: results chains facilitate the development of highly targeted and strategic action plans
All conservation action plans should include at least three primary components: (1) goals; (2) objectives; and (3) strategies (see Fig. 1 for definitions). Results chains provide a structure for defining how these components relate to one another. By explicitly linking strategies to intermediate results and ultimately to changes in conservation targets, a project team can more easily determine which actions are needed to implement these strategies. In practice, some planning and evaluation tools, including the ubiquitous logframe matrix, fail to explicitly link strategies, objectives, and goals (R. Davies,
unpublished manuscript). Consequently, action plans are often a laundry list of goals, objectives, and strategies. In the end, there is nothing strategic or “logical” about these logframe matrices because they fail to directly link action plans to clearly defined theories of change.
Results chains provide the basis for assessing how strategic an action plan is because they clarify key results (and inherent relationships among them) that must be achieved to reach specific goals. Teams then set objectives directly related to those key results. Those objectives, if well written, will clearly state the thresholds for the intermediate results that must be reached to expect any change in subsequent or “downstream” results. This setting of thresholds is yet another basic tenet of adaptive management.
To assess meaningful change over the short and long term, teams typically set objectives for key results. Objectives closer to the strategy permit project managers to quickly gauge if adequate thresholds are being achieved and the intervention is working as expected. For example, in Figure 6, the project team identified four key results, including the final result of healthy sturgeon populations, where it was essential to develop objectives or goals, which are shown in Figure 7.
Monitoring and evaluation: results chains help develop realistic and focused monitoring plans
Developing realistic and focused monitoring plans is made considerably easier by first explicitly defining assumptions and developing a strategic action plan. A perennial challenge for conservation project managers is determining which variables to monitor and, by extension, what data to collect. Without good roadmaps, the tendency in the past has been to either: (1) err on the side of perceived “comprehensiveness” and collect data on a wide variety of variables with the hope that some data provide the information needed to gauge project effectiveness; or (2) collect nothing because it is not clear where to start or what to collect.
Results chains provide the roadmap needed to develop practical monitoring plans. In their most basic form, monitoring plans must include data associated with an action plan’s primary components. In particular, project teams must define data related to their goals and objectives, and they must have some way to gauge the execution of their strategies. In addition, results chains give project managers clear pointers as to what other data should be collected to measure progress toward goals and objectives. In theory, a team could develop indicators for all intermediate results along a given chain, but it is often unrealistic, and not advisable, to collect data on all results. Because results chains lay out the expected changes from an intervention, they narrow the universe of potential data to collect and help keep project teams focused.
Figure 6 shows potential indicators for each result along the chain. By collecting data at various, though not all points along the chain, project managers are better positioned to make timely decisions about their project’s progress and take corrective action, if necessary. For example, by monitoring “increased knowledge of importance of sturgeon,” the first result expected from the intervention, the project team has a very sensitive indication of how the project is progressing. If project managers see expected levels of change in knowledge, then they can be reasonably sure the logic of their results chain is thus far holding, and they should expect to see consistent changes further along the chain. If, however, after implementing the media campaign, the team sees little evidence of changes in knowledge, then either their logic is wrong (theory failure) or the way they executed the project is flawed (program failure). Regardless of the cause, this early monitoring provides an opportunity to reflect, analyze, and adapt.
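Because indicators sit in causal order along the chain, monitoring amounts to walking that order and finding the earliest expected result that has not materialized. A hypothetical sketch of this logic, using the sturgeon example (the observation values are invented for illustration):

```python
# Walking a results chain's monitoring indicators in causal order to
# find the earliest expected result that has not yet materialized.
# Result names follow the sturgeon example; the True/False observations
# are hypothetical.

# dicts preserve insertion order in Python 3.7+, so the keys encode
# the causal order of the chain from strategy to final impact.
observations = {
    "increased knowledge of importance of sturgeon": True,
    "increased interest in sturgeon conservation": True,
    "reduced caviar harvesting": False,
    "healthy sturgeon populations": False,
}

def first_unmet_result(obs):
    """Return the earliest result whose expected change was not observed,
    or None if the chain is holding at every monitored point."""
    for result, observed in obs.items():
        if not observed:
            return result
    return None

broken = first_unmet_result(observations)
if broken is None:
    print("Chain logic holding at all monitored points")
else:
    # A break early in the chain signals either theory failure (the
    # if...then logic is wrong) or program failure (poor execution);
    # either way it is a prompt to reflect, analyze, and adapt.
    print(f"Earliest unmet result: {broken}")
```

In this invented example, knowledge and interest have changed but harvesting has not, so the team would investigate the link between interest and harvesting rather than waiting for population data.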
One misconception of results chains is that they are a poor substitute for rigorous evaluation design. This, however, conflates two distinct issues: articulating the assumptions that lead from intervention to outcomes, and choosing an evaluation approach. In fact, much of the evaluation literature over the past 20 years has stressed the importance of taking a “theory-based approach” when determining impact and the reasons why desired outcomes were achieved or not (Chen and Rossi 1980, Weiss 1997). Many authors describe creating results chains as the essential first step in conducting evaluations (White 2009). Creating results chains permits evaluators to examine essential variables, expected relationships among variables, e.g., association or cause-and-effect, potentially confounding variables, and the context within which a project is implemented (Elvik 2003). Only once these issues are clearly articulated, they propose, can evaluators decide what evaluation design is most appropriate. In the end, the decision of which type of evaluation design to use rests with the project team and may be influenced by factors such as the availability of resources or the desired precision of the evaluation. If a team has the resources and needs a high level of precision, it could use an experimental or quasi-experimental design to test the assumptions laid out in its results chains. In the conservation field, however, teams quite commonly find themselves under severe resource limitations or face very real challenges in implementing highly precise experimental or quasi-experimental evaluation designs. Regardless of which design a team deems appropriate, results chains can help increase the likelihood of correctly inferring anticipated outcomes, and thus inform the evaluation questions and approach (Margoluis et al. 2009b).
Communications: results chains help teams understand and communicate expected results and their timing
Project managers often underestimate the amount of time required to reach project goals. This is especially true when the intermediate results that lead to these goals are not clear. By laying out a results chain in a causal and therefore temporal order, the expected timing of specific results is easier to estimate and communicate to stakeholders.
As shown in Figure 6, it is relatively straightforward to estimate relative and absolute timing from the beginning to the end of the results chain. This helps provide a more realistic representation of when project managers should expect to see results. In this example, it would be unrealistic to expect positive changes in the amount of sturgeon harvested in 2015. Instead, the results chain demonstrates that this intervention will require several years to take effect and lead to a reduction in harvest, with other more immediate results necessarily occurring first. Project managers can use this information to be more realistic and transparent with their stakeholders and donors about the expected results of project interventions and how long it will justifiably take to see these results.
BROADER IMPLICATIONS FOR RESULTS CHAINS
Harmonize vision and purpose among stakeholders
Results chains help project team members harmonize their vision for how a project should be executed and what it will achieve. It is all too easy and common for team members to hold divergent assumptions when these assumptions are not made explicit. Results chains also facilitate communication between project teams and their constituencies, including superiors, the home office, donors, partners, government agencies, and society in general. Often in larger organizations, home office managers fail to understand the mechanisms for achieving results in field offices. In addition, many donor organizations lament that the pathway to success is often obscure in funding proposals.
Help design feasible and appropriate interventions
Results chains help project teams realize what it takes to achieve a desired goal. Without thinking about all the intermediate results that may be required to achieve a given goal, project teams have a tendency to underestimate the true complexity of achieving impact. By making assumptions explicit, clearly articulating required action plan components, and defining appropriate indicators to measure progress, teams are in a better position to gauge the real level of effort required to implement their strategies. By using results chains, practitioners can determine if the path to impact is too complex or implausible, requires expertise that is not readily available, or leads to undesirable consequences. With this knowledge, the conservation community can more broadly make more efficient use of limited resources and take more effective actions.
Provide a transparent roadmap for evaluation/accountability
Because results chains help teams develop highly targeted and strategic action plans and focused monitoring plans, they provide a very solid foundation for evaluating project effectiveness. Regardless of what their evaluation design may be, evaluators can use a team’s results chain as a roadmap to assess whether their expected results materialized or if their project is on course to achieving expected results in the future.
Provide standardized basis for cross-project learning
Effective cross-project learning requires projects to share common strategies, threats, and conservation targets. To compare results, they also need to share common or related theories of change and a common currency for exchange (data and information). By harmonizing these features and letting vary other exogenous variables such as social, economic, and political contexts, project managers can compare across management units and thus determine the conditions under which a given intervention works or not.
In Figure 8, three different sites, or landscapes or ecoregions, in different areas of the world share a common strategy: community capacity building for forest resource management. They also share a common threat, i.e., illegal mangrove extraction, and a common conservation target, mangrove forest. Project teams in all three sites arrived at very similar or identical results chains. If the teams used the same or similar indicators, they would be in a good position to compare and share the analyses of their results and start to identify conditions that do or do not favor this approach to reducing mangrove extraction.
Using results chains to learn across projects and sites can help teams avoid reinventing the wheel. If they can learn from one another’s experiences, they will be in a better position to choose successful strategies and avoid unsuccessful ones.
Lead to common language and concepts
Results chains provide teams with a framework for thinking about their conservation projects. Under this framework, projects are composed of a series of strategies that affect indirect threats, opportunities, and/or direct threats. These strategies, directly or through these other factors, then ultimately affect the status of a conservation target. If teams can agree that these are the main factors that make up a conservation project, then they can begin to compare projects and agree on common language for describing those projects. Presently, some initiatives have taken steps in that direction. For example, the International Union for Conservation of Nature (IUCN) and the Conservation Measures Partnership (CMP) worked together to develop common taxonomies for conservation actions and direct threats (Salafsky et al. 2008). Similarly, CMP has developed the Miradi adaptive management software (https://miradi.org/). This is comprehensive software for conservation project management that includes steps to help teams develop results chains, using the common concept of projects that are composed of strategies designed to affect threats, opportunities, and/or conservation targets.
CHALLENGES TO USING RESULTS CHAINS
Although results chains have many advantages, there are some challenges to using them. The most important challenges include:
Significant “up-front” thinking and analysis are required
Teams must do a lot of work to identify targets, threats, and driving factors and to narrow down the most appropriate strategies for their project situation. If a team jumps directly to developing results chains without doing the upfront planning, it risks developing chains for actions that may not be the most strategic.
Finding the right balance of detail for different audiences is challenging
Some practitioners develop very simple, linear chains, whereas others develop very complex, branched chains. There is no right level of detail, but teams need to determine what is most useful and manageable for their own planning and monitoring needs. Likewise, if they are communicating with external audiences, the appropriate level of detail will likely be different. External audiences usually do not want to see all the details, so teams need to become well-versed in how to use results chains effectively for internal planning, as well as for external communication. In some cases, teams might choose to have an internal, “messy and complicated” chain but also produce a cleaner summary chain for sharing outside the team.
Developing results chains seems deceptively easy
It may seem like results chains are simple and relatively easy to construct. However, while developing results chains, it is often difficult to ensure that there are no gaps in logic and that the chain adequately describes the team’s assumptions. The conversations involved in working through these difficulties help team members get on the same page and prove to be one of the main values of the results chain tool.
It can be difficult to remain focused on results, not actions
A common pitfall is to develop implementation rather than results chains. Implementation chains describe the activities to be carried out, e.g., hold meetings, write reports, raise funds, monitor project, rather than the results expected and needed to attain a given goal. Implementation chains will not help teams determine if their strategy is effective and whether their theory of change holds.
No single results chain describes an entire project
Some teams are uncomfortable with reducing reality to a focused results chain that is isolated from the rest of what is happening with the project. They might criticize the tool as being too simplistic and reductionist. However, the intent of the tool is not to represent all interventions and their effects in a single chain. Instead, there are usually multiple results chains that interact among themselves and, in combination, lead to the desired final results. This interaction can also be depicted using the results chain tool.
Results chains do not necessarily represent the truth
Sometimes, teams consider their results chain to represent the truth rather than describe their assumptions about what they expect to happen. Instead, teams need to be comfortable knowing that their interventions involve assumptions, not truths, and that results chains can help test those assumptions.
CONCLUSIONS
Results chains are a tool used by many other fields to enhance project design, management, monitoring, and evaluation. They have great potential to help the conservation community answer the three fundamental questions outlined in the beginning of this paper: (1) What should our goals be, and how do we measure progress in reaching them? (2) How can we most effectively take action to achieve conservation? (3) How can we do conservation better?
Every day, the challenges to achieving conservation grow. Threats to species, habitats, and ecosystems multiply and intensify. The only way the conservation community can meet this challenge is to learn and improve at a faster pace. However, our success over the years has been thwarted by an absence of the basic building blocks of any effective profession: common language, concepts, and practice. We must accelerate our understanding of what interventions work under what conditions by striving to answer these three questions.
Historically, the conservation community has used recipes for conservation success, as if there were certain interventions that would always work under all conditions. We have found that this approach is lacking. Efficacy of interventions in one place does not guarantee success in another. Rather than using existing recipes, project managers need the knowledge and skills to create their own recipes tailored to the situation in which they work. However, they need to depend on reliable lessons-learned from previous work under similar conditions, and they need to know how to design and manage their project adaptively. To help project managers achieve this, our understanding of conservation action effectiveness must take place at two levels: within projects and across the conservation community. Results chains can help us achieve this.
Results chains provide the conservation community with a strong starting point for creating a common language, concepts, and practice. Thus, they can serve as an indispensable building block in the continued professionalization of our field and in the achievement of lasting, meaningful impact.
LITERATURE CITED
Chen, H.-T., and P. H. Rossi. 1980. The multi-goal, theory-driven approach to evaluation: a model linking basic and applied social science. Social Forces 59(1):106-122.
Conservation Measures Partnership. 2007. Open standards for the practice of conservation. Conservation Measures Partnership.
den Heyer, M. 2001. A bibliography for program logic models/logframe analysis. International Development Research Centre, Ottawa, Ontario, Canada.
Elvik, R. 2003. Assessing the validity of road safety evaluation studies by analysing causal chains. Accident Analysis & Prevention 35(5):741-748. http://dx.doi.org/10.1016/S0001-4575(02)00077-5
Ferraro, P. J., and S. K. Pattanayak. 2006. Money for nothing? A call for empirical evaluation of biodiversity conservation investments. PLoS Biology 4(4):e105. http://dx.doi.org/10.1371/journal.pbio.0040105
Foundations of Success. 2009. Using results chains to improve strategy effectiveness: an FOS how-to guide. Foundations of Success, Bethesda, Maryland, USA. [online] URL: http://www.fosonline.org/resource/using-results-chains
Margoluis, R., and N. Salafsky. 1998. Measures of success. Island Press, Washington, D.C., USA.
Margoluis, R., C. Stem, N. Salafsky, and M. Brown. 2009a. Using conceptual models as a planning and evaluation tool in conservation. Evaluation and Program Planning 32:138-147. http://dx.doi.org/10.1016/j.evalprogplan.2008.09.007
Margoluis, R., C. Stem, N. Salafsky, and M. Brown. 2009b. Design alternatives for evaluating the impact of conservation projects. In M. Birnbaum and P. Mickwitz, editors. Environmental program and policy evaluation: addressing methodological challenges. New Directions for Evaluation 122:85-96.
Pullin, A. S., and T. M. Knight. 2003. Support for decision making in conservation practice: an evidence-based approach. Journal for Nature Conservation 11:83-90. http://dx.doi.org/10.1078/1617-1381-00040
Pullin, A. S., T. M. Knight, D. A. Stone, and K. Charman. 2004. Do conservation managers use scientific evidence to support their decision-making? Biological Conservation 119:245-252. http://dx.doi.org/10.1016/j.biocon.2003.11.007
Salafsky, N., R. Margoluis, K. H. Redford, and J. G. Robinson. 2002. Improving the practice of conservation: a conceptual framework and research agenda for conservation science. Conservation Biology 16(6):1469-1479. http://dx.doi.org/10.1046/j.1523-1739.2002.01232.x
Salafsky, N., D. Salzer, A. J. Stattersfield, C. Hilton-Taylor, R. Neugarten, S. H. M. Butchart, B. Collen, N. Cox, L. L. Master, S. O’Connor, and D. Wilkie. 2008. A standard lexicon for biodiversity conservation: unified classifications of threats and actions. Conservation Biology 22:897-911. http://dx.doi.org/10.1111/j.1523-1739.2008.00937.x
Salzer, D., and N. Salafsky. 2006. Allocating resources between taking action, assessing status, and measuring effectiveness of conservation actions. Natural Areas Journal 26:310-316. http://dx.doi.org/10.3375/0885-8608(2006)26[310:ARBTAA]2.0.CO;2
Stem, C., R. Margoluis, N. Salafsky, and M. Brown. 2005. Monitoring and evaluation in conservation: a review of trends and approaches. Conservation Biology 19(2):295-309. http://dx.doi.org/10.1111/j.1523-1739.2005.00594.x
Sutherland, W. J., A. S. Pullin, P. M. Dolman, and T. M. Knight. 2004. The need for evidence-based conservation. Trends in Ecology & Evolution 19(6):305-308. http://dx.doi.org/10.1016/j.tree.2004.03.018
W. K. Kellogg Foundation. 2001. Logic model development guide: using logic models to bring together planning, evaluation, and action. W. K. Kellogg Foundation, Battle Creek, Michigan, USA.
Weiss, C. H. 1995. Nothing as practical as good theory: exploring theory-based evaluation for comprehensive community initiatives for children and families. Pages 65-92 in J. P. Connell, J. L. Aber, and G. Walker, editors. New approaches to evaluating community initiatives: concepts, methods, and contexts. Aspen Institute, Washington, D.C., USA.
Weiss, C. H. 1997. Theory-based evaluation: past, present, and future. In D. J. Rog and D. Fournier, editors. Progress and future directions in evaluation: perspectives on theory, practice, and methods. New Directions for Evaluation 76:41-55. http://dx.doi.org/10.1002/ev.1086
White, H. 2009. Theory-based impact evaluation: principles and practice. Journal of Development Effectiveness 1(3):271-284. http://dx.doi.org/10.1080/19439340903114628
Address of Correspondent:
Vinaya Swaminathan
1473 Park Rd NW #4
Washington, DC
20010 USA
vinaya@fosonline.org