Environmental Impact Assessment Review 38 (2013) 16–25
Determination of significance in Ecological Impact Assessment: Past change, current
practice and future improvements
Sam Briggs, Malcolm D. Hudson ⁎
Centre for Environmental Sciences, Faculty of Engineering and the Environment, University of Southampton, Highfield, Southampton, Hampshire, SO17 1BJ, United Kingdom
⁎ Corresponding author. Tel.: +44 2380 594797; fax: +44 2380 677519. E-mail address: mdh@soton.ac.uk (M.D. Hudson).
Article info
Article history:
Received 29 November 2011
Received in revised form 26 April 2012
Accepted 27 April 2012
Available online 7 June 2012
Keywords:
Ecological Impact Assessment (EcIA)
Environmental Impact Assessment (EIA)
Institute of Ecology and Environmental
Management (IEEM)
Environmental Statement
Monitoring
Significance
Abstract
Ecological Impact Assessment (EcIA) is an important tool for conservation and achieving sustainable development. ‘Significant’ impacts are those which disturb or alter the environment to a measurable degree. Significance is a crucial part of EcIA, and our understanding of the concept in practice is vital if it is to be effective as a
tool. This study employed three methods to assess how the determination of significance has changed
through time, what current practice is, and what would lead to future improvements. Three data streams
were collected: interviews with expert stakeholders, a review of 30 Environmental Statements and a
broad-scale survey of the United Kingdom Institute of Ecology and Environmental Management (IEEM)
members.
The approach taken in the determination of significance has become more standardised and subjectivity has
become constrained through a transparent framework. This has largely been driven by a set of guidelines
produced by IEEM in 2006. The significance of impacts is now more clearly justified and the accuracy with
which it is determined has improved. However, there are limitations to the accuracy and effectiveness of the determination of significance: the quality of baseline survey data, our scientific understanding of ecological processes, and the lack of monitoring and feedback of results. These in turn are restricted by the
limited resources available in consultancies. The most notable recommendations for future practice are the
implementation of monitoring and the publication of feedback, the creation of a central database for baseline
survey data and the streamlining of guidance.
© 2012 Elsevier Inc. All rights reserved.
1. Introduction
1.1. Environmental Impact Assessment and Ecological Impact Assessment
There are two purposes of EIA. The first is to analyse the potentially significant environmental impacts resulting from major developments and to communicate these to decision-makers and the public (Wood, 2008). This should result in either the abandonment of environmentally unacceptable actions, the mitigation of impacts to the point of acceptability where possible or desirable, or compensation for unavoidable impacts (Sadler, 1996; C. Wood, 1995). The second purpose is to achieve or support the ultimate goals of
sustainable development (Gilpin, 1995; Sadler, 1996).
Similarly, Ecological Impact Assessment (EcIA) is the “process of
identifying, quantifying and evaluating the potential impacts of defined actions on ecosystems or their components” (Treweek, 1999
pp1). The main use of EcIA is within the broader remit of Environmental Impact Assessment (EIA). It is here that most experience in
EcIA has been gained (Treweek, 1999). Therefore the focus of this
research is on Ecological Impact Assessment as a component of EIA.
The purpose of EcIA in this context is to ensure that the potential significant ecological impacts of a development are fully considered,
mitigated and communicated to decision-makers on a proposal. Additionally, it links the conservation of biodiversity with the goal of sustainable development. Only recently have humans begun to realise
the scale of value that biodiversity offers and our dependence upon
it (Rands et al., 2010; TEEB, 2010). It is important, even vital, that biodiversity is conserved for the benefit of current and future generations (Rands et al., 2010). Development has been a major cause of
biodiversity loss; it has rapidly driven habitat loss and fragmentation
(Dolman, 2000; MEA, 2005). The Economics of Ecosystems and Biodiversity study (TEEB, 2010) provides a number of case studies illustrating how ecosystems have been undervalued. In these examples
the cost resulting from the destruction of ecosystems for development overshadows the benefits of the development.
Principle Four of the Rio Declaration on Environment and Development (UN, 1992) states that “in order to achieve sustainable development, environmental protection shall constitute an integral part of
the development process and cannot be considered in isolation
from it.” This firmly establishes the link between the environment
and development. It requires that the potential environmental impacts of developments must be investigated in order to manage
them. To achieve sustainable development existing environmentally
harmful developments must be managed as best they can, but new
developments must be designed to have the smallest practicable negative impact or even a positive one (Glasson et al., 2005). The purpose
of EIA is to prevent significant negative impacts from occurring; such
prevention is better than remedying impacts at a later date. However,
EIA is not always viewed as being able to help in achieving sustainable development without large improvements to practice (Benson,
2003; George, 1999). Since the outcome of an EIA hinges on the determination of significance of environmental impacts, significance itself becomes a critical issue. EIA's use as an effective tool in
contributing to sustainable development depends greatly on what is
regarded or determined as significant (George, 1999).
1.2. The concept of significance
The concept of ‘significance’ is central to the EIA process and is
considered throughout the process (Duinker and Beanlands, 1986;
Lawrence, 2007; Sadler, 1996). Schedule 4 of the United Kingdom
Town and Country Planning (Environmental Impact Assessment)
Regulations 1999 and Schedule 3 of the Town and Country Planning
(Assessment of Environmental Effects) Regulations 1988 make it a requirement for Environmental Statements (ESs) to include a description of the likely ‘significant’ effects of the proposed development
on the environment, including the flora and fauna. However, the legislation does not provide a definition of ‘significance’ and its determination can vary a great deal in practice (Gilpin, 1995; Lawrence,
2007). There is a paucity of research into this topic and the term itself
is poorly understood (Lawrence, 2007; Wood, 2008). The focus of this
paper is on the problematic issue of determination of significance for
predicted impacts. Generally, an impact can be defined as a change to the environment to a measurable degree; levels of significance can then be assigned to an impact in order to illustrate its
importance (Fortlage, 1990).
Subjectivity can be an area of contention in EcIA; it is looked upon
both favourably (Wilkins, 2003) and unfavourably (Treweek, 1996),
but is inherent within the determination of significance. Since there
is no widespread agreement on a definition of significance it becomes
a collective judgement of the stakeholders in each case—this usually
makes subjectivity inescapable (Fortlage, 1990; Gilpin, 1995). Additionally, subjectivity arises from the value placed on the receptor
(species or habitat) of an impact; it is dependent on the value society
places on it (Wood, 2008). There is concern that developers and consultants can use subjectivity to scale impacts down in order to increase the likelihood of achieving planning permission (Treweek,
1996).
Early reviews of ESs found that guidance was needed for practice
to improve (Thompson et al., 1997; Treweek, 1996). In the UK there
is now a variety of guidance for the different aspects of EIA (DETR,
2000; DoE, 1995; DoT, 1993; IEMA, 2004) and EcIA specifically
(Byron et al., 2000; IEEM, 2006), and these have had some positive
impacts; for example the third of a series of reviews of ESs for road
schemes found that Volume 11 of the Design Manual for Roads and
Bridges (DMRB) (DoT, 1993) had a positive impact on practice
(Byron et al., 2000).
In this paper we focus on the UK Institute of Ecology and Environmental Management's (IEEM) Guidelines for Ecological Impact Assessment in the United Kingdom, published in 2006. These will be referred
to as the “IEEM guidelines” hereafter. The level of uptake of these guidelines, and of others from associated sectors, and their effect on practice will be investigated. The guidance provides a framework within which to assess significance and the factors that should be considered. To do so the guidelines propose placing a value on the ecological receptor at a geographic frame of reference, such as County value. This is determined using a number of factors: designations, biodiversity value, habitat
value, species value, potential value, secondary or supporting value,
social value and economic value. The impact on the receptor is then
predicted taking into account the magnitude, extent, duration, reversibility, integrity, timing and frequency of the impact. The impact and
value are then combined to establish significance at a geographic level
alongside the probability of the predicted impact.
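As a purely illustrative aid, the sketch below (in Python) shows how a matrix-style combination of receptor value and impact magnitude could be coded. The category labels and matrix cells are assumptions made for the example; the IEEM guidelines themselves leave this combination to reasoned professional judgement within the framework rather than prescribing fixed values.

# Minimal sketch of a matrix-style significance lookup of the kind described
# above. The category labels and matrix cells are illustrative assumptions,
# not values taken from the IEEM guidelines.

RECEPTOR_VALUES = ["Local", "County", "Regional", "National", "International"]
IMPACT_MAGNITUDES = ["Negligible", "Minor", "Moderate", "Major"]

# Rows: receptor value; columns: impact magnitude.
SIGNIFICANCE_MATRIX = {
    "Local":         ["Not significant", "Not significant", "Local", "Local"],
    "County":        ["Not significant", "Local", "County", "County"],
    "Regional":      ["Not significant", "Local", "County", "Regional"],
    "National":      ["Local", "County", "Regional", "National"],
    "International": ["County", "Regional", "National", "International"],
}

def significance(receptor_value: str, impact_magnitude: str) -> str:
    """Return the geographic level at which an impact would be significant."""
    column = IMPACT_MAGNITUDES.index(impact_magnitude)
    return SIGNIFICANCE_MATRIX[receptor_value][column]

if __name__ == "__main__":
    # e.g. a Moderate impact on a County-value receptor
    print(significance("County", "Moderate"))  # -> "County"

In practice such a lookup would support, rather than replace, the transparent justification that the guidelines require.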
A great number of techniques are used to determine the significance
presented in ESs, ranging from wholly qualitative descriptions to quantitative statistical analysis (Bevan, 2009; Thompson, 1990). A mixture of
approaches is often used, striking a balance between quantification and professional judgement (Sadler, 1996). The main examples of such techniques from the literature (Bevan, 2009; Gilpin, 1995; Glasson et al., 2005; Thompson, 1990; Treweek, 1999; Westman, 1985) are the use of matrices, cost-benefit analysis, monetary evaluation, multi-criteria analysis, standardised generic criteria (either specific to each impact or the same for all impacts) and ad hoc methods, such as
characterising significance with qualitative text or in tables.
The variety of techniques and the inconsistency of their use by consultants make the results from ESs difficult to compare (Treweek,
1999). All of the techniques offer different benefits, but also come
with inherent limitations; the technique used should be appropriate
to the context of the site (Thompson, 1990). When assessing the significance of an impact a variety of factors may be considered; again these
vary from project to project and between recommendations. The lack
of standardisation in the factors considered makes comparison between
projects difficult, especially if there is a lack of transparency.
Historically reviews of ESs have concluded that their overall quality has been poor (Byron et al., 2000; DoE, 1991; Thompson et al.,
1997; Treweek et al., 1993; P. Wood, 1995) but there have been noticeable improvements with time (Byron et al., 2000; DoE, 1996).
The ‘quality in the impact prediction and determination of significance’ component was found to be the “weakest in the majority of
ESs reviewed” (Treweek et al., 1993 pp301). The justification of the
levels of significance assigned and the methods of determining significance were found to be a major problem (Byron et al., 2000). This
paper examines how the quality of the justification of the assigned
levels of significance, and the methods used, have changed through
time.
The overall aim of this paper is to further the understanding of the
determination of significance in practice, in terms of its history, its
present state and how it might be improved in the future, to better
fulfil the purposes of EcIA.
2. Methods
Three data streams were collected: interviews with expert stakeholders, ES reviews and a broad-scale survey of IEEM members.
These three methods were conducted to complement one another:
the review of ESs provides a quantitative sample of the main document of the EcIA process; the interviews qualitatively assess the evolution of practice through time, the techniques and limitations of current practice, and ideas for future practice; and the survey provides a quantitative sample of these elements of practice and of how members currently determine significance in practice.
2.1. Semi-structured interviews with expert stakeholders
Until now projects looking at practice in EcIA have largely focussed
on simply reviewing ESs (Bevan, 2009; Byron et al., 2000; Treweek et
al., 1993; Wood, 2008) though some have also included questionnaires
(Matrunola, 2007). Generally the focus is narrow, based mostly on practice by consultants, and studies often overlook the views of other bodies
involved in the process. Interviewees were therefore chosen to reflect
the different organisations involved. They were from six consultancies,
three local planning authorities in England, two statutory consultees
and two non-governmental organisations (NGOs). Confidentiality has
been provided for the interviewees—identities and affiliations are not
revealed other than broad groupings so interviewees are referred to
anonymously as Consultants 1, 2 etc. Interviews were conducted with
the aim of exploring the key topics, using open and non-leading questions (see for example Hammond and Hudson, 2007). The interviewees
were asked for their views and experience in reference to past change,
present practice and future improvements of significance in EcIA.
2.2. Review of Environmental Statements
The purpose of reviewing ESs was to analyse how the practice of
determining significance has changed through time. To achieve this
thirty ESs were reviewed spanning three time categories, selected
around particular events. The first of these events was the almost simultaneous release of the 1999 Regulations, the Department for the
Environment, Transport and the Regions' (DETR) (1999) Circular 02/
99: Environmental Impact Assessment, which provides guidance on
the 1999 Regulations for local planning authorities, and the DETR
(2000) Environmental Impact Assessment: A Guide to Procedures. The
second event was the publication in 2006 of the IEEM guidelines in
the UK, which have been endorsed by a number of key bodies in
this field. The discrete time periods were therefore set at pre-1999,
2001–2005 and 2007–2011 in order to assess changes in the determination of significance through time with the inclusion of these events.
The ESs were selected on the basis of obtaining a sample which
was broadly comparable according to development type across the
discrete time periods, as opposed to selecting samples which represented the typical development types in those age ranges (Table 1
and Table S1).
Both the Lee & Colley Environmental Statement Review Package
(Lee et al., 1999) and the Impact Assessment Unit (IAU) Environmental Impact Statement Review Packages (In: Glasson et al., 2005) were
considered for the review. However, both of these review packages
are focussed on assessing the quality of ESs as a whole and so there
is little attention given to the determination of significance. Wood
(2008) avoided using predefined review criteria; rather he took a
more exploratory approach with the intention of capturing “an illustrative ‘snapshot’ of practice” (Wood, 2008, p25). This review had a
similar intention: to obtain a comparable illustrative snapshot of the
determination of significance for each time period. Bevan (2009) created a review framework which used both a description and checklist
type approach. A quantitative checklist of what is considered within the evaluation of impact significance, based on the structure used in Bevan (2009), was therefore created for this project; this allowed for easy comparison between the time periods. The framework was adapted to be more quantitative while initially avoiding a quality-based approach.
Following a pilot exercise a quality section was added. This was
adapted from four criteria relating to significance in the IAU review
package. This approach had originally been avoided because it is
very subjective; this in itself is testament to what this research is
assessing with regard to the difficulty of avoiding subjectivity in the
determination of significance (Lawrence, 2007; Wilkins, 2003). In accordance with the IAU review package a sliding scale was used (which the IAU adapted from the Lee & Colley review package). See the Supplementary Information for further details of the review framework.

Table 1
Development types of the ESs reviewed for each time period (10 per time period).

Development type | Pre-1999 | 2001–2005 | 2007–2011
Waste            | 2        | 2         | 1
Oil              | 1        | 1         | 2
Energy           | 2        | 2         | 1
Quarry           | 2        | 2         | 2
Wastewater       | 1        | 1         | 1
Infrastructure   | 1        | 1         | 1
Residential      | 1        | 1         | –
Lake             | –        | –         | 1
Airport          | –        | –         | 1
Total            | 10       | 10        | 10
The review framework consisted of seven sections (a structural sketch follows the list); these were:
1. General information on the project and ES
2. The techniques used to determine significance and the guidance
used
3. Factors (or dimensions) of significance used to determine significance
4. Tools used to communicate significance in the ES
5. Justification of significance—graded using a quality approach
6. Conclusions
7. Other observations arising from the review
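A minimal sketch of how a single record in such a checklist-style review might be structured is given below; the field names are hypothetical simplifications of the seven sections listed above, not the study's actual review form.

from dataclasses import dataclass, field

# Minimal sketch of one record in a checklist-style ES review framework of
# the kind described above; field names are hypothetical, not those used in
# the study's actual framework.

@dataclass
class ESReview:
    project_name: str
    time_period: str                                              # "pre-1999", "2001-2005" or "2007-2011"
    techniques_used: list[str] = field(default_factory=list)      # section 2
    factors_considered: list[str] = field(default_factory=list)   # section 3
    communication_tools: list[str] = field(default_factory=list)  # section 4
    justification_grade: str = "F"                                 # section 5, sliding scale A (best) to F
    notes: str = ""                                                # sections 6-7

review = ESReview(
    project_name="Example quarry extension",
    time_period="pre-1999",
    techniques_used=["ad hoc qualitative description"],
    factors_considered=["designations", "magnitude"],
    communication_tools=["tables"],
    justification_grade="E",
)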
2.3. IEEM practitioner survey
An online survey was used for this exercise and was sent out by
email to almost 3000 practitioners listed by IEEM in the UK and Ireland.
The survey used the same three themes as the stakeholder interviews: past change, current practice and potential future practice in the determination of significance. The IEEM membership includes Ireland, which extends the scope of this part of our study, but this is expected to have had little influence on the results because both countries work to the same overarching legislation and guidelines (Geraghty, 1996; Lee et al., 1999). Not all practitioners are members of IEEM, so this sample
should not be seen as a study of all practitioners in the UK (or Ireland).
3. Results
3.1. Semi-structured interviews with expert stakeholders
The views of the interviewees on the main themes are drawn together below, for more detailed information see the Supplementary
Information (Table S2). The lengths of time stakeholders have been
directly involved with EIA ranged from 1.5 years to 23 years. Involvement varies from writing to overseeing and reviewing ESs or individual
ecology chapters. Most of the interviewees have worked for 2–3
organisations.
Overall, the interviewees are of the opinion that practice has changed through time; only two do not agree. A major change is that the role of professional judgement has decreased and a more structured, prescriptive approach, which is becoming more standardised, is now used. Consultant
1, who has been conducting EIAs for over 20 years, believes this:
“When we started it was seen as a new thing and we had a free hand,
so to speak, and nobody challenged us too much as to what we delivered. Now it's very different, a lot more authorities and statutory consultees in particular and local authorities have a lot more things they
expect an EIA to deliver.”
Guidance came out as the greatest driver of change observed.
There is a particular focus on the IEEM guidelines (2006) driving a
structured approach/framework in recent years. Case law and appeal
decisions as drivers of change also appear a number of times. It is felt
that the degree to which the level of significance assigned to an impact reflects the impact has improved through time, but there are
still improvements to be made.
In terms of current practice, the quality and quantity of scientific
evidence (monitoring, feedback and baseline survey data) is the
greatest limiting factor when determining significance. This is the result of a lack of money and expert staff time assigned to the project,
but this varies according to the consultants; as described by Local
Planning Authority 1:
“One of the big problems we have… is the lack of decent survey data,
because it costs money and developers don't want to spend money
unnecessarily. Time and again we are having to go back to them
and say “look, you've made this assessment on incomplete survey
data”.”
A lack of effective cross-referencing between chapters and a lack
of expertise of those writing the ES were also identified as limitations.
A number of interviewees identified that professional judgement
sometimes resulted in the level of significance assigned being swayed
in favour of the developer. There is generally agreement that subjectivity cannot be avoided in the process. Consequently, the view is that
subjectivity should be well informed by science and set in a framework that allows both flexibility and transparency. The use of statistical tests in the determination of significance is proposed as a method
to reduce subjectivity; three of the interviewees do not think such
tests are workable in practice; one sees it as still having potential
for bias but worthwhile if used appropriately, and one sees it as
crucial.
Post-monitoring and feedback featured the most highly as an approach to improve the determination of significance in the future.
By monitoring impacts and mitigation an evidence base can be
established of whether the level of significance assigned was appropriate and whether mitigation was effective. This would help us to
understand how accurate the impact assessment was (Consultant 3):
“Monitoring completes the picture of EIA and there is very little feedback from the actual impact into understanding ‘did we get the impact assessment right?’ Has the threshold actually been passed?”
Generally it was not felt that more legislation or guidance was
needed but that it needed to be streamlined and implemented more
effectively. An ecosystem services approach, which would provide a
more holistic technique for assigning values of significance, was also
raised by several interviewees. Peer reviews by IEEM or an independent review commission were expected to improve practice if
implemented.
3.2. Review of Environmental Statements
A total of 30 ESs were reviewed; (Table 1 and Table S1). Where ESs
implied significance but did not actually mention the word, they were
taken as discussing significance. The review found that 30% of the pre-1999 ESs and 20% of the 2001–2005 ESs did not discuss the significance of ecological impacts. The most commonly used technique was a standardised approach applied in the same way for all of the impacts; that is, a single framework or method used to determine significance for every impact. For the 2007–2011 period 100% of
the ESs used this technique. The methods used to communicate or illustrate significance in ESs were reviewed; Table 2 presents the findings
for each time period.
A number of guidelines have been released by professional bodies,
statutory bodies and individuals across the years. The most frequently
used guidance in the pre-1999 time period was the DoE/Welsh Office
(1989) Environmental Assessment: A Guide to the Procedures which
was used in 40% of ESs. During the 2001–2005 time period the IEEM
EcIA pilot guidelines were the most used, occurring in 50% of ESs, closely followed by the IEMA (1997) Guidelines for Baseline Ecological Assessment, occurring in 40% of ESs. In the 2007–2011 time period the IEEM
guidelines were used in 90% of ESs; the only one which did not use
the IEEM guidelines was an offshore oil development.
The factors considered when determining significance were
reviewed in ESs. They were sorted in order of the greatest change in
the percentage of ESs referring to each factor, and compared for the
pre-1999 and 2007–2011 time periods. It was found that none of the
factors were considered less frequently in ESs from the 2007–2011 period than the pre-1999 period. Factors that increased by 50% or more
between the two periods were selected for further analysis since they
were the most significant changes; the six factors that met this criterion are shown in Fig. 1.

Table 2
The percentage of ESs employing certain methods for communicating significance (n = 10 for each time period).

Communication of significance | Pre-1999 | 2001–2005 | 2007–2011
Matrices                      | 0        | 40        | 60
Tables                        | 40       | 60        | 80
Cross-referencing             | 20       | 20        | 40
Technical appendices          | 70       | 50        | 80
Maps                          | 80       | 30        | 40
In the 2007–2011 time period all of the ESs considered nine of the
factors reviewed. In contrast, no single factor was considered in all of
the ESs for either of the earlier time periods. The number of factors
considered in more than 50% of ESs for the pre-1999, 2001–2005
and 2007–2011 time periods was 6, 8 and 15 respectively. This suggests that the consistency of factors considered in the determination
of significance has increased with time.
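The selection rule described above can be expressed as a short filter; in the sketch below the factor names and percentages are placeholders rather than the study's data, and the "50% or more" threshold is read as a change of at least 50 percentage points.

# Minimal sketch of the selection rule described above: keep factors whose
# frequency of consideration rose by at least 50 percentage points between
# the pre-1999 and 2007-2011 ES samples. The factor names and percentages
# here are placeholders, not the study's data.

pre_1999 = {"reversibility": 10, "duration": 30, "magnitude": 60}    # % of ESs considering each factor
post_2007 = {"reversibility": 80, "duration": 90, "magnitude": 100}  # % of ESs considering each factor

selected = {
    factor: post_2007[factor] - pre_1999[factor]
    for factor in pre_1999
    if post_2007[factor] - pre_1999[factor] >= 50
}

print(selected)  # {'reversibility': 70, 'duration': 60}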
The IAU review package, which was adapted for this study, rates
quality on a sliding scale from A to F (A being the highest quality, F
being the lowest). Criteria for the quality of the justification of significance were graded using this scale. The median grades for the pre-1999, 2001–2005 and 2007–2011 time periods were E, D
and B respectively (grades A, B and C are classed as satisfactory
while grades D, E and F are all unsatisfactory, to varying degrees).
This shows progressive improvement in the quality of the justification for the assigned levels of significance. The median grade crossed
the ‘satisfactory’ threshold to B for 2007–2011 which is a marked improvement. The distribution of the grades in each time period is
shown in Fig. 2.
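The grading summary lends itself to a small worked example: map the A–F scale onto ordinal scores and take the median per time period. The grade lists below are placeholders chosen only to reproduce the reported medians (E, D and B); they are not the study's raw grades.

import statistics

# Minimal sketch of the grading summary described above: the A-F quality
# grades are mapped to ordinal scores and the median is taken per time period.
# The grade lists are placeholders, not the study's raw data.

GRADE_TO_SCORE = {"A": 6, "B": 5, "C": 4, "D": 3, "E": 2, "F": 1}
SCORE_TO_GRADE = {score: grade for grade, score in GRADE_TO_SCORE.items()}

def median_grade(grades: list[str]) -> str:
    scores = sorted(GRADE_TO_SCORE[g] for g in grades)
    return SCORE_TO_GRADE[round(statistics.median(scores))]

periods = {
    "pre-1999":  ["F", "E", "E", "D", "F", "E", "D", "E", "F", "E"],  # placeholder grades
    "2001-2005": ["E", "D", "D", "C", "D", "E", "D", "C", "D", "E"],  # placeholder grades
    "2007-2011": ["C", "B", "B", "A", "B", "C", "B", "B", "A", "C"],  # placeholder grades
}

for period, grades in periods.items():
    print(period, median_grade(grades))  # prints E, D and B respectively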
3.3. IEEM practitioner survey
In total, there were 90 responses to the survey sent out to practitioners listed by IEEM. Ninety respondents started the survey and 54
completed it in full, so the completion rate was 60%. General information on the respondents was collected to put the findings of the survey
in context; this is particularly useful to analyse changes in the determination of significance in the past. The length of time respondents have
been practicing EcIA gives a relatively even spread between the categories: 2–5 years (30.2%), 6–10 years (26.4%), 11–20 years (22.6%) and 20+ years (18.9%). However, practitioners with less than 2 years of experience were underrepresented (2%). The majority of practitioners
(60%) have worked on more than 20 EcIA projects, so the level of experience of the respondents may be regarded as high. During the time in
which respondents have been practicing EcIA, 89% had worked in
1–3 organisations.
A question logic (a function to filter respondents according to their
answers) was used in the survey in order to filter respondents by
their views on the change in significance determination in the history
of EcIA; 92% of the respondents believed that determination of significance had changed. These were then filtered to give their views on
how it had changed and what had caused the change; whilst the
other respondents skipped the section. To identify how the determination of significance has changed through time the filtered respondents were given a list of statements and asked to indicate to what
extent they agreed with each (see Fig. 3). There is broad consensus among practitioners on the changes to practice; in response to each
question the lowest consensus was 64% (including disagree and
strongly disagree) whilst the highest was 73%.
Following this the respondents were asked what factors had driven the change they identified. These factors were rated from greatest
driver to smallest driver; Fig. 4 shows the average rating for each of
the factors. The publication of guidelines was rated as the greatest
driver by 40% of respondents and also scored as the greatest on average.

Fig. 1. The six factors that changed the most from pre-1999 to 2007–2011. n = 30.
The respondents were asked to give the frequency with which
they use certain techniques and methods in determining significance.
Standardised significance levels, either the same for all impacts or specific to each impact, are indicated to be the most frequently used techniques and received similar scores, with 57% and 59% of respondents respectively using them for all projects or regularly. There is little use of economic evaluation among practitioners, with 79% either rarely or never using it. Respondents were also asked how frequently they use certain guidelines. Only 6% indicated that they occasionally carry out assessments without using any guidelines; the remaining 94% indicated that they always use guidelines. The
most widely used guidelines were the IEEM guidelines: 92% of respondents either used them on every project or regularly used
them, but practitioners use a variety of guidelines, each to varying degrees; see Supplementary Information (Fig. S1).
The respondents were asked what factors they found to be most
limiting when determining significance. These factors were rated
from most limiting to least limiting; Fig. 5 shows the average rating
for each of the factors. The quality of baseline data was rated as
being the most limiting by 29% of respondents and also scored as
the most limiting on average.
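As an illustration of the kind of summary behind these average-rating results, the sketch below computes a mean rank per factor from individual rankings. The respondents, factors and ranks are invented for the example and do not reproduce the survey data.

from statistics import mean

# Minimal sketch of an 'average rating' summary: each respondent ranks the
# candidate factors (1 = most limiting) and the mean rank is taken per factor.
# The responses below are placeholders, not the survey data.

responses = [
    {"baseline data quality": 1, "scientific understanding": 2, "monitoring and feedback": 3},
    {"baseline data quality": 2, "scientific understanding": 1, "monitoring and feedback": 3},
    {"baseline data quality": 1, "scientific understanding": 3, "monitoring and feedback": 2},
]

factors = responses[0].keys()
average_rank = {factor: mean(r[factor] for r in responses) for factor in factors}

# Lower mean rank = rated more limiting on average.
for factor, rank in sorted(average_rank.items(), key=lambda item: item[1]):
    print(f"{factor}: {rank:.2f}")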
In order to establish how the determination of significance might
be improved in the future the respondents were asked to rate the approaches according to how beneficial they would be; the results are
shown in Fig. 6. The implementation of monitoring and feedback
was rated as the most beneficial approach by 26% of respondents
and also scored the most beneficial on average.
4. Discussion
4.1. Characterising change through time
All three data streams indicate that practice in determination of
significance has evolved and improved. This agrees with the findings
of other studies (DoE, 1996; Matrunola, 2007). It has been found that
the number and consistency of factors considered in the determination of significance has increased through time. The majority of
these factors are included in the IEEM guidelines which are used for
90% of the ESs from 2007 to 2011. No other guidelines received
such widespread uptake or the influence to bring about this standardisation in practice. The standardisation of practice through time
is a key theme that arose in the interviews and the survey; the review
shows a clear standardisation in the methods and factors considered
when determining significance. The expert view tends to be that prescriptive methods in the IEEM guidelines have largely been taken on
board by developers and driven the standardisation observed here.
However, it was noted by several interviewees that the determination
of significance was in danger of becoming too prescriptive and simply
becoming a “handle-cranking” exercise for consultants. The concern
is that the guidelines would be used to tick a box with the local planning authorities, removing opportunities for innovation within the
process.
In the early years of EcIA the determination of significance was
very subjective and heavily dependent upon the opinion of the authors of ESs (Byron et al., 2000; Thompson et al., 1997). This can be
clearly observed in some of our sample: for example the baseline
data for the proposed extension of Eversley Quarry (1997) were collected over just three visits in two months. This constrained time
frame does not allow for surveying temporal variations in species
present. Interviewees believe that, in general, the quality of survey
data was lacking, meaning that subjectivity could be widely used to
bias the argument in favour of the developer. In contrast, the
Timsbury Lake (2009) ES provides a large amount of detail on the
baseline conditions from surveys conducted over nine months. Additionally, the proposal includes a conservation management plan for
the development.

Fig. 2. The grades which ESs scored in each time period (A—highest possible grade, F—lowest possible grade). n = 30.

Fig. 3. The extent to which the respondents agree with statements on how significance has changed through time. n = 48.
Generally, subjectivity is considered by the interviewees to be inherent within the determination of significance; however, one individual maintains that the proper use of statistics could ‘iron out’
subjectivity (Consultant 5). Overall, the respondents to the IEEM practitioner survey disagree that subjectivity has increased through time.
The increase in the quality of the justification of significance identified in the ES review suggests that professional judgement has become more transparent and better qualified. This agrees with the
views of the interviewees who comment that professional judgement
has largely been set within a framework, which makes ESs more comparable and transparent. Excessive subjectivity has been constrained
through implementation of the IEEM framework. These changes create more level ground on which to base discussions between developers, local planning authorities, statutory consultees and NGOs.
This is mutually beneficial: discussions mean that reviewers can
have more of an influence on the design and mitigation of the project
and the developers are more likely to achieve permission as a result of
negotiations (NGO 1).
The next characteristic of change in the assessment of significance
is the extent to which the significance assigned to an impact reflects
the realised impact. The view from the interviewees is that where significance could be argued in either direction, consultants tended to
argue it in favour of the developer, playing down the significance of
the impact. Those who are involved in assessing ESs are aware of
this and feel that the purpose of their varying roles is to “see through”
this according to their particular expertise. The results from the survey show that the significance assigned to an impact usually better
reflects the actual impact now than it used to (Fig. 3). This could
also be attributed to an increase in the quality and quantity of the
data collected and in scientific knowledge. However, these
are rated as smaller drivers of change by the respondents. Overall,
the significance assigned to an impact now better reflects the realised
impact than it has in the past.
A frequent shortcoming with early ESs was the lack of justification
for the level of significance assigned to impacts and the clarity with
which the significance of each impact is indicated (Byron et al.,
2000; Thompson et al., 1997). This is found to be the case for the
early ESs reviewed in this study: some of the projects, such as the
Nea Farm (1995) ES, even failed to discuss it or use the term significance. However, we identified an improvement in the justification
for the levels of significance assigned and the clarity with which the
significance of each impact is indicated. The latter scored more highly
in the most recent ESs, largely due to better presentation. There is
greater use of tables for presenting significance in later ESs than in
earlier time periods. The Wakefield Waste, South Kirby ES (2008) is
an example of good practice: a summary table was used which covered the initial significance, justification for the level of significance,
proposed mitigation measures and the residual significance. There is
strong consensus between the respondents of the survey that the
justification for the level of significance assigned to impacts has
improved through time. Guidelines have resulted in an improvement
to the justification of significance levels. Consultant 2 clearly illustrates how this change was driven:
“In the early years the ecologists found it difficult to identify how significance should be portrayed in what is a document for planning…
but I think much more recently the professional institutes have
started coming up with sets of criteria that are better understood
and better suited towards the EIA process because they are more
transparent and require less detailed ecological understanding to
work out how you've got from the start to the finish.”
Fig. 4. The average ratings of respondents for the drivers of change in the determination of significance. n = 48.

Fig. 5. The average ratings of respondents for the factors that they consider limitations of the determination of significance. n = 51.

‘Changes in legislation’ also rated highly as a driver of change in the survey. However, the 1999 Regulations did little to define how significance should be evaluated for predicted impacts; hence, this is unlikely to be the main legislative driver of change. Other legislation which applies to the potential impacts or receptors, rather than the process itself, has been a greater driver, such as the Habitats Directive (92/43/EEC) and the Birds Directive (2009/147/EC). Internationally designated sites (mainly Natura 2000 sites) are afforded a higher level of protection, which results in a higher level of significance of any impact. There is a heavy inclusion of designations when assessing significance; this was observed in all time periods and is reflected in the IEEM guidelines. The emphasis on designated sites has the potential to result in other important species or habitats being overlooked, such as Nationally Scarce invertebrates or diverse assemblages of species that do not include any protected species. There is a tendency to assume that the absence of a designation means the absence of ecological value, a tendency unintentionally reinforced by statutory consultees since their limited resources force them to focus on designated species and habitats (Treweek, 1996).

4.2. Current practice and its limitations
A number of techniques were assessed for use when determining
significance in ESs. The majority of information in the ESs reviewed is
presented using descriptive text but they also make use of tables,
maps and matrices. It is felt by some interviewees that this is valid
since the document is essentially an argument for the development,
so it is expected that it will be mainly text but informed with data.
In contrast, the purpose of an ES is to present the findings of the impact assessment to inform decision makers. The review of 2007–2011
ESs found that they each employ a standard set of generic significance
levels which are the same for all of the impacts. There is less consensus among the survey respondents; there is still some variation in
the techniques employed: the majority of respondents do use a set of
standard generic significance levels, but this varies between levels
tailored to specific impacts and levels kept the same for all impacts.

Fig. 6. The average ratings of respondents for approaches that they consider will be most beneficial to the determination of significance in the future. n = 50.
Various general guidelines have been published for the EIA process (DETR, 2000; DoE, 1995; IEMA, 2004). Industry-specific guidance
has also been released, for infrastructure for example (Byron et al., 2000; DoT, 1993). However, the IEEM guidelines are the first non-sector-specific guidelines covering the whole EcIA process, and our survey shows that they have received widespread uptake. Since this survey
was sent out to IEEM practitioners there is expected to be a bias towards the use of their guidelines. However, IEEM is the leading UK
body in EcIA and it is clear from the interviews and ESs that their
guidelines are the most widely used in the industry.
The top three limitations (as rated by respondents of the survey) are
largely related to the understanding and use of science. The greatest
limitation is the quality of baseline data, recognised by the interviewees
as a major limitation when accurately assessing significance. The problem identified is that field surveys are not conducted extensively
enough to cover the varied sites. Common shortfalls identified in survey
collection are given in Table 3, with examples. One example from an interview which particularly illustrates this is that of an ecologist recording overwintering birds. Normally recording would only be conducted
during the day; however, he also conducted a night-time survey and in
doing so found that there was significant night-time feeding. It is
thought that this is to avoid predators and disturbance. As a result of
identifying this, mitigation to prevent lighting from reaching the birds
overnight could be put in place to avoid a significant impact. This hadn't
been picked up on an earlier survey by different consultants (Local Planning Authority 2). Local Planning Authority 1 commented that they
have turned applications away time and time again on the grounds of
incomplete survey data.
The scientific understanding of ecological processes has been rated as the second most limiting factor by respondents to the survey (Consultant 3):

“I think some of the main challenges are our levels of understanding between physical processes impact and the reaction of a species. We don't really know very well other than those systems that are fairly discrete.”

The amount of information and our understanding of ecological processes have dramatically increased in recent years; however, the complexity of these processes makes fully understanding them extremely difficult (Doak et al., 2008). This issue is particularly compounded by the lack of monitoring (Consultant 3) and the lack of clarity on applying the precautionary principle (Statutory Consultee 2). If a natural system or species' reaction to an impact is not understood to a reasonable degree of certainty, then there is a greater risk that a significant impact will occur without attempted mitigation. There is still a great deal not understood about the complex interactions of ecology (Doak et al., 2008); it is here that the precautionary principle is useful. However, there is a lack of guidance on the precautionary principle: the IEEM guidelines mention it but provide little guidance on its use, and it arose in the interviews that guidance from statutory bodies on this principle is also lacking. The concern is that it could be pushed too far, even to the point of halting development entirely, since there is always some uncertainty. Conversely, it can be used to drive improvements in the quality of EIA if it is used effectively.

The accuracy of impact predictions and significance levels cannot be assessed without conducting post-development monitoring (Matrunola, 2007; Treweek et al., 1993). The implementation of monitoring and feedback would help to understand the reactions of species to specific impacts. Monitoring can be used to calibrate the levels of significance assigned to impacts, resulting in a better reflection of the significance of impacts on the ground (ODA, 1996). However, the respondents to the practitioner survey rate the lack of monitoring and feedback as only the third most limiting factor. Interviewees reported that generally the impacts of developments are not monitored, making it difficult to tell whether the significance assigned is accurate or even representative. Local Planning Authority 2 illustrated this point in response to a question on the accuracy of predictions and significance:

“There is very little monitoring done so I don't think we really know the significance of the impact because we haven't measured the significance of impacts which have occurred in previous developments so we have no real baseline to work from.”

In addition, mitigation measures cannot be relied upon to reduce the level of residual significance with much certainty if there has not been any feedback from monitoring their effectiveness in previous developments (ODA, 1996).

Deficiencies in the quality of baseline survey data and the lack of monitoring and feedback are directly related to the fourth most highly rated limitation in the survey: company resources, that is, a lack of time and finances. Unfortunately, the lack of company resources allocated to EcIA can mean that consultants don't always have the time to conduct baseline surveys as comprehensive as they would like, or to implement monitoring measures and feedback. Resources for EcIA are generally constrained by limited budgets, in order to satisfy clients and to obtain new work. Consequently the accuracy with which significance can be determined is compromised.
Table 3
Shortfalls in the quality of survey collection, with examples, from the interviews.

Shortfall | Explanation | Example
Minimal frequency of surveys | Species may move between sites and are more likely not to be picked up as present if the survey is not conducted enough times | Fish require large numbers of surveys to obtain reliable population estimates
Poor seasonal spread of surveys | Species vary temporally according to their requirements; if surveys do not cover the seasons they are likely to miss particular species | Spring plants or migratory birds are only present at certain times of the year
Lack of night time surveys | Species that may be active overnight are overlooked when night time surveys are not conducted | Bats are nocturnal and birds can also be found active in the night
Surveys in the zone of influence | Remote impacts are more difficult to predict, so it is necessary to have survey data across the zone of influence to be able to do so | Important habitat linkages for species travelling through a site
4.3. Future improvements
Without monitoring we cannot learn from any impacts that do occur
or determine the accuracy of significance assessment (Dipper et al.,
1998), especially since the response of an ecosystem is often unpredictable (Doak et al., 2008). There has been widespread recognition for a number of years that monitoring and feedback would improve the practice of EIA, but they have received little uptake (Gilpin, 1995; ODA, 1996; Tinker et al., 2005; Wood, 1999). Our survey identified the lack of monitoring as one of the main limitations to practice today, and the implementation of monitoring and feedback as the approach most likely to improve practice in the future. The recognised benefits of implementing monitoring and feedback have been at a stalemate with the lack of planning conditions, meaning no legal requirement, and the limited resources for enforcement (Tinker et al., 2005). To move
forward, the stalemate needs to be broken (Dipper et al., 1998). Three ways
of doing this include mandatory requirements in legislation (Dipper et
al., 1998), more effective use of planning conditions and obligations
(Tinker et al., 2005), including the expansion of the use of formalised Environmental Management Plans to deliver them (Palframan, 2010); or
professional institutes, such as IEEM, driving implementation with the
potential to create a central database for the results of monitoring and
encouraging its members to monitor as best practice. Consultant 3 explains the advantage of this to consultants:
“If a development on another site has proven no impact through
using standard approaches, monitoring and identifying no impact
then I can pick that up and use that as evidence.”
The desire for more streamlined guidance, and to a lesser extent legislation, arose in the interviews (Local Planning Authority 1). The respondents to the survey rate more streamlined guidance in the top
three approaches most likely to improve practice in the future. The
IEEM guidelines are currently going through a review process and it
would be useful for consultants to give their views on streamlining
the guidance in this process.
Surprisingly, perhaps, respondents to the IEEM practitioner survey
rate a more standardised approach to significance assessment among the top three approaches most likely to improve practice.
This contrasts with the views of consultant interviewees where
there is concern that the innovation in conducting an EcIA could be
lost. However, there is more desire for a standardised approach
with the other stakeholders because it would further increase transparency and the ability to compare projects for decision making.
Even so, there is still the recognition that enough flexibility must be
built into the approach to allow for the vastly differing situation and
context of every site. Better implementation of the guidelines (particularly the IEEM guidelines) would be the most likely cause of a more
standardised approach. There is also a call for peer reviewing by professional bodies to ensure that the guidelines are being used as
intended (Consultant 4).
The interviewees and questionnaire respondents advocated that
the determination of significance would benefit from a central database to store and provide access to baseline survey data collected for
the purposes of EcIA. NGO 1 believes it to be important for all
stakeholders:
“The sharing of information and best practice is going to help us be
more efficient in how we respond to applications and hopefully then
help developers be more effective in what they're proposing.”
Currently the main available database of biological data is the National Biodiversity Network's Gateway (NBN, 2011). However, consultants have not tended to upload their raw data; the great majority of the data
available there originates from conservation organisations. A collective source for consultants' data would improve the effectiveness of scoping
(thereby reducing costs) and preliminary studies, improve research and
inform policy making.
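Purely as an illustration of what one shared record in such a database might hold, the sketch below defines a possible structure for a consultant-submitted baseline survey record. The field names are assumptions made for the example and do not describe the NBN Gateway or any existing system.

from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative sketch only: one possible record structure for a shared
# baseline-survey database of the kind discussed above. Field names are
# assumptions, not a description of the NBN Gateway or any existing system.

@dataclass
class BaselineSurveyRecord:
    site_name: str
    grid_reference: str          # e.g. an OS grid reference
    survey_date: date
    surveyor_organisation: str
    taxon: str                   # species or habitat surveyed
    method: str                  # e.g. "breeding bird transect"
    count: Optional[int] = None  # abundance where recorded
    notes: str = ""

record = BaselineSurveyRecord(
    site_name="Hypothetical meadow, Hampshire",
    grid_reference="SU4215",
    survey_date=date(2011, 5, 14),
    surveyor_organisation="Example Ecology Ltd",
    taxon="Skylark (Alauda arvensis)",
    method="breeding bird transect",
    count=3,
)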
The concept of ecosystem services is a relatively new approach to
valuing ecosystems which has largely emerged from the Millennium Ecosystem Assessment (MEA, 2005). It is still an emerging concept and is being considered more at a policy level (Carpenter et al.,
2009). The potential for the inclusion of ecosystem services in EcIA
arose in the interviews; some believe that it is important for it to be
considered at project-level as well as policy-level. However, it is
recognised that this would take time since the concept is still emerging and methods for assessing ecosystem services at project level
would need to be developed. Surprisingly, the survey shows strong disagreement with the use of the concept as an approach to improve the determination of significance; it was rated as the factor least likely to result in improvement. This may be a consequence of
the current lack of methods (Consultant 2; Consultant 3), the fact
that it is a relatively new concept and the current relatively low levels
of accuracy that can be achieved (TEEB, 2010).
A number of the interviewees were asked for their views on the
use of statistics to assign levels of significance to ecological impacts.
The view is largely that if it was workable it would be beneficial.
Most, however, thought that it is not practical, particularly with reference to the context of schemes and the wider zone of influence. This
agrees with the findings of Treweek (1996). Although not included
explicitly in the review framework, the use of statistics to determine
significance was not noted in any of the ESs, showing that it hasn't
arisen on a large scale. In addition, the survey respondents rate the
use of a numerical weighting system based on species, survey data,
county rarity, etc. as the 9th most likely approach (out of 11) to improve
the determination of significance in the future.
5. Conclusions
The practice of determining significance has changed considerably
through time; these changes have largely been improvements in practice. The
most notable change is the standardisation of the approaches used for
determining significance and greater transparency. Other improvements include increased accuracy of impact predictions and increases
in the quality of the justification of significance. These changes have
been driven most noticeably by the IEEM Guidelines (2006) which
have received widespread uptake and are responsible for the framework which is now widely used by consultants. However, there are
still a number of limitations associated with the determination of significance. The quality of baseline data and the scientific understanding of
ecological processes have been found to be lacking. It was also found
that the lack of monitoring and feedback could hinder the accuracy
with which significance is determined.
6. Future research
Firstly, there is a continuing need for research into the determination of significance in other topics covered within the scope of EIA,
such as air quality, climatic factors and cumulative impacts. Practice
varies between chapters since impacts can be more easily quantified
and compared against thresholds in some disciplines, whereas others
rely more heavily on value judgements; noise and landscape impacts
being prime examples.
Secondly, post-development auditing of developments to assess
which predictive techniques are the most accurate at determining
significance would be valuable research.
Acknowledgements
Our thanks to the interviewees who took time out to answer our
questions and for providing some of the Environmental Statements;
they have provided another dimension to this project. Thanks also
to the Institute of Ecology and Environmental Management for sending the survey to their members and to the members who responded
to the survey.
Appendix A. Supplementary data
Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.eiar.2012.04.003.
References
Benson J. Round table: what is the alternative? Impact assessment tools and sustainable planning. Impact Assess Proj Appraisal 2003;21:261–80.
Bevan J. Determining significance in Environmental Impact Assessment: a review of
impacts upon the socio-economic and water environments. MSc Thesis. Norwich:
University of East Anglia, 2009.
Byron H, Treweek J, Sheate W, Thompson S. Road developments in the UK: an analysis
of ecological assessment in environmental impact statements produced between
1993 and 1997. J Environ Plann Manage 2000;43:71–97.
Carpenter S, Mooney H, Agard J, Capistrano D, DeFries R, Díaz S, et al. Science for managing ecosystem services: beyond the millennium ecosystem assessment. Proc
Natl Acad Sci U S A 2009;106:1305–12.
DETR (Department for the Environment, Transport and the Regions). Circular 02/99:
Environmental Impact Assessment. London: HMSO; 1999.
DETR (Department for the Environment, Transport and the Regions). Environmental
Impact Assessment: a guide to procedures. London: HMSO; 2000.
Dipper B, Jones C, Wood C. Monitoring and post-auditing in Environmental Impact Assessment: a review. J Environ Plann Manage 1998;41:731–47.
Doak D, Estes J, Halpern B, Jacob U, Lindberg D, Lovvorn J, et al. Understanding and
predicting ecological dynamics: are major surprises inevitable? Ecology 2008;89:
952–61.
DoE (Department of the Environment). Monitoring environmental assessment and
planning. London: HMSO; 1991.
DoE (Department of the Environment). Preparation of environmental statements for
planning projects that require environmental assessment: a good practice guide.
London: HMSO; 1995.
DoE (Department of the Environment). Changes in the quality of environmental statements for planning projects: research project. London: HMSO; 1996.
DoE (Department of the Environment)/Welsh Office. Environmental assessment: a
guide to the procedures. London: HMSO; 1989.
Dolman P. Biodiversity and ethics. In: O'Riordan T, editor. Environmental Science for environmental management. 2nd ed. London: Pearson Education Ltd; 2000. p. 119–49.
DoT (Department of Transport). Design manual for roads and bridges, volume 11. London:
HMSO; 1993.
Duinker P, Beanlands G. The significance of environmental impacts: an exploration of
the concept. Environ Manage 1986;10:1-10.
Fortlage C. Environmental assessment: a practical guide. Aldershot: Gower Publishing;
1990.
George C. Testing for sustainable development through environmental assessment. Environ Impact Assess 1999;19:175–200.
Geraghty P. Environmental Impact Assessment practice in Ireland following the adoption of the European Directive. Environ Impact Assess 1996;16:189–211.
Gilpin A. Environmental Impact Assessment: cutting edge for the twenty-first century.
Cambridge: Cambridge University Press; 1995.
Glasson J, Therivel R, Chadwick A. Introduction to Environmental Impact Assessment.
3rd ed. Abingdon: Routledge; 2005.
Hammond R, Hudson MD. Environmental management of UK golf courses—attitudes
and actions. Land Urban Plann 2007;83:127–36.
IEEM (Institute of Ecology and Environmental Management). Guidelines for Ecological
Impact Assessment in the United Kingdom (version 7 July 2006). Available from:
http://www.ieem.org.uk/ecia/index.html 2006. [Accessed 28/04/11].
IEMA (Institute of Environmental Management and Assessment). Guidelines for baseline ecological assessment. Lincoln: IEMA; 1997.
IEMA (Institute of Environmental Management and Assessment). Guidelines for Environmental Impact Assessment. Lincoln: IEMA; 2004.
Lawrence D. Impact significance determination—back to basics. Environ Impact Assess
2007;27:755–69.
Lee N, Colley R, Bonde J, Simpson J. Reviewing the quality of environmental statements
and environmental appraisals. Occasional Paper, 55. Manchester: University of
Manchester, Department of Planning and Landscape; 1999.
Matrunola L. Mitigation measures proposed for ecological conservation: opinions of
consultants and a review of environmental statements. MSc Thesis. Norwich: University of East Anglia; 2007.
Millennium Ecosystem Assessment. Ecosystems and human well-being: synthesis.
Washington, DC: Island Press; 2005.
NBN (National Biodiversity Network). Welcome to the NBN Gateway. Available from:
http://data.nbn.org.uk/ 2011. [Accessed 20/04/11].
ODA (Overseas Development Administration). The manual of environmental appraisal.
London: ODA; 1996.
Palframan L. The integration of environmental impact assessment and environmental
management systems: experiences from the UK. 30th Annual Conference of the International Association for Impact Assessment Proceedings. Fargo, USA: IAIA;
2010.
Rands M, Adams W, Bennun L, Butchart S, Clements A, Coomes D, et al. Biodiversity
conservation: challenges beyond 2010. Science 2010;329:1298–303.
Sadler B. Environmental assessment in a changing world: evaluating practice to improve performance. Ottawa, ON: Ministry of Supply and Services; 1996.
TEEB (The Economics of Ecosystems and Biodiversity). The economics of ecosystems
and biodiversity: mainstreaming the economics of nature. A synthesis of the approach, conclusions and recommendations of TEEB; 2010.
Thompson M. Determining impact significance in EIA: a review of 24 methodologies.
J Environ Manage 1990;30:235–50.
Thompson S, Treweek J, Thurling D. The ecological component of Environmental Impact Assessment: a critical review of British environmental statements. J Environ
Plann Manage 1997;40:157–71.
Tinker L, Cobb D, Bond A, Cashmore M. Impact mitigation in Environmental Impact Assessment: paper promises or the basis of consent conditions. Impact Assess Proj
Appraisal 2005;23:265–80.
Treweek J. Ecology and Environmental Impact Assessment. J Appl Ecol 1996;33:191–9.
Treweek J. Ecological Impact Assessment. Oxford: Blackwell Science; 1999.
Treweek J, Thompson S, Veitch N, Japp C. Ecological assessment of proposed road developments: a review of environmental statements. J Environ Plann Manage
1993;36:295–307.
UN. Rio Declaration on Environment and Development. Available from: http://www.un.
org/documents/ga/conf151/aconf15126-1annex1.htm 1992. [Accessed 28/04/11].
Westman W. Ecology, impact assessment, and environmental planning. New York:
John Wiley & Sons; 1985.
Wilkins H. The need for subjectivity in EIA: discourse as a tool for sustainable development. Environ Impact Assess 2003;23:401–14.
Wood C. Environmental Impact Assessment: a comparative review. Harlow: Pearson
Education; 1995.
Wood P. Towards a robust judgement of significance in Environmental Impact Assessment. MSc Thesis. Oxford: Oxford Brookes University; 1995.
Wood G. Post-development auditing of EIA predictive techniques: a spatial analytical
approach. J Environ Plann Manage 1999;42:671–89.
Wood G. Thresholds and criteria for evaluating and communicating impact significance
in environmental statements: ‘See no evil, hear no evil, speak no evil’? Environ Impact Assess 2008;28:22–38.
Sam Briggs graduated from the University of Southampton having completed a Masters degree in Environmental Sciences in 2011. His interests lie in environmental impact assessment, conservation agriculture for poverty alleviation and the relationship
between business and the environment.
Malcolm Hudson is a lecturer at the Centre for Environmental Sciences, University of
Southampton. His research interests include managing human impacts on natural environments, both terrestrial and coastal, ecosystem services and poverty alleviation
and environmental impact assessment.