
Lecture 8 review

• Mark-recapture methods
  - Organize your data into a capture/recapture table
  - Follow the simple rules: predict the number at risk, calculate Pcap's, calculate predicted captures, use the Poisson likelihood

• Stock assessment models and information (today?)
  - Use an age-structured model, even if the data are simple
  - Get the full catch history for the stock, even if catch estimates are crude
  - Try to get a current or recent estimate of U (F)
  - Fit to abundance trend data, with great suspicion
  - Do not rely on extracting mortality information from age/size composition data
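
The mark-recapture "simple rules" above map directly onto a likelihood calculation. Here is a minimal sketch in Python of the last two steps (predicted captures and the Poisson likelihood); the table values and the single shared capture probability are illustrative assumptions, not course data:

```python
import numpy as np
from scipy.stats import poisson
from scipy.optimize import minimize_scalar

# Hypothetical capture/recapture table (illustration only):
# at_risk[t] = marked animals predicted to be at risk in sampling period t,
# recaptured[t] = number of those actually recaptured.
at_risk = np.array([100, 180, 240, 260])
recaptured = np.array([12, 25, 30, 28])

def neg_log_like(pcap):
    """Poisson negative log likelihood of the recaptures, assuming one
    capture probability pcap shared across periods."""
    predicted = pcap * at_risk          # predicted captures per period
    return -poisson.logpmf(recaptured, predicted).sum()

fit = minimize_scalar(neg_log_like, bounds=(1e-6, 1.0), method="bounded")
print(f"maximum likelihood Pcap = {fit.x:.3f}")
```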

Why you should expect hyperstability in cpue data

• Range contraction with effort targeted in areas of remaining high density

• Concentration of effort in high cpue areas (scratch and mopup model)

• Gear saturation effects (can’t handle more) at high abundances

• Effort sorting (only the best keep fishing when abundance drops)

• Covariation of effort with N in seasonal fisheries

N-E covariation in seasonal fisheries

• Many fisheries are highly seasonal (e.g. shrimp), and deplete each season's starting abundance No down to some economic quitting abundance Ne. No varies from year to year, while Ne tends to be stable

• Annual catch in such cases is just C = No - Ne

• Since N during the season varies as N = No·e^(-qE), we can predict seasonal effort as E = -ln(Ne/No)/q

• C/E then varies as (No - Ne)/(-ln(Ne/No)/q) = q(No - Ne)/ln(No/Ne), which is NOT linear in No
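
To see the hyperstability numerically, here is a minimal sketch (Python; q, Ne, and the range of No are illustrative assumptions) that applies the equations above across a range of starting abundances:

```python
import numpy as np

# Illustrative parameters (assumptions, not course data).
q = 0.05    # catchability
Ne = 0.2    # economic quitting abundance, stable across years
No = np.linspace(0.4, 2.4, 6)   # starting abundance, varying among years

E = -np.log(Ne / No) / q    # effort needed to deplete No down to Ne
C = No - Ne                 # annual catch
cpue = C / E                # = q*(No - Ne)/ln(No/Ne): nonlinear in No

# CPUE rises far more slowly than No, the signature of hyperstability.
for n, u in zip(No, cpue):
    print(f"No = {n:4.2f}   CPUE = {u:6.4f}")
```

Fitting a power curve to CPUE against No from such a simulation gives an exponent well below 1, which is exactly the pattern on the next slide.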

N-E covariation in seasonal fisheries causes hyperstability

• Plots of cpue = (total catch)/(total effort) show apparent density dependence in q:

[Figure, two panels. Left: q (cpue/No) versus No, power fit y = 0.0587x^-0.4048. Right: CPUE (= Catch/Effort) versus No, power fit y = 0.0587x^0.5952. Horizontal axes: No from 0 to 2.5.]

Lecture 9: Designing effective sampling programs with many asides about crab assessment

• When you measure something like fish density per area, there are two concerns:
  - Mean density
  - Total area to which this mean applies

• Total area is called the “sampling universe” or “sampling frame” or “universe of inference”

• Most sampling programs are designed without due care in defining the universe of inference (biologists focus on variability in density measurements)

The sampling universe

• N is the sum over all units i of the abundance ni in each unit: N = Σ ni

• N is also the mean ni times the number of sampling units in the universe

[Figure: the sampling universe drawn as a grid; each little box is a sampling unit with abundance ni]

The sampling universe

• To estimate N, remember that you must somehow assign an abundance ni to EVERY unit i, whether or not you could or did sample it

• Your options include:
  1. Assign the mean of the sampled ni to all unsampled units, assuming your units are a random sample (see the sketch after this list)
  2. Sample units at regular spacing (grid) so as to uncover any spatial structure that may be present (take a systematic sample, whose mean will have lower variance than a random sample's if and only if there is large-scale structure)
  3. Assume structure in how the ni vary over space (or time), and try to estimate that structure (assigning ni values to unsampled i) using spatial statistics models
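
A minimal sketch of option 1 in Python (the frame size and counts are made-up illustrations, not course data):

```python
import numpy as np

# Sampling frame: 400 units in the universe, 25 sampled at random.
total_units = 400
sampled_ni = np.array([0, 3, 1, 0, 7, 2, 0, 0, 5, 1, 0, 2, 9,
                       0, 1, 0, 4, 0, 0, 3, 1, 0, 6, 2, 0])

# Option 1: assign the sample mean to every unit, sampled or not, so
# N_hat = mean(ni) x (number of units in the universe).
N_hat = sampled_ni.mean() * total_units

# Rough standard error under simple random sampling (ignoring the
# finite population correction for brevity).
se = sampled_ni.std(ddof=1) / np.sqrt(len(sampled_ni)) * total_units
print(f"N_hat = {N_hat:.0f}  (SE ~ {se:.0f})")
```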

Spatial statistics and Kriging

• The estimate for every site i (e.g. the red site in the original figure) is taken to be a weighted average of estimates for sites sampled around it:

X̂i = Σj wij Xj

• The Kriging weights wij generally decrease with distance, with the rate of decrease determined by examining correlations between observations at different distances

[Figure: wij plotted against distance from i to j, decreasing with distance; this relationship is called the "variogram"]
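
A minimal sketch of the weighted-average step (Python). True Kriging derives the wij from the fitted variogram; here simple inverse-distance weights stand in for that fit, which is an assumption for illustration:

```python
import numpy as np

# Sampled sites: coordinates and observed densities (made-up illustration).
xy_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 2.0]])
x_obs = np.array([5.0, 3.0, 4.0, 0.5])

def estimate(xy_i, power=2.0):
    """X_hat_i = sum_j w_ij * X_j, with weights decreasing with distance.
    Inverse-distance weights are a stand-in for variogram-derived weights."""
    d = np.linalg.norm(xy_obs - xy_i, axis=1)
    if np.any(d == 0.0):               # target is exactly a sampled site
        return x_obs[d == 0.0][0]
    w = 1.0 / d**power                 # w_ij falls off with distance
    w /= w.sum()                       # normalize so weights sum to 1
    return float(np.dot(w, x_obs))

print(estimate(np.array([0.5, 0.5])))  # estimate at an unsampled site
```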

Spatial statistics and Kriging: what goes wrong

• Best performance when samples have been evenly spaced on a grid (systematic sampling)

• Averaging using the Kriging weights wij results in a smooth surface of estimated values, i.e. there is a hidden assumption that densities vary smoothly over the map, with no abrupt edges.

• Must be very careful to “mask” (assign zero values to) cells known not to have X>0, i.e. unsuitable habitat sites. NEVER use spatial statistics when you cannot construct such a masking map

What happens when you are not careful about defining the sampling universe: Australian NT mud crab example

• Size frequency data give F estimates ranging from 0.5 to 5, depending on growth and vulnerability assumptions; either the fishery is barely touching the population, or it is grossly overfishing it

• Depletion experiments (10 of them, in two areas, 1997-2004) to measure density and catchability (area swept by each trap)

(An aside about why you should not try to estimate fishing mortality rates from size-frequency data)

• Norm Hall's model gives F = 4.5/year
• Carl's GTG model gives F = 1.1/year

[Figure: predicted vs observed carapace width frequency, GTG model with M = 1.2, F = 1.37, von Bertalanffy Linf = 180, K = 0.8. X-axis: carapace width (mm), 120-200; y-axis: proportion of catch. Series: model vulnerable proportion; Norm's Roper proportions.]

Another aside about why you should be fascinated by crab and shrimp fisheries

• Apparent extreme resilience to fishing, but possibly due to (a) bionomic shutdowns and (b) an invulnerable reserve stock

• Very high economic value, growing importance even where finfish are not depleted

• Cannot age them and apply "standard" finfish assessment tools

• Fast dynamics, so you can see lots of changes

What happens when you are not careful about defining the sampling universe: Australian NT mud crab example

• The depletion experiments have provided really neat results, showing size-selective depletion and recovery of abundance over short time periods:

• So we now know that one trap sweeps about 100 m2/day, i.e. a radius of about 5.6 m.

• And we can then calculate the total area swept by traps per year

• But q = (effective area swept per trap day)/(total area)

[Figure: depletion plots of daily catch vs cumulative catch, 2003 Adelaide River. Crabs 110mm+: fitted lines y = -0.3143x + 19.834, y = -0.3256x + 38.662, y = -0.4686x + 65.442. Crabs <110mm: fitted lines y = -0.1624x + 2.2833, y = -0.1661x + 5.3029, y = -0.0526x + 3.3684.]
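
The fitted lines above are Leslie depletion regressions: daily catch Ct = q·No - q·Kt, where Kt is the catch accumulated before day t, so the slope estimates -q and the intercept estimates q·No. A minimal sketch of that calculation (Python, with made-up daily catches as the illustration):

```python
import numpy as np

# Made-up daily catches from one depletion experiment (illustration only).
daily_catch = np.array([19.0, 16.5, 13.2, 11.8, 9.1, 7.4, 5.9])
cum_catch = np.concatenate(([0.0], np.cumsum(daily_catch)[:-1]))  # K_t

# Leslie regression: C_t = q*No - q*K_t, so slope = -q, intercept = q*No.
slope, intercept = np.polyfit(cum_catch, daily_catch, 1)
q = -slope              # capture probability per day of trapping
No = intercept / q      # initial abundance in the depleted area

print(f"q = {q:.3f} per day, No = {No:.0f} crabs")
# Multiplying q by the area of the experimental site gives the effective
# area swept per trap-day, which is how a figure like ~100 m2/day arises.
```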

What happens when you are not careful about defining the sampling universe: Australian NT mud crab example

• We can then calculate the total area swept by traps per year: 100 m2 × total trap days = 128 million m2 = 128 km2

• But q = (area swept per trap day)/(total area), and F = q × (total trap days), i.e. F = (total area swept)/(total area of stock)

• What total area should we assume?

Mud crabs are fished all along the 1200 km coastline of the "top end", mainly in mangrove estuarine areas near major river mouths; the total area over which they are distributed could easily be less than 100 km2
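
A minimal sketch of how strongly F depends on the assumed sampling universe (Python; only the 128 km2 swept figure comes from the slide, the candidate stock areas are illustrative assumptions):

```python
# Total area swept by traps per year, from the slide:
# 100 m2/trap-day x total trap days = 128 million m2 = 128 km2.
area_swept_km2 = 128.0

# Candidate sampling universes (illustrative assumptions only).
for stock_area_km2 in (50.0, 100.0, 500.0, 1200.0):
    F = area_swept_km2 / stock_area_km2   # F = (area swept)/(stock area)
    print(f"assumed stock area {stock_area_km2:6.0f} km2 -> F = {F:5.2f}")
```

Depending on the assumed area, F ranges from trivial to far above 1, which is exactly why the undefined sampling universe makes the depletion data hard to use.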

We also tried to get around the q, F problem by fitting a monthly population model, but the fits are excellent for a wide range of q assumptions

[Figure: mud crab stock synthesis model fit to catch data, 1983-2003; predicted catch vs observed catch (mt), 0-180.]

What it would take to make the NT depletion data usable

• Detailed coastal habitat maps (potential habitat area)

• Detailed logbooks and/or surveys to determine distribution boundaries and concentration areas (e.g. FISHMAP), but as usual logbooks do not have accurate enough geo-referencing

Systematic vs random sampling (how would you map the NT mud crab distribution ("universe")?)

• Take a random sample, calculate the variance σ²_random among observations and the sample mean X̄_ran

• Take a systematic sample, calculate the variance σ²_systematic among observations and the sample mean X̄_syst

Systematic vs random sampling (how would you map the NT mud crab distribution ("universe")?)

The fundamental theorem of systematic sampling says that the systematic sample mean is more precise if and only if

σ²_systematic > σ²_random

i.e. if the systematic sample deliberately inflates variance by crossing gradients
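
A minimal simulation of the theorem (Python; the gradient field, sample size, and replicate count are illustrative assumptions). On a field with a strong large-scale gradient, systematic samples span the gradient, so their among-observation variance is inflated but their means are much more precise:

```python
import numpy as np

rng = np.random.default_rng(1)
# One-dimensional "coastline" with a strong large-scale gradient plus noise.
field = np.linspace(0.0, 10.0, 1000) + rng.normal(0.0, 1.0, 1000)
n = 10
spacing = len(field) // n

sys_means, ran_means = [], []
for rep in range(2000):
    start = rng.integers(0, spacing)
    sys_idx = start + np.arange(n) * spacing            # grid sample
    ran_idx = rng.choice(len(field), n, replace=False)  # random sample
    sys_means.append(field[sys_idx].mean())
    ran_means.append(field[ran_idx].mean())

# The systematic means cluster far more tightly around the true mean.
print("SD of systematic sample means:", np.std(sys_means))
print("SD of random sample means:    ", np.std(ran_means))
```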

A common violation of the theorem is to take transects parallel to known gradients, rather than across them (without pre-defining strata)

(What bonehead wildlife ecologists do in the Grand Canyon)

(Whereas smart fish guys like Lauretta do it right, crossing the gradient to maximize σ²)

The few vs many sites tradeoff

• Typically multiple measurements are taken on each sampling unit: "important" y1, y2, ... and "explanatory" x1, x2, x3, etc.

• High cost and/or collaborator interest in the x's typically leads to choosing a few sites (or even just one) and measuring many things in these sites, on the assumption that the x's are needed to explain (or are capable of explaining) variation in the y's.

• But in complex systems, explanatory models based on the x’s generally fail (or cannot be validated) due to measuring y’s and x’s on too few sampling units.

• The message: choose your y’s and x’s (and collaborators) carefully, and sample more sites. Do not expect to be able to predict successfully with data from one or a few sites, no matter how detailed the data are.

Examples of sampling too few sites intensively rather than many sites extensively (with careful variable choice)

• Carnation creek salmon-logging study

• Florida FIM program

• Northern Territory mud crab depletions

• LTER studies in general

• IBP biome studies (except Polish and Russian Lake system studies)