#Make code wrap text so it doesn't go off the page when Knitting to PDF
library(knitr)
opts_chunk$set(tidy.opts=list(width.cutoff=60),tidy=TRUE)

Cite as: Logan CJ, Lukas D, Bergeron L, Folsom M, McCune K. 2019. Is behavioral flexibility related to foraging and social behavior in a rapidly expanding species? (version: 6 Aug 2019) In principle recommendation by PCI Ecology.

This preregistration has been pre-study peer reviewed and received an In Principle Recommendation by:

Julia Astegiano and Esther Sebastián Gonzalez (2019) Understanding geographic range expansions in human-dominated landscapes: does behavioral flexibility modulate flexibility in foraging and social behavior? Peer Community in Ecology, 100026. 10.24072/pci.ecology.100026

  • Reviewers: Esther Sebastián Gonzalez and Pizza Ka Yee Chow

ABSTRACT

This is one of the first studies planned for our long-term research on the role of behavioral flexibility in rapid geographic range expansions. Project background: Behavioral flexibility, the ability to change behavior when circumstances change based on learning from previous experience (Mikhalevich, Powell, and Logan (2017)), is thought to play an important role in a species’ ability to successfully adapt to new environments and expand its geographic range (e.g., Lefebvre et al. (1997), Griffin and Guez (2014), Chow, Lea, and Leaver (2016), Sol and Lefebvre (2000), Sol, Timmermans, and Lefebvre (2002), Sol et al. (2005)). However, behavioral flexibility is rarely directly tested at the individual level, thus limiting our ability to determine how it relates to other traits (e.g., behavior, invasion success, diet generalism, foraging techniques, foraging innovations, mortality, brain size), which in turn weakens predictions about a species’ ability to adapt its behavior to new environments. We use great-tailed grackles (a bird species) as a model to investigate this question because they have rapidly expanded their range into North America over the past 140 years (i.e., they increased their nesting range by over 5500% between 1880 and 2000; Wehtje 2003, Peer 2011) (Fig. 1). Foraging behavior is considered central to the rapid geographic range expansion of this species, and it is thought that they have been so successful because they follow human urban and agricultural corridors (Wehtje (2003), Peer (2011)); therefore, as humans continue to modify landscapes, the amount of suitable grackle habitat increases. We expect this species to be behaviorally flexible because they are fast at reversal learning (Logan (2016)), they often encounter human-made “puzzle boxes” in the wild as they attempt to open packaging to access food when digging through garbage cans and eating at outdoor cafes, and they may track resources across time and space. Results will allow us to determine whether, as predicted by hypotheses and cross-species correlational data, individual-level variation in flexibility in this expanding species is linked with diet breadth, foraging proficiency, social interactions, habitat use, and movement into new geographic areas. This investigation: In this piece of the long-term project, we will assess whether individual performance in experiments that assess behavioral flexibility relates to individual variation in ecological and social behavior in the natural environment. In particular, we aim to determine whether the more behaviorally flexible grackles (flexibility is measured by reversal learning and solution switching on a multi-access box in a separate preregistration) have more flexible foraging behavior (eat a larger number of different foods, use a wider variety of foraging techniques), are more flexible in their habitat use (are found in more diverse habitat types, disperse farther from their natal area), and are more flexible in their social relationships (have more or stronger social bonds, particularly with less related individuals). We will be able to compare the grackles’ ability to adapt their behavior according to social context with data from other species, as well as determine whether it is linked with measures of flexibility in asocial contexts.

Figure 1. Five-year project overview.

A. STATE OF THE DATA

This preregistration was written (Dec 2017) prior to collecting any data, and submitted to PCI Ecology for peer review (Oct 2018) after 46 practice focal follows had been conducted (we will not use these data in our analyses below because they were collected while learning how to conduct focal follows and while testing interobserver reliability). We received the peer reviews (Mar 2019), by which point 3 more practice follows (also excluded from analyses) and 13 real follows had been conducted; we revised and resubmitted to PCI Ecology (Apr 2019) after a total of 16 real focal follows had been conducted (3 more since the first reviews). We received the second round of peer reviews from PCI Ecology, revised, and resubmitted (Jul 2019) after a total of 41 focal follows had been conducted (as of 10 Jul 2019).

B. PARTITIONING THE RESULTS

We may decide to present the results from different hypotheses in separate papers.

C. HYPOTHESIS

H3: Individuals that are behaviorally flexible (see Mikhalevich, Powell, and Logan (2017) for a detailed definition) will differ in their use of microhabitats within human-modified landscapes (substrate qualification during each focal follow), but the macrohabitat (square kilometer) of each population will not differ in human population density (measured with a GPS point for each focal follow after their release from the aviaries; we measure microhabitat types according to the last substrate the focal individual was seen on at the end of the focal follow: grass, gravel, tree, cement, cafe, dumpster, [additional substrates will be added as they are encountered]). Flexibility is measured in aviaries using two paradigms: reversal learning and switching between options on a multi-access box. Although we were only able to find this species in association with human-modified landscapes based on eBird sightings (i.e., there appear to be no forest-based populations), individuals could use these landscapes in a variety of ways. For example, they could specialize on particular foods or at particular types of locations (e.g., foraging exclusively at cafes or in grassy areas), they could generalize across all foods and location types, or they might fall somewhere in between these extremes.

P5: Individuals immigrating into a population are more likely to be flexible, potentially because they need to learn how to obtain resources in an unfamiliar area. Immigrants are individuals who carry many genetic variants (identified using ddRADseq) that are not found in other individuals in this population.

P5 alternative: Individuals immigrating into a population are not more likely to be flexible, potentially because the human urban environment is comparable across landscapes.

P6: Flexible individuals will be found more regularly in a wider diversity of microhabitats (human-modified substrates including cement, dumpsters or cafes; or natural substrates including grass, shrubs, and trees [additional substrates will be added as they are encountered]) during focal follows.

P6 alternative: Flexibility is not associated with presence in diverse microhabitats because the more flexible individuals might specialize in specific foraging strategies best suited to particular microhabitats.

P7: There will be no difference in human population density among the sites for our three grackle populations because all great-tailed grackle populations are highly associated with human-modified landscapes. Human population density per square mile will be obtained from census data (US Census Bureau: https://www.census.gov/quickfacts/fact/note/US/LND110210; we are still looking for an equivalent source for the Central American countries).

P8: Flexible individuals will not be associated with presence in diverse microhabitats, not necessarily because they are specialists or generalists in specific foraging strategies, but rather because they may focus on high-quality resources in particular habitat types. If this prediction is supported, we will conduct an additional analysis to examine the proportion of focal follows associated with a particular microhabitat type, which will allow us to determine whether the more flexible individuals are associated with particular microhabitats more than the less flexible individuals.

D. METHODS

Planned sample

Great-tailed grackles (n > 200) will be caught in the wild at three field sites across their geographic range: the center of their original range (at a site to be determined in Central America), the middle of the northward expanding edge (Tempe, Arizona USA), and on the northern expanding edge (to be determined). Individuals will be identified using colored leg bands in unique combinations, their data collected (blood, feathers, and biometrics), and then they will be released back to the wild. Some individuals (64-100) will be brought temporarily into aviaries for behavioral testing, and then they will be released back to the wild where the data for this study will be collected.

Sample size rationale

We will band as many birds as possible throughout the year, and conduct behavioral tests in aviaries (during the non-breeding seasons; approximately September through March) and in the wild (year-round) on as many birds as we can at each field site. The minimum sample size will be 200 banded birds and 60 behavior-tested birds in total; however, we expect to be able to sample many more.

Data collection stopping rule

We will stop collecting data in April 2023 when the current funding ends, or after data have been collected at all three field sites, whichever date comes first.

Open materials

Ethogram for Prim8.

Individuals for Prim8.

Open data

When the study is complete, the data will be published in the Knowledge Network for Biocomplexity’s data repository.

Randomization and counterbalancing

No randomization or counterbalancing is involved in this study.

Blinding of conditions during analysis

No blinding is involved in this study.

Dependent variables

P1-P2

  1. Number of different foods eaten in the first X minutes (where X = the total observation time of the individual with the lowest summed observation time, used to equalize observation time across individuals; a sketch of this calculation is given below)

  2. Number of foraging techniques used (based on Table 1 in Overington et al. (2009)) in the first X minutes (with X defined as in 1)

One model will be run per dependent variable.
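To make the calculation of X and of the first dependent variable concrete, below is a minimal sketch in R. It assumes a long-format focal-follow data frame with the columns FollowDuration (minutes of observation in each follow), MinuteIntoObservation (the cumulative minute of observation at which an Eat event occurred), Behavior, and FoodType; these column names are placeholders for illustration, not the actual Prim8 output.

# Minimal sketch (hypothetical column names): equalize observation time
# across individuals and count the different foods eaten in the first X minutes
ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# X = the lowest summed observation time across individuals
totalobs <- aggregate(FollowDuration ~ ID, FUN = "sum", data = ff)
X <- min(totalobs$FollowDuration)

# Keep Eat events that occurred within each individual's first X minutes of
# observation, then count the number of different food types per individual
eats <- ff[ff$Behavior == "Eat" & ff$MinuteIntoObservation <= X, ]
nfoods <- aggregate(FoodType ~ ID, FUN = function(x) length(unique(x)),
    data = eats)
colnames(nfoods) <- c("ID", "NumberFoodsEaten")
nfoods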

P1 alternative 2 additional analysis: flexible = more valuable food types

Food type (listed under the What modifier in the Eat behavior in the ethogram)

P3: flexible = more human foods

  1. Proportion of diet per individual that is human food

  2. Distance to outdoor human food areas during focal follows

  3. Number of outdoor human food areas within the home range

P4: flexible = a greater number of bonds or stronger bonds

  1. Strength of the maximum bond (calculated as the half-weight index based on association behavior during focal follows. See Analysis Plan > P4 for a complete description).

  2. Individual strength (the sum of all bonds an individual has; Wey et al. (2008))

  3. Individual degree (maximum number of other individuals that the focal subject associated with; Wey et al. (2008)).

  4. Male shares territory with another male: yes, no

  5. Relatedness for the strongest bond (measured following the protocol in Thrasher et al. (2018) to estimate pairwise relatedness between all individuals based on the extent of sharing of genetic variants as determined by ddRADseq)

P5: flexible = immigrants

Probability of being an immigrant (measured using maximum-likelihood-based individual assignment probabilities calculated using an admixture program with the ddRADseq data)

P6: flexible = wider range of habitats

Evenness in the proportion of time spent in each habitat type (grass, gravel and other natural substrate, cement, cafe, dumpster)
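The preregistration does not fix a specific evenness index; as one possibility, the sketch below computes Shannon diversity and Pielou's evenness (H divided by the natural log of the number of microhabitat types) from per-individual proportions of time. The example data frame, its column names, and the choice of index are assumptions for illustration only.

# Minimal sketch (assumed index and column names): evenness of time spent
# across microhabitat types per individual
habtime <- data.frame(ID = c("A", "B"), grass = c(0.5, 0.2), gravel = c(0.2,
    0.2), cement = c(0.1, 0.2), cafe = c(0.1, 0.2), dumpster = c(0.1, 0.2))
hab_cols <- c("grass", "gravel", "cement", "cafe", "dumpster")

shannon <- function(p) {
    p <- p[p > 0]  #drop habitat types the individual was never seen in
    -sum(p * log(p))
}
habtime$H <- apply(habtime[, hab_cols], 1, shannon)
habtime$Evenness <- habtime$H/log(length(hab_cols))  #Pielou's J = H / ln(S)
habtime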

P8: flexible = particular habitats

Proportion of focal follows in each habitat type (grass, gravel and other natural substrate, cement, cafe, dumpster)

Independent variables

P1-P4 and P6

  1. Flexibility 1: Number of trials to reverse a preference in the last reversal (in the reversal learning experiment) an individual experienced (individuals in the flexibility control group only experience 1 reversal, so these data will come from their first and only reversal; individuals in the flexibility manipulation group experience serial reversals until they pass a certain criterion, therefore we will only use data from their most recent reversal). An individual is considered to have a preference if it chose the rewarded option at least 17 out of the most recent 20 trials, with a minimum of 8 or 9 correct choices out of 10 on the two most recent sets of 10 trials. See behavioral flexibility preregistration.

  2. Flexibility 3: If the number of trials to reverse a preference does not positively correlate with the number of trials to attempt or solve (meet criterion) new loci on the multi-access box (an additional measure of behavioral flexibility), then the number of trials to solve and the number of trials to attempt a new option on the multi-access box will be additional dependent variables. See behavioral flexibility preregistration.

  3. Flexibility 4: This measure is currently being developed and is intended to be a more accurate representation of all of the choices an individual made, as well as accounting for the degree of uncertainty exhibited by individuals as preferences change. It will be based on a Bayesian estimate of the reduction in error across trials estimated from the number of correct choices from the beginning of each reversal. If this measure more effectively represents flexibility (determined using a modeled dataset and not the actual data), we may decide to rely solely on this measure and not use flexibility measures 1 through 3. If this ends up being the case, we will modify the code in the analysis plan below to reflect this change.

  4. Flexibility manipulation: control, manipulated

  5. Dominance rank: measured as the number of wins minus the number of losses, divided by the sum of wins + losses. This calculation will give each individual a dominance rank number, and we will order individuals by rank from lowest to highest to create a dominance hierarchy.

  6. Condition: aviary-tested, not-aviary-tested

  7. ID (random effect because multiple measures per individual)

  8. Population: center (Central America), middle (Arizona), edge (northern US) (random effect because each population might have a different slope)

P1 alternative 2 additional analysis: flexible = more valuable food types

Flexibility 1 (as above)

P5: immigrants = more flexible

  1. Flexibility 1: Number of trials to reverse a preference in the last reversal an individual experienced (reversal learning; an individual is considered to have a preference if it chose the rewarded option at least 17 out of the most recent 20 trials, with a minimum of 8 or 9 correct choices out of 10 on the two most recent sets of 10 trials). See behavioral flexibility preregistration.

  2. Flexibility 3: If the number of trials to reverse a preference does not positively correlate with the latency to attempt or solve new loci on the multi-access box (an additional measure of behavioral flexibility), then the average latency to solve and the average latency to attempt a new option on the multi-access box will be additional dependent variables. See behavioral flexibility preregistration.

  3. Flexibility 4: A Bayesian measure is currently being developed and is intended to be a more accurate representation of all of the choices an individual made, as well as accounting for the degree of uncertainty exhibited by individuals as preferences change. If this measure more effectively represents flexibility (determined using a modeled dataset and not the actual data), we may decide to rely solely on this measure and not use flexibility measures 1 through 3. If this ends up being the case, we will modify the code in the analysis plan below to reflect this change.

  4. Population: center (Central America), middle (Arizona), edge (northern US) (random effect because each population might have a different slope)

P8: flexible = particular habitats

Flexibility 1 (as above)

Focal follow interobserver reliability

To be able to conduct focal follows (methods as in Altmann (1974)), a coder must pass interobserver reliability before the data they collect is used in the data set. To pass, coders must have an intra-class correlation (ICC; Hutcheon, Chiolero, and Hanley (2010)) of 0.90 or greater based on at least six 10-min focal follows where both coders recorded the behavior of the same focal individual at the same time.

Luisa Bergeron was the first person to conduct focal follows, therefore she trained Kelsey McCune and Melissa Folsom until they passed interobserver reliability (on 10 June 2019) for each of the 6 variables listed in the preregistrations.

ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# Intra-class correlation / reliability coefficient / the
# degree of bias in the regression slope (Hutcheon et al.
# 2010. Random measurement error and regression dilution
# bias www.bmj.com/content/340/bmj.c2289). 'The ratio of
# variation in error-free (true) X values to the variation
# in the observed error-prone (observed) values is known as
# the reliability coefficient, attenuation factor, or
# intra-class correlation.'

# Download the datasheet as a .csv file and put in a folder
# where you know the file path Load the datasheet here

data <- read.csv("/Users/corina/ownCloud/Documents/Grackles/Data/ASU/InterObsRelKel.csv",
    header = TRUE, sep = ",", stringsAsFactors = FALSE)
data  #Check to make sure it looks right

# LOAD the irr package, and run it in R
library(irr)

### ICCs for agreement between the 2 coders for 6 variables
### Note: c(4,5) is telling R to look at columns 4
### ('1NumberForagingTechniques') and 5
### ('2NumberForagingTechniques') and compare them

# NumberDifferentFoodsEaten
icc(data[, c(2, 3)], model = "oneway", type = "agreement", unit = "single",
    conf.level = 0.95)

# NumberDifferentForagingTechniques
icc(data[, c(4, 5)], model = "oneway", type = "agreement", unit = "single",
    conf.level = 0.95)

# NumberAffiliativeInteractions
icc(data[, c(6, 7)], model = "oneway", type = "agreement", unit = "single",
    conf.level = 0.95)

# NumberAggressiveInteractions
icc(data[, c(8, 9)], model = "oneway", type = "agreement", unit = "single",
    conf.level = 0.95)

# NumberTimesSubjectInitiatedAggression
icc(data[, c(10, 11)], model = "oneway", type = "agreement",
    unit = "single", conf.level = 0.95)

# Microhabitat
icc(data[, c(12, 13)], model = "oneway", type = "agreement",
    unit = "single", conf.level = 0.95)

ICCs for Melissa (n=6 focal follows, data):

Different Foods Eaten = 1.00

Different Foraging Techniques = 1.00

Number of Affiliative Interactions = 1.00

Number of Aggressive Interactions = 0.96

Number of Initiated Aggressive Interactions = 0.94

Microhabitat = 1.00

ICCs for Kelsey (n=6 focal follows, data):

Different Foods Eaten = 1.00

Different Foraging Techniques = 0.97

Number of Affiliative Interactions = 0.96

Number of Aggressive Interactions = 1.00

Number of Initiated Aggressive Interactions = 1.00

Microhabitat = 1.00

NOTE: the ICCs for the variable Different Foods Eaten for these focal follows were originally 0.63 (Melissa) and 0.64 (Kelsey) because Melissa and Kelsey recorded a “bug” being eaten while Luisa recorded no food type because she couldn’t identify it to a more specific category. At this point, we decided that we would prefer to enter a general category for food type rather than having no information about what was eaten. Therefore, this data point was removed from the interobserver reliability analysis. This resulted in ICCs of 1.00 for both Kelsey and Melissa on the Different Foods Eaten variable because they matched Luisa in the other food type data points.

E. ANALYSIS PLAN

We do not plan to exclude any data. When missing data occur, the existing data for that individual will be included in the analyses for the tests they completed. Analyses will be conducted in R (current version 4.0.3; R Core Team (2017)). When there is more than one experimenter within a test, experimenter will be added as a random effect to account for potential differences between experimenters in conducting the tests. If there are no differences between models including or excluding experimenter as a random effect, then we will use the model without this random effect for simplicity.

We will analyze data for females and males separately because each sex has a distinct natural history.

Ability to detect actual effects

To begin to understand what kinds of effect sizes we will be able to detect given our sample size limitations and our interest in decreasing noise by attempting to measure it, which increases the number of explanatory variables, we will revise the Analysis Plan after reading McElreath (2016). We currently don’t have any estimates for any of our measured variables because these tests have never been done in grackles and we have not encountered previous research that has manipulated flexibility in this way. We will use Bayesian analyses to estimate our likely confidence in the results given simulated data. We will revise this preregistration to include these new analyses before conducting the planned analyses on our actual data. Based on the simulations, we might adapt the number of focal follows per individual or decide to collect much more data just with the aviary-tested birds to increase the amount of information per individual.
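Until those simulation-based analyses are added, the sketch below illustrates the general logic on simulated data: generate Poisson-distributed counts with a known effect of trials-to-reverse, fit the model, and check whether the effect is recovered. The effect size, sample sizes, and the use of lme4::glmer (for speed, instead of MCMCglmm) are assumptions for illustration only.

# Minimal simulation sketch (assumed effect size and sample sizes): can an
# effect of flexibility on a count outcome be detected with this design?
library(lme4)
set.seed(42)
n_ind <- 30  #number of aviary-tested birds (assumption)
n_follows <- 4  #focal follows per bird (assumption)
sim <- data.frame(ID = rep(1:n_ind, each = n_follows))
sim$TrialsToReverseLast <- rep(rnorm(n_ind, mean = 70, sd = 20), each = n_follows)
ind_int <- rep(rnorm(n_ind, 0, 0.3), each = n_follows)  #among-individual variation
beta <- -0.01  #assumed effect: fewer foods eaten per additional trial to reverse
sim$NumberFoodsEaten <- rpois(nrow(sim), lambda = exp(1 + beta * (sim$TrialsToReverseLast -
    70) + ind_int))
m <- glmer(NumberFoodsEaten ~ scale(TrialsToReverseLast) + (1 | ID),
    family = poisson, data = sim)
summary(m)  #is the simulated effect recovered at this sample size?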

Data checking

The data will be checked for overdispersion, underdispersion, zero-inflation, and heteroscedasticity with the DHARMa R package (Hartig 2019) following methods by Hartig. Note: DHARMa doesn’t support MCMCglmm, therefore we will use the closest supported model: glmer from the R package lme4 (Bates et al. 2015).

P1-P2

Analysis: Because the independent variables could influence each other, we will analyze them in a single model: a Generalized Linear Mixed Model (GLMM; MCMCglmm function, MCMCglmm package; Hadfield 2010) with a Poisson distribution and log link, using 130,000 iterations with a thinning interval of 10, a burnin of 30,000, and minimal priors (V=1, nu=0) (Hadfield 2014). We will ensure the GLMM shows acceptable convergence (lag time autocorrelation values <0.01; Hadfield 2010), and adjust parameters if necessary to meet this criterion. We will determine whether an independent variable had an effect or not using the Estimate in the full model.

ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# Separate the sexes
fem <- ff[ff$Sex == "f", ]
mal <- ff[ff$Sex == "m", ]

# Factor the random effect variables
fem$ID <- as.factor(fem$ID)
fem$Population <- as.factor(fem$Population)
mal$ID <- as.factor(mal$ID)
mal$Population <- as.factor(mal$Population)

# DATA CHECKING
library(DHARMa)
library(lme4)

# Data checking for GLMM 1 females
simulationOutput <- simulateResiduals(fittedModel = glmer(NumberFoodsEaten ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        (1 | ID) + (1 | Population), family = poisson, data = fem),
    n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for GLMM 1 males
simulationOutput <- simulateResiduals(fittedModel = glmer(NumberFoodsEaten ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        (1 | ID) + (1 | Population), family = poisson, data = mal),
    n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for GLMM 2 females
simulationOutput <- simulateResiduals(fittedModel = glmer(NumberForagingTechniques ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for GLMM 2 males
simulationOutput <- simulateResiduals(fittedModel = glmer(NumberForagingTechniques ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)


# GLMM
library(MCMCglmm)
prior = list(R = list(R1 = list(V = 1, nu = 0), R2 = list(V = 1,
    nu = 0), R3 = list(V = 1, nu = 0), R4 = list(V = 1, nu = 0)),
    G = list(G1 = list(V = 1, nu = 0), G2 = list(V = 1, nu = 0)))

# GLMM 1 with response variable = NumberFoodsEaten females
f1 <- MCMCglmm(NumberFoodsEaten ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = fem, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f1)
# autocorr(f1$Sol) #Did fixed effects converge?
# autocorr(f1$VCV) #Did random effects converge?

# males
f2 <- MCMCglmm(NumberFoodsEaten ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = mal, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f2)
# autocorr(f2$Sol) #Did fixed effects converge?
# autocorr(f2$VCV) #Did random effects converge?

# GLMM 2 with response variable = NumberForagingTechniques
# female
f3 <- MCMCglmm(NumberForagingTechniques ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = fem, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f3)
# autocorr(f3$Sol) #Did fixed effects converge?
# autocorr(f3$VCV) #Did random effects converge?

# male
f4 <- MCMCglmm(NumberForagingTechniques ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = mal, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f4)
# autocorr(f4$Sol) #Did fixed effects converge?
# autocorr(f4$VCV) #Did random effects converge?

We will quantify the number of different food types and foraging techniques during focal follows according to the ethogram. If a grackle forages during a focal follow, we record WHAT it eats, as well as HOW the bird is searching for food. Foraging techniques include: flipping over objects (flip), digging in the ground with bill or feet (dig), sweeping the head back and forth [i.e., actually sweeping the bill across the substrate (sw)], extracting from a substrate (ex), lowering the body posture to be parallel to the ground to stalk/catch prey from the air, ground, a tree, etc. (sc), using a gaping bill to search through a substrate (ga), and lifting or nudging objects with the bill (ln).

P1 alternative 2: flexible = more valuable food types

Analysis: We will rank all food types eaten by the grackles by their caloric value, examine the food types eaten per individual and relate this to their flexibility scores on their most recent reversal learning color tube experiment. This will allow us to see whether the more flexible individuals (faster to reverse) eat more valuable (i.e., higher calorie) food types than the less flexible individuals.
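No analysis code accompanies this alternative in the preregistration, so the following is a minimal sketch under assumptions: a hypothetical lookup table of caloric values per food type is merged with the focal-follow data, and each individual's maximum food value eaten is related to its number of trials to reverse using a Spearman correlation. The calorie table, column names, and choice of summary statistic are placeholders.

# Minimal sketch (hypothetical calorie table and column names): do faster
# reversers eat more valuable food types?
ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)
calories <- data.frame(FoodType = c("bread", "insect", "seed"), CaloricValue = c(3,
    2, 1))  #placeholder ranking of food value
ff <- merge(ff, calories, by = "FoodType", all.x = T)
# Summarize food value and flexibility per individual
val <- aggregate(CaloricValue ~ ID, FUN = "max", data = ff)
flex <- aggregate(TrialsToReverseLast ~ ID, FUN = "mean", data = ff)
valflex <- merge(val, flex, by = "ID")
# A negative correlation = individuals that reverse in fewer trials eat more
# valuable food types
cor.test(valflex$CaloricValue, valflex$TrialsToReverseLast, method = "spearman")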

P3: flexible = more human foods

Analysis: A GLMM will be conducted as in P1-P2, except that this GLMM will use a binomial distribution (called “categorical” in MCMCglmm) because the response variable is a proportion.

ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# Separate the sexes
fem <- ff[ff$Sex == "f", ]
mal <- ff[ff$Sex == "m", ]

# Factor the random effect variables
fem$ID <- as.factor(fem$ID)
fem$Population <- as.factor(fem$Population)
mal$ID <- as.factor(mal$ID)
mal$Population <- as.factor(mal$Population)

# DATA CHECKING
library(DHARMa)
library(lme4)

# Data checking for female GLMM
simulationOutput <- simulateResiduals(fittedModel = glmer(ProportionFoodsEatenHumanFood ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = binomial,
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for male GLMM
simulationOutput <- simulateResiduals(fittedModel = glmer(ProportionFoodsEatenHumanFood ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = binomial,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)


# GLMM
library(MCMCglmm)
prior = list(R = list(R1 = list(V = 1, nu = 0), R2 = list(V = 1,
    nu = 0), R3 = list(V = 1, nu = 0), R4 = list(V = 1, nu = 0)),
    G = list(G1 = list(V = 1, nu = 0), G2 = list(V = 1, nu = 0)))

# GLMM with response variable = ProportionFoodsEatenHumanFood, females
f1 <- MCMCglmm(ProportionFoodsEatenHumanFood ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "categorical", data = fem, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f1)
# autocorr(f1$Sol) #Did fixed effects converge?
# autocorr(f1$VCV) #Did random effects converge?

# males
f2 <- MCMCglmm(ProportionFoodsEatenHumanFood ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "categorical", data = mal, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f2)
# autocorr(f2$Sol) #Did fixed effects converge?
# autocorr(f2$VCV) #Did random effects converge?

# Is distance from outdoor cafes a repeatable trait within
# individuals, as measured with a GPS point of individual
# locations during several separate focal follows
library(rptR)  #provides the rpt() function
d1 <- rpt(dist ~ Sex + (1 | ID), grname = "ID", data = ff, datatype = "Poisson",
    nboot = 1000, npermut = 300)
summary(d1)
# If repeatable, take average distance across focal follows
# to model relationship with flexibility
tmp = aggregate(dist ~ ID, FUN = "mean", data = ff)
colnames(tmp) = c("ID", "avg_dist")  #aggregate() returns the grouping variable first
ff = merge(ff, tmp, by = "ID", all = T)
d2 <- MCMCglmm(avg_dist ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "gaussian",
    data = ff, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)  #gaussian because avg_dist is a continuous distance
summary(d2)

# Number of outdoor cafes within the home range of each
# individual, measured by calculating home range size and
# then summing the number of cafes. Load packages
library(adehabitatHR)
library(sf)
# Point to the correct data file and load it
setwd("~/Documents/Grackle project/Space use")
pts <- read.csv("gtgr_points.csv", header = T)
# Set variables to ensure the models read the data properly
pts$Latitude = as.numeric(as.character(pts$Latitude))
pts$Longitude = as.numeric(as.character(pts$Longitude))
pts$Date = as.character(pts$Date)
pts$Time = as.character(pts$Time)
pts$da.ti = as.POSIXct(paste(pts$Date, pts$Time), format = "%Y-%m-%d %H:%M:%S")
# Home range size can only be calculated when a bird has
# more than 5 relocations Count the number of relocations
# for each bird, then exclude individuals with less than 6
tmp = pts
tmp$count = 1
tmp = aggregate(count ~ Bird.Name, FUN = "sum", data = tmp)
pts.min = merge(pts, tmp, by = "Bird.Name", all = T)
pts.min = pts.min[-which(pts.min$count < 6), ]
pts.min$Bird.Name = as.factor(as.character(pts.min$Bird.Name))
# Convert dataframe to Spatial file type
p.sf <- st_as_sf(pts.min, coords = c("Longitude", "Latitude"),
    crs = 4326)
class(p.sf)
p.spatial <- as(p.sf, "Spatial")
class(p.spatial)
# Calculate the 100% minimum convex polygon home range in square
# meters for each individual
hr = mcp(p.spatial[1], percent = 100, unout = "m2")
plot(hr)
# Read in dataframe with cafe locations
cafs <- read.csv("cafe_points.csv", header = T)
# Convert to a spatial points object (assuming the cafe coordinates are in
# Longitude and Latitude columns, as for the grackle points above)
c.sf <- st_as_sf(cafs, coords = c("Longitude", "Latitude"), crs = 4326)
c <- as(c.sf, "Spatial")
# For each cafe, find the home range polygon (if any) it falls within, then
# count the number of cafes per home range
cafes <- over(c, hr)
cafe_data <- as.data.frame(table(cafes$id))  #mcp() stores the individual identifier in the 'id' column
colnames(cafe_data) <- c("ID", "NumberCafesInTerritory")
cafe_data <- merge(cafe_data, ff, by = "ID")

c1 <- MCMCglmm(NumberCafesInTerritory ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = cafe_data, verbose = F,
    prior = prior, nitt = 130000, thin = 10, burnin = 30000)  #poisson because the number of cafes is a count
summary(c1)

P4: flexible = a greater number of bonds or stronger bonds

Analysis: A GLMM will be conducted as in P1-P2.

To quantify social relationships, we will conduct at least four 10-minute focal follows on each subject, spaced equally across breeding and non-breeding seasons. We will be able to find subjects in the wild because we will attach radio transmitter tags to all grackles that are released from the aviaries upon completion of their test battery. To ensure we fully sample social and foraging behavior, we will prioritize conducting focal follows on these tagged grackles, for which we have a much larger amount of individualized data, including multiple measures of flexibility. We will also sample many other color-marked grackles that were never tested in the aviaries, and thus do not have measures of flexibility. By conducting focal follows on grackles that were never in captivity, we can assess whether time in captivity affected grackle social behavior after release: if it did not, aviary-tested birds should be indistinguishable from non-aviary-tested birds in these analyses.

To measure affiliative bonds, during each focal follow we will record when another grackle comes within one body length of the focal bird (and does not engage in aggressive interactions). In case we do not observe enough of these close associations, we will also record when another grackle comes within 3m of the focal subject (and does not engage in aggressive interactions). Finally, we will conduct a scan sample at the end of the follow to determine group size as the number of other grackles within 10m of the focal individual. Unmarked grackles that are seen in proximity of the focal individual will be recorded and included in the count of group size and individual degree (the number of unique associates), but because we cannot distinguish unmarked individuals from each other, we will exclude unmarked bird data from calculations of an individual’s summed bond strengths (see details in the next paragraph).

We will also measure aggressive behavioral interactions, as indicated in our ethogram. The outcome of these dyadic interactions will be used to create our index of dominance ((wins - losses) / (wins + losses)).
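As a minimal sketch of this calculation (the interaction data frame and its column names are hypothetical, not the actual ethogram output), each individual's wins and losses can be tallied from the dyadic aggressive interactions and converted into the index:

# Minimal sketch (hypothetical data): dominance index from dyadic aggressive
# interactions, where each row records one interaction as a Winner and a Loser
agg <- data.frame(Winner = c("A", "A", "B", "C"), Loser = c("B", "C", "C", "A"))
ids <- unique(c(agg$Winner, agg$Loser))
wins <- sapply(ids, function(i) sum(agg$Winner == i))
losses <- sapply(ids, function(i) sum(agg$Loser == i))
dom <- data.frame(ID = ids, DominanceRank = (wins - losses)/(wins + losses))
dom[order(dom$DominanceRank), ]  #order individuals from lowest to highest rank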

We will conduct subsequent follows on the same individual only when 3 or more weeks have passed since the previous focal follow to prevent temporal autocorrelation in behavior (Whitehead (2008)). From the data sheet of dyadic associations during focal follows, we will create a matrix of association strengths between all marked grackles by calculating the Half-Weight association index. This index determines association strength based on the proportion of observations in which two individuals are seen together versus separately, and accounts for bias arising from subjects that are more likely to be observed separately rather than together in the same group (Cairns and Schwager (1987)). From the matrix of association values, we will use the R package igraph (Csardi, Nepusz, and others (2006)) to create a social network, and calculate each individual’s strength (sum of all association values) and degree (maximum number of unique associates) values (Croft, James, and Krause (2008)).
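To make these two steps concrete, below is a minimal sketch of the Half-Weight Index for a single dyad and of the strength and degree calculations in igraph; all values and the example association matrix are hypothetical.

# Minimal sketch (hypothetical data): Half-Weight Index and network metrics
library(igraph)
# x = follows in which A and B were associated; yA and yB = follows in which
# only one of the two was observed; yAB = follows in which both were observed
# but not associated (Cairns & Schwager 1987)
hwi <- function(x, yA, yB, yAB) x/(x + yAB + 0.5 * (yA + yB))
hwi(x = 4, yA = 2, yB = 1, yAB = 3)  #association strength for one example dyad

# Build a network from a (hypothetical) matrix of HWI values and extract each
# individual's strength and degree
assoc <- matrix(c(0, 0.5, 0.2, 0.5, 0, 0, 0.2, 0, 0), nrow = 3, dimnames = list(c("A",
    "B", "C"), c("A", "B", "C")))
net <- graph_from_adjacency_matrix(assoc, mode = "undirected", weighted = TRUE)
strength(net)  #sum of all association values per individual
degree(net)  #number of unique associates per individual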

Before analyzing degree and strength, we will determine if these values differ between breeding (Apr - Aug) and non-breeding seasons (Sept - Mar) because social associations could change as a result of breeding behaviors. If there is no difference, we will combine all data for the analyses described below. If there is a difference, we will only use the non-breeding season data.

Social network data are not independent (Croft et al. (2011)), therefore, to determine whether individuals are associating non-randomly based on flexibility (i.e., association strength between two grackles is larger than would be expected by random chance), we will compare our model results to those obtained from random networks. To make a random network, we will use the R package sna (Butts (2016)) and the function “rperm” to randomly rearrange the association strengths (edges) of grackles in our network. We will conduct this edge randomization (called permutation) 10,000 times to create our sample of random networks. We will then re-calculate our dependent variables from the random networks and re-run the same models (as in Croft et al. (2011) and Whitehead, Bejder, and Ottensmeyer (2005)). We will conclude that social bonds are significantly related to flexibility if the coefficients describing the relationship in our observed data are in the top 2.5% or bottom 2.5% of the coefficients resulting from models run on the random networks.
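To illustrate the logic of this permutation test, below is a minimal sketch that shuffles edge weights with base R's sample() (rather than sna::rperm) on a made-up network, refits a simple linear model after each permutation, and compares the observed flexibility coefficient to the permuted distribution; all data and the model are hypothetical.

# Minimal permutation sketch (hypothetical data; base-R shuffle instead of
# sna::rperm): compare the observed flexibility coefficient to coefficients
# from models fit on networks with randomly rearranged edge weights
set.seed(1)
n <- 30
flex <- rnorm(n)  #placeholder flexibility scores
edges <- matrix(runif(n * n), n, n)  #placeholder association strengths
edges[lower.tri(edges, diag = TRUE)] <- 0
edges <- edges + t(edges)  #symmetric association matrix with a zero diagonal
obs_coef <- coef(lm(rowSums(edges) ~ flex))["flex"]

perm_coefs <- replicate(10000, {
    perm <- edges
    perm[upper.tri(perm)] <- sample(perm[upper.tri(perm)])  #shuffle edge weights
    perm[lower.tri(perm)] <- t(perm)[lower.tri(perm)]  #keep the matrix symmetric
    coef(lm(rowSums(perm) ~ flex))["flex"]
})
# The observed relationship is considered significant if obs_coef falls outside
# the middle 95% of the permuted coefficients (two-tailed)
quantile(perm_coefs, c(0.025, 0.975))
obs_coef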

ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# Separate the sexes
fem <- ff[ff$Sex == "f", ]
mal <- ff[ff$Sex == "m", ]

# Factor the random effect variables
fem$ID <- as.factor(fem$ID)
fem$Population <- as.factor(fem$Population)
mal$ID <- as.factor(mal$ID)
mal$Population <- as.factor(mal$Population)

# DATA CHECKING
library(DHARMa)
library(lme4)

# Data checking for female GLMM 1
simulationOutput <- simulateResiduals(fittedModel = glmer(MaxBondStrength ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for male GLMM 1
simulationOutput <- simulateResiduals(fittedModel = glmer(MaxBondStrength ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for female GLMM 2
simulationOutput <- simulateResiduals(fittedModel = glmer(SumBondStrength ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for male GLMM 2
simulationOutput <- simulateResiduals(fittedModel = glmer(SumBondStrength ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)


# Data checking for female GLMM 3
simulationOutput <- simulateResiduals(fittedModel = glmer(Degree ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for male GLMM 3
simulationOutput <- simulateResiduals(fittedModel = glmer(Degree ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for female GLMM 4
simulationOutput <- simulateResiduals(fittedModel = glmer(MaleSharesTerritory ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = binomial,  #binomial because MaleSharesTerritory is a yes/no variable
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for male GLMM 4
simulationOutput <- simulateResiduals(fittedModel = glmer(MaleSharesTerritory ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = binomial,  #binomial because MaleSharesTerritory is a yes/no variable
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = mal$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for female GLMM 5
simulationOutput <- simulateResiduals(fittedModel = lmer(RelatednessForMaxBond ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), data = fem),
    n = 250)  #250 simulations, but if want higher precision change n>1000; lmer because relatedness is continuous
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #if under- or over-dispersed, then p-value<0.05, but then check the dispersion parameter and try to determine what in the model could be the cause and address it there, also check for zero inflation
testZeroInflation(simulationOutput)  #compare expected vs observed zeros; a significant deviation (p<0.05) indicates zero inflation
testUniformity(simulationOutput)  #check for heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), which is indicated if dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(simulationOutput, form = fem$TrialsToReverseLast)  #plot the residuals against a predictor (repeat for each fixed effect)

# Data checking for male GLMM 5
simulationOutput <- simulateResiduals(fittedModel = glmer(RelatednessForMaxBond ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | ID) + (1 | Population), family = poisson,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #under- or over-dispersion is indicated if p<0.05; if so, check the dispersion parameter, try to determine what in the model could be the cause and address it there, and also check for zero inflation
testZeroInflation(simulationOutput)  #compares the expected vs. observed number of zeros; zero inflation (or a zero deficit) is indicated if p<0.05
testUniformity(simulationOutput)  #checks whether the scaled residuals follow a uniform distribution; deviations, e.g., heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), are indicated if the dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(mal$DominanceRank, simulationOutput$scaledResiduals)  #plot the residuals against each of the other predictors (when there is more than 1 fixed effect); repeat for the remaining fixed effects


# GLMM
library(MCMCglmm)
prior = list(R = list(V = 1, nu = 0), G = list(G1 = list(V = 1,
    nu = 0), G2 = list(V = 1, nu = 0)))  #one R element for the single residual variance and one G element per random term (Population, ID)


# Social network values differ by season? These models have only
# one random term (ID), so they take a prior with a single G element
priorSeason = list(R = list(V = 1, nu = 0), G = list(G1 = list(V = 1,
    nu = 0)))
season1 = MCMCglmm(MaxBondStrength ~ Season, random = ~ID, family = "poisson",
    data = ff, verbose = F, prior = priorSeason, nitt = 130000,
    thin = 10, burnin = 30000)
season2 = MCMCglmm(SumBondStrength ~ Season, random = ~ID, family = "poisson",
    data = ff, verbose = F, prior = priorSeason, nitt = 130000,
    thin = 10, burnin = 30000)
season3 = MCMCglmm(Degree ~ Season, random = ~ID, family = "poisson",
    data = ff, verbose = F, prior = priorSeason, nitt = 130000,
    thin = 10, burnin = 30000)
summary(season1)
summary(season2)
summary(season3)

# GLMM 1 with response variable = MaxBondStrength females
f1 <- MCMCglmm(MaxBondStrength ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = fem, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f1)
# autocorr(f1$Sol) #Did fixed effects converge?
# autocorr(f1$VCV) #Did random effects converge?

# males
f2 <- MCMCglmm(MaxBondStrength ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = mal, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f2)
# autocorr(f2$Sol) #Did fixed effects converge?
# autocorr(f2$VCV) #Did random effects converge?

# GLMM 2 with response variable = SumBondStrength female
f3 <- MCMCglmm(SumBondStrength ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = fem, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f3)
# autocorr(f3$Sol) #Did fixed effects converge?
# autocorr(f3$VCV) #Did random effects converge?

# male
f4 <- MCMCglmm(SumBondStrength ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = mal, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f4)
# autocorr(f4$Sol) #Did fixed effects converge?
# autocorr(f4$VCV) #Did random effects converge?

# GLMM 3 with response variable = Degree female
f5 <- MCMCglmm(Degree ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = fem, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f5)
# autocorr(f5$Sol) #Did fixed effects converge?
# autocorr(f5$VCV) #Did random effects converge?

# male
f6 <- MCMCglmm(Degree ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = mal, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f6)
# autocorr(f6$Sol) #Did fixed effects converge?
# autocorr(f6$VCV) #Did random effects converge?

# GLMM 4 with response variable = MaleSharesTerritory
# female
f7 <- MCMCglmm(MaleSharesTerritory ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = fem, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f7)
# autocorr(f7$Sol) #Did fixed effects converge?
# autocorr(f7$VCV) #Did random effects converge?

# male
f8 <- MCMCglmm(MaleSharesTerritory ~ TrialsToReverseLast + ExperimentalGroup +
    DominanceRank + Condition, random = ~Population + ID, family = "poisson",
    data = mal, verbose = F, prior = prior, nitt = 130000, thin = 10,
    burnin = 30000)
summary(f8)
# autocorr(f8$Sol) #Did fixed effects converge?
# autocorr(f8$VCV) #Did random effects converge?

# GLMM 5 with response variable = RelatednessForMaxBond
# female
f9 <- MCMCglmm(RelatednessForMaxBond ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = fem, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f9)
# autocorr(f9$Sol) #Did fixed effects converge?
# autocorr(f9$VCV) #Did random effects converge?

# male
f10 <- MCMCglmm(RelatednessForMaxBond ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = mal, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f10)
# autocorr(f10$Sol) #Did fixed effects converge?
# autocorr(f10$VCV) #Did random effects converge?
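
The commented-out autocorr() calls above give only a partial view of chain mixing. Below is a minimal sketch of a fuller convergence check that could be run on any of the fitted models (shown for f1); the helper name checkConvergence is ours, and the diagnostic functions come from the coda package, which MCMCglmm loads as a dependency.

library(coda)  #loaded with MCMCglmm; stated here for clarity
checkConvergence <- function(model) {
    plot(model$Sol)  #trace and density plots for the fixed effects
    plot(model$VCV)  #trace and density plots for the variance components
    print(effectiveSize(model$Sol))  #effective sample sizes; small values indicate strong autocorrelation
    print(effectiveSize(model$VCV))
    print(autocorr.diag(model$Sol))  #autocorrelation between stored samples
    print(autocorr.diag(model$VCV))
}
# checkConvergence(f1)  #repeat for f2 through f10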

P5: flexible = immigrants

Analysis: A GLMM will be conducted as in P1-P2.

ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# Separate the sexes
fem <- ff[ff$Sex == "f", ]
mal <- ff[ff$Sex == "m", ]

# Factor the random effect variables
fem$ID <- as.factor(fem$ID)
fem$Population <- as.factor(fem$Population)
mal$ID <- as.factor(mal$ID)
mal$Population <- as.factor(mal$Population)

# DATA CHECKING
library(DHARMa)
library(lme4)

# Data checking for female GLMM
simulationOutput <- simulateResiduals(fittedModel = glmer(ImmigrantProbability ~
    TrialsToReverseLast + (1 | Population), family = poisson,
    data = fem), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #under- or over-dispersion is indicated if p<0.05; if so, check the dispersion parameter, try to determine what in the model could be the cause and address it there, and also check for zero inflation
testZeroInflation(simulationOutput)  #compares the expected vs. observed number of zeros; zero inflation (or a zero deficit) is indicated if p<0.05
testUniformity(simulationOutput)  #checks whether the scaled residuals follow a uniform distribution; deviations, e.g., heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), are indicated if the dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(fem$TrialsToReverseLast, simulationOutput$scaledResiduals)  #plot the residuals against the predictor to check for remaining patterns (this model has only one fixed effect)

# Data checking for male GLMM
simulationOutput <- simulateResiduals(fittedModel = glmer(ImmigrantProbability ~
    TrialsToReverseLast + (1 | Population), family = poisson,
    data = mal), n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #under- or over-dispersion is indicated if p<0.05; if so, check the dispersion parameter, try to determine what in the model could be the cause and address it there, and also check for zero inflation
testZeroInflation(simulationOutput)  #compares the expected vs. observed number of zeros; zero inflation (or a zero deficit) is indicated if p<0.05
testUniformity(simulationOutput)  #checks whether the scaled residuals follow a uniform distribution; deviations, e.g., heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), are indicated if the dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(mal$TrialsToReverseLast, simulationOutput$scaledResiduals)  #plot the residuals against the predictor to check for remaining patterns (this model has only one fixed effect)


# GLMM
library(MCMCglmm)
prior = list(R = list(V = 1, nu = 0), G = list(G1 = list(V = 1,
    nu = 0)))  #one R element for the single residual variance and one G element for the single random term (Population)

# GLMM females
f1 <- MCMCglmm(ImmigrantProbability ~ TrialsToReverseLast, random = ~Population,
    family = "poisson", data = fem, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f1)
# autocorr(f1$Sol) #Did fixed effects converge?
# autocorr(f1$VCV) #Did random effects converge?

# males
f2 <- MCMCglmm(ImmigrantProbability ~ TrialsToReverseLast, random = ~Population,
    family = "poisson", data = mal, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f2)
# autocorr(f2$Sol) #Did fixed effects converge?
# autocorr(f2$VCV) #Did random effects converge?

P6: flexible = wider range of habitats

Analysis: A GLMM will be conducted as in P1-P2.

ff <- read.csv("/Users/corina/GTGR/data/data_flexforaging.csv",
    header = T, sep = ",", stringsAsFactors = F)

# Separate the sexes
fem <- ff[ff$Sex == "f", ]
mal <- ff[ff$Sex == "m", ]

# Factor the random effect variables
fem$ID <- as.factor(fem$ID)
fem$Population <- as.factor(fem$Population)
mal$ID <- as.factor(mal$ID)
mal$Population <- as.factor(mal$Population)

# DATA CHECKING
library(DHARMa)
library(lme4)

# Data checking for female GLMM
simulationOutput <- simulateResiduals(fittedModel = glmer(ShannonDiversityIndexHabitat ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | Population), family = poisson, data = fem),
    n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #under- or over-dispersion is indicated if p<0.05; if so, check the dispersion parameter, try to determine what in the model could be the cause and address it there, and also check for zero inflation
testZeroInflation(simulationOutput)  #compares the expected vs. observed number of zeros; zero inflation (or a zero deficit) is indicated if p<0.05
testUniformity(simulationOutput)  #checks whether the scaled residuals follow a uniform distribution; deviations, e.g., heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), are indicated if the dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(fem$DominanceRank, simulationOutput$scaledResiduals)  #plot the residuals against each of the other predictors (when there is more than 1 fixed effect); repeat for the remaining fixed effects

# Data checking for male GLMM
simulationOutput <- simulateResiduals(fittedModel = glmer(ShannonDiversityIndexHabitat ~
    TrialsToReverseLast + ExperimentalGroup + DominanceRank +
        Condition + (1 | Population), family = poisson, data = mal),
    n = 250)  #250 simulations, but if want higher precision change n>1000
simulationOutput$scaledResiduals  #Expect a flat distribution of the overall residuals, and uniformity in y direction if plotted against any predictor
testDispersion(simulationOutput)  #under- or over-dispersion is indicated if p<0.05; if so, check the dispersion parameter, try to determine what in the model could be the cause and address it there, and also check for zero inflation
testZeroInflation(simulationOutput)  #compares the expected vs. observed number of zeros; zero inflation (or a zero deficit) is indicated if p<0.05
testUniformity(simulationOutput)  #checks whether the scaled residuals follow a uniform distribution; deviations, e.g., heteroscedasticity ('a systematic dependency of the dispersion / variance on another variable in the model' Hartig, https://cran.r-project.org/web/packages/DHARMa/vignettes/DHARMa.html), are indicated if the dots aren't on the red line and p<0.05. Also...
plot(simulationOutput)  #...there should be no pattern in the data points in the right panel
plotResiduals(mal$DominanceRank, simulationOutput$scaledResiduals)  #plot the residuals against each of the other predictors (when there is more than 1 fixed effect); repeat for the remaining fixed effects

# GLMM
library(MCMCglmm)
prior = list(R = list(V = 1, nu = 0), G = list(G1 = list(V = 1,
    nu = 0), G2 = list(V = 1, nu = 0)))  #one R element for the single residual variance and one G element per random term (Population, ID)

# females
f1 <- MCMCglmm(ShannonDiversityIndexHabitat ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = fem, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f1)
# autocorr(f1$Sol) #Did fixed effects converge?
# autocorr(f1$VCV) #Did random effects converge?

# males
f2 <- MCMCglmm(ShannonDiversityIndexHabitat ~ TrialsToReverseLast +
    ExperimentalGroup + DominanceRank + Condition, random = ~Population +
    ID, family = "poisson", data = mal, verbose = F, prior = prior,
    nitt = 130000, thin = 10, burnin = 30000)
summary(f2)
# autocorr(f2$Sol) #Did fixed effects converge?
# autocorr(f2$VCV) #Did random effects converge?

This species is primarily found in urbanized environments; however, urban habitats contain many different substrates that could provide a variety of food items. Because we are interested in the flexibility of grackle foraging behaviors within the urban habitat, we have focused our habitat diversity measures on the substrates on which we are most likely to see individual variability in foraging behaviors and food types, if present. For example, cement, cafe, and dumpster substrates are all likely to contain human-provided food (either because people leave food out for wild animals or because wild animals are able to scrounge human foods), whereas grass, gravel, and other natural substrates such as trees are likely to contain non-human-provided prey items, including insects and small vertebrates. Using the Shannon diversity index to quantify the evenness of substrate use within urban habitats has been recommended by others in the field of urban ecology (Alberti, Botsford, and Cohen 2001; Tews et al. 2004).
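
To make this concrete, the per-individual substrate-use Shannon index could be computed roughly as follows. This is a minimal sketch: the vegan package, the hab data frame, and its ID and Substrate columns are placeholders we introduce here for illustration, not objects from the analyses above.

library(vegan)  #diversity() computes the Shannon index

# hab is assumed to hold one row per foraging observation, with the
# individual's ID and the substrate it foraged on (cement, cafe,
# dumpster, grass, gravel, tree, ...)
substrateCounts <- table(hab$ID, hab$Substrate)  #observations per substrate per individual
ShannonDiversityIndexHabitat <- diversity(substrateCounts, index = "shannon")  #one value per individual
head(ShannonDiversityIndexHabitat)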

P8: flexible = particular habitats

Analysis: We will examine the proportion of focal follows associated with each microhabitat per individual and relate this to each individual's flexibility score from its most recent reversal learning color tube experiment. This will allow us to determine whether the more flexible individuals (those faster to reverse) are associated with particular microhabitats more than the less flexible individuals.
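
A minimal sketch of this calculation follows; the follows and flex data frames and their columns are hypothetical placeholders for the focal-follow and reversal-learning data.

# Proportion of focal follows spent in each microhabitat per individual
followCounts <- table(follows$ID, follows$Microhabitat)
microhabitatProps <- prop.table(followCounts, margin = 1)  #each bird's proportions sum to 1

# Attach each bird's flexibility score (trials to reverse in its most
# recent reversal; fewer trials = more flexible)
propsLong <- as.data.frame(microhabitatProps)
names(propsLong) <- c("ID", "Microhabitat", "Proportion")
propsLong <- merge(propsLong, flex[, c("ID", "TrialsToReverseLast")], by = "ID")

# Plot proportion of follows against flexibility, one panel per microhabitat
library(lattice)
xyplot(Proportion ~ TrialsToReverseLast | Microhabitat, data = propsLong)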

F. ETHICS

This research is carried out in accordance with permits from the:

  1. US Fish and Wildlife Service (scientific collecting permit number MB76700A-0,1,2)
  2. US Geological Survey Bird Banding Laboratory (federal bird banding permit number 23872)
  3. Arizona Game and Fish Department (scientific collecting license number SP594338 [2017], SP606267 [2018], SP639866 [2019])
  4. Institutional Animal Care and Use Committee at Arizona State University (protocol number 17-1594R)
  5. University of Cambridge ethical review process (non-regulated use of animals in scientific procedures: zoo4/17)

G. AUTHOR CONTRIBUTIONS

Logan: Hypothesis development, study design, materials, data collection, data analysis and interpretation, write up, funding.

Lukas: Hypothesis development, study design, data analysis and interpretation, write up, revising/editing.

Bergeron: Data collection, data interpretation, revising/editing.

McCune: Hypothesis development, study design, data collection, data analysis, data interpretation, revising/editing.

H. FUNDING

This research is funded by the Department of Human Behavior, Ecology and Culture at the Max Planck Institute for Evolutionary Anthropology, and by a Leverhulme Early Career Research Fellowship to Logan.

I. CONFLICT OF INTEREST DISCLOSURE

We, the authors, declare that we have no financial conflicts of interest with the content of this article. Corina Logan and Dieter Lukas are Recommenders at PCI Ecology and Corina Logan is on the Managing Board at PCI Ecology.

J. ACKNOWLEDGEMENTS

We thank our Recommenders at PCI Ecology, Julia Astegiano and Esther Sebastián Gonzalez, and our reviewers, Esther Sebastián Gonzalez and Pizza Ka Yee Chow, who provided valuable feedback on previous versions of this preregistration; Dieter Lukas for help polishing the predictions; Ben Trumble for providing us with a wet lab at Arizona State University and Angela Bond for lab support; Melissa Wilson Sayres for sponsoring our affiliations at Arizona State University and lending lab equipment; Kevin Langergraber for serving as local PI on the ASU IACUC; Kristine Johnson for technical advice on great-tailed grackles; Jay Taylor for grackle scouting at Arizona State University; Arizona State University School of Life Sciences Department Animal Care and Technologies for providing space for our aviaries and for their excellent support of our daily activities; Julia Cissewski for tirelessly solving problems involving financial transactions and contracts; Richard McElreath for project support and helping develop the flexibility 4 estimate; two anonymous reviewers and the Recommender, Emanuel Fronhofer, at PCI Ecology for their useful feedback; and our research assistants: Aelin Mayer, Nancy Rodriguez, Brianna Thomas, Aldora Messinger, Elysia Mamola, Michael Guillen, Rita Barakat, Adriana Boderash, Olateju Ojekunle, August Sevchik, Justin Huynh, Jennifer Berens, Amanda Overholt, and Michael Pickett.

K. REFERENCES

Alberti, Marina, Erik Botsford, and Alex Cohen. 2001. “Quantifying the Urban Gradient: Linking Urban Planning and Ecology.” In Avian Ecology and Conservation in an Urbanizing World, 89–115. Springer.
Altmann, Jeanne. 1974. “Observational Study of Behavior: Sampling Methods.” Behaviour 49 (3-4): 227–66.
Bates, Douglas, Martin Mächler, Ben Bolker, and Steve Walker. 2015. “Fitting Linear Mixed-Effects Models Using lme4.” Journal of Statistical Software 67 (1): 1–48. https://doi.org/10.18637/jss.v067.i01.
Butts, C. 2016. “Sna: Tools for Social Network Analysis (Version r Package Version 2.4).”
Cairns, Sara J, and Steven J Schwager. 1987. “A Comparison of Association Indices.” Animal Behaviour 35 (5): 1454–69.
Chow, Pizza Ka Yee, Stephen EG Lea, and Lisa A Leaver. 2016. “How Practice Makes Perfect: The Role of Persistence, Flexibility and Learning in Problem-Solving Efficiency.” Animal Behaviour 112: 273–83. https://doi.org/10.1016/j.anbehav.2015.11.014.
Croft, Darren P, Richard James, and Jens Krause. 2008. Exploring Animal Social Networks. Princeton University Press.
Croft, Darren P, Joah R Madden, Daniel W Franks, and Richard James. 2011. “Hypothesis Testing in Animal Social Networks.” Trends in Ecology & Evolution 26 (10): 502–7.
Csardi, Gabor, Tamas Nepusz, and others. 2006. “The Igraph Software Package for Complex Network Research.” InterJournal, Complex Systems 1695 (5): 1–9.
Ducatez, Simon, Joanne Clavel, and Louis Lefebvre. 2015. “Ecological Generalism and Behavioural Innovation in Birds: Technical Intelligence or the Simple Incorporation of New Foods?” Journal of Animal Ecology 84 (1): 79–89.
Griffin, Andrea S, and David Guez. 2014. “Innovation and Problem Solving: A Review of Common Mechanisms.” Behavioural Processes 109: 121–34. https://doi.org/10.1016/j.beproc.2014.08.027.
Hadfield, JD. 2010. “MCMC Methods for Multi-Response Generalized Linear Mixed Models: The MCMCglmm r Package.” Journal of Statistical Software 33 (2): 1–22. https://doi.org/10.18637/jss.v033.i02.
———. 2014. “MCMCglmm Course Notes.” http://cran.r-project.org/web/packages/MCMCglmm/vignettes/CourseNotes.pdf.
Hartig, Florian. 2019. DHARMa: Residual Diagnostics for Hierarchical (Multi-Level / Mixed) Regression Models. http://florianhartig.github.io/DHARMa/.
Hutcheon, Jennifer A, Arnaud Chiolero, and James A Hanley. 2010. “Random Measurement Error and Regression Dilution Bias.” Bmj 340: c2289. https://doi.org/10.1136/bmj.c2289.
Johnson, Kristine, Emily DuVal, Megan Kielt, and Colin Hughes. 2000. “Male Mating Strategies and the Mating System of Great-Tailed Grackles.” Behavioral Ecology 11 (2): 132–41.
Lefebvre, Louis, Patrick Whittle, Evan Lascaris, and Adam Finkelstein. 1997. “Feeding Innovations and Forebrain Size in Birds.” Animal Behaviour 53 (3): 549–60. https://doi.org/10.1006/anbe.1996.0330.
Logan, C J. 2016. “Behavioral Flexibility in an Invasive Bird Is Independent of Other Behaviors.” PeerJ 4: e2215.
McElreath, Richard. 2016. Statistical Rethinking: A Bayesian Course with Examples in r and Stan. CRC Press. https://doi.org/10.1201/9781315372495.
Mikhalevich, Irina, Russell Powell, and Corina Logan. 2017. “Is Behavioural Flexibility Evidence of Cognitive Complexity? How Evolution Can Inform Comparative Cognition.” Interface Focus 7 (3): 20160121. https://doi.org/10.1098/rsfs.2016.0121.
Overington, Sarah E, Julie Morand-Ferron, Neeltje J Boogert, and Louis Lefebvre. 2009. “Technical Innovations Drive the Relationship Between Innovativeness and Residual Brain Size in Birds.” Animal Behaviour 78 (4): 1001–10.
Peer, Brian D. 2011. “Invasion of the Emperor’s Grackle.” Ardeola 58 (2): 405–9.
R Core Team. 2017. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org.
Selander, Robert K, and Donald R Giller. 1961. “Analysis of Sympatry of Great-Tailed and Boat-Tailed Grackles.” The Condor 63 (1): 29–86.
Sol, Daniel, Richard P Duncan, Tim M Blackburn, Phillip Cassey, and Louis Lefebvre. 2005. “Big Brains, Enhanced Cognition, and Response of Birds to Novel Environments.” Proceedings of the National Academy of Sciences of the United States of America 102 (15): 5460–65. https://doi.org/10.1073/pnas.0408145102.
Sol, Daniel, and Louis Lefebvre. 2000. “Behavioural Flexibility Predicts Invasion Success in Birds Introduced to New Zealand.” Oikos 90 (3): 599–605. https://doi.org/10.1034/j.1600-0706.2000.900317.x.
Sol, Daniel, Sarah Timmermans, and Louis Lefebvre. 2002. “Behavioural Flexibility and Invasion Success in Birds.” Animal Behaviour 63 (3): 495–502.
Tews, Jörg, Ulrich Brose, Volker Grimm, Katja Tielbörger, MC Wichmann, Monika Schwager, and Florian Jeltsch. 2004. “Animal Species Diversity Driven by Habitat Heterogeneity/Diversity: The Importance of Keystone Structures.” Journal of Biogeography 31 (1): 79–92.
Thrasher, Derrick J, Bronwyn G Butcher, Leonardo Campagna, Michael S Webster, and Irby J Lovette. 2018. “Double-Digest RAD Sequencing Outperforms Microsatellite Loci at Assigning Paternity and Estimating Relatedness: A Proof of Concept in a Highly Promiscuous Bird.” Molecular Ecology Resources 18 (5): 953–65.
Wehtje, Walter. 2003. “The Range Expansion of the Great-Tailed Grackle (Quiscalus Mexicanus Gmelin) in North America Since 1880.” Journal of Biogeography 30 (10): 1593–1607. https://doi.org/10.1046/j.1365-2699.2003.00970.x.
Wey, Tina, Daniel T Blumstein, Weiwei Shen, and Ferenc Jordán. 2008. “Social Network Analysis of Animal Behaviour: A Promising Tool for the Study of Sociality.” Animal Behaviour 75 (2): 333–44.
Whitehead, Hal. 2008. Analyzing Animal Societies: Quantitative Methods for Vertebrate Social Analysis. University of Chicago Press.
Whitehead, Hal, Lars Bejder, and C Andrea Ottensmeyer. 2005. “Testing Association Patterns: Issues Arising and Extensions.” Animal Behaviour 69 (5): e1.