Tuesday, November 10, 2015

Culling Koalas for Conservation

Guest post by Stefanie Thibert, who is currently enrolled in the Professional Masters of Environmental Science program at the University of Toronto-Scarborough


Euthanizing diseased koalas may be the most effective management strategy to save koalas from extinction in Queensland. A recent study published in the Journal of Wildlife Diseases suggests that if 10% of terminally diseased and sterile koalas were culled, while other infected koalas were treated with antibiotics, chlamydial infections could be completely eliminated and population sizes could increase within four years.
The beloved koala relaxing in a eucalyptus tree (Source: http://www.onekind.org/be_inspired/animals_a_z/koala/)

Although koalas are under pressure from habitat degradation, dog attacks and road accidents, disease burden is the largest threat to their populations. An estimated 50% of the current koala population in South-East Queensland is infected with Chlamydia spp. The sexually transmitted disease causes lesions in the genitals and eyes, leading to blindness, infertility, and ultimately death. Rhodes et al. (2011) suggest that reversing the observed population decline in Queensland koalas would require entirely eliminating deaths from cars and dogs, completely reforesting habitat, or reducing deaths caused by Chlamydia by 60%. It is clear that the best conservation tool is to reduce the prevalence of chlamydial infection.

In the study, Wilson et al. (2015) examined the potential impact of euthanizing koalas infected with Chlamydia. As shown in Figure 1, computer simulation models were used to project koala population sizes under four separate intervention programs: “no intervention”, “cull only”, “treat only”, and “cull or treat”. In the “cull or treat” program, sterile and terminally diseased koalas were euthanized, while infected koalas that were neither sterile nor terminal were treated with antibiotics. The authors concluded that “cull or treat” is the most successful intervention program for increasing long-term population growth and eliminating chlamydial infections.
The projected numbers of koalas in the Queensland population under different intervention programs. (From Wilson et al. 2015)
Without intervention, it is estimated that merely 185 koalas will persist in 2030. Under both the “cull only” and “treat only” interventions, it would take seven years before koala numbers exceeded those projected without intervention. Under the “cull or treat” program, the population was projected to overtake the no-intervention population after just four years. The population size in 2030 is also greatest under the “cull or treat” intervention. The increase in koala numbers under the “cull or treat” strategy is due to the considerable decrease in the prevalence of Chlamydia.
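For readers who like to see the moving parts, the logic of these projections is easy to caricature in code. The sketch below (Python) is emphatically not Wilson et al.'s model: every rate is an invented placeholder, and the intervention rules simply encode the paper's qualitative logic of culling untreatable animals while treating the rest.

```python
# Toy discrete-time sketch of the four intervention programs compared by
# Wilson et al. (2015). Every rate below is an invented placeholder, not a
# published estimate; only the qualitative logic follows the paper.

YEARS = 15
BIRTH, DEATH = 0.25, 0.10     # assumed healthy birth and death rates
DISEASE_DEATH = 0.15          # assumed extra mortality from chlamydia
TRANSMISSION = 0.60           # assumed infection pressure (frequency-dependent)
RATE = 0.10                   # intervene on 10% of infected koalas per year
TREATABLE = 0.5               # assumed fraction of infected koalas not terminal/sterile

def project(strategy, healthy=500.0, infected=500.0):
    """Project total population size under one intervention strategy."""
    for _ in range(YEARS):
        total = healthy + infected
        new_cases = TRANSMISSION * healthy * infected / total if total else 0.0
        handled = 0.0 if strategy == "no intervention" else RATE * infected
        if strategy == "cull only":            # remove all handled animals
            removed, cured = handled, 0.0
        elif strategy == "treat only":         # antibiotics only help the treatable
            removed = cured = TREATABLE * handled
        elif strategy == "cull or treat":      # treat the treatable, cull the rest
            removed, cured = handled, TREATABLE * handled
        else:
            removed = cured = 0.0
        healthy += BIRTH * healthy - DEATH * healthy - new_cases + cured
        infected += new_cases - (DEATH + DISEASE_DEATH) * infected - removed
        healthy, infected = max(healthy, 0.0), max(infected, 0.0)
    return healthy + infected

for s in ("no intervention", "cull only", "treat only", "cull or treat"):
    print(f"{s:15s} -> {project(s):8.0f} koalas after {YEARS} years")
```

Even in a toy like this, the ranking falls out of the assumptions: removing terminally diseased but still-infectious animals is what shrinks the epidemic fastest.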
As expected, the proposal received considerable attention and public scrutiny. Some argue that it is inhumane, while others suggest alternative management strategies. But when it comes down to it, the science is clear: euthanizing can be done in a humane way, and it is the most effective method for conserving the species. The only real alternative to culling is treatment with antibiotics, which is costly, requires an immense amount of monitoring, and has been shown to take much longer to eliminate the disease and increase population sizes.
The question we must ask ourselves is: we cull other species, so why not koalas? In the United States, for instance, the culling of four million cattle successfully prevented bovine tuberculosis from spreading to humans. Yet even when based on sound scientific research, culling has always been dismissed as a management option for the iconic Australian marsupial. In 1997, culling was suggested as a method to manage the overabundant koala population on Kangaroo Island, but sterilization and relocation were used instead. It is remarkable that a significantly more expensive and less effective program was chosen because the public could not bear the thought of killing the adorable and innocent koala.
Managing koala populations is clearly a case in which science intersects with emotion. However, it is essential that we put our emotions aside and make a decision based on scientific evidence. Let us remember that the study only suggests culling or treating 10% of the population each year, equivalent to approximately 140 koalas. It is also important to improve the communication of science to the public: it needs to be made abundantly clear that without culling, koala populations will continue to decline.


To read the full article visit: http://www.bioone.org/doi/full/10.7589/2014-12-278 

References:

Oliver, M. (2015, October 20). Proposal to euthanise koalas with chlamydia divides experts. The Guardian. Retrieved from http://www.theguardian.com/world/2015/oct/20/proposal-to-euthanise-koalas-with-chlamydia-divides-experts

Olmstead, A.L., & Rhode, P.W. (2004). An impossible undertaking: The eradication of bovine tuberculosis in the United States. Journal of Economic History, 64, 734-772.

Rhodes, J.R., Ng, C.F., de Villiers, D.L., Preece, H.J., McAlpine, C.A., & Possingham, H.P. (2011). Using integrated population modeling to quantify the implications of multiple threatening processes for a rapidly declining population. Biological Conservation, 144, 1081–1088.

Wilson, D., Craig, A., Hanger, J., & Timms, P. (2015). The paradox of euthanizing koalas to save populations from elimination. Journal of Wildlife Diseases, 51, 833-842.


Friday, November 6, 2015

Science in China – feeding the juggernaut*

For those of us involved in scientific research, especially those who edit journals, review manuscripts or read published papers, it is obvious that there has been a fundamental transformation in the scientific output coming from China. Both the number and the quality of papers have increased drastically over the past 5-10 years. China is poised to become a global leader not only in scientific output, but also in the ideas, hypotheses and theories that shape modern scientific investigation.

I have been living in China for a couple of months now (and will be here for 7 months more), working in a laboratory at Sun Yat-sen University in Guangzhou, and I have been trying to identify the reasons for this shift in scientific culture. The evidence suggests that China will soon be a science juggernaut (or already is), and there are clear reasons why. Here is why I believe China has become a science leader, with lessons for other national systems.

The reasons for China’s science success:

1.      University culture.

China is a country with a long history of scholarly endeavours: we can look to the philosophical traditions of Confucius 2500 years ago as a prime example of the respect and admiration accorded to scholarship. Though modern universities are younger in China than elsewhere (the oldest being about 130 years old), China has invested heavily in building universities throughout the country. In the mid-1990s, the government built 100 new universities, and China now graduates more than 6 million students from undergraduate programs every year.
Confucius (551-479 BC), the grand-pappy of all Chinese scholars

This rapid increase in the number of universities means that many are very modern, with state-of-the-art facilities. This availability of infrastructure has fostered the growth of new colleges, institutes and departments, meaning that new faculty and staff have been hired. Many departments I have visited have large numbers of young Assistant and Associate Professors, many trained elsewhere, who approach scientific problems with energy and new ideas.
My new digs


2.      Funding

From my conversations with various scientists, labs are typically very well funded. With the expansion in the number of universities, there seems to have been an expansion in the funds available for research projects. Professors need to write a fair number of grant proposals to have all of their projects funded, but success rates seem relatively high, with larger grants available to more senior researchers. This is in stark contrast to other countries, where funding is inadequate. In the USA, National Science Foundation funding rates are often below 10% (fewer than 1 in 10 proposals are funded). This abysmal funding rate means that good, well-trained researchers either are not able to realize their ideas or spend too much of their time applying for funding. In China, new researchers are given opportunities to succeed.


3.      Collaboration

Chinese researchers are very collaborative. There are several national-level ecological research networks (e.g., dynamic forest plots) that involve researchers from many institutions, as well as international collaborative projects (e.g., BEF China). In my visits to different universities, Chinese researchers have been very eager to discuss shared research interests and explore the potential for collaboration. Further, there are a number of funding schemes to get students, postdocs and junior professors out of China and into foreign labs, which promotes international collaboration. Collaborations provide the creative capital for new ideas, and allow for larger, more expansive research projects.

4.      Environmental problems

It is safe to say that the environment in China has been greatly impacted by economic growth and development over the past 30 years. This degradation of the environment has made ecological science extremely relevant to the management of natural resources and dealing with contaminated soil, air and water. Ecological research appears to have a relatively high profile in China and is well supported by government funding and agencies.

5.      Laboratory culture

In my lab in Canada, I give my students a great deal of freedom to pursue their own ideas and much latitude in how they do it. Some students say that they work best at night, others in spurts, and some just like to have four-day weekends every week. While Chinese students seem equally able to pursue their own ideas and interests, they tend to face stricter requirements about how they do their work. Students are often expected to be in the lab from 9-5 (at least), often six days a week. This expectation is not seen as demanding or unreasonable (as it probably would be in the US or Canada), but rather in line with general expectations for success (see next point).

Labs are larger in China. The lab I work in has about 25 Masters students and a further 6 PhD students, plus postdocs and technicians. Further, labs typically have a head professor and several Assistant or Associate Professors. When everyone is there every day, a vibe and culture emerges that is not possible when everyone is off doing their own thing.

The lab I'm working in – "the intellectual factory"

Another major difference is that there is a clear hierarchy of respect. Masters students are expected to respect and listen to PhD students, PhD students respect postdocs and so on up to the head professor. This respect is fundamental to interactions among people. As it has been described to me, the Professor is not like your friend, but more like a father that you should listen to.

What’s clear is that lab culture and expectations are built around the success of individual people and of the overall lab. And success is very important – see next point.


6.      Researcher/student expectations

I left the expectations on researchers for last because they need a longer and more nuanced discussion. My own view of strict expectations has changed since coming to China; I can now see the motivating effect they can have.

For Chinese researchers, it is safe to say that publications are gold. Publishing papers, and especially the type of journal those papers appear in, determines career success in a direct way. A Masters student is required to publish one paper, which can be in a local Chinese journal. A PhD student is required to publish two papers in international journals. PhD students who receive a 2-year fellowship to travel to foreign labs are required to publish a paper from that work as well. For researchers to get a professor position, they must have a certain number of publications in high-impact international journals (e.g., Impact Factor above 5).

Professors are not immune from these expectations. Junior professors are not tenured, and cannot be until they qualify for the next tier, so they need to publish constantly. To get a permanent position as a full professor or group leader, they need a certain number of high-impact papers. For funding applications, their publication records are quantified (number of papers and impact factors of journals) and must surpass some threshold.

Of course, in any country your publication record is the most important component of your success as a researcher, but in China the expectations are explicitly stated.

While there are pros and cons to such a reward-based system, and the pressure can certainly be overwhelming, I’ve witnessed the results of this system. Students are extremely motivated and have a clear idea of what it means to be successful. To get two publications in a four-year PhD requires a lot of focus and hard work; there is no time for drifting or procrastinating.

So why has Chinese science been so successful? Because a number of factors have coalesced around, and support, a generally high demand for success. Regardless of the institutional and funding resources available, this success is only truly realized because of researchers' desire to exceed strict expectations. And they are doing so wonderfully.

*over the next several months I will write a series of posts on science and the environment in China

Monday, November 2, 2015

The Toronto Salmon Run

Guest Post by Sara Bowman, currently enrolled in the Professional Masters of Environmental Science program at the University of Toronto-Scarborough


Toronto has been called a lot of things, but I think my favourite is “A City Within a Park”. Between High Park, the Rouge, and countless other parks based around our river systems, there are so many opportunities for people to connect with nature and forget they live in a city of 2.6 million people. Despite my frequent excursions into the parks of Toronto, I still often see something new that spurs a whole whack of questions and excitement about the area I call home.

Case in point: the salmon run! Cycling to work on October 13th, I was lucky enough to witness my very first salmon run along the Don River, between Sheppard and Finch. You couldn’t help but notice the nearly two-foot-long fish struggling northward against the current, especially when a few individuals would have a violent encounter and then swim speedily away. I whipped out my cell phone and took as many videos and pictures as I could without being late for work. All through my shift, questions cycled through my head. Where in the river do the fish spawn? How many types of salmon are in Lake Ontario? Where did they come from? How are they faring from a conservation perspective? In no particular order, here are some answers I found to these questions!

Photo Credit: Tony Bock, The Toronto Star
First of all – I think my salmon were Chinook, the largest of the Pacific salmons[1]. Chinook Salmon were intentionally introduced to Lake Ontario sometime in the 1960s (Coho Salmon, another Pacific salmon species, was introduced around the same time), mainly for sport fishing and as a bio-control for non-native fishes[1]. Their introduction was also important for essentially replacing the native Atlantic Salmon and Lake Trout, which were the top predators[1]. Atlantic Salmon were extirpated from Lake Ontario in the late 1800s due to fishing pressures, and today programs like Bring Back The Salmon are undertaking re-introduction efforts, along with habitat restoration and public outreach, so that extirpation doesn’t happen again[2].

Although thousands of Chinook are stocked in Lake Ontario every year, it is believed that natural reproduction occurs and that they are well on their way to becoming naturalized[1]. In an ocean system, Chinook Salmon migrate from the Pacific Ocean up the streams where they were born to mate and lay eggs (spawn). Once they have spawned, they die, unlike Atlantic Salmon, which make the trip back down to the ocean after spawning[3]. The adult female chooses a site to make her “redd” (essentially a nest for the fish eggs) based on the water velocity and depth, and on the composition of the substrate, which should be gravel[3]. At first I was confused about how the fish managed to get so large in just a year, but it turns out that once they hatch, after 3-5 months, they can spend up to 2 years in the streams, where they undergo certain changes to prepare them for salt-water life[3]. Once they are back in the Pacific, they stay there to feed and grow for up to six years[3]!

Lake Ontario is home to seven species of fish in the family Salmonidae, of which only three are native: the Atlantic Salmon, the Lake Trout, and the Brook Trout[1]. The Brown Trout, Chinook Salmon, Rainbow Trout, and Coho Salmon were all introduced[1]. My first thought, and this may be yours too, is how Atlantic Salmon could be considered native to Lake Ontario – after all, “Atlantic” is in their name, and the distance between the Atlantic and Lake Ontario is pretty far, even for a determined migrating fish. So how did the fish get into our lakes? The Ice Age. The last one ended about 12,000 years ago, when Toronto was under a kilometre or two of ice. When the glaciers retreated northward, basins were carved into the land and filled with meltwater, and because of all the extra water from the ice, the St. Lawrence connection between the lake and the ocean was stronger[4]. Because the Atlantic Salmon had some freshwater adaptations for spawning, it was able to naturalize to its new, all-freshwater environment[1].

National Oceanic and Atmospheric Administration, 1999

As the 2012 Fishes of Toronto report explains, as settlement around Lake Ontario and its streams increased in the 1800s and 1900s, river temperatures rose, erosion increased, pollution from sewage increased, and physical structures blocking migration, like dams, were built. This ultimately resulted in the local demise of the species in Lake Ontario. Luckily, as I mentioned above, restoration efforts are under way to restore Atlantic Salmon populations. I wondered whether there might be detrimental effects on any of the salmonid populations when or if Atlantic Salmon make a comeback, but a 2012 study in Ecology of Freshwater Fish by Jessica Van Zwol and others found that a mix of Atlantic Salmon, Brown Trout and Rainbow Trout in stream breeding grounds did not significantly impact productivity[5]. Lake Trout is another Lake Ontario native that suffered major population declines. Some restoration efforts began in the 1970s, but today the population has to be maintained by hatchery-reared fish – the amount of natural reproduction occurring is not enough to prevent the species from extirpation[2].

What can we do to ensure the future of these top open-water predators in Lake Ontario? For starters, we can be more conscious of what we put down our drains – it ends up in the rivers and can pollute them. Be aware of proper chemical disposal. You can join tree-planting programs along riverbanks to help prevent erosion. You can even help with salmon hatchery programs and habitat restoration, giving populations a boost so that they can maintain their ecological roles and be around for fishers for generations to come.

Thanks for reading!!

References

1. Fishes of Toronto: A Guide to Their Remarkable World. City of Toronto, 2012. URL: https://www1.toronto.ca/City Of Toronto/Toronto Water/Files/pdf/F/Fishes of TO_PRINT_Feb23%5B1%5D.pdf
2. Lake Ontario Atlantic Salmon Restoration Program. Bring Back the Salmon Lake Ontario. 2013. URL:  http://www.bringbackthesalmon.ca/?page_id=12 
3. Chinook Salmon. NOAA Fisheries. Updated May 14, 2015. URL: http://www.nmfs.noaa.gov/pr/species/fish/chinook-salmon.html 
4. About Our Great Lakes: Background. National Oceanic and Atmospheric Administration [U.S. Army Corps of Engineers and the Great Lakes Commission]. Published 1999. URL: http://www.glerl.noaa.gov/pr/ourlakes/background.html
5. Van Zwol, J., Neff, B., Wilson, C. 2012. The effect of competition among three salmonids on dominance and growth during the juvenile life stage. Ecology of Freshwater Fish. 21: 533-540. Accessed online: http://publish.uwo.ca/~bneff/papers/Van Zwol et al_Salmonid Dominance.pdf

Wednesday, October 21, 2015

Scientists + Communication = ??

An academic is expected to be a jack of many trades – handling research, teaching, mentorship, administration, committee work, reviewing, grant-writing, and editorial duties. Science communication is increasingly being added to that list as well. Outreach, public engagement and science communication are all terms thrown around (the 'Broader Impacts' section of many NSF grants, for example, includes the possibility to "Broaden dissemination to enhance scientific and technological understanding"). Sometimes this can mean communication between academics (conferences, seminars, blogs like this one), but often it means communication with the general public. Statistics about low science literacy at least partially motivate this. For example, “Between 29% and 57% of Americans responded correctly to various questions measuring the concepts of scientific experiment and controlling variables. Only 12% of Americans responded correctly to all the questions on this topic, and nearly 20% did not respond correctly to any of them” (http://www.nsf.gov/statistics/seind14/index.cfm/chapter-7/c7s2.htm).

Clearly improving scientific communication is a worthy goal. But at times it feels like a token addition to an application, one that is outsourced to scientists without providing the necessary resources or training. This is a problem: if we truly value scientific communication, the focus should be on doing it in a thoughtful manner, rather than as an afterthought. I say this firstly because communicating complex ideas, some of which require specialized terms and background knowledge, is difficult. The IPCC summaries, meant to be accessible to lay readers, were recently reported to be incredibly inaccessible to the average reader (and getting worse over time!). Their Flesch reading ease scores were lower than those of Einstein’s seminal papers, and certainly far lower than most popular science magazines. Expert academics, already stretched between many skills, may not always be the best communicators of their own work.

Secondly, even when done well, it should be recognized that the audience for much science communication is a minority of all media consumers – the ‘science attentive’ or ‘science enthusiast’ portion of the public. Popular approaches to communication often preach to the choir. And even within this group, some topics naturally draw more interest or are innately more accessible. Your stochastic models will inherently be more difficult to excite your grandmother about than your research on the extinction of a charismatic furry animal. Not every topic is going to interest a general audience, or even a science-inclined audience, and that should be okay.

So what should our science communication goals be, as scientists and as a society? There is an entire literature on this topic (the science of science communication, so to speak), and it provides insight into what works and what is needed. However, “…despite notable new directions, many communication efforts continue to be based on ad-hoc, intuition-driven approaches, paying little attention to several decades of interdisciplinary research on what makes for effective public engagement.”

One approach supported by this literature follows four steps:

1) Identify the science most relevant to the decisions that people face;
2) Determine what people already know;
3) Design communications to fill the critical gaps (between what people know and need to know);
4) Evaluate the adequacy of those communications.


This approach inherently includes human values (what do people want or need to know), rather than being science-centric. In addition, to increase the science-enthusiast fraction of the public, education and communication aimed at youth should be emphasized.

The good news is that science is respected, even when not always understood or communicated well. When asked to evaluate various professions, nearly 70% of Americans said that scientists “contribute a lot” to society (compared to 21% for business executives), and scientists are typically excited about interacting with the public. But it seems a poor use of time and money to simply expect academics to become experts on science communication without offering training and interdisciplinary relationships. So, for example, in the broader impacts section of a GRFP, maybe NSF should value taking part in a program (led by science communication experts) on how to communicate with the public more highly than giving a one-time talk to 30 high school students. Some institutions provide more resources to this end than others, but the collaborative and interdisciplinary nature of science communication should receive far more emphasis. And the science of science communication should be a focus – data-driven approaches are undeniably more valuable.

None of this is to say that you shouldn't keep perfecting your answer for when the person beside you on an airplane asks what you do though :-)

Tuesday, October 6, 2015

Does context alter the dilution effect?

Understanding disease and parasites in a community context is an increasingly popular approach, and one that has benefited both disease research and ecological research. In communities, disease outbreaks can reduce host populations, which will in turn alter species' interactions and change community composition. Community interactions can also alter disease outcomes – for example, decreases in diversity can increase disease risk for vulnerable hosts, a phenomenon known as the dilution effect. In a high-diversity system, a mosquito may bite individuals from multiple resistant species as well as those from a focal host, potentially reducing the frequency of focal host-parasite contact. Hence the dilution effect may be a potential benefit of biodiversity, and multiple recent studies provide evidence for its existence.
Frogs in California killed by the chytrid fungus (source: National Geographic News)

Not all recent studies support this diversity-disease risk relationship, however, and it is not clear whether the dilution effect might depend on spatial scale, the definition of disease risk used, or perhaps the system of study. A recent paper in Ecology Letters from Alexander Strauss et al. does an excellent job of deconstructing the assumptions and implicit models behind the dilution effect and exploring whether context dependence might explain some of the variation in published results. The authors develop theoretical models capturing hypothesized mechanisms, and then use these to predict the outcomes of mesocosm experiments.

Suggested mechanisms behind the dilution effect include 1) diluter species (i.e. not the focal host) reducing parasite encounters for focal hosts at little or no risk to themselves (they are resistant); and 2) diluters competing with the focal host for resources or space, reducing the host population and in turn reducing density-dependent disease risk. But if these are the mechanisms, there are a number of corollaries that should not be ignored. For example, what if the diluter species is the poorer competitor, so that competition reduces diluter populations? What if diluter species aren't completely resistant to disease and become susceptible at large population sizes? The cost/benefit analysis of having additional species present may differ depending on any number of factors in a system.

The authors focus on a relatively simple system – a host species Daphnia dentifera, a virulent fungus Metschnikowia bicuspidata, and a competitor species Ceriodaphnia sp. Observations suggest that epidemics in the Daphnia species may be smaller where the second species occurs – Ceriodaphnia removes spores when filter feeding and also competes for food. By measuring a variety of traits, the authors could estimate R* and R0 values – roughly, low R* values indicate strong competitors, and high R0 values indicate genotypes with high disease transmission rates. Context dependence is introduced by considering three different genotypes of the Daphnia: these genotypes varied in R* and R0 values, allowing the authors to test whether changing competitive ability and disease transmission in the Daphnia might alter the strength, or even the presence, of a dilution effect. Model predictions were then tested directly against matching mesocosm experiments.
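To make those moving parts concrete, here is a bare-bones sketch (Python) of this kind of system: susceptible and infected focal hosts, free-living spores, and a diluter that both competes for the shared resource and filter-feeds on spores. The structure loosely mirrors the mechanisms described above, but all parameter values are hypothetical rather than Strauss et al.'s estimates.

```python
# Bare-bones Euler-stepped sketch of a dilution-effect model in the spirit
# of Strauss et al.: susceptible hosts (S), infected hosts (I), free-living
# spores (Z), and a diluter (C) that competes for resources and removes
# spores while filter feeding. All parameters are hypothetical.

def prevalence(with_diluter, days=120, dt=0.05):
    S, I, Z = 10.0, 1.0, 50.0
    C = 5.0 if with_diluter else 0.0
    r, K, alpha = 0.5, 100.0, 0.8   # host growth, carrying capacity, competition
    beta = 0.002                    # transmission per spore contact
    v, sigma = 0.1, 10.0            # infected death rate; spores released per death
    m, f = 0.05, 0.05               # background spore loss; filter-feeding removal
    rc = 0.4                        # diluter growth rate
    for _ in range(int(days / dt)):
        infection = beta * S * Z
        dS = r * S * (1 - (S + I + alpha * C) / K) - infection
        dI = infection - v * I
        dZ = sigma * v * I - m * Z - f * C * Z
        dC = rc * C * (1 - (C + alpha * (S + I)) / K)
        S = max(S + dS * dt, 0.0)
        I = max(I + dI * dt, 0.0)
        Z = max(Z + dZ * dt, 0.0)
        C = max(C + dC * dt, 0.0)
    return I / (S + I + 1e-9)       # final infection prevalence

print("prevalence, host alone:   ", round(prevalence(False), 3))
print("prevalence, with diluter: ", round(prevalence(True), 3))
```

In this caricature the diluter helps by construction (it removes spores and suppresses the host population); the interesting part of the paper is precisely that changing host genotype can flip that outcome.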

The results show clear evidence of context dependence in the dilution effect (and rather nice matches between model expectations and mesocosm data). Three possible scenarios are compared, which differ in the Daphnia host genotype and its competitive and transmission characteristics:
  1. Dilution failure: the result of a host genotype that is a strong competitor, and a large epidemic (low R*, high R0). 
  2. Dilution success: the result of a host that is a weak competitor and a moderate epidemic (host has high R*, moderate R0). 
  3. Dilution irrelevance: the outcome of a host that is a weak competitor, and a small epidemic (high R*, low R0). 

From Strauss et al. 2015. The y-axis shows the percent of the host population infected; solid lines show disease prevalence without the diluter, dashed lines show host infection when the diluter is present.

Of course, all models are simplifications of the real world, and it is possible that in more diverse systems the dilution effect might be more difficult to predict. However, as competition is a component of most natural systems, its inclusion may better inform models of disease risk. Other models for other systems might suggest different outcomes, but this one provides a robust jumping off point for future research into the dilution effect.

Friday, September 18, 2015

Post at Oikos + why do papers take so long?

This is mostly a shameless cross-post of a blog post I wrote for the Oikos blog. It's about an upcoming paper in Oikos that asks whether beta-diversity null deviation measures, which originated in papers like Chase 2010 and Chase et al. 2011, can be interpreted and applied as a measure of community assembly. These measures were originally used as null models for beta-diversity (i.e. to control for the effects of alpha diversity, etc.), but increasingly in the literature they are used to indicate niche vs. neutral assembly processes. For anyone interested, the post is at the Oikos blog: http://www.oikosjournal.org/blog/v-diversity-metacommunities.
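For anyone who hasn't run into these measures, the basic recipe is short: compute observed beta-diversity, rebuild the community at random under a null algorithm that preserves some marginal structure, and take the difference. The sketch below (Python, using an individual-based reshuffle and mean Jaccard dissimilarity) is just one of many published variants, not the specific method evaluated in the paper.

```python
import numpy as np

# Bare-bones beta-diversity null deviation in the spirit of Chase et al.
# (2011): compare observed beta-diversity to a null distribution generated
# by randomly reassigning individuals to sites while preserving each
# species' total abundance. Illustrative only; published implementations
# differ in both the dissimilarity metric and the null algorithm.

rng = np.random.default_rng(0)
comm = rng.poisson(1.0, size=(10, 30))          # sites x species abundance matrix

def mean_jaccard(m):
    """Mean pairwise Jaccard dissimilarity on presence/absence."""
    pa = m > 0
    dissim = []
    for i in range(len(pa)):
        for j in range(i + 1, len(pa)):
            union = np.sum(pa[i] | pa[j])
            shared = np.sum(pa[i] & pa[j])
            dissim.append(1 - shared / union if union else 0.0)
    return np.mean(dissim)

observed = mean_jaccard(comm)

nulls = []
for _ in range(999):
    null = np.zeros_like(comm)
    for sp, total in enumerate(comm.sum(axis=0)):
        sites = rng.integers(0, comm.shape[0], size=total)   # reshuffle individuals
        np.add.at(null[:, sp], sites, 1)
    nulls.append(mean_jaccard(null))

print("null deviation:", round(observed - np.mean(nulls), 3))
```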

What I found most amusing, or sad, depending on your perspective, was that I wrote a blog post about some of the original conversations I had with co-authors on this subject. I looked it up the other day and was shocked that the post was from 2013 (http://evol-eco.blogspot.com/2013/11/community-structure-what-are-we-missing.html). It's amazing how long the process from idea to final form actually takes. (No one phase took that long either – just idea + writing + coauthor edits + rewriting + submit + revise + coauthors + revise = long time...)


Wednesday, September 9, 2015

Predictable predator-prey scaling – an ecological law?

Some ecologists react with skepticism to the idea of true laws in ecology. So when anything provides a strong and broad relationship between ecological variables, the response is often some combination of surprise, excitement, and disbelief. It's not unexpected, then, that a new paper in Science – The predator-prey power law: Biomass scaling across terrestrial and aquatic biomes – has received that reaction, and a fair amount of media coverage too.
Figure 1 from Hatton et al. 2015. "Predators include lion, hyena, and other large carnivores (20 to 140 kg), which compete for large herbivore prey from dik-dik to buffalo (5 to 500 kg). Each point is a protected area, across which the biomass pyramid becomes three times more bottom-heavy at higher biomass. This near ¾ scaling law is found to recur across ecosystems globally."
Ian Hatton and co-authors present robust evidence that across multiple ecosystems, predator biomass scales with prey biomass as a power law with an exponent of ~0.75. This suggests that ecosystems are typically bottom-heavy, with decreasing amounts of predator biomass added as more prey biomass is added. The paper represents a huge amount of work (and is surprisingly long, as Science papers go): the authors compiled a massive database from 2260 communities, representing multiple taxa and ecosystems (mammals, plants, protists, ectotherms, and more) (figure below). Further, the same scaling relationship exists between community biomass and production, suggesting that production drops off as communities increase in density. This pattern appears consistently across each dataset.


Figure 5 from Hatton et al. 2015. "Similar scaling links trophic structure and production. Each point is an ecosystem at a period in time (n = 2260 total from 1512 locations) along a biomass gradient. (A to P) An exponent k in bold (with 95% CI) is the least squares slope fit to all points n in each row of plots..."
Their analysis is classic macroecology, with all the strengths and weaknesses that implies. The focus is unapologetically on identifying general ecological patterns, with the benefit of large sample sizes, cross-system analysis, and multiple or large spatial scales. It goes beyond pattern description only in exploring how this pattern might arise from simple predator-prey models. The authors demonstrate that, broadly, predator biomass can have the same scaling as prey production, which they show follows the 3/4 power law relationship. As for why prey production follows this rule, they acknowledge uncertainty as to the exact explanation, but suggest density dependence may be important.
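Statistically, the core move is simple: fit log(predator biomass) = log(c) + k·log(prey biomass) by least squares and ask whether k sits near 0.75. A quick illustration on synthetic data (not the authors' dataset):

```python
import numpy as np

# The core statistical move: fit log10(predator) = log10(c) + k*log10(prey)
# by least squares and ask whether k is near 0.75. Synthetic data generated
# with k = 0.75, not Hatton et al.'s dataset.

rng = np.random.default_rng(1)
prey = 10 ** rng.uniform(0, 4, size=500)                  # prey biomass, arbitrary units
predator = 0.1 * prey**0.75 * 10 ** rng.normal(0, 0.2, size=500)

k, log_c = np.polyfit(np.log10(prey), np.log10(predator), deg=1)
print(f"estimated exponent k = {k:.3f}")                  # recovers ~0.75
```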

Their finding is perhaps more remarkable because the scaling exponent has similarities to another possible law, metabolic scaling theory, particularly the ~0.75 exponent (or perhaps ~2/3, depending on who you talk to). It's a bit difficult for me, particularly as someone biased towards believing in the complexities of nature, to explain how such a pattern could emerge from vastly different systems, different types of predators, and different abiotic conditions. The model they present is greatly simplified, and ignores factors often incorporated into these models, such as migration between systems (and connectivity), non-equilibrium dynamics (such as disturbance), and prey refuges. There is variation in the scaling exponent, but it is not clear how to evaluate a large vs. small difference (for example, they found (section M1B) that different ways of including data produced variation of +/- 0.1 in the exponent; that sounds high, but it's hard to evaluate). Trophic webs are typically considered complicated – there are parasites, disease, omnivores, cannibalism, and changes in trophic level with life stage. How do these seemingly relevant details appear to be meaningless?
Peter Yodzis' famous food web for the Benguela ecosystem.

There are multiple explanations to be explored. First, perhaps these consistent exponents represent a stable arrangement for such complex systems, or consistency in patterns of density dependence. Consistent relationships are sometimes concluded to be statistical artefacts rather than actually driven by ecological processes (e.g. Taylor's Law). Perhaps most interestingly, for such a general pattern we can consider the values that don't occur in natural systems. Macroecology is particularly good at highlighting the boundaries on relationships observed in natural systems, rather than always identifying predictable relationships. The biggest clues to understanding this pattern may lie in finding when (or if) systems diverge from the 0.75 scaling rule, and why.

Ian A. Hatton, Kevin S. McCann, John M. Fryxell, T. Jonathan Davies, Matteo Smerlak, Anthony R. E. Sinclair, Michel Loreau. The predator-prey power law: Biomass scaling across terrestrial and aquatic biomes. Science. Vol. 349 no. 6252. DOI: 10.1126/science.aac6284

Wednesday, August 26, 2015

Science is a maze

If you want to truly understand how scientific progress works, I suggest fitting mathematical models to dynamical data (i.e. population or community time series) for a few days.
map for science?

You were probably told sometime early on about the map for science: the scientific method. It was probably displayed for your high school class as a tidy flowchart showing how a hypothetico-deductive approach allows scientists to solve problems. Scientists make observations about the natural world, gather data, and come up with a possible explanation or hypothesis. They then deduce the predictions that follow, and design experiments to test those predictions. If you falsify the predictions, you circle back and refine, alter, or eventually reject the hypothesis. Scientific progress arises from this process. Sure, you might adjust your hypothesis a few times, but progress is direct and straightforward. Scientists aren't shown getting lost.

Then, once you actively do research, you realize that the formulation-reformulation process dominates. But because for most applications the formulation-reformulation process is slow – that is, each component takes time (e.g. weeks or months to redo experiments and analyses and work through reviews) – you only go through that loop a few times. So you usually still feel like you are making progress and moving forward.

But if you want to remind yourself just how twisting and meandering science actually is, spend some time fitting dynamic models. Thanks to Ben Bolker's indispensable book, this also comes with a map, which shows how closely the process of model fitting mirrors the scientific method. The modeller has some question they wish to address, and experimental or observational data they hope to use to answer it. By fitting or selecting the best model for the data, they can obtain estimates for different parameters and so, hopefully, test predictions from their hypothesis. Or so one naively imagines.
From Bolker's Ecological Models and Data in R,
a map for model selection. 
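That naive single pass looks something like the sketch below, which fits a logistic growth model to a simulated population time series; the model choice, data, and starting values are all invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# The naive single pass: simulate (or collect) a population time series,
# fit a logistic growth model, read off the parameters.

def logistic(t, K, r, n0):
    """Closed-form solution of logistic growth."""
    return K / (1 + (K / n0 - 1) * np.exp(-r * t))

t = np.arange(0.0, 20.0)
rng = np.random.default_rng(42)
observed = logistic(t, K=100, r=0.6, n0=5) * rng.lognormal(0, 0.1, t.size)

(K, r, n0), _ = curve_fit(logistic, t, observed, p0=[80, 0.5, 2])
print(f"K = {K:.1f}, r = {r:.2f}, n0 = {n0:.1f}")
```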
The reality, however, is much more byzantine. It is captured well by Vellend (2010):
“Consider the number of different models that can be constructed from the simple Lotka-Volterra formulation of interactions between two species, layering on realistic complexities one by one. First, there are at least three qualitatively distinct kinds of interaction (competition, predation, mutualism). For each of these we can have either an implicit accounting of basal resources (as in the Lotka-Volterra model) or we can add an explicit accounting in one particular way. That gives six different models so far. We can then add spatial heterogeneity or not (x2), temporal heterogeneity or not (x2), stochasticity or not (x2), immigration or not (x2), at least three kinds of functional relationship between species (e.g., predator functional responses, x3), age/size structure or not (x2), a third species or not (x2), and three ways the new species interacts with one of the existing species (x3 for the models with a third species). Having barely scratched the surface of potentially important factors, we have 2304 different models. Many of them would likely yield the same predictions, but after consolidation I suspect there still might be hundreds that differ in ecologically important ways.”
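(Vellend's arithmetic checks out: 3 interaction types × 2 resource treatments = 6 base models; the four binary choices (spatial heterogeneity, temporal heterogeneity, stochasticity, immigration) plus 3 functional relationships and 2 age/size-structure options give 6 × 2⁴ × 3 × 2 = 576 two-species models; each can then stay as-is or add a third species in one of 3 ways, for 576 × 4 = 2304.)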
Model fitting/selection can actually be (speaking for myself, at least) repetitive and frustrating, filled with wrong turns and dead ends. And because you can make so many loops between formulation and reformulation, and the time penalty is relatively low, you experience just how many possible paths forward there are to be explored. It's easy to get lost and forget which models you've already looked at, so keeping detailed notes/logs/version control is fundamental. And since time and money aren't (as) limiting, it is hard to know or decide when to stop – no model is perfect. When it's possible to so fully explore the path from question to data, you get to suffer through realizing just how complicated and uncertain that path actually is.
What model fitting feels like?

Bolker hints at this (but without the angst):
“modeling is an iterative process. You may have answered your questions with a single pass through steps 1–5, but it is far more likely that estimating parameters and confidence limits will force you to redefine your models (changing their form or complexity or the ecological covariates they take into account) or even to redefine your original ecological questions.”
I bet there are other processes with a similarly endless, frustrating ability to consider every possible connection between question and data (building a phylogenetic tree, designing a simulation?). And I think that is what science is like on large temporal and spatial scales too. For any question or hypothesis, there are multiple labs contributing bits and pieces, manipulating slightly different combinations of variables, pushing and pulling the direction of science back and forth, trying to find a path forward.

(As you may have guessed, I spent far too much time this summer fitting models…)

Friday, August 21, 2015

#ESA100: The next dimension in functional ecology

The third day of ESA talks saw an interesting session on functional ecology (Functional Traits in Ecological Research: What Have We Learned and Where Are We Going?), organized by Matt Aiello-Lammens and John Silander Jr.

As outlined by McGill and colleagues (2006), a functional trait-based approach can help us move past the idiosyncrasies of species to understand more general patterns of species interactions and environmental tolerances. Despite our common conceptual framework that traits influence fitness in a given environment, many functional ecology studies have struggled to explain much of the variation in measured functional traits using underlying environmental gradients. We might attribute this to a) measuring the ‘wrong’ traits or gradients, b) several trait values or syndromes being equally advantageous in a given environment, or c) limitations in our statistical approaches. Several talks in this organized session built up a nuanced story of functional trait diversity in the Cape Floristic Region (CFR) of South Africa. Communities are characterized by high species turnover but low functional turnover (Matt Aiello-Lammens; Jasper Slingsby), and only in some genera do we see strong relationships between trait values and environments (Matt Aiello-Lammens; Nora Mitchell). Nora Mitchell presented a novel Bayesian approach combining trait and environmental information that allowed her to detect trait-environment relationships in about half of the lineages she investigated. These types of approaches, which allow us to incorporate phylogenetic relationships and uncertainty, may be a useful next step in our quest to understand how environmental conditions drive trait patterns.

Another ongoing challenge in functional ecology is the mapping of function to traits. This is complicated by the fact that a trait may influence fitness in one environment but not in others, and by our common use of ‘soft’ traits – more easily measurable correlates of the traits we really think are important. Focusing on a single important drought-response trait axis in the same CFR system described above, Kerri Mocko demonstrated that clades of Pelargonium exhibited two contrasting stomatal behaviours under dry conditions: the tendency to favor water balance over carbon dioxide intake (isohydry) and the reverse (anisohydry). More to my point, she was able to link a more commonly measured functional trait (stomatal density) to this drought-response behavior.

Turning from the macroevolutionary to the community scale, Ben Weinstein evaluated the classic assumption of trait-matching between consumer (hummingbird beak length) and resource (floral corolla length), exploring how resource availability might shape this relationship. Robert Muscarella then took a community approach to understanding species distributions, testing the idea that we are most likely to find species where their traits match the community average (the community-weighted mean). He used three traits of woody species to do so, and perhaps what I found most interesting about this approach was his comparison across traits – if a species is unlike the community average along one trait dimension, is it also dissimilar along the other trait dimensions?
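For readers unfamiliar with the term, the community-weighted mean is just the abundance-weighted average trait value at a site; a minimal illustration with made-up numbers:

```python
import numpy as np

# Community-weighted mean (CWM): the abundance-weighted average trait value
# at a site. Numbers are made up for illustration.

abundance = np.array([10, 5, 1])     # individuals of three co-occurring species
trait = np.array([2.0, 3.5, 8.0])    # a trait value for each species

cwm = np.sum(abundance * trait) / abundance.sum()
print(round(cwm, 2))                 # 2.84, pulled toward the most abundant species
```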


Thinking of trait dimensions, it was fascinating to see several researchers independently touch on this topic. For my talk, I subsampled different numbers and types of traits from a monkeyflower trait dataset to suggest that considering more traits may be our best sampling approach, if we want to understand community processes in complex, multi-faceted environments. Taking trait dimensionality to the extreme, perhaps gene expression patterns can be used to shed light on several important pathways, potentially helping us understand how plants interact with their environments across space and time (Andrew Latimer).

To me, this session highlighted several interesting advances in functional ecology research, and it ended with an important ‘big picture’ question: in the face of another mass extinction, how is biodiversity loss impacting functional diversity (Matthew Davis)?



McGill, B. J., Enquist, B. J., Weiher, E., & Westoby, M. (2006). Rebuilding community ecology from functional traits. Trends in ecology & evolution, 21(4), 178-185.