First field day – THREE-D

A couple of weeks ago, I went to the field for the first time in the THREE-D project. In this project, we want to disentangle the impacts of different global change drivers on biodiversity and the carbon cycle. We need to select new sites and set up the whole experiment this summer in western Norway and in the eastern Himalaya in China.

This first round was to set up grazing exclosures. We want to get an estimate of the grazing intensity along the elevational gradients, to know how much biomass is removed over the growing season. For this, we put up metal cages that exclude large herbivores; these will be compared with control plots without cages. The biomass in these plots will be harvested, dried and weighed. A master's student will work on this project in Norway this summer.

Reading list 25.3 – 31.3.2019

De Long et al. 2018. Why are plant–soil feedbacks so unpredictable, and what to do about it? Functional Ecology. 33: 118–128.
Cameron et al. 2019. Uneven global distribution of food web studies under climate change. Ecosphere. 10(3): e02645.
Classen et al. 2015. Direct and indirect effects of climate change on soil microbial and soil microbial-plant interactions: What lies ahead? Ecosphere. 6(8): 130.

Reading list 11. – 17. March 2019

Green and MacLeod 2015. SIMR: an R package for power analysis of generalized linear mixed models by simulation. Methods in Ecology and Evolution. 7(4): 493-498.
Defossez et al. 2011. Do interactions between plant and soil biota change with elevation? A study on Fagus sylvatica. Biol. Lett. 7, 699–701.
Giling et al. 2019. Plant diversity alters the representation of motifs in food webs. Nature Communications. 10: 1226.
Cameron et al. 2019. Global mismatches in aboveground and belowground biodiversity. Conservation Biology.

Reading list 25.2 – 3.3.2019

Managing 5 papers this time.

Denelle et al. 2019. Distinguishing the signatures of local environmental filtering and regional trait range limits in the study of trait–environment relationships. Oikos.
Ettinger et al. 2019. How do climate change experiments alter plot-scale climate? Ecology Letters
Sandel et al. 2010. Contrasting trait responses in plant communities to experimental and geographic variation in precipitation. New Phytologist. 188: 565–575.
Fridley et al. 2016. Longer growing seasons shift grassland vegetation towards more-productive species. Nature Climate Change. 6(9): 865.
Mazziotta et al. 2018. Scaling functional traits to ecosystem processes: Towards a mechanistic understanding in peat mosses. Journal of Ecology. 107:843–859.

Reading list 18. – 24. February 2019

Graphics from pngtree.com

Cooper. 2014. Warmer Shorter Winters Disrupt Arctic Terrestrial Ecosystems. Annu. Rev. Ecol. Evol. Syst. 45:271–95
Rumpf et al. 2014. Idiosyncratic Responses of High Arctic Plants to Changing Snow Regimes. PLoS ONE. 9(2): e86281.
Wilkinson et al. 2016. The FAIR Guiding Principles for scientific data management and stewardship. Scientific data 3.
Grinberg et al. 2019. Fake news on Twitter during the 2016 U.S. presidential election. Science. 363: 374–378.

Part 2 – From chicken foot to Raspberry Pi

Learning – how to measure plant functional traits – the hard way (a story in 4, or more, parts)

This is the story of how we organized our TraitTrain courses and what we learnt from our mistakes. With TraitTrain we want to strengthen research and educational collaborations on climate change and ecosystem ecology by organizing courses that teach students how to measure plant functional traits while offering them relevant research experience at the same time. In the first part I explained how we organized the collection of the leaf traits, the so-called trait wheel. Here, I want to talk about the third step in the trait wheel: scanning the leaves.

Leaf area is a common measure of leaf size and is usually very plastic in response to climatic variation and/or stress. Leaf area is also important because it is used to calculate SLA (specific leaf area). To calculate leaf area, a leaf is scanned (for details see Pérez-Harguindeguy et al. 2013) and the scan is then run through a program such as ImageJ, which calculates the area of the leaf. We use ImageJ via the R package LeafArea.
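If you want to try this yourself, a minimal call could look something like the sketch below. The folder name is made up for this post, and the calibration assumes an A4 scan at 300 dpi, where the 21 cm page width corresponds to roughly 2480 pixels; adjust these to your own scans.

library(LeafArea)  # wraps ImageJ, which must be installed separately

# Calculate the leaf area (cm2) of every scan in one folder
leaf_area <- run.ij(
  set.directory  = "scans/site_1/",  # hypothetical folder with the leaf scans
  distance.pixel = 2480,             # assumed: 21 cm of an A4 page at 300 dpi
  known.distance = 21,               # the corresponding real-world distance in cm
  trim.pixel     = 20                # crop a few pixels off every edge of the scan
)

head(leaf_area)  # one total leaf area per sample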

All of this should be easy, except it wasn’t.

On the course, each student had a laptop, which we connected to the scanners. We usually have 4–5 scanners, because scanning is a time-consuming job. On the TraitTrain courses we have students from all over the world, and they come with all sorts of laptops (brands, operating systems and settings). The instructions said to scan the leaves with specific settings (e.g. 300 dpi). For some reason, the scans from people with different page-size settings on their computers (A4, letter, …) resulted in different leaf areas. We are still not sure how this happened, because once the resolution is set, the page size of a scan should not make a difference.

The second problem was that many scans had black edges around them (see picture 1), which were added to the leaf area. We did not notice this until we plotted leaf area against dry mass, which should be a more or less linear relationship. Instead, the plot looked like a chicken foot (this expression might also have been inspired by all the chicken feet that were swimming around in our hotpots). It took us a while to figure this out, and we had to look at each individual scan to identify several of these problems.

Top: Dry mass (g) plotted against leaf area (cm2). Bottom: Leaf scan, where the edge has a black line.
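If you want to run the same kind of check on your own data, a rough R sketch could look like this. The data frame and column names (traits, leaf_area in cm2, dry_mass in g) are made up for the example, and I assume there are no missing values.

library(ggplot2)

# Dry mass against leaf area; this should be a more or less linear cloud of points
ggplot(traits, aes(x = leaf_area, y = dry_mass)) +
  geom_point(alpha = 0.5) +
  labs(x = "Leaf area (cm2)", y = "Dry mass (g)")

# Flag the leaves that deviate most from the overall relationship;
# these are the scans worth inspecting by eye (black edges, dirt, folded leaves)
fit <- lm(dry_mass ~ leaf_area, data = traits)
traits$suspect <- abs(resid(fit)) > 2 * sd(resid(fit))
traits[traits$suspect, ]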

How did we solve these issues for the next courses? First of all, we wanted to make scanning a standardized process that does not depend on the settings of people's laptops. We bought a couple of Raspberry Pis, which operate the scanners, and the laptops are only used as screens and keyboards to control the scanners and Pis.

Second, we implemented a couple of checks. The Pi automatically checked whether the scanning settings were correct (resolution, size, colour depth, file type), whether the person scanning the leaf had typed in the correct LeafID, and whether the scan was saved in the right place. If you are interested in how we set up the Pi and scanner, all the scripts we used are available on GitHub.
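The real checks run on the Pi, and the scripts on GitHub are the authoritative version. Just to give an idea of what such checks do, here is a rough, after-the-fact sketch in R; the folder, the LeafID pattern and the expected image size are all assumptions for this example.

library(magick)

scan_dir <- "scans/site_1/"                       # hypothetical folder the scans end up in
files <- list.files(scan_dir, full.names = TRUE)

# Does the file name match the expected LeafID pattern (made-up example: ABC1234.jpeg)?
id_ok <- grepl("^[A-Z]{3}[0-9]{4}\\.jpe?g$", basename(files))

# Are file type and image size what we expect from a 300 dpi A4 scan?
check_scan <- function(path) {
  info <- image_info(image_read(path))
  c(format_ok = info$format == "JPEG",            # assumed file type
    size_ok   = info$width >= 2400 && info$height >= 3300)
}
settings_ok <- t(vapply(files, check_scan, logical(2)))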

We also realized that, for checking the quality of the data and finding errors, we needed a size reference on each scan. For this we glued a ruler to each scanner (see picture 2), which allowed us to check directly whether the leaf area was calculated approximately right. This turned out to be very useful. One problem, of course, is that you do not want the ruler on the scan when you calculate leaf area. The LeafArea package allows you to cut a certain number of pixels off each side of the scan, which is very useful and also solves the problem with the black lines. The catch was that we could only cut the same number of pixels on every side, whereas we wanted to cut more on the side where the ruler was added and less on all the other sides of the scan. For this we customized the run.ij function in the LeafArea package. If you want to use our customized run.ij function, first run devtools::install_github("richardjtelford/LeafArea") in R. The run.ij function then has an argument "trim.pixel2", which allows you to cut more pixels from the right side of the scan.

Leaf scan including a ruler as a size reference at the top.
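With the customized version installed as described above, the extra argument can be used roughly like this; the pixel values are placeholders and depend on where the ruler sits on your scanner.

library(LeafArea)  # the version from richardjtelford/LeafArea

leaf_area <- run.ij(
  set.directory  = "scans/site_1/",  # hypothetical folder, as before
  distance.pixel = 2480,             # assumed: 21 cm of an A4 page at 300 dpi
  known.distance = 21,
  trim.pixel     = 55,               # placeholder: crop for the black edges
  trim.pixel2    = 150               # placeholder: extra crop on the ruler side of the scan
)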

Visual inspection of each scan turned out to be essential, even after optimizing the scanning process. It is important to check that the right number of pixels is cropped before calculating the leaf area, that the full leaf was scanned, and to look out for folded leaves. By checking each individual scan, we also found that some of the scans were really dirty. This happens when working with plant material that comes from the field, and we now tell the students to check and clean the scanner often. And finally, grasses can be difficult, because they tend not to lie flat on the scanner. They curl. Here, the answer was to use transparent tape to stick the leaves flat onto the scanner.
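If I remember the LeafArea package correctly, run.ij also has check.image and save.image arguments, which display or save the analysed images, so you can see exactly what was cropped and what ImageJ actually measured. A hedged sketch, reusing the placeholder settings from above:

leaf_area <- run.ij(
  set.directory  = "scans/site_1/",
  distance.pixel = 2480,
  known.distance = 21,
  trim.pixel     = 55,
  save.image     = TRUE   # keep the processed images for the visual checks
)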