Show simple item record

dc.contributor.author: Egna, Nicole
dc.contributor.author: O'Connor, David
dc.contributor.author: Stacy-Dawes, Jenna
dc.contributor.author: Tobler, Mathias W.
dc.contributor.author: Pilfold, Nicholas W.
dc.contributor.author: Neilson, Kristin
dc.contributor.author: Simmons, Brooke
dc.contributor.author: Davis, Elizabeth Oneita
dc.contributor.author: Bowler, Mark
dc.contributor.author: Fennessy, Julian
dc.contributor.author: Glikman, Jenny A.
dc.contributor.author: Larpei, Lexson
dc.contributor.author: Lekalgitele, Jesus
dc.contributor.author: Lekupanai, Ruth
dc.contributor.author: Lekushan, Johnson
dc.contributor.author: Lemingani, Lekuran
dc.contributor.author: Lemirgishan, Joseph
dc.contributor.author: Lenaipa, Daniel
dc.contributor.author: Lenyakopiro, Jonathan
dc.contributor.author: Lesipiti, Ranis Lenalakiti
dc.contributor.author: Lororua, Masenge
dc.contributor.author: Muneza, Arthur
dc.contributor.author: Rabhayo, Sebastian
dc.contributor.author: Ranah, Symon Masiaine Ole
dc.contributor.author: Ruppert, Kirstie
dc.contributor.author: Owen, Megan A.
dc.date.accessioned: 2020-12-23T19:19:20Z
dc.date.available: 2020-12-23T19:19:20Z
dc.date.issued: 2020
dc.identifier.issn: 2045-7758
dc.identifier.doi: 10.1002/ece3.6722
dc.identifier.uri: http://hdl.handle.net/20.500.12634/807
dc.description.abstract: Scientists are increasingly using volunteer efforts of citizen scientists to classify images captured by motion-activated trail cameras. The rising popularity of citizen science reflects its potential to engage the public in conservation science and accelerate processing of the large volume of images generated by trail cameras. While image classification accuracy by citizen scientists can vary across species, the influence of other factors on accuracy is poorly understood. Inaccuracy diminishes the value of citizen science-derived data and prompts the need for specific best-practice protocols to decrease error. We compared accuracy among three programs that use crowdsourced citizen scientists to process images online: Snapshot Serengeti, Wildwatch Kenya, and AmazonCam Tambopata. We hypothesized that habitat type and camera settings would influence accuracy. To evaluate these factors, each photograph was circulated to multiple volunteers. All volunteer classifications were aggregated to a single best answer for each photograph using a plurality algorithm. Subsequently, a subset of these images underwent expert review and was compared to the citizen scientist results. Classification errors were categorized by the nature of the error (e.g., false species or false empty) and the reason for the false classification (e.g., misidentification). Our results show that Snapshot Serengeti had the highest accuracy (97.9%), followed by AmazonCam Tambopata (93.5%), then Wildwatch Kenya (83.4%). Error type was influenced by habitat, with false empty images more prevalent in open-grassy habitat (27%) than in woodlands (10%).
For medium to large animal surveys across all habitat types, our results suggest that to significantly improve accuracy in crowdsourced projects, researchers should use a trail camera setup protocol with a burst of three consecutive photographs and a short field of view, and should determine camera sensitivity settings based on in situ testing. Accuracy comparisons such as this study can improve the reliability of future citizen science projects and encourage the increased use of such data.
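The plurality aggregation described in the abstract can be sketched as follows. This is a minimal illustration, assuming each photograph's volunteer labels are available as a list; the function name and tie-breaking behavior are hypothetical, not the projects' actual implementation:

```python
from collections import Counter

def plurality_answer(classifications):
    """Reduce volunteer labels for one photograph to a single best
    answer by plurality vote: the most frequent label wins.
    Ties are broken by first occurrence, which a real pipeline
    might instead flag for expert review."""
    counts = Counter(classifications)
    label, _ = counts.most_common(1)[0]
    return label

# Example: five volunteers classify the same trail-camera image.
votes = ["giraffe", "giraffe", "empty", "giraffe", "zebra"]
print(plurality_answer(votes))  # -> giraffe
```

Comparing plurality answers against an expert-reviewed subset, as the study does, then yields the per-project accuracy figures reported above.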
dc.language.iso: en
dc.relation.url: https://onlinelibrary.wiley.com/doi/abs/10.1002/ece3.6722
dc.rights: © 2020 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. https://creativecommons.org/licenses/by/4.0/
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/
dc.subject: CITIZEN SCIENCE
dc.subject: EAST AFRICA
dc.subject: CAMERA TRAPS
dc.subject: TECHNOLOGY
dc.subject: DATA PROCESSING
dc.subject: RESEARCH
dc.title: Camera settings and biome influence the accuracy of citizen science approaches to camera trap image classification
dc.type: Article
dc.source.journaltitle: Ecology and Evolution
dc.source.volume: 10
dc.source.issue: 21
dc.source.beginpage: 11954
dc.source.endpage: 11965
dcterms.dateAccepted: 2020
refterms.dateFOA: 2021-01-14T22:44:05Z


Files in this item

Name: Egna_2020_EcologyandEvolution.pdf
Size: 924.1Kb
Format: PDF

This item appears in the following Collection(s)

  • SDZWA Research Publications
    Peer-reviewed and scientific works by San Diego Zoo Wildlife Alliance staff. Includes books, book sections, articles, conference publications, and presentations.

