Since this app is exclusively being used for EXPECT’s AEP copper data extraction right now, I’m writing these instructions for myself, Pierre, Camden, Li, Knut Erik, and anyone else I’ve roped in to help with data extraction.
If something breaks, email saw@niva.no or log an issue here with screenshots and your extraction file.
First select a species group, then choose specific species for your study. Selected species will be available in the sample table below.
Complete measurement data for all sample-parameter combinations below. All fields marked as required must be filled.
This module provides a Criteria for Reporting and Evaluating Exposure Datasets (CREED) assessment (version 1), based on Di Paolo et al., 2024. The Dataset Details section summarises key characteristics of your dataset that are relevant for quality evaluation. Fields are auto-populated where possible from your imported data. This module is currently designed for exposure assessment of copper, and the data summarised may not be relevant for other pollutants (especially non-metals).
The CREED assessment process assumes that exposure data are being assessed in the context of a broader chemical/ecological assessment, and thus not all criteria will necessarily be relevant to your needs as the user. Where a criterion is not relevant, you are encouraged to record this in the relevant fields.
CREED assigns a dataset Gold status if all Recommended criteria (11 reliability, 4 relevance) are met, or Silver status if all Required criteria (7 reliability, 7 relevance) are met. Once you complete this module, you will be able to mark your dataset as Gold or Silver if it meets the appropriate criteria.
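For reference, here is a minimal sketch in R of the status logic described above; the function name and inputs (creed_status, recommended_met, required_met) are hypothetical and not part of the app.

```r
# Minimal sketch of the Gold/Silver logic described above (not the app's
# actual implementation). `recommended_met` and `required_met` are
# hypothetical logical vectors, one element per criterion outcome.
creed_status <- function(recommended_met, required_met) {
  if (all(recommended_met)) {
    "Gold"      # all Recommended criteria (11 reliability + 4 relevance) met
  } else if (all(required_met)) {
    "Silver"    # all Required criteria (7 reliability + 7 relevance) met
  } else {
    "No status"
  }
}

# Example: all Required criteria met, but one Recommended criterion missed
creed_status(recommended_met = c(rep(TRUE, 14), FALSE),
             required_met    = rep(TRUE, 14))
#> [1] "Silver"
```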
Auto-population: Fields marked with this icon are auto-populated from data entered in earlier modules. These values can be overwritten as needed, but note that if you auto-populate the fields again, your manual changes will be lost.
Describe the objective for which the usability of the dataset is assessed, including any required dataset thresholds. CREED Relevance Questions (RV01 - RV11) are based on the study purpose. Reliability Questions (RB01 - RB19) are common across all studies.
Enter thresholds for 'Partly Met' (minimum requirements) and 'Fully Met' (optimal requirements). Leave fields blank if no specific threshold is needed.
CREED's gateway criteria are designed to allow a study to be rejected quickly, without requiring a detailed examination.
Most studies processed using this tool can be expected to pass these criteria without issue.
Nevertheless, they are included for completeness.
Each criterion is auto-evaluated based on your entered data, but can be manually overridden.
Assess how reliable the dataset is for answering your assessment questions.
Criterion: Was the sampling medium/matrix reported in detail (for water: dissolved fraction or whole water; for sediment: sieved or whole; for soil: grain size; for biota: species, age, sex, tissue type), and was the matrix appropriate for the analyte of interest?
Criterion: Was the sample collection method reported? Examples include grab, depth- and width-integrated, discrete, composite, or time-integrated samples, or continuous monitoring.
Criterion: Was information reported on sample handling (transport conditions, preservation, filtration, storage)? Was the type of container suitable for use with the analyte of interest (i.e., no loss or contamination)?
Criterion: Were the site locations reported?
Criterion: Were the date and time of sample collection reported?
Criterion: Was/were the analyte(s) of interest suitably and definitively identified?
Criterion: Were limits of detection and/or quantification provided?
Criterion: Were the laboratory and method accredited for all or almost all samples? Several national and international accreditation bodies exist (e.g., ISO, UKAS); was the laboratory and/or method certified to their standards? Was a quality system (e.g., ISO 17025) adopted?
Shortcut Criterion: If you answer 'Fully Met' to this question, you may skip questions RB09-RB12. If not, please complete those questions.
Criterion: Was the method sufficiently described or referenced, such that it can be reproduced if necessary? Was method validation included?
Criterion: Was method blank contamination assessed with laboratory blanks?
Criterion: Were method recovery/accuracy and/or uncertainty assessed by recovery of standard reference material (SRM) and/or were lab spike samples assessed?
Criterion: Were method reproducibility and/or uncertainty assessed with lab replicates and long-term control recoveries?
Criterion: Were quality control (QC) samples collected during field sampling (such as field blanks, spikes, replicates) to demonstrate the method performance for a given field study?
Criterion: If chemical concentrations were normalised or adjusted (e.g., to represent bioavailability or toxicity), then were the calculations explained and were they appropriate?
Criterion: During calculations, were data reported to the appropriate number of significant figures or decimal places?
Criterion: For any outliers deleted from the data set, was evidence provided that these outliers were due to an error in measurement or contamination?
Criterion: Were censored data reported correctly (e.g., as a numerical value plus a less-than sign or another indicator of a nondetect)? If a substitution method was used for nondetects (e.g., censored data were replaced by zero, or by 1/2 or another fraction of the LOD/LOQ), then can the original censored data be restored by back-calculation using the reported LOD/LOQ?
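As an illustration of that back-calculation, the sketch below (hypothetical column names, assuming a 1/2-LOD substitution was used) flags substituted non-detects and restores the censored "<LOD" notation.

```r
# Illustrative sketch only: restore "<LOD" notation for values that were
# substituted with 1/2 of the reported LOD. Column names are hypothetical.
library(dplyr)

samples <- data.frame(
  value = c(0.80, 0.05, 1.20, 0.05),  # reported concentrations (ug/L)
  lod   = c(0.10, 0.10, 0.10, 0.10)   # reported limit of detection (ug/L)
)

samples |>
  mutate(
    nondetect = value == lod / 2,            # flag 1/2-LOD substitutions
    reported  = ifelse(nondetect,
                       paste0("<", lod),     # back-calculated censored form
                       as.character(value))
  )
```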
Criterion: Were summary statistics calculated appropriately? If the dataset contained censored data, then were censored data included and were appropriate procedures used to determine summary statistics?
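As one example of such a procedure (shown for illustration only; the NADA package is an assumption here and is not part of this app), summary statistics for censored data can be estimated with regression on order statistics rather than naive substitution:

```r
# Illustration only: estimating a mean that accounts for censored values using
# regression on order statistics (ROS) from the NADA package. NADA is shown
# purely as one example of an appropriate procedure for censored data.
library(NADA)

obs      <- c(0.80, 0.05, 1.20, 0.05, 0.30)     # concentrations (ug/L)
censored <- c(FALSE, TRUE, FALSE, TRUE, FALSE)  # TRUE = reported as <LOD (0.05)

fit <- ros(obs, censored)  # model the censored part of the distribution
mean(fit)                  # summary statistic that respects the censoring
```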
Criterion: If any supporting parameters are required for the assessment purpose, then were the supporting parameter data provided, and were their methods and data quality addressed?
Assess how relevant the dataset is to the purpose as described in your Purpose Statement.
Criterion: Was the sampling medium/matrix appropriate for the given purpose?
Criterion: Was the sample collection method adequate for the given purpose?
Criterion: Were the study area and number of locations sampled suitable for the given purpose?
Criterion: Was the rationale for selection of sampling locations provided and was it suitable for the given purpose?
Criterion: Were the samples collected over a time scale that was appropriate for the given purpose?
Criterion: Over the timespan, was the sampling frequency appropriate for the given purpose?
Criterion: Were conditions during sampling events documented and relevant for the given purpose?
Criterion: Was/were the reported analyte(s) appropriate for the given purpose?
Criterion: Was the method sensitive enough for the given purpose?
Criterion: Were the summary statistics provided appropriate for the given purpose?
Criterion: Were all supporting parameters that were needed to achieve the given purpose provided?
Extraction reporting currently isn’t very good. Check your internet connection, whether your PDF is corrupted, and whether the Claude service is up. If the Raw Data Extraction shows a lot of NULL and NA values, the paper may be too large for the extraction token limit (how much information the LLM can take in at one time). Let me know if that’s the case.
Use the most appropriate available option for now, and let me know.
This is something I’m working on, but it’s harder to extract than simple data like the above.
Send me a screenshot + a copy of your files.
My mistake! But I’m very interested in hearing how this happened and what you were doing when it did. Please let me know ASAP.
I’m not really sure. It seems there are methods, protocols and techniques. LibreTexts Chemistry says:
A technique is any chemical or physical principle that we can use to study an analyte. A method is the application of a technique for a specific analyte in a specific matrix. A procedure is a set of written directions that tell us how to apply a method to a particular sample. Finally, a protocol is a set of stringent guidelines that specify a procedure that an analyst must follow if an agency is to accept the results.
So I am certainly using the term “protocol” wrong here. But I haven’t reworked this section yet because, frankly, I don’t know who will be using the app. But I’m open to input.
The following packages, software, and data resources were used in this application:
Olker, J. H., Elonen, C. M., Pilli, A., Anderson, A., Kinziger, B., Erickson, S., Skopinski, M., Pomplun, A., LaLone, C. A., Russom, C. L., & Hoff, D. (2022). The ECOTOXicology Knowledgebase: A Curated Database of Ecologically Relevant Toxicity Tests to Support Environmental Research and Risk Assessment. Environmental Toxicology and Chemistry, 41(6):1520-1539. https://doi.org/10.1002/etc.5324
Curated toxicity data (species, chemicals) were retrieved from the ECOTOXicology Knowledgebase, U.S. Environmental Protection Agency. http://www.epa.gov/ecotox/ (2025.06.12).
Djoumbou Feunang Y, Eisner R, Knox C, Chepelev L, Hastings J, Owen G, Fahy E, Steinbeck C, Subramanian S, Bolton E, Greiner R, and Wishart DS. ClassyFire: Automated Chemical Classification With A Comprehensive, Computable Taxonomy. Journal of Cheminformatics, 2016, 8:61. DOI: 10.1186/s13321-016-0174-y
Carolina Di Paolo, Irene Bramke, Jenny Stauber, Caroline Whalley, Ryan Otter, Yves Verhaegen, Lisa H. Nowell, Adam C. Ryan, Implementation of the CREED approach for environmental assessments, Integrated Environmental Assessment and Management, Volume 20, Issue 4, 1 July 2024, Pages 1019–1034, https://doi.org/10.1002/ieam.4909
Flanders Marine Institute (2018). IHO Sea Areas, version 3. Available online at https://www.marineregions.org/. https://doi.org/10.14284/323.
ISO 3166: Country Codes (https://www.iso.org/iso-3166-country-codes.html) Imported via the R Package ISOcodes (https://cran.r-project.org/web/packages/ISOcodes/index.html)
Beep: https://pixabay.com/sound-effects/beep-329314/, u_edtmwfwu7c
Allaire, JJ. 2023. config: Manage Environment Specific Configuration Values. https://doi.org/10.32614/CRAN.package.config.
Atkins, Aron, Toph Allen, Hadley Wickham, Jonathan McPherson, and JJ Allaire. 2025. rsconnect: Deploy Docs, Apps, and APIs to “Posit Connect,” “shinyapps.io,” and “RPubs”. https://doi.org/10.32614/CRAN.package.rsconnect.
Attali, Dean. 2021. shinyjs: Easily Improve the User Experience of Your Shiny Apps in Seconds. https://doi.org/10.32614/CRAN.package.shinyjs.
Barbone, Jordan Mark, and Jan Marvin Garbuszus. 2025. openxlsx2: Read, Write and Edit “xlsx” Files. https://janmarvin.github.io/openxlsx2/.
Buchta, Christian, and Kurt Hornik. 2025. ISOcodes: Selected ISO Codes. https://doi.org/10.32614/CRAN.package.ISOcodes.
Chamberlain, Scott, Hao Zhu, Najko Jahn, Carl Boettiger, and Karthik Ram. 2025. rcrossref: Client for Various “CrossRef” “APIs”. https://doi.org/10.32614/CRAN.package.rcrossref.
Chang, Winston, Joe Cheng, JJ Allaire, Carson Sievert, Barret Schloerke, Yihui Xie, Jeff Allen, Jonathan McPherson, Alan Dipert, and Barbara Borges. 2025. shiny: Web Application Framework for R. https://doi.org/10.32614/CRAN.package.shiny.
Cheng, Joe, Winston Chang, Steve Reid, James Brown, Bob Trower, and Alexander Peslyak. 2025. httpuv: HTTP and WebSocket Server Library. https://doi.org/10.32614/CRAN.package.httpuv.
Cheng, Joe, Barret Schloerke, Bhaskar Karambelkar, and Yihui Xie. 2024. leaflet: Create Interactive Web Maps with the JavaScript “Leaflet” Library. https://doi.org/10.32614/CRAN.package.leaflet.
Cheng, Joe, Carson Sievert, Barret Schloerke, Winston Chang, Yihui Xie, and Jeff Allen. 2024. htmltools: Tools for HTML. https://doi.org/10.32614/CRAN.package.htmltools.
Csárdi, Gábor, Kirill Müller, and Jim Hester. 2023. desc: Manipulate DESCRIPTION Files. https://doi.org/10.32614/CRAN.package.desc.
Eddelbuettel, Dirk. 2024. digest: Create Compact Hash Digests of R Objects. https://doi.org/10.32614/CRAN.package.digest.
Fay, Colin. 2020. attempt: Tools for Defensive Programming. https://doi.org/10.32614/CRAN.package.attempt.
Fay, Colin, Vincent Guyader, Sébastien Rochette, and Cervan Girard. 2024. golem: A Framework for Robust Shiny Applications. https://doi.org/10.32614/CRAN.package.golem.
Gagolewski, Marek. 2022. “stringi: Fast and Portable Character String Processing in R.” Journal of Statistical Software 103 (2): 1–59. https://doi.org/10.18637/jss.v103.i02.
Garbett, Shawn P, Jeremy Stephens, Kirill Simonov, Yihui Xie, Zhuoer Dong, Hadley Wickham, Jeffrey Horner, et al. 2024. yaml: Methods to Convert R Data to YAML and Back. https://doi.org/10.32614/CRAN.package.yaml.
Guyader, Vincent, Sébastien Rochette, Murielle Delmotte, and Swann Floc’hlay. 2025. attachment: Deal with Dependencies. https://doi.org/10.32614/CRAN.package.attachment.
Hester, Jim, and Jennifer Bryan. 2024. glue: Interpreted String Literals. https://doi.org/10.32614/CRAN.package.glue.
Hester, Jim, Lionel Henry, Kirill Müller, Kevin Ushey, Hadley Wickham, and Winston Chang. 2024. withr: Run Code “With” Temporarily Modified Global State. https://doi.org/10.32614/CRAN.package.withr.
Müller, Kirill. 2020. here: A Simpler Way to Find Your Files. https://doi.org/10.32614/CRAN.package.here.
Ottolinger, Philipp. 2024. bib2df: Parse a BibTeX File to a Data Frame. https://doi.org/10.32614/CRAN.package.bib2df.
Owen, Jonathan. 2021. rhandsontable: Interface to the “Handsontable.js” Library. https://doi.org/10.32614/CRAN.package.rhandsontable.
Perrier, Victor, Fanny Meyer, and David Granjon. 2025. shinyWidgets: Custom Inputs Widgets for Shiny. https://doi.org/10.32614/CRAN.package.shinyWidgets.
R Core Team. 2025. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
Richardson, Neal, Ian Cook, Nic Crane, Dewey Dunnington, Romain François, Jonathan Keane, Dragoș Moldovan-Grünfeld, Jeroen Ooms, Jacob Wujciak-Jens, and Apache Arrow. 2025. arrow: Integration to “Apache” “Arrow”. https://doi.org/10.32614/CRAN.package.arrow.
Rodriguez-Sanchez, Francisco, and Connor P. Jackson. 2024. grateful: Facilitate Citation of R Packages. https://pakillo.github.io/grateful/.
Schloerke, Barret. 2025. shinytest2: Testing for Shiny Applications. https://doi.org/10.32614/CRAN.package.shinytest2.
Sievert, Carson. 2020. Interactive Web-Based Data Visualization with R, Plotly, and Shiny. Chapman & Hall/CRC. https://plotly-r.com.
———. 2023. bsicons: Easily Work with “Bootstrap” Icons. https://doi.org/10.32614/CRAN.package.bsicons.
Sievert, Carson, Joe Cheng, and Garrick Aden-Buie. 2025. bslib: Custom “Bootstrap” “Sass” Themes for “shiny” and “rmarkdown”. https://doi.org/10.32614/CRAN.package.bslib.
Sievert, Carson, Richard Iannone, and Joe Cheng. 2023. shinyvalidate: Input Validation for Shiny Apps. https://doi.org/10.32614/CRAN.package.shinyvalidate.
Wickham, Hadley. 2011. “testthat: Get Started with Testing.” The R Journal 3: 5–10. https://journal.r-project.org/archive/2011-1/RJournal_2011-1_Wickham.pdf.
Wickham, Hadley, Mara Averick, Jennifer Bryan, Winston Chang, Lucy D’Agostino McGowan, Romain François, Garrett Grolemund, et al. 2019. “Welcome to the tidyverse.” Journal of Open Source Software 4 (43): 1686. https://doi.org/10.21105/joss.01686.
Wickham, Hadley, Jennifer Bryan, Malcolm Barrett, and Andy Teucher. 2024. usethis: Automate Package and Project Setup. https://doi.org/10.32614/CRAN.package.usethis.
Wickham, Hadley, Winston Chang, Jim Hester, and Lionel Henry. 2024. pkgload: Simulate Package Installation and Attach. https://doi.org/10.32614/CRAN.package.pkgload.
Wickham, Hadley, Joe Cheng, Aaron Jacobs, Garrick Aden-Buie, and Barret Schloerke. 2025. ellmer: Chat with Large Language Models. https://doi.org/10.32614/CRAN.package.ellmer.
Wickham, Hadley, Jim Hester, Winston Chang, and Jennifer Bryan. 2022. devtools: Tools to Make Developing R Packages Easier. https://doi.org/10.32614/CRAN.package.devtools.
Xie, Yihui, JJ Allaire, and Jeffrey Horner. 2025. markdown: Render Markdown with “commonmark”. https://doi.org/10.32614/CRAN.package.markdown.
Xie, Yihui, Joe Cheng, and Xianying Tan. 2024. DT: A Wrapper of the JavaScript Library “DataTables”. https://doi.org/10.32614/CRAN.package.DT.