---
title: "Data Discovery & Management"
code-annotations: hover
code-overflow: wrap
---
## Overview
Synthesis projects often begin with a few datasets that inspire the questions--and end up incorporating dozens or hundreds of others. Researchers may seek out data that resemble their initial datasets, but come from other climates, ecosystems, or cultural settings. Or they may find that they need data of a completely different kind to establish drivers and context. The best synthesizers are resourceful in their search for data, cautious in evaluating data quality and relevance, and meticulous in documenting data sources, treatments, and related analytical decisions. In this workshop, we will cover all these aspects in enough depth for participants to begin finding and assessing their own project data.
## Learning Objectives
After completing this module you will be able to:
- <u>Identify</u> repositories "known for" a particular type of data
- <u>Explain</u> how to effectively search for data outside of specialized repositories
- <u>Create</u> a data inventory for identified data that allows for easy re-finding of those data products
- <u>Plan</u> how to download data in a reproducible way
- <u>Perform</u> checks of the fundamental structure of a dataset
## Preparation
Each Synthesis Fellow should identify five (5) datasets of possible relevance to their project and enter the specified information into the first sheet of the "data-inventory" Google Sheet.
## Networking Session
::: panel-tabset
### 2024 Guests
To motivate this module and provide some beneficial context, we're beginning with a conversation with a panel composed of people who work at various organizations with a focus on data management and production. See the tabs below for each year's panelists and links to their professional sites.
Panelists will briefly introduce themselves and describe their roles. They will then speak to the kinds of data available at their organization and the strengths, limitations, and quirks of those data products from a synthesis lens. Individuals not associated with data repositories will instead share their experience working with specific types of data. Time allowing, panelists will talk about their experiences working at their organizations more broadly.
- Dr. [Greg Maurer](https://greg.pronghorns.net/index.html), Environmental Data Initiative (EDI) and Jornada LTER
- Dr. [Eric Sokol](https://www.neonscience.org/person/eric-sokol), Staff Scientist, Quantitative Ecology, National Ecological Observatory Network (NEON)
- Dr. [Nicole Kaplan](https://www.ars.usda.gov/people-locations/person?person-id=51562), Computational Biologist, U.S. Department of Agriculture-Agricultural Research Service (USDA-ARS)
- Dr. [Steve Formel](https://www.usgs.gov/staff-profiles/stephen-k-formel), Biologist, USGS Science, Analytics, and Synthesis Program and node manager for the Ocean Biodiversity Information System - USA (OBIS-USA) and the Global Biodiversity Information Facility US (GBIF-US)
#### Pre-Prepared Questions
- What policies are in place to ensure responsible use of your data?
- What challenges (technical and scientific) do you see in integrating data across platforms and organizations?
- Are you aware of any open sources of code useful for downloading, wrangling, or analyzing data in your repository?
- How can young scientists and data professionals contribute to the work being done by your organizations?
:::
## Data Repositories
There are *a lot* of specialized data repositories out there. These organizations either are primarily dedicated to storing and managing data or devote a substantive proportion of their efforts to those operations. In synthesis work, you may already have some datasets in-hand at the outset but it is likely that **you will need to find more data to test your hypotheses**. Data repositories are a great way of finding/accessing data that are relevant to your questions.
You'll become familiar with many of these as you go searching for particular types of data but, to help speed you along, see the list below for a non-exhaustive set of repositories that have proved useful to other synthesis projects in the past. They are in alphabetical order. If the "{{< fa brands r-project >}} Package" column contains the GitHub logo ({{< fa brands github >}}) then the package is available on GitHub but not on CRAN (or was not at the time of writing).
| **Name** | **Description** | {{< fa brands r-project >}} **Package** |
|:------------------------:|:--------------------|:------------------------:|
| [AmeriFlux](https://ameriflux.lbl.gov/data/data-policy/) | Provides data on carbon, water, and energy fluxes in ecosystems across the Americas, aiding in climate change and carbon cycle research. | [`amerifluxr`](https://cran.r-project.org/web/packages/amerifluxr/index.html) |
| [DataONE](https://www.dataone.org/) | Aggregates environmental and ecological data from global sources, focusing on biodiversity, climate, and ecosystem research. | [`dataone`](https://cran.r-project.org/web/packages/dataone/index.html) |
| [EDI](https://edirepository.org/) | Contains a wide range of ecological and environmental datasets, including long-term observational data, experimental results, and field studies from diverse ecosystems. | [`EDIutils`](https://cran.r-project.org/web/packages/EDIutils/index.html) |
| [ESS-DIVE](https://ess-dive.lbl.gov/) | The Environmental System Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE) includes a variety of observational, experimental, modeling and other data products from a wide range of ecological and urban systems. | -- |
| [GBIF](https://www.gbif.org/) | The Global Biodiversity Information Facility (GBIF) aggregates global species occurrence data and biodiversity records, supporting research in species distribution and conservation. | [`rgbif`](https://cran.r-project.org/web/packages/rgbif/index.html) |
| [Google Earth Engine](https://earthengine.google.com/) | Google Earth Engine is a cloud-based geospatial analysis platform that provides access to vast amounts of satellite imagery and environmental data for monitoring and understanding changes in the Earth's surface. | {{< fa brands github >}} [`rgee`](https://github.com/r-spatial/rgee) |
| [Microsoft Planetary Computer](https://planetarycomputer.microsoft.com/) | The Microsoft Planetary Computer is a cloud-based platform that combines global environmental datasets with advanced analytical tools to support sustainability and ecological research. | {{< fa brands github >}} [`rstac`](https://github.com/brazil-data-cube/rstac) |
| [NASA](https://data.nasa.gov/) | Provides data on earth science, space exploration, and climate, including satellite imagery and observational data for both terrestrial and extraterrestrial studies. Nice GUI-based data download via [AppEEARS](https://appeears.earthdatacloud.nasa.gov/). | [`nasadata`](https://cran.r-project.org/web/packages/nasadata/index.html) |
| [NCBI](https://www.ncbi.nlm.nih.gov/) | Hosts genomic and biological data, including DNA, RNA, and protein sequences, supporting genomics and molecular biology research. | [`rentrez`](https://cran.r-project.org/web/packages/rentrez/index.html) |
| [NEON](https://data.neonscience.org/) | Provides ecological data from U.S. field sites, covering biodiversity, ecosystems, and environmental changes, supporting large-scale ecological research. | [`neonUtilities`](https://cran.r-project.org/web/packages/neonUtilities/index.html) |
| [NOAA](https://data.noaa.gov/onestop/) | Offers meteorological, oceanographic, and climate data, essential for understanding atmospheric conditions, marine environments, and long-term climate trends. | {{< fa brands github >}} [`EpiNOAA-R`](https://github.com/NOAA-Big-Data-Program/EpiNOAA-R) |
| [Open Traits Network](https://opentraits.org/datasets.html) | While not a repository *per se*, the Open Traits Network has compiled an extensive list of repositories for trait data. Check out their repository inventory. | -- |
| [USGS](https://www.usgs.gov/products/data/all-data) | Hosts data on geology, hydrology, biology, and geography, including topographical maps and natural resource assessments. | [`dataRetrieval`](https://cran.r-project.org/web/packages/dataRetrieval/index.html) |
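Several of the packages flagged with the GitHub logo above are not on CRAN but can still be installed in a scripted way. Below is a minimal sketch using the `remotes` package, with `rgee` (from the table above) as the example; any other "owner/repo" reference would work the same way.

```{r install-github-pkg}
#| eval: false
# Install the 'remotes' package (if needed)
## install.packages("remotes")

# Install a GitHub-only package via its "owner/repo" reference
remotes::install_github("r-spatial/rgee")

# Once installed, load it like any other package
library(rgee)
```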
## General Data Searches
If you don't find what you're looking for in a particular data repository (or want to look for data not included in one of those platforms), you might want to consider a broader search. For instance, [Google](https://www.google.com) is a surprisingly good resource for finding data and--for those familiar with using Google Scholar for peer-reviewed literature--there is a dataset-specific variant of Google called [Google Dataset Search](https://datasetsearch.research.google.com/).
#### Search Operators
Virtually all search engines support "operators" to create more effective queries (i.e., search parameters). If you don't use operators, most systems will just return results that contain any of the words in your search, which is not ideal when you're looking for very specific criteria in candidate datasets.
See the tabs below for some useful operators that might help narrow your dataset search even when using more general platforms.
::: panel-tabset
#### Quotes
Use quotation marks (`""`) to **search for an exact phrase**. This is particularly useful when you need specific data points or exact wording.
Example: `"reef biodiversity"`
#### Wildcard
Use an asterisk (`*`) to **search using a placeholder for any word or phrase in the query**. This is useful for finding variations of a term.
Example: `Pinus * data`
#### Plus
Use a plus sign (`+`) to **search using more than one query *at the same time***. This is useful when you need combinations of criteria to be met.
Example: `bat + cactus`
#### OR
Use the 'or' operator (`OR`) to **search for either one term *or* another**. It broadens your search to include multiple terms.
Example: `"prairie pollinator" OR "grassland pollinator"`
#### Minus
Use a minus sign (`-`; a.k.a. "hyphen") to **exclude certain words from your search**. Useful to filter out irrelevant results.
Example: `marine biodiversity data -fishery`
#### Site
Use the site operator (`site:`) to **search within a specific website or domain**. This is helpful when you're looking for data from a particular source.
Example: `site:.gov bird data`
#### File Type
Use the file type operator (`filetype:`) to **search for data with a specific file extension**. Useful to make sure the data you find are already in a format you can interact with.
Example: `filetype:tif precipitation data`
#### In Title
Use the 'in title' operator (`intitle:`) to **search for pages that have a specific word in the title**. This can narrow down your results to more relevant pages.
Example: `intitle:"lithology"`
#### In URL
Use the 'in URL' operator (`inurl:`) to **search for pages that have a specific word in the URL**. This can help locate data repositories or specific datasets.
Example: `inurl:data soil chemistry`
:::
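Note that these operators can be combined into a single, more surgical query. For example, the following (hypothetical) search asks for an exact phrase, restricts results to government domains, and excludes an irrelevant term:

```
site:.gov "soil carbon" data -agriculture
```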
## Data Inventory Value
Documenting potential datasets (and their metadata) thoroughly in a data inventory provides numerous benefits! These include:
- A well-documented data inventory table makes it easier for researchers to find and access specific data for reproducible research
- Documentation will help researchers to quickly understand the context, scope, and limitations of the data, reducing the time spent on preliminary data assessment
- Detailed documentation will speed up the data publication process (e.g., by recording data provenance, differences among methods, etc.)
::: {.callout-note icon="false"}
#### Activity: Data Inventory
**Part 1** (\~25 min)
In your project groups:
- Review your data inventory Google Sheet and discuss why you chose the datasets included.
- Discuss what key information is needed to determine if each dataset is useful for your project. The columns from the detailed data inventory sheet (Data_Inventory_V2) might help to guide the conversation.
- Decide on the datasets you want to use in your analysis, ensuring there are at least as many datasets as group members. Each group member will self-assign at least one dataset.
- *Later, each person will download their assigned dataset*
- After identifying the necessary information, begin filling out the detailed data inventory Google Sheet. Focus on completing the columns that are relevant for making informed decisions.
- *This sheet will be shared with another group in Part 2*
**Part 2** (\~10 min)
- You are assigned to review the detailed data inventory table (Data_Inventory_V2) from another project group
- Each group member self-assigns one dataset that you want to review.
- Try to find the exact data file to which you were assigned
- Do you agree with the information entered in the data inventory?
- Is there any information you think should be in the data inventory that wasn’t?
:::
::: {.callout-warning icon="false"}
#### Discussion: Data Inventory
Return to the main room and let's discuss (some of) the following questions:
- Which elements of the data inventory table made it easier or more difficult to find the data?
- What challenges did you encounter while searching for the datasets?
- What is your plan for downloading the data?
:::
## Downloading Data
Once you've found data, filled out your data inventory, and decided which datasets you actually want, it's time to download some of them! There are several methods you can use and no single method will work in all cases, so it's important to be at least somewhat familiar with several of these tools.
Most of these methods will work regardless of the format of the data (i.e., its file extension) but the format of the data will be important when you want to 'read in' the data and begin to work with it.
Below are some example code chunks for five methods of downloading data in a scripted way. There will be contexts where only a <u>G</u>raphical <u>U</u>ser <u>I</u>nterface ("GUI"; \[GOO-ee\]) is available, but the details of that method of downloading are usually specific to the portal you're accessing, so we won't include an artificial general case.
::: panel-tabset
### Data Entity URL
Sometimes you might have a URL directly to a particular dataset (usually one hosted by a data repository). If you copied/pasted this URL into your browser, the download would begin automatically. However, we want to make our workflows entirely scripted (or close to it), so see the example below for how to download data via a data entity URL.
The dataset we download below is one collected at the Santa Barbara Coastal (SBC) LTER on [California spiny lobster (*Panulirus interruptus*) populations](https://portal.edirepository.org/nis/mapbrowse?packageid=knb-lter-sbc.77.10).
```{r download-entity-url}
#| eval: false
# Define URL as an object
dt_url <- "https://pasta.lternet.edu/package/data/eml/knb-lter-sbc/77/10/f32823fba432f58f66c06b589b7efac6" #<1>
# Read it into R
lobster_df <- read.csv(file = dt_url)
```
1. You can typically find this URL in the repository where you found the dataset
### R Package
If you're quite lucky, the data you want might be stored in a repository that developed (and maintains!) an {{< fa brands r-project >}} R package. These packages may or may not be on CRAN (packages can often also be found on GitHub or Bioconductor). Typically these packages have a "vignette" that demonstrates how their functions should be used.
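If you're not sure whether an installed package ships a vignette, you can check without leaving R. A quick sketch using the `dataRetrieval` package featured below:

```{r find-vignettes}
#| eval: false
# List the vignettes available for an installed package
vignette(package = "dataRetrieval")

# Or open them in your web browser
browseVignettes(package = "dataRetrieval")
```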
Consider the following example adapted from the USGS `dataRetrieval` [package vignette](https://cran.r-project.org/web/packages/dataRetrieval/vignettes/dataRetrieval.html). Visit [USGS National Water Dashboard interactive map](https://dashboard.waterdata.usgs.gov/app/nwd/en/) to find site numbers and check data availability.
```{r download-r-pkg}
#| eval: false
# Load needed packages
## install.packages("librarian")
librarian::shelf(dataRetrieval)
# Set up the parameters for the Santa Ynez River site
siteNumber <- "11133000" # USGS site number for Santa Ynez River at Narrows
parameterCd <- "00060" # Parameter code for discharge (cubic feet per second)
startDate <- "2024-01-01" # Start date
endDate <- "2024-01-31" # End date
# Retrieve daily discharge data
dischargeData <- readNWISdv(siteNumber, parameterCd, startDate, endDate)
# View the first few rows of the data
head(dischargeData)
```
### Batch Download
You may want to download several data files hosted in the same repository online for different spatial/temporal replicates. You *could* try to use the data entity URL or an associated {{< fa brands r-project >}} package (if one exists), or you could write code to do a "batch download" where you download each file using a piece of code that repeats itself as many times as needed.
The dataset we demonstrate downloading below is [NOAA weather station data](https://www1.ncdc.noaa.gov/pub/data/gsod/). Specifically, it is the <u>I</u>ntegrated <u>S</u>urface <u>D</u>ata (ISD).
```{r download-batch}
#| eval: false
# Specify the start/end years for which you want to download data
target_years <- 2000:2005
# Loop across years
for(focal_year in target_years){
# Message a progress note
message("Downloading data for ", focal_year) # <1>
# Assemble the URL manually
focal_url <- paste0( "https://www1.ncdc.noaa.gov/pub/data/gsod/", focal_year, "/gsod_", focal_year, ".tar") # <2>
# Assemble your preferred file name once it's downloaded
focal_file <- paste0("gsod_", focal_year, ".tar") # <3>
# Download the data
utils::download.file(url = focal_url, destfile = focal_file, method = "curl")
}
```
1. This message isn't required but can be nice! Downloading data can take a *long* time so including a progress message can reassure you that your R session hasn't crashed
2. To create a working URL you'll likely need to click an example data file URL and try to *exactly* mimic its format
3. This step again isn't required but can let you exert a useful level of control over the naming convention of your data file(s)
### API Call
In slightly more complicated contexts, you'll need to make a request via an <u>A</u>pplication <u>P</u>rogramming <u>I</u>nterface ("API"). As the name might suggest, these platforms serve as a bridge between some application and code. Using such a method to download data is a relatively frequent occurrence in synthesis work.
Here we'll demonstrate an API call for NOAA's [Tides and Currents](https://tidesandcurrents.noaa.gov/) data.
```{r download-api}
#| eval: false
# Load needed packages
## install.packages("librarian")
librarian::shelf(httr, jsonlite)
# Define a 'custom function' to fetch desired data
fetch_tide <- function(station_id, product = "predictions", datum = "MLLW", time_zone = "lst_ldt", units = "english", interval = "h", format = "json"){ # <1>
# Custom error flags # <2>
# Get a few key dates (relative to today)
yesterday <- Sys.Date() - 1
two_days_from_now <- Sys.Date() + 2
# Adjust begin/end dates
begin_date <- format(yesterday, "%Y%m%d")
end_date <- format(two_days_from_now, "%Y%m%d")
# Construct the API URL
tide_url <- paste0( # <3>
"https://api.tidesandcurrents.noaa.gov/api/prod/datagetter?",
"product=", product,
"&application=NOS.COOPS.TAC.WL",
"&begin_date=", begin_date,
"&end_date=", end_date,
"&datum=", datum,
"&station=", station_id,
"&time_zone=", time_zone,
"&units=", units,
"&interval=", interval,
"&format=", format)
# Make the API request
response <- httr::GET(url = tide_url)
# If the request is successful...
if(httr::status_code(response) == 200){
# Parse the JSON response
tide_data <- jsonlite::fromJSON(httr::content(response, "text", encoding = "UTF-8"))
# And return it
return(tide_data)
# Otherwise...
} else {
# Pass the error message back to the user
stop("Failed to fetch tide data\nStatus code: ", httr::status_code(response))
}
}
# Invoke the function
tide_df <- fetch_tide(station_id = "9411340")
```
1. When you do need to make an API call, a custom function is a great way of standardizing your entries. This way you only need to figure out how to do the call once and from then on you can lean on the (likely more familiar) syntax of the language in which you wrote the function!
2. We're excluding error checks for simplicity's sake but **you will want to code informative error checks**. Basically you want to consider inputs to the function that would break it and pre-emptively stop the function (with an informative message) when those malformed inputs are received
3. Just like the batch download, we need to assemble the URL that the API is expecting
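To make the second annotation above slightly more concrete, here is a minimal sketch of the sort of pre-emptive checks you might place at the top of `fetch_tide()`. The specific conditions and messages are illustrative assumptions; tailor them to the inputs your function actually receives.

```{r api-error-checks}
#| eval: false
# Example error checks for the top of `fetch_tide()`
## Station IDs must be supplied as a single character string
if(!is.character(station_id) || length(station_id) != 1){
  stop("`station_id` must be a single character string (e.g., '9411340')")
}

## Return format must be one the API supports
if(!format %in% c("json", "xml", "csv")){
  stop("`format` must be one of 'json', 'xml', or 'csv'")
}
```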
### Command Line
While many ecologists are trained in programming languages like R or Python, some operations require the <u>C</u>ommand <u>L</u>ine <u>I</u>nterface ("CLI"; a.k.a. "shell", "bash", "terminal", etc.). **Don't worry if you're new to this language!** There are a lot of good resources for learning the fundamentals, including The Carpentries' workshop "[The Unix Shell](https://swcarpentry.github.io/shell-novice/)".
Below we demonstrate download via command line for NASA [OMI/Aura Sulfur Dioxide (SO2)](https://disc.gsfc.nasa.gov/datasets/OMSO2e_003/summary?keywords=AURA_OMI_LEVEL3). The OMI science team produces this Level-3 Aura/OMI Global OMSO2e data product (0.25 degree latitude/longitude grids) for atmospheric analysis.
> Step 1: Using the "subset/Get Data" tab on the right-hand side of the [data page](https://disc.gsfc.nasa.gov/datasets/OMSO2e_003/summary?keywords=AURA_OMI_LEVEL3), generate a list of file names for your specified target area and time period. Download the list of links as a TXT file named "list.txt". Be sure to document the target area and temporal coverage you selected in your data inventory table.
```{bash download-cli-1}
#| eval: false
https://acdisc.gesdisc.eosdis.nasa.gov/opendap/HDF-EOS5/ncml/Aura_OMI_Level3/OMSO2e.003/2023/OMI-Aura_L3-OMSO2e_2023m0802_v003-2023m0804t120832.he5.ncml.nc4?ColumnAmountSO2[119:659][0:1439],lat[119:659],lon[0:1439]
https://acdisc.gesdisc.eosdis.nasa.gov/opendap/HDF-EOS5/ncml/Aura_OMI_Level3/OMSO2e.003/2023/OMI-Aura_L3-OMSO2e_2023m0805_v003-2023m0807t093718.he5.ncml.nc4?ColumnAmountSO2[119:659][0:1439],lat[119:659],lon[0:1439]
https://acdisc.gesdisc.eosdis.nasa.gov/opendap/HDF-EOS5/ncml/Aura_OMI_Level3/OMSO2e.003/2023/OMI-Aura_L3-OMSO2e_2023m0806_v003-2023m0809t092629.he5.ncml.nc4?ColumnAmountSO2[119:659][0:1439],lat[119:659],lon[0:1439]
https://acdisc.gesdisc.eosdis.nasa.gov/opendap/HDF-EOS5/ncml/Aura_OMI_Level3/OMSO2e.003/2023/OMI-Aura_L3-OMSO2e_2023m0807_v003-2023m0809t092635.he5.ncml.nc4?ColumnAmountSO2[119:659][0:1439],lat[119:659],lon[0:1439]
https://acdisc.gesdisc.eosdis.nasa.gov/opendap/HDF-EOS5/ncml/Aura_OMI_Level3/OMSO2e.003/2023/OMI-Aura_L3-OMSO2e_2023m0808_v003-2023m0810t092721.he5.ncml.nc4?ColumnAmountSO2[119:659][0:1439],lat[119:659],lon[0:1439]
https://acdisc.gesdisc.eosdis.nasa.gov/opendap/HDF-EOS5/ncml/Aura_OMI_Level3/OMSO2e.003/2023/OMI-Aura_L3-OMSO2e_2023m0809_v003-2023m0811t101920.he5.ncml.nc4?ColumnAmountSO2[119:659][0:1439],lat[119:659],lon[0:1439]
```
> Step 2: Open a command line window and execute the wget command below. Replace the placeholders for username and password with your Earthdata login credentials.
```{bash download-cli-2}
#| eval: false
wget -nc --load-cookies ~/.urs_cookies --save-cookies ~/.urs_cookies --keep-session-cookies \
  --user=XXX --password=XXX --content-disposition -i list.txt
```
> If you encounter any issue, follow this step-by-step guide on [using wget and curl specifically with the GES DISC data system](https://disc.gsfc.nasa.gov/information/howto?title=How%20to%20Access%20GES%20DISC%20Data%20Using%20wget%20and%20curl).
:::
::: {.callout-note icon="false"}
#### Activity: Data Download (~25 mins)
- Each member should work on the dataset that they have been assigned.
- Discuss with your group how to collaborate on coding without creating merge conflicts
- *Many right answers here so discuss the pros/cons of each and pick one that feels best for your group!*
- Write a script **for your group** to download data using your chosen method
- Zoom rooms for each download method will be available. You are encouraged to join the room that corresponds to your chosen method to discuss with others working on the same approach.
- If no datasets in your group's inventory need the download method you chose, try to run the example code provided
:::
## Data Format and Structure
CSV and TXT are common formats for data storage. In addition, formats like NetCDF, HDF5, MATLAB, and RData/RDS are frequently used in research, along with spatial datasets such as GeoTIFF, shapefile, and raster files (refer to the [spatial module](https://lter.github.io/ssecr/mod_spatial.html) for more details).
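As a point of reference, here is a minimal sketch of reading several of these formats into R. The file names are hypothetical placeholders and the NetCDF line assumes you have the `ncdf4` package installed.

```{r read-formats}
#| eval: false
# CSV / TXT (base R)
my_csv <- read.csv(file = "my-data.csv")
my_txt <- read.table(file = "my-data.txt", header = TRUE)

# RDS (a single serialized R object)
my_rds <- readRDS(file = "my-data.rds")

# RData (may contain several objects; they are loaded by name)
load(file = "my-data.RData")

# NetCDF (via the `ncdf4` package)
## install.packages("ncdf4")
my_nc <- ncdf4::nc_open(filename = "my-data.nc")
```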
In R, information about a dataset's structure can be retrieved in a variety of ways. We'll go ahead and load the `dplyr` package as well so we can include one Tidyverse option for evaluating structure.
```{r data-check}
#| eval: false
# Load needed libraries
library(dplyr)
# Define URL as an object
dt_url <- "https://pasta.lternet.edu/package/data/eml/knb-lter-sbc/77/10/f32823fba432f58f66c06b589b7efac6"
# Read it into R
lobster_df <- read.csv(file = dt_url)
# Displays the first few rows of the dataset to provide a quick overview
head(lobster_df)
# Provides a summary of the dataset, including basic statistics for each variable
summary(lobster_df)
# Displays the structure of the dataset, including data types and a preview of the data
str(lobster_df)
# Offers a transposed, compact overview of the dataset's structure, data types, and values
dplyr::glimpse(lobster_df) #<1>
# Checks if there are any missing values (NA) in the dataset
anyNA(lobster_df)
```
1. This is the Tidyverse equivalent of `str`. The information looks different because the data are coerced into a "tibble" behind the scenes
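If `anyNA()` returns `TRUE`, a natural follow-up is asking *where* the missing values live. One quick base R option (sketched here with the lobster data from above):

```{r count-nas}
#| eval: false
# Count the missing values in each column of the dataset
colSums(is.na(lobster_df))
```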
## Additional Resources
### Papers & Documents
- British Ecological Society (BES). [Better Science Guides: Data Management Guide](https://www.britishecologicalsociety.org/publications/better-science/). **2024**.
### Workshops & Courses
- LTER Scientific Computing Team. [Data Acquisition Guide for TRY and AppEEARS](https://lter.github.io/scicomp/internal_get-data.html). **2024**.
- National Center for Ecological Analysis and Synthesis (NCEAS) Learning Hub. [coreR: Data Management Essentials](https://learning.nceas.ucsb.edu/2023-10-coreR/session_14.html). **2023**.
- NCEAS Learning Hub. [UCSB Faculty Seminar Series: Data Management Essentials and the FAIR & CARE Principles](https://learning.nceas.ucsb.edu/2023-09-ucsb-faculty/session_04.html). **2023**.
- NCEAS Learning Hub. [UCSB Faculty Seminar Series: Writing Data Management Plans](https://learning.nceas.ucsb.edu/2023-09-ucsb-faculty/session_05.html). **2023**.
### Websites
- Environmental Data Initiative (EDI) [Data Portal](https://portal.edirepository.org/nis/advancedSearch.jsp)
- DataONE [Data Catalog](https://search.dataone.org/data)
- Ocean Observatories Initiative (OOI) [Data Explorer](https://dataexplorer.oceanobservatories.org/)
- Global Biodiversity Information Facility (GBIF) [Data Portal](https://www.gbif.org/)
- iDigBio Digitized [Specimen Portal](https://www.idigbio.org/portal)
- [LTAR Data Dashboards and Visualizations](https://ltar.ars.usda.gov/data/data-dashboards/)
- [LTAR Group Data](https://agdatacommons.nal.usda.gov/Long_Term_Agroecosystem_Research/groups) within the Ag Data Commons, the digital repository of the National Agricultural Library
- [Data Is Plural](https://www.data-is-plural.com/) and its [data list](https://docs.google.com/spreadsheets/d/1wZhPLMCHKJvwOkP4juclhjFgqIY8fQFMemwKL2c64vk/edit?gid=0#gid=0) for exploring cool datasets in various domains