105 | 105 | },
106 | 106 | "outputs": [],
107 | 107 | "source": [
108 |     | - "region_a.show_custom_options(dictview=True)"
    | 108 | + "region_a.show_custom_options()"
109 | 109 | ]
110 | 110 | },
111 | 111 | {
|
|
153 | 153 | },
154 | 154 | {
155 | 155 | "cell_type": "markdown",
156 |     | - "metadata": {
157 |     | - "user_expressions": []
158 |     | - },
    | 156 | + "metadata": {},
159 | 157 | "source": [
160 |     | - "### Determine what variables are available for your data product\n",
161 |     | - "There are multiple ways to get a complete list of available variables.\n",
162 |     | - "To increase readability, some display options (2 and 3, below) show the 200+ variable + path combinations as a dictionary where the keys are variable names and the values are the paths to that variable.\n",
    | 158 | + "## _Why not just download all the data and subset locally? What if I need more variables/granules?_\n",
163 | 159 | "\n",
164 |     | - "1. `region_a.order_vars.avail`, a list of all valid path+variable strings\n",
165 |     | - "2. `region_a.show_custom_options(dictview=True)`, all available subsetting options\n",
166 |     | - "3. `region_a.order_vars.parse_var_list(region_a.order_vars.avail)`, a dictionary of variable:paths key:value pairs"
    | 160 | + "_Taking advantage of the NSIDC subsetter is a great way to reduce your download size and thus your download time and the amount of storage required, especially if you're storing your data locally during analysis. By downloading your data using icepyx, it is easy to go back and get additional data with the same, similar, or different parameters (e.g. you can keep the same spatial and temporal bounds but change the variable list). Related tools (e.g. [`captoolkit`](https://github.com/fspaolo/captoolkit)) will let you easily merge files if you're uncomfortable merging them during read-in for processing._"
167 | 161 | ]
168 | 162 | },
169 | 163 | {
170 | 164 | "cell_type": "code",
171 | 165 | "execution_count": null,
172 |     | - "metadata": {
173 |     | - "tags": []
174 |     | - },
    | 166 | + "metadata": {},
175 | 167 | "outputs": [],
176 | 168 | "source": [
177 |     | - "region_a.order_vars.avail()"
178 |     | - ]
179 |     | - },
180 |     | - {
181 |     | - "cell_type": "markdown",
182 |     | - "metadata": {
183 |     | - "user_expressions": []
184 |     | - },
185 |     | - "source": [
186 |     | - "By passing the boolean `options=True` to the `avail` method, you can obtain lists of unique possible variable inputs (var_list inputs) and path subdirectory inputs (keyword_list and beam_list inputs) for your data product. These can be helpful for building your wanted variable list."
    | 169 | + "short_name = 'ATL06'\n",
    | 170 | + "spatial_extent = './supporting_files/simple_test_poly.gpkg'\n",
    | 171 | + "date_range = ['2019-10-01','2019-10-05']"
187 | 172 | ]
188 | 173 | },
189 | 174 | {
190 | 175 | "cell_type": "code",
191 | 176 | "execution_count": null,
192 |     | - "metadata": {
193 |     | - "tags": []
194 |     | - },
195 |     | - "outputs": [],
196 |     | - "source": [
197 |     | - "region_a.order_vars.avail(options=True)"
198 |     | - ]
199 |     | - },
200 |     | - {
201 |     | - "cell_type": "markdown",
202 | 177 | "metadata": {},
    | 178 | + "outputs": [],
203 | 179 | "source": [
204 |     | - "## _Why not just download all the data and subset locally? What if I need more variables/granules?_\n",
    | 180 | + "region_a = ipx.Query(short_name, spatial_extent\n",
    | 181 | + ", \n",
    | 182 | + " cycles=['03','04','05','06'], tracks=['0849','0902'])\n",
205 | 183 | "\n",
206 |     | - "_Taking advantage of the NSIDC subsetter is a great way to reduce your download size and thus your download time and the amount of storage required, especially if you're storing your data locally during analysis. By downloading your data using icepyx, it is easy to go back and get additional data with the same, similar, or different parameters (e.g. you can keep the same spatial and temporal bounds but change the variable list). Related tools (e.g. [`captoolkit`](https://github.com/fspaolo/captoolkit)) will let you easily merge files if you're uncomfortable merging them during read-in for processing._"
    | 184 | + "print(region_a.product)\n",
    | 185 | + "print(region_a.product_version)\n",
    | 186 | + "print(region_a.cycles)\n",
    | 187 | + "print(region_a.tracks)\n",
    | 188 | + "print(region_a.spatial_extent)"
207 | 189 | ]
208 | 190 | },
209 | 191 | {
210 |     | - "cell_type": "markdown",
    | 192 | + "cell_type": "code",
    | 193 | + "execution_count": null,
211 | 194 | "metadata": {},
    | 195 | + "outputs": [],
212 | 196 | "source": [
213 |     | - "### Building the default wanted variable list"
    | 197 | + "region_a.visualize_spatial_extent()"
214 | 198 | ]
215 | 199 | },
216 | 200 | {
217 |     | - "cell_type": "code",
218 |     | - "execution_count": null,
    | 201 | + "cell_type": "markdown",
219 | 202 | "metadata": {},
220 |     | - "outputs": [],
221 | 203 | "source": [
222 |     | - "region_a.order_vars.wanted"
    | 204 | + "We can still print a list of available granules for our query"
223 | 205 | ]
224 | 206 | },
225 | 207 | {
|
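The added cells above query by orbital parameters (`cycles`, `tracks`) rather than a `date_range`, and pass both as zero-padded strings (`'03'`, `'0849'`). A minimal stdlib sketch of producing that form from plain integers — the helper names are hypothetical, not part of icepyx:

```python
def normalize_cycles(cycles):
    # ICESat-2 91-day repeat cycles are conventionally written as
    # two-digit, zero-padded strings, e.g. 3 -> '03'.
    return [f"{int(c):02d}" for c in cycles]

def normalize_tracks(tracks):
    # Reference ground tracks (RGTs) are conventionally written as
    # four-digit, zero-padded strings, e.g. 849 -> '0849'.
    return [f"{int(t):04d}" for t in tracks]

print(normalize_cycles([3, 4, 5, 6]))  # ['03', '04', '05', '06']
print(normalize_tracks([849, 902]))    # ['0849', '0902']
```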
228 | 210 | "metadata": {},
229 | 211 | "outputs": [],
230 | 212 | "source": [
231 |     | - "region_a.order_vars.append(defaults=True)\n",
232 |     | - "pprint(region_a.order_vars.wanted)"
    | 213 | + "region_a.avail_granules(cloud=True)"
233 | 214 | ]
234 | 215 | },
235 | 216 | {
|
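The replacement call `region_a.avail_granules(cloud=True)` lists the matching granules. If you want to sanity-check which cycle and track a listed granule belongs to, the RGT and cycle are encoded in the standard ICESat-2 granule naming convention (`ATL06_[yyyymmdd][hhmmss]_[ttttccss]_[vvv]_[rr].h5`). A small stdlib sketch, using a made-up granule ID:

```python
def rgt_and_cycle(granule_id):
    # The third underscore-delimited field is ttttccss:
    # tttt = reference ground track, cc = cycle, ss = orbital segment.
    field = granule_id.split("_")[2]
    return field[:4], field[4:6]

# Made-up granule ID following the ATL06 naming convention:
print(rgt_and_cycle("ATL06_20191004123456_08490306_006_02.h5"))  # ('0849', '03')
```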
247 | 228 | "metadata": {},
248 | 229 | "outputs": [],
249 | 230 | "source": [
250 |     | - "region_a.subsetparams(Coverage=region_a.order_vars.wanted)"
    | 231 | + "order = region_a.order_granules(subset=True) \n",
    | 232 | + "order"
251 | 233 | ]
252 | 234 | },
253 | 235 | {
254 | 236 | "cell_type": "markdown",
255 | 237 | "metadata": {},
256 | 238 | "source": [
257 |     | - "Or, you can put the `Coverage` parameter directly into `order_granules`:\n",
258 |     | - "`region_a.order_granules(Coverage=region_a.order_vars.wanted)`\n",
259 |     | - "\n",
260 |     | - "However, then you cannot view your subset parameters (`region_a.subsetparams`) prior to submitting your order."
    | 239 | + "### Checking an order status"
261 | 240 | ]
262 | 241 | },
263 | 242 | {
|
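The removed cells passed a wanted-variable dictionary as the `Coverage` subsetting parameter; the NSIDC subsetter ultimately receives this as a comma-separated list of variable paths. A stdlib sketch of flattening such a dictionary into that form — the dictionary contents are illustrative, not taken from a real order:

```python
# Illustrative variable:paths dictionary in the style of
# region_a.order_vars.wanted (contents are made up for this sketch).
wanted = {
    "latitude": ["gt1l/land_ice_segments/latitude",
                 "gt2l/land_ice_segments/latitude"],
    "h_li": ["gt1l/land_ice_segments/h_li"],
}

# Flatten to the comma-separated path list the subsetter expects.
coverage = ",".join(f"/{path}" for paths in wanted.values() for path in paths)
print(coverage)
```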
266 | 245 | "metadata": {},
267 | 246 | "outputs": [],
268 | 247 | "source": [
269 |     | - "region_a.order_granules()# <-- you do not need to include the 'Coverage' kwarg to\n",
270 |     | - " # order if you have already included it in a call to subsetparams"
    | 248 | + "order.status()"
    | 249 | + ]
    | 250 | + },
    | 251 | + {
    | 252 | + "cell_type": "markdown",
    | 253 | + "metadata": {},
    | 254 | + "source": [
    | 255 | + "### Downloading subsetted granules"
271 | 256 | ]
272 | 257 | },
273 | 258 | {
|
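`order.status()` reports the current state of an asynchronous order, so a common pattern is to poll until a terminal state is reached. A generic polling sketch — the state names and the `status_fn` callable are stand-ins, not a documented icepyx interface:

```python
import time

def wait_for_order(status_fn, poll_seconds=5, max_polls=60):
    """Poll an order-status callable until it reaches a terminal state.

    'status_fn' stands in for something like order.status(); the
    terminal-state strings below are illustrative only.
    """
    state = None
    for _ in range(max_polls):
        state = status_fn()
        if state in ("complete", "complete_with_errors", "failed"):
            return state
        time.sleep(poll_seconds)
    return state

# Simulated order that completes on the third poll:
states = iter(["pending", "processing", "complete"])
print(wait_for_order(lambda: next(states), poll_seconds=0))  # complete
```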
|
276 | 261 | "metadata": {},
277 | 262 | "outputs": [],
278 | 263 | "source": [
279 |     | - "region_a.download_granules('/home/jovyan/icepyx/dev-notebooks/vardata') # <-- you do not need to include the 'Coverage' kwarg to\n",
280 |     | - " # download if you have already submitted it with your order"
    | 264 | + "files = order.download_granules(\"./data\")"
281 | 265 | ]
282 | 266 | },
283 | 267 | {
|
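After a call like `order.download_granules("./data")`, it is worth confirming what actually landed on disk. A stdlib sketch of that check, using a temporary directory and a made-up filename in place of a real download target:

```python
import tempfile
from pathlib import Path

# Stand-in for the download target directory; the filename below is a
# made-up granule name, not real output from a download call.
with tempfile.TemporaryDirectory() as tmp:
    data_dir = Path(tmp)
    (data_dir / "ATL06_20191004123456_08490306_006_02.h5").touch()
    files = sorted(p.name for p in data_dir.glob("*.h5"))
    print(files)
```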
|
300 | 284 | "Compare the available variables associated with the full product relative to those in your downloaded data file."
301 | 285 | ]
302 | 286 | },
303 |     | - {
304 |     | - "cell_type": "code",
305 |     | - "execution_count": null,
306 |     | - "metadata": {},
307 |     | - "outputs": [],
308 |     | - "source": [
309 |     | - "# put the full filepath to a data file here. You can get this in JupyterHub by navigating to the file,\n",
310 |     | - "# right clicking, and selecting copy path. Then you can paste the path in the quotes below.\n",
311 |     | - "fn = ''"
312 |     | - ]
313 |     | - },
314 | 287 | {
315 | 288 | "cell_type": "markdown",
316 | 289 | "metadata": {},
317 | 290 | "source": [
318 | 291 | "## Check the downloaded data\n",
    | 292 | + "\n",
    | 293 | + "### Note: this needs to be updated\n",
319 | 294 | "Get all `latitude` variables in your downloaded file:"
320 | 295 | ]
321 | 296 | },
|
341 | 316 | " if vn==varname: print(tvar) "
342 | 317 | ]
343 | 318 | },
344 |     | - {
345 |     | - "cell_type": "markdown",
346 |     | - "metadata": {},
347 |     | - "source": [
348 |     | - "### Compare to the variable paths available in the original data"
349 |     | - ]
350 |     | - },
351 |     | - {
352 |     | - "cell_type": "code",
353 |     | - "execution_count": null,
354 |     | - "metadata": {},
355 |     | - "outputs": [],
356 |     | - "source": [
357 |     | - "region_a.order_vars.parse_var_list(region_a.order_vars.avail)[0][varname]"
358 |     | - ]
359 |     | - },
360 | 319 | {
361 | 320 | "cell_type": "markdown",
362 | 321 | "metadata": {},
|
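The retained loop above prints every stored variable path whose final component matches `varname`. The same filter can be written as a comprehension over plain path strings; the paths below are illustrative ATL06-style dataset paths, not read from a real file:

```python
# Illustrative HDF5 dataset paths in the ATL06 layout (made up for this sketch).
paths = [
    "gt1l/land_ice_segments/latitude",
    "gt1l/land_ice_segments/h_li",
    "gt2l/land_ice_segments/latitude",
]

varname = "latitude"
# Keep only paths whose last '/'-separated component is the variable name.
matches = [p for p in paths if p.split("/")[-1] == varname]
print(matches)  # ['gt1l/land_ice_segments/latitude', 'gt2l/land_ice_segments/latitude']
```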
369 | 328 | ],
370 | 329 | "metadata": {
371 | 330 | "kernelspec": {
372 |     | - "display_name": "icepyx-dev",
    | 331 | + "display_name": "Python 3 (ipykernel)",
373 | 332 | "language": "python",
374 |     | - "name": "icepyx-dev"
    | 333 | + "name": "python3"
375 | 334 | },
376 | 335 | "language_info": {
377 | 336 | "codemirror_mode": {
|
383 | 342 | "name": "python",
384 | 343 | "nbconvert_exporter": "python",
385 | 344 | "pygments_lexer": "ipython3",
386 |     | - "version": "3.11.4"
    | 345 | + "version": "3.11.11"
387 | 346 | }
388 | 347 | },
389 | 348 | "nbformat": 4,
|