This repository was archived by the owner on Feb 25, 2025. It is now read-only.

Commit 01a66f5 (0 parents): initial version of docs

15 files changed: +364 −0 lines

.github/workflows/publishdocs.yaml

Lines changed: 18 additions & 0 deletions
@@ -0,0 +1,18 @@

```yaml
name: Publish docs via GitHub Pages
on:
  push:
    branches:
      - master

jobs:
  build:
    name: Deploy docs
    runs-on: ubuntu-latest
    steps:
      - name: Checkout master
        uses: actions/checkout@v1

      - name: Deploy docs
        uses: mhausenblas/mkdocs-deploy-gh-pages@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```

.gitignore

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@

```
.DS_Store
.idea/
activities/.DS_Store
protocols/.DS_Store
node_modules
local_data
```

.pre-commit-config.yaml

Lines changed: 14 additions & 0 deletions
@@ -0,0 +1,14 @@

```yaml
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.0.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
  - repo: https://github.com/psf/black
    rev: 19.3b0
    hooks:
      - id: black
```
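The config above only declares which hooks run; it does nothing until the `pre-commit` tool itself is installed and registered as a git hook. A typical setup (standard pre-commit usage, not something this commit adds) looks like:

```shell
# Install the pre-commit tool into the current Python environment
pip install pre-commit

# Register the hooks from .pre-commit-config.yaml into .git/hooks,
# so they run automatically on every `git commit`
pre-commit install

# Optionally run every configured hook against the whole tree once
pre-commit run --all-files
```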

docs/01_introduction.md

Lines changed: 39 additions & 0 deletions
@@ -0,0 +1,39 @@

# Introduction

## Advantages of using DANDI

- An open data archive to submit cellular neurophysiology data.
- A persistent, versioned, and growing collection of standardized cellular neurophysiology data.
- Rich metadata to support search across data.
- A place to house data to collaborate across research sites.
- Consistent and transparent data standards to simplify software development.
- Supported by the BRAIN Initiative and the AWS Public Dataset programs.

## The challenges

1. To know which data are useful, data have to be accessible.
1. Non-standardized datasets require significant resources to understand and to adapt code to.
1. Many different hardware platforms and custom binary formats require significant effort to consolidate into reusable datasets.
1. There are many domain-general places to house data (e.g., Open Science Framework, G-Node, Dropbox, Google Drive), but it is difficult to find relevant datasets there.
1. Datasets are growing larger, requiring compute services to be closer to the data.
1. Neurotechnology is evolving and requires flexible extensions to metadata and data storage requirements.
1. Consolidating and creating robust algorithms (e.g., spike sorting) requires varied data sources.

## Our solution

We have developed a [FAIR (Findable, Accessible, Interoperable, Reusable)](https://www.force11.org/group/fairgroup/fairprinciples) data archive to house standardized cellular neurophysiology and associated data. We use the [Neurodata Without Borders](https://nwb.org), [Brain Imaging Data Structure](BIDS), [Neuroimaging Data Model](NIDM), and other [BRAIN Initiative](https://braininitiative.nih.gov/) standards to organize and search the data. A JupyterHub-based analysis platform provides easy access to the data. The data can be accessed programmatically, allowing new software and tools to be built. The archive itself is built on a software stack of open-source products, thus enriching the ecosystem.

The archive provides persistent identifiers for versioned datasets, thus improving the reproducibility of neurophysiology research.

docs/100_about_this_doc.md

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@

# About this documentation

This documentation is a work in progress and we welcome any input: if something is missing or unclear, let us know by [opening an issue on our repository](https://github.com/dandi/handbook).

## Serving the docs locally

This project uses the [MkDocs](https://www.mkdocs.org/) tool with the [Material theme](https://squidfunk.github.io/mkdocs-material/) and extra plugins to generate the website.

To test locally, you will need to install the Python dependencies. To do that, type the following commands:

```
git clone https://github.com/dandi/handbook.git
cd handbook
pip install -r requirements.txt
```

If you are working on your *fork*, simply replace `https://github.com/dandi/handbook.git` with `git@github.com:<username>/handbook.git`, where `<username>` is your GitHub username.

Once done, you need to run MkDocs. Simply type:

```
mkdocs serve
```

Finally, open up [`http://127.0.0.1:8000/`](http://127.0.0.1:8000/) in your browser, and you should see the default home page being displayed.

docs/10_using_dandi.md

Lines changed: 117 additions & 0 deletions
@@ -0,0 +1,117 @@

# Working with DANDI

DANDI provides access to, and an archive for submitting, cellular neurophysiology datasets. We refer to such a dataset as a `Dandiset`.

1. A `Dandiset` is organized in a structured manner so that users and software tools can interact with it more easily.
1. Each `Dandiset` has a unique persistent identifier that you can use to go directly to the `Dandiset` (e.g., [https://identifiers.org/DANDI:000004](https://identifiers.org/DANDI:000004)). You can use this identifier to cite the `Dandiset` in your publications or to provide direct access to it.

## DANDI components

### The DANDI Web application

The [DANDI Web application](https://dandiarchive.org/) allows you to:

1. Browse `Dandisets`.
1. Search across `Dandisets`.
1. Create an account to register a new `Dandiset` or gain access to [the Dandihub analysis platform](#the-dandihub-analysis-platform).
1. Add collaborators to your `Dandiset`.
1. Retrieve an `API key` to upload data to your `Dandisets`.
1. Publish versions of your `Dandisets`.

### The DANDI Python client

The [DANDI Python client](https://pypi.org/project/dandi/) allows you to:

1. Download `Dandisets` and individual subject folders or files.
1. Organize your data locally before upload.
1. Upload your `Dandiset`.

### The Dandihub analysis platform

[Dandihub](https://hub.dandiarchive.org) provides a Jupyter environment for interacting with the DANDI archive. To use the hub, you will need to register an account using [the DANDI Web application](#the-dandi-web-application). Please note that `Dandihub` is not intended for significant computation, but provides a place to introspect `Dandisets` and files.
## Downloading from DANDI

You can download entire `Dandisets` or single files.

### Downloading a file

#### Using the Web application

Each `Dandiset` has a `View Data` option. This provides a folder-like view for navigating a `Dandiset`. Any file in the `Dandiset` has a download icon next to it. You can click this icon to download the file to the device you are browsing on, or right-click it to get the download URL of the file. You can then use this URL programmatically or in other applications such as the [NWB Explorer](https://nwbexplorer.opensourcebrain.org/) or a [Jupyter notebook on Dandihub](https://hub.dandiarchive.org).

#### Using the Python CLI

First install the Python client using `pip install dandi` in a Python 3.6+ environment. The client supports:

1. Downloading a `Dandiset`.
1. Downloading a subject.
1. Downloading a file.
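As a sketch, all three download cases go through the `dandi download` command; the `<...>` placeholders below are illustrative, not real identifiers, and the draft-URL form is the one used in the upload workflow later on this page:

```shell
# Download an entire Dandiset (draft version); <dataset_id> is a placeholder
dandi download https://dandiarchive.org/<dataset_id>/draft

# Download a single file (or a subject's folder) by passing the download
# URL obtained from the Web application's download icon (right-click > copy URL)
dandi download <file_or_folder_url>
```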
## Create an account on DANDI

To create an account on DANDI, you will need to:

1. [Create a GitHub account](https://github.com/) if you don't have one.
1. Using your GitHub account, [register a DANDI account](https://gui.dandiarchive.org/#/user/register). **Make sure to use the Register with OAuth option.**
1. You will receive an email acknowledging activation of your account within 24 hours. You can then log in to DANDI with GitHub by clicking the login button.

## Uploading a Dandiset

1. Setup
    - If you do not have a DANDI account, please [create an account](#create-an-account-on-dandi).
    - Log in to DANDI and copy your API key. It is under your user initials at the top right after logging in.
    - Locally:
        - Create a Python environment (e.g., Miniconda, virtualenv).
        - Install the DANDI CLI into your Python environment:

          `pip install dandi`

1. Data upload/management workflow
    1. Register a dandiset to generate an identifier. You will be asked to enter basic metadata: a name (title) and description (abstract) for your dataset. Click `New Dataset` in the Web application after logging in. After you are done, note the dataset identifier. We will call this `<dataset_id>`.
    1. Convert your data to NWB 2.1+ in a local folder. Let's call this `<source_folder>`. This step can be complex depending on your data. Feel free to [reach out to us for help](/#where-to-communicate).
    1. Validate the NWB files by running: `dandi validate <source_folder>`
    1. Prepare a dataset folder for upload:
        1. `dandi download https://dandiarchive.org/<dataset_id>/draft`
        1. `cd <dataset_id>`
        1. `dandi organize <source_folder> -f dry`
        1. `dandi organize <source_folder> -f symlink`
        1. `dandi upload`
    1. Add metadata on the Web. Click the `Edit metadata` link on your dandiset landing page: `https://dandiarchive.org/<dataset_id>/draft`
    1. Use the dandiset URL:
        1. in your preprint.
        1. to download: anyone can use the DANDI CLI via `dandi download <dandiset_url>`.
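Put together, the workflow above amounts to a shell session like the following. This is an illustrative sketch using only the commands listed above; the `<...>` placeholders must be replaced with your actual dataset identifier and local folder before running:

```shell
# Install the DANDI CLI (Python 3.6+ environment)
pip install dandi

# Check that the NWB 2.1+ files are valid
dandi validate <source_folder>

# Fetch the registered draft dandiset, then organize and upload into it
dandi download https://dandiarchive.org/<dataset_id>/draft
cd <dataset_id>
dandi organize <source_folder> -f dry      # preview the reorganization
dandi organize <source_folder> -f symlink  # apply it using symlinks
dandi upload                               # push to the archive (needs your API key)
```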
## Publish a Dandiset

**🛠 Work in progress 🛠**

docs/20_project_structure.md

Lines changed: 31 additions & 0 deletions
@@ -0,0 +1,31 @@

# Project structure

The DANDI project is organized around several GitHub repositories. The main ones are the following.

1. **The DANDI archive.** This [repository](https://github.com/dandi/dandiarchive) contains the code for deploying the archive. It includes the client-side Web application frontend based on the [Vue.js](https://vuejs.org/) framework, the server backend extensions to [the Girder platform](https://girder.readthedocs.io/en/latest/), and the deployment code for pushing changes to the archive as they are merged in.

1. **The DANDI Python client.** This [repository](https://github.com/dandi/dandi-cli) contains the code for the command-line tool used to interact with the archive. It allows you to download data from the archive, and to locally organize and validate your data before uploading it to the archive.

1. **The DANDI JupyterHub.** This [repository](https://github.com/dandi/dandihub) contains the code for deploying a JupyterHub instance to support interaction with the DANDI archive.

1. **The DANDI API.** This [repository](https://github.com/dandi/dandi-publish) provides the code for the DANDI API.

1. **The DANDI schema.** This [repository](https://github.com/dandi/schema) provides the details and some supporting code for the DANDI metadata schema.

1. **The DANDI handbook.** This [repository](https://github.com/dandi/handbook) provides the contents of this website.

1. **The DANDI Website.** This [repository](https://github.com/dandi/dandi.github.io) provides an overview of the DANDI project and the team members and collaborators.

docs/30_schema.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@

# The metadata schema

The core model of DANDI is based on DATS, DataCite, schema.org, and the C2M2 effort. This page describes the properties of the current objects.

## Common metadata

## Dandiset-specific extensions

## Asset-specific extensions

docs/98_FAQ.md

Lines changed: 10 additions & 0 deletions
@@ -0,0 +1,10 @@

# FAQ

## Who is DANDI for?

DANDI can be useful to anyone interested in neuroscience and/or large and diverse data-science challenges.

<!-- **🛠 Work in progress 🛠** -->

docs/99_glossary.md

Lines changed: 7 additions & 0 deletions
@@ -0,0 +1,7 @@

# Glossary

- Asset
- BIDS
- Dandiset
- NIDM
- NWB
