- `datalad/` is the main Python module where major development is happening, with major submodules being:
  - `cmdline/` - helpers for accessing `interface/` functionality from the command line
  - `crawler/` - functionality for crawling (online) resources and creating or updating datasets and collections based on the scraped/downloaded data
    - `nodes/` - processing elements which are used in the pipeline
    - `pipelines/` - pipeline generators, to produce pipelines to be run
    - `pipeline.py` - pipeline runner
  - `customremotes/` - custom special remotes for annex provided by datalad
  - `downloaders/` - support for accessing data from various sources (e.g. http, S3, XNAT) via a unified interface
    - `configs/` - specifications for known data providers and associated credentials
  - `interface/` - high-level interface functions which get exposed via the command line (`cmdline/`) or Python (`datalad.api`)
  - `tests/` - some unit and regression tests (more can be found under `tests/` of the corresponding submodules)
    - `utils.py` - provides convenience helpers used by unit tests, such as the `@with_tree` and `@serve_path_via_http` decorators
  - `ui/` - user-level interactions, such as messages about errors, warnings, progress reports, and -- when supported by the available frontend -- interactive dialogs
  - `support/` - various support modules, e.g. for git/git-annex interfaces, constraints for the `interface/`, etc.
- `docs/` - documentation, yet to be heavily populated
  - `bash-completions` - bash and zsh completion setup for datalad (just `source` it)
- `fixtures/` - currently not under git; contains fixtures generated by vcr
- `tools/` - helper utilities used during development, testing, and benchmarking of DataLad, implemented in whichever language is most appropriate (Python, bash, etc.)
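
Since everything under `interface/` is exposed both on the command line and through `datalad.api`, a quick way to see which commands are available from Python is to list that module's public names (a minimal sketch; it assumes a working datalad installation, e.g. after `python setup.py develop`):

```python
# List the commands currently exposed through the Python API of datalad.
import datalad.api

for name in sorted(dir(datalad.api)):
    if not name.startswith('_'):
        print(name)
```
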
The preferred way to contribute to the DataLad code base is to fork the main repository on GitHub. Here we outline the workflow used by the developers:
- Have a clone of our main project repository as the `origin` remote in your git:

  ```
  git clone git://github.com/datalad/datalad
  ```

- Fork the project repository: click on the 'Fork' button near the top of the page. This creates a copy of the code base under your account on the GitHub server.

- Add your forked clone as a remote to the local clone you already have on your local disk:

  ```
  git remote add gh-YourLogin git@github.com:YourLogin/datalad.git
  git fetch gh-YourLogin
  ```

  To ease addition of other github repositories as remotes, here is a little bash function/script to add to your `~/.bashrc`:

  ```bash
  ghremote () {
      url="$1"
      proj=${url##*/}
      url_=${url%/*}
      login=${url_##*/}
      git remote add gh-$login $url
      git fetch gh-$login
  }
  ```

  so that you could simply run:

  ```
  ghremote git@github.com:YourLogin/datalad.git
  ```

  to add the above `gh-YourLogin` remote. Additional handy aliases such as `ghpr` (to fetch an existing PR from someone's remote) and `ghsendpr` can be found in yarikoptic's bash config file.

- Create a branch (generally off the `origin/master`) to hold your changes:

  ```
  git checkout -b nf-my-feature
  ```

  and start making changes. Ideally, use a prefix signaling the purpose of the branch:

  - `nf-` for new features
  - `bf-` for bug fixes
  - `rf-` for refactoring
  - `doc-` for documentation contributions (including in the code docstrings)

  We recommend to not work in the `master` branch!

- Work on this copy on your computer, using Git to do the version control. When you're done editing, do:

  ```
  git add modified_files
  git commit
  ```

  to record your changes in Git. Ideally, prefix your commit messages with `NF`, `BF`, `RF`, or `DOC`, similar to the branch name prefixes; you could also use `TST` for commits concerned solely with tests, and `BK` to signal that the commit causes a breakage (e.g. of tests) at that point. Multiple entries could be listed joined with a `+` (e.g. `rf+doc-`). See `git log` for examples. If a commit closes an existing DataLad issue, then add `(Closes #ISSUE_NUMBER)` to the end of the message (see the hypothetical example after this list).

- Push to GitHub with:

  ```
  git push -u gh-YourLogin nf-my-feature
  ```

  Finally, go to the web page of your fork of the DataLad repo, and click 'Pull request' (PR) to send your changes to the maintainers for review. This will send an email to the committers. You can commit new changes to this branch and keep pushing to your remote -- GitHub automagically adds them to your previously opened PR.

(If any of the above seems like magic to you, then look up the Git documentation on the web.)
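
For illustration, a hypothetical commit following these conventions (the file path, message, and issue number below are made up) could look like:

```
git add datalad/crawler/pipeline.py
git commit -m "BF: do not fail on pipelines without any nodes (Closes #1234)"
```
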
Although we now support Python 3 (>= 3.3), primarily we still use Python 2.7,
and thus the instructions below are for Python 2.7 deployments. Replace `python-{`
with `python{,3}-{` to also install dependencies for Python 3 (e.g., if you would
like to develop and test through tox).

See README.md:Dependencies for basic information about installation of datalad itself. On Debian-based systems we recommend enabling NeuroDebian, since we use it to provide backports of recently fixed external modules we depend upon:

```
apt-get install -y -q git git-annex-standalone
apt-get install -y -q patool python-scrapy python-{appdirs,argcomplete,git,humanize,keyring,lxml,msgpack,mock,progressbar,requests,setuptools,six}
```

and additionally, for development, we suggest using tox and new versions of the dependencies from PyPI:

```
apt-get install -y -q python-{dev,httpretty,nose,pip,vcr,virtualenv} python-tox
# Some libraries which might be needed for installing via pip
apt-get install -y -q lib{ffi,ssl,curl4-openssl,xml2,xslt1}-dev
```

Some of these you could also install from PyPI using pip (prior installation of the libraries listed above might be necessary):

```
pip install -r requirements.txt
```

and you will need to install a recent git-annex using means appropriate for your OS (for Debian/Ubuntu, once again, just use NeuroDebian).

We use the NumPy standard for describing parameters in docstrings. If you are using
PyCharm, adjust your project settings accordingly (`Tools` -> `Python integrated tools` -> `Docstring format`).
In addition, we follow the guidelines of Restructured Text, with the additional features and treatments provided by Sphinx.
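
For example, a docstring in the NumPy style (the function and its parameters below are made up purely for illustration) could look like:

```python
def count_lines(path, strip_empty=False):
    """Count the number of lines in a text file.

    Parameters
    ----------
    path : str
      Path of the file to inspect.
    strip_empty : bool, optional
      If True, empty lines are not counted.

    Returns
    -------
    int
      Number of (possibly non-empty) lines in the file.

    Examples
    --------
    >>> count_lines('CONTRIBUTING.md') > 0  # doctest: +SKIP
    True
    """
    with open(path) as f:
        lines = f.readlines()
    if strip_empty:
        lines = [line for line in lines if line.strip()]
    return len(lines)
```
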
- For merge commits to have a more informative description, add the following section to your `.git/config` or `~/.gitconfig`:

  ```
  [merge]
  summary = true
  log = true
  ```

  and if conflicts occur, provide a short summary on how they were resolved in a "Conflicts" listing within the merge commit (see example).
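
Equivalently (a convenience, not a requirement), the same settings can be applied from the command line:

```
git config --global merge.summary true
git config --global merge.log true
```
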
It is recommended to check that your contribution complies with the following rules before submitting a pull request:
- All public methods should have informative docstrings with sample usage presented as doctests when appropriate.
- All other tests pass when everything is rebuilt from scratch.
- New code should be accompanied by tests.

`datalad/tests` contains tests for the core portion of the project, and
more tests are provided under the corresponding submodules in `tests/`
subdirectories to simplify re-running the tests concerning that portion
of the codebase. To execute many of the tests, the codebase first needs to be
"installed" in order to generate scripts for the entry points. For
that, the recommended course of action is to use `virtualenv`, e.g.

```
virtualenv --system-site-packages venv-tests
source venv-tests/bin/activate
pip install -r requirements.txt
python setup.py develop
```

and then use that virtual environment to run the tests, via

```
python -m nose -s -v datalad
```

or similarly,

```
nosetests -s -v datalad
```

then, to later deactivate the virtualenv, simply enter

```
deactivate
```
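
To re-run only the tests for a particular portion of the codebase, you can point nose at the corresponding `tests/` subdirectory or test module; for instance (the chosen submodule is merely an example):

```
# run only the core tests
nosetests -s -v datalad/tests
# or only the tests of a particular submodule
nosetests -s -v datalad/downloaders/tests
```
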
Alternatively, or complementary to that, you can use `tox` -- there is a `tox.ini`
file which sets up a few virtual environments for testing locally, which you can
later reuse like any other regular virtualenv for troubleshooting.
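
A minimal tox session (the environment name `py27` is only an illustration; consult `tox.ini` for the environments actually defined) might look like:

```
pip install tox
tox                             # run every environment defined in tox.ini
tox -e py27                     # run just one environment, if it is defined
source .tox/py27/bin/activate   # reuse that environment later for troubleshooting
```
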
Additionally, the tools/testing/test_README_in_docker script can
be used to establish a clean docker environment (based on any NeuroDebian-supported
release of Debian or Ubuntu) with all dependencies listed in README.md pre-installed.

You can also check for common programming errors with the following tools:
- Code with good unittest coverage (at least 80%); check with:

  ```
  pip install nose coverage
  nosetests --with-coverage path/to/tests_for_package
  ```

  (see also the coverage example after this list)

- We rely on https://codecov.io to provide a convenient view of code coverage. Installation of the codecov extension for Firefox/Iceweasel or Chromium is strongly advised, since it provides coverage annotation of pull requests.
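
For a quick local look at which parts of datalad are covered, nose's coverage plugin can restrict reporting to the package and emit an HTML report (the options shown are standard nose/coverage plugin options; the report lands in the plugin's default output directory):

```
nosetests --with-coverage --cover-package=datalad --cover-html datalad
```
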
We are not (yet) fully PEP8 compliant, so please use these tools as guidelines for your contributions, but do not PEP8 the entire code base.

Sidenote: watch Raymond Hettinger - Beyond PEP 8
- No pyflakes warnings; check with:

  ```
  pip install pyflakes
  pyflakes path/to/module.py
  ```

- No PEP8 warnings; check with:

  ```
  pip install pep8
  pep8 path/to/module.py
  ```

- AutoPEP8 can help you fix some of the easy, redundant errors:

  ```
  pip install autopep8
  autopep8 path/to/pep8.py
  ```

Also, some team developers use
PyCharm community edition, which
provides a built-in PEP8 checker and handy tools such as smart
splits/joins, making it easier to maintain code following the PEP8
recommendations. NeuroDebian provides the pycharm-community-sloppy
package to ease pycharm installation even further.

A great way to start contributing to DataLad is to pick an item from the list of Easy issues in the issue tracker. Resolving these issues allows you to start contributing to the project without much prior knowledge. Your assistance in this area will be greatly appreciated by the more experienced developers as it helps free up their time to concentrate on other issues.

- While performing IO/net heavy operations, use dstat for quick logging of various health stats in a separate terminal window:

  ```
  dstat -c --top-cpu -d --top-bio --top-latency --net
  ```

- To monitor the speed of any data pipelining, pv is really handy: just plug it into the middle of your pipe (see the example after this list).
- For remote debugging, epdb could be used (available from pip): place

  ```
  import epdb; epdb.serve()
  ```

  in the Python code and then connect to it with

  ```
  python -c "import epdb; epdb.connect()"
  ```

- We are using codecov, which has extensions for the popular browsers (Firefox, Chrome) that annotate pull requests on GitHub regarding changed coverage.
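
As a concrete (made-up) illustration of plugging pv into the middle of a pipe to watch throughput:

```
# watch how fast data flows while archiving a directory; paths are illustrative
tar -cf - some/data | pv | gzip > some_data.tar.gz
```
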
Refer to datalad/config.py for information on how to add these environment variables to the config file and their naming convention.

- `DATALAD_LOGLEVEL`: Controls the verbosity of logs printed to stdout while running datalad commands/debugging.
- `DATALAD_TESTS_KEEPTEMP`: If this flag is set, the `rmtemp` function will not remove temporary files/directories created for testing.
- `DATALAD_EXC_STR_TBLIMIT`: Used by the datalad `extract_tb` function, which extracts and formats stack traces. It caps the number of lines of pre-processed traceback entries to DATALAD_EXC_STR_TBLIMIT.
- `DATALAD_TESTS_TEMPDIR`: Create a temporary directory at the location specified by this flag. It is used by tests to create a temporary git directory while testing git-annex archives etc.
- `DATALAD_TESTS_NONETWORK`: Skips network tests completely if this flag is set. Examples include tests for S3, git repositories, openfmri, etc.
- `DATALAD_TESTS_SSH`: Skips SSH tests if this flag is not set.
- `DATALAD_LOGTRACEBACK`: Runs the TraceBack function with `collide` set to True if this flag is set to 'collide'. This replaces any common prefix between the current traceback log and the previous invocation with "...".
- `DATALAD_TESTS_NOTEARDOWN`: If this flag is set, `teardown_package`, which cleans up temp files and directories created by tests, is not executed.
- `DATALAD_USECASSETTE`: Specifies the location of the file in which the VCR module records network transactions. Currently used when testing custom special remotes.
- `DATALAD_CMD_PROTOCOL`: Specifies the protocol number used by the Runner to note shell command or Python function call times and allows for dry runs: 'externals-time' for ExecutionTimeExternalsProtocol, 'time' for ExecutionTimeProtocol, and 'null' for NullProtocol. Any new DATALAD_CMD_PROTOCOL has to implement datalad.support.protocol.ProtocolInterface.
- `DATALAD_CMD_PROTOCOL_PREFIX`: Sets a prefix to add before the command call times are noted by DATALAD_CMD_PROTOCOL.
- `DATALAD_PROTOCOL_REMOTE`: Binary flag specifying whether to test protocol interactions of the custom remote with annex.
- `DATALAD_LOG_TIMESTAMP`: Used to add a timestamp to datalad logs.
- `DATALAD_RUN_CMDLINE_TESTS`: Binary flag specifying whether shell testing using shunit2 should be carried out.
- `DATALAD_TEMP_FS`: Specifies the temporary file system to use as a loop device for testing DATALAD_TESTS_TEMPDIR creation.
- `DATALAD_TEMP_FS_SIZE`: Specifies the size of the temporary file system to use as a loop device for testing DATALAD_TESTS_TEMPDIR creation.
- `DATALAD_NONLO`: Specifies network interfaces to bring down/up for testing. Currently used by travis.
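
For example (the values below only illustrate the typical way to enable binary flags and set a log level; consult datalad/config.py for the exact semantics), these variables can be set inline when running commands or the test suite:

```
# skip network-dependent tests and keep temporary test files around for inspection
DATALAD_TESTS_NONETWORK=1 DATALAD_TESTS_KEEPTEMP=1 nosetests -s -v datalad

# be more verbose while debugging a datalad invocation (the level name is an assumption)
DATALAD_LOGLEVEL=debug datalad --help
```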