This repository contains the dataset and sample code for the Getting Started section of the Pilosa documentation.
The sample dataset contains stargazer and language data for GitHub projects retrieved with the search keyword "Go". See the Generating the Dataset section below to create other datasets.
- languages.txt: Language name to languageID mapping. The line number corresponds to the languageID.
- language.csv: languageID, projectID
- stargazer.csv: stargazerID, projectID, timestamp(starred)
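For reference, a minimal Python sketch for inspecting these files is shown below. The file names and column order come from the list above; everything else (no header rows, 0-based line numbering for languageID) is an assumption, so adjust as needed.

```python
import csv

# Language name -> languageID mapping; per the description above the line
# number is the languageID (adjust the starting index if the mapping is 1-based).
with open("languages.txt") as f:
    languages = {i: name.strip() for i, name in enumerate(f)}

# (languageID, projectID) pairs; assumes no header row.
with open("language.csv") as f:
    language_rows = [(int(lang_id), int(project_id))
                     for lang_id, project_id in csv.reader(f)]

# (stargazerID, projectID, timestamp) rows; assumes no header row.
with open("stargazer.csv") as f:
    stargazer_rows = list(csv.reader(f))

print(len(languages), "languages,",
      len(language_rows), "language rows,",
      len(stargazer_rows), "stargazer rows")
```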
Run the Pilosa Docker image with Getting Started data using:
docker run -it --rm -p 10101:10101 pilosa/getting-started:latest
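Once the container is up, you can sanity-check that the server is responding. The sketch below assumes Pilosa's HTTP API serves a GET /schema endpoint on port 10101 (true for recent versions; check the linked documentation for yours).

```python
import json
import urllib.request

# Ask the Pilosa server for its schema; with the getting-started image the
# response should list the preloaded index and its fields.
with urllib.request.urlopen("http://localhost:10101/schema") as resp:
    schema = json.loads(resp.read().decode("utf-8"))

print(json.dumps(schema, indent=2))
```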
Continue with Getting Started: Make Some Queries.
To load the data without Docker, follow these steps from the Pilosa documentation:
- The Pilosa server should be running; see Starting Pilosa.
- The appropriate schema should be initialized; see Create the Schema (a minimal sketch of this step follows the list).
- Finally, the data can be imported; see Import Some Data.
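As a rough illustration of the schema step only, the sketch below creates an index and two fields over Pilosa's HTTP API. The endpoint paths (POST /index/NAME and POST /index/NAME/field/NAME) follow Pilosa 1.x conventions, and the names "repository", "language", and "stargazer" are illustrative; the linked documentation is authoritative for your version.

```python
# Minimal sketch, assuming Pilosa 1.x HTTP schema endpoints; not a substitute
# for the Create the Schema / Import Some Data instructions linked above.
import urllib.request

BASE = "http://localhost:10101"

def post(path, body=b""):
    req = urllib.request.Request(BASE + path, data=body, method="POST")
    req.add_header("Content-Type", "application/json")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

print(post("/index/repository"))
print(post("/index/repository/field/language"))
print(post("/index/repository/field/stargazer"))
```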
Continue with Getting Started: Make Some Queries.
To generate a dataset for a different search keyword, use the fetch.py script in this repository. Using a GitHub token is strongly recommended to avoid throttling; if you don't already have a token for the GitHub API, see Creating a personal access token for the command line.
A recent version of Python is required; the script is tested with Python 2.7 and 3.5.
Follow these steps to set up and run the script:
- Create a virtual env:
  - Using Python 2.7:
    virtualenv getting-started
  - Using Python 3.5:
    python3 -m venv getting-started
- Activate the virtual env:
  - On Linux, MacOS, other UNIX:
    source getting-started/bin/activate
  - On Windows:
    getting-started\Scripts\activate
- Install requirements:
  pip install -r requirements.txt
- If you have a GitHub token, save it to a file named token in the root directory of the project.
The fetch.py script searches GitHub for a given keyword and creates the dataset described in The Dataset section above.
Run the script with: python fetch.py KEYWORD
KEYWORD is the search term used to match repository names.
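For orientation, the sketch below shows roughly how such a fetcher could be structured against the GitHub search and stargazers APIs. It is not the actual fetch.py: the use of the requests package, the lack of pagination, and the exact output layout are assumptions based on the dataset description above.

```python
# Hypothetical sketch of a dataset fetcher; NOT the actual fetch.py.
# Assumes the "requests" package is installed and an optional GitHub token
# is stored in a file named "token" (as described above).
import csv
import requests

API = "https://api.github.com"

def read_token(path="token"):
    try:
        with open(path) as f:
            return f.read().strip()
    except FileNotFoundError:
        return None

def fetch_sample(keyword):
    token = read_token()
    headers = {"Authorization": "token " + token} if token else {}

    # Search repositories matching the keyword (a real fetcher would paginate).
    repos = requests.get(
        API + "/search/repositories",
        params={"q": keyword, "per_page": 10},
        headers=headers,
    ).json().get("items", [])

    language_ids = {}   # language name -> languageID
    stargazer_ids = {}  # user login -> stargazerID
    with open("language.csv", "w", newline="") as lang_csv, \
         open("stargazer.csv", "w", newline="") as star_csv:
        lang_writer = csv.writer(lang_csv)
        star_writer = csv.writer(star_csv)

        for project_id, repo in enumerate(repos):
            # Map the repository's primary language to a stable languageID.
            language = repo.get("language") or "Unknown"
            lang_id = language_ids.setdefault(language, len(language_ids))
            lang_writer.writerow([lang_id, project_id])

            # The star+json media type includes the starred_at timestamp.
            stars = requests.get(
                API + "/repos/%s/stargazers" % repo["full_name"],
                headers=dict(headers, Accept="application/vnd.github.v3.star+json"),
            ).json()
            for star in stars:
                login = star["user"]["login"]
                sid = stargazer_ids.setdefault(login, len(stargazer_ids))
                star_writer.writerow([sid, project_id, star["starred_at"]])

    # Write the language name -> languageID mapping, one name per line.
    with open("languages.txt", "w") as f:
        for name, _ in sorted(language_ids.items(), key=lambda kv: kv[1]):
            f.write(name + "\n")

if __name__ == "__main__":
    import sys
    fetch_sample(sys.argv[1] if len(sys.argv) > 1 else "Go")
```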
To build the getting-started Docker image with a specific version tag, run:
make docker VERSION=some-version