
Commit c48f30d

Removed data from repo and modified travis yaml

1 parent 47fbf70

37 files changed: +109 -755367 lines. Most of the deleted lines are the sample data files removed from the repository; the four modified text files are shown below.

.travis.yml (2 additions, 1 deletion)

The explicit Pillow install is dropped (Pillow is still listed in requirements.txt) and two steps are added to fetch the test sample data via tests/data_download.sh.
```diff
@@ -11,7 +11,8 @@ install:
   - pip3 install coverage>=4.4.0
   - pip3 install pytest>=3.6.0
   - pip3 install pytest-cov
-  - pip3 install Pillow
+  - chmod +x tests/data_download.sh
+  - ./tests/data_download.sh
 script:
   - coverage run tests/test.py
 after_success:
```
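The script itself is not part of this commit view, so its contents are unknown here. Below is a minimal sketch of what `tests/data_download.sh` might look like, assuming the sample files are packaged as a single archive at a placeholder URL and unpacked where the tests expect them:

```bash
#!/usr/bin/env bash
# Hypothetical sketch only -- the real tests/data_download.sh is not shown in this diff.
# SAMPLE_DATA_URL and TARGET_DIR are placeholder values, not taken from the repository.
set -euo pipefail

SAMPLE_DATA_URL="https://example.com/PyTrack_sample_data.zip"  # placeholder archive location
TARGET_DIR="tests/data"

# Fetch the archive and unpack it into the directory the test suite reads from.
mkdir -p "$TARGET_DIR"
curl -L "$SAMPLE_DATA_URL" -o /tmp/PyTrack_sample_data.zip
unzip -o /tmp/PyTrack_sample_data.zip -d "$TARGET_DIR"
```

Whatever the real script does, the effect on CI is the same: the sample data is downloaded at build time rather than kept in version control, which matches the roughly 755,000 lines of data this commit removes from the repository.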

README.md (36 additions, 36 deletions)

Apart from the updated Google Drive link for the sample data, every change below is whitespace-only: each removed line is re-added with identical visible text.
````diff
@@ -14,7 +14,7 @@
 1. [Experiment Design](#experiment-design)
 1. [Setup](#setup)
 2. [Using PyTrack](#using-pytrack)
-3. [Example Use](#example-use)
+3. [Example Use](#example-use)
 4. [Advanced Functionality](#advanced-functionality)
 2. [Stand-alone Design](#stand-alone-design)
 4. [Authors](#authors)
@@ -24,10 +24,10 @@
 
 This is a framework to analyse and visualize eye tracking data. It offers 2 designs of analysis:
 * **Experiment Design**: Analyse an entire experiment with 1 or more subjects/participants presented with 1 or more stimuli.
-* **Stand-alone Design**: Analyse a single stimulus for a single person.
+* **Stand-alone Design**: Analyse a single stimulus for a single person.
 
 
-As of now, it supports data collected using SR Research EyeLink, SMI and Tobii eye trackers. The framework contains a *formatBridge* function that converts these files into a base format and then performs analysis on it.
+As of now, it supports data collected using SR Research EyeLink, SMI and Tobii eye trackers. The framework contains a *formatBridge* function that converts these files into a base format and then performs analysis on it.
 
 
 ## Documentation
@@ -54,15 +54,15 @@ pip install PyTrack
 
 ## Running the tests
 
-In order to test ***PyTrack***, some sample data files can be found [here](https://drive.google.com/open?id=1N9ZrTO6Bikx3aI7BKivSFAp3vrLxSCM6).
+In order to test ***PyTrack***, some sample data files can be found [here](https://drive.google.com/open?id=1tWD69hurELVuVRFzizCbukWnr22RZrnp).
 
 To get started, first you need to choose which design you want to run the framework in. If you wish to use the *Experiment Design*, see [this](#experiment-design). If you wish to use the *Stand-alone Design* see [this](#stand-alone-design).
 
-### Experiment Design
+### Experiment Design
 
 #### Setup
 
-Before running the framework, lets setup the folder so ***PyTrack*** can read and save all the generated figures in one central location and things are organised.
+Before running the framework, lets setup the folder so ***PyTrack*** can read and save all the generated figures in one central location and things are organised.
 
 Create a directory structure like the one shown below. It is essential for the listed directories to be present for the proper functioning of ***PyTrack***.
 ```
@@ -72,7 +72,7 @@ Create a directory structure like the one shown below. It is essential for the l
 │ │ subject_001.[asc/txt/tsv/...]
 │ │ subject_002.[asc/txt/tsv/...]
 | |__ ......
-
+
 └── Stimulus/
 │ │ stim_1.[jpg/jpeg]
 │ │ stim_2.[jpg/jpeg]
@@ -81,7 +81,7 @@ Create a directory structure like the one shown below. It is essential for the l
 └── [Experiment-Name].json
 
 ```
-*[Experiment-Name]* stands for the name of your experiment. Lets assume that your experiment name is "*NTU_Experiment*". The rest of the steps will use this alias as the *[Experiment-Name]* folder.
+*[Experiment-Name]* stands for the name of your experiment. Lets assume that your experiment name is "*NTU_Experiment*". The rest of the steps will use this alias as the *[Experiment-Name]* folder.
 
 Now, follow these steps:
 
@@ -93,14 +93,14 @@ Now, follow these steps:
 
 eg. *stim_1.jpg* or *random_picture.png*
 
-3. The last and final step to setup the experiment directory is to include the experiment description json file. This file should contain the essential details of your experiment. It contains specifications regarding your experiment suchas the stimuli you wish to analyse or the participants/subjects you wish to include. Mentioned below is the json file structure. The content below can be copied and pasted in a file called *NTU_Experiment*.json (basically the name of your experiment with a json extension).
-
+3. The last and final step to setup the experiment directory is to include the experiment description json file. This file should contain the essential details of your experiment. It contains specifications regarding your experiment suchas the stimuli you wish to analyse or the participants/subjects you wish to include. Mentioned below is the json file structure. The content below can be copied and pasted in a file called *NTU_Experiment*.json (basically the name of your experiment with a json extension).
+
 * "*Experiment_name*" should be the same name as the json file without the extension and "*Path*" should be the absolute path to your experiment directory without the final "/" at the end.
 * The subjects should be added under the "*Subjects*" field. You may specify one or more groups of division for your subjects (recommended for aggregate between group statistical analysis). **There must be atleast 1 group**.
 * The stimuli names should be added under the "*Stimuli*" field and again you may specify one or more types (recommended for aggregate between stimulus type statistical analysis). **There must be atleast 1 type**.
-* The "*Control_Questions*" field is optional. In case you have some stimuli that should be used to standardise/normalise features extracted from all stimuli, sepcify the names here. **These stimuli must be present under the "*Stimuli*" field under one of the types**.
-* **The field marked "*Columns_of_interest*" should not be altered**.
-* Under "*Analysis_Params*", just change the values of "Sampling_Freq", "Display_height" and "Display_width" to match the values of your experiment.
+* The "*Control_Questions*" field is optional. In case you have some stimuli that should be used to standardise/normalise features extracted from all stimuli, sepcify the names here. **These stimuli must be present under the "*Stimuli*" field under one of the types**.
+* **The field marked "*Columns_of_interest*" should not be altered**.
+* Under "*Analysis_Params*", just change the values of "Sampling_Freq", "Display_height" and "Display_width" to match the values of your experiment.
 
 ***Note***: If you wish to analyse only a subset of your stimuli or subjects, specify only the ones of interest in the json file. The analysis and visualization will be done only for the ones mentioned in the json file.
 
@@ -161,7 +161,7 @@ Now, follow these steps:
 
 #### Using PyTrack
 
-This involves less than 10 lines of python code. However, in case you want to do more detailed analysis, it may involve a few more lines.
+This involves less than 10 lines of python code. However, in case you want to do more detailed analysis, it may involve a few more lines.
 
 Using *formatBridge* majorly has 3 cases.:
 
@@ -176,7 +176,7 @@ Using *formatBridge* majorly has 3 cases.:
 3. **Do not sepcify any stimulus order list**. In this case, the output of the statistical analysis will be inconclusive and the visualization of gaze will be on a black screen instead of the stimulus image. The *stim_list_mode* parameter in the *generateCompatibleFormat* function needs to be set as "NA". However, you can still extract the metadata and features extracted for each participant but the names will not make any sense. ***WE DO NOT RECOMMEND THIS***.
 
 
-#### Example Use
+#### Example Use
 
 See [documentation](https://pytrack-ntu.readthedocs.io/en/latest/PyTrack.html) for a detailed understanding of each function.
 
@@ -187,10 +187,10 @@ from PyTrack.formatBridge import generateCompatibleFormat
 
 # function to convert data to generate database in base format for experiment done using EyeLink on both eyes and the stimulus name specified in the message section
 generateCompatibleFormat(exp_path="abcd/efgh/NTU_Experiment/",
-device="eyelink",
-stim_list_mode='NA',
-start='start_trial',
-stop='stop_trial',
+device="eyelink",
+stim_list_mode='NA',
+start='start_trial',
+stop='stop_trial',
 eye='B')
 
 ```
@@ -204,14 +204,14 @@ from PyTrack.Experiment import Experiment
 exp = Experiment(json_file="abcd/efgh/NTU_Experiment/NTU_Experiment.json")
 
 # Instantiate the meta_matrix_dict of an Experiment to find and extract all features from the raw data
-exp.metaMatrixInitialisation(standardise_flag=False,
+exp.metaMatrixInitialisation(standardise_flag=False,
 average_flag=False)
 
 # Calling the function for the statistical analysis of the data
 # file_creation=True. Hence, the output of the data used to run the tests and the output of the tests will be stored in in the 'Results' folder inside your experiment folder
-exp.analyse(parameter_list={"all"},
-between_factor_list=["Subject_type"],
-within_factor_list=["Stimuli_type"], statistical_test="Mixed_anova",
+exp.analyse(parameter_list={"all"},
+between_factor_list=["Subject_type"],
+within_factor_list=["Stimuli_type"], statistical_test="Mixed_anova",
 file_creation=True)
 
 ```
@@ -239,7 +239,7 @@ The Experiment class contains a function called analyse() which is used to perfo
 * For example if Gender is to be considered as an additional between group factor then in the json file, under "Subjects", for each subject, a corresponding dicitionary must be created where you mention the factor name and the corresponding value (eg: Subject_name: {"Gender" : "M"}). Please also note that the square brackets ('[', ']') after group type need to be changed to curly brackets ('{', '}').
 * This must be similarly done for Stimuli, if any additional within group factor that describes the stimuli needs to be added. For example, if you are showing WORDS and PICTURES to elicit different responses from a user and you additonally have 2 different brightness levels ("High" and "Low") of the stimuli, you could consider Type1 and Type2 to be the PICTuRE and WORD gropus and mention Brightness as an additional within group factor.
 
-The below code snippet just shows the changes that are to be done for Subject and Stimuli sections of the json file, the other sections remain the same.
+The below code snippet just shows the changes that are to be done for Subject and Stimuli sections of the json file, the other sections remain the same.
 
 ```json
 {
@@ -276,25 +276,25 @@ from PyTrack.Experiment import Experiment
 exp = Experiment(json_file="abcd/efgh/NTU_Experiment/NTU_Experiment.json")
 
 # Instantiate the meta_matrix_dict of an Experiment to find and extract all features from the raw data
-exp.metaMatrixInitialisation(standardise_flag=False,
+exp.metaMatrixInitialisation(standardise_flag=False,
 average_flag=False)
 
-# Calling the function for advanced statistical analysis of the data
+# Calling the function for advanced statistical analysis of the data
 # file_creation=True. Hence, the output of the data used to run the tests and the output of the tests will be stored in in the 'Results' folder inside your experiment folder
 
 #############################################################
 ## 1. Running anova on advanced between and within factors ##
 #############################################################
-exp.analyse(parameter_list={"all"},
+exp.analyse(parameter_list={"all"},
 between_factor_list=["Subject_type", "Gender"],
 within_factor_list=["Stimuli_type", "Brightness"],
-statistical_test="anova",
+statistical_test="anova",
 file_creation=True)
 
 #############################################################
 ## 2. Running no tests. Just storing analysis data in Results folder ##
 #############################################################
-exp.analyse(statistical_test="None",
+exp.analyse(statistical_test="None",
 file_creation=True)
 
 
@@ -304,11 +304,11 @@ subject_name = "Sub_001" #specify your own subject's name (must be in json file)
 stimulus_name = "Stim_1" #specify your own stimulus name (must be in json file)
 
 # Access metadata dictionary for particular subject and stimulus
-single_meta = exp.getMetaData(sub=subject_name,
+single_meta = exp.getMetaData(sub=subject_name,
 stim=stimulus_name)
 
 # Access metadata dictionary for particular subject and averaged for stimulus types
-agg_type_meta = exp.getMetaData(sub=subject_name,
+agg_type_meta = exp.getMetaData(sub=subject_name,
 stim=None)
 
 ```
@@ -329,10 +329,10 @@ import numpy as np
 
 # function to convert data to generate csv file for data file recorded using EyeLink on both eyes and the stimulus name specified in the message section
 generateCompatibleFormat(exp_path="/path/to/data/file/in/raw/format",
-device="eyelink",
-stim_list_mode='NA',
-start='start_trial',
-stop='stop_trial',
+device="eyelink",
+stim_list_mode='NA',
+start='start_trial',
+stop='stop_trial',
 eye='B')
 
 df = pd.read_csv("/path/to/enerated/data/file/in/csv/format")
@@ -353,7 +353,7 @@ sensor_dict = {
 
 # Creating Stimulus object. See the documentation for advanced parameters.
 stim = Stimulus(path="path/to/experiment/folder",
-data=df,
+data=df,
 sensor_names=sensor_dict)
 
 # Some functionality usage. See documentation of Stimulus class for advanced use.
````

docs/Introduction.rst (22 additions, 22 deletions)

Mirrors the README changes: the sample data link is updated and the remaining edits are whitespace-only.
```diff
@@ -65,7 +65,7 @@ Running the tests
 -----------------
 
 In order to test **PyTrack**, some sample data files can be found
-`here <https://drive.google.com/open?id=1N9ZrTO6Bikx3aI7BKivSFAp3vrLxSCM6>`__.
+`here <https://drive.google.com/open?id=1tWD69hurELVuVRFzizCbukWnr22RZrnp>`__.
 
 To get started, first you need to choose which design you want to run
 the framework in. If you wish to use the *Experiment Design*, see
@@ -94,7 +94,7 @@ for the listed directories to be present for the proper functioning of
 │ │ subject_001.[asc/txt/tsv/...]
 │ │ subject_002.[asc/txt/tsv/...]
 | |__ ......
-
+
 └── Stimulus/
 │ │ stim_1.[jpg/jpeg]
 │ │ stim_2.[jpg/jpeg]
@@ -272,10 +272,10 @@ for a detailed understanding of each function.
 
 # function to convert data to generate database in base format for experiment done using EyeLink on both eyes and the stimulus name specified in the message section
 generateCompatibleFormat(exp_path="abcd/efgh/NTU_Experiment/",
-device="eyelink",
-stim_list_mode='NA',
-start='start_trial',
-stop='stop_trial',
+device="eyelink",
+stim_list_mode='NA',
+start='start_trial',
+stop='stop_trial',
 eye='B')
 
 **Running the analysis or extracting data:**
@@ -288,14 +288,14 @@ for a detailed understanding of each function.
 
 exp = Experiment(json_file="abcd/efgh/NTU_Experiment/NTU_Experiment.json")
 # Instantiate the meta_matrix_dict of an Experiment to find and extract all features from the raw data
-exp.metaMatrixInitialisation(standardise_flag=False,
+exp.metaMatrixInitialisation(standardise_flag=False,
 average_flag=False)
 
 # Calling the function for the statistical analysis of the data
 # file_creation=True. Hence, the output of the data used to run the tests and the output of the tests will be stored in in the 'Results' folder inside your experiment folder
-exp.analyse(parameter_list={"all"},
-between_factor_list=["Subject_type"],
-within_factor_list=["Stimuli_type"], statistical_test="Mixed_anova",
+exp.analyse(parameter_list={"all"},
+between_factor_list=["Subject_type"],
+within_factor_list=["Stimuli_type"], statistical_test="Mixed_anova",
 file_creation=True)
 
 **Visualizing the data:**
@@ -376,25 +376,25 @@ the same.
 exp = Experiment(json_file="abcd/efgh/NTU_Experiment/NTU_Experiment.json")
 
 # Instantiate the meta_matrix_dict of an Experiment to find and extract all features from the raw data
-exp.metaMatrixInitialisation(standardise_flag=False,
+exp.metaMatrixInitialisation(standardise_flag=False,
 average_flag=False)
 
-# Calling the function for advanced statistical analysis of the data
+# Calling the function for advanced statistical analysis of the data
 # file_creation=True. Hence, the output of the data used to run the tests and the output of the tests will be stored in in the 'Results' folder inside your experiment folder
 
 #############################################################
 ## 1. Running anova on advanced between and within factors ##
 #############################################################
-exp.analyse(parameter_list={"all"},
+exp.analyse(parameter_list={"all"},
 between_factor_list=["Subject_type", "Gender"],
 within_factor_list=["Stimuli_type", "Brightness"],
-statistical_test="anova",
+statistical_test="anova",
 file_creation=True)
 
 #############################################################
 ## 2. Running no tests. Just storing analysis data in Results folder ##
 #############################################################
-exp.analyse(statistical_test="None",
+exp.analyse(statistical_test="None",
 file_creation=True)
 
 
@@ -404,11 +404,11 @@ the same.
 stimulus_name = "Stim_1" #specify your own stimulus name (must be in json file)
 
 # Access metadata dictionary for particular subject and stimulus
-single_meta = exp.getMetaData(sub=subject_name,
+single_meta = exp.getMetaData(sub=subject_name,
 stim=stimulus_name)
 
 # Access metadata dictionary for particular subject and averaged for stimulus types
-agg_type_meta = exp.getMetaData(sub=subject_name,
+agg_type_meta = exp.getMetaData(sub=subject_name,
 stim=None)
 
 Stand-alone Design
@@ -431,10 +431,10 @@ data for only 1 subject on a particular stimulus. If not, look at
 
 # function to convert data to generate csv file for data file recorded using EyeLink on both eyes and the stimulus name specified in the message section
 generateCompatibleFormat(exp_path="/path/to/data/file/in/raw/format",
-device="eyelink",
-stim_list_mode='NA',
-start='start_trial',
-stop='stop_trial',
+device="eyelink",
+stim_list_mode='NA',
+start='start_trial',
+stop='stop_trial',
 eye='B')
 
 df = pd.read_csv("/path/to/enerated/data/file/in/csv/format")
@@ -455,7 +455,7 @@ data for only 1 subject on a particular stimulus. If not, look at
 
 # Creating Stimulus object. See the documentation for advanced parameters.
 stim = Stimulus(path="path/to/experiment/folder",
-data=df,
+data=df,
 sensor_names=sensor_dict)
 
 # Some functionality usage. See documentation of Stimulus class for advanced use.
```

requirements.txt (6 additions, 6 deletions)

Exact version pins are dropped from numpy, scipy, pingouin, pandas, matplotlib and statsmodels; sqlalchemy and Pillow were already unpinned.
```diff
@@ -1,8 +1,8 @@
-numpy==1.16.2
-scipy==1.2.1
-pingouin==0.2.2
+numpy
+scipy
+pingouin
 sqlalchemy
-pandas==0.23.4
-matplotlib==3.0.2
-statsmodels==0.9.0
+pandas
+matplotlib
+statsmodels
 Pillow
```
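With the pins removed, CI and fresh installs will pull the latest release of each dependency. A quick local sanity check, assuming a POSIX shell with Python 3 on PATH, that the unpinned set still installs and imports together (PIL is the import name of the Pillow distribution):

```bash
# Install the now-unpinned requirements and verify every package imports.
pip3 install -r requirements.txt
python3 -c "import numpy, scipy, pingouin, sqlalchemy, pandas, matplotlib, statsmodels, PIL; print('dependencies OK')"
```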
