This is an implementation of the supervised embedding model from the [Learning End-to-End Goal-Oriented Dialog] paper.
The results are close to those reported in the paper.
A note on the paper, written in Russian, is available here: link.
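For context, the supervised embedding model in the paper scores a candidate response `y` against an input `x` as `f(x, y) = (A phi(x))^T B phi(y)`, where `phi` is a bag-of-words featurizer, and the highest-scoring candidate is selected at test time. Below is a minimal NumPy sketch of the scoring step only; all sizes and names are illustrative and are not this repo's code.

```python
import numpy as np

# Sketch of the supervised embedding scorer: f(x, y) = (A phi(x))^T (B phi(y)),
# where phi(.) is a bag-of-words vector. Sizes are illustrative only.
rng = np.random.RandomState(0)
vocab_size, emb_dim = 1000, 32
A = rng.normal(scale=0.1, size=(emb_dim, vocab_size))  # embeds the input/context
B = rng.normal(scale=0.1, size=(emb_dim, vocab_size))  # embeds the candidate response

def bag_of_words(token_ids):
    """phi: turn a list of token ids into a bag-of-words count vector."""
    phi = np.zeros(vocab_size)
    for t in token_ids:
        phi[t] += 1.0
    return phi

def score(x_ids, y_ids):
    """Inner product of the embedded input and the embedded candidate."""
    return float(bag_of_words(x_ids) @ A.T @ (B @ bag_of_words(y_ids)))

# At test time the candidate response with the highest score is selected.
candidates = [[5, 7, 9], [2, 4], [11, 3, 8]]
context = [1, 5, 7]
best = max(range(len(candidates)), key=lambda i: score(context, candidates[i]))
print(best)
```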
- Python 3.6.0
- tensorflow 1.0.0
- Dialog bAbI Tasks Data 1-6 corpus, which can be downloaded via the link. Place this corpus in the `data/dialog-bAbI-tasks` directory.
All packages are listed in requirements.txt.
- Set up the environment.
- Run:
  `bin/train_all.sh`
- After approx. 1 hour, evaluate on the test set:
  `bin/test_all.sh`
Results as of 16.03.17. The table below reports per-response accuracy (%); a short sketch of how this metric is computed follows the table.
| Task | Supervised Embedding (Article) | Supervised Embedding (Ours) |
|------|--------------------------------|-----------------------------|
| T1: Issuing API calls | 100 | 99.6 |
| T2: Updating API calls | 68.4 | 68.4 |
| T3: Displaying options | 64.9 | 56.9 |
| T4: Providing information | 57.2 | 57.1 |
| T5: Full dialogs | 75.4 | 62.1 |
| T6: Dialog state tracking 2 | 22.6 | 10.8 |
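For reference, per-response accuracy counts a dialog turn as correct only if the predicted response exactly matches the gold response, averaged over all test turns. A minimal sketch, independent of this repo's code:

```python
def per_response_accuracy(predicted, gold):
    """Fraction of turns whose predicted response exactly matches the gold response."""
    assert len(predicted) == len(gold)
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Example: 3 of 4 responses match, i.e. 75.0% per-response accuracy.
acc = per_response_accuracy(
    ["api_call italian rome four", "ok", "here it is", "you are welcome"],
    ["api_call italian rome four", "ok", "here it is", "is there anything else"],
)
print(acc * 100)  # 75.0
```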
Open question:
- When training with `use_history=True`, should we test on a dataset pre-processed in the same way as the training set, or should we concatenate each model output at test time and build the history on the fly? A rough sketch of the second option is given after this list.
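Below is a minimal sketch of the second option, building the history on the fly from the model's own outputs at test time. All names are hypothetical; in particular `model.predict` is not this repo's API, it only stands in for however the trained model produces a response from a context string.

```python
def respond_on_the_fly(model, user_utterances, use_history=True):
    """Build the dialog history at test time from the model's own outputs.

    `model.predict` is a hypothetical interface mapping a context string to a
    response string; it stands in for the trained model's response selection.
    """
    history, responses = [], []
    for utterance in user_utterances:
        context = " ".join(history + [utterance]) if use_history else utterance
        response = model.predict(context)   # predicted (not gold) response for this turn
        history += [utterance, response]    # the history grows with model outputs
        responses.append(response)
    return responses
```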