
Updating "Classifying Names with a Character-Level RNN" #2954

Merged: 51 commits, Dec 11, 2024

Conversation

mgs28 (Contributor) commented Jun 24, 2024

Fixes #1166

Description

Updating Sean's excellent RNN classification tutorial, which is now 8 years old and missing some newer PyTorch functionality.

  • The default DataLoader cannot be used to create batches because "stack expects each tensor to be of equal size" and the names have different lengths. The code was updated to use mini-batches without DataLoader functionality.
  • Introduced PyTorch's Dataset class and showed how to split the data into train and test sets, which changes the training explanation (see the sketch after this list).
  • Rewrote pieces of the tutorial around three classes to improve re-use (Data, DataSet and RNN).
  • Added more explanation of how RNNs score multi-character strings and their 2D matrix of tensors.
  • Changed evaluation from random training examples to the entire test set.
  • Removed some of the command-line explanations since notebooks are used more often.
  • Tried to preserve as much of the original text, functions, and style as possible.
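A minimal sketch of the Dataset-based split described above, assuming the data has already been tensorized into (label, name) pairs; the class name, split ratio, and seed are illustrative rather than the tutorial's exact code:

    import torch
    from torch.utils.data import Dataset, random_split

    class NamesDataset(Dataset):
        # Wraps pre-tensorized (label_tensor, name_tensor) pairs.
        def __init__(self, samples):
            self.samples = samples

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            return self.samples[idx]

    # Split into train and test sets so evaluation uses held-out data, e.g.:
    # dataset = NamesDataset(samples)
    # train_set, test_set = random_split(
    #     dataset, [0.85, 0.15], generator=torch.Generator().manual_seed(2024))

Note that random_split accepts fractional lengths only in recent PyTorch releases; older versions need absolute counts.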

Checklist

  • The issue that is being fixed is referenced in the description (see above "Fixes #ISSUE_NUMBER")
  • Only one issue is addressed in this pull request
  • Labels from the issue that this PR is fixing are added to this pull request
  • No unnecessary issues are included in this pull request.

cc @albanD


pytorch-bot bot commented Jun 24, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/2954

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit e31a141 with merge base a91f631:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

facebook-github-bot (Contributor) commented:

Hi @mgs28!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@mgs28 mgs28 closed this Jun 24, 2024
@mgs28 mgs28 reopened this Jun 24, 2024
@mgs28 mgs28 marked this pull request as ready for review June 24, 2024 19:08
mgs28 (Contributor, Author) commented Jun 26, 2024

@svekars - it looks like you are very active in this repo; any chance you could help me with this? Thanks!

mgs28 (Contributor, Author) commented Jun 28, 2024

Added functionality to process training data in mini-batches, as the original issue requested. However, I had to use numpy + random to create batch indices from a given dataset (see the sketch below).

Also simplified training so it more closely matches https://pytorch.org/tutorials/beginner/basics/optimization_tutorial.html
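A rough sketch of that batching approach, assuming a Dataset-like object that can be indexed; the function name and parameters are illustrative:

    import numpy as np

    # Draw random mini-batch index arrays with numpy, since the default
    # DataLoader collate cannot stack the variable-length name tensors.
    def random_batch_indices(dataset_size, batch_size, n_batches, seed=None):
        rng = np.random.default_rng(seed)
        return [rng.choice(dataset_size, size=batch_size, replace=False)
                for _ in range(n_batches)]

    # Example use (hypothetical names):
    # batches = random_batch_indices(len(train_set), batch_size=64, n_batches=100)
    # first_batch = [train_set[int(i)] for i in batches[0]]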

mgs28 (Contributor, Author) commented Jul 8, 2024

@svekars - would you please help me with this tutorial update pull request or point me to someone who could?

svekars (Contributor) commented Jul 8, 2024

cc: @spro

mgs28 (Contributor, Author) commented Jul 10, 2024

Sorry about the spelling errors! I ran pyspelling and re-ran make html for the tutorial. This should pass those CI steps now.

I also filed an issue (#2969) to come back and update CONTRIBUTING.md to include some of these checks.

Thanks @spro @svekars !

mgs28 (Contributor, Author) commented Jul 11, 2024

@spro and @svekars - I significantly cut the training time, although it is currently faster on my CPU than my GPU; it runs in 72 seconds on my local CPU. I also added a default device setting so it looks for CUDA on your build machines, which will hopefully make it faster (a sketch follows below).

Thanks!
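A small sketch of that device selection (variable names are illustrative, not necessarily the tutorial's exact code):

    import torch

    # Prefer CUDA when the build machine has it; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print(f"Using device: {device}")
    # The model and tensors are then moved with .to(device).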

@@ -3,6 +3,7 @@
NLP From Scratch: Classifying Names with a Character-Level RNN
**************************************************************
**Author**: `Sean Robertson <https://github.com/spro>`_
+**Updated**: `Matthew Schultz <https://github.com/mgs28>`_
Contributor: We typically don't add "Updated". Please remove this.

Contributor (Author): Done!

Contributor (Author): @svekars - anything else I can do to make this better? Thanks!

Contributor (Author): @svekars - hello, are there other items I should address here? I appreciate your help with this!

@svekars added the "core" label (Tutorials of any level of difficulty related to the core pytorch functionality) on Sep 4, 2024
mgs28 (Contributor, Author) commented Oct 31, 2024

@svekars - Sorry on two accounts:

  1. Sorry for the delay... Some personal stuff came up and tonight was the earliest I could address it.
  2. I made reviewing your editing changes harder: I manually made some of your changes via a commit and accepted others here in the UI. I apologize for that; I was not thinking tonight.

I made one departure from your edits. You suggested removing a comma in "train text and validate our models", but when I re-read it I realized I actually meant "train, test and validate our models". I updated the text to read as such.

I apologize for my delay and for muddying up this review, but I greatly appreciate your feedback. All your points were great. Thanks!

mgs28 (Contributor, Author) commented Dec 3, 2024

@svekars - Happy Tuesday. Is there anything else I need to do to help merge this tutorial in?

Thanks!

svekars (Contributor) commented Dec 3, 2024

Looks good - it would be great to also fix these user warnings:

[Screenshot of the user warnings from the tutorial build, Dec 3, 2024]

mgs28 (Contributor, Author) commented Dec 3, 2024

@svekars - Fixed it. Sorry I missed that warning! Thanks

svekars (Contributor) commented Dec 6, 2024

@mgs28 I really want to merge this, but I wonder what happened in the update that increased this tutorial's build time from ~6 minutes to 22 minutes.

mgs28 (Contributor, Author) commented Dec 6, 2024

@svekars - I also want you to merge it in. :)

History: The training is different (batches instead of random samples) and if you look at the comments around July 11th, we tinkered with the parameters to make the confusion matrix better.

My best suggestion is to set device = cpu at the beginning of the script. With 10% of my laptop CPU, it takes about 4 minutes. Can you share some of the specs of the deployment machine? That might give me a bit more to dig into. Thanks!

svekars (Contributor) commented Dec 7, 2024


It's a 1 GPU machine. Can you try with device = cpu?

mgs28 (Contributor, Author) commented Dec 9, 2024

@svekars - good idea. I removed the device options at the top of the script. pyspelling passed and the built HTML looks good. Please let me know if I missed something. Thanks!

svekars (Contributor) commented Dec 9, 2024

Well, now it's 43 minutes...

mgs28 (Contributor, Author) commented Dec 9, 2024

@svekars - well... clearly that's the wrong direction. I submitted a change to speed it back up.

What's your goal?

In July and more recently, I compared the old code to this new version.

  • I added an optimizer (SGD) instead of the manual weight updates in the prior tutorial (see the training-step sketch at the end of this comment).
  • I am using datasets (the original reason for the update) instead of randomly selecting 100,000 examples.
  • Batches require training over 9.8x the data (55 iterations through 17,063 points = 938,465 examples). We're about 3x faster per example but training on more examples.
  • The original article tests on the training data; we changed this to withhold a separate test set. That makes the performance look worse, so we increased training time.

All of these changes make this tutorial more in line with best practices. To make the time goal, I can reduce training by a percentage.

  • Original training (estimated 22 minutes)
    accuracy = 0.796745240688324
    confusion matrix has a strong diagonal

  • Half training (estimated 11 minutes)
    accuracy = 0.7974094748497009
    confusion matrix has a less strong diagonal

  • Quarter training (estimated 5 minutes)
    accuracy = 0.8003985285758972
    confusion matrix has no diagonal

I checked in the 11-minute version for you, which I think is a good compromise. Do you agree?
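For reference, a simplified sketch of the batched training step with the SGD optimizer mentioned above, assuming each batch is a list of (label_tensor, name_tensor) pairs from the Dataset; names and hyperparameters here are illustrative:

    import torch
    import torch.nn as nn

    def train_epoch(rnn, training_batches, optimizer, criterion=nn.NLLLoss()):
        for batch in training_batches:
            optimizer.zero_grad()
            batch_loss = torch.tensor(0.0)
            for label_tensor, name_tensor in batch:
                output = rnn(name_tensor)            # forward pass over one whole name
                batch_loss = batch_loss + criterion(output, label_tensor)
            (batch_loss / len(batch)).backward()     # average the loss over the batch
            optimizer.step()                         # SGD replaces the manual weight updates

    # optimizer = torch.optim.SGD(rnn.parameters(), lr=0.15)  # hypothetical learning rate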

svekars (Contributor) commented Dec 10, 2024

Agreed on the 11-minute compromise - thank you for adjusting! Please add a note that the number of epochs is reduced for the purposes of the example, and that we recommend setting it to a larger number for higher accuracy.

mgs28 (Contributor, Author) commented Dec 10, 2024

@svekars - good idea and done. I added a comment where training is specified (along the lines of the sketch below). The first item in the exercises at the end also mentions hyperparameter tuning as a good place to start. Does that work? Thanks!
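The added note would look roughly like this (the epoch count shown is a placeholder, not the merged tutorial's value):

    # NOTE: the number of epochs is reduced here to keep the tutorial's build
    # time reasonable; set it to a larger number for higher accuracy.
    n_epochs = 10  # hypothetical placeholder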

@svekars svekars merged commit bdd15ed into pytorch:main Dec 11, 2024
20 checks passed
Labels: cla signed, core (Tutorials of any level of difficulty related to the core pytorch functionality)
Projects: None yet
Development: Successfully merging this pull request may close these issues: Char RNN classification with batch size
4 participants