Text-Generation-using-LSTM

TOPIC:

To develop a simple LSTM network that learns sequences of characters from Alice in Wonderland. These experiments are not limited to text; you can also experiment with other ASCII data, such as computer source code, documents marked up in LaTeX, HTML, or Markdown, and more.
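As a rough illustration, a minimal character-level LSTM in Keras might look like the sketch below. The layer sizes, sequence length, and training settings are assumptions for illustration, not necessarily what this repo's scripts use:

```python
# Minimal character-level LSTM sketch (hypothetical sizes; not
# necessarily the exact architecture used in this repo).
import numpy as np
from keras.models import Sequential
from keras.layers import Input, LSTM, Dropout, Dense

SEQ_LEN = 100  # assumed window of characters fed to the network

# Load the corpus and build a character-to-index mapping.
text = open("wonderland.txt", encoding="utf-8").read().lower()
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

# Slice the text into overlapping input windows, each predicting
# the single character that follows it.
X, y = [], []
for i in range(len(text) - SEQ_LEN):
    X.append([char_to_idx[c] for c in text[i:i + SEQ_LEN]])
    y.append(char_to_idx[text[i + SEQ_LEN]])

# Scale integer indices to [0, 1] and one-hot encode the targets.
X = np.reshape(X, (len(X), SEQ_LEN, 1)) / float(len(chars))
y = np.eye(len(chars))[y]

model = Sequential([
    Input(shape=(SEQ_LEN, 1)),
    LSTM(256),
    Dropout(0.2),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, epochs=20, batch_size=128)
```

A single softmax over the character vocabulary turns next-character prediction into an ordinary multi-class classification problem, which is why categorical cross-entropy is the natural loss here.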

Files Included:

1. saved weights large network: saved model checkpoints for different epochs of the larger, slightly more complex model
2. saved weights small network: saved model checkpoints for different epochs of the smaller, simpler model
3. shakerpere corpus: this folder contains different acts by Shakespeare, classified by genre, in txt format. Feel free to use this text to generate Shakespeare-style text.
4. wonderland.txt: the text of the famous children's novel Alice in Wonderland. We used this book for our predictive text generation.

EXAMPLE:

Here is an example of the text generated by the neural network:
‘what is a larter ’ said the ming. ‘i dan to the thing the was to the thing the was to the thing the was a little said the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with the was oot a little sabbit with

Here we can see that the network is sometimes able to write well-formed sequences of words, like "to the thing", but it also makes spelling mistakes: "rabbit" is written as "sabbit" and "too" is written as "oot".
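For context, output like the sample above typically comes from a loop that predicts one character at a time and feeds it back in as input. A sketch, reusing the (assumed) model, text, and mappings from the earlier snippet:

```python
# Greedy generation sketch: repeatedly predict the next character and
# slide the window forward. Assumes `model`, `text`, `char_to_idx`,
# `chars`, and SEQ_LEN from the earlier sketch.
import numpy as np

idx_to_char = {i: c for c, i in char_to_idx.items()}

# Seed with a random window from the corpus.
start = np.random.randint(0, len(text) - SEQ_LEN)
pattern = [char_to_idx[c] for c in text[start:start + SEQ_LEN]]

generated = []
for _ in range(1000):
    x = np.reshape(pattern, (1, SEQ_LEN, 1)) / float(len(chars))
    probs = model.predict(x, verbose=0)[0]
    idx = int(np.argmax(probs))       # always pick the most likely char
    generated.append(idx_to_char[idx])
    pattern = pattern[1:] + [idx]     # slide the window by one character

print("".join(generated))
```

Note that always taking the argmax is what drives the network into the repetitive loop visible in the sample above; sampling with a temperature (tip 9 below) usually breaks such loops.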

Tips to improve network performance:

1. Predict fewer than 1,000 characters as output for a given seed.
2. Remove all punctuation from the source text, and therefore from the model's vocabulary.
3. Try one-hot encoding the input sequences (see the first sketch after this list).
4. Train the model on padded sentences rather than random sequences of characters.
5. Increase the number of training epochs to 100 or many hundreds.
6. Add dropout to the visible input layer and consider tuning the dropout percentage.
7. Tune the batch size; try a batch size of 1 as a (very slow) baseline and larger sizes from there.
8. Add more memory units to the layers and/or more layers.
9. Experiment with scale factors (temperature) when interpreting the prediction probabilities (see the second sketch after this list).
10. Change the LSTM layers to be “stateful” to maintain state across batches (see the third sketch after this list).
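Regarding tip 3, here is a minimal sketch of one-hot input encoding, reusing the (assumed) char_to_idx mapping from the first snippet; the helper function name is hypothetical:

```python
# One-hot input encoding sketch (tip 3): each character becomes a
# binary vector instead of a single scaled integer.
import numpy as np

def one_hot_windows(text, char_to_idx, seq_len):
    n_chars = len(char_to_idx)
    n_windows = len(text) - seq_len
    X = np.zeros((n_windows, seq_len, n_chars), dtype=np.float32)
    y = np.zeros((n_windows, n_chars), dtype=np.float32)
    for i in range(n_windows):
        for t, c in enumerate(text[i:i + seq_len]):
            X[i, t, char_to_idx[c]] = 1.0      # mark the character's slot
        y[i, char_to_idx[text[i + seq_len]]] = 1.0
    return X, y

# The LSTM input shape then becomes (seq_len, n_chars) instead of (seq_len, 1).
```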
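Regarding tip 9, temperature sampling rescales the predicted distribution before drawing a character; a small sketch (the sample helper is hypothetical):

```python
# Temperature sampling sketch (tip 9): low temperature -> conservative,
# repetitive text; high temperature -> more varied but noisier text.
import numpy as np

def sample(probs, temperature=1.0):
    probs = np.asarray(probs, dtype=np.float64)
    logits = np.log(probs + 1e-8) / temperature  # rescale in log space
    scaled = np.exp(logits) / np.sum(np.exp(logits))
    return int(np.random.choice(len(scaled), p=scaled))

# In the generation loop, replace np.argmax(probs) with sample(probs, 0.5).
```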
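And for tip 10, a stateful Keras LSTM carries its hidden state across batches instead of resetting it, which requires fixing the batch size up front; a sketch with assumed sizes:

```python
# Stateful LSTM sketch (tip 10). Sizes are assumptions.
from keras.models import Sequential
from keras.layers import Input, LSTM, Dense

BATCH = 64
model = Sequential([
    Input(batch_shape=(BATCH, 100, 1)),  # fixed batch size is required
    LSTM(256, stateful=True),
    Dense(47, activation="softmax"),     # 47 = assumed vocabulary size
])
model.compile(loss="categorical_crossentropy", optimizer="adam")

# With stateful=True, you reset the carried state yourself, e.g. between
# epochs: model.reset_states()
```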

Credits:

This repo is inspired by this post.
The Shakespeare corpus is downloaded from here.
Alice in Wonderland is downloaded from here.
