Merge pull request #93 from KellenSunderland/P05_C01
Fixed some typos and a bug in P05-C01
zackchase authored Aug 7, 2017
2 parents 1dcd2cf + 92ad07c commit 0536450
Showing 1 changed file with 15 additions and 15 deletions.
30 changes: 15 additions & 15 deletions P05-C01-simple-rnn.ipynb
@@ -58,7 +58,7 @@
"source": [
"## Dataset: \"The Time Machine\" \n",
"\n",
"Now mess with some data. I grabbed a copy of the ``Time Machine``, mostly because it's available freely thanks to the good people at [Project Gutenberg](http://www.gutenberg.org) and a lot of people are tired of seeing RNNs generate Shakespeare. In case you prefer to torturing Shakespeare to torturing H.G. Wells, I've also included Andrej Karpathy's tinyshakespeare.txt in the data folder. Let's get started by reading in the data."
"Now mess with some data. I grabbed a copy of the ``Time Machine``, mostly because it's available freely thanks to the good people at [Project Gutenberg](http://www.gutenberg.org) and a lot of people are tired of seeing RNNs generate Shakespeare. In case you prefer torturing Shakespeare to torturing H.G. Wells, I've also included Andrej Karpathy's tinyshakespeare.txt in the data folder. Let's get started by reading in the data."
]
},
{
@@ -263,7 +263,7 @@
"source": [
"## One-hot representations\n",
"\n",
"We can use NDArray's one_hot() render a one-hot representation of each character. But frack it, since this is the from scratch tutorial, let's right this ourselves."
"We can use NDArray's one_hot() operation to render a one-hot representation of each character. But frack it, since this is the from scratch tutorial, let's write this ourselves."
]
},
{
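For readers following the diff, the from-scratch encoder this cell alludes to can be sketched in a few lines. This is a minimal NumPy illustration under assumed names (one_hots, vocab_size), not the notebook's exact code:

    import numpy as np

    def one_hots(indices, vocab_size):
        # one row per character id, with a single 1.0 in that id's column
        result = np.zeros((len(indices), vocab_size))
        result[np.arange(len(indices)), indices] = 1.0
        return result

    # one_hots(np.array([0, 2]), vocab_size=4)
    # -> [[1., 0., 0., 0.],
    #     [0., 0., 1., 0.]]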
@@ -431,7 +431,7 @@
"sequences_per_batch_row = int(np.floor(len(dataset))/batch_size)\n",
"print(sequences_per_batch_row)\n",
"data_rows = [dataset[i*sequences_per_batch_row:i*sequences_per_batch_row+sequences_per_batch_row] \n",
" for i in range(batch_size)]"
" for i in range(batch_size)]"
]
},
{
@@ -440,7 +440,7 @@
"collapsed": true
},
"source": [
"Let's sanity check that everything went the way we hop. For each data_row, the second sequence should follow the first:"
"Let's sanity check that everything went the way we hope. For each data_row, the second sequence should follow the first:"
]
},
{
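Both the slicing cell above and this sanity check are easy to see on a toy string (hypothetical stand-in data, not the notebook's variables):

    dataset = "the time machine, by h. g. wells"  # toy stand-in for the character list
    batch_size = 2
    sequences_per_batch_row = len(dataset) // batch_size
    data_rows = [dataset[i * sequences_per_batch_row:(i + 1) * sequences_per_batch_row]
                 for i in range(batch_size)]
    print(data_rows[0])  # 'the time machine'
    print(data_rows[1])  # ', by h. g. wells'
    # within a row, consecutive seq_length chunks are contiguous text
    seq_length = 4
    print(data_rows[0][:seq_length], data_rows[0][seq_length:2 * seq_length])  # 'the ' 'time'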
@@ -497,9 +497,9 @@
" # iterate over the sequences\n",
" for s in range(len(datasets[0])):\n",
" sequence = []\n",
" # iterate over the elements of the seqeunce\n",
" # iterate over the elements of the sequence\n",
" for elem in range(len(datasets[0][0])):\n",
" sequence.append(nd.concatenate([ds[s][elem].reshape((1,-1)) for ds in datasets], axis=0))\n",
" sequence.append(nd.concatenate([ds[s][elem].reshape((1, -1)) for ds in datasets], axis=0))\n",
" full_dataset.append(sequence)\n",
" return(full_dataset)\n",
" "
@@ -542,8 +542,8 @@
],
"source": [
"print(training_data[0][0].shape)\n",
"print(\"Seq 0, Batch 0 \\\"%s\\\"\" % textify([training_data[0][i][0].reshape((1,-1)) for i in range(seq_length)]))\n",
"print(\"Seq 1, Batch 0 \\\"%s\\\"\" % textify([training_data[1][i][0].reshape((1,-1)) for i in range(seq_length)]))"
"print(\"Seq 0, Batch 0 \\\"%s\\\"\" % textify([training_data[0][i][0].reshape((1, -1)) for i in range(seq_length)]))\n",
"print(\"Seq 1, Batch 0 \\\"%s\\\"\" % textify([training_data[1][i][0].reshape((1, -1)) for i in range(seq_length)]))"
]
},
{
@@ -601,8 +601,8 @@
}
],
"source": [
"print(textify([training_data[0][i][2].reshape((1,-1)) for i in range(seq_length)]))\n",
"print(textify([training_labels[0][i][2].reshape((1,-1)) for i in range(seq_length)]))"
"print(textify([training_data[0][i][2].reshape((1, -1)) for i in range(seq_length)]))\n",
"print(textify([training_labels[0][i][2].reshape((1, -1)) for i in range(seq_length)]))"
]
},
{
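These two prints compare a training sequence with its label at the same batch position. In the standard char-RNN setup (which this notebook presumably follows; the label construction isn't shown in this diff), labels are the inputs shifted one character ahead, so the second line should read like the first offset by one:

    text = "the time machine"
    seq_length = 4
    # hypothetical input/label split, labels shifted by one character
    inputs = [text[i:i + seq_length] for i in range(0, len(text) - 1, seq_length)]
    labels = [text[i + 1:i + 1 + seq_length] for i in range(0, len(text) - 1, seq_length)]
    print(inputs[0], "->", labels[0])  # 'the ' -> 'he t'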
@@ -737,9 +737,9 @@
],
"source": [
"####################\n",
"# With a temperature of 1 (always 1 during training), we get back some set of proabilities\n",
"# With a temperature of 1 (always 1 during training), we get back some set of probabilities\n",
"####################\n",
"softmax(nd.array([[1,-1],[-1,1]]), temperature=1000.0)"
"softmax(nd.array([[1, -1], [-1, 1]]), temperature=1.0)"
]
},
{
@@ -751,8 +751,8 @@
"data": {
"text/plain": [
"\n",
"[[ 0.50049996 0.49949998]\n",
" [ 0.49949998 0.50049996]]\n",
"[[ 0.88079703 0.11920292]\n",
" [ 0.11920292 0.88079703]]\n",
"<NDArray 2x2 @cpu(0)>"
]
},
@@ -789,7 +789,7 @@
],
"source": [
"####################\n",
"# Often we want to sample with low temperatures to produce sharp proababilities\n",
"# Often we want to sample with low temperatures to produce sharp probabilities\n",
"####################\n",
"softmax(nd.array([[10,-10],[-10,10]]), temperature=.1)"
]
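The bug this commit fixes is the temperature argument: softmax with temperature divides the logits by the temperature before exponentiating, so temperature=1000.0 flattens [1, -1] to the near-uniform ~0.5/0.5 the old output cell showed, while temperature=1.0 gives the sharp ~0.88/0.12 split in the corrected output, and temperature=.1 sharper still. A minimal NumPy sketch of such a softmax (the notebook's own implementation isn't shown in this diff, so this is an assumed equivalent):

    import numpy as np

    def softmax(y_linear, temperature=1.0):
        # scale logits by 1/temperature, then normalize row-wise
        lin = y_linear / temperature
        lin = lin - lin.max(axis=1, keepdims=True)  # for numerical stability
        exp = np.exp(lin)
        return exp / exp.sum(axis=1, keepdims=True)

    print(softmax(np.array([[1., -1.], [-1., 1.]]), temperature=1.0))
    # ~[[0.8808, 0.1192], [0.1192, 0.8808]] -- matches the corrected output cell
    print(softmax(np.array([[1., -1.], [-1., 1.]]), temperature=1000.0))
    # ~[[0.5005, 0.4995], [0.4995, 0.5005]] -- the stale output the old cell produced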