Commit 0a8a9d1

committed: updatehw01
1 parent d889028 commit 0a8a9d1

File tree

1 file changed (+4, -4 lines)


hw01.ipynb

Lines changed: 4 additions & 4 deletions
@@ -153,7 +153,7 @@
     "hidden": true
    },
    "source": [
-    "Our library will assume that you have *train* and *valid* directories. It also assumes that each dir will have subdirs for each class you wish to recognize (in this case, 'cats' and 'dogs')."
+    "Our library will assume that you have *train* and *valid* directories. It also assumes that each dir will have subdirs for each class you wish to recognize."
    ]
   },
   {
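The directory convention this cell describes can be sketched with the standard library alone. The class names here follow the commit's plastic/glass example; the `train`/`valid` layout with one subdirectory per class is the assumption the cell states:

```python
from pathlib import Path
import tempfile

# Build the layout the notebook describes: train/ and valid/ each
# contain one subdirectory per class you wish to recognize.
root = Path(tempfile.mkdtemp())
for split in ("train", "valid"):
    for cls in ("plastic", "glass"):
        (root / split / cls).mkdir(parents=True)

# The class list can then be recovered from the subdirectory names.
classes = sorted(p.name for p in (root / "train").iterdir())
print(classes)  # ['glass', 'plastic']
```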
@@ -341,7 +341,7 @@
    "\n",
    "We will be using the <b>resnet34</b> model. resnet34 is a version of the model that won the 2015 ImageNet competition. Here is more info on [resnet models](https://github.com/KaimingHe/deep-residual-networks). We'll be studying them in depth later, but for now we'll focus on using them effectively.\n",
    "\n",
-    "Here's how to train and evaluate a *dogs vs cats* model in 3 lines of code, and under 20 seconds:"
+    "Here's how to train and evaluate a *plastic vs glass* model in 3 lines of code, and under 20 seconds:"
    ]
   },
   {
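For context, the "3 lines" this cell refers to look roughly like the following in the fastai 0.7-era API the notebook uses. This is an illustrative sketch only: it assumes `PATH` (the data directory) and `sz` (the image size) are defined earlier in the notebook, and it needs fastai and the image data on disk to actually run.

```python
# Illustrative sketch, assuming fastai 0.7 with PATH and sz defined earlier.
arch = resnet34
data = ImageClassifierData.from_paths(PATH, tfms=tfms_from_model(arch, sz))
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.fit(0.01, 2)  # learning rate 0.01, 2 epochs
```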
@@ -485,7 +485,7 @@
    }
   ],
   "source": [
-    "# from here we know that 'cats' is label 0 and 'dogs' is label 1.\n",
+    "# from here we know that 'glass' is label 0 and 'plastic' is label 1.\n",
    "data.classes"
   ]
  },
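The label ordering the edited comment relies on falls out of the class names being sorted alphabetically (which is how class lists derived from subdirectory names are commonly ordered): 'glass' sorts before 'plastic', so it gets index 0. A minimal sketch, independent of any library:

```python
# Class labels come from the sorted class names; the position of each
# name in the sorted list is its numeric label.
classes = sorted(["plastic", "glass"])
labels = {name: idx for idx, name in enumerate(classes)}
print(labels)  # {'glass': 0, 'plastic': 1}
```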
@@ -1022,7 +1022,7 @@
   "source": [
    "If you try training for more epochs, you'll notice that we start to *overfit*, which means that our model is learning to recognize the specific images in the training set, rather than generalizing such that we also get good results on the validation set. One way to fix this is to effectively create more data, through *data augmentation*. This refers to randomly changing the images in ways that shouldn't impact their interpretation, such as horizontal flipping, zooming, and rotating.\n",
    "\n",
-    "We can do this by passing `aug_tfms` (*augmentation transforms*) to `tfms_from_model`, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of dogs and cats, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions `transforms_side_on`. We can also specify random zooming of images up to a specified scale by adding the `max_zoom` parameter."
+    "We can do this by passing `aug_tfms` (*augmentation transforms*) to `tfms_from_model`, with a list of functions to apply that randomly change the image however we wish. For photos that are largely taken from the side (e.g. most photos of plastic and glass, as opposed to photos taken from the top down, such as satellite imagery) we can use the pre-defined list of functions `transforms_side_on`. We can also specify random zooming of images up to a specified scale by adding the `max_zoom` parameter."
   ]
  },
  {
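The core idea behind a side-on augmentation (horizontal flip allowed, vertical flip not) can be sketched without any library, using nested lists to stand in for images. The function names here are illustrative, not fastai's:

```python
import random

def hflip(img):
    """Mirror an image (a list of pixel rows) left-to-right."""
    return [row[::-1] for row in img]

def random_side_on(img, flip_p=0.5, rng=random):
    """Randomly apply a horizontal flip with probability flip_p.
    Side-on transforms avoid vertical flips, which would make
    ordinary photos look unnatural."""
    return hflip(img) if rng.random() < flip_p else img

img = [[1, 2, 3],
       [4, 5, 6]]
print(hflip(img))  # [[3, 2, 1], [6, 5, 4]]
```

Because the label of a photo is unchanged by mirroring it, each training image effectively yields extra distinct samples, which is what counters the overfitting described above.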
