diff --git a/subtitles/en/41_text-embeddings-&-semantic-search.srt b/subtitles/en/41_text-embeddings-&-semantic-search.srt
index 51c9d9b29..5fc6dc369 100644
--- a/subtitles/en/41_text-embeddings-&-semantic-search.srt
+++ b/subtitles/en/41_text-embeddings-&-semantic-search.srt
@@ -194,12 +194,12 @@ average the token embeddings
 44
 00:01:49,650 --> 00:01:52,500
-which is called mean pooling
+which is called mean_pooling
 and this is what we do here.
 
 45
 00:01:53,370 --> 00:01:55,800
-With mean pooling the only
+With mean_pooling the only
 thing we need to make sure
 
 46
@@ -210,7 +210,7 @@ padding tokens in the average,
 47
 00:01:58,410 --> 00:02:01,860
 which is why you can see the
-attention mask being used here.
+attention_mask being used here.
 
 48
 00:02:01,860 --> 00:02:05,100
@@ -313,7 +313,7 @@ we take a small sample
 70
 00:02:56,070 --> 00:02:57,780
-from the SQUAD dataset and apply
+from the squad dataset and apply
 
 71
 00:02:57,780 --> 00:03:00,180
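The subtitles in the first two hunks describe mean pooling: averaging the token embeddings while using the attention mask so padding tokens are excluded from the average. Below is a minimal sketch of that step with the transformers library; the checkpoint name and example sentences are assumptions for illustration, not taken from the video.

import torch
from transformers import AutoModel, AutoTokenizer

# Checkpoint chosen for illustration only; any encoder that returns
# token-level hidden states works the same way with mean pooling.
model_ckpt = "sentence-transformers/multi-qa-mpnet-base-dot-v1"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
model = AutoModel.from_pretrained(model_ckpt)

def mean_pooling(model_output, attention_mask):
    # Token embeddings have shape (batch_size, seq_len, hidden_size)
    token_embeddings = model_output.last_hidden_state
    # Broadcast the attention mask so padding positions are zeroed out
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    # Average only over real (non-padding) tokens
    return (token_embeddings * mask).sum(1) / mask.sum(1).clamp(min=1e-9)

sentences = ["How can I load a dataset offline?", "Semantic search with embeddings"]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embeddings = mean_pooling(outputs, inputs["attention_mask"])
print(embeddings.shape)  # e.g. torch.Size([2, 768]) for an mpnet-base checkpoint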