diff --git a/chapters/en/chapter4/classification_models.mdx b/chapters/en/chapter4/classification_models.mdx
index f1dfb3f7..27e24318 100644
--- a/chapters/en/chapter4/classification_models.mdx
+++ b/chapters/en/chapter4/classification_models.mdx
@@ -147,6 +147,7 @@ and verify this is correct:
 
 ```
 from IPython.display import Audio
+classifier(sample["audio"].copy())
 Audio(sample["audio"]["array"], rate=sample["audio"]["sampling_rate"])
 ```
 
@@ -289,7 +290,7 @@ take that as our prediction. Let's confirm whether we were right by listening to
 volume too high or else you might get a jump!):
 
 ```python
-Audio(audio, rate=16000)
+Audio(audio_sample, rate=16000)
 ```
 
 Perfect! We have the sound of a dog barking 🐕, which aligns with the model's prediction. Have a play with different audio
diff --git a/chapters/en/chapter4/fine-tuning.mdx b/chapters/en/chapter4/fine-tuning.mdx
index d9748a36..e6b0508a 100644
--- a/chapters/en/chapter4/fine-tuning.mdx
+++ b/chapters/en/chapter4/fine-tuning.mdx
@@ -40,7 +40,7 @@ GTZAN doesn't provide a predefined validation set, so we'll have to create one o
 genres, so we can use the `train_test_split()` method to quickly create a 90/10 split as follows:
 
 ```python
-gtzan = gtzan["train"].train_test_split(seed=42, shuffle=True, test_size=0.1)
+gtzan = gtzan.train_test_split(seed=42, shuffle=True, test_size=0.1)
 gtzan
 ```
 
@@ -109,6 +109,9 @@ This label looks correct, since it matches the filename of the audio file. Let's
 using Gradio to create a simple interface with the `Blocks` API:
 
 ```python
+import gradio as gr
+
+
 def generate_audio():
     example = gtzan["train"].shuffle()[0]
     audio = example["audio"]