Commit a076263

Merge pull request #87 from susnato/patch-5
Fix broken links in Chapter 4
2 parents c6b20d5 + 25f6549 commit a076263

3 files changed: +4 -4 lines changed

chapters/en/chapter4/classification_models.mdx

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ models for audio classification.
 Decoder-only models introduce unnecessary complexity to the task, since they assume that the outputs can also be a _sequence_
 of predictions (rather than a single class label prediction), and so generate multiple outputs. Therefore, they have slower
 inference speed and tend not to be used. Encoder-decoder models are largely omitted for the same reason. These architecture
-choices are analogous to those in NLP, where encoder-only models such as [BERT]((https://huggingface.co/blog/bert-101))
+choices are analogous to those in NLP, where encoder-only models such as [BERT](https://huggingface.co/blog/bert-101)
 are favoured for sequence classification tasks, and decoder-only models such as GPT reserved for sequence generation tasks.
 
 Now that we've recapped the standard transformer architecture for audio classification, let's jump into the different
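
For context, the passage above is about using encoder-only models for audio classification. A minimal sketch of what that looks like in practice with the Transformers `pipeline()`; the checkpoint name and audio path below are illustrative placeholders, not part of this commit:

```python
# Minimal sketch: audio classification with an encoder-only model via pipeline().
# The checkpoint and file path are placeholders chosen for illustration.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="MIT/ast-finetuned-audioset-10-10-0.4593",  # assumed encoder-only (AST) checkpoint
)

preds = classifier("example.wav")  # path to a local audio file
print(preds)  # list of {"label": ..., "score": ...} predictions
```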

chapters/en/chapter4/demo.mdx

Lines changed: 2 additions & 2 deletions
@@ -1,9 +1,9 @@
 # Build a demo with Gradio
 
-In this final section on audio classification, we'll build a [Gradio]((https://gradio.app)) demo to showcase the music
+In this final section on audio classification, we'll build a [Gradio](https://gradio.app) demo to showcase the music
 classification model that we just trained on the [GTZAN](https://huggingface.co/datasets/marsyas/gtzan) dataset. The first
 thing to do is load up the fine-tuned checkpoint using the `pipeline()` class - this is very familiar now from the section
-on [pre-trained models](../classification_models). You can change the `model_id` to the namespace of your fine-tuned model
+on [pre-trained models](classification_models). You can change the `model_id` to the namespace of your fine-tuned model
 on the Hugging Face Hub:
 
 ```python
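
A rough sketch of the demo this file goes on to describe, assuming a hypothetical fine-tuned checkpoint under your own Hub namespace (the `model_id` value is a placeholder, not taken from this commit):

```python
# Sketch of the Gradio demo: classify an uploaded audio clip with a
# fine-tuned GTZAN checkpoint loaded through pipeline().
import gradio as gr
from transformers import pipeline

model_id = "your-username/finetuned-gtzan"  # placeholder: your checkpoint on the Hub
pipe = pipeline("audio-classification", model=model_id)


def classify_audio(filepath):
    preds = pipe(filepath)
    # Gradio's Label component expects a {label: score} mapping
    return {p["label"]: p["score"] for p in preds}


demo = gr.Interface(
    fn=classify_audio,
    inputs=gr.Audio(type="filepath"),
    outputs=gr.Label(),
)
demo.launch()
```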

chapters/en/chapter4/fine-tuning.mdx

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ gtzan["train"][0]
 }
 ```
 
-As we saw in [Unit 1](chapter1/audio_data), the audio files are represented as 1-dimensional NumPy arrays,
+As we saw in [Unit 1](../chapter1/audio_data), the audio files are represented as 1-dimensional NumPy arrays,
 where the value of the array represents the amplitude at that timestep. For these songs, the sampling rate is 22,050 Hz,
 meaning there are 22,050 amplitude values sampled per second. We'll have to keep this in mind when using a pretrained model
 with a different sampling rate, converting the sampling rates ourselves to ensure they match. We can also see the genre
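
A small sketch of the sampling-rate handling described in that passage, assuming the GTZAN dataset is loaded with the Datasets library as in the chapter (the 16 kHz target rate is chosen purely for illustration):

```python
# Sketch: inspect the 22,050 Hz GTZAN audio and cast it to a different
# sampling rate so it matches a pretrained model's expected rate.
from datasets import Audio, load_dataset

gtzan = load_dataset("marsyas/gtzan", "all")  # assumes the same dataset/config as the chapter

sample = gtzan["train"][0]["audio"]
print(sample["sampling_rate"])  # 22050
print(sample["array"].shape)    # 1-dimensional NumPy array of amplitudes

# Resample on the fly by casting the audio column to the target rate (16 kHz here, as an example)
gtzan = gtzan.cast_column("audio", Audio(sampling_rate=16_000))
print(gtzan["train"][0]["audio"]["sampling_rate"])  # 16000
```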
