
Commit 7300472: chapter 11

1 parent a84b014

96 files changed: 2377 additions & 82 deletions


.github/FUNDING.yml

Lines changed: 1 addition & 0 deletions
```diff
@@ -0,0 +1 @@
+github: [HenryNdubuaku]
```

README.md

Lines changed: 8 additions & 7 deletions
````diff
@@ -24,13 +24,14 @@ Over the past years working in AI/ML, I filled notebooks with intuition first, r
 | 08 | [Computer Vision](chapter%2008%3A%20computer%20vision/01.%20image%20fundamentals.md) | image processing, object detection, segmentation, video processing, SLAM, CNNs, vision transformers, diffusion, flow matching, VR/AR | Available |
 | 09 | [Audio & Speech](chapter%2009%3A%20audio%20and%20speech/01.%20digital%20signal%20processing.md) | DSP, ASR, TTS, voice & acoustic activity detection, diarisation, source separation, active noise cancellation, wavenet, conformer | Available |
 | 10 | [Multimodal Learning](chapter%2010%3A%20multimodal%20learning/01.%20multimodal%20representations.md) | fusion strategies, contrastive learning, CLIP, VLMs, image/video tokenisation, cross-modal generation, unified architectures, world models | Available |
-| 11 | Autonomous Systems | perception, robot learning, VLAs, self-driving cars, space robots | Coming |
-| 12 | Computing & OS | discrete maths, computer architecture, operating systems, RAM, concurrency, parallelism, programming languages | Coming |
-| 13 | Data Structures & Algorithms | arrays, trees, graphs, search, sorting, hashmaps | Coming |
-| 14 | SIMD & GPU Programming | ARM & NEON, x86 chips, RISC chips, GPUs, TPUs, Triton, CUDA, Vulkan | Coming |
-| 15 | Systems Design | systems design fundamentals, cloud computing, large-scale infra, ML systems design examples | Coming |
-| 16 | Inference | quantisation, streaming LLMs, continuous batching, edge inference | Coming |
-| 17 | Intersecting Fields | quantum ML, neuromorphic ML, AI for finance, AI for bio | Coming |
+| 11 | [Autonomous Systems](chapter%2011%3A%20autonomous%20systems/01.%20perception.md) | perception, robot learning, VLAs, self-driving cars, space robots | Available |
+| 12 | Graph Neural Networks | discrete maths, computer architecture, operating systems, RAM, concurrency, parallelism, programming languages | Coming |
+| 13 | Computing & OS | discrete maths, computer architecture, operating systems, RAM, concurrency, parallelism, programming languages | Coming |
+| 14 | Data Structures & Algorithms | arrays, trees, graphs, search, sorting, hashmaps | Coming |
+| 15 | SIMD & GPU Programming | ARM & NEON, x86 chips, RISC chips, GPUs, TPUs, Triton, CUDA, Vulkan | Coming |
+| 16 | ML Systems Design | systems design fundamentals, cloud computing, large-scale infra, ML systems design examples | Coming |
+| 17 | AI Inference | quantisation, streaming LLMs, continuous batching, edge inference | Coming |
+| 18 | Intersecting Fields | quantum ML, neuromorphic ML, AI for finance, AI for bio | Coming |

 ## Citation
 ```bibtex
````

chapter 01: vectors/02. vector properties.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Vector Properties

-*Vector properties describe the geometric and algebraic characteristics that define how vectors behave. This file covers magnitude, direction, unit vectors, equality, parallelism, orthogonality, and linear independence -- the building blocks of every ML feature space.*
+*Vector properties describe the geometric and algebraic characteristics that define how vectors behave. This file covers magnitude, direction, unit vectors, equality, parallelism, orthogonality, and linear independence, the building blocks of every ML feature space.*

 - The **magnitude** (or length) of a vector tells you *how far* it reaches. Think of it as the length of the arrow. For a vector $\mathbf{a} = (a_1, a_2, a_3)$, its magnitude is:
```
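The hunk cuts off just before the magnitude formula, which from the setup is $\|\mathbf{a}\| = \sqrt{a_1^2 + a_2^2 + a_3^2}$. As a companion to the summary, here is a minimal NumPy sketch of the properties it lists; the chapter's own code is not shown in this diff, so the vectors and names below are illustrative.

```python
import numpy as np

a = np.array([3.0, 4.0, 0.0])

# Magnitude: ||a|| = sqrt(a1^2 + a2^2 + a3^2)
magnitude = np.linalg.norm(a)            # 5.0

# Unit vector: same direction, length 1
a_hat = a / magnitude                    # [0.6, 0.8, 0.0]

# Orthogonality: two vectors are orthogonal when their dot product is 0
b = np.array([-4.0, 3.0, 0.0])
print(np.dot(a, b))                      # 0.0

# Linear independence: stack the vectors and check the rank
M = np.vstack([a, b])
print(np.linalg.matrix_rank(M) == 2)     # True -> a and b are independent
```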

chapter 01: vectors/03. norms and metrics.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Metrics and Norms

-*Norms measure the size of a vector; metrics measure the distance between two vectors. This file covers L1, L2, and L-infinity norms, Euclidean and cosine distance, and why choosing the right distance function is critical for k-NN, clustering, and retrieval in ML.*
+*Norms measure the size of a vector; metrics measure the distance between two vectors. This file covers L1, L2, and L-infinity norms, Euclidean and cosine distance, and why choosing the right distance function is critical for kNN, clustering, and retrieval in ML.*

 - We know vectors have magnitude and direction. But how do we actually measure "how big" a single vector is, or "how far apart" two vectors are? This is where **norms** and **metrics** come in.
```
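Since only the file's preamble survives in this hunk, here is a small NumPy sketch of the norms and metrics the summary names; the vectors are illustrative, not taken from the chapter.

```python
import numpy as np

x = np.array([1.0, -2.0, 2.0])
y = np.array([0.0,  1.0, 2.0])

# Norms measure the size of one vector
l1   = np.linalg.norm(x, ord=1)       # |1| + |-2| + |2| = 5.0
l2   = np.linalg.norm(x)              # sqrt(1 + 4 + 4)  = 3.0
linf = np.linalg.norm(x, ord=np.inf)  # max absolute entry = 2.0

# Metrics measure the distance between two vectors
euclidean = np.linalg.norm(x - y)     # L2 norm of the difference
cosine_dist = 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
```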

chapter 01: vectors/04. products.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Vector Products

-*Vector products are the fundamental operations for measuring similarity and computing projections. This file covers inner products, the dot product, cosine similarity, the cross product, and outer products -- operations that power attention mechanisms, embeddings, and geometric reasoning in AI.*
+*Vector products are the fundamental operations for measuring similarity and computing projections. This file covers inner products, the dot product, cosine similarity, the cross product, and outer products, operations that power attention mechanisms, embeddings, and geometric reasoning in AI.*

 - We have seen how to add and scale vectors. But can we *multiply* two vectors together? It turns out there is more than one way to do it, and each answers a different question.
```
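A minimal NumPy sketch of the products the summary lists: the dot product and cosine similarity (the similarity scores behind attention and embeddings), plus the cross and outer products. The arrays are illustrative, not from the chapter.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Dot product: scalar measure of alignment
dot = np.dot(a, b)                                       # 32.0

# Cosine similarity: dot product of the unit vectors
cos_sim = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# Cross product (3D only): a vector perpendicular to both inputs
cross = np.cross(a, b)                                   # [-3., 6., -3.]

# Outer product: a rank-1 matrix built from two vectors
outer = np.outer(a, b)                                   # shape (3, 3)
```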

chapter 01: vectors/05. basis and duality.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Basis and Duality

-*Bases define the coordinate systems of vector spaces, and duality reveals how linear functions act on vectors. This file covers linear independence, spanning sets, change of basis, dual spaces, and covectors -- concepts behind PCA, feature transforms, and attention queries in ML.*
+*Bases define the coordinate systems of vector spaces, and duality reveals how linear functions act on vectors. This file covers linear independence, spanning sets, change of basis, dual spaces, and covectors, concepts behind PCA, feature transforms, and attention queries in ML.*

 - We have seen that vectors live in spaces with a certain number of dimensions. But what defines those dimensions? This is where **basis vectors** come in.
```
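As a sketch of the summary's ideas under the usual conventions (basis vectors stored as matrix columns, covectors acting as row vectors), here is a hypothetical NumPy example; the basis and values are illustrative, not from the chapter.

```python
import numpy as np

# Columns of B are the new basis vectors, expressed in the standard basis
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# A valid basis is linearly independent: nonzero determinant
assert not np.isclose(np.linalg.det(B), 0.0)

# Change of basis: find coordinates c such that B @ c = v
v = np.array([3.0, 2.0])      # coordinates in the standard basis
c = np.linalg.solve(B, v)     # [1., 2.] -> coordinates in the B basis

# A covector (dual vector) is a linear function on vectors:
# it takes v and returns a scalar
f = np.array([2.0, -1.0])
print(f @ v)                  # 4.0
```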

chapter 02: matrices/01. matrix properties.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Matrix Properties

-*Matrices are the data structures that store datasets, encode transformations, and define every neural network layer. This file covers matrix dimensions, elements, transpose, trace, determinant, inverse, rank, and null space -- the foundational properties used throughout linear algebra and ML.*
+*Matrices are the data structures that store datasets, encode transformations, and define every neural network layer. This file covers matrix dimensions, elements, transpose, trace, determinant, inverse, rank, and null space, the foundational properties used throughout linear algebra and ML.*

 - At its core, a **matrix** is a rectangular grid of numbers arranged in rows and columns. If a vector is a single list of numbers, a matrix is a table of them.
```
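A compact NumPy sketch of the properties the summary lists (transpose, trace, determinant, inverse, rank); the 2x2 matrix is illustrative, not from the chapter.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(A.T)                       # transpose: rows and columns swapped
print(np.trace(A))               # 5.0, sum of the diagonal
print(np.linalg.det(A))          # 5.0, nonzero -> A is invertible
print(np.linalg.inv(A) @ A)      # the 2x2 identity, up to rounding
print(np.linalg.matrix_rank(A))  # 2: full rank, so the null space is {0}
```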

chapter 02: matrices/02. matrix types.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Matrix Types

-*Special matrix structures unlock computational shortcuts and mathematical guarantees. This file covers identity, diagonal, symmetric, triangular, orthogonal, positive definite, sparse, and stochastic matrices -- types that appear in covariance estimation, graph algorithms, regularisation, and Markov chains.*
+*Special matrix structures unlock computational shortcuts and mathematical guarantees. This file covers identity, diagonal, symmetric, triangular, orthogonal, positive definite, sparse, and stochastic matrices, types that appear in covariance estimation, graph algorithms, regularisation, and Markov chains.*

 - Not all matrices are the same. Different structures give matrices special properties that make them faster to compute with, easier to reason about, or both. Here are the types you will encounter most.
```
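A short NumPy sketch constructing and checking the special types named in the summary; the example matrices are illustrative, not from the chapter.

```python
import numpy as np

I = np.eye(3)                 # identity
D = np.diag([1.0, 2.0, 3.0])  # diagonal

# Symmetric: equal to its own transpose
S = np.array([[2.0, 1.0], [1.0, 2.0]])
assert np.allclose(S, S.T)

# Orthogonal: Q.T @ Q = I (here a 90-degree rotation)
Q = np.array([[0.0, -1.0], [1.0, 0.0]])
assert np.allclose(Q.T @ Q, np.eye(2))

# Positive definite: symmetric with strictly positive eigenvalues
assert np.all(np.linalg.eigvalsh(S) > 0)

# Row-stochastic: non-negative entries, each row sums to 1 (Markov chains)
P = np.array([[0.9, 0.1], [0.5, 0.5]])
assert np.allclose(P.sum(axis=1), 1.0)
```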

chapter 02: matrices/03. operations.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Matrix Operations

-*Matrix operations are the computational engine of deep learning. This file covers matrix addition, scalar multiplication, matrix-vector products, matrix multiplication, element-wise operations, Kronecker products, and broadcasting -- the operations behind every forward pass and gradient update.*
+*Matrix operations are the computational engine of deep learning. This file covers matrix addition, scalar multiplication, matrix-vector products, matrix multiplication, element-wise operations, Kronecker products, and broadcasting, the operations behind every forward pass and gradient update.*

 - Matrices can be added and scaled just like vectors.
```
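A minimal NumPy sketch of the operations the summary lists, including the easy-to-miss distinction between matrix multiplication and the element-wise (Hadamard) product; the matrices are illustrative.

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
x = np.array([1.0, -1.0])

print(A + B)          # matrix addition, element by element
print(2 * A)          # scalar multiplication
print(A @ x)          # matrix-vector product: [-1., -1.]
print(A @ B)          # matrix multiplication (row-by-column)
print(A * B)          # Hadamard product: element-wise, NOT matmul
print(np.kron(A, B))  # Kronecker product, shape (4, 4)
print(A + x)          # broadcasting: x is added to every row of A
```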

chapter 02: matrices/04. linear transformations.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -1,6 +1,6 @@
 # Linear Transformations

-*Every matrix multiplication is a linear transformation -- a function that reshapes, rotates, or projects vectors while preserving linearity. This file covers rotation, reflection, scaling, shearing, projection, the kernel and image of a map, and how neural network layers chain these transformations.*
+*Every matrix multiplication is a linear transformation, a function that reshapes, rotates, or projects vectors while preserving linearity. This file covers rotation, reflection, scaling, shearing, projection, the kernel and image of a map, and how neural network layers chain these transformations.*

 - A **linear transformation** (or linear map) is a function that takes a vector and produces another vector, while preserving addition and scaling. If $T$ is linear, then:
```
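The bullet cuts off at the linearity condition, which from the setup is $T(a\mathbf{u} + b\mathbf{w}) = aT(\mathbf{u}) + bT(\mathbf{w})$. Here is a small NumPy sketch checking that condition for a rotation and showing a projection together with its kernel; the matrices are illustrative, not from the chapter.

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees

# Rotation matrix: every matrix-vector product is a linear map
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])
print(R @ v)  # ~[0., 1.]: the x-axis lands on the y-axis

# Linearity: T(a*u + b*w) == a*T(u) + b*T(w)
u, w = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
a, b = 2.0, -1.0
assert np.allclose(R @ (a * u + b * w), a * (R @ u) + b * (R @ w))

# Projection onto the x-axis: its kernel (null space) is the y-axis
P = np.array([[1.0, 0.0], [0.0, 0.0]])
print(P @ np.array([0.0, 5.0]))  # [0., 0.]
```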
