chapter 01: vectors/02. vector properties.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Vector Properties

-*Vector properties describe the geometric and algebraic characteristics that define how vectors behave. This file covers magnitude, direction, unit vectors, equality, parallelism, orthogonality, and linear independence -- the building blocks of every ML feature space.*
+*Vector properties describe the geometric and algebraic characteristics that define how vectors behave. This file covers magnitude, direction, unit vectors, equality, parallelism, orthogonality, and linear independence, the building blocks of every ML feature space.*

 - The **magnitude** (or length) of a vector tells you *how far* it reaches. Think of it as the length of the arrow. For a vector $\mathbf{a} = (a_1, a_2, a_3)$, its magnitude is:
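The chapter goes on to state the magnitude formula $\|\mathbf{a}\| = \sqrt{a_1^2 + a_2^2 + a_3^2}$. A minimal NumPy sketch of that computation, with illustrative values not taken from the file:

```python
import numpy as np

# Magnitude (Euclidean length) of a = (a1, a2, a3).
a = np.array([3.0, 4.0, 12.0])
magnitude = np.linalg.norm(a)  # equivalent to np.sqrt(np.sum(a**2))
print(magnitude)               # 13.0, since sqrt(9 + 16 + 144) = 13
```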
chapter 01: vectors/03. norms and metrics.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Metrics and Norms

-*Norms measure the size of a vector; metrics measure the distance between two vectors. This file covers L1, L2, and L-infinity norms, Euclidean and cosine distance, and why choosing the right distance function is critical for k-NN, clustering, and retrieval in ML.*
+*Norms measure the size of a vector; metrics measure the distance between two vectors. This file covers L1, L2, and L-infinity norms, Euclidean and cosine distance, and why choosing the right distance function is critical for kNN, clustering, and retrieval in ML.*

 - We know vectors have magnitude and direction. But how do we actually measure "how big" a single vector is, or "how far apart" two vectors are? This is where **norms** and **metrics** come in.
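A short NumPy sketch of the norms and metrics the summary lists, on illustrative vectors:

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
y = np.array([4.0, 0.0, -1.0])

# Norms: how "big" a single vector is.
l1   = np.linalg.norm(x, ord=1)       # |1| + |-2| + |3| = 6.0
l2   = np.linalg.norm(x)              # sqrt(1 + 4 + 9) ~ 3.74
linf = np.linalg.norm(x, ord=np.inf)  # max(|1|, |-2|, |3|) = 3.0

# Metrics: how far apart two vectors are.
euclidean = np.linalg.norm(x - y)
cosine    = 1.0 - (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
```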
chapter 01: vectors/04. products.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Vector Products

-*Vector products are the fundamental operations for measuring similarity and computing projections. This file covers inner products, the dot product, cosine similarity, the cross product, and outer products -- operations that power attention mechanisms, embeddings, and geometric reasoning in AI.*
+*Vector products are the fundamental operations for measuring similarity and computing projections. This file covers inner products, the dot product, cosine similarity, the cross product, and outer products, operations that power attention mechanisms, embeddings, and geometric reasoning in AI.*

 - We have seen how to add and scale vectors. But can we *multiply* two vectors together? It turns out there is more than one way to do it, and each answers a different question.
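The products the summary names, sketched in NumPy on illustrative vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

dot     = u @ v                                          # scalar: 32.0
cos_sim = dot / (np.linalg.norm(u) * np.linalg.norm(v))  # cosine similarity in [-1, 1]
cross   = np.cross(u, v)                                 # [-3., 6., -3.], orthogonal to u and v
outer   = np.outer(u, v)                                 # 3x3 matrix of all pairwise products
```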
chapter 01: vectors/05. basis and duality.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Basis and Duality

-*Bases define the coordinate systems of vector spaces, and duality reveals how linear functions act on vectors. This file covers linear independence, spanning sets, change of basis, dual spaces, and covectors -- concepts behind PCA, feature transforms, and attention queries in ML.*
+*Bases define the coordinate systems of vector spaces, and duality reveals how linear functions act on vectors. This file covers linear independence, spanning sets, change of basis, dual spaces, and covectors, concepts behind PCA, feature transforms, and attention queries in ML.*

 - We have seen that vectors live in spaces with a certain number of dimensions. But what defines those dimensions? This is where **basis vectors** come in.
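A small NumPy sketch of two basic basis questions, using an illustrative matrix: are the candidate basis vectors linearly independent, and what are a vector's coordinates in that basis?

```python
import numpy as np

# Columns of B are candidate basis vectors for R^2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Full rank <=> columns are linearly independent <=> they form a basis.
print(np.linalg.matrix_rank(B))  # 2

# Coordinates of x in this basis: solve B @ c = x.
x = np.array([3.0, 2.0])
c = np.linalg.solve(B, x)        # [1., 2.]: x = 1*b1 + 2*b2
```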
chapter 02: matrices/01. matrix properties.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Matrix Properties

-*Matrices are the data structures that store datasets, encode transformations, and define every neural network layer. This file covers matrix dimensions, elements, transpose, trace, determinant, inverse, rank, and null space -- the foundational properties used throughout linear algebra and ML.*
+*Matrices are the data structures that store datasets, encode transformations, and define every neural network layer. This file covers matrix dimensions, elements, transpose, trace, determinant, inverse, rank, and null space, the foundational properties used throughout linear algebra and ML.*

 - At its core, a **matrix** is a rectangular grid of numbers arranged in rows and columns. If a vector is a single list of numbers, a matrix is a table of them.
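A NumPy tour of the listed properties on a small illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

print(A.shape)                   # (2, 2): rows x columns
print(A.T)                       # transpose
print(np.trace(A))               # 5.0: sum of diagonal entries
print(np.linalg.det(A))          # 5.0: 2*3 - 1*1
print(np.linalg.inv(A))          # inverse exists because det != 0
print(np.linalg.matrix_rank(A))  # 2: full rank, so the null space is trivial
```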
chapter 02: matrices/02. matrix types.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Matrix Types

-*Special matrix structures unlock computational shortcuts and mathematical guarantees. This file covers identity, diagonal, symmetric, triangular, orthogonal, positive definite, sparse, and stochastic matrices -- types that appear in covariance estimation, graph algorithms, regularisation, and Markov chains.*
+*Special matrix structures unlock computational shortcuts and mathematical guarantees. This file covers identity, diagonal, symmetric, triangular, orthogonal, positive definite, sparse, and stochastic matrices, types that appear in covariance estimation, graph algorithms, regularisation, and Markov chains.*

 - Not all matrices are the same. Different structures give matrices special properties that make them faster to compute with, easier to reason about, or both. Here are the types you will encounter most.
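Some of the listed types, constructed and checked in NumPy with illustrative entries:

```python
import numpy as np

I = np.eye(3)                 # identity
D = np.diag([1.0, 2.0, 3.0])  # diagonal

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
print(np.allclose(S, S.T))                # True: symmetric
print(np.all(np.linalg.eigvalsh(S) > 0))  # True: positive definite

# Orthogonal matrix (here a rotation): Q.T @ Q is the identity.
t = np.pi / 4
Q = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
print(np.allclose(Q.T @ Q, np.eye(2)))    # True
```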
chapter 02: matrices/03. operations.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Matrix Operations

-*Matrix operations are the computational engine of deep learning. This file covers matrix addition, scalar multiplication, matrix-vector products, matrix multiplication, element-wise operations, Kronecker products, and broadcasting -- the operations behind every forward pass and gradient update.*
+*Matrix operations are the computational engine of deep learning. This file covers matrix addition, scalar multiplication, matrix-vector products, matrix multiplication, element-wise operations, Kronecker products, and broadcasting, the operations behind every forward pass and gradient update.*

 - Matrices can be added and scaled just like vectors.
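The operations the summary names, sketched in NumPy on small illustrative matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])
v = np.array([1.0, 1.0])

print(A + B)          # matrix addition (element-wise)
print(2 * A)          # scalar multiplication
print(A @ v)          # matrix-vector product: [3., 7.]
print(A @ B)          # matrix multiplication
print(A * B)          # element-wise (Hadamard) product
print(np.kron(A, B))  # Kronecker product: 4x4
print(A + v)          # broadcasting: v is added to each row of A
```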
chapter 02: matrices/04. linear transformations.md (1 addition & 1 deletion)
@@ -1,6 +1,6 @@
 # Linear Transformations

-*Every matrix multiplication is a linear transformation -- a function that reshapes, rotates, or projects vectors while preserving linearity. This file covers rotation, reflection, scaling, shearing, projection, the kernel and image of a map, and how neural network layers chain these transformations.*
+*Every matrix multiplication is a linear transformation, a function that reshapes, rotates, or projects vectors while preserving linearity. This file covers rotation, reflection, scaling, shearing, projection, the kernel and image of a map, and how neural network layers chain these transformations.*

 - A **linear transformation** (or linear map) is a function that takes a vector and produces another vector, while preserving addition and scaling. If $T$ is linear, then:
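The linearity conditions are $T(\mathbf{u} + \mathbf{v}) = T(\mathbf{u}) + T(\mathbf{v})$ and $T(c\mathbf{u}) = c\,T(\mathbf{u})$. A sketch that checks both numerically for a rotation, an illustrative linear map:

```python
import numpy as np

# T(x) = R @ x, a 90-degree rotation.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

u = np.array([1.0, 2.0])
v = np.array([3.0, -1.0])
c = 2.5

print(np.allclose(R @ (u + v), R @ u + R @ v))  # True: preserves addition
print(np.allclose(R @ (c * u), c * (R @ u)))    # True: preserves scaling
```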