
Commit c60c217

Merge branch 'master' of github.com:jkitchin/pycse

committed Jan 14, 2025
2 parents 6bcc6a0 + b033cf2

1 file changed: +2 −2 lines changed

pycse-jb/pycse___python_computations_in_science_and_engineering/book/23-gp.ipynb (+2 −2)
@@ -119,7 +119,7 @@
 "\n",
 "Alternatively, consider how you might estimate it in your head. You would look at the data, and use points that are close to the data point you want to estimate to form the estimate. Points that are far from the data point would have less influence on your estimate. You are implicitly weighting the value of the known points in doing this. In GPR we make this idea quantitative.\n",
 "\n",
-"The key concept to quantify this is called *covariance*, which is how are two variables correlated with each other. Intuitively, if two x-values are close together, then we anticipate that the corresponding $f(x$ values are also close together. We can say that the values \"co-vary\", i.e. they are not independent. We use this fact when we integrate an ODE and estimate the next point, or in root solving when we iteratively find the next steps. We will use this idea to compute the weights that we need. The covariance is a matrix and each element of the matrix defines the covariance between two data points. To compute this, *we need to make some assumptions* about the data. A common assumption is that the covariance is Gaussian with the form:\n",
+"The key concept to quantify this is called *covariance*, which is how are two variables correlated with each other. Intuitively, if two x-values are close together, then we anticipate that the corresponding $f(x)$ values are also close together. We can say that the values \"co-vary\", i.e. they are not independent. We use this fact when we integrate an ODE and estimate the next point, or in root solving when we iteratively find the next steps. We will use this idea to compute the weights that we need. The covariance is a matrix and each element of the matrix defines the covariance between two data points. To compute this, *we need to make some assumptions* about the data. A common assumption is that the covariance is Gaussian with the form:\n",
 "\n",
 "$K_{ij} = \\sigma_f \\exp\\left(-\\frac{(x_i - x_j)^2}{2 \\lambda^2}\\right)$\n",
 "\n",
@@ -892,7 +892,7 @@
 "\n",
 "$p$ is the periodicity and $l$ is the lengthscale. A key feature of GPR is you can add two kernel functions together and get a new kernel. Here we combine the linear kernel with the periodic kernel to represent data that is periodic and which increases (or decreases) with time.\n",
 "\n",
-"As before we use the log likeliehood to find the hyperparameters that best fit this data.\n",
+"As before we use the log likelihood to find the hyperparameters that best fit this data.\n",
 "\n"
 ]
 },
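The corrected sentence refers to fitting kernel hyperparameters by maximizing the log likelihood. A rough sketch of that idea for a summed linear + periodic kernel follows; the kernel forms, made-up data, and parameter names (sb, sv, sp, p, l) are assumptions for illustration, not the notebook's own code:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Made-up data: a periodic signal riding on a linear trend.
X = np.linspace(0, 4, 40)
y = np.sin(2 * np.pi * X) + 0.5 * X + 0.05 * rng.normal(size=len(X))

def kernel(x1, x2, sb, sv, sp, p, l):
    # Linear kernel plus periodic kernel; the sum of two kernels is again a kernel.
    D = x1[:, None] - x2[None, :]
    linear = sb**2 + sv**2 * np.outer(x1, x2)
    periodic = sp**2 * np.exp(-2 * np.sin(np.pi * np.abs(D) / p)**2 / l**2)
    return linear + periodic

def nll(log_theta):
    # Negative log marginal likelihood of y under the GP prior with this kernel.
    # Hyperparameters are optimized in log space so they stay positive.
    K = kernel(X, X, *np.exp(log_theta)) + 1e-6 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(X) * np.log(2 * np.pi)

sol = minimize(nll, np.zeros(5), method='Nelder-Mead')
print(np.exp(sol.x))  # fitted hyperparameters: sb, sv, sp, p, l
```

Minimizing the negative log marginal likelihood is equivalent to maximizing the log likelihood the sentence refers to; the periodicity p and lengthscale l come out of the fit along with the amplitude parameters.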

0 commit comments
