modified to use intercept as well and rewritten for clarity
nhauber99 committed Aug 27, 2023
1 parent f2eba2e commit 944eb6c
Showing 6 changed files with 48 additions and 32 deletions.
32 changes: 23 additions & 9 deletions docs/_posts/2023-08-26-ConvolutionPadding.md
@@ -1,5 +1,5 @@
---
title: "Fast Slope Based Extrapolation for Convolutions"
title: "Local Linear Regression Based Extrapolation for Convolutions"
date: 2023-08-26
---

@@ -14,7 +14,7 @@ MathJax.Hub.Config({
<script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.0/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>

## The Problem
When applying high-pass filters to images, such as height maps, a common issue is edge artifacts. These are caused by having to extrapolate pixels and finding a good solution to this is actually not that trivial.
When applying high-pass filters to images, such as height maps, a common issue is edge artifacts. These are caused by having to extrapolate pixels.

Even Photoshop suffers from the same issue as shown below:

@@ -25,34 +25,48 @@ High-Pass Filtered Image with Enhanced Contrast
<img src="/Blog/assets/convolution/ps_output.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />


Ideally, the high-pass filter should produce a uniform output. However, the edge pixels show a gradient. This is caused by having to extrapolate pixels outside the image when convolving it, which is often done with a replicate or reflection padding.
Ideally, the high-pass filter should produce a uniform output. However, the edge pixels show a gradient. This is caused by having to extrapolate pixels outside the image when convolving it, which is often achieved using replicate or reflection padding.
Here is a plot of the pixel intensities for the same example:

<img src="/Blog/assets/convolution/linear_no_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />


## The Solution
I've developed a method to solve this issue. A naive implementation in one dimension would look like this:
1. Calculate the slope $$s$$ of the neighborhood of each pixel weighted by the convolution kernel
2. Apply the kernel to the image and extrapolate missing samples by $$ v_i + d_{ij} * s $$. Here $$v_i$$ is the value of the current pixel and $$d_{ij}$$ is the distance to the pixel being extrapolated.
1. Calculate the slope $$ b $$ and intercept $$ a $$ of the neighborhood of each pixel weighted by the convolution kernel using linear regression
2. Apply the kernel to the image and extrapolate missing samples by $$ a + b*d_{ij} $$. Here $$d_{ij}$$ is the distance from the current pixel to the pixel being extrapolated.

This means that pixels outside the boundary are extrapolated using the slope of the local neighborhood of each pixel. The only remaining challenge is to find an algorithm that can do this in a single pass; otherwise the memory accesses would need to be doubled or tripled, which would significantly affect the performance of a GPU implementation. Luckily, this is possible.
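
To make the two steps concrete, here is a minimal two-pass sketch for a single output pixel. It is only an illustration: the names `data` (a 1-D signal), `kernel` (a normalized kernel of odd length) and `i` (the current pixel index) are placeholders for this example, not part of the final implementation.

```matlab
% Two-pass extrapolated convolution for one output pixel i of a 1-D signal.
half    = floor(numel(kernel) / 2);
offsets = -half:half;                          % sample offsets covered by the kernel
idx     = i + offsets;                         % absolute sample positions
valid   = idx >= 1 & idx <= numel(data);       % samples that lie inside the signal

% Pass 1: weighted linear regression over the valid neighborhood
w  = kernel(valid);  x = offsets(valid);  y = data(idx(valid));
Sw = sum(w);  Sx = sum(w.*x);  Sy = sum(w.*y);
Sxx = sum(w.*x.^2);  Sxy = sum(w.*x.*y);
b = (Sw*Sxy - Sx*Sy) / (Sw*Sxx - Sx^2);        % slope
a = (Sy - b*Sx) / Sw;                          % intercept

% Pass 2: convolve, replacing missing samples with the extrapolated line
samples         = zeros(size(offsets));
samples(valid)  = data(idx(valid));
samples(~valid) = a + b * offsets(~valid);
out = sum(kernel .* samples);                  % kernel assumed to sum to 1
```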

Firstly, finding the local slope requires calculating a weighted linear regression for the neighborhood of each pixel. I got the algorithm for the single-pass weighted linear regression from ChatGPT and sadly can't find where it supposedly got this from. However, I've compared it to the Matlab implementation of weighted linear regression and the results match up perfectly.
Finding the local slope requires calculating a weighted linear regression for the neighborhood of each pixel. I got the formulas for single-pass weighted linear regression from ChatGPT. Unfortunately, I could not find the original reference for these formulas. However, I've compared the results to the Matlab implementation of weighted linear regression and they match up perfectly - that's basically a valid proof of correctness for a software developer.

This is the formula I used for calculating the slope; it requires the implementation to maintain five running sums:

$$ s = \frac{\sum w_ix_iy_i-\frac{(\sum w_ix_i)(\sum w_iy_i)}{\sum w_i}}{\sum w_ix_i^2-\frac{(\sum w_ix_i)^2}{\sum w_i}} $$
1. Calculate weighted sums:

With the extrapolation applied to the same linear gradient, the blurred version is a perfect match of the original and the difference is zero:
$$ S_w = \sum{w_i} $$\\
$$ S_x = \sum{w_i \cdot x_i} $$\\
$$ S_y = \sum{w_i \cdot y_i} $$\\
$$ S_{xx} = \sum{w_i \cdot x_i^2} $$\\
$$ S_{xy} = \sum{w_i \cdot x_i \cdot y_i} $$

2. Compute the slope $$ b $$ and intercept $$ a $$ using:

$$ b = \frac{S_w \cdot S_{xy} - S_x \cdot S_y}{S_w \cdot S_{xx} - S_x^2} $$

$$ a = \frac{S_y - b \cdot S_x}{S_w} $$
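
As a quick sanity check of these formulas, the running-sum result can be compared against MATLAB's weighted regression. This is only an illustrative sketch; it assumes the Statistics and Machine Learning Toolbox (for `fitlm`) is available and uses made-up test data.

```matlab
% Compare the running-sum slope/intercept to fitlm with observation weights.
x = (-10:10)';                          % sample offsets
y = 0.3*x + 2 + 0.05*randn(size(x));    % noisy line with known slope/intercept
w = exp(-x.^2 / (2*4^2));               % Gaussian weights

Sw  = sum(w);        Sx  = sum(w.*x);     Sy = sum(w.*y);
Sxx = sum(w.*x.^2);  Sxy = sum(w.*x.*y);

b = (Sw*Sxy - Sx*Sy) / (Sw*Sxx - Sx^2);   % slope
a = (Sy - b*Sx) / Sw;                     % intercept

mdl = fitlm(x, y, 'Weights', w);          % reference weighted least squares
disp([a, b; mdl.Coefficients.Estimate']); % the two rows should agree
```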

## Results

With the extrapolation applied to the same linear gradient, the blurred version perfectly matches the original and the difference is zero:

<img src="/Blog/assets/convolution/linear_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

Here is a more complex example with random noise added to a sine wave:

<img src="/Blog/assets/convolution/sine_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

The same convolution without extrapolation would result in this:
The same example without extrapolation yields the following result:

<img src="/Blog/assets/convolution/sine_no_extrapolation.jpg" style="display: block; margin-left: auto; margin-right: auto; width: 100%" />

Binary file modified docs/assets/convolution/linear_extrapolation.jpg
Binary file modified docs/assets/convolution/linear_no_extrapolation.jpg
Binary file modified docs/assets/convolution/sine_extrapolation.jpg
Binary file modified docs/assets/convolution/sine_no_extrapolation.jpg
48 changes: 25 additions & 23 deletions docs/assets/convolution/slopeExtrapolation.m
@@ -1,15 +1,15 @@

% AUTHOR: Niklas Hauber
% example script showing slope extrapolation
% example script showing linear regression based extrapolation

useSlopeExtrapolation = true;
n = 500; % Size of the array
sigma = 10; % gaussian std dev
kernel_size = 20; % must be odd
sigma = 15; % gaussian std dev
kernel_size = 50; % must be odd

% init data
%data = (1:n)/n-0.5;
data = sin((1:n)/80);
data = sin((1:n)/90);
data = data + 0.01*randn(1,n);
%data(10:20) = nan;
%data(50:60) = nan;
@@ -29,12 +29,13 @@

valueSum = 0;
distSum = 0;

weight_sum = 0;
wsum_xy = 0;
wsum_x = 0;
wsum_y = 0;
wsum_x2 = 0;

s_winv = 0;
s_w = 0;
s_xy = 0;
s_x = 0;
s_y = 0;
s_x2 = 0;

for x = -half_size:half_size
idx = i + x;
@@ -43,27 +44,27 @@
if valid_sample
y = data(idx);
valueSum = valueSum + y * k;
weight_sum=weight_sum+k;
wsum_xy = wsum_xy+k*x*y;
wsum_x = wsum_x+k*x;
wsum_y = wsum_y+k*y;
wsum_x2 = wsum_x2+k*x*x;

s_w=s_w+k;
s_xy = s_xy+k*x*y;
s_x = s_x+k*x;
s_y = s_y+k*y;
s_x2 = s_x2+k*x*x;
elseif useSlopeExtrapolation
valueSum = valueSum+self*k;
s_winv = s_winv+k;
distSum = distSum+k*x;
end
end

if useSlopeExtrapolation
slope = (wsum_xy- wsum_x*wsum_y/weight_sum) / (wsum_x2 - wsum_x*wsum_x/weight_sum);
blurred(i) = valueSum + distSum*slope;
slope = (s_w*s_xy - s_x*s_y) / (s_w*s_x2 - s_x*s_x);
intercept = (s_y-slope*s_x)/s_w;
blurred(i) = valueSum + s_winv*intercept + distSum*slope;
else
blurred(i) = valueSum / weight_sum;
blurred(i) = valueSum / s_w;
end
end

% Plot the original and blurred arrays
%figure
subplot(2,1,1);
hold off; % to replace last figure
@@ -72,5 +73,6 @@
plot(blurred, 'r', 'DisplayName', 'Blurred');
legend;
subplot(2,1,2);
plot(blurred-data, 'r', 'DisplayName', 'Difference');
plot(data-blurred, 'r', 'DisplayName', 'Difference');
%ylim([-0.1 0.11])
legend;
