- Neurological inspiration for mathematical modeling of artificial neural networks
- Perceptrons (inputs, weights, threshold)
- Geometric interpretation of weights vector and threshold scalar as decision boundary
- Read Jakob Janecek’s notes on simple perceptrons and follow the learning example (slides 16-25), but stop at the perceptron convergence theorem. Pay attention to the diagram on slide 12 which explains how to geometrically interpret the weights and bias.
- Read Mark Humphry’s excellent notes on single-layer neural networks
- Watch Udacity’s video on perceptron training
- Write a `Perceptron` class with the following instance attributes and methods:
  - `weights`: list of floats
  - `threshold`: float
  - `__init__()`: initializer method to create instance attributes
  - `activate(inputs)`: classify a single instance (`inputs`: list of features; return: int)
  - `predict(X)`: classify many instances (return: list of n ints, given n instances in `X`)
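One possible sketch of such a class, assuming a step activation that fires when the weighted sum of the inputs meets or exceeds the threshold (the exact activation convention is an assumption):

```python
class Perceptron:
    """Minimal threshold perceptron (sketch; assumes a step activation)."""

    def __init__(self, weights, threshold):
        self.weights = weights      # list of floats, one per feature
        self.threshold = threshold  # float: the firing threshold

    def activate(self, inputs):
        # Weighted sum of the features, compared against the threshold.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total >= self.threshold else 0

    def predict(self, X):
        # Classify each of the n instances in X; returns a list of n ints.
        return [self.activate(inputs) for inputs in X]
```

For example, `Perceptron([1.0, 1.0], 1.5)` classifies the binary AND function correctly: only the input `[1, 1]` reaches the threshold.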
- Create several small 2-dimensional datasets like the examples from class and initialize a `Perceptron` with manually-chosen `weights` and `threshold` that can classify all inputs.
- Plot the hyperplane decision boundary represented by your Perceptron's `weights` and `threshold` along with the data it was trained on. If you're stuck, then follow this example. (Slide 12 of the simple perceptrons notes will help you interpret and plot the weights.)