CS486 Lecture Notes - Lecture 12: Supervised Learning, Overfitting, Unsupervised Learning


Document Summary

Learning (11.12.18): we want agents to learn from experience to improve their performance over time. In the supervised learning problem, from a collection of input-output pairs we learn a function that predicts the output for new inputs. Overfitting means we do better on the training data but worse on test data.

History of deep learning (11.21.18): a perceptron takes binary inputs (either data or another perceptron's output) and models a neuron in our brain. The brain learns by changing the strength of the synapses; the perceptron's output depends only on its inputs (1 if the weighted sum is above a threshold, 0 otherwise). Simple algorithm to learn a perceptron: start with random weights; for a training example, compute the output of the perceptron. If the output does not match the correct output: if the correct output was 0 but the actual output was 1, decrease the weights that have an input of 1.
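The sketch below is one possible reading of that update rule, not the lecture's exact code. It assumes the symmetric case the summary trails off on (if the correct output was 1 but the actual was 0, increase the weights on active inputs), and the threshold, step size, and epoch count are illustrative choices.

```python
import random

def perceptron_output(weights, threshold, inputs):
    """1 if the weighted sum of binary inputs is above the threshold, else 0."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

def train_perceptron(examples, n_inputs, threshold=0.5, step=0.1, epochs=20):
    """Perceptron learning rule as described in the notes.

    examples: list of (inputs, correct_output) pairs with binary inputs.
    Start with random weights; when the prediction is wrong, adjust only
    the weights whose input was 1, toward the correct output.
    """
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    for _ in range(epochs):
        for inputs, correct in examples:
            actual = perceptron_output(weights, threshold, inputs)
            if actual == correct:
                continue
            # Correct output 0 but perceptron fired: decrease weights on inputs of 1.
            # Correct output 1 but perceptron stayed off: increase them (assumed symmetric case).
            delta = step if correct == 1 else -step
            weights = [w + delta * x for w, x in zip(weights, inputs)]
    return weights

# Illustrative usage: learning the AND function on two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(data, n_inputs=2)
print(w, [perceptron_output(w, 0.5, x) for x, _ in data])
```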

