CS229 Lecture notes
Andrew Ng
Supervised learning
Let’s start by talking about a few examples of supervised learning problems.
Suppose we have a dataset giving the living areas and prices of 47 houses
from Portland, Oregon:
Living area (feet²)    Price ($1000s)
2104                   400
1600                   330
2400                   369
1416                   232
3000                   540
...                    ...
We can plot this data:
[Figure: scatter plot of housing prices; x-axis: square feet (500 to 5000), y-axis: price (in $1000s, 0 to 1000)]
Given data like this, how can we learn to predict the prices of other houses
in Portland, as a function of the size of their living areas?
CS229 Fall 2012
To establish notation for future use, we'll use x^(i) to denote the "input" variables (living area in this example), also called input features, and y^(i) to denote the "output" or target variable that we are trying to predict (price). A pair (x^(i), y^(i)) is called a training example, and the dataset that we'll be using to learn (a list of m training examples {(x^(i), y^(i)); i = 1, ..., m}) is called a training set. Note that the superscript "(i)" in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y the space of output values. In this example, X = Y = R.
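To make the notation concrete, here is a small Python sketch (mine, not part of the original notes) representing the housing data above as a training set of (x^(i), y^(i)) pairs:

```python
# Hypothetical sketch: the Portland housing data as a training set.
# Each pair is (x, y) = (living area in square feet, price in $1000s).
training_set = [
    (2104, 400),
    (1600, 330),
    (2400, 369),
    (1416, 232),
    (3000, 540),
]

m = len(training_set)        # m = number of training examples
x1, y1 = training_set[0]     # the first training example (x^(1), y^(1))
print(m, x1, y1)             # 5 2104 400
```

The superscript "(i)" then corresponds to nothing more than list indexing.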
To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a "good" predictor for the corresponding value of y. For historical reasons, this function h is called a hypothesis. Seen pictorially, the process is therefore like this:

    Training set  →  Learning algorithm  →  h
    x (living area of house)  →  h  →  predicted y (predicted price of house)
When the target variable that we're trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict whether a dwelling is a house or an apartment, say), we call it a classification problem.
Part I
Linear Regression
To make our housing example more interesting, let’s consider a slightly richer
dataset in which we also know the number of bedrooms in each house:
Living area (feet²)    #bedrooms    Price ($1000s)
2104                   3            400
1600                   3            330
2400                   3            369
1416                   2            232
3000                   4            540
...                    ...          ...
Here, the x's are two-dimensional vectors in R². For instance, x_1^(i) is the living area of the i-th house in the training set, and x_2^(i) is its number of bedrooms. (In general, when designing a learning problem, it will be up to you to decide what features to choose, so if you are out in Portland gathering housing data, you might also decide to include other features such as whether each house has a fireplace, the number of bathrooms, and so on. We'll say more about feature selection later, but for now let's take the features as given.)
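As an illustration (the variable names are my own), the two-feature dataset can be stored so that x_j^(i) is simply an index into the i-th feature vector:

```python
# Sketch: each x^(i) is a vector in R^2 (living area, #bedrooms);
# y^(i) is the corresponding price in $1000s.
X = [
    (2104, 3),
    (1600, 3),
    (2400, 3),
    (1416, 2),
    (3000, 4),
]
y = [400, 330, 369, 232, 540]

i = 3                    # fourth house (0-indexed here; the notes index from 1)
living_area = X[i][0]    # x_1^(i)
bedrooms = X[i][1]       # x_2^(i)
print(living_area, bedrooms, y[i])   # 1416 2 232
```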
To perform supervised learning, we must decide how we're going to represent functions/hypotheses h in a computer. As an initial choice, let's say we decide to approximate y as a linear function of x:

    h_θ(x) = θ_0 + θ_1 x_1 + θ_2 x_2

Here, the θ_i's are the parameters (also called weights) parameterizing the space of linear functions mapping from X to Y. When there is no risk of confusion, we will drop the θ subscript in h_θ(x), and write it more simply as h(x). To simplify our notation, we also introduce the convention of letting x_0 = 1 (this is the intercept term), so that
    h(x) = Σ_{i=0}^{n} θ_i x_i = θᵀx,

where on the right-hand side above we are viewing θ and x both as vectors, and here n is the number of input variables (not counting x_0).
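A minimal sketch of this hypothesis under the x_0 = 1 convention (the function name and the parameter values below are my own, chosen only for illustration):

```python
def h(theta, x):
    """Linear hypothesis h_theta(x) = sum_i theta_i * x_i = theta^T x.
    x is expected to already include the intercept entry x_0 = 1."""
    return sum(t_i * x_i for t_i, x_i in zip(theta, x))

# Made-up parameters theta = (theta_0, theta_1, theta_2) and one input vector:
theta = [50.0, 0.1, 20.0]
x = [1.0, 2104.0, 3.0]   # [x_0 = 1, living area, #bedrooms]
print(h(theta, x))        # ≈ 320.4 (predicted price in $1000s)
```

Prepending the constant 1 to every input vector is what lets the intercept θ_0 be handled by the same dot product as the other weights.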
