6.01 Lecture Notes - Lecture 1: Markov Model


Document Summary

The game: there are four Lego bricks in a bag, each either white or red, and you get to pull one brick out of the bag. You win a prize if the brick is red and nothing otherwise. The state S is the number of red bricks in the bag. Initial belief: {0: 0.2, 1: 0.2, 2: 0.2, 3: 0.2, 4: 0.2}, and E($ | S = s) is proportional to s/4 = [0, 0.25, 0.50, 0.75, 1.00], so the expected winnings are half the prize, which is the most you should pay to play.

Assume that a red Lego is pulled from the bag and then returned. The posterior belief is proportional to Pr(red | S = s) = s/4 = {0: 0, 1: 0.25, 2: 0.5, 3: 0.75, 4: 1}, which normalizes to {0: 0, 1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}; the chance that the next draw is red rises to 0.75, so you should now pay up to three quarters of the prize.

Now a white Lego is drawn and returned, and the previous posterior belief becomes the new prior belief. The likelihood is Pr(white | S = s) = (4 - s)/4 = {0: 1, 1: 0.75, 2: 0.5, 3: 0.25, 4: 0}; multiplying by the prior and normalizing gives the posterior {0: 0, 1: 0.3, 2: 0.4, 3: 0.3, 4: 0}, and the expected winnings drop back to half the prize. The theme: using observations to improve estimates of state probabilities.
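A minimal sketch of this belief update in Python, assuming a $1 prize for a red draw (the prize amount is not stated above); the function names likelihood, update, and expected_winnings are illustrative, not from the 6.01 code base:

    # State s = number of red bricks among the 4 in the bag.
    STATES = [0, 1, 2, 3, 4]

    def likelihood(color, s):
        # Pr(color | S = s): s of the 4 bricks are red, 4 - s are white.
        return s / 4 if color == "red" else (4 - s) / 4

    def update(belief, color):
        # Bayes' rule: posterior(s) is proportional to Pr(color | s) * prior(s).
        unnormalized = {s: likelihood(color, s) * belief[s] for s in STATES}
        total = sum(unnormalized.values())
        return {s: p / total for s, p in unnormalized.items()}

    def expected_winnings(belief, prize=1.0):
        # E[$] = sum over s of belief(s) * Pr(red | S = s) * prize.
        return sum(belief[s] * (s / 4) * prize for s in STATES)

    belief = {s: 0.2 for s in STATES}      # uniform prior over 0..4 red bricks
    print(expected_winnings(belief))       # ~0.50, so pay at most $0.50

    belief = update(belief, "red")         # red drawn, then returned
    print(belief)                          # ~{0: 0, 1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}
    print(expected_winnings(belief))       # ~0.75, so pay at most $0.75

    belief = update(belief, "white")       # white drawn, then returned
    print(belief)                          # ~{0: 0, 1: 0.3, 2: 0.4, 3: 0.3, 4: 0}
    print(expected_winnings(belief))       # ~0.50 again

After one red and one white observation the expected winnings return to half the prize, but the belief is tighter than the uniform prior: the states S = 0 and S = 4 have been ruled out.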
