POLI 3000 Lecture Notes - Lecture 2: Observational Error, Content Validity, Face Validity

I. What akes a easureet good?’
a. Clarity
b. Reliability
i. Strategies for reliability:
1. Test-retest reliability
2. Interrater reliability
3. Reliability of scales
a. Split half reliability
i. Take all of the components, split them into two halves
(reshuffling the order), and check that the two halves
give consistent results
b. Internal reliability
i. If we got rid of one question out of the multiple
questions used to measure one construct, would we get
the same results?
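The two scale-reliability checks above can be sketched in a small simulation. This is a minimal illustration with invented data (all respondents, questions, and noise levels are hypothetical), not any particular software's implementation:

```python
import random

random.seed(1)

# Hypothetical data: 8 respondents answering a 4-question scale that
# measures one construct. Each respondent has a "true" attitude plus
# per-question random noise (all numbers invented for illustration).
true_scores = [random.gauss(0, 1) for _ in range(8)]
items = [[t + random.gauss(0, 0.5) for t in true_scores] for _ in range(4)]

def pearson(x, y):
    """Plain Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

# Split-half reliability: break the questions into two halves and
# correlate each respondent's two half-scores.
half1 = [items[0][i] + items[1][i] for i in range(8)]
half2 = [items[2][i] + items[3][i] for i in range(8)]
print("split-half r:", round(pearson(half1, half2), 2))

# Internal ("drop-one") reliability: does removing any single question
# change each respondent's total score much?
total = [sum(item[i] for item in items) for i in range(8)]
for q in range(4):
    reduced = [total[i] - items[q][i] for i in range(8)]
    print(f"r(total, total without Q{q + 1}):", round(pearson(total, reduced), 2))
```

If the questions really tap one construct, both correlations come out high; a question that drags the drop-one correlation down is a candidate for removal.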
II. Measurement error
a. Random error (noise)
i. Example: radar guns. This is tolerable because, if the errors are
truly random, they cancel each other out over repeated
measurements
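The radar-gun point can be checked with a quick simulation. The true speed and noise level below are invented for illustration; the point is only that mean-zero random error averages away:

```python
import random

random.seed(0)

# Hypothetical radar-gun readings: the car's true speed is 60 mph, but
# each reading carries random, mean-zero noise (numbers invented).
true_speed = 60.0
readings = [true_speed + random.gauss(0, 2.0) for _ in range(10_000)]

avg = sum(readings) / len(readings)
print(f"average of {len(readings)} noisy readings: {avg:.2f} mph")
# Because the error is truly random, the readings average out
# close to the true speed.
```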
b. Bias
i. Systematic measurement error
ii. Eaple: askig aout people’s eight
iii. Another example: interviewer effects on surveys ("How do you
think about Trump's policies?")
iv. Unexpected measurement error: interviewer effects
1. Race and surveys during WW2
2. "Would blacks be treated better or worse if the Japanese
conquered the USA?"
3. Some interviewers were black, and some interviewers were Japanese
a. 25% said worse when answering a black interviewer
b. 45% said worse when answering a white interviewer
v. Do you thik it’s ore important to concentrate on beating the axis or to make
democracy work better at home?
1. 39% said beat axis when answering a black interviewer
2. 62% said beat axis when answering a white interviewer
vi. When you have multiple indicators, there is a potential for error in each of the
questions used to measure one construct
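In contrast to the radar-gun case, the bias examples above do not wash out with more data. A minimal sketch (invented true value and bias size) of why averaging cannot fix a systematically biased measure:

```python
import random

random.seed(2)

# Systematic error (bias) shifts every measurement in the same
# direction. Hypothetical example: self-reported weight that
# under-reports by 5 lbs on average (all numbers invented).
true_weight = 150.0
bias = -5.0
readings = [true_weight + bias + random.gauss(0, 2.0) for _ in range(10_000)]

avg = sum(readings) / len(readings)
print(f"average reading: {avg:.2f}  (true value: {true_weight})")
# The average converges to 145, not 150: taking more measurements
# cannot remove a systematic error, unlike random noise.
```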
c. Validity
i. Whether or not a measure is valid depends on what your question
is and what you are trying to measure
ii. 3 types of validity
1. Face validity: does the data look like what it should be? Fundamentally
subjective.
2. Content validity: asks two questions
a. Does it capture all of the relevant dimensions?
Document Summary

What makes a measurement "good"? Clarity; reliability; strategies for reliability: test-retest reliability, interrater reliability, reliability of scales (split-half reliability: take all of the components, break them in half, and reshuffle the order). Measurement error: random error (noise); example: radar guns. Also, this says nothing about participation, elections, etc. Construct validity: does it behave as expected? A little more objective: is it correlated with the things it should be correlated with, and not correlated with things it shouldn't be correlated with? Alternative type: predictive validity. You need your measurement to be valid and reliable. Limitations of identifying validity: validity is not all or nothing; there is typically no clear criterion of validity; validity is purpose-specific; validity requires subjective judgment. Types of numbers we can use: you need to establish your units. Which city had a worse murder problem in 2011? New York had 525 murders; Newark had 94 murders. This shows that context would be helpful.
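The murder comparison at the end illustrates why raw counts need units. The murder counts are from the notes; the 2011 populations below are rough approximations added here only for illustration:

```python
# Raw counts vs. per-capita rates. Murder counts come from the notes;
# the 2011 population figures are approximate and added for illustration.
cities = {
    "New York": {"murders": 525, "population": 8_200_000},
    "Newark":   {"murders": 94,  "population": 277_000},
}

for name, c in cities.items():
    rate = c["murders"] / c["population"] * 100_000
    print(f"{name}: {c['murders']} murders, about {rate:.1f} per 100,000 residents")
# New York has far more murders in total, but Newark's per-capita rate
# is several times higher: establishing units reverses the comparison.
```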
