Comm 88 Lecture 7
April 24, 2018
How Good Is Your Measurement? Reliability and Validity (cont.)
•Assessing reliability
•What’s wrong with the “funny” item? It doesn’t go with the rest of the items, so it will lower the reliability of the score
•You want all items to be indications of the SAME variable (e.g., the candidate’s credibility)
•If so, you get high internal consistency (high Cronbach’s alpha; see the sketch below)
•A good “unidimensional” variable/concept
•What if credibility involves more than just trustworthiness? Then you would need a “multi-dimensional” scale
•Evaluate reliability separately for each subscale
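A minimal sketch of how Cronbach’s alpha could be computed, using hypothetical credibility ratings; the formula is alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), and a common rule of thumb treats alpha >= .70 as acceptable:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-7 ratings of a candidate's credibility (6 respondents, 4 items).
ratings = np.array([
    [6, 5, 6, 6],
    [7, 7, 6, 7],
    [3, 2, 3, 3],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [6, 6, 7, 6],
])
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # high value -> high internal consistency
```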
•For measures using coders (e.g., behavioral observations)
•Inter-coder reliability - compare multiple coders coding the same material (see the kappa sketch below)
•Intra-coder reliability - compare the same coder’s codings of the same material at different times
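A minimal sketch of checking inter-coder reliability, with hypothetical codes from two coders; Cohen’s kappa (computed here with scikit-learn) corrects raw percent agreement for the agreement expected by chance:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical category codes two coders assigned to the same 10 campaign messages.
coder_a = ["attack", "policy", "attack", "policy", "attack",
           "policy", "attack", "attack", "policy", "attack"]
coder_b = ["attack", "policy", "attack", "attack", "attack",
           "policy", "attack", "policy", "policy", "attack"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)  # percent agreement corrected for chance
print(f"percent agreement = {agreement:.2f}, kappa = {kappa:.2f}")
```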
•Validity of measurement
•Does your measure really capture the concept you intended to measure?
•Good fit of measure with concept
•Assessing validity
•Subjective types of validity
•Face validity - the measure looks/sounds good “on the face of it”
•Content validity - the measure captures the full range of meanings/dimensions of the concept (like a table of contents in a book, the “bucket” has to hold all the contents)
•Empirical types of validity - checked against data
•Predictive validity - the measure is shown to predict scores on an appropriate future measure (see the correlation sketch after this list)
•Ex: SAT scores (your “potential” to achieve) → college GPA (your achievement)
•Convergent validity - the measure is shown to get the same result as another measure of the same concept
•Construct validity - the measure is shown to be related to measures of other concepts that should be related (and not to ones that shouldn’t)
•Ex: aggression scale ←→ hostility scale
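The empirical types of validity above are all typically checked with correlations. A minimal sketch with hypothetical data, reusing the SAT → GPA and aggression ←→ hostility examples:

```python
import numpy as np

# Hypothetical scores for 8 people.
sat = np.array([1050, 1200, 1400, 980, 1310, 1150, 1500, 1100])  # measured now
gpa = np.array([2.8, 3.1, 3.6, 2.5, 3.4, 3.0, 3.8, 2.9])        # criterion measured later

aggression = np.array([4, 7, 2, 5, 6, 3, 7, 1])  # new aggression scale
hostility  = np.array([5, 7, 1, 4, 6, 3, 6, 2])  # established hostility scale

# Predictive validity: the measure should correlate with the future criterion.
print(f"r(SAT, later GPA)        = {np.corrcoef(sat, gpa)[0, 1]:.2f}")

# Convergent/construct validity: the measure should correlate with
# other measures of the same (or a related) concept.
print(f"r(aggression, hostility) = {np.corrcoef(aggression, hostility)[0, 1]:.2f}")
```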
•Relationship between validity and reliability
•Can a measure be reliable but not valid? Yes - a scale that always reads five pounds heavy is perfectly consistent but still wrong
•Can a measure be valid but not reliable? No - an inconsistent measure cannot dependably capture the concept, so reliability is necessary (but not sufficient) for validity
•Triangulation of measurement - use several different measures of one variable, then compare them (see the sketch below)
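A minimal sketch of triangulation with hypothetical data: put several measures of the same variable side by side and inspect their correlation matrix.

```python
import pandas as pd

# Hypothetical: three different measures of one variable (aggression)
# collected for the same 6 participants.
measures = pd.DataFrame({
    "self_report":   [4, 7, 2, 5, 6, 3],   # questionnaire scale
    "observed_acts": [3, 8, 1, 4, 6, 2],   # coder's behavior count
    "peer_rating":   [5, 7, 2, 4, 6, 3],   # ratings by acquaintances
})
print(measures.corr())  # strong correlations -> the measures converge on the variable
```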