Internal consistency: measures the average correlation among all items on a tool
Intrarater reliability: an indicator of a tool's stability over time when it is administered by the same rater
Interrater reliability: indicates the consistency of a tool when it is administered by different raters
Construct validity: investigates whether the tool correlates with a theorized construct
Criterion validity: can be divided into two categories: concurrent and predictive. Concurrent criterion validity measures the correlation of the tool with other tools that measure the same concepts, preferably a "gold standard" when one exists. Predictive criterion validity examines whether the tool can predict future outcomes.
Content validity: assesses whether the tool covers all of the relevant topics related to the concept being measured and contains no irrelevant items
Face validity: an assessment of whether the tool appears to measure the intended concept
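The first three properties above are typically quantified with standard statistics: internal consistency with Cronbach's alpha, and agreement between raters with Cohen's kappa. As a minimal sketch (not taken from the source, which does not name specific coefficients), both can be computed as follows; the function names and the toy data are illustrative assumptions:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (n_respondents, k_items) score matrix.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(r1, r2) -> float:
    """Chance-corrected agreement between two raters' categorical ratings."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_observed = (r1 == r2).mean()              # raw proportion of agreement
    categories = np.union1d(r1, r2)
    # expected agreement if raters assigned categories independently
    p_expected = sum((r1 == c).mean() * (r2 == c).mean() for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)
```

For example, three perfectly correlated items yield an alpha of 1.0, and two raters who agree on 3 of 4 binary ratings yield a kappa of 0.5, lower than the raw 75% agreement because chance agreement is subtracted out.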