This is most used when examining the psychometric properties of multi-item scales. It refers to the correlation between scores on one item of a scale and the total scores for that scale. The “corrected” bit means not taking the simple correlation between the item score and the scale score, but instead correlating the item score with the scale score minus that item’s own contribution. This removes a bias: if an item contributes no reliable variance to the scale, i.e. has no valid contribution, the corrected item-total correlation (CITC) will be zero (given a large sample). Had the correlation not been “corrected”, it would have been greater than zero simply because the item’s score contributes to the scale score.
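A minimal simulation sketches the point. This is not the Rblog simulation mentioned below, just an illustrative Python (numpy) sketch with made-up numbers: nine items share a common latent trait and one item is pure noise; the uncorrected item/total correlation for the noise item is clearly above zero while the corrected one is close to zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 5000, 10  # respondents, items (hypothetical values)

# Nine items load on a shared latent trait; item 0 is pure noise
trait = rng.normal(size=n)
items = 0.7 * trait[:, None] + rng.normal(size=(n, k))
items[:, 0] = rng.normal(size=n)  # noise item: unrelated to the others

total = items.sum(axis=1)

def item_total_r(scores, total, idx, corrected):
    # Corrected: correlate the item with the total MINUS that item
    other = total - scores[:, idx] if corrected else total
    return np.corrcoef(scores[:, idx], other)[0, 1]

r_uncorrected = item_total_r(items, total, 0, corrected=False)
r_corrected = item_total_r(items, total, 0, corrected=True)
print(round(r_uncorrected, 3), round(r_corrected, 3))
```

The uncorrected correlation is inflated purely because the item’s score is part of the total it is being correlated with; subtracting the item out before correlating removes that artefact.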
See my Rblog post: subscale/total correlations, for more detail and a simulation. As it happens, that post was prompted by the Authenticity Scale (AS) and makes links to subscale/total correlations for other measures including the CORE-OM. However, the issues are general.
Try also #
Online applications #
It would be nice to have shiny applications that would let the user set the number of items and simulate the confounding, showing both the uncorrected and corrected item/total correlations, and that would let the user put in real data and get both correlations, ideally with confidence intervals. Not likely any time soon!