Interview measures

What it says: an interview that produces a measure score or, reifying things a bit, a measure(ment) based on an interview. Very like “rating scales/measures”, but the latter need not involve an interview: a rating scale might be based purely on observation with no interview, and a rating scale of behaviour could be one element of an interview.

Structured and semi-structured interviews, and pure observational data collection, are of course also used in purely qualitative research, but there the data are not quantified: there is no “measure”.

Some interview measures are restricted to use only by particular professions, others aren’t.

Details #

Interview measures are sometimes divided into “structured” and “semi-structured”, which are pretty much as the names say: they differ in how tightly the interviewer’s behaviour is defined and restricted by the interview design. It is also possible to have “open interviews” with little defined structure, after which the interviewer or an observer makes ratings (blurring the distinction between interview and rating measures).

All interview measures are distinguished from self-report measures (mostly, but not exclusively, questionnaire measures). The obvious difference between interviewer/observer rating and self-rating measures is the issue of “inter-rater” reliability: different interviewers may, even with a highly structured interview, conduct it differently, and their non-verbal language is hard to control. This may lead interviewees to respond differently to the same interview from different interviewers. This can, and should, be explored by having the same interviewees interviewed by different interviewers, but then we have to recognise that differences in scores across interviewers are, to some extent, confounded with simple test-retest unreliability (i.e. random fluctuations in scores) and with true change in the variable of interest (for instance, rated self-confidence might change between a first and a second interview, particularly if life events have impinged on the interviewees in the interim).

Another source of inter-rater unreliability is that, even given the same interview, different observers/raters might rate things differently. Here, if the interview can be recorded, the issues of test-retest unreliability and true change in the variable can be removed by having raters score the same recording. However, for some ratings a recording may not give quite the experience of being live in the room, particularly of being the interviewer.
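Agreement between raters is usually quantified with a chance-corrected index such as Cohen’s kappa (see “Try also” below). As a minimal sketch, assuming two raters have each assigned one of a set of categories to the same interviews (the ratings here are invented purely for illustration), kappa can be computed like this in Python:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters.

    kappa = (p_o - p_e) / (1 - p_e)
    where p_o is the observed proportion of agreement and p_e the
    agreement expected by chance from each rater's marginal rates.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of interviews rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    if p_e == 1:  # both raters used a single category: kappa is undefined
        return float("nan")
    return (p_o - p_e) / (1 - p_e)

# Invented example: two raters scoring the same ten recorded
# interviews on a three-point severity rating.
a = ["mild", "mild", "moderate", "severe", "mild",
     "moderate", "moderate", "severe", "mild", "moderate"]
b = ["mild", "moderate", "moderate", "severe", "mild",
     "mild", "moderate", "severe", "mild", "severe"]
print(round(cohens_kappa(a, b), 2))  # 0.55 here
```

Kappa is 1 for perfect agreement, 0 for agreement no better than chance, and can be negative for agreement worse than chance.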

A final challenge, even when funds have been found for good inter-rater reliability studies, is that of generalising from the sample of interviewers/raters involved in those studies to any potential future interviewers/raters. Measures and ratings vary greatly in how much training is recommended, or required, for their use.

There are no perfect measures of any sort, so it is wise to recognise these issues and to be wary of biased presentations of interview measures that minimise them. Even with seemingly good inter-rater studies, it is always necessary to consider whether the interviewees, the interviewers, or both in those studies might differ from those you would use. Interview measures with no inter-rater agreement/reliability studies at all should be treated very warily.

Try also #

Cohen’s kappa
Inter-rater agreement/reliability

Chapters #

Chapter 3.

Online resources #

None currently.

Dates #

First created 8.iv.24.
