These include any ways of exploring how robust findings are to misfit between real life and the simplifying models that underlie essentially all our statistical analyses; equally, they explore the non-robustness of our analyses.
Simple sensitivity analyses can sometimes be no more (or less) than thought experiments in which we think through what the effects of the misfit might be. For some issues, notably misfit to Gaussian distributions, “analytic”, i.e. algebraic, mathematical understanding of modified models throws light on the sensitivity of findings to the misfit. Some of the “corrections” to the simpler parametric analyses arose from this sort of advanced maths.
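One familiar example of such an analytically derived “correction” (chosen here purely as an illustration, it is not named above) is the Welch-Satterthwaite adjustment to the degrees of freedom of the two-sample t-test when the equal-variances assumption misfits. A minimal sketch in Python:

```python
# Welch-Satterthwaite approximate degrees of freedom for the two-sample
# t-test with unequal variances. Purely illustrative; the argument names
# are our own choices.
def welch_df(s1_sq: float, n1: int, s2_sq: float, n2: int) -> float:
    """Approximate df given sample variances s1_sq, s2_sq and sizes n1, n2."""
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

# Equal variances and equal group sizes recover the classical n1 + n2 - 2:
print(welch_df(1.0, 30, 1.0, 30))   # 58, i.e. 30 + 30 - 2
# Markedly unequal variances shrink the df, making the test more cautious:
print(welch_df(4.0, 10, 1.0, 40))
```

With equal variances and group sizes the formula recovers the classical n1 + n2 − 2; as the variances diverge it shrinks the degrees of freedom, so the corrected test becomes more conservative than the naive one.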
However, most sensitivity analyses now involve modelling: simulating models that might be more plausible and closer fits to real life. Because we created the model/simulation, we know what the analyses should say, so we can see how far the analyses differ from that. The computing power of even cheap modern personal computers makes such sensitivity analyses feasible for individual researchers where previously mainframe power would have been needed. The challenge now tends not to be computer power and time but whether there are statistical systems that can do the simulations, and whether the necessary skills and experience to use that software are present in the research team. This is where systems like R and Stata, and even more general languages like Python, move statistical explorations way beyond the conventions of SPSS.
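A minimal sketch of such a simulation, here in Python using only its standard library. The design choices (sample size, number of replications, the exponential distribution standing in for a skewed “real life”) are illustrative assumptions, not prescriptions. Because we generate the data ourselves, we know the true mean, so the rejection rate tells us directly how far the nominal 5% false-positive rate of a one-sample t-test drifts when the Gaussian assumption misfits:

```python
# Simulation-based sensitivity analysis (illustrative sketch):
# how does the nominal 5% type I error rate of a two-sided one-sample
# t-test fare when the data are skewed (exponential) rather than Gaussian?
import random
import statistics
from math import sqrt

def rejection_rate(draw, true_mean, n=30, n_sims=10_000, t_crit=2.045):
    """Proportion of simulated samples in which the t-test rejects the
    TRUE mean; t_crit is the 97.5th percentile of t with df = 29."""
    rejections = 0
    for _ in range(n_sims):
        sample = [draw() for _ in range(n)]
        t = (statistics.mean(sample) - true_mean) / (statistics.stdev(sample) / sqrt(n))
        if abs(t) > t_crit:
            rejections += 1
    return rejections / n_sims

random.seed(42)  # reproducibility
# Gaussian data: the model fits, so the rate should sit near 0.05.
gauss_rate = rejection_rate(lambda: random.gauss(0.0, 1.0), true_mean=0.0)
# Exponential data (mean 1): skewed, so the rate is typically inflated.
exp_rate = rejection_rate(lambda: random.expovariate(1.0), true_mean=1.0)
print(f"Gaussian data:    {gauss_rate:.3f}")
print(f"Exponential data: {exp_rate:.3f}")
```

The same template extends naturally: swap in other generating distributions, sample sizes or analyses to map out where the findings are, and are not, robust.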
Try also
Not really touched on in the book.
Online resources
First created 28.viii.23.