Yes, it’s about having “good enough” precision. Numeric, or numerical, approximation, i.e. the way much software reaches good enough answers to quantitative statistical questions, has a separate entry here.
Of course everyone knows what approximation is, so why bother with an entry here? A bit more below!
Details #
Well, it’s not going to be a big entry, but I do think our quantitative therapy/intervention literature has become more and more caught up in the reassuring sense of having definitive, precise answers to questions. I feel I am seeing more and more papers on the evaluation of interventions lauding really very small differences between interventions. The justification, not implausible, is that on the industrial scale of modern psychological interventions even small differences, if they generalise across the millions being offered interventions, represent cost savings and better delivery for the majority.
(This is, of course, a phenomenon of the Global North: take the UK’s NHS IAPT as an example, and yes, I know it’s no longer “Improving Access to Psychological Therapies” but “NHS Talking Therapies, for anxiety and depression”. Much of the world is still struggling to get any genuine availability of therapies beyond private therapy for the wealthy.)
Across the 40 years (argh!) I’ve been in this business we have gradually seen reporting of very precise mean differences with asterisks, or of very precise p values, replaced by reporting of 95% confidence intervals for those mean differences. That is a huge and real step away from a delusional love affair with unnecessary precision. But we are still leaving things to researchers reporting lots of precise-looking numbers rather than thinking about our local data, for all that our relatively small local dataset sizes generally make for wide confidence intervals. The ability to think through rough estimates of the implications of local data is a huge skill and needs to be reinstated as a vital complement to those reports with their high precision and huge dataset sizes.
If, like me, you aren’t good at mental arithmetic, pull an old handheld calculator from a drawer (or get one second-hand: they cost peanuts now). Even with a handheld calculator you won’t get confidence intervals, but you can go to my shiny apps where you will see a section “Apps giving confidence intervals from observed counts or statistics and dataset sizes”. I think you’ll find the key ones there for counts and means as well as more esoteric ones. These will give you useful estimates after you’ve typed in probably just four or five numbers from your data. (And if you find you want a CI that is not there, contact me and I will try to code it and add it to the list.)
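As an illustration of the kind of calculation involved (this is a minimal sketch, not the code behind those apps), here are two common intervals you can get from just a few summary numbers: a normal-approximation 95% CI for a mean from mean, SD and n, and a Wilson score 95% CI for a proportion from a count and n. All the input numbers below are invented.

```python
import math
from statistics import NormalDist

def mean_ci(mean, sd, n, level=0.95):
    """Normal-approximation CI for a mean from summary statistics alone.
    For small n a t-distribution quantile would widen this a little."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # about 1.96 for 95%
    half = z * sd / math.sqrt(n)
    return mean - half, mean + half

def prop_ci(k, n, level=0.95):
    """Wilson score CI for a proportion from an observed count k out of n."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Invented numbers: mean change 5.0, SD 8.0, n = 40; and 12 "improved" of 40.
print(mean_ci(5.0, 8.0, 40))   # roughly (2.5, 7.5)
print(prop_ci(12, 40))         # roughly (0.18, 0.45)
```

Even without running anything, the shape of the arithmetic is the point: the width of the interval for a mean shrinks only with the square root of n, which is why small local datasets give wide intervals.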
Let’s reinstate the value of thinking through estimates, and of its more sophisticated relative, simulation, and try to move away from awe at seemingly precise findings that are often reported with only tokenistic exploration of how robustly they are likely to generalise.
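To make “simulation” concrete, here is a hypothetical sketch (not from this entry, and with invented parameter values): it draws many small datasets of the size a typical local service might have and shows how widely the observed mean wanders from sample to sample, which is exactly what a wide confidence interval is warning about.

```python
import random
import statistics

# Invented scenario: "true" change scores with mean 5 and SD 8,
# observed in local datasets of only n = 40 clients.
random.seed(1)

def simulated_mean(true_mean=5.0, true_sd=8.0, n=40):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    return statistics.mean(sample)

# Repeat the "study" 10,000 times and look at the spread of observed means.
means = sorted(simulated_mean() for _ in range(10_000))
low, high = means[250], means[9750]  # empirical 2.5th and 97.5th percentiles
print(round(low, 1), round(high, 1))  # spread matching the analytic 95% CI
```

The spread of those simulated means mirrors the analytic confidence interval, but the simulation route generalises easily to messier questions where no tidy formula exists.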
Try also #
- Confidence intervals
- Digital versus analogue
- Estimation
- Numerical approximation
- Numeric precision
- Simulation
Chapters #
Not covered in the OMbook.
Online resources #
None currently, and none likely I think.
Dates #
First created 23.iii.26.