This approach to evaluating findings starts from the idea that it is worth investigating the relationship between two (or more) variables to see whether they are predictably related. It does this by testing whether the data from your sample could easily have been seen if there were no systematic relationship in the population from which you drew a sample of the size(s) you had.
To take a very simple example, you might want to know about gender differences in the scores of clients entering a service. Your inferential question is: “How likely is it that I would see a difference in mean score between the gender groups as large as I did, or larger, if in the population of people coming for therapy there is no mean difference in scores by gender?”
The method is based on a null model or null hypothesis, here:
“There is no difference in mean scores in the population”, juxtaposed against an alternative (“alternate” in American usage) hypothesis: “there is some non-zero difference in the population”. Given some other assumptions about the population, and given that the data are a random sample from that population, you have a mathematical model, really a simulation of the research process, that allows a statistician to say how unlikely it is that you would have seen differences by gender (I’m being careful not to presume binary gender) as big as, or bigger than, the ones you did see on that model. By convention (usually) a probability lower than .05 (“p < .05”), i.e. lower than one in twenty, is dubbed “statistically significant” and one above that “non-significant” (“NS”, “p > .05”).
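That “simulation of the research process” can be made quite literal. Here is a minimal sketch in Python (not from the original; the group sizes, means and standard deviations are invented, and both groups are deliberately drawn from the same population so the null hypothesis is true by construction). It estimates the p value by repeatedly shuffling the group labels and asking how often the shuffled data show a mean difference as large as the observed one:

```python
# Permutation-test sketch of the null hypothesis logic.
# All numbers here are invented for illustration.
import numpy as np

rng = np.random.default_rng(12345)

# Simulated intake scores for two groups drawn from the SAME
# population, so any observed difference is pure sampling noise.
group_a = rng.normal(loc=20, scale=5, size=60)
group_b = rng.normal(loc=20, scale=5, size=55)

observed = abs(group_a.mean() - group_b.mean())

# Shuffle the pooled scores many times; each shuffle simulates one
# dataset in which group membership is unrelated to score.
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
n_perm = 10_000
count = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = abs(pooled[:n_a].mean() - pooled[n_a:].mean())
    if diff >= observed:
        count += 1

# Proportion of null simulations at least as extreme as the data:
# this is the (two-sided) p value.
p_value = count / n_perm
print(f"observed difference {observed:.2f}, p = {p_value:.3f}")
```

With the null true by construction, p values from reruns with different seeds will scatter across the whole 0 to 1 range; only about one run in twenty will dip below .05, which is exactly what “p < .05” controls.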
Despite successive waves of challenge, this has been the dominant statistical approach to psychological and MH data since at least the 1940s. However, although most of the concerns have been around at least that long, the dominance of the approach is waning, though sadly not yet replaced by thoughtful data analysis. (OK, that’s not the name of a new statistical paradigm, but perhaps it should be!) Sadly, some of the current critique is pretty unthoughtful: used appropriately, and understood for what it is, the simple inferential test still has a lot going for it for particular questions; gender differences are not such a bad example. However, good graphical descriptions of the observed data (histograms, boxplots, violin plots, ECDFs) and confidence intervals around observed means will complement and enrich the simple inferential test, even for a question as simple as this.
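To show what a confidence interval adds, here is a small sketch (again not from the original; the data are invented, and this time the two groups are given genuinely different population means) computing a 95% interval around the observed difference in means using the usual normal approximation:

```python
# 95% confidence interval for a difference in means: a sketch with
# invented data where a real population difference exists.
import numpy as np

rng = np.random.default_rng(7)
group_a = rng.normal(loc=22, scale=5, size=60)
group_b = rng.normal(loc=20, scale=5, size=55)

diff = group_a.mean() - group_b.mean()

# Standard error of the difference between two independent means.
se = np.sqrt(group_a.var(ddof=1) / len(group_a)
             + group_b.var(ddof=1) / len(group_b))

# 1.96 is the normal-approximation multiplier for 95% coverage.
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"difference {diff:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Unlike a bare p value, the interval shows both the size of the difference and the precision with which it has been estimated.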
As you can see, there is much more to come that will spin off from and complement this.
Try also #
Analysis of variance (ANOVA)
Alternative (“alternate”) hypothesis
Independence of observations
Multiple tests problem
Statistical assumptions / model
Type I error
Type II error
Violinplot (violin plot)
Online resources #
None of mine yet.
First created 18.viii.23.