{"id":2581,"date":"2021-11-13T18:04:04","date_gmt":"2021-11-13T18:04:04","guid":{"rendered":"https:\/\/www.psyctc.org\/psyctc\/?post_type=docs&#038;p=2581"},"modified":"2026-04-15T15:59:00","modified_gmt":"2026-04-15T13:59:00","password":"","slug":"jackknife-method","status":"publish","type":"docs","link":"https:\/\/www.psyctc.org\/psyctc\/glossary2\/jackknife-method\/","title":{"rendered":"Jackknife (jack-knife) method"},"content":{"rendered":"\n<p>The earliest of the computer intensive statistical methods.  (I think it&#8217;s almost always written &#8220;jackknife&#8221; but I have put &#8220;jack-knife&#8221; in case, like me, people sometimes put the hyphen in.)<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Details<\/h4>\n\n\n\n<p class=\"has-small-font-size\">[This and the bootstrap method explanation start from the same scenario.]<\/p>\n\n\n\n<p>Let&#8217;s take a simple example: suppose we want to know the 95% confidence interval (CI) of the mean of a set of scores on a measure and the distribution of the 880 scores we have looks like this.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"885\" src=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife1-1024x885.png\" alt=\"\" class=\"wp-image-2579\" srcset=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife1-1024x885.png 1024w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife1-300x259.png 300w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife1-768x664.png 768w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife1-1536x1328.png 1536w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife1.png 1700w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>The mean of those scores is .93 and the standard deviation is .52 so the parametric standard error of the mean (which is 
SD\/sqrt(n)) is .52\/sqrt(880) = .52\/29.7 = 0.018, so the 95% confidence interval is our observed mean +\/- 1.96 * 0.018, i.e. from .90 to .97.<\/p>\n\n\n\n<p>However, the problem is that this calculation is &#8220;parametric&#8221;: based on the assumption that the distribution of the data is Gaussian and completely defined by two population parameters (hence &#8220;parametric&#8221;): the mean and the SD. (This is also sometimes expressed as being defined by the mean and the variance but as the SD is the square root of the variance, that&#8217;s saying the same thing: that we are working from the assumption that the distribution is Gaussian and that therefore it is completely defined by just two parameters.) Using a parametric method means that the estimate of the CI of the statistic, here the mean, will be poor if the distribution of the data isn&#8217;t Gaussian. Looking at that histogram shows that these data are incredibly unlikely to have come from a Gaussian population. They have a finite range not an infinite one, and they only take discrete values (neither of those issues is terribly problematic for most parametric methods) but crucially the distribution is clearly positively skewed: with a longer tail to the right than the left.<\/p>\n\n\n\n<p>The jackknife method estimates the standard error by looking at the distribution of means of the data <em>leaving out one observation at a time<\/em>. So we have 880 observations: the first &#8220;jackknife&#8221; estimate is the mean of observations 2 to 880, i.e. 
leaving out the first observation, the second estimate is the mean of observations 1 and 3 to 880, and so on until the last estimate is the mean of observations 1 to 879, leaving out the last observation.<\/p>\n\n\n\n<p>Here are those estimates.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"885\" src=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife2-1024x885.png\" alt=\"\" class=\"wp-image-2580\" srcset=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife2-1024x885.png 1024w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife2-300x259.png 300w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife2-768x664.png 768w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife2-1536x1328.png 1536w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife2.png 1700w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>So these are the 880 jackknife estimates of the sample mean.  We can see that they have a fairly tight range, actually from 0.9298995 to 0.9331786.  The green line is their mean and that&#8217;s always the same as the sample mean.  What is useful is that the distribution of these values gives us a &#8220;robust&#8221; estimate of the variance of the mean and hence a robust 95% CI for that mean based on the variance of these estimates.  (&#8220;Robust&#8221; means that the estimate is not based on an assumption of a Gaussian distribution that is clearly violated here: it is robust to deviation from the Gaussian.)  <\/p>\n\n\n\n<p>For these data this jackknife 95% CI is from .898 to .967.  
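The leave-one-out recipe above can be sketched in a few lines. This is a minimal illustration in Python (the post itself shows no code): the simulated positively skewed scores stand in for the real 880 scores, which are not reproduced here, so the exact numbers will differ from those in the post.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical stand-in for the 880 skewed scores discussed in the post
x = rng.gamma(shape=3.2, scale=0.29, size=880)
n = len(x)

# Leave-one-out means: mean of the sample with observation i removed,
# computed for all i at once as (total - x_i) / (n - 1)
loo_means = (x.sum() - x) / (n - 1)

# The mean of the leave-one-out means always equals the sample mean
theta_bar = loo_means.mean()

# Jackknife standard error: sqrt((n-1)/n * sum of squared deviations
# of the leave-one-out estimates from their mean)
se_jack = np.sqrt((n - 1) / n * ((loo_means - theta_bar) ** 2).sum())

# Jackknife 95% CI for the mean
ci = (x.mean() - 1.96 * se_jack, x.mean() + 1.96 * se_jack)
```

For the mean the jackknife SE reduces algebraically to the familiar SD\/sqrt(n), which is why the jackknife and parametric intervals in the post agree so closely; the same leave-one-out recipe, however, also works for statistics with no simple SE formula.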
Here the answer is the same as the parametric estimate (.897 to .966) to 2 decimal places, but we have the reassurance that it is &#8220;robust&#8221;.  Where the distribution of the data deviates more markedly from the Gaussian, the parametric and jackknife CIs will differ more markedly, and the parametric CI will be tighter than it should be and may also be biased, i.e. not centred correctly.  <\/p>\n\n\n\n<p>However, gaining this robustness was computer intensive: it involved computing 880 means of 879 values.  That&#8217;s not particularly challenging even for early computers and for small datasets it can be done with a mechanical calculator.  The method was invented in 1949 when computer power was still very limited and things were often done by hand.<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Effect of sample size<\/h5>\n\n\n\n<p>The jackknife should show the correct property of much lower variance in those estimates for large samples than for small ones.   To show that I took just 44 evenly spaced scores from the 880 so the raw sample data now look like this:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"885\" src=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife3-1024x885.png\" alt=\"\" class=\"wp-image-2583\" srcset=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife3-1024x885.png 1024w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife3-300x259.png 300w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife3-768x664.png 768w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife3-1536x1328.png 1536w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife3.png 1700w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Now the jackknife estimates of the mean look like this.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img 
loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"885\" src=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife4-1024x885.png\" alt=\"\" class=\"wp-image-2584\" srcset=\"https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife4-1024x885.png 1024w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife4-300x259.png 300w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife4-768x664.png 768w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife4-1536x1328.png 1536w, https:\/\/www.psyctc.org\/psyctc\/wp-content\/uploads\/2021\/11\/jackknife4.png 1700w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>These means range from 0.8945198 to 0.9471873: much wider than the range for the full dataset of 880 values, and the 95% CI is now from .771 to 1.084 (the parametric CI was .775 to 1.080, again very close to the jackknife interval).<\/p>\n\n\n\n<h5 class=\"wp-block-heading\">Summary<\/h5>\n\n\n\n<p>The jackknife method can be applied to many sample statistics, not just the mean.  
However, it has been almost completely replaced by the bootstrap method.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Try also<\/h4>\n\n\n\n<p><a data-type=\"docs\" data-id=\"2582\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/bootstrap-methods\/\">Bootstrap methods<\/a><br><a data-type=\"docs\" data-id=\"2578\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/computer-intensive-statistics-methods\/\">Computer intensive methods<\/a><br><a data-type=\"docs\" data-id=\"2371\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/distribution\/\">Distribution<\/a><br><a href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/estimate-estimation\/\" title=\"\">Estimation<\/a><br><a href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/parametric-tests\/\" title=\"\">Parametric statistics\/tests<\/a><br><a href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/precision\/\" title=\"\">Precision<\/a><br><a data-type=\"docs\" data-id=\"1896\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/dataset\/\">Sample<\/a><br><a data-type=\"docs\" data-id=\"1900\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/population\/\">Population<\/a><br><a data-type=\"docs\" data-id=\"2262\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/mean-arithmetic-mean-average\/\">Mean<\/a><br><a data-type=\"docs\" data-id=\"2435\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/variance\/\">Variance<\/a><br><a data-type=\"docs\" data-id=\"2442\" href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/standard-deviation-sd\/\">Standard Deviation (SD)<\/a><\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Chapters<\/h4>\n\n\n\n<p>Not mentioned in the book but methods used in the examples in chapter 8.<\/p>\n\n\n\n<h4 class=\"wp-block-heading\">Dates<\/h4>\n\n\n\n<p>Created 13.xi.21, tweaks 15.iv.26.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The earliest of the computer intensive statistical methods. 
(I think it&#8217;s almost always written &#8220;jackknife&#8221; but I have put &#8220;jack-knife&#8221; in case, like me, people sometimes put the hyphen in.) Details [This and the bootstrap method explanation start from the same scenario.] Let&#8217;s take the simple example: suppose we want to know the 95% confidence &hellip; <a href=\"https:\/\/www.psyctc.org\/psyctc\/glossary2\/jackknife-method\/\" class=\"more-link\">Continue reading <span class=\"screen-reader-text\">Jackknife (jack-knife) method<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","template":"","meta":{"footnotes":""},"doc_category":[18],"glossaries":[],"doc_tag":[],"knowledge_base":[],"class_list":["post-2581","docs","type-docs","status-publish","hentry","doc_category-om-book"],"year_month":"2026-04","word_count":734,"total_views":"1932","reactions":{"happy":"0","normal":"0","sad":"0"},"author_info":{"name":"chris","author_nicename":"chris","author_url":"https:\/\/www.psyctc.org\/psyctc\/author\/chris\/"},"doc_category_info":[{"term_name":"All OM book glossary 
entries","term_url":"https:\/\/www.psyctc.org\/psyctc\/glossary\/non-knowledgebase\/om-book\/"}],"doc_tag_info":[],"knowledge_base_info":[],"knowledge_base_slug":[],"_links":{"self":[{"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/docs\/2581","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/docs"}],"about":[{"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/types\/docs"}],"author":[{"embeddable":true,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/comments?post=2581"}],"version-history":[{"count":4,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/docs\/2581\/revisions"}],"predecessor-version":[{"id":5359,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/docs\/2581\/revisions\/5359"}],"wp:attachment":[{"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/media?parent=2581"}],"wp:term":[{"taxonomy":"doc_category","embeddable":true,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/doc_category?post=2581"},{"taxonomy":"glossaries","embeddable":true,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/glossaries?post=2581"},{"taxonomy":"doc_tag","embeddable":true,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/doc_tag?post=2581"},{"taxonomy":"knowledge_base","embeddable":true,"href":"https:\/\/www.psyctc.org\/psyctc\/wp-json\/wp\/v2\/knowledge_base?post=2581"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}