class: center, middle, inverse, title-slide # Meta-analysis: part 2: it’s all about effect sizes… . ### Thomas Pollet (@tvpollet), Northumbria University ### 2019-09-11 | disclaimer
--- ## Outline Course. * Principles of systematic reviews / meta-analysis * **Effect sizes** * Fixed vs. random effects meta-analyses * Publication bias * Moderators and metaregression * Advanced 'stuff'. --- ## Effect sizes Effect sizes: * make studies comparable * allow us to do a comprehensive synthesis * are our 'dependent variable' (what we try to explain... .) -- Any standardised metric can be an 'effect size' (correlation coefficient, odds ratio, etc.), if it: * is comparable across studies * represents the **magnitude** and **direction** of the relationship of interest * is independent of sample size (but we'll need some way to weight it) --- ## Effect sizes: families * d family of effect sizes (Cohen's _d_) : e.g., Gender and Religiosity. * _r_ family of effect sizes : e.g., Age and facial attractiveness rating. * Effect sizes for categorical data (odds ratios) : e.g., Gender and mortality from alcohol abuse. <img src="https://media.giphy.com/media/QWA9p190VpNLxO1ryW/giphy.gif" width="300px" style="display: block; margin: auto;" /> --- ## Different take? - Types of findings from research. Lipsey and Wilson (2001, p. 37) list different types of research findings: * One-variable “relationships” (e.g., proportions) * Two-variable relationships (e.g., correlations or mean differences) * Multivariate relationships (e.g., multiple regression or structural equation models) (not covered here) -- Next to these, we often have reported test statistics (e.g., _t_, _F_, `\(\chi^2\)`) --> from these we can get effect sizes! --- ## Effect size. Definition: 'scale-free index of effect magnitude' -- Which ones do you rely on most commonly? Go to [www.gosoapbox.com](www.gosoapbox.com) Enter code: 257883396 . **Thomas opens poll** ??? These need to be comparable across studies if we want to compare between studies. --- ## Variance of effect size. 
* With the exception of some cases (e.g., a meta-analysis of prevalence), we will need some measure of variance ( `\(SE^2\)` ) * This variance is a measure of statistical uncertainty. --> it serves as the basis for a weight ( `\(1/SE^2\)` ) * Effect sizes with large variances (i.e. small sample sizes) are weighted down. <img src="https://media.giphy.com/media/W7dBXzbnEpOBG/giphy.gif" width="300px" style="display: block; margin: auto;" /> --- ## Variance of effect size II * The variance of effect sizes typically varies between studies. Most of the time, sample sizes (N) vary between studies. -- * Why does this matter? -- * Heteroscedasticity! (--> violation of assumption) <div class="figure" style="text-align: center"> <img src="heteroscedasticity.svg" alt="heteroscedasticity (by Wikipedia Protonk)" width="325px" /> <p class="caption">heteroscedasticity (by Wikipedia Protonk)</p> </div> --- ## Effect size estimates and parameters. * Sample and (statistical) population as statistical concepts. * Effect size _estimates_ are based on studies. * Effect size _parameters_ refer to the _population_ or the _true_ effect size. * Purpose of meta-analysis: make inferences about the effect size parameter based on effect size estimates. --- ## Overview Most common type of meta-analysis based on bivariate relationships. * The d family of effect sizes: a continuous and a (dichotomous) factor variable: - Raw (unstandardized) mean difference - Cohen’s _d_ - Hedges’ _g_ -- * The r family of effect sizes: two continuous/ordinal variables, e.g.: - Product-moment correlation coefficient (_r_) - Spearman’s rank correlation coefficient ( `\(\rho\)` ) -- * The odds ratio (OR) family, including proportions and other measures for categorical data, e.g.: - Odds ratio (OR) - Relative risk (RR) -- Do you know all of these? (gosoapbox question) ??? 
Examples used largely follow @Borenstein2009 --- ## Raw mean difference I Definition: * Raw difference between two (independent) means (e.g., comparison of a continuous variable by treatment and control group) * All studies in a meta-analysis use the **same** scale (e.g., height in cm, intelligence, milliseconds, ...) -- Definition of **population** parameter: `\(\theta\)` = `\(\Delta\)` : `$$\Delta=\mu_T - \mu_C$$` with: `\(\mu_T\)` , `\(\mu_C\)` : Independent population means for treatment and control group --- ## Raw mean difference II Effect size estimation: `$$D=\overline{X}_T - \overline{X}_C$$` whereby: `\(\overline{X}_T\)` , `\(\overline{X}_C\)` : Independent sample means for treatment and control group. So for example, how much faster one's heart beats in an experimental group versus a control group. --- ## Raw mean difference III Sampling variance of the effect size (_D_) is also needed. Let's assume that the _population_ SDs for `\(\mu_T\)` and `\(\mu_C\)` are the same, then: `$$Var(D) = \frac{n_T+n_C}{n_Tn_C}{SD}^2_{pooled}$$` whereby `$${SD}_{pooled} = \sqrt{\frac{(n_T-1){SD}^2_{T}+(n_C-1){SD}^2_{C}}{n_T+n_C-2}}$$` [Pooled Standard deviation refresher](https://vimeo.com/68706988) using pint prices in Newcastle and London. --- ## Very basic example by hand. ```r ## Heart rates, note that I didn't capitalise subscripts y_t <- 120; sd_t <- 10.5; n_t <- 50 y_c <- 100; sd_c <- 10; n_c <- 50 y_t - y_c ## D ``` ``` ## [1] 20 ``` -- ```r ## If we assume that at population level sd_t = sd_c, then numerator <- (((n_t - 1)*sd_t^2) + ((n_c - 1)*sd_c^2)) sd_pooled <- sqrt(numerator / (n_t + n_c - 2)) ``` -- ```r ## Variance of D and se (var_d <- ((n_t + n_c)/(n_t * n_c)) * sd_pooled^2) ``` ``` ## [1] 4.205 ``` ```r sqrt(var_d) ``` ``` ## [1] 2.05061 ``` ??? In practice you wouldn't have to do this often as you have functions to do this for you. --- ## Pooled standard deviation: simpler way. Note that you can also calculate the pooled SD directly from SDs. 
(Cohen, 1988), provided the two groups are of equal size. This is obviously easier... . `$${SD}_{pooled}= \sqrt{\frac{SD_1^2+SD_2^2}{2}}$$` <img src="https://media.giphy.com/media/BYBV1IkkWExHO/giphy.gif" width="350px" style="display: block; margin: auto;" /> --- ## Different population standard deviations If we cannot assume the same population standard deviations: `$$Var(D) = \frac{{SD}^2_{T}}{n_T} + \frac{{SD}^2_{C}}{n_C}$$` --> think of it as the violation of equality of variances (Levene's test). --- ## Standardized Mean Differences (SMD), _d_ The standardized mean difference could be appropriate when: * Studies use different (continuous) outcome measures * Study designs compare the mean outcomes in treatment and control groups. * Analyses use ANOVA, _t_-tests, or sometimes `\(\chi^2\)` (if the underlying outcome can be viewed as continuous) --- ## Extract relevant information. Before we can proceed, you'd need to assess what's available and what's not. In studies you'll be looking for the following. * Sample size * ANOVA tables * _F_ or _t_ tests as reported in text * Tables of counts ( `\(\chi^2\)` ) --- ## Standardized mean difference: Cohen’s _d_ I Definition: * Difference between two (independent) means (e.g., comparison of a continuous variable by treatment and control group). * Studies use different dependent variables with different measurement scales and thus study outcomes **cannot** be compared directly. -- Definition of **population** parameter: `\(\theta\)` = `\(\delta\)` : `$$\delta= \frac{\mu_T - \mu_C}{\sigma}$$` with `\(\sigma_T = \sigma_C =\sigma\)` (thus assuming _population_ standard deviations are the same) with: `\(\mu_T\)` , `\(\mu_C\)` : Independent population means for treatment and control group ??? sigma = standard deviation. --- ## Standardized mean difference: Cohen’s _d_ II Effect size estimation: `$$d=\frac{\overline{X}_T - \overline{X}_C}{{SD}_{pooled}}$$` whereby: `\(\overline{X}_T\)` , `\(\overline{X}_C\)` : Independent sample means for treatment and control group. 
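As a quick sanity check (a minimal R sketch, reusing the heart-rate numbers from the earlier example), the 'simpler' pooled-SD formula agrees with the full formula whenever the two groups have equal n:

```r
## Minimal check: with equal group sizes, the shortcut
## sqrt((SD1^2 + SD2^2)/2) equals the full pooled-SD formula.
sd_t <- 10.5; sd_c <- 10; n_t <- 50; n_c <- 50

sd_pooled_full  <- sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2))
sd_pooled_short <- sqrt((sd_t^2 + sd_c^2) / 2)

all.equal(sd_pooled_full, sd_pooled_short)
```

With unequal n the shortcut and the full formula diverge, which is why the full formula is the safer default.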
--- ## Standardized mean difference: Cohen’s _d_ III Sampling variance of effect size: Given the two sample standard deviations `\(SD_T\)` and `\(SD_C\)`, the variance of _d_ can be approximately estimated as: `$$Var(d) = \frac{n_T+n_C}{n_Tn_C}+\frac{d^2}{2(n_T+n_C)}$$` --- ## Again, a very basic example by hand. ```r y_t <- 120; sd_t <- 5.5; n_t <- 50 y_c <- 100; sd_c <- 4.5; n_c <- 50 ## If we assume that the population level standard deviations are the same, then numerator <- (((n_t - 1)*sd_t^2) + ((n_c - 1)*sd_c^2)) sd_pooled <- sqrt(numerator / (n_t + n_c - 2)) # note again that you could also calculate these from SDs. ``` -- ```r ## Cohen's d: d <- (y_t - y_c)/sd_pooled ``` -- ```r ## Variance of Cohen's d and SE of Cohen's d (var_d <- ((n_t + n_c)/(n_t * n_c)) + ((d^2) / (2 * (n_t + n_c)))) ``` ``` ## [1] 0.1192079 ``` ```r sqrt(var_d) # SE ``` ``` ## [1] 0.345265 ``` --- ## _d_ from _t_-test or _F_ values. Via algebra we can derive: For an independent samples _t_-test `$$d=t\sqrt{\frac{n_1+n_2}{n_1*n_2}}$$` and for a two-group one-way ANOVA: `$$d=\sqrt{\frac{F*(n_1+n_2)}{n_1*n_2}}$$` <img src="https://media.giphy.com/media/mnFwhN0P2AVZS/giphy.gif" width="275px" style="display: block; margin: auto;" /> --- ## Degrees of approximation: _d_ values based on calculations. There are many ways to derive a _d_ value (see Borenstein et al., 2009): **Excellent**: - Direct calculation based on means and standard deviations - Algebraically equivalent formulas (_t_-test) - Exact _p_ value for a _t_-test - Compute from Pearson correlation coefficient -- **Good**: - Numerator: _Estimates_ of the mean difference (adjusted means, unstandardized regression weight, gain score means) - Denominator: _Estimates_ of the pooled standard deviation (gain score standard deviation, one-way ANOVA with 3 or more groups, ANCOVA) -- **Poor**: - Approximations based on dichotomous data. (Example based on OR later on) --- ## Help, more than one group! 
More complex formulae needed: - Lipsey & Wilson (2001) - [Rosnow et al. 2000](http://sci-hub.tw/https://journals.sagepub.com/doi/10.1111/1467-9280.00287) - [Lakens 2013](https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00863/full) <img src="https://media.giphy.com/media/l2Ject9fem5QZOyTC/giphy.gif" width="350px" style="display: block; margin: auto;" /> --- ## Standardized mean difference: Hedges' _g_ I Definition: In small samples, Cohen’s _d_ tends to **overestimate** `\(\mid\delta\mid\)`. It can be corrected by applying a simple correction factor _J_, which yields an unbiased estimate of `\(\delta\)`; this is called Hedges’ _g_. -- Definition of **population** parameter: `\(\theta\)` = `\(\delta\)` : `$$\delta= \frac{\mu_T - \mu_C}{\sigma}$$` with `\(\sigma_T = \sigma_C =\sigma\)` (thus assuming _population_ standard deviations are the same) with: `\(\mu_T\)` , `\(\mu_C\)` : Independent population means for treatment and control group --- ## Standardized mean difference: Hedges' _g_ II Effect size estimation: To convert from _d_ to _g_, a correction factor _J_ can be used. -- The exact formula for _J_ can be found in Hedges (1981). Here, we present an approximation used by Borenstein (2009): _g_ = _J_ x _d_ , and `\(J= 1-\frac{3}{4df-1}\)` and `\(df= n_T + n_C - 2\)` for independent groups -- Sampling variance of effect size: `\(Var(g) = J^2 \ast Var(d)\)` --- ## Again, a very basic example by hand. ```r ## Hedges' g is based on Cohen's d ## Calculate correction factor J J <- (1 - (3/(4 * (n_t + n_c - 2) - 1))) J ``` ``` ## [1] 0.9923274 ``` -- ```r ## So, Hedges' g is g <- d * J g ``` ``` ## [1] 3.949611 ``` -- ```r ## Variance and SE var_g <- var_d * (J *J) sqrt(var_g) # SE ``` ``` ## [1] 0.3426159 ``` --- ## Overview Most common type of meta-analysis based on bivariate relationships. 
* The d family of effect sizes: a continuous and a (dichotomous) factor variable: - Raw (unstandardized) mean difference - Cohen’s _d_ - Hedges’ _g_ * The r family of effect sizes: two continuous/ordinal variables, e.g.: - Product-moment correlation coefficient (_r_) - Spearman’s rank correlation coefficient ( `\(\rho\)` ) * The odds ratio (OR) family, including proportions and other measures for categorical data, e.g.: - Odds ratio (OR) - Relative risk (RR) --- ## Assumptions about the _r_ family of effect sizes. The correlation coefficient could be appropriate when: * studies have a continuous (ordinal) outcome measure, * study designs assess the relation between a quantitative predictor and the outcome (possibly controlling for covariates), or * the analysis uses (OLS) regression (or GLM) (not covered here, [check here](https://www.researchgate.net/profile/Donaldo_Canales/post/Inclusion_of_standardized_regression_beta_coefficients_in_meta-analysis/attachment/59d61e116cda7b8083a17312/AS%3A272471683469335%401441973719973/download/Peterson+%282005%29+-+On+the+Use+of+Beta+Coefficients+in+Meta-Analysis.pdf)). ??? Better to use raw correlations but if not available, one can use betas from regression,... --- ## Product moment correlation (Pearson _r_ ) I * The correlation coefficient measures the association between two metric variables _X_ and _Y_ and ranges between -1 and +1. * It is the standardized covariance (divided by the product of the standard deviations). * In most meta-analyses, though, we do not use _r_ but apply the so-called Fisher’s _z_ transformation (it stabilizes the variance and is approximately normally distributed). Have you heard of Fisher's _z_ ? 
**Thomas opens Gosoapbox question!** --- ## Product moment correlation (Pearson _r_ ) II Definition of **population** parameter `\(\theta = \rho\)` ; `$$\rho = \frac{Cov(X,Y)}{SD_X \ast SD_Y}$$` whereby X,Y are metric variables and Cov(X,Y) is the covariance of X and Y, `\(SD_X\)` and `\(SD_Y\)` are standard deviations of X and Y, respectively. --- ## Product moment correlation (Pearson _r_ ) III `$$r = \frac{{}\sum_{i=1}^{n} (x_i - \overline{x})(y_i - \overline{y})} {\sqrt{\sum_{i=1}^{n} (x_i - \overline{x})^2 \sum_{i=1}^{n} (y_i - \overline{y})^2}}$$` whereby: * `\(x_i\)` and `\(y_i\)` are sample values of _X_ and _Y_ for observation _i_ * `\(\overline{x}\)` and `\(\overline{y}\)`: Sample means of _X_ and _Y_ * n = number of observations. [Alternative formulae](https://en.wikipedia.org/wiki/Pearson_correlation_coefficient) <img src="https://media.giphy.com/media/9lMoyThpKynde/giphy.gif" width="250px" style="display: block; margin: auto;" /> --- ## Fisher's _r_ to z transformation `$$z_r = 0.5 \ast ln\left(\frac{1+r}{1-r}\right)$$` and `$$r= \frac{e^{2 \ast z_r}-1}{e^{2 \ast z_r}+1}$$` --- ## Visually, what does this do? <div class="figure" style="text-align: center"> <img src="Fisher_transformation.png" alt="Fisher's Transformation (wikipedia)" width="300px" /> <p class="caption">Fisher's Transformation (wikipedia)</p> </div> ??? this transformation is used in meta-analysis for stabilizing the variance --- ## Product moment correlation (Pearson _r_ ) IV Sampling variance for effect size. * For the raw Pearson correlation coefficient _r_: `$$Var(r)= \frac{(1-r^2)^2}{n-1}$$` * For Fisher's z transformed coefficient `\(z_r\)`: `$$Var(z_r)= \frac{1}{n-3}$$` --- ## An example by hand. 
```r ## r and n are given as: r <- 0.35555 n <- 150 ## Then, Fisher's z is calculated as follows: z_r <- 0.5 * log((1 + r) / (1-r)) # log is natural logarithm / log10 is base 10 z_r ``` ``` ## [1] 0.3717827 ``` ```r atanh(r) # fun fact, this is the inverse hyperbolic tangent..., https://en.wikipedia.org/wiki/Inverse_hyperbolic_functions#Inverse_hyperbolic_tangent ``` ``` ## [1] 0.3717827 ``` ```r ## Variance and SE var_z_r <- 1 /(n - 3) var_z_r ``` ``` ## [1] 0.006802721 ``` ```r sqrt(var_z_r) ## SE ``` ``` ## [1] 0.08247861 ``` --- ## A note on binary variables. - Binary with continuous: e.g., Gender and Risk Taking. So, we could just use Pearson _r_ (point-biserial correlation) - Binary with binary: e.g., Gender and childlessness. We can calculate the [ `\(\phi\)` correlation](https://escholarship.org/uc/item/7qp4604r). `$$\phi=\sqrt{\frac{\chi^2}{n}}$$` - **caveat**: Both are affected by uneven splits and this could cause problems, more in [Jacobs & Viechtbauer 2016](http://sci-hub.tw/https://onlinelibrary.wiley.com/doi/abs/10.1002/jrsm.1218). --- ## Overview Most common type of meta-analysis based on bivariate relationships. * The d family of effect sizes: a continuous and a (dichotomous) factor variable: - Raw (unstandardized) mean difference - Cohen’s _d_ - Hedges’ _g_ * The r family of effect sizes: two continuous/ordinal variables, e.g.: - Product-moment correlation coefficient (_r_) - Spearman’s rank correlation coefficient ( `\(\rho\)` ) * The odds ratio (OR) family, including proportions and other measures for categorical data, e.g.: - Odds ratio (OR) - Relative risk (RR) --- ## Effect sizes for categorical data. 
If `\(\pi_T\)` and `\(\pi_C\)` denote the population probabilities of being part of the two groups T and C, and `\(P_T\)` and `\(P_C\)` denote the sample probabilities, then the _population_ and _sample_ versions are: **Risk difference**: `\(\Delta = \pi_T - \pi_C\)` and `\(RD = P_T - P_C\)` **Risk ratio**: `\(\theta_{RR} = \pi_T /\pi_C\)` and `\(RR = P_T / P_C\)` **Odds ratio**: `\(\omega = \frac{\pi_T(1-\pi_C)}{\pi_C(1-\pi_T)}\)` and `\(OR = \frac{P_T(1-P_C)}{P_C(1-P_T)}\)` --- ## Odds ratio I We start with the odds ratio, as it is the most common,... . Definition: * Associations between two binary variables. * An odds ratio (OR) = 1 represents no effect, or no difference between treatment and control. * OR ranges between 0 and `\(+\infty\)`. * OR can be quite non-normal, that's why we typically take ln(OR), which can range from `\(-\infty\)` to `\(+\infty\)`, with 0 indicating no difference. --- ## Odds ratio II Definition of **population** parameter `\(\theta = \omega\)` `$$\omega = \frac{\pi_T/(1-\pi_T)}{\pi_C/(1-\pi_C)} = \frac{\pi_T(1-\pi_C)}{\pi_C(1-\pi_T)}$$` | Treatment | Control --- | --- | --- Event | `\(n_{11}\)` | `\(n_{12}\)` Non-event | `\(n_{21}\)` | `\(n_{22}\)` `$$OR= \frac{n_{11}n_{22}}{n_{12}n_{21}}$$` --> which we then log-transform (ln(OR)) --- ## Odds ratio III Sampling variance of effect size: The sampling variance of ln(OR) is calculated as: `$$Var(ln(OR)) = \frac{1}{n_{11}} + \frac{1}{n_{12}} + \frac{1}{n_{21}} + \frac{1}{n_{22}}$$` --- ## Odds ratio IV : example by hand... This example. | Dead | Alive --- | --- | --- Treated | 50 | 950 Control | 100 | 900 ```r matrix_1<-matrix(c(50,100,950,900), nr=2, nc=2) # product across diagonal and then divide, note [] to call elements. OR<-(matrix_1[1,1]*matrix_1[2,2])/(matrix_1[1,2]*matrix_1[2,1]) OR # the odds of being dead vs. alive are x times lower for treated vs. control. ``` ``` ## [1] 0.4736842 ``` ```r log(OR) ``` ``` ## [1] -0.7472144 ``` --- ## Odds ratio V : example by hand (continued)... 
```r # We take the inverse to get a more interpretable number: the odds of being alive vs. dead are x times greater for treated vs. control. 1/OR ``` ``` ## [1] 2.111111 ``` ```r log(1/OR) # log transform gives number to use! ``` ``` ## [1] 0.7472144 ``` ```r Variance_ln_or<-(1/matrix_1[1,1]) + (1/matrix_1[1,2]) + (1/matrix_1[2,1]) + (1/matrix_1[2,2]) Variance_ln_or ``` ``` ## [1] 0.03216374 ``` --- ## Odds ratio VI There are alternative ways to estimate the odds ratio and ln(OR): one is known as the (Cochrane) Mantel-Haenszel procedure (MH or CMH). It turns out that this has some good properties (especially if sample sizes are small). -- So, if you are able to calculate it yourself, it is something to consider (Rosenberg et al., 2013) -- You can find more information [here](https://handbook-5-1.cochrane.org/chapter_9/9_4_4_1_mantel_haenszel_methods.htm) and [here](http://www.metafor-project.org/doku.php/tips:comp_mh_different_software) --- ??? Different authors use different terminology. --- ## Relative risk I Definition: * The relative risk ranges from 0 to infinity. (Relative Risk = Risk Ratio) -- * A relative risk (RR) of 1 indicates that there is no difference in risk between the two groups. A relative risk (RR) larger than one indicates that the treatment group has a higher risk than the control. A relative risk (RR) less than one indicates that the control group has a higher risk than the treatment group. -- * As was true for the odds ratio, the (natural) logarithm of the RR (LRR) has better statistical properties. The range of the LRR is from `\(-\infty\)` to `\(+\infty\)`, and as for the Ln(OR), a value of 0 indicates no treatment effect. 
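Beyond weighting, a common use of `\(Var(ln(OR))\)` is a Wald-type 95% confidence interval on the log scale, back-transformed to the OR scale. A minimal R sketch, reusing the 2x2 table from the odds-ratio example above (the 1.96 normal quantile is the usual Wald assumption):

```r
## Sketch: Wald 95% CI for the OR from the 2x2 table above.
matrix_1 <- matrix(c(50, 100, 950, 900), nr = 2, nc = 2)
log_or <- log((matrix_1[1, 1] * matrix_1[2, 2]) / (matrix_1[1, 2] * matrix_1[2, 1]))
var_log_or <- sum(1 / matrix_1)  # 1/n11 + 1/n12 + 1/n21 + 1/n22

ci <- exp(log_or + c(-1.96, 1.96) * sqrt(var_log_or))
ci  # back-transformed to the OR scale; excludes 1 here
```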
--- ## Relative Risk II Definition of **population** parameter `\(\theta = \theta_{RR}\)` `$$\theta_{RR} = \pi_T /\pi_C$$` Effect size estimation: `$$RR = P_T / P_C$$` Relative Risk: | Treatment | Control | Total --- | --- | --- | --- Event | `\(P_{11}\)` | `\(P_{12}\)` | `\(P_{1x}\)` Non-event | `\(P_{21}\)` | `\(P_{22}\)` | `\(P_{2x}\)` Total | `\(P_{x1}\)` | `\(P_{x2}\)` | Whereby `\(P_{x1}\)` and `\(P_{x2}\)` are the marginal proportions for treatment (first column) and control (second column). --- ## Variance... . `$$Var(ln(RR)) = \frac{1}{n_{11}} - \frac{1}{n_{x1}} + \frac{1}{n_{12}} - \frac{1}{n_{x2}}$$` Whereby `\(n_{x1}\)` and `\(n_{x2}\)` are the N associated with the marginal proportions for treatment (first column) and control (second column) --- ## Relative risk IV: example by hand. ```r matrix_1_t<-t(matrix_1) # transpose our table margins<-margin.table(matrix_1_t,2) # this gets our margins. RR <- (matrix_1_t[1,1] / margins[1]) / (matrix_1_t[1,2] / margins[2]) RR # risk ratio of dying... ``` ``` ## [1] 0.5 ``` ```r # Note this is not the same as the RR of staying alive! # (1/RR =/= (950/1000)/(900/1000) = 1.055556) log_rr<-log(RR) var_log_rr <- 1/matrix_1_t[1,1] - 1/margins[1] + 1/matrix_1_t[1,2] - 1/margins[2] var_log_rr ``` ``` ## [1] 0.028 ``` --- ## Types of findings from research. Lipsey and Wilson (2001, p. 37) list different types of research findings: * **One-variable “relationships” (e.g., proportions)** * Two-variable relationships (e.g., correlations or mean differences) * Multivariate relationships (e.g., multiple regression or structural equation models) --- ## Odds and proportions I Typically proportions are converted to odds. Note: Alternatives possible: use the raw prevalence with the [Freeman-Tukey double arcsine transformation](https://jech.bmj.com/content/67/11/974) (Barendregt et al., 2013) but see Schwarzer et al. (2019). Odds are defined as the ratio of two probabilities, _p_ : probability of event and _1-p_ : probability of event not happening. 
Definition of **population** parameter: `$$\theta = \pi$$` `$$\pi= \frac{n_{event}}{n_{event}+n_{non-event}}$$` --- ## Odds and proportions II Calculation of outcome statistic: `$$odds=\frac{p}{1-p}$$` `$$logit=ln(odds)=ln(\frac{p}{1-p})$$` --- ## Odds and proportions III Variance of outcome statistic: The variance is only available for the logit: `$$Var(logit) = \frac{1}{n_{event}} + \frac{1}{n_{non-event}}$$` --- ## Odds and proportions IV ```r ## Titanic adult survival (N=2,092) n_survived <- 654 n_died <- 1438 p_died <- n_died / (n_survived + n_died) odds <- p_died / (1 - p_died) odds ``` ``` ## [1] 2.198777 ``` ```r n_died/n_survived # Shorter ``` ``` ## [1] 2.198777 ``` ```r log(odds) ``` ``` ## [1] 0.7879012 ``` ```r var_logit<- 1/n_survived + 1/n_died var_logit ``` ``` ## [1] 0.002224462 ``` --- ## Conversion between effect sizes. We could convert everything to a common effect size based **just** on _p_ values? _What do you think?_ **Thomas adds gosoapbox question** ??? Bad choice as: - The same effect size can have different _p_'s as they differ in N. - Different effect sizes can have the same _p_ value as they differ in N. --- ## Test statistics I Sometimes, studies just report test statistics, e.g. from a _t_-test, an ANOVA (_F_-statistic) or `\(\chi^2\)` test. -- Usually not ideal for a meta-analysis, as we need an effect size and some measure of its uncertainty to serve as a weight. -- A test statistic is a function of both effect size **and** sample size. -- In some cases, we can convert these statistics to an effect size (to one of the three families: _d_, _r_, OR) -- Two examples based on Lipsey & Wilson (2001; 172-ff). First: a Cohen's _d_ and then a Pearson _r_. 
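The two conversions worked through on the following slides can be wrapped into small helper functions (the function names below are mine, not from any package; `sign(t)` is my addition to keep the direction of the effect):

```r
## Hedged helpers for the conversions below (names are mine):
## independent-samples t to d, and t to r.
t_to_d <- function(t, n1, n2) t * sqrt((n1 + n2) / (n1 * n2))

t_to_r <- function(t, n1, n2) {
  df <- n1 + n2 - 2
  sign(t) * sqrt(t^2 / (t^2 + df))  # sign(t) preserves direction
}

t_to_d(2.234812, 10, 12)  # ~0.957
t_to_r(0.57, 25, 52)      # ~0.066
```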
--- ## Test statistics II * There are various calculators online (also converters between families): - [http://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-Home.php](http://www.campbellcollaboration.org/escalc/html/EffectSizeCalculator-Home.php) - [https://www.uccs.edu/lbecker/](https://www.uccs.edu/lbecker/) - [https://www.polyu.edu.hk/mm/effectsizefaqs/calculator/calculator.html](https://www.polyu.edu.hk/mm/effectsizefaqs/calculator/calculator.html) - [https://sites.google.com/site/lakens2/effect-sizes](https://sites.google.com/site/lakens2/effect-sizes) - [https://cebcp.org/practical-meta-analysis-effect-size-calculator/](https://cebcp.org/practical-meta-analysis-effect-size-calculator/) - [R script, requires .csv in a certain format](https://gillianpepper.com/2018/09/13/looking-to-convert-various-statistics-to-correlation-coefficients-heres-a-script-i-made-earlier/) by my colleague Gillian Pepper. -- * There is also an R package which does some common transforms. ('[esc](https://strengejacke.github.io/esc/)') -- * Here, we'll do some transforms by hand and via the escalc function from the [metafor](http://www.metafor-project.org/doku.php) package. --- ## Exact _p_ and a _t_-value conversion. Example from Lipsey & Wilson (2001):174-ff -- In a study you find reported: _“a t-test showed that the effect was statistically significant (p = .037), indicating a positive effect of treatment”._ -- No _t_ reported but the reported n for each group: `\(n_1\)`= 10 and `\(n_2\)`= 12. -- `$$df= n_1 + n_2-2$$` given _p_= .037 and df= 20. ```r # qt is quantile t distribution # two-tailed test # Try removing lower.tail=F: what happens? 
qt(0.037/2, df=20, lower.tail = F) ``` ``` ## [1] 2.234812 ``` --- ## Cohen's _d_ `$${\lvert}d{\rvert} = t*\sqrt{\left(\frac{n_1+n_2}{n_1*n_2}\right)}$$` Remember _t_= 2.234812 `$${\lvert}d{\rvert} = 2.234812*\sqrt{\left(\frac{10+12}{10*12}\right)} = {\lvert}0.9568893{\rvert} = 0.9568893$$` --- ## Convert to correlation coefficient (Lipsey & Wilson, 2001:193). Information reported in the study: _t_-value of 0.57; `\(n_1\)` = 25 and `\(n_2\)` = 52 . -- remember `\(df= n_1+n_2-2 = 75\)` Formula for conversion. `$$r= \sqrt{\frac{t^2}{t^2+df}}$$` `$$r= \sqrt{\frac{0.57^2}{0.57^2+75}}$$` Calculate it yourself! -- ```r sqrt(0.57*0.57/((0.57*0.57)+75)) ``` ``` ## [1] 0.06567583 ``` --- ## Converting between effect size families. The effects _d_, _r_ , and the odds ratio (OR) can all be converted from one metric to another. * Convenient to convert effects for comparison purposes (different disciplines have different preferences,...). * Sometimes only a few studies present results that require computation of a particular effect size. For example, if most studies present results as means and SDs (and thus allow _d_ to be calculated), but one reports the Pearson correlation between treatment and outcome measure, then we might want to convert that single _r_ to a _d_. * In R, we can use the [esc](https://strengejacke.github.io/esc/) package. --- ## Conversions between _d_ and ln(OR) (Borenstein et al. 2009) * Converting _d_ to ln(OR) `$$ln(OR)= \frac{\pi*d}{\sqrt{3}}$$` with `\(\pi\)` = 3.14159... **not** 'proportion' `$$Var(ln(OR))= Var(d)*\frac{\pi^2}{3}$$` * Converting ln(OR) to _d_ $$ d = ln(OR) * \frac{\sqrt{3}}{\pi} $$ $$ Var(d) = Var(ln(OR)) * \frac{3}{\pi^2}$$ --- ## Converting _r_ to _d_ `$$d = \frac{2r}{\sqrt{1-r^2}}$$` and `$$Var(d) = \frac{4Var(r)}{(1-r^2)^3}$$` **Assumption**: Bivariate normal distribution for continuous data and we can split into two groups by dichotomizing one variable. 
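The _d_ to ln(OR) conversions above can be sketched in a couple of lines of R (a minimal illustration with a hypothetical _d_ value; `pi` is R's built-in constant):

```r
## Sketch of the d <-> ln(OR) conversions (Borenstein et al., 2009);
## the d value here is hypothetical.
d_to_lnor <- function(d) pi * d / sqrt(3)
lnor_to_d <- function(lnor) lnor * sqrt(3) / pi

d <- 0.5
lnor <- d_to_lnor(d)
lnor_to_d(lnor)  # the round trip recovers d
```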
--- ## Converting _d_ to _r_ `$$r=\frac{d}{\sqrt{d^2+A}}$$` `$$A= \frac{(n_1+n_2)^2}{n_1n_2}$$` A is a correction factor for cases where 'group sizes' ( `\(n_1\)` and `\(n_2\)`) are not equal. If group sizes are equal we can assume `\(n_1=n_2\)` and then A=4. --- ## Synthesising regression models. If everything is in the same unit (e.g., dollars, IQ, milliseconds) and we only have a bivariate model, then we can synthesise the _b's_ from OLS regressions (Rosenberg et al., 2013) -- There are some caveats: * Everything must be on the same scale, so say that one study used Log(Testosterone) and one used raw Testosterone, then everything needs to be converted to a common scale. * Variance ( `\((se_b)^2\)`) estimates are needed. Sometimes you can derive them straight from a regression table or the reported 95%CI (remember `\(\pm 1.96*se_b\)`). But you can also get these from the reported _t_-value as `\(t=b/se_b\)`. There are also formulae to derive these from `\(R^2\)`, see Rosenberg et al. (2013:70). * If one rescales the _b's_ then the variances also need to be rescaled! --- ## Ongoing debates regarding synthesizing OLS regressions. Depending on who you ask: * regression results should be synthesized together with bivariate effects (e.g., Pearson _r_). * only regression results that share the same variables in the model should be synthesized. * regression results should be synthesized ignoring the difference in models. * regression results should never be synthesized. -- Difficulties relate to (Becker & Wu, 2007): * Influence of other predictors in models (e.g., suppression) * Correlation between predictors (affects standard errors) * Standardised vs. unstandardised metrics. * ... Further reading, see for example: [Aloe & Thompson, 2013](https://www.journals.uchicago.edu/doi/pdfplus/10.5243/jsswr.2013.24) --- ## Exercises. * You read in a paper that --- ## Any Questions? 
[http://tvpollet.github.io](http://tvpollet.github.io) Twitter: @tvpollet <img src="https://media.giphy.com/media/3ohzdRoOp1FUYbtGDu/giphy.gif" width="600px" style="display: block; margin: auto;" /> --- ## Acknowledgments * Numerous students and colleagues. Any mistakes are my own. * My colleagues who helped me with regard to meta-analysis: Nexhmedin Morina, Stijn Peperkoorn, Gert Stulp, Mirre Simons, Johannes Honekopp. * HBES for funding this. * Those who have funded me (not these studies per se): [NWO](www.nwo.nl), [Templeton](www.templeton.org), [NIAS](http://nias.knaw.nl). * You for listening! <img src="https://media.giphy.com/media/10avZ0rqdGFyfu/giphy.gif" width="300px" style="display: block; margin: auto;" /> --- ## References and further reading <p><cite>Aert, R. C. M. van, J. M. Wicherts, and M. A. L. M. van Assen (2016). “Conducting Meta-Analyses Based on p Values: Reservations and Recommendations for Applying p-Uniform and p-Curve”. In: <em>Perspectives on Psychological Science</em> 11.5, pp. 713-729. DOI: <a href="https://doi.org/10.1177/1745691616650874">10.1177/1745691616650874</a>. eprint: https://doi.org/10.1177/1745691616650874.</cite></p> <p><cite>Aloe, A. M. and C. G. Thompson (2013). “The Synthesis of Partial Effect Sizes”. In: <em>Journal of the Society for Social Work and Research</em> 4.4, pp. 390-405. DOI: <a href="https://doi.org/10.5243/jsswr.2013.24">10.5243/jsswr.2013.24</a>. eprint: https://doi.org/10.5243/jsswr.2013.24.</cite></p> <p><cite>Assink, M. and C. J. Wibbelink (2016). “Fitting Three-Level Meta-Analytic Models in R: A Step-by-Step Tutorial”. In: <em>The Quantitative Methods for Psychology</em> 12.3, pp. 154-174. ISSN: 2292-1354.</cite></p> <p><cite>Barendregt, J. J., S. A. Doi, Y. Y. Lee, et al. (2013). “Meta-Analysis of Prevalence”. In: <em>Journal of Epidemiology and Community Health</em> 67.11, pp. 974-978. ISSN: 0143-005X. 
DOI: <a href="https://doi.org/10.1136/jech-2013-203104">10.1136/jech-2013-203104</a>.</cite></p> <p><cite>Becker, B. J. and M. Wu (2007). “The Synthesis of Regression Slopes in Meta-Analysis”. In: <em>Statistical science</em> 22.3, pp. 414-429. ISSN: 0883-4237.</cite></p> --- ## More refs 1. <p><cite>Borenstein, M, L. V. Hedges, J. P. Higgins, et al. (2009). <em>Introduction to Meta-Analysis</em>. John Wiley & Sons. ISBN: 1-119-96437-7.</cite></p> <p><cite>Burnham, K. P. and D. R. Anderson (2002). <em>Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach</em>. New York, NY: Springer. ISBN: 0-387-95364-7.</cite></p> <p><cite>Burnham, K. P. and D. R. Anderson (2004). “Multimodel Inference: Understanding AIC and BIC in Model Selection”. In: <em>Sociological Methods & Research</em> 33.2, pp. 261-304. ISSN: 0049-1241. DOI: <a href="https://doi.org/10.1177/0049124104268644">10.1177/0049124104268644</a>.</cite></p> <p><cite>Carter, E. C, F. D. Schönbrodt, W. M. Gervais, et al. (2019). “Correcting for Bias in Psychology: A Comparison of Meta-Analytic Methods”. In: <em>Advances in Methods and Practices in Psychological Science</em> 2.2, pp. 115-144. DOI: <a href="https://doi.org/10.1177/2515245919847196">10.1177/2515245919847196</a>.</cite></p> <p><cite>Chen, D. D. and K. E. Peace (2013). <em>Applied Meta-Analysis with R</em>. Chapman and Hall/CRC. ISBN: 1-4665-0600-8.</cite></p> --- ## More refs 2. <p><cite>Cheung, M. W. (2015a). “metaSEM: An R Package for Meta-Analysis Using Structural Equation Modeling”. In: <em>Frontiers in Psychology</em> 5, p. 1521. ISSN: 1664-1078. DOI: <a href="https://doi.org/10.3389/fpsyg.2014.01521">10.3389/fpsyg.2014.01521</a>.</cite></p> <p><cite>Cheung, M. W. (2015b). <em>Meta-Analysis: A Structural Equation Modeling Approach</em>. New York, NY: John Wiley & Sons. ISBN: 1-119-99343-1.</cite></p> <p><cite>Cooper, H. (2010). <em>Research Synthesis and Meta-Analysis: A Step-by-Step Approach</em>. 4th. 
Sage Publications. ISBN: 1-4833-4704-4.</cite></p> <p><cite>Cooper, H, L. V. Hedges, and J. C. Valentine (2009). <em>The Handbook of Research Synthesis and Meta-Analysis</em>. New York: Russell Sage Foundation. ISBN: 1-61044-138-9.</cite></p> <p><cite>Cooper, H. and E. A. Patall (2009). “The Relative Benefits of Meta-Analysis Conducted with Individual Participant Data versus Aggregated Data.” In: <em>Psychological Methods</em> 14.2, pp. 165-176. DOI: <a href="https://doi.org/10.1037/a0015565">10.1037/a0015565</a>.</cite></p> --- ## More refs 3. <p><cite>Crawley, M. J. (2013). <em>The R Book: Second Edition</em>. New York, NY: John Wiley & Sons. ISBN: 1-118-44896-0.</cite></p> <p><cite>Cumming, G. (2014). “The New Statistics”. In: <em>Psychological Science</em> 25.1, pp. 7-29. ISSN: 0956-7976. DOI: <a href="https://doi.org/10.1177/0956797613504966">10.1177/0956797613504966</a>.</cite></p> <p><cite>Dickersin, K. (2005). “Publication Bias: Recognizing the Problem, Understanding Its Origins and Scope, and Preventing Harm”. In: <em>Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments</em>. Ed. by H. R. Rothstein, A. J. Sutton and M. Borenstein. Chichester, UK: John Wiley.</cite></p> <p><cite>Fisher, R. A. (1946). <em>Statistical Methods for Research Workers</em>. 10th ed. Edinburgh, UK: Oliver and Boyd.</cite></p> <p><cite>Flore, P. C. and J. M. Wicherts (2015). “Does Stereotype Threat Influence Performance of Girls in Stereotyped Domains? A Meta-Analysis”. In: <em>Journal of School Psychology</em> 53.1, pp. 25-44. ISSN: 0022-4405. DOI: <a href="https://doi.org/10.1016/j.jsp.2014.10.002">10.1016/j.jsp.2014.10.002</a>.</cite></p> --- ## More refs 4. <p><cite>Galbraith, R. F. (1994). “Some Applications of Radial Plots”. In: <em>Journal of the American Statistical Association</em> 89.428, pp. 1232-1242. ISSN: 0162-1459. 
DOI: <a href="https://doi.org/10.1080/01621459.1994.10476864">10.1080/01621459.1994.10476864</a>.</cite></p> <p><cite>Glass, G. V. (1976). “Primary, Secondary, and Meta-Analysis of Research”. In: <em>Educational Researcher</em> 5.10, pp. 3-8. ISSN: 0013-189X. DOI: <a href="https://doi.org/10.3102/0013189X005010003">10.3102/0013189X005010003</a>.</cite></p> <p><cite>Goh, J. X, J. A. Hall, and R. Rosenthal (2016). “Mini Meta-Analysis of Your Own Studies: Some Arguments on Why and a Primer on How”. In: <em>Social and Personality Psychology Compass</em> 10.10, pp. 535-549. ISSN: 1751-9004. DOI: <a href="https://doi.org/10.1111/spc3.12267">10.1111/spc3.12267</a>.</cite></p> <p><cite>Harrell, F. E. (2015). <em>Regression Modeling Strategies</em>. 2nd. Springer Series in Statistics. New York, NY: Springer New York. ISBN: 978-1-4419-2918-1. DOI: <a href="https://doi.org/10.1007/978-1-4757-3462-1">10.1007/978-1-4757-3462-1</a>.</cite></p> <p><cite>Harrer, M., P. Cuijpers, and D. D. Ebert (2019). <em>Doing Meta-Analysis in R: A Hands-on Guide</em>. https://bookdown.org/MathiasHarrer/Doing_Meta_Analysis_in_R/.</cite></p> --- ## More refs 5. <p><cite>Hartung, J. and G. Knapp (2001). “On Tests of the Overall Treatment Effect in Meta-Analysis with Normally Distributed Responses”. In: <em>Statistics in Medicine</em> 20.12, pp. 1771-1782. DOI: <a href="https://doi.org/10.1002/sim.791">10.1002/sim.791</a>.</cite></p> <p><cite>Hayes, A. F. and K. Krippendorff (2007). “Answering the Call for a Standard Reliability Measure for Coding Data”. In: <em>Communication Methods and Measures</em> 1.1, pp. 77-89. ISSN: 1931-2458. DOI: <a href="https://doi.org/10.1080/19312450709336664">10.1080/19312450709336664</a>.</cite></p> <p><cite>Hedges, L. V. (1981). “Distribution Theory for Glass's Estimator of Effect Size and Related Estimators”. In: <em>Journal of Educational Statistics</em> 6.2, pp. 107-128. 
DOI: <a href="https://doi.org/10.3102/10769986006002107">10.3102/10769986006002107</a>.</cite></p> <p><cite>Hedges, L. V. (1984). “Estimation of Effect Size under Nonrandom Sampling: The Effects of Censoring Studies Yielding Statistically Insignificant Mean Differences”. In: <em>Journal of Educational Statistics</em> 9.1, pp. 61-85. ISSN: 0362-9791. DOI: <a href="https://doi.org/10.3102/10769986009001061">10.3102/10769986009001061</a>.</cite></p> <p><cite>Hedges, L. V. and I. Olkin (1980). “Vote-Counting Methods in Research Synthesis.” In: <em>Psychological Bulletin</em> 88.2, pp. 359-369. ISSN: 1939-1455. DOI: <a href="https://doi.org/10.1037/0033-2909.88.2.359">10.1037/0033-2909.88.2.359</a>.</cite></p> --- ## More refs 6. <p><cite>Higgins, J. P. T. and S. G. Thompson (2002). “Quantifying Heterogeneity in a Meta-Analysis”. In: <em>Statistics in Medicine</em> 21.11, pp. 1539-1558. DOI: <a href="https://doi.org/10.1002/sim.1186">10.1002/sim.1186</a>.</cite></p> <p><cite>Higgins, J. P. T, S. G. Thompson, J. J. Deeks, et al. (2003). “Measuring Inconsistency in Meta-Analyses”. In: <em>BMJ</em> 327.7414, pp. 557-560. ISSN: 0959-8138. DOI: <a href="https://doi.org/10.1136/bmj.327.7414.557">10.1136/bmj.327.7414.557</a>.</cite></p> <p><cite>Higgins, J, S. Thompson, J. Deeks, et al. (2002). “Statistical Heterogeneity in Systematic Reviews of Clinical Trials: A Critical Appraisal of Guidelines and Practice”. In: <em>Journal of Health Services Research & Policy</em> 7.1, pp. 51-61. DOI: <a href="https://doi.org/10.1258/1355819021927674">10.1258/1355819021927674</a>.</cite></p> <p><cite>Hirschenhauser, K. and R. F. Oliveira (2006). “Social Modulation of Androgens in Male Vertebrates: Meta-Analyses of the Challenge Hypothesis”. In: <em>Animal Behaviour</em> 71.2, pp. 265-277. ISSN: 0003-3472. DOI: <a href="https://doi.org/10.1016/j.anbehav.2005.04.014">10.1016/j.anbehav.2005.04.014</a>.</cite></p> <p><cite>Ioannidis, J. P. (2008). 
“Why Most Discovered True Associations Are Inflated”. In: <em>Epidemiology</em> 19.5, pp. 640-648. ISSN: 1044-3983.</cite></p> --- ## More refs 7. <p><cite>Jackson, D, M. Law, G. Rücker, et al. (2017). “The Hartung-Knapp Modification for Random-Effects Meta-Analysis: A Useful Refinement but Are There Any Residual Concerns?” In: <em>Statistics in Medicine</em> 36.25, pp. 3923-3934. DOI: <a href="https://doi.org/10.1002/sim.7411">10.1002/sim.7411</a>.</cite></p> <p><cite>Jacobs, P. and W. Viechtbauer (2016). “Estimation of the Biserial Correlation and Its Sampling Variance for Use in Meta-Analysis”. In: <em>Research Synthesis Methods</em> 8.2, pp. 161-180. DOI: <a href="https://doi.org/10.1002/jrsm.1218">10.1002/jrsm.1218</a>.</cite></p> <p><cite>Koricheva, J, J. Gurevitch, and K. Mengersen (2013). <em>Handbook of Meta-Analysis in Ecology and Evolution</em>. Princeton, NJ: Princeton University Press. ISBN: 0-691-13729-3.</cite></p> <p><cite>Kovalchik, S. (2013). <em>Tutorial On Meta-Analysis In R - R useR! Conference 2013</em>.</cite></p> <p><cite>Lipsey, M. W. and D. B. Wilson (2001). <em>Practical Meta-Analysis</em>. London: SAGE Publications, Inc. ISBN: 0-7619-2167-2.</cite></p> --- ## More refs 8. <p><cite>Littell, J. H, J. Corcoran, and V. Pillai (2008). <em>Systematic Reviews and Meta-Analysis</em>. Oxford, UK: Oxford University Press. ISBN: 0-19-532654-7.</cite></p> <p><cite>McShane, B. B, U. Böckenholt, and K. T. Hansen (2016). “Adjusting for Publication Bias in Meta-Analysis: An Evaluation of Selection Methods and Some Cautionary Notes”. In: <em>Perspectives on Psychological Science</em> 11.5, pp. 730-749. DOI: <a href="https://doi.org/10.1177/1745691616662243">10.1177/1745691616662243</a>.</cite></p> <p><cite>Mengersen, K, C. Schmidt, M. Jennions, et al. (2013). “Statistical Models and Approaches to Inference”. 
In: <em>Handbook of Meta-Analysis in Ecology and Evolution</em>. Ed. by J. Koricheva, J. Gurevitch and K. Mengersen. Princeton, NJ: Princeton University Press, pp. 89-107.</cite></p> <p><cite>Methley, A. M, S. Campbell, C. Chew-Graham, et al. (2014). “PICO, PICOS and SPIDER: A Comparison Study of Specificity and Sensitivity in Three Search Tools for Qualitative Systematic Reviews”. In: <em>BMC Health Services Research</em> 14, p. 579. ISSN: 1472-6963. DOI: <a href="https://doi.org/10.1186/s12913-014-0579-0">10.1186/s12913-014-0579-0</a>.</cite></p> <p><cite>Morina, N, K. Stam, T. V. Pollet, et al. (2018). “Prevalence of Depression and Posttraumatic Stress Disorder in Adult Civilian Survivors of War Who Stay in War-Afflicted Regions. A Systematic Review and Meta-Analysis of Epidemiological Studies”. In: <em>Journal of Affective Disorders</em> 239, pp. 328-338. ISSN: 0165-0327. DOI: <a href="https://doi.org/10.1016/j.jad.2018.07.027">10.1016/j.jad.2018.07.027</a>.</cite></p> --- ## More refs 9. <p><cite>Nakagawa, S, D. W. A. Noble, A. M. Senior, et al. (2017). “Meta-Evaluation of Meta-Analysis: Ten Appraisal Questions for Biologists”. In: <em>BMC Biology</em> 15.1, p. 18. ISSN: 1741-7007. DOI: <a href="https://doi.org/10.1186/s12915-017-0357-7">10.1186/s12915-017-0357-7</a>.</cite></p> <p><cite>Pastor, D. A. and R. A. Lazowski (2018). “On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices”. In: <em>Multivariate Behavioral Research</em> 53.1, pp. 74-89. DOI: <a href="https://doi.org/10.1080/00273171.2017.1365684">10.1080/00273171.2017.1365684</a>.</cite></p> <p><cite>Poole, C. and S. Greenland (1999). “Random-Effects Meta-Analyses Are Not Always Conservative”. In: <em>American Journal of Epidemiology</em> 150.5, pp. 469-475. ISSN: 0002-9262. DOI: <a href="https://doi.org/10.1093/oxfordjournals.aje.a010035">10.1093/oxfordjournals.aje.a010035</a>. 
</cite></p> <p><cite>Popper, K. (1959). <em>The Logic of Scientific Discovery</em>. London, UK: Hutchinson. ISBN: 1-134-47002-9.</cite></p> <p><cite>Roberts, P. D, G. B. Stewart, and A. S. Pullin (2006). “Are Review Articles a Reliable Source of Evidence to Support Conservation and Environmental Management? A Comparison with Medicine”. In: <em>Biological Conservation</em> 132.4, pp. 409-423. ISSN: 0006-3207.</cite></p> --- ## More refs 10. <p><cite>Rosenberg, M. S, H. R. Rothstein, and J. Gurevitch (2013). “Effect Sizes: Conventional Choices and Calculations”. In: <em>Handbook of Meta-Analysis in Ecology and Evolution</em>, pp. 61-71.</cite></p> <p><cite>Röver, C, G. Knapp, and T. Friede (2015). “Hartung-Knapp-Sidik-Jonkman Approach and Its Modification for Random-Effects Meta-Analysis with Few Studies”. In: <em>BMC Medical Research Methodology</em> 15.1, p. 99. ISSN: 1471-2288. DOI: <a href="https://doi.org/10.1186/s12874-015-0091-1">10.1186/s12874-015-0091-1</a>.</cite></p> <p><cite>Schwarzer, G, J. R. Carpenter, and G. Rücker (2015). <em>Meta-Analysis with R</em>. New York, NY: Springer. ISBN: 3-319-21415-2.</cite></p> <p><cite>Schwarzer, G, H. Chemaitelly, L. J. Abu-Raddad, et al. “Seriously Misleading Results Using Inverse of Freeman-Tukey Double Arcsine Transformation in Meta-Analysis of Single Proportions”. In: <em>Research Synthesis Methods</em>. DOI: <a href="https://doi.org/10.1002/jrsm.1348">10.1002/jrsm.1348</a>.</cite></p> <p><cite>Simmons, J. P, L. D. Nelson, and U. Simonsohn (2011). “False-Positive Psychology”. In: <em>Psychological Science</em> 22.11, pp. 1359-1366. ISSN: 0956-7976. DOI: <a href="https://doi.org/10.1177/0956797611417632">10.1177/0956797611417632</a>.</cite></p> --- ## More refs 11. <p><cite>Simonsohn, U, L. D. Nelson, and J. P. Simmons (2014). 
“P-Curve: A Key to the File-Drawer.” In: <em>Journal of Experimental Psychology: General</em> 143.2, pp. 534-547. ISSN: 1939-2222. DOI: <a href="https://doi.org/10.1037/a0033242">10.1037/a0033242</a>.</cite></p> <p><cite>Sterne, J. A. C, A. J. Sutton, J. P. A. Ioannidis, et al. (2011). “Recommendations for Examining and Interpreting Funnel Plot Asymmetry in Meta-Analyses of Randomised Controlled Trials”. In: <em>BMJ</em> 343, p. d4002. ISSN: 0959-8138. DOI: <a href="https://doi.org/10.1136/bmj.d4002">10.1136/bmj.d4002</a>.</cite></p> <p><cite>Veroniki, A. A, D. Jackson, W. Viechtbauer, et al. (2016). “Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis”. In: <em>Research Synthesis Methods</em> 7.1, pp. 55-79. ISSN: 1759-2887. DOI: <a href="https://doi.org/10.1002/jrsm.1164">10.1002/jrsm.1164</a>.</cite></p> <p><cite>Viechtbauer, W. (2015). “Package ‘metafor’: Meta-Analysis Package for R”.</cite></p> <p><cite>Weiss, B. and J. Daikeler (2017). <em>Syllabus for Course: “Meta-Analysis in Survey Methodology”, 6th Summer Workshop (GESIS)</em>.</cite></p> --- ## More refs 12. <p><cite>Wickham, H. and G. Grolemund (2016). <em>R for Data Science</em>. Sebastopol, CA: O'Reilly.</cite></p> <p><cite>Wiernik, B. (2015). <em>A Brief Introduction to Meta-Analysis</em>.</cite></p> <p><cite>Wiksten, A, G. Rücker, and G. Schwarzer (2016). “Hartung-Knapp Method Is Not Always Conservative Compared with Fixed-Effect Meta-Analysis”. In: <em>Statistics in Medicine</em> 35.15, pp. 2503-2515. DOI: <a href="https://doi.org/10.1002/sim.6879">10.1002/sim.6879</a>.</cite></p> <p><cite>Wingfield, J. C, R. E. Hegner, A. M. Dufty Jr, et al. (1990). “The ‘Challenge Hypothesis’: Theoretical Implications for Patterns of Testosterone Secretion, Mating Systems, and Breeding Strategies”. In: <em>American Naturalist</em> 136, pp. 829-846. ISSN: 0003-0147.</cite></p> <p><cite>Yeaton, W. H. and P. M. Wortman (1993). 
“On the Reliability of Meta-Analytic Reviews: The Role of Intercoder Agreement”. In: <em>Evaluation Review</em> 17.3, pp. 292-309. ISSN: 0193-841X. DOI: <a href="https://doi.org/10.1177/0193841X9301700303">10.1177/0193841X9301700303</a>.</cite></p>