Fits right-truncated meta-analysis (RTMA), a bias correction for the joint effects of p-hacking (i.e., manipulation of results within studies to obtain significant, positive estimates) and traditional publication bias (i.e., the selective publication of studies with significant, positive results) in meta-analyses. This method analyzes only nonaffirmative studies (i.e., those with nonsignificant or negative estimates). You can pass either all studies in the meta-analysis or only the nonaffirmative ones; in either case, the function analyzes only the nonaffirmative ones.
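The affirmative/nonaffirmative split can be sketched in a few lines of base R. This is a hypothetical illustration, not the package's internal code: it uses a normal-approximation critical value, whereas `phacking_meta()` computes its own `tcrit`, and the vectors below are made-up data.

```r
# Hypothetical point estimates and standard errors
yi  <- c(0.40, 0.10, -0.05, 0.25)
sei <- c(0.10, 0.12, 0.08, 0.15)

# Two-tailed critical value at the default alpha_select = 0.05
# (normal approximation; ~1.96)
tcrit <- qnorm(1 - 0.05 / 2)

# Affirmative = significant AND positive; everything else is nonaffirmative
affirm <- (yi / sei) > tcrit
table(affirm)  # here: 1 affirmative, 3 nonaffirmative
```

Only the three nonaffirmative studies would contribute to the RTMA fit.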

## Usage

```
phacking_meta(
  yi,
  vi,
  sei,
  favor_positive = TRUE,
  alpha_select = 0.05,
  ci_level = 0.95,
  stan_control = list(adapt_delta = 0.98, max_treedepth = 20),
  parallelize = TRUE
)
```

## Arguments

- yi
A vector of point estimates to be meta-analyzed.

- vi
A vector of estimated variances (i.e., squared standard errors) for the point estimates.

- sei
A vector of estimated standard errors for the point estimates. (Only one of `vi` or `sei` needs to be specified.)

- favor_positive
`TRUE` if publication bias is assumed to favor significant positive estimates; `FALSE` if assumed to favor significant negative estimates.

- alpha_select
Alpha level at which an estimate's probability of being favored by publication bias is assumed to change (i.e., the threshold at which study investigators, journal editors, etc., consider an estimate to be significant).

- ci_level
Confidence interval level (as a proportion) for the corrected point estimate. (The alpha level for inference on the corrected point estimate will be calculated from `ci_level`.)

- stan_control
List passed to `rstan::sampling()` as the `control` argument.

- parallelize
Logical indicating whether to parallelize sampling.
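For illustration, a few call variants follow (shown as comments because fitting requires compiled Stan sampling; `yi`, `vi`, and `sei` are assumed to be numeric vectors from your own meta-analysis):

```r
# Supply either variances or standard errors -- not both are needed:
# phacking_meta(yi = yi, vi = vi)
# phacking_meta(yi = yi, sei = sei)

# If bias is assumed to favor significant *negative* estimates:
# phacking_meta(yi = yi, sei = sei, favor_positive = FALSE)

# A stricter selection threshold and wider interval:
# phacking_meta(yi = yi, vi = vi, alpha_select = 0.01, ci_level = 0.99)
```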

## Value

An object of class `metabias::metabias()`, a list containing:

- data
A tibble with one row per study and the columns `yi`, `vi`, `sei`, `affirm`.

- values
A list with the elements `favor_positive`, `alpha_select`, `ci_level`, `tcrit`, `k`, `k_affirmative`, `k_nonaffirmative`, `optim_converged`. `optim_converged` indicates whether the optimization to find the posterior mode converged.

- stats
A tibble with two rows and the columns `param`, `mode`, `median`, `mean`, `se`, `ci_lower`, `ci_upper`, `n_eff`, `r_hat`. We recommend reporting the `mode` for the point estimate; `median` and `mean` represent posterior medians and means, respectively.

- fit
A `stanfit` object (the result of fitting the RTMA model).
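A minimal sketch of inspecting these components (assuming `res` holds the result of a `phacking_meta()` call; shown as comments because producing `res` requires Stan sampling):

```r
# res$stats                    # posterior summaries; report `mode` as the point estimate
# res$values$optim_converged   # check that the posterior-mode optimization converged
# res$data$affirm              # which studies were classified as affirmative
# res$fit                      # underlying stanfit object, e.g. for rstan diagnostics
```

Checking `n_eff` and `r_hat` in `res$stats`, along with `optim_converged`, is a reasonable first pass at diagnosing whether the fit can be trusted.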

## References

Mathur MB (2022). “Sensitivity analysis for p-hacking in meta-analyses.” doi:10.31219/osf.io/ezjsx.

## Examples

```
# \donttest{
# passing all studies, though only nonaffirmative ones will be analyzed
money_priming_rtma <- phacking_meta(money_priming_meta$yi, money_priming_meta$vi,
                                    parallelize = FALSE)
#>
#> SAMPLING FOR MODEL 'phacking_rtma' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 0.000439 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 4.39 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 4.432 seconds (Warm-up)
#> Chain 1: 3.732 seconds (Sampling)
#> Chain 1: 8.164 seconds (Total)
#> Chain 1:
#>
#> (Analogous sampling output for chains 2-4 omitted.)
# }
```