Given the hypothetical population effect size and the required power level, the function prospective() performs a prospective design analysis for Pearson's correlation test between two variables or a t-test comparing group means (Cohen's d). According to the defined alternative hypothesis and the significance level, the required sample size is computed together with the associated Type M error, Type S error, and the critical effect value (i.e., the minimum absolute effect size that would be statistically significant).

prospective(
  effect_size,
  power,
  ratio_n = 1,
  test_method = c("pearson", "two_sample", "welch", "paired", "one_sample"),
  alternative = c("two_sided", "less", "greater"),
  sig_level = 0.05,
  ratio_sd = 1,
  B = 10000,
  tl = -Inf,
  tu = Inf,
  B_effect = 1000,
  sample_range = c(2, 1000),
  eval_power = c("median", "mean"),
  tol = 0.01,
  display_message = TRUE
)

Arguments

effect_size

a numeric value or function (see Details) indicating the hypothetical population effect size.

power

a numeric value indicating the required power level.

ratio_n

a numeric value indicating the ratio between the sample size in the first group and in the second group. This argument is required when test_method is set to "two_sample" or "welch". In the case of test_method = "paired", set ratio_n to 1; in the case of test_method = "one_sample", set ratio_n to NULL. This argument is ignored for test_method = "pearson". See the Test methods section in Details.

test_method

a character string specifying the test type, must be one of "pearson" (default, Pearson's correlation), "two_sample" (independent two-sample t-test), "welch" (Welch's t-test), "paired" (dependent t-test for paired samples), or "one_sample" (one-sample t-test). You can specify just the initial letters.

alternative

a character string specifying the alternative hypothesis, must be one of "two_sided" (default), "greater" or "less". You can specify just the initial letter.

sig_level

a numeric value indicating the significance level on which the alternative hypothesis is evaluated.

ratio_sd

a numeric value indicating the ratio between the standard deviation in the first group and in the second group. This argument is required only in the case of Welch's t-test.

B

a numeric value indicating the number of iterations. Increase the number of iterations to obtain more stable results.

tl

optional value indicating the lower truncation point if effect_size is defined as a function.

tu

optional value indicating the upper truncation point if effect_size is defined as a function.

B_effect

a numeric value indicating the number of sampled effects if effect_size is defined as a function. Increase the number to obtain more stable results.

sample_range

a length-2 numeric vector indicating the minimum and maximum sample size of the first group (sample_n1).

eval_power

a character string specifying the function used to summarize the resulting distribution of power values. Must be one of "median" (default) or "mean". You can specify just the initial letters. See Details.

tol

a numeric value indicating the tolerance of the required power level.

display_message

a logical value indicating whether to display the information about computational steps and the progress bar. Note that the progress bar is available only when effect_size is defined as a function.

Value

A list with class "design_analysis" containing the following components:

design_analysis

a character string indicating the type of design analysis: "prospective".

call_arguments

a list with all the arguments passed to the function and the raw function call.

effect_info

a list with all the information regarding the considered hypothetical population effect size. The list includes: effect_type, indicating the type of effect; effect_function, indicating the function from which effects are sampled, or the string "single_value" if a single value was provided; effect_summary, a summary of the sampled effects; effect_samples, a vector with the sampled effects (or the unique value in the case of a single value); and, if relevant, tl and tu, specifying the lower and upper truncation points respectively.

test_info

a list with all the information regarding the test performed. The list includes: test_method, a character string indicating the test method (i.e., "pearson", "one_sample", "paired", "two_sample", or "welch"); the required sample size (sample_n1 and, if relevant, sample_n2); the alternative hypothesis (alternative); the significance level (sig_level) and degrees of freedom (df) of the statistical test; and critical_effect, the minimum absolute effect value that would be statistically significant. Note that, in the case of alternative = "two_sided", critical_effect is given as an absolute value and both the positive and negative values should be considered.

prospective_res

a data frame with the results of the design analysis. Column names are power, typeM, and typeS.
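As a minimal sketch (the argument values are illustrative; component names follow the Value description above), the returned object can be inspected as a regular list:

res <- prospective(effect_size = .3, power = .8, test_method = "pearson", B = 1e3)

res$test_info$sample_n1        # required sample size of the first group
res$test_info$critical_effect  # minimum absolute effect that would be significant
res$prospective_res            # data frame with power, typeM, and typeS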

Details

Conduct a prospective design analysis to define the required sample size and the associated inferential risks according to study design. A general overview is provided in the vignette("prospective").

Population effect size

The hypothetical population effect size (effect_size) can be set to a single value or to a function that samples values from a given distribution. The function has to be defined as function(n) my_function(n, ...), with a single argument n representing the number of sampled values (e.g., function(n) rnorm(n, mean = 0, sd = 1); function(n) sample(c(.1, .3, .5), n, replace = TRUE)). This allows users to define the hypothetical effect size distribution according to their needs.
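For instance, a minimal sketch of two possible specifications (the specific distributions and values are illustrative, not recommendations):

# Sample hypothetical effects from a normal distribution
effect_normal <- function(n) rnorm(n, mean = .3, sd = .1)

# Sample hypothetical effects from a discrete set of plausible values
effect_discrete <- function(n) sample(c(.1, .3, .5), n, replace = TRUE)

# Either function can then be passed as effect_size, e.g.
# prospective(effect_size = effect_normal, power = .8, test_method = "pearson")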

The argument B_effect allows defining the number of sampled effects. Users can access the sampled effects in the effect_info list included in the output to evaluate whether the sample is representative of their specification. Increasing the number yields more accurate results but requires more computational time (default is 1000). To avoid long computational times, we suggest adjusting B when using a function to define the hypothetical population effect size.

The optional arguments tl and tu allow truncating the sampling distribution by specifying the lower and the upper truncation point, respectively. Note that if effect_type = "correlation", the distribution is automatically truncated between -1 and 1.

When a distribution of effects is specified, a corresponding distribution of power values is obtained as a result. To evaluate whether the required level of power is reached, users can choose between the median and the mean as a summary of this distribution using the argument eval_power. The two options answer different questions: What is the required sample size to obtain a power equal to or greater than the required level in at least 50% of the cases (median)? What is the required sample size to obtain, on average, a power equal to or greater than the required level (mean)?
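Putting these pieces together, a minimal sketch of a call with a truncated effect distribution evaluated through the mean of the resulting power distribution (all values are illustrative):

prospective(effect_size = function(n) rnorm(n, mean = .3, sd = .1),
            power = .8, test_method = "pearson",
            tl = .15, B_effect = 500, B = 500,
            eval_power = "mean")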

Test methods

The function prospective() performs a prospective design analysis considering correlations between two variables or comparisons between group means.

In the case of a correlation, only Pearson's correlation between two variables is available, whereas Kendall's tau and Spearman's rho are not implemented. The test_method argument has to be set to "pearson" (default) and the effect_size argument is used to define the hypothetical population effect size in terms of Pearson's correlation coefficient (\(\rho\)). The ratio_n argument is ignored.

In the case of a comparison between group means, the effect_size argument is used to define the hypothetical population effect size in terms of Cohen's d, and the available t-tests are selected by specifying the test_method argument. For the independent two-sample t-test, use "two_sample" and indicate the ratio between the sample sizes of the first and second groups (ratio_n). For Welch's t-test, use "welch" and indicate both the ratio between the sample sizes of the first and second groups (ratio_n) and the ratio between the standard deviations in the first and second groups (ratio_sd). For the dependent t-test for paired samples, use "paired" (ratio_n has to be 1). For the one-sample t-test, use "one_sample" (ratio_n has to be NULL).

Study design

Study design can be further defined according to statistical test directionality and required \(\alpha\)-level using the arguments alternative and sig_level respectively.
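As a sketch, a one-sided design at a stricter significance level could be specified as follows (the values are purely illustrative):

# One-sided alternative with alpha = .01
prospective(effect_size = .3, power = .8, test_method = "two_sample",
            ratio_n = 1, alternative = "greater", sig_level = .01, B = 1e3)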

References

Altoè, G., Bertoldo, G., Zandonella Callegher, C., Toffalini, E., Calcagnì, A., Finos, L., & Pastore, M. (2020). Enhancing Statistical Inference in Psychological Research via Prospective and Retrospective Design Analysis. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.02893

Bertoldo, G., Altoè, G., & Zandonella Callegher, C. (2020). Designing Studies and Evaluating Research Results: Type M and Type S Errors for Pearson Correlation Coefficient. Retrieved from https://psyarxiv.com/q9f86/

Gelman, A., & Carlin, J. (2014). Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors. Perspectives on Psychological Science, 9(6), 641–651. https://doi.org/10.1177/1745691614551642

Examples

# Pearson's correlation
prospective(effect_size = .3, power = .8, test_method = "pearson", B = 1e3)
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 1
#>
#> Evaluate n = 126
#> Estimated power is 0.93
#>
#> Evaluate n = 64
#> Estimated power is 0.68
#>
#> Evaluate n = 95
#> Estimated power is 0.85
#>
#> Evaluate n = 80
#> Estimated power is 0.79
#>
#> Evaluate n = 88
#> Estimated power is 0.83
#>
#> Evaluate n = 84
#> Estimated power is 0.8
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: rho = 0.3
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level df
#>      pearson        84      NULL   two_sided      0.05 82
#>
#> Inferential risks:
#>  power typeM typeS
#>  0.803 1.125     0
#>
#> Critical value(s): rho = ± 0.215
# Two-sample t-test
prospective(effect_size = .3, power = .8, ratio_n = 1.5, test_method = "two_sample", B = 1e3)
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 0.95
#>
#> Evaluate n = 126
#> Estimated power is 0.74
#>
#> Evaluate n = 188
#> Estimated power is 0.89
#>
#> Evaluate n = 157
#> Estimated power is 0.82
#>
#> Evaluate n = 142
#> Estimated power is 0.78
#>
#> Evaluate n = 150
#> Estimated power is 0.78
#>
#> Evaluate n = 154
#> Estimated power is 0.81
#>
#> Evaluate n = 152
#> Estimated power is 0.82
#>
#> Evaluate n = 151
#> Estimated power is 0.8
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: cohen_d = 0.3
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level  df
#>   two_sample       226       151   two_sided      0.05 375
#>
#> Inferential risks:
#>  power typeM typeS
#>  0.802 1.114     0
#>
#> Critical value(s): cohen_d = ± 0.207
# Welch t-test
prospective(effect_size = .3, power = .8, ratio_n = 2, test_method = "welch", ratio_sd = 1.5, B = 1e3)
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 0.98
#>
#> Evaluate n = 126
#> Estimated power is 0.83
#>
#> Evaluate n = 64
#> Estimated power is 0.55
#>
#> Evaluate n = 95
#> Estimated power is 0.73
#>
#> Evaluate n = 110
#> Estimated power is 0.79
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: cohen_d = 0.3
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level      df
#>        welch       220       110   two_sided      0.05 301.979
#>
#> Inferential risks:
#>  power typeM typeS
#>  0.791 1.139     0
#>
#> Critical value(s): cohen_d = ± 0.215
# Paired t-test
prospective(effect_size = .3, power = .8, ratio_n = 1, test_method = "paired", B = 1e3)
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 1
#>
#> Evaluate n = 126
#> Estimated power is 0.93
#>
#> Evaluate n = 64
#> Estimated power is 0.66
#>
#> Evaluate n = 95
#> Estimated power is 0.82
#>
#> Evaluate n = 80
#> Estimated power is 0.77
#>
#> Evaluate n = 88
#> Estimated power is 0.79
#>
#> Evaluate n = 92
#> Estimated power is 0.83
#>
#> Evaluate n = 90
#> Estimated power is 0.81
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: cohen_d = 0.3
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level df
#>       paired        90        90   two_sided      0.05 89
#>
#> Inferential risks:
#>  power typeM typeS
#>  0.806 1.125     0
#>
#> Critical value(s): cohen_d = ± 0.209
# One-sample t-test
prospective(effect_size = .3, power = .8, ratio_n = NULL, test_method = "one_sample", B = 1e3)
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 1
#>
#> Evaluate n = 126
#> Estimated power is 0.92
#>
#> Evaluate n = 64
#> Estimated power is 0.69
#>
#> Evaluate n = 95
#> Estimated power is 0.84
#>
#> Evaluate n = 80
#> Estimated power is 0.76
#>
#> Evaluate n = 88
#> Estimated power is 0.8
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: cohen_d = 0.3
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level df
#>   one_sample        88      NULL   two_sided      0.05 87
#>
#> Inferential risks:
#>  power typeM typeS
#>  0.804 1.145     0
#>
#> Critical value(s): cohen_d = ± 0.212
# \donttest{
# Define effect_size using functions (long computational time)
prospective(effect_size = function(n) rnorm(n, .3, .1), power = .8,
            test_method = "pearson", B_effect = 500, B = 500, tl = .15)
#> If 'effect_type = correlation', effect_size distribution is truncated between 0.15 and 1
#> Truncation could require long computational time
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 1
#>
#> Evaluate n = 126
#> Estimated power is 0.96
#>
#> Evaluate n = 64
#> Estimated power is 0.73
#>
#> Evaluate n = 95
#> Estimated power is 0.88
#>
#> Evaluate n = 80
#> Estimated power is 0.82
#>
#> Evaluate n = 72
#> Estimated power is 0.78
#>
#> Evaluate n = 76
#> Estimated power is 0.8
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: rho ~ rnorm(n, 0.3, 0.1) [tl = 0.15 ; tu = 1 ]
#>  n_effect  Min. 1st Qu. Median  Mean 3rd Qu.  Max.
#>       500 0.151   0.251  0.317 0.317   0.372 0.629
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level df
#>      pearson        76      NULL   two_sided      0.05 74
#>
#> Inferential risks:
#>         Min. 1st Qu. Median     Mean 3rd Qu.  Max.
#>  power 0.238  0.5935 0.8030 0.746044  0.9185 1.000
#>  typeM 0.985  1.0400 1.1105 1.189070  1.2720 1.895
#>  typeS 0.000  0.0000 0.0000 0.000080  0.0000 0.007
#>
#> Critical value(s): rho = ± 0.226
prospective(effect_size = function(n) rnorm(n, .3, .1), power = .8, test_method = "two_sample", ratio_n = 1, B_effect = 500, B = 500, tl = .2, tu = .4)
#> Truncation could require long computational time
#> Evaluate n = 501
#> Estimated power is 1
#>
#> Evaluate n = 251
#> Estimated power is 0.92
#>
#> Evaluate n = 126
#> Estimated power is 0.67
#>
#> Evaluate n = 188
#> Estimated power is 0.84
#>
#> Evaluate n = 157
#> Estimated power is 0.77
#>
#> Evaluate n = 172
#> Estimated power is 0.8
#>
#>
#>  Design Analysis
#>
#> Hypothesized effect: cohen_d ~ rnorm(n, 0.3, 0.1) [tl = 0.2 ; tu = 0.4 ]
#>  n_effect  Min. 1st Qu. Median  Mean 3rd Qu.  Max.
#>       500 0.201   0.262  0.303 0.302   0.343 0.399
#>
#> Study characteristics:
#>  test_method sample_n1 sample_n2 alternative sig_level  df
#>   two_sample       172       172   two_sided      0.05 342
#>
#> Inferential risks:
#>         Min. 1st Qu. Median     Mean 3rd Qu.  Max.
#>  power 0.426   0.670  0.800 0.773828  0.8885 0.972
#>  typeM 0.999   1.069  1.122 1.155608  1.2160 1.477
#>  typeS 0.000   0.000  0.000 0.000008  0.0000 0.004
#>
#> Critical value(s): cohen_d = ± 0.212
# }