Monday, February 25, 2013
Bonferroni Correction
- Also known as a Bonferroni-type adjustment
- Made to correct for an inflated Type I error rate (the chance of a false positive: rejecting the null hypothesis when you should not)
- When conducting multiple analyses on the same dependent variable, the chance of committing a Type I error increases, raising the likelihood of obtaining a significant result by pure chance. To protect against this, a Bonferroni correction is conducted.
- The Bonferroni correction is a conservative test: although it protects against Type I error, it is vulnerable to Type II error (failing to reject the null hypothesis when you should in fact reject it)
- It alters the alpha level to a more stringent value, making it less likely to commit a Type I error
- To get the Bonferroni corrected/adjusted alpha, divide the original α-value by the number of analyses on the dependent variable. The researcher thereby assigns a new alpha to the set of analyses so that the familywise error rate, αcritical = 1 - (1 - αaltered)^k, where k = the number of comparisons on the same dependent variable, does not exceed the original α-value.
- However, when reporting the new alpha level, a version rounded to 3 decimal places is typically reported. Strictly speaking, this rounded version is not correct; it introduces a rounding error. Example: 13 correlation analyses on the same dependent variable would indicate the need for a Bonferroni correction of αaltered = .05/13 = .004 (rounded), but then αcritical = 1 - (1 - .004)^13 = 0.051, which is not less than 0.05. With the non-rounded version, αaltered = .05/13 = .003846154, and αcritical = 1 - (1 - .003846154)^13 = 0.048862271, which is in fact less than 0.05 (see the sketch after this list). SPSS does not currently have the capability to set alpha levels beyond 3 decimal places, so the rounded version is presented and used.
- Another example: 9 correlations are to be conducted between SAT scores and 9 demographic variables. To protect against Type I error, a Bonferroni correction should be conducted. The new alpha level will be the original alpha (αoriginal = .05) divided by the number of comparisons (9): αaltered = .05/9 = .006. For any of the 9 correlations to be considered statistically significant, its p-value must be p < .006.
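For readers who want to verify the rounding arithmetic, here is a minimal sketch in Python (standard library only; the helper names are just illustrative) that reproduces the 13-test example above:

```python
def bonferroni_alpha(alpha, k):
    """Per-test alpha after a Bonferroni correction for k comparisons."""
    return alpha / k

def familywise_error(alpha_altered, k):
    """Familywise Type I error rate implied by a per-test alpha."""
    return 1 - (1 - alpha_altered) ** k

k = 13
exact = bonferroni_alpha(0.05, k)   # 0.003846154...
rounded = round(exact, 3)           # 0.004, the 3-decimal version

print(familywise_error(exact, k))    # ~0.0489 -> stays below .05
print(familywise_error(rounded, k))  # ~0.0508 -> exceeds .05
```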
Thursday, January 24, 2013
Checking the Additional Assumptions of a MANOVA
A MANOVA is typically seen as an extension of an ANOVA to designs with more than one continuous dependent variable. The typical assumptions of an ANOVA should still be checked, such as normality, equality of variance, and univariate outliers. However, there are additional assumptions that should be checked when conducting a MANOVA.
The additional assumptions of the MANOVA include:
- Absence of multivariate outliers
- Linearity
- Absence of multicollinearity
- Equality of covariance matrices
Absence of multivariate outliers is checked by assessing Mahalanobis distances among the participants. To do this in SPSS, run a multiple linear regression with all of the dependent variables of the MANOVA as the independent variables of the regression; the dependent variable can simply be an ID variable, since only the distances (which depend on the predictors alone) are of interest. There is an option in SPSS to save the Mahalanobis distances when running the regression. Once this is done, sort the distances from greatest to least. To identify an outlier, the critical chi-square value must be known: the critical value at p = .001 with degrees of freedom equal to the number of dependent variables. With 3 dependent variables, the critical value is 16.27, so any participant with a Mahalanobis distance greater than 16.27 should be removed.
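Outside of SPSS, the same check can be done directly. Below is a minimal sketch using numpy and scipy, with hypothetical data standing in for the MANOVA's three dependent variables:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical data: rows are participants, columns are the three DVs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

# Squared Mahalanobis distance of each participant from the centroid.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)

# Critical chi-square value at p = .001, df = number of DVs.
critical = chi2.ppf(1 - 0.001, df=X.shape[1])  # ~16.27 for df = 3
print(critical, np.where(d2 > critical)[0])    # indices of outliers
```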
Linearity assumes that all of the dependent variables are linearly related to each other. This can be checked by examining a scatterplot matrix of the dependent variables. Linearity should be met within each group of the MANOVA separately.
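One quick way to produce these per-group scatterplot matrices outside SPSS is seaborn's pairplot, sketched below on hypothetical data (the variable names dv1-dv3 and the factor 'group' are assumptions):

```python
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical data: 'group' is the MANOVA factor, dv1-dv3 the DVs.
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(90, 3)), columns=['dv1', 'dv2', 'dv3'])
df['group'] = np.repeat(['a', 'b', 'c'], 30)

# Linearity should hold within each group, so draw one scatterplot
# matrix per level of the factor rather than one pooled matrix.
for level, sub in df.groupby('group'):
    sns.pairplot(sub[['dv1', 'dv2', 'dv3']])

plt.show()
```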
Absence of multicollinearity is checked by running correlations among the dependent variables. The dependent variables should all be moderately related, but any correlation over .80 presents a concern for multicollinearity.
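This check amounts to inspecting a correlation matrix; a minimal sketch on the same kind of hypothetical data:

```python
import numpy as np
import pandas as pd

# Hypothetical dependent-variable data (90 participants, three DVs).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(90, 3)), columns=['dv1', 'dv2', 'dv3'])

corr = df.corr()
print(corr.round(2))

# Flag off-diagonal correlations over .80 as a multicollinearity concern.
off_diag = ~np.eye(len(corr), dtype=bool)
print((corr.abs() > 0.80).values[off_diag].any())
```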
Equality of covariance matrices is an assumption checked by running a Box's M test. Unlike most tests, Box's M tends to be very sensitive, flagging even trivial departures from equality, and thus the level of significance used for it is typically .001. So as long as the p value for the test is above .001, the assumption is met.
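SPSS reports Box's M as part of its MANOVA output. Outside SPSS, the test can be computed by hand; the function below is a minimal sketch of Box's standard chi-square approximation on hypothetical data, not a production implementation:

```python
import numpy as np
from scipy.stats import chi2

def box_m(groups):
    """Box's M test for equality of covariance matrices.

    groups: list of (n_i x p) data arrays, one per level of the factor.
    Returns the chi-square statistic, degrees of freedom, and p value.
    """
    g = len(groups)
    p = groups[0].shape[1]
    ns = np.array([x.shape[0] for x in groups])
    N = ns.sum()

    # Per-group and pooled covariance matrices.
    covs = [np.cov(x, rowvar=False) for x in groups]
    pooled = sum((n - 1) * S for n, S in zip(ns, covs)) / (N - g)

    # Box's M statistic.
    M = (N - g) * np.log(np.linalg.det(pooled)) - sum(
        (n - 1) * np.log(np.linalg.det(S)) for n, S in zip(ns, covs))

    # Scaling factor for the chi-square approximation.
    c = ((2 * p**2 + 3 * p - 1) / (6 * (p + 1) * (g - 1))) * (
        np.sum(1 / (ns - 1)) - 1 / (N - g))

    stat = M * (1 - c)
    df = p * (p + 1) * (g - 1) / 2
    return stat, df, chi2.sf(stat, df)

# Hypothetical example: three groups, three DVs, 30 participants each.
rng = np.random.default_rng(0)
stat, df, pval = box_m([rng.normal(size=(30, 3)) for _ in range(3)])
print(pval > 0.001)  # True -> assumption met at the .001 level
```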