Introduction
Effect size is an important statistic that reflects the strength, or magnitude, of a relationship between two variables. This paper uses the article by Mancuso (2010) as an example to show that effect sizes should be reported in studies because they allow for assessing the practical importance of the obtained results, whereas statistical significance (the p-value) only permits estimating their statistical trustworthiness (or, more precisely, non-extremity) (Forthofer, Lee, & Hernandez, 2007).
Summary of the Article
Mancuso (2010) examined whether patient trust and health literacy could predict glycemic control in patients with diabetes; factors such as socioeconomic status, knowledge of diabetes, demographic characteristics, depression, and self-care activities were also taken into account. Data were gathered from 102 participants and subjected to a simultaneous multiple regression. The study showed that all the named factors together provide a model accounting for a substantial amount of variance (28.5%), but further analysis demonstrated that only patient trust (B = -0.873, SE = 0.165) and depression (B = 0.036, SE = 0.014) were statistically significant individual predictors of glycemic control.
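The reported adjusted R² follows from the raw R² via the standard adjustment formula. The sketch below is illustrative: the predictor count of five is an assumption (the article's exact count is not stated here), chosen because it reproduces the reported adjusted value of .285 from R² = .32 and n = 102.

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared: penalizes raw R-squared for model complexity.

    r2: raw squared multiple correlation
    n:  sample size
    p:  number of predictors (p = 5 is an assumption, not a reported figure)
    """
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Values from the study: R-squared = .32, n = 102; p = 5 assumed.
print(round(adjusted_r2(0.32, 102, 5), 3))  # -> 0.285
```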
Relationship between Effect Size, Power, and Sample Size in the Article
A pilot study was conducted to estimate the magnitude of the effect to be detected; it yielded R² = .19, providing grounds for targeting an effect of medium size, i.e., .15 (Mancuso, 2010, p. 97). For such an effect size, a sample of 92 participants was needed to attain a power of 80%. However, 102 individuals were studied, offering a statistical power of 85% to detect a medium-sized effect (.15) (Mancuso, 2010, p. 97).
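These figures can be checked numerically. For the overall F test in a fixed-effects multiple regression, power is obtained from the noncentral F distribution with noncentrality parameter λ = f²·n, where f² is Cohen's effect-size index. The sketch below assumes five predictors and α = .05 (illustrative assumptions; the exact results depend on the model actually tested).

```python
from scipy.stats import f, ncf

def regression_power(f2, n, num_predictors, alpha=0.05):
    """Approximate power of the overall F test in multiple regression."""
    u = num_predictors            # numerator degrees of freedom
    v = n - num_predictors - 1    # denominator degrees of freedom
    lam = f2 * n                  # noncentrality parameter, lambda = f^2 * n
    f_crit = f.ppf(1 - alpha, u, v)        # critical value under H0
    return 1 - ncf.cdf(f_crit, u, v, lam)  # tail probability under H1

# Medium effect f^2 = .15: power with n = 92 and with n = 102
print(round(regression_power(0.15, 92, 5), 2))
print(round(regression_power(0.15, 102, 5), 2))
```

Under these assumptions the two powers land in the neighborhood of the reported 80% and 85%; the exact figures shift slightly with the number of predictors.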
Practical and Statistical Significance of Effect Size
The relationship detected by Mancuso (2010) was practically significant, and the author used effect size to describe its magnitude. It is stated, for example, that for the overall model R² = .32 and adjusted R² = .285, so the effect size was close to large (Mancuso, 2010, p. 98). Practically, it means that the model accounted for 28.5% of the variance in glycemic control (Mancuso, 2010, p. 98), so the magnitude of the relationship between the predictors and glycemic control was large enough to be practically important. On the contrary, if the effect size had been very small (e.g., .0001, accounting for 0.01% of the variance), then even a statistically significant relationship would likely have been too weak to be of any practical interest.
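Cohen's f², the usual effect-size index for multiple regression, is a direct transformation of R², which makes it easy to compare the reported fit against the conventional benchmarks (f² of .02 small, .15 medium, .35 large). The conversion below uses the values reported above; the benchmark labels are Cohen's conventions, not figures from the article.

```python
def cohens_f2(r2):
    """Convert a squared multiple correlation R-squared into Cohen's f-squared."""
    return r2 / (1 - r2)

# Reported model fit: R-squared = .32, adjusted R-squared = .285
print(round(cohens_f2(0.32), 2))   # -> 0.47
print(round(cohens_f2(0.285), 2))  # -> 0.4
```

By this conversion the effect is at or above Cohen's .35 benchmark for a large effect, consistent with the article's characterization of the model fit.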
The estimated effect size was also used to calculate the sample size needed to achieve an acceptable level of statistical power (80%), meaning that the effect size estimate helped ensure the test was sufficiently powerful. It should be noted, however, that increasing the sample size could potentially raise statistical power almost to 1.0 (Ellis, 2010), allowing for detection of even the smallest, almost non-existent, and thus practically uninteresting effects (Field, 2013).
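This sample-size point can be illustrated with the same noncentral-F power computation (again a sketch assuming five predictors and α = .05): an effect as small as f² = .001, explaining roughly 0.1% of the variance, is almost undetectable at n = 100 but becomes a near-certain detection at n = 50,000.

```python
from scipy.stats import f, ncf

def regression_power(f2, n, num_predictors, alpha=0.05):
    """Approximate power of the overall F test in multiple regression."""
    u = num_predictors            # numerator degrees of freedom
    v = n - num_predictors - 1    # denominator degrees of freedom
    f_crit = f.ppf(1 - alpha, u, v)
    return 1 - ncf.cdf(f_crit, u, v, f2 * n)  # lambda = f^2 * n

# A practically negligible effect (f^2 = .001):
print(round(regression_power(0.001, 100, 5), 2))    # barely above alpha
print(round(regression_power(0.001, 50_000, 5), 2)) # essentially 1.0
```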
Conclusion
Therefore, it is important to report both the practical (effect size) and statistical (p-value) significance of study results. The p-value indicates statistical significance, that is, how unlikely the observed (or more extreme) results would be if the null hypothesis were true, whereas the effect size shows whether the magnitude of the detected effect is large enough to be of practical importance.
Also, because the power of a statistical test depends heavily on the sample size, a sufficiently large sample can detect even the tiniest differences in a practically homogeneous population; reporting effect size is therefore paramount for showing the actual magnitude of such differences and assessing their practical importance.
References
Ellis, P. D. (2010). The essential guide to effect sizes: Statistical power, meta-analysis, and the interpretation of research results. Cambridge, UK: Cambridge University Press.
Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Thousand Oaks, CA: SAGE Publications.
Forthofer, R. N., Lee, E. S., & Hernandez, M. (2007). Biostatistics: A guide to design, analysis, and discovery (2nd ed.). Burlington, MA: Elsevier Academic Press.
Mancuso, J. M. (2010). Impact of health literacy and patient trust on glycemic control in an urban USA population. Nursing and Health Sciences, 12(1), 94-104.