Post hoc analyses are usually concerned with finding patterns and/or relationships between subgroups of a sampled population that would otherwise remain undetected were a scientific community to rely strictly upon a priori statistical methods.
Tests
• Fisher's least significant difference (LSD)
• Bonferroni procedure
• Holm–Bonferroni method
• Newman–Keuls method
• Duncan's new multiple range test (MRT)
• Rodger's method
• Scheffé's method
• Tukey's procedure
• Dunnett's correction
• Šidák's inequality
• Benjamini–Hochberg (BH) procedure
Post hoc procedures vs. a priori comparisons (we look at only post hoc)
Comparisonwise error rate - the alpha level used for each individual comparison
Experimentwise error rate - the probability of making a Type I error for the set of all possible comparisons:
alpha_e = 1 - (1 - alpha)^c, where c is the number of comparisons
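To see how quickly the experimentwise error rate grows, the formula can be plugged into a short Python snippet (Python is used here purely for illustration; the post itself works in SPSS). The group count of 4 matches the medicine example below, and alpha = 0.05 is the conventional level:

```python
def pairwise_comparisons(k):
    """Number of distinct pairwise comparisons among k group means: k(k-1)/2."""
    return k * (k - 1) // 2

def experimentwise_alpha(alpha, c):
    """Probability of at least one Type I error across c independent comparisons:
    alpha_e = 1 - (1 - alpha)^c."""
    return 1 - (1 - alpha) ** c

k = 4                           # four treatment groups, as in the example below
c = pairwise_comparisons(k)     # 6 distinct comparisons
print(c, round(experimentwise_alpha(0.05, c), 3))  # prints: 6 0.265
```

So with only four groups, the chance of at least one false positive already exceeds one in four, which is exactly why post hoc procedures adjust for multiple comparisons.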
SPSS One-Way ANOVA with Post Hoc Tests
A hospital wants to know how a homeopathic medicine for depression performs in comparison
to alternatives. They administered 4 treatments to 100 patients for 2 weeks and then measured
their depression levels on the BDI.
ANOVA - Main Assumptions
- Independent observations often holds if each case (row of cells in SPSS) represents a unique person or other statistical unit. That is, we usually don't want more than one row of data per person, which holds for our data;
- Normally distributed variables in the population seems reasonable if we look at the histograms we inspected earlier. Besides, violation of the normality assumption is no real issue for larger sample sizes due to the central limit theorem;
- Homogeneity means that the population variances of BDI in each medicine group are all equal, reflected in roughly equal sample variances. Again, our split histogram suggests this is the case, but we'll confirm it by including Levene's test when running our ANOVA.
There are many ways to run the exact same ANOVA in SPSS. Today, we'll go for the procedure that provides partial eta squared as an estimate of the effect size of our model. We'll briefly jump into the Post Hoc and Options subdialogs before pasting our syntax.
The post hoc test we'll run is Tukey's HSD (Honestly Significant Difference), denoted as
“Tukey”. We'll explain how it works when we discuss the output.
“Estimates of effect size” refers to partial eta squared. “Homogeneity tests” includes Levene's
test for equal variances in our output.
SPSS ANOVA Output - Levene’s Test
Levene's test evaluates the null hypothesis that the population variances of BDI in each medicine group
are all equal, which is a requirement for ANOVA. “Sig.” = 0.949, so there's a 94.9% probability of
finding the slightly different variances that we see in our sample if the population variances are truly equal. This sample outcome is very
likely under the null hypothesis of homoscedasticity; we satisfy this assumption for our ANOVA.
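The same test can be sketched outside SPSS. Below is a minimal example using SciPy's `levene` function on invented BDI scores for three hypothetical groups (these numbers are made up for illustration and are not the hospital's data). Note that SciPy's default is the median-centered Brown–Forsythe variant, so `center='mean'` is passed to match the classic Levene's test that SPSS reports:

```python
from scipy import stats

# Invented BDI scores for three hypothetical medicine groups (illustration only).
placebo    = [18, 22, 25, 19, 24, 21, 20, 23]
medicine_a = [12, 15, 11, 14, 16, 13, 12, 15]
medicine_b = [9, 12, 10, 8, 11, 13, 9, 10]

# center='mean' gives the classic Levene's test;
# SciPy's own default, center='median', is the Brown-Forsythe variant.
stat, p = stats.levene(placebo, medicine_a, medicine_b, center='mean')
print(round(stat, 3), round(p, 3))

# A large p (e.g. > 0.05) means the equal-variances assumption is tenable.
```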
SPSS ANOVA Output - Between Subjects Effects
If our population means are really equal, there's virtually a 0% chance of finding the sample
differences we observed. We reject the null hypothesis of equal population means.
The different medicines administered account for some 39% of the variance in the BDI
scores. This is the effect size as indicated by partial eta squared.
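The F-test behind this table can also be sketched with SciPy's one-way ANOVA function, again on invented group scores (made-up data, not the hospital's; the mechanics are what matters):

```python
from scipy import stats

# Invented BDI scores for three hypothetical medicine groups (illustration only).
placebo    = [18, 22, 25, 19, 24, 21, 20, 23]
medicine_a = [12, 15, 11, 14, 16, 13, 12, 15]
medicine_b = [9, 12, 10, 8, 11, 13, 9, 10]

# One-way ANOVA: the F statistic and its p-value ("Sig." in SPSS).
f_stat, p = stats.f_oneway(placebo, medicine_a, medicine_b)
print(round(f_stat, 2), p)

# A tiny p means we reject the null hypothesis of equal population means.
```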
Partial eta squared is the sums of squares for medicine divided by the corrected total
sums of squares (2780 / 7071 = 0.39).
Sums of squares error represents the variance in BDI scores not accounted for by medicine. Note that
SS medicine + SS error = SS corrected total.
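This arithmetic is easy to verify by hand; the 2780 and 7071 below are the sums of squares from the output table discussed above:

```python
# Partial eta squared for a one-way ANOVA:
# SS_effect / (SS_effect + SS_error), which here equals SS_effect / SS_corrected_total.
ss_medicine = 2780.0         # sums of squares for the medicine factor
ss_corrected_total = 7071.0  # corrected total sums of squares
ss_error = ss_corrected_total - ss_medicine  # variance left unexplained

partial_eta_squared = ss_medicine / (ss_medicine + ss_error)
print(round(partial_eta_squared, 2))  # prints: 0.39
```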
Comparing 4 means results in (4 - 1) x 4 x 0.5 = 6 distinct comparisons, each of which is listed
twice in this table. There are three ways to tell which means are likely to be different:
- Statistically significant mean differences are flagged with an asterisk (*). For instance, the
very first line tells us that “None” has a mean BDI score 6.7 points higher than the placebo,
which is quite a lot actually since BDI scores can range from 0 through 63.
- As a rule of thumb, “Sig.” < 0.05 indicates a statistically significant difference between
two means.
- A confidence interval not including zero means that a zero difference between these means
in the population is unlikely.
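Tukey's HSD comparisons can be reproduced outside SPSS as well. As a sketch, here is SciPy's `tukey_hsd` (available in fairly recent SciPy versions) run on the same invented group scores used earlier, printing the adjusted p-value and confidence interval for each pair, which correspond to the “Sig.” and lower/upper bound columns in the SPSS table:

```python
from scipy import stats

# Invented BDI scores for three hypothetical medicine groups (illustration only).
placebo    = [18, 22, 25, 19, 24, 21, 20, 23]
medicine_a = [12, 15, 11, 14, 16, 13, 12, 15]
medicine_b = [9, 12, 10, 8, 11, 13, 9, 10]

res = stats.tukey_hsd(placebo, medicine_a, medicine_b)

# res.pvalue[i][j] is the Tukey-adjusted "Sig." for comparing groups i and j;
# the confidence intervals mirror SPSS's lower/upper bound columns.
ci = res.confidence_interval(confidence_level=0.95)
for i, j in [(0, 1), (0, 2), (1, 2)]:
    print(i, j, round(res.pvalue[i][j], 4),
          round(ci.low[i][j], 2), round(ci.high[i][j], 2))
```

For these made-up groups, the placebo mean sits well above the others, so its intervals exclude zero, the same "CI not including zero" reading described above.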
Obviously, the asterisks, the “Sig.” values and the confidence intervals all result in the same conclusions.
By: Fatin Farhana bt Marzuki
References
https://www.spss-tutorials.com/spss-one-way-anova-with-post-hoc-tests-example/
https://statistics.laerd.com/statistical-guides/one-way-anova-statistical-guide-4.php