multcompare and ttest2

Dear Friends,
I have a simple and very basic question regarding the p-values of the multcompare and ttest2 functions. As I understand it, after ANOVA we use post-hoc analysis to get p-values for all pairwise comparisons. I expected to get the same p-values using ttest2. BUT, the p-values are very different! Would you please help me figure out the problem? Thanks, Karlo

Answers (1)

the cyclist on 2 Mar 2016
Edited: the cyclist on 2 Mar 2016

0 votes

When you make pairwise comparisons among several groups (not just two), you are more likely to find a difference between a given pair just by random chance.
The p-values you get from multcompare take this into account. The p-values you get from running all the t-tests independently do not (because each test doesn't "know" that you have run multiple comparisons).
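To see the effect concretely, here is a minimal sketch on simulated data (the matrix A and the group means are made up for illustration): the corrected p-values in column 6 of multcompare's output are generally larger than the uncorrected ttest2 p-values for the same pairs.

```matlab
% Sketch (hypothetical data): compare multcompare's corrected p-values
% with uncorrected pairwise ttest2 p-values across four groups.
rng(0);                                          % reproducible example
A = randn(20,4) + repmat([0 0 0.5 1],20,1);      % 20 obs/group, shifted means
[p,tbl,stats] = anova1(A,[],'off');              % one-way ANOVA, no figure
c = multcompare(stats,'ctype','bonferroni','display','off');
% c(:,6) holds the corrected p-value for each of the nchoosek(4,2) = 6 pairs
[~,p12] = ttest2(A(:,1),A(:,2));                 % uncorrected test, groups 1 vs 2
% p12 will generally be smaller than the matching entry in c(:,6)
```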

5 comments

Thanks for your answer. Please let me explain my problem with an example:
I have 4 groups and one measured variable.
[p,tbl,stats]=anova1(A);
[c,m,h,gnames] = multcompare(stats,'ctype','bonferroni', 'display','on')
c(:,6) gives p = 0.0225 between groups 1 and 2.
However,
[h,p_val,ci,stats] = ttest2(A(:,1), A(:,2)); gives p_val = 0.0085. I was thinking the corrected value would be p_val*4 = 0.034, which is different from c(1,6).
Why are these values different?
Thanks again, Karlo
Why do you expect the multcompare P-value to be
p_val*4 = 0.034
? Do you have a reference for that expectation?
I thought multiplying the p-value by 4 (the number of groups) would be the Bonferroni correction for the independent t-tests. It sounds like I am wrong. Would you please tell me how this multiple-comparison correction has to be done?
Many, many thanks for your time and patience with my naive questions.
There are many possible solutions to the multiple comparisons problem. I don't know the algorithm that multcompare uses, and can't dig into it right now. There are references in the documentation. You could also type
edit multcompare
to see what the code does.
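One detail worth noting about the expectation above: Bonferroni multiplies each raw p-value by the number of comparisons performed, and for k = 4 groups that is nchoosek(4,2) = 6 pairs, not 4. A sketch of the manual correction (assuming A is the same n-by-4 data matrix as in the example):

```matlab
% Sketch: manual Bonferroni correction over all pairwise ttest2 tests
% of a 4-column data matrix A (hypothetical data).
k = size(A,2);
pairs = nchoosek(1:k,2);             % all group pairs
m = size(pairs,1);                   % 6 comparisons for k = 4, not 4
p_raw = zeros(m,1);
for i = 1:m
    [~,p_raw(i)] = ttest2(A(:,pairs(i,1)), A(:,pairs(i,2)));
end
p_bonf = min(p_raw * m, 1);          % Bonferroni: multiply by m, cap at 1
```

Even with the factor of 6, these values will not match multcompare's c(:,6) exactly, because multcompare builds its t-statistics from the pooled ANOVA mean squared error (error df = N − k) rather than from the two groups alone.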
Dear the cyclist,
You are correct that uncorrected multiple comparisons may lead to false positives (by chance). However, when using multcompare with the 'lsd' option (i.e., no multiplicity correction), I still get p-values that differ from those obtained from a standard t-test (not very different, but still). Do you have any idea why this happens?
Felix
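One likely explanation (an assumption, not verified against the multcompare source): with 'lsd' no multiplicity adjustment is applied, but the test statistic is still built from the ANOVA's pooled error variance across all k groups (error df = N − k), whereas ttest2 pools the variance of only the two groups being compared (df = n1 + n2 − 2). A sketch of the LSD-style computation for groups 1 and 2, assuming A is an n-by-k matrix with equal group sizes:

```matlab
% Sketch: LSD-style p-value for groups 1 vs 2, built from the pooled
% ANOVA error term (assumes equal group sizes, balanced design).
[n,k] = size(A);
SSE   = sum(sum((A - repmat(mean(A,1),n,1)).^2));  % within-group sum of squares
dfe   = n*k - k;                                   % error df = N - k
MSE   = SSE/dfe;                                   % pooled variance, all k groups
t     = (mean(A(:,1)) - mean(A(:,2))) / sqrt(MSE*(1/n + 1/n));
p_lsd = 2*tcdf(-abs(t), dfe);   % ttest2 instead uses df = 2n - 2 and
                                % pools only the two groups' variances
```

Because MSE and the error degrees of freedom differ from the two-sample versions, the two p-values will be close but not identical, which matches what Felix observes.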


