Hey, I'm doing some statistics work at the moment; Wikipedia has explained it fairly well, but I need some more help understanding it.
Q-Test...

With the Q-test, my values for one set are:
0.2, 0.2, 0.2, 0.3
Well for the Q test it says...
Q = gap / range
where gap is the absolute difference between the outlier in question and the number closest to it, and range is the difference between the largest and smallest values in the set. If Qcalculated > Qtable, then reject the questionable point.
Well, the suspect value in my data set is 0.3; Wikipedia's example happens to test the lowest value in its data set, but the test applies just the same to the highest.
If I put my values into the equation, taking the gap as an absolute difference:
Q = gap / range
Q = |0.2 - 0.3| / (0.3 - 0.2)
Q = 0.1 / 0.1 = 1
Referring to the table for Q at 95% confidence, Q95% for 4 values is 0.829. Since Qcalculated = 1 > 0.829, the questionable point 0.3 is rejected. My earlier result of Q = -1 came from subtracting in the wrong order instead of taking the absolute value of the gap, which is what had me confused.
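The arithmetic above can be sketched in Python. This is a minimal illustration, not a library routine: the function name is mine, and the 95% critical-value table is an assumption copied from commonly published Dixon Q tables (it does match the 0.829 for n = 4 used above).

```python
def dixon_q(values):
    """Return (Q, suspect) for the value with the largest gap to its neighbor.

    Q = gap / range, where gap is the absolute difference between the
    suspect value and its nearest neighbor, and range is max - min.
    """
    s = sorted(values)
    rng = s[-1] - s[0]
    if rng == 0:
        raise ValueError("all values identical; Q is undefined")
    gap_low = s[1] - s[0]      # gap if the lowest value is the suspect
    gap_high = s[-1] - s[-2]   # gap if the highest value is the suspect
    if gap_high >= gap_low:
        return gap_high / rng, s[-1]
    return gap_low / rng, s[0]

# Assumed 95%-confidence critical values for n = 3..10, taken from
# standard Dixon Q tables (verify against your own reference).
Q95 = {3: 0.970, 4: 0.829, 5: 0.710, 6: 0.625, 7: 0.568,
       8: 0.526, 9: 0.512, 10: 0.477}

data = [0.2, 0.2, 0.2, 0.3]
q, suspect = dixon_q(data)
reject = q > Q95[len(data)]
print(q, suspect, reject)  # Q = 1.0, suspect 0.3, rejected at 95%
```

Because the gap is computed as a positive difference between sorted neighbors, Q can never come out negative, which is exactly the fix for the -1 above.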