Stats question

Hi,

If I have an electronic die and I want to test whether it is truly random (or biased towards a certain number), is there a way I can test it? (e.g. some sort of stats test that compares the number of observed outcomes to the expected number of outcomes for each face).

Anyone know how to do this?

Fred
 
If it's anything like the electronic die we made in D&T at school a couple of years ago, it will be based on the length of time the button is held down (mine was an astable attached to a decade counter). So the outcome will depend on how long the button is held, and on the previous value of the die (if it isn't reset before the next press).
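
Roughly this idea, as a toy Python sketch (not the actual circuit; the function name and tick period are made up for illustration):

# Toy model of an astable + decade-counter die: the counter cycles
# 1..6 rapidly while the button is held, and the displayed face is
# wherever it happens to stop.
def die_face(hold_time_ms, period_ms=1, start=1):
    """Face shown after holding the button for hold_time_ms,
    if the counter advances once per period_ms and started at `start`."""
    ticks = hold_time_ms // period_ms
    return (start - 1 + ticks) % 6 + 1

print(die_face(1234))  # depends entirely on hold time (and the previous value)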

null :)
 
Even if you recorded 1,000 results from it and, say, 800 were X and 200 were Y, it could still be perfectly random. That's the beauty of randomness! I don't think there's a way to check it, to be honest.
 
Yep,

Repeat the trial N times, and as N tends to infinity the empirical distribution of the trial data should approach the theoretical distribution. You could construct a confidence interval to measure how likely/unlikely your results are, I guess.
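
For example, a chi-squared goodness-of-fit test compares the count for each face against the N/6 you'd expect from a fair die. A minimal sketch in Python, assuming SciPy is available (the simulated rolls just stand in for your real readings):

import random
from scipy.stats import chisquare

# Simulate N rolls of a die; swap in the real readings here.
N = 1000
rolls = [random.randint(1, 6) for _ in range(N)]

observed = [rolls.count(face) for face in range(1, 7)]
# chisquare defaults to equal expected frequencies (N/6 per face).
stat, p_value = chisquare(observed)

print(observed)
print(f"chi2 = {stat:.2f}, p = {p_value:.3f}")
# A small p-value (say < 0.05) suggests the die is biased.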

David
 
welshtom said:
I can tell you now it's not truly random; computers can't generate random numbers, not true ones anyway.

You can actually, but not in the normal way (e.g. rand() :p ); you need to sample a source of entropy from outside the computer, such as radioactive decay, thermal noise, shot noise, clock drift, the time between keystrokes, etc.
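
For instance, the operating system pools exactly those kinds of timing/noise sources, and you can read from that pool instead of a deterministic PRNG. A minimal Python sketch of the contrast (os.urandom draws on the OS entropy pool, e.g. /dev/urandom on Linux):

import os
import random

# Deterministic PRNG: fully reproducible from its seed.
random.seed(42)
print(random.getrandbits(32))

# OS entropy pool, seeded from hardware and timing noise
# such as interrupts and device events.
print(int.from_bytes(os.urandom(4), "big"))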
 
Una said:
You can actually, but not in the normal way (e.g. rand() :p ); you need to sample a source of entropy from outside the computer, such as radioactive decay, thermal noise, shot noise, clock drift, the time between keystrokes, etc.


Defining a random variable rigorously is quite mathematically involved; at graduate level it's treated in terms of probability spaces. And when you look at 'real life' examples for a comparison set, what should be a random set can often fail to look random at all.

For example, if you take what you think is a statistically random time series and look at its ACF, then (for white noise) the sample autocorrelations shouldn't show lags outside the roughly ±1.96/√N significance band, but I've seen a few examples that counter this.
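
A rough sketch of that check in Python with NumPy (the acf helper is just for illustration; ±1.96/√N is the usual large-sample band for white noise):

import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation of x at lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

N = 1000
series = np.random.normal(size=N)  # stand-in for the series under test

bound = 1.96 / np.sqrt(N)          # ~95% band for white noise
r = acf(series, max_lag=20)
flagged = np.nonzero(np.abs(r) > bound)[0] + 1
print("lags outside the band:", flagged)
# For true white noise, expect roughly 1 in 20 lags outside by chance alone.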

Cheers

David
 