Mathematics genius needed

I just had a conversation with a friend, discussing song-shuffling, and a wider implication of pseudorandom generation.

If you have 10 songs, the chance of a repeat is x. Now, if you have 10,000 songs, is the chance of repeat greater or smaller?

Instantly you'd think there is a smaller chance of a repeat as there are more songs, but I'm sure I remember this not being the case. Something to do with the Law of Large Numbers.

Anyone shed any light on this?
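For what it's worth, here's a rough way to put numbers on it, assuming the player picks each track independently and uniformly at random (i.e. with replacement, which isn't necessarily what a real shuffle does). The chance the very next pick repeats the current song is 1/N, and the chance that *any* song comes up twice within a short listening session is a birthday-problem calculation:

```python
from math import prod

def p_any_repeat(n_songs, k_plays):
    """Chance that at least one song comes up twice in k_plays,
    assuming each play is an independent uniform pick (birthday problem)."""
    if k_plays > n_songs:
        return 1.0
    p_all_distinct = prod((n_songs - i) / n_songs for i in range(k_plays))
    return 1 - p_all_distinct

for n in (10, 10_000):
    print(f"{n:>6} songs: next pick repeats the current song {1 / n:.4%} of the time, "
          f"some repeat within 10 plays: {p_any_repeat(n, 10):.2%}")
```

Under that assumption, the bigger library makes a repeat of any kind far less likely, not more.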
 
The chance of a repeat is smaller the second time round. One song is picked at random each time, if I understand correctly.

Therefore if you have 10 songs the chance of a repeat is one in ten; with 10,000 songs it's one in ten thousand.
 
That's what you'd expect, though; I'm sure it's not actually the case and can be explained mathematically.

I.e., if I toss a coin 10 times, I'd expect 5 heads and 5 tails. However if I tossed it 1,000 times I'd expect 500 heads but might get 510; toss it 10,000 times and I might get 5,200 heads, etc. (Law of Large Numbers).
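A quick simulation of that coin example (just a sketch, with the seed fixed so the run is reproducible) shows what actually happens as the number of tosses grows: the raw head count can drift further from exactly n/2, but the proportion hugs 0.5 more and more tightly.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

for n in (10, 1_000, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>9} tosses: {heads} heads, "
          f"proportion = {heads / n:.4f}, "
          f"gap from n/2 = {abs(heads - n / 2):.0f}")
```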
 
No. That explanation assumes that, mathematically, it will happen. In reality, it might not.

Whether it's repeated or played for the first time, the chances are still 1 in 10 and 1 in 10,000.
 
Okay, that's chances... is there a way to quantify the reality, though? I.e. adjust for the larger sample size.
 
The 'reality' is that the chances are 1/10 and 1/10000. Sample size is irrelevant to how the calculation is made in this case (although obviously the numbers used will change).
 
Damn. I'm *sure* sample size would affect it somehow....

I know shuffle sucks because (pseudo)random number generators are generally quite poor in the short term, but I honestly thought this suckiness was made worse by a larger population size.
 
Shuffled songs aren't random. If you listen to the same set of songs on shuffle day after day, you'll notice a pattern.

No, you won't.

It's a seed-based random number generator and, as far as we're concerned, it is effectively random. What you notice is a quirk of psychology where any repeated song sequences (e.g. hearing Beat It after Thriller more than once in a week) "pop out" for us as noticeable events.
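To illustrate the seed point, a minimal Python sketch (the track names are made up): given the same seed, a "random" shuffle reproduces exactly the same order every time; change the seed and you get a different order. The output only looks random.

```python
import random

playlist = [f"track_{i:02d}" for i in range(1, 11)]  # hypothetical 10-track playlist

for seed in (42, 42, 7):
    rng = random.Random(seed)    # seed-based pseudorandom generator
    order = playlist[:]          # copy so the original list is untouched
    rng.shuffle(order)
    print(f"seed {seed}: {order[:4]} ...")
```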
 
It depends on whether the next song is chosen based on the previous song or not. From what I understand of how Winamp does it, it effectively generates a random number that it has not previously had (until they have all been used, at which point it allows all possibilities again and starts over). This means a repeat of a song can only occur if you break the order it's playing (and going to play).

If the picks aren't dependent on the previous/current song, I think the best way to work it out is the number of times that song has been chosen divided by the total number of songs (there's good maths in this and it's been ages since A-level statistics).
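That no-repeats-until-everything-has-played behaviour is easy to sketch (this is just an illustration of the description above, not Winamp's actual code): shuffle the full list, hand tracks out one by one, and reshuffle only when the pool is empty. A repeat can then only happen across the boundary between two passes.

```python
import random

class NoRepeatShuffle:
    """Sketch of the behaviour described above (not Winamp's actual implementation):
    hand out tracks in random order, never repeating one until every track
    has been played, then reshuffle the full list and start again."""

    def __init__(self, tracks, seed=None):
        self.tracks = list(tracks)
        self.rng = random.Random(seed)
        self.pool = []

    def next_track(self):
        if not self.pool:                  # pool exhausted -> reshuffle everything
            self.pool = self.tracks[:]
            self.rng.shuffle(self.pool)
        return self.pool.pop()

player = NoRepeatShuffle([f"song_{i}" for i in range(10)], seed=1)
plays = [player.next_track() for _ in range(20)]
print(plays[:10])   # first pass: each of the 10 songs exactly once
print(plays[10:])   # second pass: same 10 songs, freshly shuffled
```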
 
I suspect that even if the 'randomisation' is flawed with large numbers, the impact will be so negligible as to make very little difference. Certainly far outweighed by the benefit of having a large number of tracks.

Imagine some bloke quickly counts you out 10 pound coins and says "there's 10 quid mate".
Now imagine he shovels over thousands of pound coins and goes "there's 10 grand mate". There may be a higher accuracy in his counting in the first instance, but you're still better off in the second even if you get 'short changed'
 
That's what you'd expect, though; I'm sure it's not actually the case and can be explained mathematically.

I.e., if I toss a coin 10 times, I'd expect 5 heads and 5 tails. However if I tossed it 1,000 times I'd expect 500 heads but might get 510; toss it 10,000 times and I might get 5,200 heads, etc. (Law of Large Numbers).

That's not correct. In fact that's the exact OPPOSITE of the Law of Large Numbers:

The law of large numbers (LLN) is a theorem in probability that describes the long-term stability of a random variable. Given a sample of independent and identically distributed random variables with a finite expected value, the average of these observations will eventually approach and stay close to the expected value.

The more times you toss, the closer you will get to 50% heads and 50% tails...

...unless the coin is biased towards one or the other (manufacturing defect or anything else to do with chaos, such as the way you throw, wind, surface it lands on, etc etc), in which case the chance isn't 50/50 anyway.
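To put a rough number on "eventually approach": for a fair coin the head count is binomial with mean n/2 and standard deviation √n/2, so the typical spread of the raw count grows with n while the spread of the proportion shrinks like 1/(2√n). That's the Law of Large Numbers at work. A small sketch:

```python
from math import sqrt

# Fair coin: number of heads in n tosses ~ Binomial(n, 0.5)
for n in (10, 1_000, 10_000, 1_000_000):
    sd_count = sqrt(n) / 2       # typical spread of the raw head count
    sd_prop = sd_count / n       # typical spread of the heads proportion
    print(f"n = {n:>9}: count spread ~ +/-{sd_count:7.1f} heads, "
          f"proportion spread ~ +/-{sd_prop:.5f}")
```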
 
I would never claim to be a maths genius by any definition, but little light bulbs are flashing from the deep distant past of A-levels, along the lines of:

Need to define the question more clearly.

What is the probability of the 1st song being the same as the 3rd song, but the 2nd song being different, out of 3 plays of a 10-song playlist, where the song to be played is randomly selected each time from the full list of 10 songs with no account taken of previous songs played?

In this case
chance the first song is song 'X' is 1/10
chance the second song is not 'X' is 9/10
chance the third song is 'X' is 1/10

Chance of the situation occurring as listed in the definition would therefore be 1/10 * 9/10 * 1/10 = 9/1000.
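A quick Monte-Carlo sanity check of that 9/1000 figure (a sketch, assuming three independent uniform picks from the 10 songs, with 'X' taken to be one particular song, here track 0):

```python
import random

random.seed(0)
trials = 1_000_000
hits = 0
for _ in range(trials):
    plays = [random.randrange(10) for _ in range(3)]       # 3 uniform picks from 10 songs
    if plays[0] == 0 and plays[1] != 0 and plays[2] == 0:  # X, then not-X, then X again
        hits += 1
print(f"simulated: {hits / trials:.5f}   exact: {1/10 * 9/10 * 1/10:.5f}")
```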

Saying that, I also remember my brain switching off during A-level maths due to the boredom factor........[SHRUG].

RB
 