As a scrubby CS student who is replying to this mainly as a terrible alternative to revision: I would probably remove the ranf() calls and pass those values in as arguments. Then you could test against whatever list of data you want, with a known list of expected outputs, and not have to do any crazy statistics against the output of that function.
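To illustrate the suggestion, here is a minimal sketch of that refactor in Python. The function names (`scale_sample`, `noisy_value`) and the mapping itself are made up for the example; the point is only that the random source is injected as a parameter, so a test can pass a deterministic stand-in:

```python
import random

def scale_sample(u):
    """Map a uniform sample in [0, 1) to [-1, 1). Hypothetical example logic."""
    return 2.0 * u - 1.0

def noisy_value(rng=random.random):
    # Production call site: the random source is injected via the rng
    # argument instead of being hard-coded inside the function.
    return scale_sample(rng())

# In a test, pass a deterministic callable instead of random.random:
assert noisy_value(lambda: 0.5) == 0.0
assert noisy_value(lambda: 0.0) == -1.0
```

The same idea works in C by passing a function pointer (or the pre-drawn values themselves) into the function under test.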
I appreciate the input, but I don't think this approach would actually serve as a valid test. I know I can create unit tests for any sub-function, replacing the call to ranf() with known values to test those sub-functions.
However, what I really want to test is whether the function as a whole returns correct results, i.e. normally distributed random numbers. The only way to test that is to run statistical tests over large sample sizes.
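For what it's worth, a statistical test of this kind can be fairly simple. The sketch below (Python, using `random.gauss` as a stand-in for the generator under test) draws a large sample and checks that the sample mean and variance are close to the expected 0 and 1, with tolerances based on their standard errors. A chi-square or Kolmogorov–Smirnov test would be stricter, but this shows the shape of the approach:

```python
import math
import random

def sample_normal(n, gauss=random.gauss):
    # gauss is a stand-in for the generator under test.
    return [gauss(0.0, 1.0) for _ in range(n)]

random.seed(12345)          # fixed seed so the test itself is reproducible
xs = sample_normal(100_000)
n = len(xs)

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / (n - 1)

# Standard error of the mean is 1/sqrt(n); allow ~4 sigma of slack.
assert abs(mean) < 4.0 / math.sqrt(n)
# Sample variance should be close to 1 for a standard normal.
assert abs(var - 1.0) < 0.05
```

Note the trade-off: the tolerances must be loose enough that a correct generator passes essentially every time, yet tight enough to catch a genuinely broken distribution.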
You can remove the randomness by replacing calls to rand()/ranf() etc., or use a fixed random seed so that the same values are returned. However, you then fail to test the full range of values the function can return.
E.g., it may happen that, with the fixed seed and the extent of your testing, everything works as expected purely by chance. With a different seed (or after many additional calls to rand()) the function could give an erroneous result.
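The seed point above is easy to demonstrate in Python (the `draws` helper is made up for the example): the same seed reproduces an identical sequence, which is what makes seeded tests deterministic, but also means they only ever exercise that one path through the generator's output.

```python
import random

def draws(seed, n=5):
    rng = random.Random(seed)   # independent generator with a fixed seed
    return [rng.random() for _ in range(n)]

# Same seed -> identical sequence, so a seeded test is reproducible...
assert draws(42) == draws(42)
# ...but a different seed produces entirely different values, which the
# seeded test never saw.
assert draws(42) != draws(43)
```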