The result is what it says on the tin - the average (over a series of runs) of the time the browser takes to complete the benchmark suite. Naturally, JS takes longer to execute on a slower machine, so you'd see a higher time - that's why you can't directly compare the raw times between machines.
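To make the averaging concrete, here's a minimal sketch (the run times are hypothetical numbers, not real benchmark results):

```python
# Average benchmark completion time across several full-suite runs.
# The values below are made-up examples; a real harness would record
# one wall-clock time per complete run of the suite.
run_times_ms = [1423.0, 1398.5, 1410.2]

average_ms = sum(run_times_ms) / len(run_times_ms)
print(f"average: {average_ms:.1f} ms")
```

Lower is better, and the same machine must be used for any numbers you want to compare directly.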
There hasn't been a merge from the TraceMonkey branch since beta 1 was cut, so you're not really gaining much (in terms of JS performance) over a more stable beta.
The setting I mentioned in #2 turns on the JIT bits of the JavaScript engine for content, i.e. web pages. The other option, for chrome (in the browser-UI sense, not Google's browser), does the same for the browser's own JS (i.e. bits of the browser itself, extensions, etc.); it's a bit less stable and unlikely to be enabled in the released 3.1.
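If memory serves (treat the exact pref names as an assumption), these correspond to the following about:config preferences, which can also be set in a user.js file:

```
// Assumed Firefox 3.1 beta pref names - verify in about:config before relying on them.
user_pref("javascript.options.jit.content", true); // JIT for web-page JS (the setting from #2)
user_pref("javascript.options.jit.chrome", true);  // JIT for browser/extension JS (less stable)
```

Toggling them in about:config takes effect without editing any files, which is the easier way to experiment.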
Edit: Just updated the graph to include the last WebKit nightly too.