Today, there are API limitations in DX9 that make many techniques impractical from a performance or efficiency standpoint. For example, if drawing lots of individual blades of grass is inefficient due to API or hardware behavior, the developer has to make trade-offs, either reducing the quality of the grass or reducing the quality of something else, because drawing that grass would consume more GPU (or CPU) resources than is practical. With a new API and architecture, things that were previously impractical for speed reasons become practical, allowing higher levels of image quality and more realistic scenes.
Speed-wise, there are lots of great features in DX10 that will make things more efficient. Pervasive instancing and features like geometry shaders let developers refactor their graphics algorithms to move the workload entirely onto the GPU, or use new API functions to do things on the GPU that simply weren't possible before. Those "speed" features can all result in improved image quality, and I expect developers will be able to take advantage of some of those benefits early, the result being richer, more detailed, and more alive worlds. Of course, DX10 also has great features aimed directly at image quality, both in terms of API-visible functionality like geometry shaders and more consistent, better-specified behavior for things like texture filtering, antialiasing, and transparency, which should benefit first-generation DX10 games as well.
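To make the instancing point concrete, here is a minimal Direct3D 10 sketch of the grass case described above: one blade mesh plus a per-instance buffer, submitted with a single DrawIndexedInstanced call rather than one draw call per blade. It assumes an already-initialized ID3D10Device, bound shaders whose input layout declares the second vertex stream as D3D10_INPUT_PER_INSTANCE_DATA, and illustrative buffer names and strides; it is not taken from any particular engine.

```cpp
// Sketch: drawing many grass blades with one instanced draw call in Direct3D 10.
// Assumes device, buffers, input layout, and shaders are already created
// elsewhere; the strides and buffer contents below are illustrative.
#include <d3d10.h>

void DrawGrassInstanced(ID3D10Device* device,
                        ID3D10Buffer* bladeVB,     // per-vertex data for a single blade mesh
                        ID3D10Buffer* instanceVB,  // per-instance data (e.g. world matrix per blade)
                        ID3D10Buffer* bladeIB,     // index buffer for the blade mesh
                        UINT indicesPerBlade,
                        UINT bladeCount)
{
    // Bind the blade mesh in slot 0 and the per-instance stream in slot 1.
    ID3D10Buffer* buffers[2] = { bladeVB, instanceVB };
    UINT strides[2] = { sizeof(float) * 8,    // e.g. position + normal + uv per vertex
                        sizeof(float) * 16 }; // e.g. a 4x4 world matrix per instance
    UINT offsets[2] = { 0, 0 };
    device->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    device->IASetIndexBuffer(bladeIB, DXGI_FORMAT_R32_UINT, 0);
    device->IASetPrimitiveTopology(D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // One call submits every blade; the vertex shader reads the per-instance
    // data from slot 1 (marked D3D10_INPUT_PER_INSTANCE_DATA in the input layout).
    device->DrawIndexedInstanced(indicesPerBlade, bladeCount, 0, 0, 0);
}
```

The contrast with the DX9-era approach is the point: issuing a separate DrawIndexedPrimitive call (or even a per-batch instanced draw with more setup) for each clump of grass piles CPU and driver overhead onto every blade, which is exactly the kind of cost that forces the quality trade-offs described above.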