The 1000MB/s figure is fake -- it is the result of the array controller's 512MB cache and the queue depth of 10; essentially ATTO is issuing 10 IOs in parallel, and at the small block sizes the total amount of data is small enough to fit entirely in the P410's cache, so it appears super fast. That is why the larger block sizes at the bottom are all fairly stable at around 120MB/s. The higher numbers at smaller block sizes represent what real-world random usage will feel like (optimised and cached by the controller), but copying a bunch of giant MKVs will behave like the big block sizes at the bottom. You have a much gruntier array controller, hence the much higher speeds you are seeing at small block sizes.
What are the speeds at the larger block sizes? ATTO isn't the most rigorous of tools; to get a true feel for speed, tests need to be much larger so that they overwhelm any cache, giving a true representation of the underlying storage. In other words, don't rely on the high speeds at small block sizes -- those writes aren't even making it to disk during the test (they do eventually, but the cache masks the real speed).
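You can see the cache-masking effect for yourself with a quick sketch like the one below (this is not how ATTO works internally, and it only exercises the OS cache rather than the controller cache, but the principle is the same: buffered writes report inflated speeds, while forcing each block to storage with `fsync` gets you much closer to the real number). The file size and block size here are arbitrary picks for illustration:

```python
import os
import time
import tempfile

def write_speed(path, total_mb, block_kb, sync=False):
    """Write total_mb of zeros in block_kb chunks; return apparent MB/s.

    With sync=False the writes land in the cache and the number is
    inflated; with sync=True each block is flushed and fsync'd, so the
    result is much closer to the real speed of the underlying storage.
    """
    block = b"\0" * (block_kb * 1024)
    count = (total_mb * 1024) // block_kb
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(block)
            if sync:
                f.flush()
                os.fsync(f.fileno())  # force the block past the cache
    elapsed = time.perf_counter() - start
    return total_mb / elapsed

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "bench.bin")
    cached = write_speed(path, 64, 64)             # cache inflates this
    synced = write_speed(path, 64, 64, sync=True)  # closer to disk speed
    print(f"buffered: {cached:.0f} MB/s, fsync'd: {synced:.0f} MB/s")
```

On most machines the buffered figure will be several times the fsync'd one, which is exactly the gap between the pretty small-block numbers at the top of an ATTO run and what a long sequential copy actually sustains.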
So... just so I finally understand why you are using a VHDX -- this is a VM? All this time I thought we were talking about a physical box.