You've got a problem somewhere if you're taking that long to back up 150GB!
Here's what we do to back up circa 20TB of VMs:
Veeam "controller" - running as a VM with 2 vCPUs and 16GB RAM
Veeam proxy - we did have multiple VM proxies but we found them hideously destabilising! We've reverted to a single mega-box with two quad-core Xeons and plenty of RAM, which gets the job done.
Veeam repository - this is a 3rd server which has local disk and a couple of iSCSI LUNs on a NetApp filer connected over 10GbE.
We get a full backup (an actual, proper full backup, not a synthetic full) done on a Friday night - it takes until late on Saturday to complete, and the final size on disk is about 7TB after deduplication and compression.
Once that is done, we have a 96-slot LTO6 library which scoops up the .vbk files to tape. We keep two weeks of fulls on disk, which covers the most common restore requests.
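If it helps anyone size their own window, the back-of-envelope sums on our figures above look something like this (rough decimal maths, purely illustrative):

```python
# Rough back-of-envelope maths for a weekly full, using the figures above.
SOURCE_TB = 20          # total VM data read during the weekly full
ON_DISK_TB = 7          # .vbk size after dedupe + compression
WINDOW_HOURS = 24       # Friday night to late Saturday, roughly

# Effective read rate the job has to sustain against the source data
throughput_mb_s = (SOURCE_TB * 1_000_000) / (WINDOW_HOURS * 3600)
print(f"Sustained read rate needed: ~{throughput_mb_s:.0f} MB/s")

# LTO6 is roughly 2.5TB native per cartridge, so the weekly .vbk copy is modest
LTO6_NATIVE_TB = 2.5
print(f"Tapes per weekly full: ~{ON_DISK_TB / LTO6_NATIVE_TB:.1f} cartridges (native)")
```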
We then run nightly incrementals. We initially used reverse incrementals because that suited a particular scenario we were dealing with at the time, but on the whole it was detrimental: reverse incrementals rebuild the full .vbk every night, so your daily tape copies end up the size of the full. Manageable but not ideal - so we reverted to forward incrementals, and our daily tape regime works well with that.
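To illustrate the point with made-up incremental sizes (the 70GB nightly figure below is hypothetical, not our real job stats): with reverse incrementals the newest restore point is always the full .vbk, so that's what a nightly tape copy picks up; with forward incrementals the nightly file is just a small .vib of the changes.

```python
# Sketch of nightly-to-tape volume under the two modes (illustrative numbers only).
FULL_TB = 7.0            # size of the full backup file on disk
NIGHTLY_CHANGE_TB = 0.07 # hypothetical nightly incremental, ~70GB
NIGHTS = 6               # incremental nights between weekly fulls

# Reverse incremental: the .vbk is rebuilt every night, so each nightly
# copy of the newest restore point to tape is full-sized.
reverse_to_tape = NIGHTS * FULL_TB

# Forward incremental: each night only the .vib of changed blocks goes to tape.
forward_to_tape = NIGHTS * NIGHTLY_CHANGE_TB

print(f"Reverse incrementals: ~{reverse_to_tape:.1f} TB of nightly tape per week")
print(f"Forward incrementals: ~{forward_to_tape:.1f} TB of nightly tape per week")
```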
Our VMware change rate isn't that high really - our incrementals are tiny (less than 500GB a week), so even when you factor in the odd hiccup with CBT (which happens a lot more often than I'd like...) we're easily hitting our window. I'm certain that if we ran any sort of virtualised file server that number would be drastically higher. For comparison, our 30TB of file server data has a change rate of circa 200GB a day, and that's hit by maybe 500 users at most. Depending on what your users are doing, 150GB a day doesn't seem too high to me.
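If you want to put a number on your own change rate, it's a two-line sum (our file server figures from above as the example - swap your own in):

```python
# Daily change rate as a fraction of the dataset, using our file server as the example.
DATASET_TB = 30
DAILY_CHANGE_GB = 200
USERS = 500

pct_per_day = (DAILY_CHANGE_GB / 1000) / DATASET_TB * 100
per_user_mb = DAILY_CHANGE_GB * 1000 / USERS

print(f"Change rate: ~{pct_per_day:.2f}% of the dataset per day")
print(f"Per user:    ~{per_user_mb:.0f} MB per user per day")
```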
Bear in mind that we go to tape during the day because the disk backups finish overnight, within our actual backup window. Absolutely everything is disk-to-disk-to-tape for this reason. We're shipping roughly 50TB to tape week in, week out.
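A rough way to sanity-check whether daytime tape runs will fit, assuming LTO6's ~160MB/s native drive speed (compressible data streams faster, and this ignores load/seek time):

```python
# How many drive-hours a week of tape-out actually needs (rough numbers).
WEEKLY_TO_TAPE_TB = 50
LTO6_NATIVE_MB_S = 160   # native speed; compressible data streams faster

drive_hours = (WEEKLY_TO_TAPE_TB * 1_000_000) / LTO6_NATIVE_MB_S / 3600
print(f"~{drive_hours:.0f} drive-hours per week at native speed")
# Spread over 5 working days that's roughly 17 hours a day on a single drive,
# so you either keep a drive streaming all day or parallelise across drives.
```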
Have you done anything to tune that iSCSI connection? RSS makes a huge difference to performance (IME), and at 1GbE both Jumbo Frames and Flow Control need to be set correctly to give you the best possible throughput.
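For context, a rough sketch of what a single 1GbE link can deliver in theory - real-world iSCSI will come in somewhat under this, but it shows how far off the pace a badly tuned link must be:

```python
# What a single, reasonably tuned 1GbE link can move in theory.
LINK_GBIT = 1.0
wire_mb_s = LINK_GBIT * 1000 / 8       # ~125 MB/s before protocol overhead
backup_gb = 150                        # the nightly volume mentioned in this thread

minutes = backup_gb * 1000 / wire_mb_s / 60
print(f"Theoretical ceiling: ~{wire_mb_s:.0f} MB/s")
print(f"150GB at that rate:  ~{minutes:.0f} minutes")
# If the nightly job takes many hours, the bottleneck is more likely
# RSS / Jumbo Frames / Flow Control (or the storage behind it) than the wire itself.
```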
For the benefit of others posting in this thread, I have about 2.5TB of Veeam backups coming over the WAN over a variety of link speeds from 10M to 100M, but they are forever-incremental and a last resort, really (the time to restore would be ridiculous). We get around that by having multiple restore points on local replicas, which works really well. To be quite honest, in a smoking-crater situation we'd be restoring those VMs to local compute here and moving everything possible client-side to Citrix while we responded to whatever happened.

If you don't have that luxury, I'd be thinking long and hard about restoration scenarios and what the business is expecting in terms of RPO and RTO. A week of downtime is a realistic proposition for such a scenario (without a backup plan), and you might as well put your efforts into finding a new job at that point.
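For anyone weighing up a similar setup, the best-case restore-over-WAN sums speak for themselves (link fully saturated, no protocol overhead - reality is worse):

```python
# Best-case time to pull a full restore over the WAN.
RESTORE_TB = 2.5
for link_mbit in (10, 100):
    seconds = RESTORE_TB * 1_000_000 * 8 / link_mbit
    print(f"{link_mbit:>3} Mbit/s: ~{seconds / 86400:.1f} days")
```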