I have sympathy for the people who have to clean this mess up, but it's 2017, and trying to dodge responsibility for keeping computer systems secure by attempting to hold an attacker responsible is not an option. If any good comes out of this, it will be to drive out of the industry all the charlatans who claim that whatever product they are selling is a silver bullet (I am aware this is incredibly wishful thinking), and to ensure that information security is properly resourced and funded within an organisation as an integral part of every project and system deployment.
At the end of the day, though, the attackers are almost always one step ahead. There is little excuse for not staying on top of security updates and even less for sloppy practices, but no matter how secure you try to be, it only goes so far. Not that that is any excuse for not making the effort.
With the following in place:
1. Correct AV access protection rules to prevent the malware executing on the local machine
2. Correct and restrictive permissions on network shares to prevent it recursing through folders
3. Correct snapshot backups so that data can be restored within minutes
#1 is available from most corporate AV vendors that I know of, certainly McAfee VSE/ePO, which is widespread in the industry.
#2 is simple IT 101 (a rough sketch of one such check is below).
#3 is moot - but I can't believe they don't have their data on a SAN with snapshots taken.
This should never have happened, or if it did, the effects should have been minimised and mitigated, and the outage minimal.
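On #2, here is a minimal sketch of the kind of check I mean, assuming the share is mounted on a Unix host at a path you supply. The mount point is a placeholder, and this only inspects POSIX permission bits (so it applies to NFS/Samba mounts, not Windows ACLs):

#!/usr/bin/env python3
# Sketch: flag world-writable files and directories under a mounted network share.
# Assumes the share is already mounted locally; /mnt/share is a hypothetical default.
import os
import stat
import sys

def find_world_writable(root):
    """Yield paths under `root` whose permission bits allow write by 'other'."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry; skip rather than abort the scan
            if mode & stat.S_IWOTH:
                yield path

if __name__ == "__main__":
    share_root = sys.argv[1] if len(sys.argv) > 1 else "/mnt/share"  # placeholder mount point
    for p in find_world_writable(share_root):
        print("world-writable:", p)

It's only an audit of the mode bits, not a substitute for proper share ACLs, but it's the ten-minute sort of check that catches the "everyone can write" shares that let this kind of thing spread.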
While a decent backup system will help, none of that will stop a 0-day infection using a new exploit that circumvents traditional execution restrictions. And even with a good backup/snapshot system, you may need to delay restoration to do a proper forensic investigation, to make sure you've fully cleaned the infection out and aren't going to have to deal with other nasty surprises.