This has always confused me. Why spend a whole day with zero service?
They should either take half their servers offline and patch them, swap them with the other half, and then patch that half. Or (and this is the correct way) they should already have enough redundancy in the system that, in the event of a disaster, they have failover sites sitting idle. Patch the failover sites first, then take the live sites offline and fail over to the backups. Once everything is tested and working, patch the (now idle) original live sites and either leave them as the new failover sites, or perform another failover to put them back in the live role with the original backup sites resuming the backup role.
That way you'd only have, say, a 30-minute outage per failover at most while the DNS entries update.
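That patch-the-standby-then-swap flow can be sketched in a few lines. Everything here (`patch`, `health_check`, `point_dns`, the two-site setup) is a hypothetical stand-in for real tooling, not any actual API:

```python
# Sketch of one blue-green patch-and-failover cycle.
# All functions are hypothetical stand-ins for real patching/DNS tooling.

def patch(site):
    """Pretend to apply patches to a site."""
    site["patched"] = True

def health_check(site):
    """Pretend to verify the site works after patching."""
    return site["patched"]

def point_dns(dns, site_name):
    """Pretend to repoint DNS (real TTLs make this the ~30-minute window)."""
    dns["live"] = site_name

def patch_with_failover(dns, sites):
    old_live = dns["live"]
    standby = "b" if old_live == "a" else "a"
    # 1. Patch the idle failover site first -- live traffic is untouched.
    patch(sites[standby])
    # 2. Only fail over once the patched standby passes its checks.
    if not health_check(sites[standby]):
        raise RuntimeError("standby failed checks; live site left alone")
    point_dns(dns, standby)
    # 3. Patch the old live site; it becomes the new standby.
    patch(sites[old_live])

dns = {"live": "a"}
sites = {"a": {"patched": False}, "b": {"patched": False}}
patch_with_failover(dns, sites)
```

The key property is step 2: if the patched standby fails its checks, the swap never happens and users keep hitting the unpatched but working live site.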