Hopefully people have some experience with this; if not, we can all lament the problems together.
Let's say you have an office with 60 employees. Your CRM system is a web app hosted in Amazon's cloud, but all file attachments and static content are stored in S3. You have a department that receives large artwork files from external collaborators, and they use one of the many file transfer services that are effectively a nice interface to an S3 bucket.
In this world where everything is HTTPS to an Amazon endpoint, how are you supposed to maintain quality of service for your line-of-business applications? Maybe you can rely on the DNS request to identify each application, but with S3 this isn't always the case. Keeping track of which services are hosted at which IP address is a lot of work, and it often means figuring it out manually, since most vendors won't tell you (that's the nature of running on public cloud), and again it doesn't really help with S3, where unrelated applications share the same endpoints.
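To be fair, AWS does publish its public IP ranges as JSON, with S3 broken out as its own service per region, so you can at least feed those prefixes into a firewall object group. It only solves half the problem though, since every S3-backed app (the CRM attachments and the artwork transfers alike) lands on the same prefixes. A quick Python sketch of pulling the list (the region is just an example):

```python
import json
import urllib.request

# AWS publishes its public IP ranges at a well-known URL; the "service"
# field lets you pick out the S3 prefixes for a given region.
IP_RANGES_URL = "https://ip-ranges.amazonaws.com/ip-ranges.json"

with urllib.request.urlopen(IP_RANGES_URL) as resp:
    data = json.load(resp)

# eu-west-1 is just an example region -- substitute your own.
s3_prefixes = sorted(
    p["ip_prefix"]
    for p in data["prefixes"]
    if p["service"] == "S3" and p["region"] == "eu-west-1"
)

for prefix in s3_prefixes:
    print(prefix)
```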
Can newer firewalls work out that a particular flow is a bulk download based on the volume of data transferred over a given time window, and throttle it accordingly?
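Conceptually that kind of detection is just volume-over-time flow accounting. Something like this toy sketch shows the idea (the thresholds and flow keys are made up purely for illustration; a real firewall would do this in the dataplane, not in Python):

```python
import time
from collections import defaultdict

# Illustrative heuristic only: a flow that moves more than BULK_BYTES
# within WINDOW_SECS gets reclassified as a bulk download and could be
# dropped into a throttled/low-priority queue.
WINDOW_SECS = 5
BULK_BYTES = 50 * 1024 * 1024  # 50 MB in 5 s ~= 80 Mbit/s sustained

class FlowTracker:
    def __init__(self):
        # flow key -> list of (timestamp, byte_count) samples
        self.samples = defaultdict(list)

    def record(self, flow_key, nbytes, now=None):
        now = now if now is not None else time.monotonic()
        window = self.samples[flow_key]
        window.append((now, nbytes))
        # Drop samples that have aged out of the window.
        cutoff = now - WINDOW_SECS
        self.samples[flow_key] = [(t, b) for t, b in window if t >= cutoff]

    def is_bulk(self, flow_key):
        return sum(b for _, b in self.samples[flow_key]) >= BULK_BYTES

# Hypothetical flow: internal client -> S3-ish endpoint on 443.
tracker = FlowTracker()
flow = ("10.0.0.12", "52.218.0.15", 443)
tracker.record(flow, 60 * 1024 * 1024)
if tracker.is_bulk(flow):
    print("flow looks like a bulk download -> move to low-priority queue")
```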
How are people dealing with this, or is it a "just buy a bigger pipe" type of scenario?
