QoS in the cloudy world

Hopefully people have some experience with this, if not we can all lament the problems together :D

Let's say you have an office with 60 employees. Your CRM system is a web app hosted in Amazon's cloud - but all file attachments and static content are stored in S3. You have a department that receives large artwork files from external collaborators, and they use one of the many file transfer services that is effectively a nice interface to an S3 bucket.

In this world where everything is HTTPS to an Amazon endpoint, how are you supposed to maintain quality of service for your line-of-business applications? Maybe you can rely on the DNS request mapping cleanly to each application, but with S3 that isn't always the case. Keeping track of which services live at which IP addresses is a lot of work, and usually means figuring it out manually, since most providers won't publish that given the nature of managing public cloud services - and again, it doesn't really apply to S3.

Can newer firewalls work out that a particular flow is a download based on the volume of data being transferred in a certain time, and throttle it accordingly?

How are people dealing with this, or is it a "just buy a bigger pipe" type of scenario?
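For what it's worth, the heuristic I had in mind looks something like this - a rough Python sketch, with made-up thresholds, of flagging a flow as a bulk download once its rate over a sliding window passes a limit:

```python
import time
from collections import defaultdict

# Made-up thresholds - real firewalls would tune these per deployment.
BULK_RATE_BPS = 5_000_000   # sustained 5 Mbit/s marks a flow as a bulk download
WINDOW_SECS = 10            # measured over a 10-second sliding window

class FlowTracker:
    """Flag a flow as 'bulk' once its average rate over the window exceeds
    BULK_RATE_BPS - roughly the heuristic a firewall could use to spot big
    downloads without decrypting anything."""

    def __init__(self):
        self.samples = defaultdict(list)  # flow_id -> [(timestamp, byte_count)]

    def record(self, flow_id, nbytes, now=None):
        now = time.monotonic() if now is None else now
        cutoff = now - WINDOW_SECS
        kept = [(t, b) for t, b in self.samples[flow_id] if t >= cutoff]
        kept.append((now, nbytes))
        self.samples[flow_id] = kept

    def is_bulk(self, flow_id):
        total_bits = 8 * sum(b for _, b in self.samples[flow_id])
        return total_bits / WINDOW_SECS > BULK_RATE_BPS

tracker = FlowTracker()
tracker.record("10.0.0.5:443", 10_000_000, now=100.0)  # 10 MB in one window
print(tracker.is_bulk("10.0.0.5:443"))  # 10 MB / 10 s = 8 Mbit/s -> True
```

Once a flow trips the threshold you'd move it into a throttled queue. The obvious false positive is a big page load from the CRM, which is why the window and threshold matter.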
 
Would something akin to a 'cloud access broker' type solution help here?

The theory is that as well as being able to identify what's being accessed - Dropbox, Salesforce, etc. - they can also identify what is being done on said cloud service.

So they recognise actions such as downloading, similar to how some web filtering/NGIPS/NGFW solutions provide granular control of web applications by cataloguing the various actions available to a user on a particular site.

I guess one approach could be identification of the application based on the traffic itself rather than trying to do it by IP addresses.

Not sure if there's any capability there that could be built on?

Will have a poke about tomorrow with a particular CASB solution and see if there's any way to do any sort of QoS/traffic prioritisation type stuff.

That make as much sense written out as it does in my head? :)
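To make the "identify by traffic, not IP" idea concrete, here's a rough Python sketch keyed off the TLS SNI hostname, which travels in the clear (pre-ECH) so it's visible without breaking the encryption. The catalogue entries like crm.example.com are made up:

```python
# Hypothetical catalogue mapping TLS SNI suffixes to applications.
APP_CATALOGUE = {
    ".s3.amazonaws.com": "S3",       # virtual-hosted-style buckets
    "crm.example.com": "CRM",        # assumed hostname for the LOB app
    ".dropbox.com": "Dropbox",
    ".salesforce.com": "Salesforce",
}

def classify_by_sni(sni: str) -> str:
    """Return an application label for a TLS ClientHello SNI value.
    Unlike IP lookups, which all resolve to 'Amazon' for the services
    in this thread, the hostname actually distinguishes the apps."""
    sni = sni.lower()
    for suffix, app in APP_CATALOGUE.items():
        if sni == suffix or sni.endswith(suffix):
            return app
    return "unknown"

print(classify_by_sni("media.s3.amazonaws.com"))  # -> S3
print(classify_by_sni("crm.example.com"))         # -> CRM
```

Caveat: this still can't tell two S3-backed services apart when they share a hostname (e.g. path-style S3 URLs), which is where the full break-and-inspect approach comes in.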
 
Can newer firewalls work out that a particular flow is a download based on the volume of data being transferred in a certain time, and throttle it accordingly?

We've had some Checkpoint firewalls in recently and they rate limit based on source/destination. It'd be nice if our web proxy could do this, but it only offers the option for YouTube, oddly..
 
That make as much sense written out as it does in my head? :)

Maybe ;) I'm more trying to discuss it as a problem that doesn't seem to have a real solution outside of buying direct connections to the services you want to use (lol). I think you're on the right path though.

We've had some Checkpoint firewalls in recently and they rate limit based on source/destination. It'd be nice if our web proxy could do this, but it only offers the option for YouTube, oddly..

I'm aware of rate limiting based on endpoints and also the application templates that firewalls can apply to things, but when everything is Amazon Web Services that's not hugely helpful. Perhaps the fix is going to be something along the lines of each AWS application having a unique tag on the traffic and the priority can be set accordingly?
 
Perhaps the fix is going to be something along the lines of each AWS application having a unique tag on the traffic and the priority can be set accordingly?

True, although setting a x00kb/s limit for AWS would be better than none if it's a per client setting. No-one would hog the pipe in that case when up/downloading the files!

This is the kind of thing that's better done inside a proxy: break the SSL connection and inspect it - see what's being requested and where, and rate limit as appropriate :)
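As a sketch of what the proxy could do once it sees the decrypted request - it gets details SNI never exposes, like the bucket name in a path-style S3 URL. Bucket names and limits here are entirely made up:

```python
# Illustrative per-bucket rate classes in bytes/sec; None = no limit.
RATE_CLASSES = {
    "artwork-dropbox": 300_000,   # hypothetical bucket for the big files
    "crm-attachments": None,      # no limit for the LOB app
}

def limit_for_request(host: str, path: str):
    """Pick a rate limit for a decrypted HTTPS request to S3.
    Handles both path-style (bucket in the path) and virtual-hosted
    style (bucket in the hostname) addressing."""
    if host == "s3.amazonaws.com":             # path-style URL
        bucket = path.lstrip("/").split("/", 1)[0]
    elif host.endswith(".s3.amazonaws.com"):   # virtual-hosted style
        bucket = host.split(".", 1)[0]
    else:
        return None                            # not S3, leave it alone
    return RATE_CLASSES.get(bucket)

print(limit_for_request("s3.amazonaws.com", "/artwork-dropbox/big.psd"))  # -> 300000
```

The point being that post-decryption you're matching on the actual request, not on whatever IP Amazon happened to hand out that day.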
 
Perhaps the fix is going to be something along the lines of each AWS application having a unique tag on the traffic and the priority can be set accordingly?

So the way some devices work is by 'application' identification: even when the traffic is, say, all HTTPS and going to the same cloud service, they can still identify what the traffic is.

As a made-up example, say the BBC website and Facebook were both hosted on the same cloud platform. The system can identify each of them even though they're going to the same cloud servers.

I guess you've got to think of it more as inspecting the traffic to identify what's going on rather than just looking at destination IPs.

Then you just need a way to say BBC traffic has priority over Facebook traffic or the like.
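A toy version of that "BBC outranks Facebook" idea, once the classifier has labelled flows - strict priority here for simplicity; real kit would use weighted fair queuing so the low classes don't starve:

```python
import heapq

# Illustrative priority table: lower number = served first.
APP_PRIORITY = {"BBC": 0, "Facebook": 1}

class PriorityScheduler:
    """Tiny strict-priority egress queue: packets from higher-priority
    apps always leave first; a sequence counter keeps FIFO order
    within each priority class."""

    def __init__(self):
        self._q = []
        self._seq = 0

    def enqueue(self, app, packet):
        prio = APP_PRIORITY.get(app, 2)  # unknown apps get lowest priority
        heapq.heappush(self._q, (prio, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._q)[2]

sched = PriorityScheduler()
sched.enqueue("Facebook", "fb-pkt")
sched.enqueue("BBC", "bbc-pkt")
print(sched.dequeue())  # -> bbc-pkt, even though it arrived second
```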
 
I wasn't aware it was possible to do that unless you MITM the SSL (TLS) and push a new root CA certificate to your devices? Otherwise that could be a good place to start.

Dogers makes a good point though - I could just set a per-client limit for traffic related to S3 and the small attachments won't be affected because the throttle will never kick in before the file has loaded.
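That's essentially a token bucket with a decent burst allowance - sketch below with illustrative numbers. Small attachments fit entirely inside the burst and are never slowed, while a multi-GB artwork transfer exhausts it and drops to the sustained rate:

```python
import time

class TokenBucket:
    """Per-client token bucket: each client gets `rate_bps` bytes/sec with
    a `burst_bytes` allowance. Numbers used below are illustrative, not
    vendor defaults."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, nbytes, now=None):
        now = time.monotonic() if now is None else now
        # refill tokens for the time elapsed, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True    # forward immediately
        return False       # over its share - queue or delay

# e.g. 300 kB/s per client with a 2 MB burst: a 500 kB attachment sails
# through untouched, a huge download gets paced after the first 2 MB.
bucket = TokenBucket(rate_bps=300_000, burst_bytes=2_000_000)
print(bucket.allow(500_000, now=bucket.last))  # small file: True
```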
 
I wasn't aware it was possible to do that unless you MITM the SSL (TLS) and push a new root CA certificate to your devices? Otherwise that could be a good place to start.

Yup, for SSL/TLS traffic you'd need a device that can handle all of that on board, or some other device that can do it for you (I know we used to use VSS-based solutions before our boxes were able to do the inspection themselves).

I imagine that for 'outbound' connection scenarios, such as user browsing, you'd still need to load the device's cert onto the endpoint as a trusted CA.
 
Yup, for SSL/TLS traffic you'd need a device that can handle all of that on board, or some other device that can do it for you (I know we used to use VSS-based solutions before our boxes were able to do the inspection themselves).

I imagine that for 'outbound' connection scenarios, such as user browsing, you'd still need to load the device's cert onto the endpoint as a trusted CA.

Yeah, that's exactly how we do SSL deep packet inspection with our SonicWalls: the SonicWall performs a MITM on the traffic. The end-user device must have the cert generated by the SonicWall installed in order to secure the connection, though.

Once we have that in place, we can analyse, limit and control the traffic as if the SSL wasn't there.

Nate
 
I think this is how most devices will handle things. I've only got real experience with IPS implementations of SSL inspection, so I'm not sure if things like the F5 or VSS appliances do it any differently for user-based/outbound traffic.
 
Certificate pinning is going to break that horribly though, so watch out for it if you're doing that sort of thing on your networks.
 