Tinternet hackers

Okay, in olden days I would write my letter, seal it with wax and my ring (no sniggering please), a ring which was issued to me, and some underling would run off and deliver it for a cabbage. The recipient could check it was genuine and whether it had been tampered with. So a tech version must be possible to sort the wheat from the chaff.

Couldn't humans just have to do a squiggle to access some sites to prove they are human and not bots?

Like I said I have no idea what I am going on about - but I am interested in why the process is so flawed that it can be grifted.

You're missing the point. It doesn't matter what you do; at some point you have to deal with the traffic. That takes time and resources, so even if you simply reject everything, you still have to use bandwidth and processing power to receive the packet, look at it and then discard it. Enough of it will bring a system to its knees.

Say you can securely authenticate every packet (which is processor intensive) so that you can be sure it is from someone you want to talk to. If I send you billions of packets that all have to be checked, your bandwidth is choked, the router that rejects them is brought to a standstill, and you are offline.
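
To put some rough numbers on "you still have to look at every packet", here is a back-of-envelope sketch. The attack size, packet size and per-packet CPU cost are assumptions picked purely for illustration, not figures from this thread:

```python
# Back-of-envelope: how much work a victim does just receiving and
# discarding a flood. All three numbers below are illustrative assumptions.

ATTACK_BITS_PER_SEC = 10e9        # assume a modest 10 Gbps flood
PACKET_SIZE_BYTES = 100           # assume small ~100-byte packets
CPU_COST_PER_PACKET_SEC = 1e-6    # assume 1 microsecond to inspect and drop each packet

packets_per_sec = ATTACK_BITS_PER_SEC / (PACKET_SIZE_BYTES * 8)
print(f"{packets_per_sec:,.0f} packets/second to receive and reject")   # ~12,500,000

# Even at 1 microsecond each, that's 12.5 CPU-seconds of work arriving
# every wall-clock second - more than a single core can ever keep up with,
# and that is before any "secure authentication" of the packets.
print(f"{packets_per_sec * CPU_COST_PER_PACKET_SEC:.1f} CPU-seconds of work per second")
```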

It's been explained several times, and when you keep replying with "but there must be a way to do it" you might as well be saying "but make it magic". All you can do is have bigger, faster hardware and fatter pipes, but then the attacker just needs to generate more traffic to overwhelm the extra resources dealing with it.
 
First, it isn't a hack. It is just flooding somewhere with traffic so that the people who want access can't get it, and the site that wants the genuine requests can't handle the volume to service the real ones.

The ring and seal analogy isn't appropriate to this scenario, because what is happening is that you create your letter however you want... you 'try' to deliver it, but some other guy has a machine churning out 10,000 letters and is ramming the mailbox/letterbox at the same time. It isn't about authenticating, it's about volume.

A lot of places are protected in part by caching services such as Akamai, but that won't stop everything, as it is all based on rules about what they do and don't let through (blacklists/whitelists etc.). A classic example is getting hammered by bots that crawl websites so aggressively that they unwittingly carry out a minor DoS attack (which can actually lead to lawsuits if the bot crawler's creator has been careless).
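
For a sense of what those "rules of what they do and don't let through" look like, here is a minimal sketch. The lists, header names and threshold are invented for illustration and are nothing like Akamai's actual logic:

```python
# Minimal sketch of an allow/deny rule check at the edge. Everything here
# (addresses, agent strings, the budget) is an assumption for illustration.

BLACKLIST_IPS = {"203.0.113.7"}        # known-bad sources (documentation-range example)
WHITELIST_AGENTS = {"GoodBot/1.0"}     # crawlers explicitly trusted
MAX_REQUESTS_PER_MINUTE = 120          # assumed per-IP budget

def allow(ip, user_agent, requests_last_minute):
    if ip in BLACKLIST_IPS:
        return False                   # explicit deny rule
    if user_agent in WHITELIST_AGENTS:
        return True                    # explicit allow rule
    # Everything else is judged on behaviour alone, which is why an
    # aggressive but well-meaning crawler can still cause a minor DoS
    # before it trips the rate budget.
    return requests_last_minute <= MAX_REQUESTS_PER_MINUTE

print(allow("198.51.100.4", "SomeCrawler/2.3", requests_last_minute=90))   # True
print(allow("198.51.100.4", "SomeCrawler/2.3", requests_last_minute=900))  # False
```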
 
Thanks I will take a look.

https://www.akamai.com/uk/en/solutions/products/cloud-security/bot-manager.jsp

Steampunk, I disagree; I am saying it is an otherworldly crap design in the first place. There must be a better way to do tech.

Last word to Einstein:
“We cannot solve our problems with the same level of thinking that created them.”

:p

And that's why no one has done it in the last 20 years? What is this "better way"? You just keep giving the high concept of "it must be better" with no idea of how things work.

Whatever you do involves processing, and that processing is what gets overwhelmed in any attack. There is no way around having to deal with the attack traffic; you can't just "do it better" by not dealing with it. The effort you expend in dealing with it is what makes the attack work.

Even when we have sci-fi advances like quantum computing and AI, those same resources can be used to mount attacks as well as defend against them. You always have to expend resources, and that is what the attack forces you to do, and why it succeeds in knocking sites off the internet. Your resources are used defending against the attack instead of servicing your real visitors/customers.

I don't know how to explain it to you any more simply, but this is why no one has fixed this problem since the internet began.
 
Okay, in olden days I would write my letter, seal it with wax and my ring (no sniggering please), a ring which was issued to me, and some underling would run off and deliver it for a cabbage. The recipient could check it was genuine and whether it had been tampered with. So a tech version must be possible to sort the wheat from the chaff.

Couldn't humans just have to do a squiggle to access some sites to prove they are human and not bots?

Like I said I have no idea what I am going on about - but I am interested in why the process is so flawed that it can be grifted.

The issue happens before your ring is inspected (yes, I wrote that on purpose). If 10,000 underlings descend on the recipient in a 15-minute period then your cabbage request is going to take a long time to be serviced.
 
What is this "tinternet" that you speak of? :confused:

The largest DDoS attack so far was launched recently, on a security analyst's website called KrebsOnSecurity. It was hit with around 620 Gbps of traffic, which is roughly 77.5 GB per second. The website was backed by Akamai, which managed to handle the load expertly, but the site was taken offline shortly afterwards to protect the Akamai network.

https://krebsonsecurity.com/2016/09/krebsonsecurity-hit-with-record-ddos/ said:
There are some indications that this attack was launched with the help of a botnet that has enslaved a large number of hacked so-called “Internet of Things,” (IoT) devices — routers, IP cameras and digital video recorders (DVRs) that are exposed to the Internet and protected with weak or hard-coded passwords.

Crazy times. :(
 
Nope.
A "DOS" attack is nothing clever, it's just using a normal function of a network, like loading a web page or something, but done multiple times in rapid succession.
Spread all those page loads across thousands of compromised computers worldwide and there's no way you could possibly tell the difference between a malicious bit of code loading a page and a normal user loading a page.
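
To see why the two are indistinguishable, it helps to look at what the server actually receives for a single page load: a short block of text like the sketch below (the path, host and header values are illustrative). A script on a compromised machine and a person clicking a link in a browser both put the same kind of bytes on the wire.

```python
# One page load as the server sees it. Whether Chrome sent it because a
# person clicked a link, or a script sent it from a compromised machine,
# the request text looks the same. (All values here are illustrative.)

request_on_the_wire = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n"
    "Accept: text/html\r\n"
    "\r\n"
)
print(request_on_the_wire)
```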

Bit skeptical about that, surely for most 'botnets' it ought to be rather obvious no?

I mean, in order to be effective aren't most of these botnets going to have to hammer the page? In which case you've got a straightforward classification problem where there is a very obvious difference between the normal users and the botnet computers.
 
Bit skeptical about that, surely for most 'botnets' it ought to be rather obvious no?

I mean, in order to be effective aren't most of these botnets going to have to hammer the page? In which case you've got a straightforward classification problem where there is a very obvious difference between the normal users and the botnet computers.

Depends on the size of the botnet really; if you have enough zombies/slaves, each one can request a page just 2-3 times a second. That's not much different from legitimate traffic trying to reach the same thing.

Not to say it's impossible; there are solutions to mitigate DDoS, but they will flag false positives, denying some users access to said pages when they're not part of the attack, and those solutions have a breaking point.

If the attack is big enough to saturate/impact the datacenter where the server is located, you'll likely just get blackholed to prevent the attack impacting the rest of the datacenter.

Preventative methods will get better, but so will the attacks really. Cat and mouse game :)
 
Thanks I will take a look.

https://www.akamai.com/uk/en/solutions/products/cloud-security/bot-manager.jsp

Steampunk, I disagree; I am saying it is an otherworldly crap design in the first place. There must be a better way to do tech.

Last word to Einstein:
“We cannot solve our problems with the same level of thinking that created them.”

:p
Say you run a small shop.

Then one day someone organises a flash mob there: 1,000 people show up, not buying anything, just clogging up your store.

You hire a security guard to check everyone and only let in your regular customers.

When one of your regulars arrives, he still has to wait for your security guard to work through the queue of 1,000 people, saying no to each, before he gets to him.
 
/\ A master of analogy.

I understand it. Let's face it, there are some pro explanations here.

I believe it should be designed better. Steampunk says it can't be, so I am disappointed we didn't solve that this time round, just so I could say I told you so. NVM

The fact that teenage boys can create a web page and code that knocks the services of legitimate businesses offline, and in a revision install a stop button to essentially extort those businesses, is pathognomonic of a fragile masterpiece, the internet.

By the way, it is well worth having a read of the court transcript posted above; the FBI Special Agent noted that one of the 'hackers', or whatever you want to call them, had a balance of $99,000. That is a lot of Red Bull for a teenager.
 
Depends on the size of the botnet really; if you have enough zombies/slaves, each one can request a page just 2-3 times a second. That's not much different from legitimate traffic trying to reach the same thing.

Not to say it's impossible; there are solutions to mitigate DDoS, but they will flag false positives, denying some users access to said pages when they're not part of the attack, and those solutions have a breaking point.

Even if they are trying at 2-3 times a second, they'll likely be plenty different from legitimate traffic to allow for classification. I don't think type 1 errors will be an issue unless the botnets are designed to actually pose as real traffic, in which case they'll be massively less effective at carrying out the attacks in the first place.
 
Even if they are trying at 2-3 times a second, they'll likely be plenty different from legitimate traffic to allow for classification

The actual requests made automatically by a script are identical to ones made purposely by people. There's no way to differentiate them.
You can look for patterns; in your example you could block any IP sending more than 3 requests a second, but that's not actually identifying which requests come from scripts, that's just playing the probability that those requests are automated.
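
As a concrete sketch of that kind of pattern rule, here is a naive per-IP limiter along the lines of "no more than 3 requests a second". The threshold and structure are assumptions for illustration, and note that it only blocks behaviour; it never proves a request came from a script.

```python
# Naive sliding-window rate limiter: drop any IP making more than
# MAX_REQUESTS requests in the last WINDOW_SECONDS. Threshold values are
# assumptions; a keen human refreshing a page trips it just like a bot does.

import time
from collections import defaultdict, deque

MAX_REQUESTS = 3
WINDOW_SECONDS = 1.0

recent = defaultdict(deque)   # ip -> timestamps of its recent requests

def allow(ip, now=None):
    now = time.monotonic() if now is None else now
    q = recent[ip]
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()           # forget requests outside the window
    if len(q) >= MAX_REQUESTS:
        return False          # over budget: probably automated, but only probably
    q.append(now)
    return True

# Four rapid requests from one address: the fourth gets refused.
for i in range(4):
    print(allow("198.51.100.9", now=i * 0.1))   # True, True, True, False
```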

You don't even need a script to do it either. How many times have you seen a link posted on here and everyone clicks it and takes the site offline? (Maybe not that common on here anymore, but certainly on larger sites like reddit or HotUKDeals.)

A DoS attack uses a service in the way it was designed to be used. There's no way you can implement something to defend against that; otherwise your service no longer does what it is supposed to do and is entirely pointless.
 
Even if they are trying at 2-3 times a second, they'll likely be plenty different from legitimate traffic to allow for classification. I don't think type 1 errors will be an issue unless the botnets are designed to actually pose as real traffic, in which case they'll be massively less effective at carrying out the attacks in the first place.

The attacks are real traffic, that's the root of the problem.

Imagine 100k computers, all with Chrome open and all hitting the same website - driven not by someone with a keyboard and mouse but by a script running on the machine. Now imagine your computer doing the same thing, except you're clicking on links to go to the website. The difference between the real and fake traffic is zero.

As touch explained above, you can try to use patterns to determine real/fake, but then you're just guessing, which has varying degrees of success.
 