Virgin Media to dump neutrality and target BitTorrent users

It's still a worrying sign of things to come. If it starts with torrents and then moves on to other things (newsgroups, eventually FTP, VoIP, etc.) I'm sure we'll become a lot more bothered.

Personally it's a reason for me to never go to VM again. I don't use torrents much these days but now and again they're handy.

Same here.
 
Torrents are fine if it's legit stuff. I wanted all the ATI tech demos but couldn't find them, so I went to a torrent site for the hell of it and BOOM, all the ATI tech demos in one folder: legal, quick, easy and no harm to anyone.
 
What we, as consumers, pay for our VM BB connections would just about cover maybe 1-2Mbit of constant use from what I've read, something I can quite believe.

Not for services like iPlayer. The BBC are very savvy (as are the other large content providers) and have large cache engines that ISPs can either privately peer with or connect to at exchanges, if you like "free peering". They are not having to pay "transit" prices for that traffic; if they are, they aren't running their network properly.

10GigE ports at these exchanges aren't that expensive. The linecards and chassis are, but that's just the nature of the beast and a one-off CAPEX cost; the OPEX is pretty low.

HEADRAT
 
I remember watching a documentary about internet usage and how the internet will eventually be too overcrowded to use.

That was about 5 years ago, and it's not getting better.

I'm wondering why VM are resorting to this. They know people use peer-to-peer. I'm just assuming their network is too overloaded with their "fibreoptic broadband" (lol, they're funny guys, VM) lines.
 
I remember watching a documentary about internet usage and how the internet will eventually be too overcrowded to use.

That was about 5 years ago, and it's not getting better.

I'm wondering why VM are resorting to this. They know people use peer-to-peer. I'm just assuming their network is too overloaded with their "fibreoptic broadband" (lol, they're funny guys, VM) lines.

The internet will never be too congested to use; that's why it was invented. In the UK it's becoming overloaded because of a series of bad decisions and a reluctance to invest in infrastructure. Look at Japan and Sweden: they bit the bullet years ago and it's paid off. Germany has quite a good core infrastructure too, which is why hosting is quite cheap over there.
 
Not bothered if they throttle torrents, haven't used them for years anyway. If they start to throttle Usenet though I will be looking for another ISP.
 
Well the packets might get through, but on a highly congested network it may render applications too slow to be usable.

HEADRAT

The internet is designed not to get congested. The whole principle of packet switching is that it naturally avoids congested areas and load-balances traffic across all available links. If it gets busy you just add more links, which isn't that hard to do; there's plenty of unlit fibre out there, just no one in the UK can be bothered to invest in lighting it. The UK has an issue because the affordability of internet access is disproportionate to the investment in the infrastructure. LLU was supposed to kick-start this by reducing BT's monopoly and encouraging ISPs to invest their own cash directly into improving the country's WAN infrastructure.

Don't wet the bed; the internet isn't going to grind to a halt any time soon. If it did, whoever put the money in to speed it up would be rolling in it.
 
I guess you've never heard of "shortest path". I can assure you that the Internet can and does get congested.

While there are many mechanisms (MPLS TE, for example) to try and mitigate congestion, if there is simply too much traffic then you get congestion.

There is a significant CAPEX cost to light new fibre.

HEADRAT
 
Their network is on its knees and cannot cope with the change in people's bandwidth demands.
People want to stream movies and play games.
They want the bandwidth they have paid for, 24/7.
The days of logging on for 5 minutes to download emails are over.

There is a fundamental problem in how 'internet' is perceived and delivered.
- The consumer pays for 20Mb and expects 20Mb speeds ALL THE TIME. Like a TV package: you get what you pay for.
- The provider (VM) essentially lets people believe this is the case, but implements many measures to control people's connections and provide them with a service that is not what the customer expects.

VM, and other providers, advertise packages using maximum bandwidth figures but fail to mention that you can only download so much data before they cripple your connection. This practice of shaping your internet is soon to get much worse, with more systems using the Phorm scam and Deep Packet Inspection.


They (VM) should not be blocking and filtering content. This is fundamentally wrong.
Their download caps are also way too low given today's usage requirements. This is just profiteering that relies on people's ignorance to go unchallenged.
 
There is an unrealistic expectation on the consumer's part, and it has been fed by net companies offering higher and higher bandwidths; more is not always better.

HEADRAT
 
The internet can't get congested...That's a good one.

There is a finite amount of bandwidth available at every link on the internet; if too much traffic is generated and there aren't enough links to cope with it, then you get congestion. Yes, the internet as a whole might not get completely gridlocked, but traffic will slow to an unacceptable degree due to packet loss and packets taking longer and longer routes to avoid the links that are completely overloaded.

Whilst you probably won't see total gridlock, it would become unusable for a lot of things (basically anything that is time critical, good bye voice calls, gaming etc).

It's also worth noting that the biggest problem won't be internal to the ISPs' networks, but the external links, especially to the rest of the world, and that affects Sweden and Japan as much as it does us. From memory, Japan doesn't have a much better connection to the outside world than we do, and could suffer from international torrent traffic as much as we would (luckily there is only one country that generally speaks Japanese, so Japanese torrent traffic, for example, would tend to stay within the Japanese national network).

VM could probably cope very well if torrent traffic were purely peer-to-peer within the VM network, as they've got a lot of internal capacity and the ability to improve it cheaply, without the need to first make arrangements with other companies (who might not be interested unless there was something in it for them).

Re the cost of peering: IIRC the problem isn't so much peering as the fact that torrents can and do go all over the place, which means that unless you've got great peering with every major ISP in the world, you end up paying for the bandwidth. A 10Gb link might not be massively expensive, but that's only 500 20Mb users downloading at full pelt, and it doesn't allow for any of the other costs involved in supplying those users.

Hence traffic shaping/prioritising would let them make more intelligent use of the network. Rather than throwing in more and more external links, many of which might only be half utilised much of the time, it lets them try to ensure that time-critical traffic stands a chance of getting through without massive amounts of overkill.
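The link arithmetic in that post is easy to sanity-check; the snippet below just restates the poster's own numbers (10Gb link, 20Mb packages):

```python
# Sanity check of the claim above: one 10Gb link only carries
# 500 subscribers on a "20Mb" package all downloading at full speed.
link_gbps = 10
package_mbps = 20

users_at_full_speed = (link_gbps * 1000) // package_mbps
print(users_at_full_speed)  # 500
```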
 
There is an unrealistic expectation on the consumer's part, and it has been fed by net companies offering higher and higher bandwidths; more is not always better.

HEADRAT

Aye, I would personally prefer a speed/bandwidth billing system that makes it clear exactly what people are getting.

20mb isn't a target to be used all the time (despite what some people might think;)), but the maximum speed you can get.

A fairer system might be for all the ISPs to offer something like:
Up to 20Mb with 200GB bandwidth - that way you get a good burst speed and a reasonable monthly bandwidth allowance (I'm a heavy internet user, as is my brother, but I doubt we'd do anything like that most months).

Or up to 10Mb with 500GB a month - costing more (the actual connection doesn't cost much more; it's the bandwidth), but giving a better sustained data rate.

At the moment, with 20Mb you could potentially do that 200GB in a day or so, and some people are doing that week in, week out. It's that sort of user that is really bringing the network to its proverbial knees (the cost of provisioning that sort of daily/weekly bandwidth usage is much higher than what we're paying every month).
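The "200GB in a day or so" figure checks out; a quick back-of-the-envelope calculation (decimal units, line saturated for 24 hours):

```python
# How much a 20Mb line can pull in one day if left running flat out.
mbps = 20
seconds_per_day = 24 * 60 * 60

megabits = mbps * seconds_per_day   # 1,728,000 Mb
gigabytes = megabits / 8 / 1000     # bits -> bytes, then Mb -> GB (decimal)
print(round(gigabytes))  # 216
```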


Of course, doing that is going to prove unpopular with people who think that their £35 a month entitles them to download hundreds of GB every single week via torrents, with no regard for the other users on the consumer network.
 
If only they were optimizing their system based on usage. They are not, they simply apply the brute force method of clawing back revenue at the expense of the customer.

Push them ads, reduce their bandwidth, block traffic and limit what they can see on the web while still taking their money.
 
Re the cost of peering: IIRC the problem isn't so much peering as the fact that torrents can and do go all over the place, which means that unless you've got great peering with every major ISP in the world, you end up paying for the bandwidth. A 10Gb link might not be massively expensive, but that's only 500 20Mb users downloading at full pelt, and it doesn't allow for any of the other costs involved in supplying those users.

Peering is actually a cashless agreement between Tier 1, and sometimes Tier 2, ISPs or Internet Exchanges like LINX. If an ISP is actually paying for an upstream connection then that is IP transit.
The whole peering issue is the most fragile part of the Internet in general. These are often agreements between ISPs based on the amount of traffic sent to each other. If ISP A is receiving a lot but not sending much to ISP B, they could decide that the link is not cost-effective, cut it, and propose to charge ISP B for IP transit instead. This happens a few times a year when major ISPs have a spat.
The outcome is that if your website is at a data centre that buys transit from ISP B, then people originating from ISP A may have difficulty connecting to it.
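The ratio-based decision described above can be sketched as a toy function. Everything here is illustrative: the 2:1 threshold is invented and is not any real ISP's published policy.

```python
def peering_still_viable(sent_gb: float, received_gb: float,
                         max_ratio: float = 2.0) -> bool:
    """Toy model of a settlement-free peering review: if traffic in
    the two directions gets too lopsided, the heavier-sending side
    may cut the link and demand IP transit fees instead.
    The 2:1 threshold is purely illustrative."""
    heavier = max(sent_gb, received_gb)
    lighter = max(min(sent_gb, received_gb), 1e-9)  # avoid divide-by-zero
    return heavier / lighter <= max_ratio

print(peering_still_viable(120, 100))  # True  - roughly balanced, keep peering
print(peering_still_viable(500, 100))  # False - lopsided, a "spat" is likely
```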
 
There are a number of types of peering etc.

Usually NO SLA:

Private peering = connect your equipment together; one-off cost, then pretty much free.
Peering = across an exchange like LINX; one-off cost, then pretty much free, maybe some exchange fees.
Settlement-based peering = you agree to pay something, but usually at a lower cost.

With SLA:

Transit = you pay per Meg for the amount of traffic you put over your link; costs usually vary by volume etc.
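A rough way to compare those models in code. Every price in this sketch is invented purely for illustration; real fees vary wildly by exchange, port size and contract.

```python
def monthly_cost_gbp(gb_transferred: float, model: str) -> float:
    """Toy cost comparison of the connection models listed above.
    All prices are made up for illustration only."""
    if model == "private_peering":
        return 0.0        # one-off kit/fibre cost ignored; then ~free
    if model == "exchange_peering":
        return 500.0      # hypothetical flat exchange/port fee
    if model == "transit":
        price_per_mbps = 5.0  # hypothetical metered per-Mbps price
        avg_mbps = gb_transferred * 8 * 1000 / (30 * 24 * 3600)
        return price_per_mbps * avg_mbps
    raise ValueError(f"unknown model: {model}")

# Past some volume, a flat-fee peering port beats metered transit.
print(monthly_cost_gbp(100_000, "transit")
      > monthly_cost_gbp(100_000, "exchange_peering"))  # True
```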
 
Doesn't this all come down to ISPs refusing to plainly state their bandwidth allowances?

"Download as much as you like"*
*This is subject to the contention in your area, how much you pay us, how vindictive we're feeling, what you download, where you download it from, and the direction the wind is blowing. Should you download anything we reserve the right to disconnect your service as part of our 'Fair Usage Policy' - by that we mean, if you're downloading more than two emails a month.

Give me a break.

As for all those whining about VM, don't use them.
Find an ISP that delivers what they say they will.
Don't want to pay extra? Stop moaning and stump up.
 
I guess you've never heard of "shortest path". I can assure you that the Internet can and does get congested.

While there are many mechanisms (MPLS TE, for example) to try and mitigate congestion, if there is simply too much traffic then you get congestion.

There is a significant CAPEX cost to light new fibre.

HEADRAT

You have obviously never heard of dynamic routing metrics. Routing protocols such as OSPF (Open Shortest Path First) use these to route traffic across the most efficient route. They're usually made up of a composite weighting based upon bandwidth, latency, number of hops and CONGESTION. Additionally, you can force a link to be less utilised by adding a base administrative distance to the route, i.e. if a link is pay-per-Meg, you want it used only when you can't cram anything more down dedicated peering.
I never once mentioned there was infinite bandwidth per link; however, there is potentially an infinite number of routes to a destination to get around a congested area. The bandwidth per link is largely inconsequential as it's going to be chopped up by traffic prioritisation anyway; what matters is the number of links, and it does cost money to implement them, which many other nations have done :) The UK hasn't done much in comparison. To base the state of "the internet" on what's happening with a handful of consumer ISPs is really rather ignorant.
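The point about link weights steering traffic can be illustrated with the shortest-path computation at the heart of link-state protocols like OSPF. This is a generic Dijkstra sketch, not OSPF itself (real OSPF cost is normally derived from configured interface bandwidth), and the four-node topology is invented:

```python
import heapq

def dijkstra(graph, src, dst):
    """Lowest-cost path over weighted links, the core computation of
    link-state protocols. graph: {node: {neighbour: link_cost}}."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

# Invented topology: A-B-D is cheapest until the B-D link's cost is
# raised (e.g. an administrative penalty on a busy or metered link),
# at which point traffic shifts to the A-C-D route.
net = {"A": {"B": 1, "C": 2}, "B": {"D": 1}, "C": {"D": 2}, "D": {}}
print(dijkstra(net, "A", "D"))  # ['A', 'B', 'D']
net["B"]["D"] = 10              # penalise the busy link
print(dijkstra(net, "A", "D"))  # ['A', 'C', 'D']
```

Raising one link's cost is all it takes to divert every flow that crossed it, which is exactly why operators lean on metrics rather than buying capacity for the worst case everywhere.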
 