It's just a network; speed is determined by how quickly the server can serve up data and, more practically, by the speed of the slowest connection between it and you. The server probably has a gigabit Ethernet link to the datacenter router, which has a gigabit Ethernet link to the upstream provider, and they'll likely have high-bandwidth ATM connections. Then it steps down again to your ISP, likely via gigabit Ethernet to the DSL aggregation router, and finally over the ADSL network and its associated bits.
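To make the "slowest link wins" point concrete, here's a rough sketch in Python. The hop names and link speeds are made-up, illustrative figures, not measurements of any real path; the point is simply that end-to-end throughput is capped by the slowest hop:

```python
# Illustrative only: end-to-end throughput is capped by the slowest hop.
# Link speeds are example values in megabits per second, not real measurements.
path = [
    ("server -> datacenter router", 1000),     # gigabit Ethernet
    ("datacenter -> upstream provider", 1000), # gigabit Ethernet
    ("upstream backbone", 2400),               # high-bandwidth ATM/fibre
    ("backbone -> ISP aggregation", 1000),     # gigabit Ethernet
    ("DSL aggregation -> your ADSL line", 8),  # typical ADSL downstream
]

bottleneck_name, bottleneck_mbps = min(path, key=lambda hop: hop[1])
print(f"Bottleneck: {bottleneck_name} at {bottleneck_mbps} Mbps")
print(f"Best-case end-to-end throughput: {bottleneck_mbps} Mbps")
```

Run it and the ADSL line comes out as the bottleneck, which is why the last mile usually matters far more than anything the datacenter does.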
People make it out to be complex, but it's just a lot of networks connected together to make a bigger one. A big, very well designed one, yes, but the technology isn't far removed from what small businesses use for their networks.
The transatlantic cables have existed in various forms for a long time; the latest tend to be bundles of multiple fibres carrying wavelength-multiplexed traffic for maximum bandwidth. Current cables are good for about 3.2 Tbps (yes, terabits: 3,200 gigabit connections, or in real life 320 STM-64 10 Gbit links). There are multiple systems, and they're generally commercial, owned by the big backbone carriers: companies that basically own the internet backbone, having built it, yet nobody has ever heard of them (Level 3, Sprint, Global Crossing, etc.).
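A quick back-of-the-envelope check on those capacity figures, using the numbers quoted above (320 channels at roughly 10 Gbit/s each for an STM-64):

```python
# Back-of-the-envelope check on the quoted cable capacity.
stm64_gbps = 10      # one STM-64 link carries roughly 10 Gbit/s
links = 320          # number of 10 Gbit links quoted above
total_gbps = stm64_gbps * links

print(f"Total capacity: {total_gbps} Gbit/s = {total_gbps / 1000} Tbit/s")
print(f"Equivalent gigabit connections: {total_gbps}")
# -> 3200 Gbit/s = 3.2 Tbit/s, i.e. about 3,200 gigabit links
```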