Yes, the GTX 480 was GF100, my mistake!
How is the Titan a replacement for the GTX 480 (Fermi)? The 680 (Kepler) is the replacement in terms of microarchitecture, so I was looking at the timescale between new microarchitectures rather than sheer performance increase.
I really can't see Kepler lasting into 2015...
By every measure the Titan is the GTX 480 replacement: they are both high-end cards with a high-end bus, a large die, double the transistor count of the previous generation, and so on. This pattern has been clear in AMD/Nvidia cards for the past couple of decades.
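As a rough illustration of that doubling pattern, here's a quick sketch using the commonly cited approximate die sizes and transistor counts for Nvidia's big-die parts (ballpark figures, not official measurements):

```python
# Rough comparison of Nvidia's big-die parts, using commonly cited
# approximate figures (name, year, die size in mm^2, transistors in billions).
big_dies = [
    ("GT200 (GTX 280)", 2008, 576, 1.4),
    ("GF100 (GTX 480)", 2010, 529, 3.0),
    ("GK110 (Titan)",   2013, 561, 7.1),
]

for (p_name, _, _, p_tr), (c_name, _, _, c_tr) in zip(big_dies, big_dies[1:]):
    print(f"{p_name} -> {c_name}: {c_tr / p_tr:.1f}x the transistors "
          f"on a similarly sized big die")
```

Each step roughly doubles the transistor budget on a die of about the same size, which is exactly the "big chip replaces big chip" pattern.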
In terms of microarchitecture, yes and no: architecture changes can happen almost as fast as you want, but ultimately the industry is ruled by transistor count far more than by architecture. So it's really governed by process nodes and the power/transistor budget available on new chips. Either way, there are still 2-3 years between real microarchitectures. It used to be 18-24 months between real "next gen" products; as process nodes get harder and more expensive to shrink, that cadence is slowing down, so 3 years will likely become standard, and it can slip further over time.
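A back-of-the-envelope way to see why the node dominates: the transistor budget is roughly process density times die area, and density only really moves when the node does. A minimal sketch, with densities back-calculated from the shipping big-die parts (rough assumptions, not foundry specs):

```python
# Transistor budget ~= process density * die area. The densities below are
# back-calculated from shipping parts (GF100, GK110), so treat them as
# rough assumptions rather than foundry specs.
DENSITY_MTR_PER_MM2 = {
    "40nm": 3_000 / 529,   # ~5.7 MTr/mm^2 (GF100: ~3.0B transistors, 529 mm^2)
    "28nm": 7_100 / 561,   # ~12.7 MTr/mm^2 (GK110: ~7.1B transistors, 561 mm^2)
}

def transistor_budget_mtr(node: str, die_mm2: float) -> float:
    """Rough transistor budget in millions for a given node and die size."""
    return DENSITY_MTR_PER_MM2[node] * die_mm2

# Same ~550 mm^2 die, one full node apart: the budget more than doubles,
# no matter what the architecture does with it.
for node in ("40nm", "28nm"):
    print(f"{node}: ~{transistor_budget_mtr(node, 550) / 1000:.1f}B "
          f"transistors on a 550 mm^2 die")
```

Whatever the architects do, the die can only hold what the node allows, which is why new microarchitectures track new nodes.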
The main reason for the difference in which chip launched first was that Nvidia had so many yield issues with the GTX 480, and previously the GTX 280, that this was the first generation in which they tried to get the more midrange part ready for the start of the architecture. In reality, if they had aimed to do the GTX 460 part before or at the same time as the GTX 480, it would likely have been available when the 5870 was, six months earlier, because it wouldn't have had as extreme yield/heat problems as the GTX 480. In other words, it was a planning decision that moved the GTX 680 to the front of the queue and gained six months.
The likely scenario is that the high end (anything over 350mm^2) is extremely unlikely to launch before 16nm, which may just be available for GPUs by mid 2015. At best we'll have 680/7970 replacements (300-350mm^2 cards) on 20nm, which could happen this year but looks to be at least 8 months out; those would be very high power parts that likely won't be much, if any, faster than the current highest-end cards. But if they do this, which costs millions and ties up many engineers, they'll have to do all that work again for 16nm, which will be a much better process for GPUs (due to its much bigger power reduction than 20nm). Ultimately, waiting 6-8 months more and skipping 20nm will be better for the consumer, and will save AMD/Nvidia a huge amount of work on a 20nm process that is barely better yet very expensive.
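Purely to illustrate that trade-off, here's a sketch comparing "ship on 20nm, then redo the work for 16nm" against "skip 20nm and wait". The power-scaling factors and port cost are invented assumptions for the sake of the argument, not real figures:

```python
# Illustrative sketch of the "ship 20nm now vs wait for 16nm" trade-off.
# Every number here is an assumption made up for the comparison, not a
# real foundry or engineering figure.
NODE_POWER_FACTOR = {         # power at equal performance vs today's 28nm parts (assumed)
    "20nm planar": 0.85,      # "barely better": denser, but power barely improves
    "16nm FinFET": 0.60,      # FinFETs deliver the much bigger power reduction
}
PORT_COST_MUSD = 100          # assumed cost of porting a GPU design to a new node

def plan(nodes):
    """Total porting cost and final power factor for a sequence of node ports."""
    return len(nodes) * PORT_COST_MUSD, NODE_POWER_FACTOR[nodes[-1]]

for label, nodes in [("ship on 20nm, then redo on 16nm", ["20nm planar", "16nm FinFET"]),
                     ("skip 20nm, wait for 16nm",        ["16nm FinFET"])]:
    cost, power = plan(nodes)
    print(f"{label}: ~${cost}M in porting work, ends at {power:.2f}x power")
```

Under those made-up numbers, shipping on 20nm roughly doubles the engineering spend for a short-lived interim part, while waiting gets most of the benefit from a single port, which is the argument for skipping 20nm.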