Because 20nm doesn't bring a lot of benefits, its electrical property improvements, i.e. power reduction, are limited. The 16nm FinFET version of 20nm is said to offer a 40% power reduction over 20nm. Imagine a 300W TDP card that is suddenly 180W instead; bump the shader count back up to fill that power budget and you have a seriously high-end card compared to 20nm.
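To put rough numbers on that, here's a back-of-the-envelope sketch in Python. The 40% figure is just the rumor above, not a confirmed spec, and it naively assumes power scales linearly with shader count (ignoring clocks, leakage, memory power, etc.):

    # Claimed 16nm FinFET power saving vs. planar 20nm (rumored, unconfirmed)
    power_reduction = 0.40
    tdp_20nm = 300.0  # hypothetical 300W high-end card

    # Same chip, same performance, moved to 16nm FinFET
    tdp_16nm = tdp_20nm * (1 - power_reduction)
    print(f"Same chip on 16nm: {tdp_16nm:.0f}W")  # -> 180W

    # Or spend the freed-up budget on more shaders at the original 300W
    # (very naive: assumes power scales linearly with shader count)
    shader_headroom = tdp_20nm / tdp_16nm
    print(f"Shader headroom at {tdp_20nm:.0f}W: ~{shader_headroom:.2f}x")  # -> ~1.67x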
IF these dual-card shenanigans are true, it indicates more strongly than ever that there's no high end coming on 20nm, and that they are waiting for 16nm.
16nm is only a year behind 20nm because it's effectively the same process: it still uses a 20nm base metal layer (which is where you really take your process name from), with FinFET transistors in place of planar ones under that same metal stack. The fins are a little thinner because they stand vertically rather than lying flat, much like Intel's 22nm tri-gate transistors, so they're calling it 16nm; it kind of is, kind of isn't. Either way, since it's the same equipment and the same 20nm base layer, the idea of 16nm arriving only a year later (possibly even a little sooner) is not an out-there theory.
Spending R&D money on a high-end dual card that wouldn't even launch for another couple of months, has LOADS of downsides, and would ship in minimal volume would be borderline insane if they were only 2-3 months ahead of replacement 20nm parts.
We may or may not see a midrange part on 20nm; whether that would be upper midrange (7970/GTX 680) or lower midrange (7870... 660 Ti?), who knows. It wouldn't be surprising to see them skip 20nm entirely, as there is very little long-term benefit. You'd save tens of millions by not playing around with 20nm designs/transistors and put the R&D and time into 16nm tape-outs instead; doing both would likely mean a huge amount of wasted effort.
There is a reason Nvidia have done an entirely new product, low-end Maxwell, on 28nm and not 20nm, which I think bodes very poorly for new, faster, shiny 20nm products in the next year. But 2015 and 16nm could easily bring the full 80-90% generational improvements we haven't been seeing of late.