
AMD Launches Three Kaveri APU SKUs in February 2014 – Feature Set For A10 and A8 APUs Detailed

 

It really is a pity 3dfx was unable to get their Voodoo 5s out to market when they meant to. Hardware transform and lighting aside, they had some great graphics features available that we didn't see for another ~9 years, like depth of field etc.

Anyhow, back on topic a bit: it's crazy to think that the graphics units in APUs are more powerful than that beast.
 
It's not Tri-Channel, that's for sure.

They could have done quad, but as we can see from some FM2+ boards already, it's dual channel.

I wondered about quad but figured it might require a more drastic board redesign for the traces.
But since the boards are already out, I guess that settles it, as I think you also need a fair few more pins. Something to research when I get the time.
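For anyone curious what the extra channels would actually buy, here's a minimal sketch using the standard theoretical-peak formula (channels x 8 bytes per transfer x transfer rate); DDR3-2133 is just an illustrative speed:

```python
# Theoretical peak DDR bandwidth: channels x 8 bytes/transfer x MT/s.
# The speed chosen here is illustrative, not a claim about Kaveri's spec.
def peak_bandwidth_gbs(channels, mts):
    """Theoretical peak bandwidth in GB/s for 64-bit (8-byte) channels."""
    return channels * 8 * mts / 1000

dual = peak_bandwidth_gbs(2, 2133)   # dual channel, as seen on FM2+ boards
quad = peak_bandwidth_gbs(4, 2133)   # the hypothetical quad-channel case
print(f"dual: {dual:.1f} GB/s, quad: {quad:.1f} GB/s")
```

Quad channel simply doubles the dual-channel figure at the same memory speed, which is why the trace and pin-count cost comes up.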

It's an incentive to overclock the memory at least, if you want to get the best out of the stronger IGP SKUs.
 
Depth of field is such a backwards step. When it first appeared in games 4-5 years ago, it was just cheap blurring of distant stuff to save power. Then it became an IQ thing: render the scene normally, then post-process it to add "realistic" blur at the right distances, and it started to take extra power to do it. The reality is, you don't actually want a game choosing where you're focusing for you; it's unnatural. When I focus on that thing in the distance at the top left of the screen in real life, I can focus on it. In a game, it's blurred out because some twit thinks DOF looks great. Yeah, blur out the part of the screen I'm looking at, because the game can't possibly know I'm looking there, and boom, DOF is completely unnatural. :(

Technically it's impressive; in-game usage, woeful. I can't stand motion blur or DOF because they can't mimic real life: in real life you choose where these occur, and when a game forces them on you, it's just wrong.
 
...

Technically it's impressive; in-game usage, woeful. I can't stand motion blur or DOF because they can't mimic real life: in real life you choose where these occur, and when a game forces them on you, it's just wrong.

Yeah, this is more my point. I hate blur in games and it's one of the first things I turn off, but at the time 3dfx were showing it along with other features, it was impressive. If they'd survived, it wouldn't have surprised me if they'd been first to market with a GPU with programmable shaders.

Anyhow, back to Kaveri. Now that I think about it, it's surprising they didn't go for a quad-channel memory controller to compensate for the fact the APU will most likely be memory-bandwidth starved.

If they want to make that monster Kaveri APU that's been commented about in this thread (6 or 8 cores, 13? CUs and a quad-channel memory controller), then I'd definitely be interested in giving one of those a try, and I doubt I'd be the only one.
 
Quad channel increases the cost of everything: die size, power usage, and more links on the mobo, increasing cost a lot, plus needing four sticks of memory in your average, say, Dell computer. Increasing the pin-out on the CPU for memory connections on a smaller die is very difficult to do; it's physically very hard to fit enough pins on a smaller die.

It would improve things for sure, but it's not feasible on anything but fairly large, high-power and already high-cost CPUs. A £200 mobo when you're buying a £600 chip is one thing; a £200 mobo when you want a £60 chip is another ;)

As you say though, there is potential if they wanted to go bigger for more memory channels but it's really about timing for Intel and AMD.

I do wonder about HSA, and whether an 8-core Steamroller has just become infeasible, as they require at least a small GPU on die to ensure HSA functionality. I think they could get away with dual channel and improved memory efficiency without such a large GPU. I'd quite like to see a 3-module, half-sized-GPU version, but I think they're waiting on DDR4 now :(

It might be the Excavator versions of the high-end chips that we finally see with DDR4; I'm hoping for a 6 or 8 core with a smaller GPU.

In terms of Kaveri, it would seem from most info that HSA + Mantle will help cut down the required bandwidth somewhat. Overheads, wasted calls to the GPU, and talking to it more efficiently should all reduce, or optimise, the bandwidth they have; it's not huge, but it's certainly better than nothing.

What will be interesting is whether, with enough of a push by AMD, games get optimised for AMD's architecture, whether HSA brings a bigger boost than most are expecting, and whether that makes a quad-core Kaveri extremely competitive with Intel's quad cores. Seeing as I have a 2500K, a Kaveri with HSA could be better in games; optimising for the architecture should have a bigger benefit than people realise. Very interested to see how much performance the Kaveri patch brings to BF4.
 
Your thoughts re UMC, as above?

Honestly, I don't know much about them. From a quick look into it, both TSMC and UMC used to make K5s, and at one stage some K7s, to ease supply. I'm not really sure what's going on with GloFo and AMD. They keep signing AMD up to seemingly exclusive deals for certain types of chips, but then basically fine AMD when they don't hit production targets. While I can understand the deals, GloFo is "ex" AMD production and AMD is one of, if not their biggest, customers. GloFo looks bad when they are seemingly constantly behind on production dates and processes (though it's understandable, as they've had to try and do basically three processes while building their first fab and staffing up).

However, surely the best way to become a trusted foundry is not to constantly screw over your best client. In terms of UMC, I don't know. Years ago, with much bigger processes, things were relatively simple: producing a chip on another process/foundry wasn't that big a deal, nor did it cost too much. These days the cost of taping out the same chip in another fab is huge. You'd have to know 24 months in advance that a particular chip needed more production and commit tens of millions in upfront costs and hundreds of millions in wafers.
 
On the DDR4/iGP issue: it won't help much.

Bandwidth is the problem, and a boost from, say, 2133MHz DDR3 to 2400MHz DDR4 doesn't add much. It certainly doesn't get you past the bottleneck with Kaveri's rumoured IGP spec.

It helps, but it isn't a silver bullet.

An on-die setup, or GDDR5 or similar is needed to get over the hump.
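To put a rough number on "doesn't add much": on a 128-bit (dual-channel) interface, going from DDR3-2133 to DDR4-2400 is only about a 12.5% bandwidth gain. A minimal sketch of the arithmetic:

```python
# Theoretical peak bandwidth of a 128-bit (16-byte) dual-channel
# interface in GB/s. Speeds match the DDR3/DDR4 figures in the post.
def dual_channel_gbs(mts):
    return 16 * mts / 1000

ddr3 = dual_channel_gbs(2133)        # ~34.1 GB/s
ddr4 = dual_channel_gbs(2400)        # ~38.4 GB/s
gain_pct = (ddr4 / ddr3 - 1) * 100   # ~12.5% more bandwidth
print(f"{ddr3:.1f} GB/s -> {ddr4:.1f} GB/s (+{gain_pct:.1f}%)")
```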
 
On the DDR4/iGP issue: it won't help much.

Bandwidth is the problem, and a boost from, say, 2133MHz DDR3 to 2400MHz DDR4 doesn't add much. It certainly doesn't get you past the bottleneck with Kaveri's rumoured IGP spec.

It helps, but it isn't a silver bullet.

An on-die setup, or GDDR5 or similar is needed to get over the hump.

You can get 3000MHz DDR3.
 
Even at 4000MHz, you only get a theoretical 64GB/s, which is behind a 7750 and 7770. Unless DDR4 comes in at 128-bit bus width, which I've not seen suggested, it will still put the brakes on IGP performance.

So AMD would really need to consider a triple- or quad-channel memory controller come the DDR4 Kaveri refresh if they want to unleash its full potential.

Though as drunkenmaster mentioned, quad channel would increase the cost of everything from the APU to motherboards, so that quite possibly isn't even an option.
 
Even at 4000MHz, you only get a theoretical 64GB/s, which is behind a 7750 and 7770. Unless DDR4 comes in at 128-bit bus width, which I've not seen suggested, it will still put the brakes on IGP performance.

The actual bandwidth of a discrete GPU is irrelevant. If you go from 20GB/s to 40GB/s, you double the bandwidth available to the iGPU, which will increase performance; it's really that simple.

Say you have 50% of a 7770's performance on die because it's lacking bandwidth, and with DDR4 that is still only 75%: that is a 50% increase on the previous iGPU, thus a massive increase.
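The relative-gain arithmetic can be made explicit; the 50%/75% figures are illustrative numbers from the discussion, not measurements:

```python
# If the IGP sits at 50% of a discrete 7770 purely due to bandwidth,
# and faster memory lifts it to 75% of that card, the gain measured
# against the *old IGP* (not the discrete card) is what matters.
old_level = 0.50   # fraction of the discrete card's performance (illustrative)
new_level = 0.75   # fraction reached with the faster memory (illustrative)
gain_pct = (new_level / old_level - 1) * 100
print(f"gain over the previous iGPU: {gain_pct:.0f}%")  # 50% faster
```

So a part that still trails the discrete card can nonetheless be a big generational step over the old IGP.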

As for what's available in DDR3, there's a power problem with that (higher-voltage memory), plus cost: it's being pushed beyond the spec it was designed for.

What is ultra-expensive DDR3 memory will be "cheap" by DDR4 standards. I.e. DDR3 2400MHz sticks are, say, £80; the cheapest 3000MHz pack, still 2x4GB, is £400, and the 2900MHz is £300 or something. There are exceptionally few sticks doing those speeds. With DDR4, probably 4.5-5.5GHz will be those stupid sticks, the cheapest sticks will end up at 3GHz+, and it will be decently priced up to 4GHz.

Either way, we're waiting on TSVs and silicon-level connections between separate chips on the same package... connected on die with stacked memory, WAY beyond 128-bit, and the iGPU bandwidth problem is a thing of the past.

DDR4 is about density, speed and lower power. Once you get it lower-power, stacked and high-density, it becomes viable to add the cost of interposers/TSVs, which are just about doable, but everything needs to be the right thing to make it all work.

One of the reasons for DDR4 being delayed is DDR3 speeds. If DDR4 had launched 2 years ago at 2133MHz, then mass production would have led to tweaking, and DDR4 could be significantly beyond 3GHz by now (or just there, who knows). The thing is, when something is in production is when you get a ramp in speeds. As DDR3 speed increases, DDR4 is pushed back (partly for other things too, like TSVs becoming cheaper to do), but the speed is a problem. If DDR3 2400 is widely available, what's the point of releasing lower-speed DDR4? So they are trying to ramp it up, but without the mass production that usually leads to these improvements. It's a bit of a cluster **** at the moment, and Intel/AMD have likely released at least one, maybe two, generations of products on memory they thought would be faster/cheaper by now.
 