He's saying that it's pointless running it at a lower core temp, because unless you've hit some other limit you'll be getting less performance than you could. He wasn't saying it was a good cooler; rather, that if you have a better cooler you should still run it as close to 95C as it will get, unless it hits a limit before then.
The same was said about the Titan launch and its temps, but it's proven that water cooling will make it run faster. I see this PowerTune the same as Boost 2.0: both have temp targets, governed by volts and TDP.
I have also seen what I expected to see, and that is vdroop. The 290X needs more power to stop that. Look for custom cards with 2x 8-pin connectors, or a custom BIOS with a higher TDP, to see that vanish.
Edit:
One thing I did read was about the PowerTune thing. So basically, AMD have copied Nvidia's Boost then!
Looks like a pretty convincing win to me.
http://www.hardocp.com/article/2013...0x_crossfire_video_card_review/7#.UntumPl93_F
The only important question which never got asked (I wonder why) is the following:
MS recently rejected Mantle in favour of a DirectX 11/12 future, so Mantle is only for PC gamers with low-end GPUs looking for an FPS boost. Anyone with a high-end GPU, Nvidia or AMD, will not see much benefit, as no publisher is going to support only Mantle; games cost too much to make and they would never see their money back!
I’m not a developer, so I’m unfortunately unable to intelligently answer that question. What I can say is that we’re unveiling the architecture of the API next week at the AMD Developer Conference, and that may answer more of your question.
1) I’m not on the CPU team, so I don’t know the answer to this question.
2) Yes, but we do not test or qualify such configurations so I cannot guarantee that it will work properly. You would need a CrossFire bridge.
3) I know one GPU versus two is contentious, and always will be, but I think two 7870 GPUs for 1440p will ultimately provide more performance than a single 280X.
4) Stay tuned for the AMD Developer Conference next week. On the 13th, one of the Mantle-supporting game developers will be introducing the first public demonstration with performance figures.
5) Because lots of people have graphics questions, and THG asked us in Hawaii to participate.
I’m sorry, I don’t sit in on the meetings that determine the roadmap for partner solutions. I don’t know.
There is no minimum clock, and that is the point of PowerTune. The board can dither clockspeed to any MHz value permitted by the user’s operating environment.
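Roughly speaking, you can picture that behaviour as a control loop that nudges the clock up or down a little each tick until it sits against whichever limit (temperature, power, fan) binds first. The toy Python sketch below is purely illustrative; the targets, step size, and sample readings are made up and this is not AMD's actual PowerTune firmware.

```python
# Toy model of a temperature/power-target clock controller.
# All values are hypothetical; this is not AMD's PowerTune implementation.

TEMP_TARGET_C = 95      # hypothetical temperature target
POWER_LIMIT_W = 250     # hypothetical board power limit
CLOCK_MAX_MHZ = 1000    # the board's "up to" clock

def next_clock(current_mhz, temp_c, power_w):
    """Nudge the clock a few MHz per tick; there is no fixed floor."""
    if temp_c > TEMP_TARGET_C or power_w > POWER_LIMIT_W:
        return max(current_mhz - 5, 0)          # back off toward the limit
    return min(current_mhz + 5, CLOCK_MAX_MHZ)  # headroom left, clock back up

# Example: hypothetical sensor readings settle the clock just under the cap.
clock = CLOCK_MAX_MHZ
for temp, power in [(96, 255), (95, 251), (94, 248), (93, 245)]:
    clock = next_clock(clock, temp, power)
    print(clock)
```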
Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.
Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.
Having addressed #2, we’re comfortable with the performance of the reference cooler. While the dBA figure is hard science, user preference for that “noise” level is completely subjective. Hundreds of reviewers worldwide were comfortable giving both the 290 and 290X the nod, so I take contrary decisions in stride.
Hardly! The 290X has uber mode and a better bin for overclocking.
No, because we’re very happy with every board beating Titan.
You’re right, Mantle depends on the Graphics Core Next ISA. We hope that the design principles of Mantle will achieve broader adoption, and we intend to release an SDK in 2014. In the meantime, interested developers can contact us to begin a collaborative relationship, working on the API together in its formative stages.
As for “backwards compatibility,” I think it’s a given that any graphics API is architected for forward-looking extensibility while being able to support devices of the past. Necessary by design?
I suggested CPU bottlenecking and multi-GPU scaling as examples of what Mantle can help with. Make no mistake, though: Mantle’s primary goal is to squeeze more performance out of a graphics card than you can otherwise extract today through traditional means.
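One way to see why a lower-overhead API helps in CPU-bound cases: if every draw call costs the CPU a fixed slice of time, shrinking that cost shrinks the CPU's share of the frame. The sketch below is a toy model with invented per-call costs; it is not Mantle's API or real measured data.

```python
# Toy model: CPU time spent submitting draw calls under two per-call costs.
# The call count and per-call costs are invented for illustration only.

DRAW_CALLS_PER_FRAME = 10_000
COST_THICK_API_US = 10.0   # hypothetical cost per draw call, high-overhead path
COST_THIN_API_US = 2.0     # hypothetical cost per draw call, low-overhead path

def cpu_submit_time_ms(calls, cost_per_call_us):
    """CPU milliseconds spent per frame just issuing draw calls."""
    return calls * cost_per_call_us / 1000.0

for label, cost in [("high-overhead", COST_THICK_API_US),
                    ("low-overhead", COST_THIN_API_US)]:
    ms = cpu_submit_time_ms(DRAW_CALLS_PER_FRAME, cost)
    print(f"{label}: {ms:.0f} ms/frame spent on submission")
```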
It’s impossible to estimate the trajectory of a graphics API compared to a physics library. I think they’re operating on different planes of significance.
I will also say that API extensions are insufficient to achieve what Mantle achieves.
The work people are doing for consoles is already interoperable with, or even reusable in, Mantle when those games come to the PC. People may have missed that it’s not just Battlefield 4 that supports Mantle; it’s the entire Frostbite 3 engine and any game that uses it. In the six weeks since its announcement, three more major studios have come to us with interest in Mantle, and the momentum is accelerating.
We fundamentally disagree that there is more excitement about G-Sync than 4K. As to what would be easier with respect to NVIDIA’s technology, it’s probably best to wait for an NVIDIA AMA.
No, we are not making an OpenCL physics library to replace PhysX. What we are doing is acknowledging that the full dimension of GPU physics can be done with libraries like Havok and Bullet, using OpenCL across the CPU and GPU. We are supporting developers in these endeavors, in whatever shape they take.
You would need to show me examples. Compute is very architecture-dependent, however. F@H has a long and storied history with NVIDIA, so the project understandably runs very well on NVIDIA hardware. Meanwhile, Bitcoin mining runs exceptionally well on our own hardware. This is the power of software optimization and tuning for one architecture over another. Ceteris paribus, our compute performance is exemplary and should give us the lead in any scenario.
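For a feel of what "tuning for one architecture over another" looks like in practice, here is a hypothetical sketch in which the same workload simply selects different launch parameters per GPU vendor. The parameter values and vendor strings are invented for illustration and are not tied to any real kernel.

```python
# Hypothetical per-architecture tuning table; the values are made up.
TUNING = {
    "amd":    {"workgroup_size": 256, "unroll": 4, "use_local_mem": True},
    "nvidia": {"workgroup_size": 128, "unroll": 8, "use_local_mem": False},
}

def pick_tuning(vendor_string):
    """Select kernel launch parameters based on the reported GPU vendor."""
    s = vendor_string.lower()
    key = "amd" if ("amd" in s or "advanced micro" in s) else "nvidia"
    return TUNING[key]

print(pick_tuning("Advanced Micro Devices, Inc."))
print(pick_tuning("NVIDIA Corporation"))
```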
You would have to ask the console companies regarding the architecture of the hardware.
We do not have an official or internal name. It’s “graphics core next.”
No, it means NVIDIA extended the PhysX-on-CPU portion of their library to developers interested in integrating those libraries into console titles.
1) The gaming strategy you’re seeing today is the brainchild of the graphics GM Matt Skynner, along with his executive colleagues at AMD. It comes with the full support of the highest echelons at AMD.
2) I want to reiterate this answer: Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.
Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.
3) I don’t sit in on these engineering meetings.
4) I cannot speculate on future products, I’m sorry.
5) I cannot think of any reverse examples, where offloading from the GPU to the CPU would be beneficial.
6) We have no plans to enter the smartphone market, but we’re already in tablets from companies like Vizio with our 4W (or less) APU parts.
7) Graphics Core Next is our basic architecture for the foreseeable future.
8) You will see AMD-powered Steam machines in 2014.
9) No, it’s more about changing the direction of CPU architecture to be harmonious with GPUs. Of course the GPU ISA has to be expanded to include things like unified memory addressing and C++ code execution, but this capability already exists within Graphics Core Next. So, on the GPU side, it’s all about extending the basic capabilities of the GPU, rather than changing the fundamentals to get GPGPU.
Thracks has edited this post:
1: This is something for Stanford to undertake, not really something we can “help them” with, as we already provide the necessary tools on our developer portal.
2: Mantle is in the Frostbite 3 engine. EA/Dice have disclosed that the following franchises will soon support Frostbite: Command & Conquer, Mass Effect, Mirror’s Edge, Need for Speed, PvZ, Star Wars, Dragon Age: Inquisition. With respect to unannounced titles, I guess we all have to wait and see what they have in store!
1: Right now I’m playing Tomb Raider, Dishonored, and Chivalry: Medieval Warfare.
2: I don’t remember the name, but it’s a special and easily-applied compound that cures during manufacturing.
We set suggested prices for our GPUs in US dollars. The prices you see in any other country are the product of tax, duty, import fees, and the strength of the local currency against the US dollar. Once a retailer purchases the board from us, we have absolutely no control over what they do with the product.
I currently live in Canada, and as a nation we are struggling with the same problem on premium electronics. We're a nation of 35 million people, looking longingly across the border at a country of 350 million people with some of the least expensive electronics prices in the world. I come from the US, and it was immediately obvious that Canada's fiscal policies make electronics more expensive than what I'm accustomed to. I hear about this struggle every day in the news, but I accept (with frustration) that it is a product of the fact that Canada imports everything with higher tax/duty/import fees than the US.
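As a purely hypothetical worked example of how a US-dollar suggested price becomes a local shelf price, the sketch below stacks exchange rate, duty, retail margin, and sales tax. Every figure in it is invented for illustration; none are real AMD prices or real rates.

```python
# Hypothetical breakdown of a local shelf price; all figures are invented.
MSRP_USD = 500.00        # suggested US-dollar price (hypothetical)
FX_RATE = 1.05           # units of local currency per US dollar (hypothetical)
IMPORT_DUTY = 0.03       # 3% import duty (hypothetical)
RETAIL_MARGIN = 0.08     # 8% retailer markup (hypothetical)
SALES_TAX = 0.13         # 13% sales tax (hypothetical)

local = MSRP_USD * FX_RATE
local *= 1 + IMPORT_DUTY
local *= 1 + RETAIL_MARGIN
local *= 1 + SALES_TAX
print(f"Local shelf price: {local:.2f}")
```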
4) Stay tuned for the AMD Developer Conference next week. On the 13th, one of the Mantle-supporting game developers will be introducing the first public demonstration with performance figures.
Well, it is now the 14th, what happened there? No performance numbers?
Another thing that's of interest is that running a game on Mantle reduces CPU bottlenecking.