AMD - Ask Me Anything

Taken from Tom's Hardware. Plenty of gold in here regarding Mantle and whatnot. I have copy-pasted all the questions over to make for easy viewing for OcUK folk. Enjoy!

Source
http://www.tomshardware.co.uk/forum/id-1863987/official-amd-radeon-representatives.html

Ever wanted to ask one of the big hardware or software giants something directly? Why’d they do that? Where’d the idea come from for that last product? What’s in store next? Well, now you have the chance!

Tom’s Hardware is proud to announce a follow-up to our brand-new community feature – ASK ME ANYTHING.
 
What is the rationale for AMD using stock cooling solutions on enthusiast cards at launch when more robust and effective cooling solutions have been demonstrated by your partners and other 3rd parties resulting in superior acoustics, better heat dissipation, and potentially higher benchmarks?
Thank you for taking the time to talk with us on the forums!!

Using a 100% reference design at launch ensures that AMD can control every facet of the initial production. It minimizes the number of variables that can "go wrong."

Will Mantle support all GCN GPUs, including the low-end ones such as the HD 7730 and HD 7750?

Anything with Graphics Core Next. Anything!

With Kaveri being the 3rd-generation APU from AMD, what are the major improvements that we will see? How potentially disruptive is this technology to the historic relationship of CPU/GPU and system resources in a desktop environment, and how does AMD plan to leverage this potential disruption into higher market share, since Nvidia and Intel don't fully compete in this APU space?
Thanks!!

I can't say much on our next-generation APUs, as today's AMA team is from the Radeon side of the business. But what I can say is that your question will be answered in its entirety on November 11-13 at the AMD Developer Conference in San Jose.

Hi AMD,

hope you answer all my following questions, but feel free to skip silly ones.

1) Regarding motherboard sockets: I know that on the server side (Socket G34, Socket C32, Socket F) you use land grid array (LGA) CPU sockets, while on the desktop you still use pin grid array (PGA) sockets (FM1, FM2, FM2+, AM3+). Why? How much difference does it make? Is the server-side LGA superior?

2) Any news on the GDDR5M side? Who is your target audience for it? If GDDR5M becomes available in the future, what should we expect regarding latency and frequency?

3) Why did you end the HD 7750? I love it a lot, and it's sad that the newer generation has a huge gap between the R7 250 and the R7 260. In Cape Verde terms, we're talking about the HD 7730 and the HD 7790; there is a huge gap in between. Will we ever see an R7 250X, or anything else to fill it, and will it be powered by the PCI Express slot alone?

Thanks very much in advance.

Hello. :) I'm not privy to the GDDR or socket questions, but I can answer your last one!

The HD 7750 is still shipping in the market for precisely the reason you've identified. Its performance falls between that of the R7 250 and the R7 260X.

My question is about changing from one Radeon card to another new Radeon card. I have been given different descriptions of the steps to go through, as far as removing the old software or not, and about using the CD that comes with the replacement card to install the Radeon Catalyst software and the vendor's software.

I did not find good instructions on AMD's site for removing an old Radeon and installing a new one. Many instructions assume starting with a fresh install.

i7-860, Win 7 64-bit Pro, HP HPE590e computer from 2010.
Radeon 5770 OEM from factory order.
Replacement card Gigabyte Radeon 7950.

Steps recommended for me to go through, please? And thank you for being willing to do this service.

With Windows 7, this is the procedure I have always followed in one form or another:

1) Fully uninstall the driver with the AMD Catalyst uninstaller tool: http://www2.ati.com/drivers/amd_cleanup_util_1.2.1.0.ex...

2) Reboot your PC!

3) Install the new driver. Reboot your PC!

4) Done!

I've never had a problem following this approach.

Let me rephrase my stock cooler question. Why not use a more robust "reference design" incorporating some of the brilliant solutions proven out by your card partners and other 3rd parties with regard to noise and greater heat dissipation... especially at the high end?

Our reference solutions are designed to accommodate every system that fully complies with the ATX and PCIe mechanical and electrical specifications. Further, we have OEM and system integrator customers that prefer a blower-style assembly. In order to accommodate all parties interested in purchasing a reference GPU, our hand is guided in the direction of the designs you see.

This is one I've always wondered about: how come graphics card makers (like yourselves) don't release a legacy AGP card on the newest technology? You could saturate the bandwidth available in AGP with a GCN card that's probably so efficient it'd be single-slot! I know at least two people (and I reckon there are many more) who are forced to look at the second-hand market for older cards, and it's not ideal.

Thanks for doing the AMA by the way! Made me so happy when I heard you guys were coming on lol :p

Glad to be doing an AMA. :) I think it's mostly because AGP is a fully dead standard across the industry. The ROI would likely be negative.

I like what AMD is doing with regard to mantle and the R9 290x and am entertaining the prospect of buying a couple, but I am reluctant to do so until the runt/dropped frame issue has been completely addressed for single GPU, crossfire setups and Eyefinity systems. Can you tell us what progress AMD has made with regard to this and what AMD is still working on to address any existing issues?

1) Frame pacing was 100% resolved on single-GPU systems in early 2013.

2) Frame pacing for multi-GPU systems, at any resolution (e.g. Eyefinity/4K), is fully resolved in hardware on the 290 and 290X.

3) Frame pacing for the R9 270, R9 280 and HD 7000 Series systems will require a software solution. Our engineers are working on that right now, and we intend to release a driver this quarter to resolve the issue.

When designing the 8000 series, what is the default resolution the cards are made for? (1080p, 1440p, 4k resolution)

The R9 290/290X: 4K
The R9 280: 1440/1600p
The R9 270X / R7 260X: max settings at 1080p and high settings at 1080p, respectively

From a hardware perspective, these are our design goals for the products.

Hi, can we have real numbers about the Mantle gain? Games still run perfectly at maximum settings on a 7970 GHz; if the gain is not high enough to carry the next step in graphics, it will be a bit useless for high-end cards. Is it designed to improve latency? Add more graphical effects?

Will the next CPU have better single-core performance? That's the weakest part of your CPUs.

Real numbers regarding Mantle will be published at the AMD Developer Summit running November 11-13 in San Jose. I will say that we are not undertaking such a mammoth effort to yield 3-4% more performance; that would be a waste of time.

Regarding the AMD Developer Summit: is there going to be more developer/game support announced, or will it be the same partners from the GPU14 event? Do you think there will be enough support for Mantle on the 290/290X, or will it take longer than usual to implement and see the advantages on this current generation of cards?

We are planning to announce more titles than what you saw at GPU14. Depending on the schedule, those announcements may come slightly after APU13.

First of all, thank you AMD for taking the time to answer our questions. My question is this: nVidia has always had the upper hand, especially with their proprietary technologies and, their main point of advantage, deals with the most popular game brands (WB Games, 2K Games, Rockstar, ...). With the introduction of Mantle and your Gaming Evolved program, I would like to know which game companies (aside from the ones you already told us about) are going to collaborate with your teams to provide better-optimized games for the Radeon line in the future?

We're partnered with:
Eidos Montreal, DICE, Square Enix, Rebellion, Codemasters, EA, Crytek, Irrational Games, Nixxes, and more. We have an excellent relationship with almost every major studio out there, and we're in frequent contact to assist with performance optimizations for Radeon.

Can you give an indication of the performance gain of Mantle? Will it be purely in FPS, or will it be visual, i.e. better graphics, more eye candy, improvements to TressFX? Is it used in conjunction with D3D or by itself?

Mantle can be used to increase raw FPS, or to increase image quality while maintaining the same FPS.

It is a complete and standalone graphics API, meaning Mantle must be capable of coding and rendering all the in-game effects you see today. But Mantle also needs to be sufficiently extensible so that we can collaborate with game studios to create the effects of tomorrow—and it is extensible to do so!

What ever happened to Dave Baumann?

He's leading the desktop graphics product management team. He sits about 30 feet away from me.

Not sure if this is considered Radeon tech or not... If it's not, please disregard the questions.

There has been speculation that mantle-like technology is at use in the Xbox One and the PS4. Is this true?

Also, it's obvious that AMD must have had some kind of advantage with regard to what they had to offer the Xbox One and PS4. What would you s

I think your questions will be best answered by this blog we recently published.

Let me pull a relevant quote:

"It’s not that Mantle is the initial language with which developers are writing their games on each platform, as some have surmised; the point of Mantle is that it’s easy to reuse, in whole or in part, the development effort expended on the next-generation consoles when bringing the same game to life on the PC. This is because Mantle allows developers to use the same features and programming techniques they are already utilizing for next-gen game consoles. And while the initial iteration of Mantle is intended specifically for PCs, it has been designed from the beginning to be extensible to other platforms as well."

With respect to how the custom solutions we've co-engineered with the console vendors helps their devices compete with PCs, that's really a question better answered by them. They know their devices much better than we do. :)

Thank you for answering my previous question. This one is a bit more technical. I have read that when people CrossFire two 7990s, they encounter issues.

My question is: Are dual GPU cards meant to be Crossfired?
The chart on AMD's website says nothing about the 7990.

CrossFire is uniquely capable of running up to 4 simultaneous GPUs. The 7990 can certainly be CFed with a second one. You can also pair 4x 7970s (or 2x 7970 + 7990), or place 4x 290s together.
 
What is a single tech that AMD is currently working on that you don't feel there is enough buzz about? Something possibly overlooked or overshadowed but to which we should really be paying attention?

My favorite is the new implementation of PowerTune on the 290X and 290. There's a lot of doom and gloom around the 95C temperature, because people are used to a world where the product is designed to run as cold as possible... but that's not the world we're living in with these units. The doom and gloom is based on an old viewpoint.

95C is the optimal temperature that allows the board to convert its power consumption into meaningful performance for the user. Every single component on the board is designed to run at that temperature throughout the lifetime of the product.

If you throttle the temperature down below that threshold, then the board must in turn consume less power to respect the new temperature limit. Consuming less power means lowering vcore and engine clock, which means less performance.

You want to take full advantage of product TDP to maximize performance, and that is accomplished with a 95C ideal operating temperature for the 290 and 290X.

Even with a third-party cooling solution, like the Accelero 3 some users have started deploying, the logic of PowerTune will still try to maximize TDP by allowing temperatures to float higher until some other limit is met (voltage, clock, fan RPM, whatever).

It's so bloody smart and it kills me that more people don't fully understand it.

CPUs from Intel seem to perform better when gaming.
a) Does AMD sometimes sacrifice quality to ensure a lower, more competitive price, or is it that Intel's hardware simply performs better?
b) When developing a new GPU, do you cooperate with other AMD departments to ensure maximum performance when using AMD products together?

I cannot comment on A as I don't work in the CPU division.

With respect to B, we design our GPUs to have the same level of compatibility with whatever CPU the user has. We do not optimize graphics hardware or drivers for any CPU architecture over another.

Another question from me: I remember there was something at the GPU14 event that mentioned anti-aliasing support with deferred rendering/lighting. I was wondering if this included super-sampling anti-aliasing as well.

You're referring to Forward+. I want to give some background on Forward+ so other people know what it is:

Forward+ not only allows for the efficient utilization of thousands of light sources, but also performs an accurate simulation of their interactions with objects in the world. Forward+ also resolves the developer’s choice between a “forward” or “deferred” rendering engine for their game, which can have serious implications for the look, feel and features of a title.

For example, a “forward” rendering engine supports anti-aliasing, texture transparencies (like leaves and chainlink fences), and a wide variety of texture types (e.g. cloth and metal). Meanwhile, a “deferred” engine supports a large number of light sources, efficient rendering effects (“shaders”), and works well on game consoles. Thanks to Forward+, all of these technologies can efficiently coexist.

Forward+ further extends its leadership over forward and deferred engines by substantially increasing the diversity in the material types that a game may render. Whereas lighting on cloth and metal textures might look very similar on a game with a deferred engine (e.g. old gen consoles or console ports), Forward+ can render the distinct and realistic lighting differences between them.

Because of the way Forward+ is designed, it's suitable for use with both MSAA and SSAA.

Thanks for that.
I would assume that Mantle carries all the same graphics-rendering abilities that other current APIs carry? Or is it capable of rendering all the same effects, or more, or fewer?
Are there any graphical distinctions between Mantle and other APIs? Like TressFX, for example. Something that differentiates it or sets it above other APIs?

Yes, Mantle is a complete graphics API capable of doing anything you see today. And more. The primary advantage, however, is that it can speak directly to the hardware in a unique way that other APIs cannot.

Oops, didn't realize it was only the Radeon side... In that case...

What was the motivation to do something like Mantle when we have seen other projects like it (Glide) fail? What do you think will make your solution better or more enduring, especially since the console market didn't want it? (Granted, I will say my 7970 works amazingly at max settings in BF4.)

Game developers requested Mantle themselves. That's the key difference. The industry told us they want it.

Has AMD considered making downsampling software for high-end cards? As you guys have stated before, the 290 is meant for 4K monitors, but currently the standard for monitors and TVs is 1080p. It has been proven and tested that downsampling improves image quality. (I think it would be a great feature for SteamOS.)

We are not considering any such software.

Hi, I haven't seen any answers regarding your CPU department yet, just your GPU department. Is this being done on purpose because of news being released on 11th Nov, or have you just overlooked the questions?
If it is the latter: are there any Steamroller FX CPUs coming out, or was the 9000 series the end of the FX line? I'd rather know now, because I'd rather have a strong CPU and GPU separately than a weaker CPU and GPU sitting together.

No answers have been provided on CPUs, as all members of AMD in attendance today are from the Radeon business in Markham, Ontario. :) I cannot comment on products I don't intimately know!

I don't know if you have taken a look at your competitor's website, but the "GeForce" brand image is really strong: from their website and forum to their GeForce Experience software, everything is polished and consistent. I think you guys really need an overhaul of the "face" and product image you present to your customers, rather than the fragmented collection of places and visual themes it is right now. What do you think?

And I think you need actual support staff from AMD to answer some questions, who can assist politely and professionally. When I go there, I don't see the kind of lively discussion your competitor enjoys in their forum, and what is worse, some regulars who appear to be helpful sometimes give out rude comments and even get paranoid and accusatory towards customers who post technical problems.

Thanks for reading my concerns.

I invite you to check out the new www.amd.com, which was just updated for a uniform look and feel. I also invite you to interact with us on Twitter and Facebook, which are monitored 24 hours a day by individuals like myself who enjoy interacting with customers/fans/etc. I like taking the time to reach out to people.

You say that developers requested Mantle. How so?
And what is your favorite color? ;)

Johan from DICE sat across the table from Matt Skynner, the GM of our graphics business, and explained how he wanted a close-to-metal graphics API. Matt said "we can do that," and the rest was history.

Other devs are very excited when they learn about what Mantle is capable of, and how Mantle can help them bring their game to life on the PC.

//edit: my favorite color is eminence: http://en.wikipedia.org/wiki/Shades_of_purple#Eminence

"You want to take full advantage of product TDP to maximize performance, and that is accomplished with a 95C ideal operating temperature for the 290 and 290X. "

So there is no benefit in performance if the card could run cooler? Wouldn't you be able to get more performance before you hit 95c?

A better cooler increases the watts of heat that the product can emit before the 95C equilibrium temperature is reached. In turn, this raises the maximum permissible clockspeed (within the limit of product TDP) the board can sustain.

Users with the Accelero coolers are finding they're reaching the clockspeed limit of the product at a lower temperature limit than 95C. So you can see how the experience is very customizable and interesting for enthusiasts.

What are the chances of your marketing department creating a video of Jim Keller and Raja Koduri performing the Fusion Dance from Dragonball Z in order to promote Kaveri and HSA?

Such a dance would take 10 conferences to complete, and may require the use of a Potara Earring.

We all want to know:

- How much performance increase are we looking at with Mantle-enabled games?
- Are there any graphical improvements or options unlocked while using Mantle?

- Are we going to see more FX-series processors competing in the mid- and high-end markets?

Question 1:
I cannot give you a precise answer now, but the first performance numbers will be revealed during the AMD Developer Conference next week. Keep your eyes open on the 13th.

Question 2:
This is up to the game developer. Mantle is a complete and high-performance graphics API, therefore the developer can use the performance boosts to increase raw FPS, or they can sustain the same FPS and increase image quality vs. other APIs.

Question 3:
Not a CPU guy, sorry.

Hi Thracks,

Why is AMD not offering a 2GB GDDR5 version of the HD 6670 and HD 7750, rather than providing a 2GB DDR3 version of the same cards?
I think both of these cards are capable of making use of more than 1GB of video memory, compared to the DDR3 versions of the same cards.

A 2GB framebuffer would not be useful on these cards. The architectural implementations on these products target resolutions and image-quality settings that don't require 2GB of RAM. That allows the products to hit the specific prices users in this segment expect.

Thanks for hosting this!

You already addressed my question about when some tasty new games would be coming to Never Settle (coupon is sitting on my desk). Thank you!

I wanted to ask about plans for SteamOS support. Does that have an impact on your plans for the fglrx driver? And while I'm sure it's a secondary priority, is Mantle going to be supported by fglrx? Or maybe it's too early to know that.

My A10 / 7950 gaming box might move to the living room and become a Steam Machine, but if SteamOS had proper support I'd build a new Kaveri system for the living room and leave my Windows one where it is.

I want to have an answer for you, but the Steam Machines are still a very young enterprise. We are working closely with Valve to answer all of these questions, but I will say this:

Expect AMD-based Steam Machines in 2014, which means a stable and high-performance driver is required. We've already been making massive contributions to the Linux 3.11 and 3.12 kernels to substantially improve performance and features, so you can see the evidence in our efforts there.

Dear AMD,
I have always been a devoted customer, since the old ATI age, or namely the Radeon 7500 on AGP. For as long as I can remember, ATI/AMD and nVidia have passed the gaming performance crown back and forth, while I was just reaping the amazing performance of Radeons in content-creation software.

For the past 5 years I have been stuck on nVidia, because Adobe/Autodesk reached for CUDA. I would have really liked to slot in an HD 7750 with a passive cooler and enjoy some amazing OpenCL performance for a fraction of the price, but alas, CUDA is what works. Now that OpenCL is mature enough, Adobe has started to move away from CUDA to OpenCL. Autodesk still focuses on the CPU, but I do think the time for more GPU acceleration draws near.

1 - Do you have any ongoing project or collaborations with Software companies such as Adobe or Autodesk, to push up OpenCL adoption?
2 - Do you have any ongoing project to optimize drivers or enhanced support for AMD video cards, both workstation class and desktop class, in Adobe/Autodesk software?
3 - I know that workstation class cards are there for a reason, but will we see any of the W-series limited features to make their way down to the Radeon series?
4 - Do you plan on any major driver release that will increase the W-series performance in dynamics, fluids, ncloth and other simulations?
5 - Are you allowed to hint at a date for the next workstation card line release?

Thank you in advance.

1) So happens we do: http://www.amd.com/us/press-releases/Pages/amd-and-adob...

2) Specifically for workstation-class graphics, Adobe and Autodesk are a significant focus. They're amongst the biggest DCC packages in the world.

3) There are examples of workstation features that exist in Radeon: 10-bit color support, double-precision compute and more. But, by and large, their feature sets are distinct, as dictated by the respective libraries of software run on these products.

4) I'm not sure.

5) Nope!

Can you show us a roadmap on where AMD Radeon is heading over the next few years?

Thanks for answering these questions!

Can we expect a dual GPU from the R9 family, and if you can answer it when? Which quarter, month etc.

Sorry, guys. You know I cannot speculate on future roadmaps.

Hi, and thank you very much for this question-and-answer session!

Because it is really hard to make connections with AMD, even at exhibitions, I will use this chance to ask you some questions.

1. The Hawaii XT, with its 44 CUs in 4 blocks of 11 CUs each, is a really odd layout for a big chip. I have heard that this is the real physical situation. Is this true, or does the physical chip actually have 48 CUs?

2. Since the HD 5000 series AMD has had GDS, but AMD has never said anything about it. The only mention in the programmer's guide, at http://developer.amd.com/wordpress/media/2012/12/AMD_So... from 2012, reads:

"The GDS block contains support logic for unordered append/consume and domain-launch-ordered append/consume operations through the global wave sync (GWS)."

GWS is something lots of programmers have been waiting for ever since GPGPU came along...

Why does AMD say nothing about it, and where has it gone?

3. Is it true that the R9 290X (Hawaii XT) is only 1:8 DP:SP? If yes, why have you cut the DP:SP ratio down? The 7970 has 1:4 DP:SP.

4. Will we see GWS with OpenCL 2.0 in the future? The API declaration/call changes in OpenCL seem to show a global barrier for work synchronization across CUs.

5. Does Hawaii XT for the FirePro market have a 1:2 ratio, or still just 1:4 DP:SP?

6. Is there a chance we'll see RDMA soon with FirePro/Radeon cards? There are a lot of people out there who would praise you for it. If yes, please with zero-copy!

7. Is there a chance to see AMD at the SC next week?

8. What is the situation with a unified address space for GPU and CPU memory? AMD has said for a few years that they will give us GPUs with a coherent, unified address space. Is there a chance we'll see this in the near future?

I am not a developer, but I will answer as much as I can.

1) 4 blocks with 11 CUs each is the configuration of the R9 290X.

2) GDS via GWS should still be in the architecture.

3) Yes, the R9 290 and 290X are 1:8 DP. As to "why," I do not know, except to say that these are primarily gaming graphics cards, so it would make sense that the architectural implementations are optimized for those types of workloads. (For what the ratio means in raw throughput, see the arithmetic sketch after this list.)

4) I'm not sure.

5) I do not know what the FirePro team plans.

6) Not sure!

7) Yes!

8) 290 and 290X support system unified addressing.
 
What is Team Red or is it Red Team? Can I join? Lol

#AMDRedTeam!

My colleagues Heather, Stella and I came up with the idea to give people a chance to share their GPUs, rigs, favorite games and more. Sometimes people need a "spark" of opportunity, and we hoped that the AMDRedTeam idea for Twitter would give people that chance to share their systems. So far the response has been incredible.

If you want to join, Tweet @AMDRadeon and ask to join!

Hi, thanks for coming here and answering questions. My question is sort of CPU-related, but not about the next-gen parts that aren't released yet, so hopefully I get my answer.
Anyway, what I would like to know is: will Mantle remove or ease the CPU bottleneck? E.g., if I play Mantle-supported games, will it be worth my while buying a high-end R9 290 card to pair with my Trinity quad-core CPU, thanks to the draw-call increase? Or will the CPU still require a big upgrade to get enough out of the card to warrant buying it over, say, a 7870?

The draw-call improvements of Mantle do help alleviate cases where the CPU is the bottleneck. Mantle is very good at parallelization. Beyond that, it's too early to say.

Where can I get AMD Radeon case stickers? :p

Any estimate for crossfire stutter to be completely fixed?

Improvements to Linux drivers?

1) See here.

2) Of course. We've already been making massive commits to the Linux 3.11 and 3.12 kernels to substantially improve performance and features for Radeon on *NIX. Steam Machines only encourage that process.

I would suggest a second AMD AMA for the CPU/APU side of things. Lots of questions here on the subject(s).

Agreed.

Also to AMD: what's the hold-up on mobile Kaveri APUs? I'm hearing March 2014 or later for the first retail products; that's 5 months from now. Or, on another time scale, 4 months from the first appearance of a 28nm/GCN APU in a retail product (XB1/PS4).

Has focus on the XB1 and PS4 pulled resources from getting Kaveri to the channel?

Kaveri looks like a great product that I want to support; I'm just not sure if I'll be able to wait that long...

regards
Jordan

I can't answer any of these questions directly, because the truth is that I don't know. But I hope some of them will be answered for you at the AMD Developer Conference next week. Lots of information to be had on HSA and the like there.

What can TrueAudio do outside of gaming?

I love this question.

First and foremost, TrueAudio is a programmable DSP. Its most obvious use is signal processing for gaming audio, but you could conceivably program it to do audio filtering, voice control, biometrics or anything else a powerful DSP is capable of.

Are you planning to raise the Fan PWM Target when the GPU sound solution detects loud sound/music?

No, but we do plan to switch to direct RPM control vs. PWM very shortly. PWM control does not always yield the exact RPM you're looking for. Converting an electrical pulse to a mechanical rotation isn't an exact science! It's subject to the design variances of the fan and the PWM module, so we're going to make it the exact science it deserves to be.

Not a literal answer to your question, but a fun fact I wanted to share.

Is the necessary equipment to measure the back-EMF, or an encoder for detecting the speed, already on the current boards? That makes me wonder why it wasn't feedback-controlled in the first place. Or are these plans for future hardware revisions?

We can already monitor the speed and adjust accordingly, but direct RPM control is simply smarter.

How do you convince developers to use Mantle when other APIs are vendor neutral?

For existing games, how hard or easy is it to "port" it to use Mantle?

What's your position on Mantle and OS X? (currently Apple use nVidia GPUs, so it may not make much sense, but without Mantle AMD GPUs might be a hard sell for future Macs)

We haven't had to convince them. Every single one of them has come to us and asked for it without prompting!

Right now we are concentrating on the PC platform for Mantle, so that's our position on OS X.

I have a couple of questions, hopefully they're not too far-reaching:

For Hallock: PR management for AMD in South Africa is non-existent. One guy seems to be doing the job on his own, and there's quite a lot of mindshare being taken over by Nvidia here (Intel as well, but as Thracks said, no CPU guys are here). Why isn't AMD doing any marketing in my country (which warmly welcomed an APU stand at a major games convention in 2012)? And if you're able to answer: what is AMD doing for South Africa in particular to make sure prices stay reasonable and keep your products affordable? Case in point: currently an A10-6800K costs the same as a Core i5-4330. On the GPU front you're still mostly price-competitive.

For Corpus: Mantle has been talked about and mentioned in the same breath as some big AAA studios. What can Mantle offer to indie developers, and would the API be able to use some of the same optimisations on lower-class products? (e.g. the G-series chips with GCN graphics)

For Nekechuk: Since the R-200 refresh we've gone from having several variations on a theme for GCN down to just seven SKUs (R7 240, 250, 260X; R9 270X, 280X, 290, 290X). Given that there's space inside your lineup for an R7 260 and R9 270, 280, would it be feasible for AMD to release cards for those model numbers, or can we expect a simplified lineup of just seven cards in every generation from now on? (I'm personally hoping for the latter)

For Nalasco: Tom Petersen is everywhere on the internet talking about GeForce products. I hardly see you anywhere on the net or YouTube. Will this change?

For Parfitt: G-Sync got tongues wagging in spite of the fact that people wouldn't be able to see the difference on a compressed YouTube video. Is AMD considering a similar solution, or working towards one that's more open than G-Sync? I was also disappointed that I'm not able to run an Eyefinity setup with monitors of different sizes and resolutions, despite this being one of the features teased in the Catalyst 12.2 beta, called Eyefinity bezel compensation (http://bit.ly/1cEGAdh). Lastly, AMD still does not support PLP (portrait-landscape-portrait) monitor setups for gamers - will that ever change?

From me: I don't know the situation in South Africa, or the African continent in general, as I'm the PR lead for North America. With respect to pricing, however, that will always depend on the value of the local currency compared to the US$, along with the customs/duty/tax/import of a country. So I would ask: what is South Africa doing to ensure that prices on premium electronics are competitive on the world stage? This is a struggle Canada, my home, is undertaking right now.

Ritche replies: Mantle is a full graphics API. Anything indie developers are doing with other APIs today can be done on Mantle, plus you get direct hardware access. Anything with a GCN graphics core can leverage Mantle. We're open to working with any developer on Mantle! Please contact us.

Nalasco replies: Tom Petersen performs a different role from me, but you'll see AMD out and evangelizing more in the months ahead.

And Shane says: You'll hear more from us on G-Sync soon. Bezel compensation is designed to treat the bezels of matched-size/resolution displays as an object that game content passes behind, rather than an object that chops game content in half. The feature is not intended to support mixed-size or mixed-resolution configurations. With respect to PLP, that is a feature we have in development, but I don't have an ETA at this time.

To complete the set of three and then shut up: the 95°C temperature target fits nicely with the theory that you're trying to push the same heat flow through a smaller footprint, relative to nVidia's Titan at its 80°C (considering the die-size ratio and an ambient temperature of 30°C max). Was the reduced (and more affordable) die size the rationale for reaching out closer to the absolute maximum of what chips can handle before melting?

Product cost is a function of die size (and other parameters). We were confident that we could achieve industry-leading performance on a twenty-something percent smaller die using GCN, and we knew that would, in turn, give us a more attractive price for gamers.

We went for it. And thus $399-$549 hella fast GPUs were born.

Why are AMD graphics card drivers developed and provided for shorter periods of time than Nvidia's? They seem to host really, really old drivers, updated for use on modern computers, and I haven't seen that so much from you guys.

Our driver support is identical to NVIDIA's. They support GPUs from 2010 (or later) in Windows 8.1, as do we.

I have a couple questions for you guys:

Regarding the 290/290x, when can we expect to see vendor based cooling solutions?

How would you respond to reviewers' complaints regarding these cards' reference-style blowers? All of them reiterated the same "it feels cheap compared to Nvidia's solutions" line and also moaned about the incredible noise of the fans. While I understand you design them to reach an ideal setting for the underlying technology, many have felt that the solutions are lacking and seem to indicate very little progress over your previous (7xxx) coolers.

How did you guys react to Nvidia's G-Sync and Shadowplay? Any plans for a rebuttal?

Do you believe that TrueAudio would provide an effective replacement for discrete soundcards/dacs/amps for audio enthusiasts?

How do you feel about the current shareholder structure, with the majority ownership (or last I checked significant stake at least) by the UAE's Abu Dhabi Investment Authority (ADIA)? Have their goals been largely in line with what you see as being the employee goals or has there been tension in that respect?

Did AMD originally intend to price the 290x at a higher price point but lowered it to further take market share from Nvidia?

I'm not certain when you'll see third-party solutions, I'm afraid. I don't sit in on the meetings that would determine this.

We're presently assessing G-Sync, but have no comment at this time.

TrueAudio is NOT designed to replace user soundcards! Please see this interview with MaximumPC which explains a lot.

The 290X debuted at the price we intended from the day it was conceived. :)

Hello Thracks from AMD.
My first GPU ever was a budget ATI Radeon card. I have somewhat very fond memories connected with it. I remember when I bought my first PC, the fastest card was the X850XT. I was having dreams about owning it. Oh boy, those were the times! BTW, did you work for ATI too?
Now my question: I totally love the R9 290X. But I found one thing strange about it. I liked that the ROPs were doubled from the HD 7970 and the memory bus was bumped to 512-bit, but the shader count was bumped only to 2816 and the CUs to 44. Wouldn't it have been nicer, from a chip-engineering perspective, to throw 3072 shaders (48 CUs) into Hawaii, making it a rounder 50% increase over the HD 7970?

3072 shaders would not fit into the die size we were targeting for the 290X, and from an architectural perspective, 2816 is a balanced shader count for the render backends and bus width.

Do you plan on having TrueAudio on the more mid-range cards, like future 270X or 280X cards?

TrueAudio was designed for the R9 290(X) and the R7 260X as the top-end cards in the R9 and R7 Series, respectively. The 270X and 280X do not have the necessary hardware to enable TrueAudio.

Hey guys, I've been an AMD fan since I built my first machine. I've got a few questions.

1. At UCF, my professor said you guys had an office a little way from campus where you make the GPUs. I've never seen the office, but is that actually what you guys do there?
2. Do you guys keep little ATI trinkets hidden in your desks or are you guys totally AMD now?

1) There's an office in Miami that does hardware/software QA and some board engineering.

2) You still find ATI doodads floating around from time to time!

Their latest driver (supporting up to Win 8.1) supports the GeForce 8000 series, which dates back to 2006. The Radeon 4000 series came later, yet you guys have already ceased supporting it in Win 8.1.

AMD Catalyst 13.9 supports HD 2000, 3000 and 4000 in Windows 8.1. The driver must be installed through the device manager, however, as it has not yet passed Microsoft driver certification, rendering it ineligible for automatic installation via .exe.

Hi, AMD. Great AMA so far!

I'm currently on a 7950 CF setup. Great cards with amazing value. My question is: will Mantle help with CF scaling and GPU usage? In BF4, for example, my 2600K @ 4.5GHz is nearly maxed out, yet my cards are at 75% usage most of the time, sometimes even dropping to 50%. This is on a 64-player server, so this is clearly a CPU bottleneck (or maybe it's the game being unoptimized?). Will Mantle help with this?

On the topic of CF and mantle, is there anything else aside from scaling and GPU usage that you feel Mantle will help with?

We're assessing some tweaks that will bring BF4 to 90-95% scaling 1->2 GPUs. That's without Mantle.

Going forward, Mantle could help any game with CPU bottlenecking and multi-GPU scaling. It's deeply parallelized as an API.
 
When will AMD have quality drivers better than Nvidia's Linux drivers?

Has the Steam Linux client meant that AMD has committed more resources to this undervalued department?

Is there a roadmap for Linux driver improvements?
 
Will AMD release anything similar to ShadowPlay in the next few months?

Our approach to providing game streaming to users is through the Raptr app: http://raptr.com/amd

I strongly suggest those of you interested in HPC visit our dedicated developer forum: http://devgurus.amd.com/welcome

I'm sure you would be able to find the answers to your questions there. :) AMD developers frequent that community and can help with ISA, APIs, SDK questions.

Thanks for the response! I can't wait to see those tweaks :bounce:

Two more questions:

I'm on a 144hz monitor ( Asus VG248QE ). There are issues with this refresh rate on the desktop like cards not downclocking. I have fixed this by dropping down to 120hz. Thing is, I get random artifacts while on the desktop and/or browsing. Is this a driver issue? If so, any plans on resolving it?

Second question - On the topic of 144hz, when I first got my 2nd card to CF, I would get BSODs when loading games @ 144hz. Games like Hitman where it lets you set your settings before loading the game, and Sleeping Dogs where you can't change refresh rate. There are people with 7990's that also have this issue and only happens in CF.

I've fixed it by adding a 2nd CF bridge ( no idea how or why ), however those with 7990's aren't so lucky. Dropping down to 120hz fixes it. Will this be resolved in a future driver?

I've taken your post and handed it directly to the project lead for our drivers. A fix will depend on other priorities, but I can confirm personally that they are aware.

Hey guys, thanks for your answer to my last question. I have another to ask you:
What happens when a 280X and a 7970 are in crossfire and you start a mantle-enabled game. Do you not get mantle? Or does it only run on the 280X?

Both GPUs are Mantle-ready. You would have Mantle and CrossFire.

My questions 9 and 10 are not HPC/GPGPU-specific questions; are you able to answer them? I think many people are really interested in the HPM<->HP question ;)

And FCAT is a BIG thing in the German community.

The answer to #9: it's 28nm HP. HPM is for mobile solutions.

The answer to the FCAT question: FCAT is not the be-all and end-all of frame pacing. There are many provable scenarios where the observed performance is stutter-free, whereas FCAT results suggest that it should be stuttering like crazy. If I recall correctly, Tomb Raider is an instance of this. People should use their own eyeballs vs. relying on an automated test, because the automated testing absolutely does not tell the whole story.

What's clear, however, is that we had an issue with the consistency of frame delivery, and we've largely resolved that problem. The remaining scenarios will be resolved this quarter with a driver update that intelligently and algorithmically normalizes frame times.

Hey there friends!

It's getting on in the day, and so we're going to break for the evening to give our guests an opportunity to eat, rest, and relax for a bit after all the great questions. Please feel free to continue posting your questions in this thread, and the reps from AMD will be back tomorrow morning to follow up with responses to the overnight posts. Thanks all - and keep the great questions rolling!

-JP



And we're back...

Will Mantle support an official / semi-official GCN ISA assembler, exposing more of the GPU than is available today? There are things you can only do with an ISA assembler that are impossible in OpenCL (not to mention DX or DirectCompute), so that kind of access to the GPU is sometimes vital and can boost performance or accelerate an implementation. Will it have some kind of assembler that maps directly to GCN ISA commands, or will it compile to AMD_IL?

I’m not a developer, so I’m unfortunately unable to intelligently answer that question. What I can say is that we’re unveiling the architecture of the API next week at the AMD Developer Conference, and that may answer more of your question.

Hello AMD Reps,
I'm surprised I haven't seen this asked yet, and if I missed your reply, I'm sorry. But here it goes:
Question 1: Would I be safe investing in a 990FX motherboard for a Steamroller CPU, or whatever is next in your line-up (some suspect you guys may be releasing something else for it)? Or is it a dead socket?
Question 2: Could I CrossFire an R9 270X with my HD 7870? If so, would I need a CrossFire bridge?
Question 3: Would you recommend I go with a secondary GPU for 1440p, or sell my HD 7870 and upgrade to an R9 280X?
Question 4: What is the expected increase in performance with Mantle?
Question 5: What's the point of this thread if you can't get answers on CPUs? Lol, half the threads on this website want to know what's going down with Steamroller so we can prepare our wallets!!!

1) I’m not on the CPU team, so I don’t know the answer to this question.

2) Yes, but we do not test or qualify such configurations so I cannot guarantee that it will work properly. You would need a CrossFire bridge.

3) I know one GPU versus two is contentious, and always will be, but I think two 7870 GPUs for 1440p will ultimately provide more performance than a single 280X.

4) Stay tuned for the AMD Developer Conference next week. On the 13th, one of the Mantle-supporting game developers will be introducing the first public demonstration with performance figures.

5) Because lots of people have graphics questions, and THG asked us in Hawaii to participate. :)

@ojas: Again, there is so much in this thread. It's hard to read through everyone's posts, this late at night especially. Thank you for answering my questions.

@AMD rep: Another thing I've wondered: are we going to see R9 290s and 290Xs with non-reference cooler designs available to the public anytime soon?
That would be the ticket for me! Not voiding my warranty to get decent cooling performance! I'd probably sell my sad little HD 7870 in an instant! lol

I’m sorry, I don’t sit in on the meetings that determine the roadmap for partner solutions. I don’t know.

Hi AMD staff!
Have a few questions (had more, but others have already asked):

1. What's the minimum guaranteed base clock on the 290 and 290X?

There is no minimum clock, and that is the point of PowerTune. The board can dither clockspeed to any MHz value permitted by the user’s operating environment.

2. We've seen reports from Tom's Hardware that retail 290X cards are clocking much lower (someone posted a chart on this page above), and even a user on Tech Report claiming much lower clocks than review samples have.

Is this simply because the current PowerTune implementation is heavily dependent on cooling (which will be variable from card to card)?
This issue with the 290X is causing people to be cautious regarding the 290 as well.

Plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail board test and got the expected results: performance is identical to the AMD-issued samples.

Every 290X should be running 2200 RPM in quiet mode, and every 290 should be running 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare and underperforming products, wherever they may exist.

3. In light of (2) and the fact that AnandTech went so far as to recommend AGAINST the 290 due to the noise it made (i think they measured over 55 dBA), wouldn't it have been a better idea to re-do the reference cooler? Maybe make it a dual-fan blower?

Having addressed #2, we're comfortable with the performance of the reference cooler. While the dBA measurement is hard science, user preference for that "noise" level is completely subjective. Hundreds of reviewers worldwide were comfortable giving both the 290 and 290X the nod, so I take contrary decisions in stride.

4. Partly because of (2) and (3), doesn't the 290 make the 290X pointless?

Hardly! The 290X has uber mode and a better bin for overclocking.

5. Wouldn't it have been a better idea to keep the 290 at a 40% fan limit (and thus be quieter) and allow partner boards to demonstrate Titan-class performance at $425-450?

No, because we’re very happy with every board beating Titan.

6.a.) Open? How? It's a low-level API, exclusive to GCN. How's it going to be compatible with Fermi/Kepler/Maxwell etc. or Intel's HD graphics? For that matter, will you be forced to maintain backwards compatibility with GCN in future?

You're right, Mantle depends on the Graphics Core Next ISA. We hope that the design principles of Mantle will achieve broader adoption, and we intend to release an SDK in 2014. In the meantime, interested developers can contact us to begin a relationship of collaboration, working on the API together in its formative stages.

As for “backwards compatibility,” I think it’s a given that any graphics API is architected for forward-looking extensibility while being able to support devices of the past. Necessary by design?

6.b.) All we know from AMD so far about Mantle is that it can provide up to 9x more draw calls. Draw calls on their own shouldn't mean too much if the scenario is GPU-bound. You suggest that it'll benefit CPU-bound and multi-GPU configs more (which already have 80%+ scaling).

That said, isn't Mantle more of a Trojan horse for better APU performance, and increased mixed APU+GPU performance? AMD's APUs are in a lot of cases CPU-bottlenecked, and the mixed-mode performance is barely up to the mark.

I suggested that it’ll benefit CPU bottlenecking and multi-GPU scaling as examples of what Mantle is capable of. Make no mistake, though, Mantle’s primary goal is to squeeze more performance out of a graphics card than you can otherwise extract today through traditional means.

6.c.) All said and done, will Mantle see any greater adoption than GPU accelerated PhysX? At least GPU PhysX is possible on non-Nvidia hardware, should they choose to allow it.
Wouldn't it have been better to release Mantle as various extensions to OpenGL (like Nvidia does), given the gradual rise of *nix gaming systems? And Microsoft's complete disinterest in Windows as a gaming platform...or heck, even in the PC itself.

It’s impossible to estimate the trajectory of a graphics API compared to a physics library. I think they’re operating on different planes of significance.

I will also say that API extensions are insufficient to achieve what Mantle achieves.

6.d.) Developers have said they'll "partner" with you, however the only games with confirmed (eventual) support are BF4 and Star Citizen. Unreal Engine 4 and idTech don't seem to support Mantle, nor do their creators seem inclined to do that in the near future.
Is that going to change? Are devs willing to maintain 5 code paths? It would make sense if they could use Mantle on consoles, but if they can't...

The work people are doing for consoles is already interoperable, or even reusable, with Mantle when those games come to the PC. People may have missed that it’s not just Battlefield 4 that supports Mantle, it’s the entire Frostbite 3 engine and any game that uses it. In the 6 weeks since its announcement, three more major studios have come to us with interest on Mantle, and the momentum is accelerating.

7. With TSMC's 20nm potentially unavailable till late next year, is AMD considering switching to Intel's 22nm or 14nm for its GPUs? Sounds like heresy, but ATI and Intel weren't competitors.


8. Regarding G-Sync: what would be easier, licensing Nvidia's tech and eventually getting them to open it up, or creating an open alternative and asking them to contribute? There is, after all, more excitement about G-Sync than stuff like 4K.

We fundamentally disagree that there is more excitement about G-Sync than 4K. As to what would be easier with respect to NVIDIA's technology, it's probably best to wait for an NVIDIA AMA.

9. Is AMD planning on making an OpenCL-based physics engine for games that could hopefully replace PhysX? Why not integrate it with Havok?

No, we are not making an OpenCL physics library to replace PhysX. What we are doing is acknowledging that the full dimension of GPU physics can be done with libraries like Havok and Bullet, using OpenCL across the CPU and GPU. We are supporting developers in these endeavors, in whatever shape they take.

10. We've seen that GCN has exemplary OpenCL performance in synthetic benchmarks, yet in real-world tests GCN cards are matched by Nvidia and Intel solutions. What's going on there?

You would need to show me examples. Compute is very architecturally-dependent, however. F@H has a long and storied history with NVIDIA, so the project understandably runs very well on NVIDIA hardware. Meanwhile, BitCoin runs exceptionally well on our own hardware. This is the power of software optimization, and tuning for one architecture over another. Ceteris paribus, our compute performance is exemplary and should give us the lead in any scenario.

11. Are the video encoding blocks present in the consoles (PS4, Xbone) also available to GCN 1.1 GPUs?

You would have to ask the console companies regarding the architecture of the hardware.

12. What is the official/internal AMD name for GCN 1.1? I believe it was Anand of AnandTech that called it that.

We do not have an official or internal name. It’s “graphics core next.”

13. I remember reading that GPU PhysX will be supported on the PS4. Does that mean PhysX support will be added to Catalyst drivers on the PC? Or rather, will Nvidia allow AMD GPUs to run PhysX stuff?
A lot of questions, but I've had them for a long time. Thanks!

No, it means NVIDIA extended the PhysX-on-CPU portion of their library to developers interested in integrating those libraries into console titles.

Hello AMD representatives, and thank you for this opportunity. By the way, you should do this more often!
I am a bit of an AMD fan, because I've always seen AMD as the "Robin Hood" of IT technology: AMD has always focused on giving good performance at very affordable prices. Now for the questions:
1) It's obvious that AMD had a great vision that matured into a strategy spanning several years:
a) first you win all the major consoles out there, and by that you ensure that most games will (have to) be optimized for AMD hardware.
b) then you develop Mantle in tight cooperation with (some of) the major game designers out there to solidify your gains.
It is obvious that such a strategy involves the commitment of significant resources (especially since it covers both the CPU and GPU sides of the business). Is this vision from the Dirk Meyer era, or has it grown under Rory Read's tenure?
2) According to at least two reviews of the 290 that I read (from the Anandtech AMD center and Tom's Chris Angelini), the reference cards seem to outperform the 290X despite the 4 CUs that have been cut. That's (as I read) due to the increase in fan speed to 47%, which made both reviewers complain about the noise of the card. This means the 290X's performance gain over the 290 will not justify (if at all) the extra cost. Do you plan to improve the 290X later on?

3) Any plans to change the reference stock coolers, or alternatively to offer a premium option for cooling (for example, water cooling, as was the case with the FX-9590)?

4) If I remember correctly, the 7990 appeared very late (1 year later than the single-chip options). Is there a 299X (dual-chip solution) in the works?

5) I hear a lot about the advantages that general processing will gain from bringing the GPU closer to the CPU (parallel workloads can be executed more efficiently by a graphics card), but is there any advantage for the GPU in having this close integration with the CPU (workloads that can be more easily delegated from the GPU to the CPU)? If yes, please give some examples.

6) Are there any plans to develop a Radeon GPU specifically for the mobile (mobile phones, tablets, smart wearables) segment?

7) Will there be a GCN 1.2 or 2.0, or are you already working on a future architecture?

8) Steam has a huge number of subscribers and definitely has a working model that can rival that of gaming consoles. The fact that they are serious about building a console ecosystem around their service is not to be taken lightly. Nvidia was very quick to rally to Steam in order to counter your design wins in the console market. I know it's been asked before, but do you plan to sit this one out, or will we see AMD getting involved in the Steam console(s) project?

9) And the last one: did you have to make any sacrifices in the GPU architecture in order to ease the unification with the CPU? If yes, please give a few examples; if no, please explain why not.

Thanks in advance for your responses, and keep up the good work! :)
I hope the editors at Tom's will create a nice "front page" article from all the information you gave us today, so that readers who missed these talks can catch up too.

1) The gaming strategy you’re seeing today is the brainchild of the graphics GM Matt Skynner, along with his executive colleagues at AMD. It comes with the full support of the highest echelons at AMD.

2) I want to reiterate this answer: plain and simple, THG and Tech Report have faulty boards. You can tell because Sweclockers performed the same retail-board test and got the expected results: performance is identical to the AMD-issued samples.

Every 290X should be running at 2200 RPM in quiet mode, and every 290 at 2650 RPM. We will be releasing a driver today or tomorrow that corrects these rare underperforming boards, wherever they may exist. (A quick way to check your own board's fan speed is sketched at the end of this reply.)

3) I don’t sit in on these engineering meetings.

4) I cannot speculate on future products, I’m sorry.

5) I cannot think of any reverse examples where offloading from the GPU to the CPU would be beneficial.

6) We have no plans to enter the smartphone market, but we’re already in tablets from companies like Vizio with our 4W (or less) APU parts.

7) Graphics Core Next is our basic architecture for the foreseeable future.

8) You will see AMD-powered Steam machines in 2014.

9) No, it’s more about changing the direction of CPU architecture to be harmonious with GPUs. Of course the GPU ISA has to be expanded to include things like unified memory addressing and C++ code execution, but this capability already exists within Graphics Core Next. So, on the GPU side, it’s all about extending the basic capabilities of the GPU, rather than changing the fundamentals to get GPGPU.
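On the fan-speed point in answer 2, anyone curious can read their own board's fan RPM through AMD's public Display Library (ADL) SDK. The sketch below is a minimal illustration rather than anything AMD supplied: it assumes a Windows box with a single AMD GPU (adapter index 0, thermal controller 0) and the ADL SDK headers on the include path.

[code]
/* fan_check.c - read the current fan RPM through AMD's Display Library (ADL).
 * Hypothetical sketch: assumes the ADL SDK headers and a single AMD GPU. */
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include "adl_sdk.h"

typedef int (*ADL_MAIN_CONTROL_CREATE)(ADL_MAIN_MALLOC_CALLBACK, int);
typedef int (*ADL_MAIN_CONTROL_DESTROY)(void);
typedef int (*ADL_OVERDRIVE5_FANSPEED_GET)(int, int, ADLFanSpeedValue *);

/* ADL requires the caller to supply its memory allocator. */
static void *__stdcall ADL_Main_Memory_Alloc(int iSize) { return malloc(iSize); }

int main(void)
{
    /* atiadlxx.dll is the 64-bit library; atiadlxy.dll is the 32-bit one. */
    HINSTANCE hDLL = LoadLibraryA("atiadlxx.dll");
    if (!hDLL) hDLL = LoadLibraryA("atiadlxy.dll");
    if (!hDLL) { fprintf(stderr, "ADL library not found\n"); return 1; }

    ADL_MAIN_CONTROL_CREATE Create =
        (ADL_MAIN_CONTROL_CREATE)GetProcAddress(hDLL, "ADL_Main_Control_Create");
    ADL_MAIN_CONTROL_DESTROY Destroy =
        (ADL_MAIN_CONTROL_DESTROY)GetProcAddress(hDLL, "ADL_Main_Control_Destroy");
    ADL_OVERDRIVE5_FANSPEED_GET FanSpeedGet =
        (ADL_OVERDRIVE5_FANSPEED_GET)GetProcAddress(hDLL, "ADL_Overdrive5_FanSpeed_Get");
    if (!Create || !Destroy || !FanSpeedGet) { FreeLibrary(hDLL); return 1; }

    /* 1 = enumerate connected adapters only. */
    if (Create(ADL_Main_Memory_Alloc, 1) != ADL_OK) { FreeLibrary(hDLL); return 1; }

    ADLFanSpeedValue fan = { 0 };
    fan.iSize = sizeof(ADLFanSpeedValue);
    fan.iSpeedType = ADL_DL_FANCTRL_SPEED_TYPE_RPM; /* ask for RPM, not percent */

    /* Adapter 0 / thermal controller 0: assumptions for a one-GPU system. */
    if (FanSpeedGet(0, 0, &fan) == ADL_OK)
        printf("Current fan speed: %d RPM\n", fan.iFanSpeed);
    else
        fprintf(stderr, "Could not read fan speed\n");

    Destroy();
    FreeLibrary(hDLL);
    return 0;
}
[/code]

Built against the ADL SDK, this should report roughly 2200 RPM on a 290X or 2650 RPM on a 290 when quiet mode is behaving as described above.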

Hi there! Are you working with Stanford on getting GPU folding working under Linux? I'd like to migrate everything I can away from Windows while staying on the red team.

And regarding your co-operation with EA, will we see the next-gen sports game engine supporting Mantle?

1: This is something for Stanford to undertake; it's not really something we can "help them" with, as we already provide the necessary tools on our developer portal. (A quick sanity check of those tools is sketched below this reply.)

2: Mantle is in the Frostbite 3 engine. EA/Dice have disclosed that the following franchises will soon support Frostbite: Command & Conquer, Mass Effect, Mirror’s Edge, Need for Speed, PvZ, Star Wars, Dragon Age: Inquisition. With respect to unannounced titles, I guess we all have to wait and see what they have in store!
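A side note on the folding answer above: the "necessary tools" for GPU compute on Linux amount to a working OpenCL stack, and it's easy to sanity-check that one is installed before pointing a client at the GPU. This is a minimal sketch, assuming the OpenCL headers and an ICD (such as the one Catalyst installs) are present; the 8-entry buffers are arbitrary sizes chosen for brevity.

[code]
/* check_cl.c - list GPU OpenCL devices on each installed platform. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS ||
        num_platforms == 0) {
        fprintf(stderr, "No OpenCL platforms found\n");
        return 1;
    }

    for (cl_uint p = 0; p < num_platforms; ++p) {
        char platform_name[256];
        clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                          sizeof(platform_name), platform_name, NULL);

        /* Ask this platform for GPU devices only; skip it if there are none. */
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < num_devices; ++d) {
            char device_name[256];
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(device_name), device_name, NULL);
            printf("Platform \"%s\": GPU device \"%s\"\n",
                   platform_name, device_name);
        }
    }
    return 0;
}
[/code]

Build it with something like gcc check_cl.c -lOpenCL; if the Radeon shows up as a GPU device, the compute side is ready for whatever Stanford ships.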

Thank you for answering questions.
1. Do any of you play games?
If so, what games do each of you play, and what is your favorite game?

2. What thermal paste was used on the 7000 series and on the R series?

1: Right now I’m playing Tomb Raider, Dishonored, and Chivalry: Medieval Warfare.

2: I don’t remember the name, but it’s a special and easily-applied compound that cures during manufacturing.

I would like to have your thoughts on the price disparity between different countries.
For instance, the R9 290 can't be found for less than 500 US dollars in Australia, but can easily be found in America (Newegg/Amazon) for 400 US dollars. This is also seen in India, where the 7850 still costs around 360 US dollars.

Also, do price drops issued by AMD apply everywhere? Even old products are way more expensive in Australia compared to the United States. The 7870 is supposed to be 200 USD, but you'd be hard pressed to find one for less than 250 USD in Australia. I know some of it is because of tax, but it is an exorbitant premium we have to pay (some 300 dollars more) for the exact same products.

We set suggested prices for our GPUs in US dollars. The prices you see in any other country are the product of tax, duty, import, and the strength of a currency compared to US dollars. Once a retailer purchases the board from us, we have absolutely no control over what they do with the product.

I currently live in Canada, and as a nation we are struggling with the same problem on premium electronics. We're a nation of 35 million people looking longingly across the border at a country of 350 million people with some of the least expensive electronics prices in the world. I come from the US, and it was immediately obvious that Canada's fiscal policies make electronics more expensive than what I'm accustomed to. I hear about this struggle every day in the news, but I accept (with frustration) that it is a product of the fact that Canada imports everything with higher tax/duty/import fees than the US.
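To make the tax/duty/currency point concrete, here is a toy calculation of how a US suggested price can inflate into a local shelf price. Every rate below is a made-up assumption for illustration (the 10% Australian GST aside), not an AMD or retailer figure.

[code]
/* price_example.c - illustrative build-up from USD MSRP to a local price.
 * All figures are hypothetical assumptions, not AMD pricing data. */
#include <stdio.h>

int main(void)
{
    double msrp_usd      = 399.00; /* hypothetical US suggested price */
    double fx_usd_to_aud = 1.10;   /* hypothetical exchange rate */
    double import_duty   = 0.05;   /* hypothetical 5% duty on landed cost */
    double retail_margin = 0.08;   /* hypothetical retailer markup */
    double gst           = 0.10;   /* Australian GST of 10% */

    /* Convert, add duty, add the retailer's cut, then tax the lot. */
    double landed = msrp_usd * fx_usd_to_aud * (1.0 + import_duty);
    double shelf  = landed * (1.0 + retail_margin) * (1.0 + gst);

    printf("Hypothetical AU shelf price: %.2f AUD (from %.2f USD)\n",
           shelf, msrp_usd);
    return 0;
}
[/code]

With those made-up rates, a 399 USD card lands at roughly 550 AUD, which is the same order of gap the posters above are describing.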
 
When will AMD have quality drivers better than Nvidia's Linux drivers?

Has the Steam Linux client meant that AMD has committed more resources to this undervalued department?

Is there a roadmap for Linux driver improvements?

You need to ask the questions here, mate, lol. I believe there was a Linux question or two answered; look above.

gm!
Who is the AMD guy talking, please?
I want to ignore him in future :)

It's OcUK's very own Thracks. Now wash your mouth out with soap and water. :p
 
Why are you spamming all this then? Why not just post a link to the transcript?

Because there are a lot of repeated questions and quotes between each reply from Thracks. I thought it would make for easier reading if I spent the time going through and picking out the specific questions and answers. Nice to see it's appreciated, though. :D

I actually linked the transcript in the first post. I guess you didn't bother to look, though.
 
Spamming?

Oh dear, some people really hate you taking up space on this forum, don't they?

Great reading, and thanks for posting it all up. I hope Mantle lives up to the hype.
 
The 3GB of RAM making it great at 4K! :p

Yep, I guess they are about as useless as the 290Xs @ 4K. Long live the 6GB Titans.

Here is a prediction: people will still be using Titans long after everyone else has given up on the GTX 780, 780 Ti, 290 and 290X. :D
 
Spamming?

Oh dear, some people really hate you taking up space on this forum, don't they?
:p


Great reading, and thanks for posting it all up. I hope Mantle lives up to the hype.

I think it's going to be a big thing for AMD users. Anything that gives me more performance without having to physically upgrade the GPU is a good thing. Let's hope it lives up to it.

Yep, I guess they are about as useless as the 290Xs @ 4K. Long live the 6GB Titans.

Here is a prediction: people will still be using Titans long after everyone else has given up on the GTX 780, 780 Ti, 290 and 290X. :D

It must be a sucker punch with all the reviews out there showing a 290X comfortably ahead of a Titan at 4K, though, given it's only got 4GB vs 6GB.
 
It's OcUK's very own Thracks. Now wash your mouth out with soap and water. :p

Is Thracks male or female? I shouldn't gender-assume in my replies; I hate it when it's done to me, lol.

But apart from that, I can't believe the person talking believes what they are saying about PowerTune and running at 95°C being genius and the rest of the world just not getting it, yada yada.

I appreciate them bringing cheaper GPUs to us, I really do, but they should be honest about the downfalls instead of trying to dress them up as a *feature*.

But I guess it's kinda funny too... :p
 
Spamming?

Oh dear, some people really hate you taking up space on this forum, don't they?

Great reading, and thanks for posting it all up. I hope Mantle lives up to the hype.

+1

On a serious note, if people don't find the content of this thread interesting, they don't have to read it.
 
Is Thracks male or female? I shouldn't gender-assume in my replies; I hate it when it's done to me, lol.

But apart from that, I can't believe the person talking believes what they are saying about PowerTune and running at 95°C being genius and the rest of the world just not getting it, yada yada.

I appreciate them bringing cheaper GPUs to us, I really do, but they should be honest about the downfalls instead of trying to dress them up as a *feature*.

But I guess it's kinda funny too... :p

Here's a link to his OcUK profile; drop him a trust and ask. Or you can send him a tweet. He's a friendly chap.
 