You know what makes April seem likely to me? Almost every time DM goes on a rant about how they don't have silicon yet, how it's going to be delayed at least a year, how it's going to be hot, slow and unfixable, and then 3 months later nVidia come out with the card.
Since when? Have you got any evidence to back that up? My experience with GPU companies is not like that. OK, it does happen that the first silicon chips fail, but when dealing with limited numbers of first test chips you can get a 100% yield. It doesn't happen every time, but on the other hand first-run silicon should not produce dozens of dead chips unless you did something majorly wrong. I do sometimes wonder: have you ever been inside a GPU company? Have you ever seen the R&D departments and how they work?
Yet there are so many examples of companies having silicon but not showing it to the public for various reasons. Now, I don't always agree with those reasons, but companies do have silicon and keep it hidden. IMG did the same thing; they never held up one product and said it was another, but they kept silicon hidden for 6 months. AMD have done the same thing and kept silicon hidden. What if the first silicon they have isn't suitable to be shown to the public? What if the first chips are extra big, as one example?
Thanks Rroff, saved me quite a bit of time this morning.
That's not factually correct enough, Gregster; you haven't provided his age or when he was delivered!
It's funny that, because I've not said Pascal is delayed anywhere, nor that it is hot, slow and unfixable....
Unless things have changed a lot since I was last in a GPU R&D building, you don't do large batches of chips on a wafer for the first test chips. Depending on how you do the first test chips, the small numbers mean you sometimes get a 100% working yield. What you shouldn't get is a big pile of 50+ chips that do not work; you are doing something wrong if that's the case. Using IMG as an example, they achieved first-pass silicon success on their newest GPU. If they can do it, then NVidia can do it.

Thanks for the TLDR on the previous page, Rroff. Everything else was useless.
Got to laugh at whoever thinks you get 100% yield from an initial batch of test chips!
“I have no idea why you would believe you can get this, it is both also significantly less likely to have good yields at the start of a new process.”

As I said already, you often do not run the new GPU architecture on the new process for the first test chip. Sometimes you run the new architecture on an older process to work out the kinks, then move to the new process. GPUs are done via iterations. You don't jump straight to the end, trying to run the new process node and new architecture at max speed on first-run silicon. The sensible way is to use iterations, working your way up to max speed. Modern tools like IC Compiler II let you cut the number of iterations needed to hit target performance in half, but you still do it that way. Often those early iterations of chips are not shown to the public. If NVidia are near the start of the iteration process, then they can have Pascal silicon that works but isn't suitable to be shown off yet.
“Likewise, you keep ignoring a huge big issue in regards to showing silicon.”

How do you turn me agreeing with you into me ignoring a huge issue? Well, it isn't really a huge issue. It's a small issue, but I already agreed that holding up one card and saying it's something else is wrong.
Personally, I never liked NVidia's PR methods. I find them overly aggressive, and they bend the truth too much.
“For the record, no, you won't routinely tape something out on a higher process unless that higher process was all that was available. It can cost millions to tape out a device, no company chooses to do it multiple times for no reason. The way tape outs work in general is that you take IP blocks (shaders one block, rops another, video another, mem controller, etc) and design how to make them on a specific process with specific transistor designs, finfet designs are entirely different to planar. You gain nothing taping out a chip that will only be released on 16nm but taping it out at 28nm first, it's throwing away millions for no reason.”

If that's true, then why do we have cases of GPU companies doing first runs on a larger process than the one the final products end up on? What you are saying seems to be directly opposite to what I am seeing being done.
“You've brought it up multiple times and again I can't see a reason because it has no relevance to what Nvidia has done. Nvidia did 'show' silicon, they just lied about what it was. Comparing them to a company that didn't show silicon at all isn't relevant because it's not the same situation.”

The relevance to what NVidia did is that I gave possible reasons why the real silicon might not have been shown, based on the reasons other GPU companies choose not to show real silicon. There are any number of reasons not to show real silicon. As I said before, I don't agree with all those reasons, and I don't agree with lying like NVidia did. But just because they didn't show real silicon doesn't automatically mean they do not have any.
“As for test chips, I honestly don't know how you think it works. There aren't secret fab equipment out there where you can make 100% yield chips... if there were, why would there be the 'other' worse type of fab at all.”

Well, I was under the impression from my tours of GPU companies that they only do a small run of 1 to 5 test chips, and with such small numbers a first pass sometimes ends up with a 100% working yield. Not always, mind you, but when you only make 1 to 5 chips, sometimes they all work. They only move onto the bigger wafers after the first few iterations have gone well.
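The disagreement over small-batch yields can be sanity-checked with a classic back-of-the-envelope calculation. The sketch below (illustrative only; the die area and defect density figures are assumptions, not real process data) uses the simple Poisson yield model Y = exp(-A·D0) to show why a batch of 1 to 5 test chips can plausibly all come back working, while 50+ dead chips would point to something badly wrong:

```python
import math

def die_yield(area_cm2: float, defect_density: float) -> float:
    # Poisson yield model: probability a single die has zero
    # fatal defects, given die area (cm^2) and defect density
    # (defects per cm^2).
    return math.exp(-area_cm2 * defect_density)

def prob_all_good(n_chips: int, area_cm2: float, defect_density: float) -> float:
    # Chance that every chip in a small test batch works,
    # assuming defects hit dies independently.
    return die_yield(area_cm2, defect_density) ** n_chips

# Illustrative figures only: a 1 cm^2 test die, 0.1 defects/cm^2.
print(round(die_yield(1.0, 0.1), 2))        # per-die yield, about 0.9
print(round(prob_all_good(5, 1.0, 0.1), 2)) # 5-chip batch all good, about 0.61
```

Under these assumed numbers, a tiny batch coming back 100% good is unremarkable, whereas dozens of failures would imply either a very large die or a defect density far above what a working process should see.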
“What products?”

The Wizard GPU for one, and the PowerVR Series 6 & 6XT spring to mind; their early test chips were very different from the final chips.
“can't think of a single chip off the top of my head that was tested at one process node but produced and released on another.”

From my point of view, 100% of the early first-run test chips I have seen or read about are different. One of the reasons I asked whether you had ever been inside a GPU company was to see if you are just telling us how you think it works, or whether you really know. Most of what you have told me so far is the opposite of what I have been seeing.
“You test a new process in what is effective a test fab, a tiny version with say 1/100th of the capacity in terms of size/production capability. You still have the same equipment, you still have to tape out chips, it's still 300mm wafers”

That's not how I understood it. Yes, a tiny version with less capacity in terms of size/production capability, but also smaller wafers making fewer chips to keep costs down. Surely it's a waste of money making a full 300mm wafer of a test chip that might not even work. From what I have seen, you start with a very low-speed FPGA test of the silicon design, then you move away from FPGA, start on a larger but still low-speed node, and use iterations to work up to a full-speed final-node chip. In short: you start with a single FPGA, go to a small wafer making a few chips, then after a few iterations move onto the larger wafers.
“When AMD, ARM, Samsung or Hynix show test chips on a wafer, it's a 300mm wafer. Google images of samsung test wafer 20nm, or any other manufacturer.”

All I can find are images of test wafers for new nodes, which one would expect to be 300mm. I cannot find any images of early test chips being done on a full 300mm wafer.
Quotes from much earlier in this very thread. (page 31)
So you do admit, then, that it is standard practice to show mock-ups.
Before you go accusing me of flat-out lying again, would you be so kind as to point out just where I said he was telling the truth? I just said he could hold up another mock-up and tell us "here is the new Pascal". How true that statement would be is irrelevant to what I posted.
Let the deflection begin.