
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Man of Honour
Joined
13 Oct 2006
Posts
92,172
You know what makes April seem likely to me? :D - almost every time DM goes on a rant about how they don't have silicon yet, it's going to be delayed at least a year, it's going to be hot, slow and unfixable, and then 3 months later nVidia come out with the card.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Since when? Have you got any evidence to back that up? My experience with GPU companies is not like that. OK, it does happen that the first silicon chips fail, but when dealing with limited numbers of first test chips you can have a 100% yield. It doesn't happen every time, but on the other hand first-run silicon should not produce dozens of dead chips unless you did something majorly wrong. I do sometimes wonder, have you ever been inside a GPU company? Have you ever seen the R&D departments and how they work?


Yet there are so many examples of companies having silicon but not showing it to the public for various reasons. Now I don't always agree with the reasons, but companies do have silicon and keep it hidden. IMG did the same thing: they never held up one product and said it was another, but they had silicon kept hidden for 6 months. AMD have done the same thing and kept silicon hidden. What if the first silicon they have isn't suitable to be shown to the public? What if the first chips are extra big, as one example?

I'm not sure you have a clue what you're talking about in either of these cases. Silicon has defects in it; you categorically do not get a 100% working wafer, no one does. Today you won't get 100% working chips off a wafer even on the 28nm or 40nm nodes. There will always be defects, and always, absolutely without fail, non-working chips off a wafer. I have no idea why you would believe you can get this, and it is also significantly less likely to have good yields at the start of a new process.
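To put rough numbers on that, here's a sketch of the standard first-order Poisson yield model (the defect densities below are illustrative assumptions, not figures for any real fab or node):

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """First-order Poisson model: fraction of dies landing with zero defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

# Illustrative only: a mature process might see ~0.1 defects/cm^2, an early
# or untuned one several times that. For a big ~600 mm^2 (6 cm^2) GPU die:
print(poisson_yield(0.1, 6.0))  # ~0.55 -> roughly half the dies are clean
print(poisson_yield(0.5, 6.0))  # ~0.05 -> almost everything is defective
```

Even at mature defect densities a large die loses a big fraction of each wafer, which is why a 100%-working wafer never happens in volume production.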


Likewise, you keep ignoring a huge big issue in regards to showing silicon. I have nowhere claimed people always show silicon; I have nowhere claimed most companies do so. But Nvidia went up on stage and said "look here at the new silicon". I didn't do that, I didn't claim they did that... they actually physically did that. Their CEO stood up there waving around a PCB with a couple of visible chips on it and told everyone it was Pascal.

Which is precisely why, if you read my post properly, I said it's completely normal NOT to show silicon, and it's completely normal to show a card of how a future product will look. What's not normal is to hold up a card proclaiming it's the new product when it isn't, lying about it.

The issue all comes down to the fact that Nvidia claimed it was Pascal when it wasn't.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
You know what makes April seem likely to me? :D - almost every time DM goes on a rant about how they don't have silicon yet, it's going to be delayed at least a year, it's going to be hot, slow and unfixable, and then 3 months later nVidia come out with the card.

It's funny that, because I've not said anywhere that Pascal is delayed, nor that it is hot, slow and unfixable....
 
Soldato
Joined
29 May 2006
Posts
5,354
Thanks for the TL;DR on the previous page, Rroff. Everything else was useless.
Got to laugh at whoever thinks you get 100% yield from an initial batch of test chips :rolleyes:
Unless things have changed a lot since I was last in a GPU R&D building, you don't do large batches of chips on a wafer for the first test chips. Depending on how you do the first test chips, due to the small numbers you sometimes get 100% working yields. What you shouldn't get is a big pile of 50+ chips that do not work; you are doing something wrong if that's the case. Using IMG as an example, they achieved first-pass silicon success on their newest GPU. If they can do it then NVidia can do it.
 
Last edited:
Soldato
Joined
22 Aug 2008
Posts
8,338
This isn't a matter of opinion. Got to love that old desperation move: win any argument by claiming opinion and calling for backup, so your "opinion" shouts down the facts.

The people who follow the semi industry and have facts vs blind faith/hope & wccf.

But because this is an NV thread and people coming here don't want to hear bad news, are we supposed to not share it? Well, it's out there for anyone who wants to know the truth. Eventually reality will find this little corner of the Internet.
 
Soldato
Joined
30 Mar 2010
Posts
13,117
Location
Under The Stairs!
Pascal could be sooner than we think; Zotac don't have any 970s for RMA replacements :eek::
Zotac GTX 970 4GB GDDR5 Dual DVI HDMI DisplayPort PCI-E Graphics Card...

Reason For Return: Faulty Under Warranty Fault Description: Blown
Requested Action: Refund Original Order Number: 2*******
Item Status: Closed
Item Notes
***** ******* RMA Number:13*****



Hi tom ***********,



Unfortunately the manufacturer was unable to replace or repair your item, therefore a refund will be issued once the RMA has closed.

More details can be found in the Your Account section of www.******.com or alternatively you can call our Customer Support Team on 0*********.

Please note; debit/credit card refunds can take up to three working days to clear.

Kind regards,

****** Customer Support
www.******.com

Oh well, not getting to try 970SLi now, dodged a bullet.:D
 
Soldato
Joined
29 May 2006
Posts
5,354
“I have no idea why you would believe you can get this, and it is also significantly less likely to have good yields at the start of a new process.”
As I said already, you often do not run the new GPU architecture on the new process for the first test chip. Sometimes you run the new architecture on an older process to work out the kinks, then move to the new process. GPUs are done via iterations. You don't jump right to the end, trying to run the new process node and new architecture at max speed on the first-run silicon. The sensible way to do it is to use iterations, working your way up to max speed. Using modern methods like IC Compiler II lets you cut the number of iterations needed in half to hit target performance, but you still do it that way. Often those early iterations of chips are not shown to the public. If NVidia are near the start of the iteration process then they can have Pascal silicon that works but isn't suitable to be shown off yet.

Since when are tape-out first-silicon test chips done on full large wafers? That's why I didn't get the ton of failed chips comment.



“Likewise, you keep ignoring a huge big issue in regards to showing silicon.”
How do you turn me agreeing with you into my ignoring a huge big issue? Well, it isn't really a huge big issue. It's a small issue, but I already agreed that holding up one card and saying it's something else is wrong.

Personally I never liked NVidia's PR methods. I find them overly aggressive, and they bend the truth too much.
 
Last edited:
Caporegime
Joined
18 Oct 2002
Posts
33,188
Unless things have changed a lot since I was last in a GPU R&D building, you don't do large batches of chips on a wafer for the first test chips. Depending on how you do the first test chips, due to the small numbers you sometimes get 100% working yields. What you shouldn't get is a big pile of 50+ chips that do not work; you are doing something wrong if that's the case. Using IMG as an example, they achieved first-pass silicon success on their newest GPU. If they can do it then NVidia can do it.

You seem to be confusing different things. First-pass success simply means that the first tape-out produces masks that, when used to make full wafers, return working chips with no major problems, either with yields or with any potential bugs. It has nothing to do with 100% yields.

Also no, you don't do small test chips; you do small numbers of wafers, but not 'small wafers'.

The tape-out of a design involves laying out the floorplan of the chip, then translating that floorplan into a bunch of masks the fab can use to produce chips. These masks are the final masks, and the only way to test them is to produce full wafers; a test batch would usually be, say, 5-20 wafers, which take 6-8 weeks to come back.

The goal is that the design choices, like density, and the target characteristics, like required clock speed, power usage and thermal output, are all within a predefined range that enables the device to be used. So if 80%+ of chips come back working, hit within 5% of your targeted clock speed and produce the correct results under testing, you have achieved first-pass success. If the chip is unstable at the required clock speeds and has to go 40% lower, maybe because you've packed it too densely in an attempt to get more chips per wafer, you have to respin, which means relaying out the floorplan to fix problems, making up a new mask set, and then waiting another 8 weeks.
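The acceptance criteria above can be sketched as a simple check (the function name and default thresholds are hypothetical, just mirroring the 80% yield and 5% clock-tolerance figures in this post):

```python
def first_pass_success(yield_frac: float, clock_mhz: float, target_mhz: float,
                       yield_floor: float = 0.80, clock_tol: float = 0.05) -> bool:
    """Hypothetical check: enough working dies, and clocks within tolerance."""
    clock_ok = clock_mhz >= target_mhz * (1 - clock_tol)
    return yield_frac >= yield_floor and clock_ok

print(first_pass_success(0.85, 870, 900))  # True: 85% yield, clock within 5%
print(first_pass_success(0.85, 540, 900))  # False: 40% below target -> respin
```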

You don't hand-place transistors one by one on some small wafer and get perfectly working chips; that isn't remotely how it works. First-pass success has nothing to do with the working yield coming off wafers, except that if you have dire yields you would respin and certainly wouldn't achieve first-pass success.

IMG would certainly have made multiple full wafers and absolutely had multiple non-working dies come back. What would let them achieve first-pass success is the yield being high enough to hit the required profitability levels, and the performance characteristics being within the target range: i.e. if they targeted 900MHz clock speeds at 1.2W and got either 900MHz at 1.23W or 850MHz at 1.15W, both would likely be acceptable to them. It also means they didn't find major bugs, like a test calculation coming back with the wrong answer, or at least they didn't find a bug that couldn't be solved in software/BIOS without significantly affecting performance.


For the record, no, you won't routinely tape something out on a higher process unless that higher process was all that was available. It can cost millions to tape out a device; no company chooses to do it multiple times for no reason. The way tape-outs work in general is that you take IP blocks (shaders one block, ROPs another, video another, memory controller, etc.) and design how to make them on a specific process with specific transistor designs; finfet designs are entirely different to planar. You gain nothing by taping out at 28nm first a chip that will only be released on 16nm; it's throwing away millions for no reason.

There isn't any chance Pascal was taped out at 28nm first. Watercooling was used because they are going to use a 650W TEC, nothing more or less. You wouldn't use a 650W TEC to cool anything over, well, theoretically a 325W chip, but realistically 200-250W. A TEC gives you an incredible ability, with a proper variable power supply, to change the amperage and so change the cooling power anywhere from 20-650W extremely accurately. This allows you to do thermal testing, like changing the temperature of a loaded or idle chip from 20-100°C hundreds or thousands of times to check for package cracking and other problems, or to check what temperatures or clock speeds you would need to hit certain power usage numbers. So a 650W TEC and watercooling for it doesn't indicate taping it out on a higher process.

The times companies have done it, for instance the AMD 4770, it gives them a chance to learn about the new process before putting a new architecture on it and moving the whole line over. It was small production early on, and it helped them learn a bit about how best to design the tape-out of the next chips. Occasionally, if a process is delayed, you might do a new architecture on an older process to learn about your design, usually releasing a smaller chip with enough time to find any bugs in the new architecture. These things generally only happen when an architecture isn't ready but a new process is, or a new process is ready but the new architecture isn't, and in both cases usually when the new process is similar to the old process with similar design rules.

28nm is a single-patterning planar process, 16nm is a double-patterning finfet process; there would be absolutely nothing to learn from taping out a 28nm Pascal last year before taping out a 16nm finfet design. There would be little to no point doing it at 20nm either: while the metal layers could potentially be the same, the reality is that finfets, being smaller and with different geometry, would almost certainly necessitate a different metal layer design anyway.

How do you turn me agreeing with you into I am ignoring a huge big issue? Well it isn’t really a huge big issue. It’s a small issue but I already agreed holding up one card and saying it’s something else is wrong.

Personally I never liked NVidias PR methods. I find them overly aggressive and they bend the truth to much.

You've brought it up multiple times, and again I can't see a reason, because it has no relevance to what Nvidia has done. Nvidia did 'show' silicon; they just lied about what it was. Comparing them to a company that didn't show silicon at all isn't relevant because it's not the same situation.
 
Last edited:
Soldato
Joined
29 May 2006
Posts
5,354
“For the record, no, you won't routinely tape something out on a higher process unless that higher process was all that was available. It can cost millions to tape out a device; no company chooses to do it multiple times for no reason. The way tape-outs work in general is that you take IP blocks (shaders one block, ROPs another, video another, memory controller, etc.) and design how to make them on a specific process with specific transistor designs; finfet designs are entirely different to planar. You gain nothing by taping out at 28nm first a chip that will only be released on 16nm; it's throwing away millions for no reason.”
If that's true, then why do we have cases of GPU companies doing first runs on a higher process than the products, which end up on a smaller process? What you are saying seems to be the direct opposite of what I am seeing being done.

As for all your stuff on producing full wafers and 5-20 wafer batches, along with everything else, can you back any of that up? I have not been able to find any modern articles on producing test chips, or anything to say it's done like that. I was under the impression that first-run test chip runs are normally a lot smaller in scale than that. If that's wrong I would like to read into it.



“You've brought it up multiple times and again I can't see a reason because it has no relevance to what Nvidia has done. Nvidia did 'show' silicon, they just lied about what it was. Comparing them to a company that didn't show silicon at all isn't relevant because it's not the same situation.”
The relevance to what NVidia did is that I gave possible reasons why the real silicon might not have been shown, based on the reasons other GPU companies choose not to show real silicon. There are any number of reasons not to show real silicon. As I said before, I don't agree with all those reasons and I don't agree with lying like NVidia did. But just because they didn't show real silicon doesn't automatically mean they do not have any.

Although we might not agree on timeframes, I think we can both agree NVidia need first silicon x months before shipping something to partners. So assuming the April date has not been delayed, we can work backwards and estimate when we expect first silicon to be done by. Based on how IMG work, I was going with a roughly 6-month timeframe from first silicon to dev boards being sent to early automotive partners for NV, if things go smoothly; 8 months if they have to respin extra times.
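That back-calculation is trivial to write down (the ship date and month length here are rough assumptions, approximating a month as 30 days; the exact April day is unknown):

```python
from datetime import date, timedelta

def first_silicon_deadline(ship_date: date, months_needed: int) -> date:
    """Work backwards from a ship date; a 'month' is approximated as 30 days."""
    return ship_date - timedelta(days=30 * months_needed)

ship = date(2016, 4, 1)  # assumed April target, exact day unknown
print(first_silicon_deadline(ship, 6))  # smooth path: first silicon by ~Oct 2015
print(first_silicon_deadline(ship, 8))  # with extra respins: by ~Aug 2015
```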
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
What products? There are exceptionally few straight-up optical-shrink chips. The last GPU I can think of off the top of my head was the 280 > 285 GTX, and a large reason for that was simply the die cost of the 280 and the fact that the 4870 was beating the 260/270 and almost matching the 280 at about half the die size. Nvidia needed something more price-competitive asap and couldn't wait all the way to 55nm. I can't think of a single chip off the top of my head that was tested at one process node but produced and released on another.

99% of the time, when either AMD or Nvidia put out a chip on a new process with an old architecture, or a new architecture on an old process, it's something small. It's to gain some experience or test something out, and it's done before the new process is ready for full production, when they would otherwise be waiting a long time for something to be ready.


As for test chips, I honestly don't know how you think it works. There isn't secret fab equipment out there where you can make 100%-yield chips... if there were, why would the 'other', worse type of fab exist at all?

You have the same equipment, but rather than install it in a full-scale production fab (which would mean shutting down production, opening a clean room, removing existing equipment and replacing it), you test a new process in what is effectively a test fab: a tiny version with, say, 1/100th of the capacity in terms of size and production capability. You still have the same equipment, you still have to tape out chips, it's still 300mm wafers, and yields are definitely not good until they tune the process. Once they approach sensible yields is when they start moving the equipment into full-scale fabs.

When AMD, ARM, Samsung or Hynix show test chips on a wafer, it's a 300mm wafer. Google images of a Samsung 20nm test wafer, or any other manufacturer's: it's all pictures of 300mm wafers full of chips. Foundry research will use smaller-capacity wafers, but there you're talking about attempting to make a few chips on 5-7nm in some research lab today. Nvidia/AMD do not work at that level (AMD did before spinning off the foundry business). When it comes to foundry customers, they don't screw around with research; they have a process presented to them, they get design rules to follow, and they create a set of plans to produce a set of masks to make 300mm wafers. That is how chips get made. They don't really make 'test' chips; they simply tape out, get the first fairly small batch back and test the hell out of them to make sure they work as required. If they do, production goes into full swing using the same masks; nothing else changes.
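For a sense of scale, a common rule-of-thumb formula gives the gross die count on a full wafer (an approximation that ignores scribe lines and edge exclusion; the die sizes below are illustrative, not any specific chip):

```python
import math

def gross_dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic approximation: wafer area / die area, minus an edge-loss term."""
    d = wafer_diameter_mm
    dies = math.pi * (d / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * d / math.sqrt(2 * die_area_mm2)
    return int(dies - edge_loss)

print(gross_dies_per_wafer(300, 600))  # a big ~600 mm^2 die: ~90 gross dies
print(gross_dies_per_wafer(300, 100))  # a small 100 mm^2 die: several hundred
```

So even a single 300mm wafer carries dozens of candidate dies, which is why a first batch of a handful of wafers is already a meaningful test run.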


In terms of tape-out to release time, with double patterning a respin would likely add more than 2 months now; the tape-out process itself has notably increased in complexity and timeframe. Without having seen any official confirmation of Pascal going anywhere in April, extrapolating back from there isn't going to help much. If Nvidia is planning on shipping PX2 modules to automotive partners in April... for all we know these won't contain Pascal, will continue to be Maxwell dev kit versions, and will only be ready to ship in April.
 
Soldato
Joined
29 May 2006
Posts
5,354
“As for test chips, I honestly don't know how you think it works. There isn't secret fab equipment out there where you can make 100%-yield chips... if there were, why would the 'other', worse type of fab exist at all?”
Well, I was under the impression from my tours of GPU companies that they only do a small run of 1 to 5 test chips, and when they get first pass with such small runs they sometimes end up with 100% working yields. Not always, mind you, but when you only make 1 to 5 chips, sometimes they all work. They only move onto the bigger wafers after the first few iterations have gone well.



“What products?”
The Wizard GPU for one, and the PowerVR Series 6 & 6XT spring to mind, whose early test chips were very different from the final chips.



“I can't think of a single chip off the top of my head that was tested at one process node but produced and released on another.”
From my point of view, 100% of the early first-run test chips I have seen or read about are different. One of the reasons I asked whether you had ever been inside a GPU company is to see if you are just telling us how you think it is, or if you really know. Most of what you have told me so far is the opposite of what I have been seeing.



“You test a new process in what is effectively a test fab, a tiny version with say 1/100th of the capacity in terms of size/production capability. You still have the same equipment, you still have to tape out chips, it's still 300mm wafers”
That's not how I understood it. Yes, a tiny version with less capacity in terms of size and production capability, but also a smaller wafer making fewer chips to keep costs down. Surely it's a waste of money making a full 300mm wafer of a test chip that might not even work. From what I have seen, you start with a very low-speed FPGA test chip, then you move away from FPGA, start on an older but still low-speed node, and use iterations to work up to a full-speed final-node chip. In short, you start with a single FPGA chip, go to a small wafer and make a few chips, then after a few iterations move onto the larger wafers.



“When AMD, ARM, Samsung or Hynix show test chips on a wafer, it's a 300mm wafer. Google images of a Samsung 20nm test wafer, or any other manufacturer's.”
All I can find are images of test wafers for new nodes, which one would expect to be 300mm. I cannot find any images of early test chips being done on a full 300mm wafer.
 
Soldato
Joined
5 Sep 2011
Posts
12,864
Location
Surrey
Quotes from much earlier in this very thread. (page 31)


So you do admit, then, that it is standard practice to show mock-ups.

Before you go accusing me of flat-out lying again, would you be so kind as to point out just where I said he was telling the truth? I just said he could hold up another mock-up and tell us "here is the new Pascal". How true that statement would be is irrelevant to what I posted.


Let the deflection begin.



See, more of this. That becomes the problem when one guy posts so much vitriol: it becomes difficult to read between it.
 