The thread which sometimes talks about RDNA2

Status
Not open for further replies.
You are beyond obtuse and petty. On ignore you go.

Name calling, how cultured of you. Sure, ignore me if you are going to be like that; I am really cut up about it.

Getting Valhalla to work with SAM is a pretty significant feat IMO. Open world games have always been a struggle with Radeon. To actually see them reverse course with a performance improvement like that and again with SAM shows the potential AMD can extract from it with their setups.

I'm not convinced that reducing IQ to "upscale it" for a performance increase is as impressive as getting SAM working in an open world game. It's going to be interesting to see what GTA VI will offer. I seriously doubt it will be a repeat of GTA V with Radeon on the short end.

One game is just an outlier. If you base any conclusion on an outlier, it's called the logical fallacy of hasty generalization: making a claim based on a sample that is just too small. What can you claim about SAM from good performance in one game? Only that it performs well in that game. Anything else is a hasty generalization. There is none of this bright-future crap, significant feature, potential or other nonsense. You can't claim that; you don't have the evidence.

Given that the performance gain is not as high as AMD stated (up to 11% in games), I would call it lackluster. Many games are 1-2 fps faster at 4K, many stay at the same fps with SAM on or off, and one game has reduced fps. And this is me expecting there is a good chance Nvidia cards will get a SAM-like feature too.

One has to ask why you are so upbeat about it, given the facts.

Source: https://www.eurogamer.net/articles/digitalfoundry-2020-amd-radeon-rx-6800-and-6800-xt-review?page=6

There are more SAM benchmarks here https://www.techpowerup.com/review/amd-radeon-sam-smart-access-memory-performance/2.html
 
@7:50 Kudos to AMD for launching a competitive card, with a trifecta of a staggered launch: AMD is launching CPUs/GPUs/consoles.

@9:35 Both the CPU and GPU teams collaborated to bring out the RX 6000 series. Better power-to-performance.

@14:25 AMD will have laptop variants but wouldn't comment on or elaborate about when/how/etc.

@18:02 Reasons for using Infinity Cache. In particular, they have another announcement coming about Infinity Cache. He didn't commit to a time frame, but said it will be pretty soon (tm).

@19:49 VRAM discussed: using 16GB of VRAM and Infinity Cache, the 6000 series still uses less power than Ampere.

@22:00 AMD will ensure that developers take advantage of the AMD ecosystem across consoles/PC, as developers don't want to worry about a myriad of PC configurations; they prefer a fixed form factor. Because AMD's microarchitecture is used in consoles, game developers will use more VRAM, RT (the way AMD wants it), and all the other things AMD wants in games.

@26:40 RT/DLSS Discussed. He stated as new games are launched they will improve RT performance. But he emphasized new titles. He also emphasized that when you are coding for the console you are coding for AMD.

@28:30 DLSS discussed. AMD was originally going to develop its own API. Developers begged AMD not to create another API (it would only work with Radeon), so they are going for an open solution. Developers do not like having to fetch AMD/Nvidia reps to come on site to help code their games (that's some juicy gossip right there; I didn't know that). They are still working on this with developers, which is why it's not ready yet (or else they would have launched this API he speaks of).

@32:25 (several minutes) Smart Access Memory discussed. AMD never said that SAM wouldn't work with other hardware; AMD simply focused its efforts on this generation of hardware: validation work, communication-protocol work, tweaking, etc. He said they are still undecided on older hardware, but they love backward compatibility and are still evaluating it. MORE PERFORMANCE IS COMING AS IT MATURES. (Is that tied into the Infinity Cache he mentioned earlier??? Hmmm...) But it is clear that Intel has to be involved with their BIOS, etc. to get it to work for Nvidia. It's not just an Nvidia thing, and it's more than just a driver update. Nvidia told PCWorld they felt that AMD would hard-block them; Scott said they wouldn't hard-block them (perhaps soft-block them, lol /s).

@37:45 Question: why are you doing that (allowing Nvidia in for SAM on AMD) when Nvidia tried to flip FreeSync to G-Sync after out-marketing AMD in branding? Do you run the risk of Nvidia re-marketing/rebranding SAM as something Nvidia will claim as its own? Answer: to be determined...
(I'm flabbergasted. Perhaps he can't say in front of the camera, but I hope AMD isn't dumb enough to let Nvidia in without charging royalties/rent for SAM.)

@38:40 Smart Shift Discussion

@41:00 Is Infinity Cache just L3 cache? Mainly L3 cache with special sauce. He won't reveal exactly what it is yet.

@45:00 Sapphire does not help design reference pcb

@51:45 Availability discussed. They are shipping cards every day. They like the EVGA queue system but wouldn't elaborate. AMD's website was able to stop scalpers; other partners were able to stop some of them.

@55:50 RT will be rolled out across the entire RX 6000 series stack.

@56:30 Variable Rate Shading discussed (a few minutes), along with Radeon Boost with ray tracing to improve performance.

@57:10 Direct Storage coming...but not elaborated on



Enjoy
 

It's a PR session for AMD. I have got as far as the AMD engineer stating there is no DLSS equivalent for AMD hardware and they are still developing a solution. That they want an open-source solution is a way of saying: we can't match DLSS, so we need to undermine its adoption. Plus, "we need to work with partners" means nothing has been developed yet and they are still working out what to do. They are basically still figuring out how to undermine DLSS adoption with game developers.

Open source is good for the industry, until it comes to SAM, which will only work on their latest hardware. They took PCIe resizable BAR (per Nvidia's statement) and renamed it SAM for marketing purposes, making their new CPU hardware the gate to entry. Nvidia countered by saying they will just enable resizable BAR, a SAM-like feature, in drivers and provide it to everyone. It's a good laugh, it really is.

With RT, AMD are just going to optimise: "more performance as drivers mature", which means "we are slower, please buy our product, it will get better". Meanwhile this apparently can never happen on faster cards from other manufacturers, perish the thought. Also, it's not like they are close to Nvidia in RT benchmarks like Control; it's a big lead. There is a lead in Port Royal over the 2080 Ti FE https://www.overclock3d.net/reviews/gpu_displays/amd_radeon_rx_6800_and_rx_6800_xt_review/20 but even when the 6800 XT overclocks it can't get close to the 3080. The 3080 is 10 fps faster in Port Royal. https://images.hothardware.com/contentimages/article/3043/content/port-1-radeon-6800-amd.png

Time will tell. Benchmarks will prove it. The games will come.

By the way thanks for the timestamps and the video.
 
This lad doesn't come across as bitter. Thanks for stating your opinions as fact......
 

Why are you expecting AMD to knock out a DLSS feature next week, when they are talking to game developers about it? That sounds like real progress right there. And then add 10 fps in RT benchmarks like Port Royal with driver updates? Or is the AMD sales rep gospel truth to you?

Or is the only type of post you can manage one of an unconstructive nature?
 
Not a valid result; it's meaningless. What's the point of modifying your driver to reduce image quality to get a higher score? 3DMark won't let you post your cheating score anyway.

As of right now, the 6800xt does not hold a WR - if you think it does, go find it :) https://www.3dmark.com/hall-of-fame-2/fire+strike+3dmark+score+performance+preset/version+1.1/1+gpu
Any proof of the claim? I do know they will not validate a score simply because you are using a particular AMD driver. And they are known to hamper Radeon cards by using Nvidia's implementation of asynchronous compute, with its emphasis on pre-emption/context switching, in Time Spy. At one time they even allowed GPU PhysX in Vantage, as Nvidia was the only one that had it, and other such meaningless benchmarks that allowed Nvidia to cheat. And you want to call this cheating? When anyone who has Nvidia uses Inspector to reduce the LOD, which has been allowed in 3DMark? Really now... you don't say.

Post me a link of when and how that person was found cheating please. I certainly want to compare and contrast what you call cheating.
:p
 

Done. The overclocker's name is Lucky_n00b and he admitted that he disabled all tessellation features in the AMD driver.

And you want to call this cheating?

Yes, it is. Kaapstad on this forum also does not allow unvalidated benchmarks to be recorded; he throws them away if you try it.
 
All I could find was Lucky_n00b's 2558MHz 45054 score, much less than the video. Also, when you go into the details it states, "The result is hidden and will not be shown for example on leaderboards or search." Yet the video states 47932, which would put him top, with the 3090 at 47725 in 2nd. Why not validate the score and take #1? Or if he tried, why does it not appear on the leaderboard? It begs the question. If you don't validate, there is no glory.

Well, that's one way of getting it.

  • 2650Mhz is the boost setting applied from Performance Tuning panel, not the real clock during load (GPU is power-limited)
  • Tessellation modified, not 3DMark Hall of Fame valid (valid score with similar setup around 45K – https://www.3dmark.com/fs/24052851)
  • It’s rather difficult managing the cold bug on the 2-CCD CPU, this one lost the GPU (d6 error) and generally having errors at -100C, changing FCLK to lower doesn’t help, not sure exactly what’s holding it back
  • It’s been a LONG time since we saw an AMD CPU and GPU on any of the 3DMark, this is refreshing
Yes, it was weird to me too; even with Tess ON, the default score is close to 50K Graphics on the 6800 XT. TS and PR are pretty OK, but not this strong.

On raster-based games it was around 3080-level, not 3090. (On DXR games it beats the 2080 Ti FE by a small percentage, but is still behind the 3080 by a decent margin.)
https://hwbot.org/submission/4606724_lucky_n00b_3dmark___fire_strike_radeon_rx_6800_xt_47932_marks
 
https://videocardz.com/newz/amd-radeon-rx-6800xt-breaks-hwbots-3dmark-fire-strike-world-record
He disabled tessellation, which is a valid thing to do, as Fire Strike uses an inordinate amount of it that doesn't simulate what you see in games; it only caters to Nvidia. Just like GPU PhysX in Vantage, just like pre-emption and context switching in Time Spy. He made #1 on HWBot and that's what I go by.
https://hwbot.org/submission/4606724_lucky_n00b_3dmark___fire_strike_radeon_rx_6800_xt_47932_marks

Edit:
Noticed the edit but I still got the link.
 


He disabled tessellation which is a valid thing to do as Timespy uses an inordinate amount of it that doesn't simulate what you see in games. It only caters to nvidia. Just like GPU physx in Vantage...just like pre-emption and context switching for Time Spy. He made 1 in HWBot and that's what I go by.
https://hwbot.org/submission/4606724_lucky_n00b_3dmark___fire_strike_radeon_rx_6800_xt_47932_marks

Edit:
Noticed the edit but I still got the link.

So disable the settings that slow AMD cards down... He is number 18, btw, below 8 Pack's air-cooled Titan X's. https://hwbot.org/benchmark/3dmark_-_fire_strike/halloffame Completely not the same as being No. 1 on 3DMark's leaderboard.

Still, 45k is a great score.
 



Timespyr or firestrike? which one are you talking about?
 
Timespyr or firestrike? which one are you talking about?
The result for him was Fire Strike. Timespyr is just a joke of a benchmark. :p

I've used 3DMark for years, going all the way back to the 2001/SE days, and it was a fun benchmark to use, but in the past 10 or so years they've been heavily slanted towards Nvidia. For me, Vantage was the first time I started believing what others were saying about it. They showed no sympathy for the many complaints about allowing Nvidia to use GPU PhysX to beat AMD/ATI users, until the media got wind of it and reported it.

Oliver Baltuch: “As usual I can’t really comment on any rumors, but, our Benchmark Development Process does have a transparency system that allows members to request changes to the specification. Those change requests are then seen by all the other members whose written opinions are taken into account by our technology committee who then decide whether to make the change to the specification. Outside of this matter, we have been introduced to this technology from NVIDIA and it is truly innovative for future games.”

Legit Reviews: Now that NVIDIA has enabled PhysX on the GPU, what will this mean in the benchmarking world? Will there be such a thing as apple to apple video card comparisons with one company using HAVOK and the other using PhysX for handling Physics?

Oliver Baltuch: ** Oliver Declined To Comment**
https://www.legitreviews.com/are-nvidia-physx-drivers-cheating-on-3dmark-vantage_733


Translation: we allowed GPU PhysX from Nvidia and told AMD and other partners about the decision, and just as I declined to comment to you, we also declined to answer complaints that it was unfair for AMD/ATI users who didn't have the capability to use GPU PhysX, as they were stuck on slower, outdated x87 code that further hobbled CPU PhysX so GPU PhysX would look good.
-----------------

All of the current games supporting asynchronous compute make use of parallel execution of compute and graphics tasks. 3DMark Time Spy supports concurrent execution. It is not the same asynchronous compute....

So yeah... 3DMark does not use the same type of asynchronous compute found in all of the recent game titles. Instead, 3DMark appears to be specifically tailored so as to show nVIDIA GPUs in the best light possible. It makes use of context switches (good, because Pascal has improved pre-emption) as well as the dynamic load balancing on Maxwell, through the use of concurrent rather than parallel asynchronous compute tasks. If parallelism were used then we would see Maxwell taking a performance hit under Time Spy, as admitted by nVIDIA in their GTX 1080 white paper and as we have seen from AotS.
https://steamcommunity.com/app/223850/discussions/0/366298942110944664/

That's a pretty large thread. 3DMark rep jarnis got verbally throttled pretty hard in it but still toed the line that "the NV way is better". Some of those comments were deleted, as he had some control of that subforum back then. He even tried to get the thread permanently deleted, but higher-up Steam reps brought it back.

Instead of 3DMark creating an additional version that provided pure/true parallel asynchronous compute, they held fast that Nvidia's architecture was right. Yet we all know that parallel asynchronous compute is more widely used in games, due to consoles having more games than PC. But to this day, have they changed their stance? No; to them, Nvidia was better. :rolleyes:

So at the end of the day I no longer use 3Dmark and haven't looked back in several years.
 
Cheers for this.
 
I used a PhysX card; should I get a lower CPU score as well? That way AMD cards can get the same score for PhysX?

DirectX 12 features in Time Spy: https://s3.amazonaws.com/download-aws.futuremark.com/3dmark-technical-guide.pdf pages 33-35 and 37.

Command lists and asynchronous compute
Unlike the Draw/Dispatch calls in DirectX 11 (with immediate context), In DirectX 12, the recording and execution of command lists are decoupled operations. There is no thread limitation on recording command lists. Recording can happen as soon as the required information is available.

Quoting from MSDN: "Most modern GPUs contain multiple independent engines that provide specialized functionality. Many have one or more dedicated copy engines, and a compute engine, usually distinct from the 3D engine. Each of these engines can execute commands in parallel with each other. Direct3D 12 provides granular access to the 3D, compute and copy engines, using queues and command lists." The following diagram shows a title's CPU threads, each populating one or more of the copy, compute and 3D queues. The 3D queue can drive all three GPU engines, the compute queue can drive the compute and copy engines, and the copy queue simply the copy engine...

Once initiated, multiple queues can execute in parallel. This parallelism is commonly known as ‘asynchronous compute’ when COMPUTE queue work is performed at the same time as DIRECT queue work. It is up to the driver and the hardware to decide how to execute the command lists. The application cannot affect this decision through the DirectX 12 API. Please see MSDN for an introduction to the design philosophy of command queues and command lists, and for more information on executing and synchronizing command lists. In Time Spy, the engine uses two command queues: a DIRECT queue for graphics and compute and a COMPUTE queue for asynchronous compute. The implementation is the same regardless of the capabilities of the hardware being tested. It is ultimately the decision of the underlying driver whether the work in the COMPUTE queue is executed in parallel or in serial. There are a large number of command lists, as many tasks have their own command lists (several copies, so that frames can be pre-recorded).

Disabling asynchronous compute in benchmark settings

The asynchronous compute workload per frame in Time Spy varies between 10% and 20%. To observe the benefit on your own hardware, you can optionally choose to disable asynchronous compute using the Custom run settings in 3DMark Advanced and Professional Editions. Running with asynchronous compute disabled in the benchmark forces all work items usually associated with the COMPUTE queue to instead be put in the DIRECT queue.

Some Nvidia drivers would disable asynchronous compute, mainly because Maxwell cards loved the feature. AMD also may have disabled https://www.legitreviews.com/amd-disabled-async-compute-older-gcn-1-0-video-cards_188713 asynchronous compute on older cards via a driver update. https://www.reddit.com/r/Amd/comments/5gqm2u/async_compute_disabled_on_gcn_10_since_1692/

Nice video on asynchronous compute that I use for plagiarism.

In Time Spy, the engine uses two command queues: a DIRECT queue for graphics and compute, and a COMPUTE queue for asynchronous compute. Queues used to be individual, and you could execute them one at a time. The queues were graphics, compute and copy. There were three engines: graphics, compute and copy. Each queue would have its own command lists.

Asynchronous compute just allows all three to be executed concurrently. 3:01 These are now called the 3D queue, compute queue and copy queue. https://youtu.be/8OrHZPYYY9g 8:37 The 3D queue can list commands for the graphics, compute and copy engines. The compute queue can list commands for the compute and copy engines. The copy queue lists commands just for the copy engine.

So if the card has asynchronous compute, both the DIRECT (3D) and COMPUTE queues would be executed concurrently in Time Spy. 3:35 Thus Time Spy can use asynchronous compute, and the compute queue can be executed in parallel with the direct queue. Note it is entirely up to the driver and the hardware to decide when to actually execute a given command list, so long as it is executed in order within its queue. There is no need for a copy queue in Time Spy; it is for streaming assets, which is not needed as they get loaded before the benchmark starts. Note that running Time Spy with asynchronous compute disabled forces all work items usually associated with the COMPUTE queue to instead be put in the DIRECT queue.

What each queue handles.

DIRECT queue - G-buffer draws, shadow map draws, shadowed illumination resolve, and post-processing are executed on the direct queue. G-buffer draws, shadow maps and some parts of post-processing are done with graphics shaders, while illumination resolve and the rest of the post-processing is done in compute shaders.

COMPUTE queue - Particle simulation, light culling and tiling, environment reflections, HBAO and unshadowed surface illumination resolve are executed on the compute queue. All tasks in compute queue must be done in compute shaders.

So yes this is an Asynchronous compute setup as per MSDN DX12 documentation. The work that Time Spy places into the COMPUTE queue and the specific implementation of that work is the result of deep co-operation with AMD, Intel, Microsoft, and NVIDIA.

liberal plagiarism from https://benchmarks.ul.com/news/a-closer-look-at-asynchronous-compute-in-3dmark-time-spy video here https://youtu.be/8OrHZPYYY9g
 
So 3DMark is biased because Nvidia keeps winning? So game developers are also biased because Nvidia keeps winning? Do you hear yourself?
 
so 3D mark is biased because Nvidia keeps winning? So game developers are also biased cause Nvidia keeps winning? Do you hear yourself
East is always biased because he is always defending amd, and slating nvidia ;)

I just don't get why people do it. None of these companies give a damn about you. If AMD had 80% market share then Lisa would probably put a leather jacket on, strap on a huge unforgiving cucumber and take you dry from behind also.
 