
Intel Arc series unveiled with the Alchemist dGPU to arrive in Q1 2022

Suspended
Joined
17 Mar 2012
Posts
48,327
Location
ARC-L1, Stanton System
Pretty sure it's for Tier 1 (new or recent games), so for Tier 2 (oldies but still very popular) you can take the whole slide and slide the Intel section down one, while for Tier 3 (old classics not as popular anymore) you slide it down two tiers. Something like this:
[image: the slide with Intel's cards shifted down a tier or two]

However, a 400mm² 6nm GPU with 16GB selling for $200 is a bit unlikely. If its performance in Tier 3 games is that bad, then maybe 16GB makes no sense, so they could save a bit on the BOM. The A310 might have to be an OEM "giveaway".

The A380 will be "Priced from $129" if you look at the line-up on that slide; $100-$149, it's right in the middle of that.

Can we take it from that that Intel's bars also align with the pricing on the left? Because if they do, they are overpriced.
 
Associate
Joined
31 Dec 2010
Posts
2,486
Location
Sussex
The A380 will be "Priced from $129" if you look at the line-up on that slide; $100-$149, it's right in the middle of that.

Can we take it from that that Intel's bars also align with the pricing on the left? Because if they do, they are overpriced.
I think so, that's why I moved their cards down. Twice. But that babbling brook YouTube star said that Intel were going to price based on their performance in Tier 3 games (that is, the ones they perform worst in). And from what we know about the A380, that's not the case.

Hence I too think they need to adjust the prices way down, that is, knock at least 25% off to be halfway competitive.
 
Suspended
Joined
17 Mar 2012
Posts
48,327
Location
ARC-L1, Stanton System
I think so, that's why I moved their cards down. Twice. But that babbling brook YouTube star said that Intel were going to price based on their performance in Tier 3 games (that is, the ones they perform worst in). And from what we know about the A380, that's not the case.

Hence I too think they need to adjust the prices way down, that is, knock at least 25% off to be halfway competitive.

It's one of those things where you have to ask what exactly they define as Tier 3. The marketing sounds like it's based on the worst-performing games, but marketing doesn't necessarily match reality, especially where Intel are concerned, and doubly so with Ryan Shrout in the room.

For years Intel claimed a 15% performance improvement for every one of its quad-core generations. In reality it was around 3%; it was 15% in Sandra, and only in Sandra, which Intel own.
 
Associate
Joined
25 May 2010
Posts
359
It's a case of wait and see, and be glad / be in uproar at that point. So far it's just a discussion of our opinions about what may or may not happen, and people saying that's bad...
 
Associate
Joined
17 Oct 2009
Posts
2,376
If it's rubbish for gaming generally, it could be good for a Plex/Jellyfin/Emby server, assuming they support AV1 hardware encoding.
 
Associate
Joined
25 May 2010
Posts
359
16GB vram for the A770
8GB vram for the A750

I'd have preferred 10-12GB of faster RAM for the A770, considering this isn't a high-tier 4K graphics card. 16GB seems a waste, and the money could have been spent elsewhere.

 
Don
Joined
19 May 2012
Posts
17,486
Location
Spalding, Lincolnshire
16GB vram for the A770
8GB vram for the A750

I'd have preferred 10-12GB of faster RAM for the A770, considering this isn't a high-tier 4K graphics card. 16GB seems a waste, and the money could have been spent elsewhere.

Both 10GB and 12GB require a different memory bus width; 16GB can be achieved with nothing but bigger modules.
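To put rough numbers on that (my own sketch, not from the post): GDDR6 chips each sit on a 32-bit channel, so the bus width fixes the chip count, and the chip densities then fix which capacities are reachable. Assuming one chip per channel and 1GB or 2GB chips:

```python
# Sketch: which VRAM capacities a given GDDR6 bus width supports,
# assuming one chip per 32-bit channel and uniform 1GB or 2GB chips.
# (Hypothetical helper for illustration, not from the thread.)
def vram_options(bus_width_bits, chip_sizes_gb=(1, 2)):
    channels = bus_width_bits // 32          # one GDDR6 chip per 32-bit channel
    return sorted({channels * size for size in chip_sizes_gb})

print(vram_options(256))  # -> [8, 16]: 10GB/12GB would need a different bus
print(vram_options(192))  # -> [6, 12]
print(vram_options(160))  # -> [5, 10]
```

So on the A770's 256-bit bus the uniform-module choices really are just 8GB or 16GB, which is the point being made above.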
 
Associate
Joined
31 Dec 2010
Posts
2,486
Location
Sussex
16GB vram for the A770
8GB vram for the A750

I'd have preferred 10-12GB of faster RAM for the A770, considering this isn't a high-tier 4K graphics card. 16GB seems a waste, and the money could have been spent elsewhere.

Problem is, how would 10GB or 12GB work with a 256-bit bus?

Sometimes there just are actual technical reasons why something isn't possible and this is one of them.

Beaten by @Armageus due to forgetting to hit send.
 
Associate
Joined
25 May 2010
Posts
359
Instead of having 8x2GB chips, have 4x2GB and 4x1GB (12GB), or 2x2GB and 6x1GB (10GB); the 1GB and 2GB modules can be mixed. I believed it would be a cost saving which could go towards faster memory.

Seems I'm misinformed and living a few years in the past, but I thought it could be done quite easily.

Well, I think we can agree at least that 16GB is too much for such a card, so in this case I should have said it would have been better to have had a 192-bit or 160-bit bus :)
 
Associate
Joined
30 Jun 2016
Posts
322
Instead of having 8x2GB chips, have 4x2GB and 4x1GB (12GB), or 2x2GB and 6x1GB (10GB); the 1GB and 2GB modules can be mixed. I believed it would be a cost saving which could go towards faster memory.

Seems I'm misinformed and living a few years in the past, but I thought it could be done quite easily.

Well, I think we can agree at least that 16GB is too much for such a card, so in this case I should have said it would have been better to have had a 192-bit or 160-bit bus :)

All your suggestions make the card slower, so no, I don't agree.
 
Soldato
Joined
26 May 2014
Posts
2,959
Was this the 970 issue? Or is mixing a different problem?
It's a different problem (the GTX 970's issue was caused by the ROP layout of the chip itself), but the end result would be essentially the same, i.e. drastically reduced bandwidth on the smaller modules. Nvidia has done that a few times in the past. The GTX 660 (and the 2GB variant of the GTX 660 Ti) for example has a 192-bit bus, leaving it with 512MB of memory that tops out at 48GB/s bandwidth instead of the 144GB/s of the other 1.5GB. Obviously that's not ideal for performance, though frankly having owned a GTX 970 for a long time, I never once managed to make it crack and actually kill its performance just by filling up the VRAM (which happened almost constantly towards the end of my time using it). Whatever Nvidia are doing at a driver level to manage it seems pretty effective.
 
Soldato
Joined
19 Dec 2003
Posts
7,221
Location
Grimsby, UK
I'm not sure how much can be read into this; I just had a quick scan, but it looked interesting anyway.

25 July 2022 | PC Gamer said:
Overclocker gives Intel's Arc A380 GPU more power and sees huge gains

We've all been waiting to see how Intel's launch into the consumer graphics card market would go. The Arc A380 GPU is a lower-level offering by the renowned chip maker, which had its entry-level performance stats leaked long before launch. Though it turns out that with the right bit of tweaking this card might be able to pull off far more than expected.

Tom's Hardware shared a YouTube video by the prolific Russian (so you may need subtitles) overclocker Pro Hi-Tech, showing his results, including attempts to get more out of an Arc A380 graphics card. Pro Hi-Tech managed to achieve huge performance gains, with framerates boosted by up to 60% in some games, without even using traditional overclocking methods.

Often when we see a boastful overclock like this, the end result is a mangled machine covered in coolant. Nothing so severe appears to be required when overclocking the A380. Instead, Pro Hi-Tech merely changed the power targets and voltage offsets within Intel's graphics utility. The GPU performance boost slider was set to 55%, along with a +0.255mV change to the voltage offset.

Just changing these settings was enough for a very small percentage boost, but it wasn't until the power use was increased that real results started appearing. The A380 is a bit of a weird beast when it comes to power draw. While the official Intel documents report a 75W TDP, the A380 Pro Hi-Tech was using was only reporting a 35W draw. It's no surprise the card has been providing lacklustre results with that drip feed of watts.

Instead, the draw was maxed out to 55W in the overclocked version shown in the video. This boosted the card's performance by 43-57%, measured by FPS achieved. That's already a huge improvement for the A380, with the geomean of the games tested receiving a boost of 37%. Doom Eternal managed a 60% increase, going from 64 FPS to 102 with the changes. All of this is huge for a GPU, especially with such little tinkering...

Further reading here: https://www.pcgamer.com/overclocker-gives-intels-arc-a380-gpu-more-power-and-sees-huge-gains/
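For what it's worth, the article's headline Doom Eternal figure checks out (simple arithmetic, my own check, not from the thread):

```python
# Checking the quoted Doom Eternal result: 64 FPS -> 102 FPS.
before_fps, after_fps = 64, 102
gain_pct = (after_fps - before_fps) / before_fps * 100

print(f"{gain_pct:.1f}% uplift")  # 59.4% uplift, matching the "60% increase" claim
```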
 
Associate
Joined
25 May 2010
Posts
359
I'm not sure how much can be read into this; I just had a quick scan, but it looked interesting anyway.



Further reading here: https://www.pcgamer.com/overclocker-gives-intels-arc-a380-gpu-more-power-and-sees-huge-gains/
My thinking from this is that they don't want their cards to be pushed that hard and break too early. This is still their first gen of cards, which hasn't had huge numbers of tests over many years (i.e. us using them). If they were super quick now but broke down after 6 months, that's a reputation people won't come back from, so it's better to underperform now but at least have reliability on your side, and also leave a gap for people to push the cards a bit if they want, knowing it's them that pushed it to breaking, not Intel making crappy products. That's how I'd read it, or try to sell it if I were leading this project.
 