Hey guys!
How do you see this: my buddy and I both got the MSI 5070 Ti Trio OC. When we compare the stock voltage/frequency curves (in MSI Afterburner), it's obvious that one card runs generally higher voltages at the same frequencies.
So we have, for example:
card1: 900 mV/2422 MHz, 960 mV/2625 MHz, 1075 mV/2932 MHz
card2: 900 mV/2497 MHz, 960 mV/2692 MHz, 1075 mV/3007 MHz
Would the latter GPU be the better one chip-wise, since it needs lower voltage for the same clocks as the other?
Just trying to figure out the OC/UV game.
We ran benchmarks (e.g. 3DMark Steel Nomad) and card1 draws right at the power limit of 300 W (at 1025 mV/2770 MHz), while card2 only draws a max of 270 W (at 980 mV/2700 MHz). Why isn't card2 set up for max power draw from stock?
Can someone shed light on this?
BTW, I read somewhere that you can show the power limit via the command line and reset it too. Does anyone know the exact commands?
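In case it helps anyone answer: I think what I read about was NVIDIA's nvidia-smi tool (the set command needs admin/root), roughly something like this, but I haven't tried it myself on the 5070 Ti:

```
nvidia-smi -q -d POWER      # show current/default/min/max power limit
nvidia-smi -pl 300          # set the power limit to 300 W
```

As far as I understand, "resetting" just means setting it back to the default value shown by the query. Not sure that's the full story, so corrections welcome.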
Thanks everybody!