New build 1080ti lock up?

Evening gents,

I have recently built my first rig in years with the Ryzen 2700X bundle and the Aorus 1080 Ti, and have just had an unusual hardware crash whilst playing EFT.

The screen went black (lost connection), the RGB LED on the GPU went out, two LEDs above the 8-pin PSU connectors lit up white (solid, not flashing as I have read some describe online) and the fans stuck on max.

After a hard reset it fired up and ran EFT without issue, although I didn't play for as long this time.

Now the system was built with a new Corsair RM750x PSU, with a single cable carrying two 6+2 pin connectors running the GPU.

Could this weird hardware crash have been caused by the GPU being unable to draw enough power down the single cable? And if so, could it have caused any damage?

Any help is much appreciated!
 
What size of power supply do you have fitted?

What are the temps within the case and how good is your cooling?

To me, Ryzen seems far more likely to crash as a result of memory / CPU clock speed instability. Have you overclocked any of it, either manually or through the auto overclocking stuff? If so, that may be a possible cause.

I was having a little instability with mine under load, and backing off the overclocking has made a significant difference to stability.

(So has updating the BIOS of the main board too.)
 
Power supply is a 750W.

Currently just stock air cooling with a couple of extra case fans, but the system clocks haven't been touched and are left stock.

MB BIOS was updated before installing Windows.
 
Power supply should be fine. I'm running a 1080 Ti using a 600W PSU.

Going back to my build, if I just set the BIOS memory speed to the rating on the RAM itself, then I would occasionally get instability. If I instead had the BIOS read the settings using XMP/DOCP, it would auto-set the timings and speed, and the stability issues were gone.

I've not tried to push my system past its base working speed. Pushing that boundary gets frustrating for me. I just want it to work well.


Is the black screen a regular thing? It may just be a one-off?
 
It was a loss of connection to the GPU rather than a black screen (like if you had pulled the HDMI lead out), and it's only happened the once so far!
 
If it's not down to both 8-pins running off one cable, could it be that the PSU is plugged into a multi-plug adapter rather than straight into the wall?

Really worried as I have never spent so much on a rig and it's a bit of a shock after it's been running fine for a week.

Unfortunately, in my panic I didn't think to check the MB status LED. Also, when I say hard reset, I had to turn off the PSU, as the reset button and holding the power button had no effect.
 
This evening I have added another PSU cable so each 8-pin has its own cable.

Downloaded GPU-Z and found the BIOS isn't one of the older versions that allowed a 300W default, so no need to flash, thankfully.

Ran FurMark for a good few minutes with no issues... Hopefully it'll be a one-off, but I will try to get more info if it happens again.
 
After an evening of PUBG, however, it has just done the same as before. It cuts out and brings up the two solid lights, so the extra PSU cable has not helped.

Worried I have a faulty card now!
 
Had a reply from the guys at OcUK suggesting this issue could be down to the GPU losing connection to the MB. They advised I test another PCIe slot; however, I am no expert myself, but wouldn't a mobo PCIe slot issue be more consistent rather than only causing issues after an hour or so of gaming?

They are happy for me to return the GPU to them for testing or replacement if I believe it is at fault, but I am just going off the basis that when it crashes the GPU is the only component that shows any visible change.
 
It can't hurt to swap to another slot and test to rule out their theory.

Personally I would use another slot and retest; no need for speculation when you can rule it out.

Either way they are happy for you to send it back so that's good.
 
Thanks so much for the reply, defo a good idea to test before sending it back and finding out the GPU is fine.

Can you recommend any stress tests to use (FurMark?) to try and replicate the result, or would you advise just swapping slots and gaming as before to see if it happens again?

Many thanks
 
Try stress testing in the current slot, then again in a different one. You have to be certain OcUK can replicate the same error, otherwise you have the usual car problem where you take it to a mechanic and the fault doesn't actually show up.
 
Thanks for the reply, that will likely be tonight's mission to see if I can stress test without gaming and try and replicate the problem.

Once I've done that I can try the next slot to see if the results can be replicated. I also didn't think to check Windows Event Viewer, as the system is technically still running when the card goes down, although it might not pick up on what caused the card to go down, just that the system lost power when I turned it off.
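For reference, a quick way to fish the unexpected-shutdown entries out of the log is to query it for Kernel-Power event ID 41, which Windows records whenever it comes back up after a reset that wasn't a clean shutdown. A rough sketch that just wraps the built-in wevtutil tool from Python (the query and the count of 5 are only where I'd start, not anything definitive):

import subprocess

# Pull the last few Kernel-Power event ID 41 entries (unexpected reboot/reset)
# from the System event log. The XPath query and count are just a starting point.
query = "*[System[Provider[@Name='Microsoft-Windows-Kernel-Power'] and (EventID=41)]]"
result = subprocess.run(
    ["wevtutil", "qe", "System", f"/q:{query}", "/c:5", "/f:text", "/rd:true"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)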

Thanks again for the advice guys!
 
So I've just spent most of the evening trying to replicate the issue whilst running a GPU-Z log, and it hasn't happened tonight.

The only thing I have done is change the Windows power plan from Balanced to High Performance; going to feel a little silly if that's all it was!

Going to keep testing with GPU-Z logging and see if I can get it to happen again!
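In case GPU-Z ever misses the moment, something like this could run alongside it and dump temps, fan speed and power draw to a file every second. It assumes nvidia-smi is on the PATH (it normally ships with the NVIDIA driver), and the log file name is just a placeholder:

import subprocess
import time

# Append one line of GPU telemetry per second, so there is a record of what the
# card was doing right before any cut-out. "gpu_log.csv" is a placeholder name.
FIELDS = "timestamp,temperature.gpu,fan.speed,power.draw,utilization.gpu"

with open("gpu_log.csv", "a") as log:
    while True:
        sample = subprocess.run(
            ["nvidia-smi", f"--query-gpu={FIELDS}", "--format=csv,noheader"],
            capture_output=True,
            text=True,
        ).stdout.strip()
        log.write(sample + "\n")
        log.flush()
        time.sleep(1)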
 
OK, so a bit of a development tonight. After not being able to replicate it at all last night or this morning, I started to think it might actually just be the Windows power setting conflicting with the ASUS AI Suite.

However, tonight I thought I would try Space Hulk out, and after a couple of minutes of play I was surprised to see the fan-stop light still lit up (a feature that's supposed to keep the GPU quiet when not under load).

I tabbed back to the desktop and watched the temps climb; at 82°C with still no fans, I quit all apps and found I couldn't get the fans on even if I set them to manual. This is leading me to think this has been happening without my noticing whilst gaming, causing the card to overheat.

I have just updated Aorus Engine to 1.39 though, so going to try rolling back a couple of versions to see if it's just their software.
 
Just rolled back to Aorus Engine 1.38 and the fans came right back on...

Tested in game and they came on as they should once the heat rose. That's not good if 1.39 stops people's fans from working!
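In case the fan-stop bug sneaks back in with a later version, a little watchdog along these lines could shout when the fans read 0% above a chosen temperature (again assuming nvidia-smi is available; the 70°C threshold is an arbitrary figure, not a recommendation):

import subprocess
import time

# Warn if the card reports 0% fan speed while the core is over TEMP_LIMIT.
# TEMP_LIMIT is arbitrary - set it to whatever feels sensible for the card.
TEMP_LIMIT = 70

while True:
    sample = subprocess.run(
        ["nvidia-smi", "--query-gpu=temperature.gpu,fan.speed",
         "--format=csv,noheader,nounits"],
        capture_output=True,
        text=True,
    ).stdout.strip()
    temp_str, fan_str = [part.strip() for part in sample.split(",")]
    if temp_str.isdigit() and fan_str.isdigit():
        temp, fan = int(temp_str), int(fan_str)
        if temp > TEMP_LIMIT and fan == 0:
            print(f"WARNING: {temp}C with fans at 0% - fan-stop has not released!")
    time.sleep(5)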
 
Don't use Aorus Engine; love Aorus but the software is useless haha, reason why the Aorus 2080/Ti is delayed.

Also, have you run the bundle at the default RAM speed, 2133MHz?

If you had an Intel system I'd normally say GPU most of the time, but with Ryzen, RAM issues can play a huge part!
 
I have only recently put the RAM to its correct 3200MHz speed and the system seems stable. I've not touched the GPU settings, but both times I had the crash it was with the RAM on default speed.

I rolled back the Aorus software to 1.38, which brought back the fans, and it has seemed stable since.

The Windows power plan resets itself to Balanced so I can rule that out, although the ASUS software also got an update which was supposed to fix this, but no dice.

Typically, since I've been logging all games with GPU-Z I haven't had this crash again, so I'm wondering if it was down to one of the drivers.

The guys at Overclockers have sent me an RMA number which will last for 28 days, so I will have to keep testing.

Many thanks for your reply.
 
So it finally happened again... but this time it was after waking the PC up from standby, so the GPU hadn't been under load as per the last two examples. Not sure where to look next.

Starting to wonder if it is power related, but it's strange that it locks on and won't turn off via the power button. I hope these crashes and hard resets are not causing any lasting instability.

And typically, as I wasn't gaming, I didn't have GPU-Z logging...

Just noticed there is a BIOS update for the MB, so going to get that done.
 