# Constant Black Screen Crashing, Is it the PSU?



## Ian Davis (Mar 17, 2016)

This is an issue that's been going on for about 3 weeks. I was looking at upgrading anyway, so I've already replaced the motherboard, CPU and RAM; the GPU is 6 months old, so the only thing left is the PSU. Here are the details:

Gigabyte GA-Z170X-Gaming 7
i5 6600
Gigabyte 980Ti G1
G.Skill 3000MHz DDR4 16GB
Aerocool 1050GT PSU
Samsung 850 Evo OS SSD
Samsung 850 series SSD x2
WD HDDs x4
Samsung 28" UHD and 2 x 27" HD

What happens is a total black screen crash on all screens, but the PC appears to continue running normally. A hard power-off reboot is required; sometimes even this does not fix it, and GPU removal, a CMOS reset and booting with onboard graphics is the only way to resume.

I have reinstalled Windows several times (full clean installs each time)
I am using Nvidia 361.75

I have just run OCCT for an hour and the results are perplexing. I don't know a great deal about PSUs, but if the results are correct, it appears to be utterly shot.
For whatever reason I am not seeing a +12V reading, not sure why, but the rest are as follows:

+3.3v = 2.7 - 2.74
+5v = 4.48 - 4.49
-12v = -5.7
-5v = -6.4
+5v VCCH = 3.79

PSUs aren't my area of interest, but if these readings are correct, they are so far outside the ±5% tolerance it's laughable.
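For reference, the tolerance check is simple arithmetic; a quick sketch (the ±5% on positive rails and ±10% on the negative rails are the usual ATX spec limits, and the readings are the OCCT numbers above):

```python
# Check the OCCT readings against the usual ATX tolerance bands:
# +/-5% on the positive rails, +/-10% on -12V and -5V.
NOMINAL = {"+3.3V": 3.3, "+5V": 5.0, "-12V": -12.0, "-5V": -5.0}
TOLERANCE = {"+3.3V": 0.05, "+5V": 0.05, "-12V": 0.10, "-5V": 0.10}

def in_spec(rail: str, reading: float) -> bool:
    margin = abs(NOMINAL[rail]) * TOLERANCE[rail]
    return NOMINAL[rail] - margin <= reading <= NOMINAL[rail] + margin

readings = {"+3.3V": 2.72, "+5V": 4.49, "-12V": -5.7, "-5V": -6.4}
for rail, value in readings.items():
    print(rail, "OK" if in_spec(rail, value) else "OUT OF SPEC")
```

Every rail comes back OUT OF SPEC, which is what makes the readings look so implausible.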

Opinions?


----------



## CAPSLOCKSTUCK (Mar 17, 2016)

Can you run HWMonitor and post a snip or screenshot of the voltages, please?

http://www.cpuid.com/softwares/hwmonitor.html


and hello from us at TPU....


Edit: give this a read too
http://www.techpowerup.com/forums/threads/need-help-how-likely-to-get-a-bad-cpu.217147/


----------



## 95Viper (Mar 18, 2016)

Can you check in the BIOS to see what it states the voltages are?

Software results vary and can be misleading or incorrect. Best to get a good meter and check them.

I read somewhere, can't remember where... might have been in a thread over at jonnyGuru... that the Aerocool GT1050SG looked good but had a weak 3.3V rail. For what that is worth.

If you have another PSU try it.


----------



## Bill_Bright (Mar 18, 2016)

Unfortunately, software-based hardware monitors only report what the chipset reports from the cheap, low-tech sensors. I have a Gigabyte board here that reports via Speccy, HWMonitor and HWiNFO64 that the +12V is ~6.3VDC. But clearly, if my +12VDC were that low, my system would not be running. Checking with my *PSU Tester* shows +11.9VDC, and with my decent quality multimeter I get +11.97VDC.

BTW, most current systems don't use the -12V or -5V rails, so you can ignore them.


----------



## Ian Davis (Mar 18, 2016)

Hi and thanks for the replies. I'd read the same thing about software monitors not being too reliable, but also that BIOS monitors weren't always that good; really need a multimeter, heh.

Anyway, HWMonitor results are as follows:

Hardware Monitors
-------------------------------------------------------------------------

Hardware monitor ITE IT87
Voltage 0 1.01 Volts [0x3F] (CPU VCORE)
Voltage 1 2.72 Volts [0xAA] (VIN1)
Voltage 2 2.70 Volts [0xA9] (+3.3V)
Voltage 3 4.49 Volts [0xA7] (+5V)
Voltage 5 -6.91 Volts [0x6C] (-12V)
Voltage 6 -7.30 Volts [0x72] (-5V)
Voltage 7 3.79 Volts [0x8D] (+5V VCCH)
Voltage 8 2.06 Volts [0x81] (VBAT)
Temperature 0 42°C (107°F) [0x2A] (TMPIN0)
Temperature 1 45°C (113°F) [0x2D] (TMPIN1)
Temperature 2 37°C (98°F) [0x25] (TMPIN2)
Fan 0 3125 RPM [0x1B0] (FANIN0)
Fan PWM 0 0 pc [0x0] (FANPWM0)
Fan PWM 1 0 pc [0x0] (FANPWM1)
Fan PWM 2 0 pc [0x0] (FANPWM2)

They seem to be fairly consistent with OCCT; +3.3 and +5 are way down, and I'm not sure why I'm not seeing a +12V reading anywhere.

The PSU is about 2 years old; it was running 3 x 670s on a 2011 board and never missed a beat. But I'm getting some serious random errors now that started just before I changed the motherboard, CPU and memory, and they have continued exactly as before. All that's left is the GPU and PSU, and given the startup errors, I'm leaning toward the PSU.

Errors include:
No graphic output on boot
CMOS error (2 fast beeps) on boot
Random black screen crashing while doing anything
Restarts may induce failed display output.
Also occasionally loses 1 or more HDDs on boot

Never had anything like this before.

Would have replied sooner, but took the son cycling (hot Aussie autumn day) and ended up going much too far, hehe.


----------



## 95Viper (Mar 18, 2016)

Ian Davis said:


> CMOS error (2 fast beeps) on boot



Gigabyte FAQ: What does the BIOS beep sound mean?
For an AMI BIOS, 2 short beeps means a memory error, and I believe that board has an AMI BIOS. I could be wrong.

You might wanna run a memory test and/or try testing with one DIMM at a time, in the correct slot, to see if one or more of them causes the problem.


----------



## Ian Davis (Mar 18, 2016)

Gigabyte is confusing with its use of AMI and Award UEFI BIOSes. The previous board was Award and errored on a range of things: CMOS (2 short), graphics card (1 long and 3 short), memory error (1 long and 2 short), etc. As I said, the board, CPU and memory are brand new, and yet the crash is identical to what it was before they were changed. While you can't rule out bad hardware just because it's new, how likely is it that you've replaced something faulty with something else faulty that generates an identical crash and the same seemingly unrelated errors?


----------



## 95Viper (Mar 18, 2016)

Ian Davis said:


> Gigabyte is confusing with its use of AMI and Award UEFI BIOSes. The previous board was Award and errored on a range of things: CMOS (2 short), graphics card (1 long and 3 short), memory error (1 long and 2 short), etc. As I said, the board, CPU and memory are brand new, and yet the crash is identical to what it was before they were changed. While you can't rule out bad hardware just because it's new, how likely is it that you've replaced something faulty with something else faulty that generates an identical crash and the same seemingly unrelated errors?



I don't know what to tell you... my crystal ball was sent in on an RMA... it had faulty memory, psu was dying and a corrupt display.

So, I am on my own.

I have noticed a lot of threads lately are PSU problems and people receiving bad memory or the memory just not liking the MB they are stuck in.

Other than following the advice already given:
You never did say what the BIOS readings were in the PC Health or System Health section.
You can try reseating the memory, checking all your connections (power and data), and unplugging anything that was used in your previous build, then test.
If it is not a bad piece of hardware, then it has to be something you are installing (software, drivers, etc.) that is causing it.
If it is the PSU, you won't know 'til you test it or try another.

Edit: You can try testing your build outside of the case... to eliminate that as the cause. Could be a bad reset or power switch.


----------



## Ian Davis (Mar 18, 2016)

hehe yeah, crystal balls are darn useful.
The previous build only shared the GPU, PSU and drives, and I've pulled everything out and put it back in sooooo many times.
I've tested the GPU in another machine for 2 days and it worked flawlessly. I tried the old 670 in this machine and same thing happened.
I've tried pretty much everything on the software front, except maybe going back to Windows 7 or 8, but 10 has worked very well for me since it launched.
I've reinstalled 10 so many times while testing, including with no updates at all and only minimal drivers, both on the old build and this one, and it still crashed.
From memory, the BIOS reads the 3.3 at 3.268, the +5 at 4.95 and the +12 at 12.33 or thereabouts, all fairly normal, but I really need to buy a multimeter.
I've also moved the GPU off the main (built-in) cabling from the PSU onto the modular rail. Just had a game running for about an hour without issue, but it's not consistent; it's been as much as 2 days after making a tweak before it started happening again. Just when you're starting to think it's fixed, sigh.

Does not seem to be temperature related either.

Oh well, I'll just have to pull the 850 out of the kiddies' PC and see what happens.


----------



## Bill_Bright (Mar 18, 2016)

Ian Davis said:


> but also that BIOS monitors weren't always that good


That's because the BIOS monitors are using the same low-tech sensors. These sensors just provide a hexadecimal number that represents a specific voltage. There are no trade secrets here; the number represents the same voltage (same with temperature sensors) regardless of the software, so it is not as if one software monitor is more accurate than another. You see differences because of the sampling rates and sampling times.
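To illustrate, the raw bytes in the HWMonitor dump above can be decoded by hand. A sketch, assuming the common ITE IT87 scaling of 16 mV per ADC count, with a roughly 1.68x external resistor divider on the +5V rail (the exact step size and divider are board-specific, so both factors here are assumptions):

```python
# Decode raw IT87 sensor bytes: each count is ~16 mV at the ADC pin;
# rails above the ADC's range pass through a board-specific voltage divider.
LSB = 0.016  # volts per ADC count (typical IT87 step size)

def decode(raw: int, divider: float = 1.0) -> float:
    return raw * LSB * divider

print(round(decode(0xA9), 2))        # +3.3V rail, matches the 2.70 V reported
print(round(decode(0xA7, 1.68), 2))  # +5V rail, matches the 4.49 V reported
```

Whatever tool reads the chip starts from the same raw counts; only the scaling tables and sampling differ.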


Ian Davis said:


> really need a multimeter, heh.


Yes, but it is important to note all power supplies (computer PSUs, batteries, car engines) can only be accurately measured when under a proper load. So you cannot just pull the 20/24-pin power connector, short the two startup pins, then measure the voltages and assume what you see is correct; the PSU should be connected to a proper load, then tested. And the risk with using a multimeter is sticking two hardened, sharp, highly conductive probes into the heart of the electronics. Care must be taken.


----------



## eidairaman1 (Mar 18, 2016)

Check the monitor cables. Try the GPU in a different rig altogether; Aerocool makes a good case, but their PSU quality is iffy.


----------



## Ian Davis (Mar 18, 2016)

Well, it's been 22hrs since I moved the GPU supply off the main supply and onto the modular rail, and there hasn't been a hiccup since.
I've had all 3 screens reconnected for the majority of it, run every GPU / CPU benchmark and stress tester, and left Path of Exile running overnight, and that game caused it to crash almost without fail within 5 minutes of launching it.

It's not the longest it's gone without failing, but considering the software I was running, it would definitely have failed somewhere under those loads previously, so I am 99% confident now it's a PSU issue.

I had tried the GPU in another PC previously for 2 days where it performed as expected, this setup with an old 670 crashed as usual.

If it continues to perform for the next 2 or 3 days, I think a new PSU is in order.

Now I'm seriously considering this PSU; the 10-year warranty is insane and all the reviews are excellent. Anyone tried these?

http://au.evga.com/Products/Product.aspx?pn=220-G2-0850-XR


----------



## eidairaman1 (Mar 18, 2016)

Seasonic X, XFX, Corsair RX/AX/HX I believe, perhaps even EVGA


----------



## Delish (Mar 18, 2016)

I guess this is one of the best rankings for PSUs: http://www.tomshardware.co.uk/forum/id-2547993/psu-tier-list.html


----------



## Bill_Bright (Mar 19, 2016)

I've made several builds recently with 550W and 650W EVGA Supernova G2 supplies and they've been great. I can only surmise the 850W being in the same family will too.

But BTW, 850W is way overkill for your needs; you could easily get by with 650W or even 550W. Note I buffered the results by bumping CPU utilization to 100% and computer utilization to 16 hours/day, and added 4 x 120mm case fans. Plugging your hardware into the eXtreme PSU calculator, the calculated load is just 494W, with the minimum recommended being 544W. So I would look at the 650W just to provide some extra wiggle room should you decide to add more hardware a year or two down the road.
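The arithmetic behind that kind of estimate is easy to reproduce. A rough sketch with ballpark figures (the per-component numbers below are illustrative guesses based on published TDP/board-power ratings, not measured draws):

```python
# Rough load estimate for the build in this thread, using ballpark
# TDP / board-power figures rather than measured draws.
estimated_draw_w = {
    "i5 6600 (TDP)": 65,
    "980 Ti (board power)": 250,
    "motherboard + RAM": 50,
    "3x SSD + 4x HDD": 45,
    "fans / misc": 30,
}
total = sum(estimated_draw_w.values())
for capacity in (550, 650, 850):
    print(f"{capacity}W PSU would sit at ~{100 * total / capacity:.0f}% load")
```

Even with generous numbers the build lands well under 550W, which is why 850W counts as overkill.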


----------



## Ian Davis (Mar 20, 2016)

Cheers Bill.
After two and a half days with the GPU running off the modular rail and not a single hiccup, I think that's pretty much confirmed a PSU fault now. Based on what you just said, I'll go with the 750W EVGA SuperNova G2, as there are 6 x 140mm fans running (radiator and case, plus hot Australian summers, autumns, winters and springs; I live in Queensland, where a cold day is 60F and 100F is just warming up, so airflow is paramount) and I like a bit of extra wiggle room. Never know how much power the new NVIDIA / AMD GPUs are going to ask for.

So thanks for the advice on capacity, saved me a chunk dropping down to the 750w.


----------



## EarthDog (Mar 20, 2016)

A 650W PSU is building in A LOT of headroom on that machine. I've run a 980 Ti to its limits (stock BIOS) and a 6700K around 4.8GHz and hit around 495W AT THE WALL (90% efficient PSU = ~450W actual). That's with a pump, 5 fans, 2 HDDs, and 2 SSDs.

Remember, the GPU you have has a power limit in its stock BIOS; typically 10% or less extra is allowed. It's a 250W card + 10% = 275W.

Those calculators tend to overestimate to compensate for people buying crap PSUs. I also don't understand why running your PC longer makes it use more power; that makes no sense unless the hardware was under thermal runaway and losing efficiency. Anyway, its accuracy concerns me, particularly considering my actual results from reviewing and overclocking these past few years.

Power use on products has remained the same for more performance, or gone down for the same or more performance. I wouldn't imagine next-gen cards will use more than where they are now... technologies are using less power for more performance. A 650W G2 would be perfect and allow for overclocking and any other addition except another GPU.

A quality 750w psu is good for 2 980tis.


----------



## Ian Davis (Mar 20, 2016)

Thanks for the input Earth.

It really comes down to an economics equation now: the 750 G2 is $20 more than the 650 G2, whereas the 850 G2 is $70 more than the 750. I'll happily pay the extra $20 and have a stack of built-in headroom so that I don't have to worry about whether to get a Pascal or another 980 Ti in a few months' time (or who knows, maybe a 2nd Pascal, hehe).


----------



## Sasqui (Mar 20, 2016)

Right before my Hyper PSU totally failed (taking a MB with it), the system started doing the random screen-blank thing. The PSU was the last thing I suspected. Just FYI from experience.


----------



## Bill_Bright (Mar 20, 2016)

Ian Davis said:


> I live in Queensland where a cold day is 60F and 100F


And you don't have air conditioning? Outdoor temperatures can surely play a critical role in case cooling, but not so much if the computer is operated indoors in a controlled environment.

750W with a quality supply is still overkill, even with 6 fans and alternative cooling (and a requirement for added wiggle room). That said, with a quality supply with a relatively flat efficiency curve, your ongoing energy costs will effectively be the same with the 750 vs the 650 (or even the 550). The only harm will be to your bank account for the initial purchase. Either way, you are getting a good supply.


----------



## eidairaman1 (Mar 20, 2016)

I wouldn't worry about overkill on the PSU; he will spend his money the way he wants. I honestly think a 750W PSU is a good compromise.

Look at my signature rig. It's only Gold rated, but it handles the OCs right to the point my cooling can't keep up anymore due to temp spikes in the CPU.


----------



## Bill_Bright (Mar 20, 2016)

eidairaman1 said:


> I honestly think a 750W PSU is a good compromise.


I agree. And as I said, it is a good supply regardless.

The problem (?) is many firmly believe bigger is always better. It's not.



eidairaman1 said:


> It's only Gold rated


"_Only_" gold??? You say that as if it is something to apologize for. IMO, there's no shame in "Bronze" so to suggest Gold is somehow not totally worthy is very misleading. I do urge all buying to get at least Bronze, and Gold if the budget allows. But I see no reason to go beyond Gold. Gold only represent slightly better efficiency than Bronze, but it does not suggest better stability, more accurate voltage regulation, better reliability or a longer life expectancy! And Platinum and Titanium don't over Gold either.

In fact, IMO, the only real benefit of paying the premium prices for a Platinum and especially a Titanium over a Gold is bragging rights. The biggest spread between Gold and the top-rated Titanium is just 4%, at 50% load. It would take a very long time to recoup the added cost of those supplies with the only slightly lower energy costs.
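The payback arithmetic is easy to run for yourself. A sketch, assuming a 300W average draw, 90% vs 94% efficiency, a $0.25/kWh tariff and an $80 price premium (all four figures are assumptions to replace with your own):

```python
# How long would a Titanium-class premium take to pay back vs Gold?
# All inputs are assumptions: average DC load, efficiencies, tariff, premium.
def annual_cost(load_w: float, efficiency: float, price_per_kwh: float) -> float:
    wall_watts = load_w / efficiency          # draw at the wall
    return wall_watts / 1000 * 24 * 365 * price_per_kwh

gold = annual_cost(300, 0.90, 0.25)
titanium = annual_cost(300, 0.94, 0.25)
savings = gold - titanium
print(f"Saves ${savings:.2f}/year; an $80 premium pays back in {80 / savings:.1f} years")
```

Note this assumes a constant 300W load running 24/7; at a typical desktop duty cycle, with the machine idling or off most of the time, the payback stretches out to many years.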


----------



## EarthDog (Mar 21, 2016)

eidairaman1 said:


> I wouldn't worry about overkill on the PSU; he will spend his money the way he wants. I honestly think a 750W PSU is a good compromise.
> 
> Look at my signature rig. It's only Gold rated, but it handles the OCs right to the point my cooling can't keep up anymore due to temp spikes in the CPU.


Umm, you could run 3 of your PCs on that PSU... it had better handle one.

Titanium-rated PSUs will pay for themselves after a couple of years for miners or F@H users, etc... those that run loaded 24/7. Otherwise, it's tough to recoup those costs. It is more efficient at idle than Gold since it has an additional test point at 10% load, as opposed to 20%, where Gold and lower start. Otherwise, there isn't much point, as Bill said.


----------



## Ian Davis (Mar 21, 2016)

There is also the issue of supply here in Australia; we do not have the same levels of availability found in the US or Europe.
Since I would prefer a Gold-rated supply, and taking into consideration shipping costs, availability and a 650W minimum, the options are:

XFX TS series 650 gold - $119
Evga SuperNova 650w G1 - $139
XFX XTR modular series 750 gold - $149
Seasonic G-650 - $155
Thermaltake toughpower 750w gold - $159
Evga SuperNova750 G2 - $159
Seasonic G750 - $179
Seasonic XP-760 Platinum V2 - $229
Corsair HX750i Platinum - $229

Just to note, my PC is on 24/7, though not under load for the majority of it, and in a hot environment; I don't tend to turn the aircon on till the mercury passes 35C / 95F.
All prices are in Aussie $, and as I live semi-rural, there's an additional $20 shipping.
Don't really see the point of going Platinum, just listed them for reference more than anything.
Thoughts?


----------



## EarthDog (Mar 21, 2016)

You won't make up the cost, so there isn't any point in going Platinum for you.

That XFX TS series is a very solid PSU... go get that one: http://www.jonnyguru.com/modules.php?name=NDReviews&op=Story6&reid=426


----------

