Low memory clock on Maxwell2 cards (960/970/980, probably Titan X)

Manuel Palacios
Joined: 18 Jan 05
Posts: 40
Credit: 224259334
RAC: 0

Yes indeed; however, I have found that the core clock makes little difference in completion times with the BRP app, so I do not overclock the core, just the memory. The app was much more memory-bound at version 1.52; with v1.57 and CUDA55 it no longer needs such an aggressive memory clock.

archae86
Joined: 6 Dec 05
Posts: 3145
Credit: 7023394931
RAC: 1809313

My GTX 970 host has run for months at the current settings, so far as I recall. I did not find a need to relax them when the CUDA55 Parkes app supplanted CUDA32. I don't run any other BOINC work on it.

GPU-Z reports:
GPU core clock 1427 MHz
GPU memory clock 1949 MHz
VDDC 1.20 V (I did not tamper with this)
Power about 82.5% of TDP

My biggest difficulty in the whole exercise was finding a reproducible way to reapply the overclock after a system reboot. I eventually succeeded with a scheme that uses a batch file, launched a little after user logon, which runs two Nvidia Inspector command-line commands to set the overclock and then starts boincmgr; a rough sketch follows at the end of this post.

As the system is my wife's primary PC, I needed something which would work reliably on reboot without manual intervention. So far so good.
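The batch file boils down to something like this (a minimal sketch only: the delay, GPU index and clock values are placeholders rather than my exact settings, and the default install paths for Nvidia Inspector and BOINC are assumed):

rem Delayed-launch overclock script, started from a shortcut in the Startup folder.
rem All values below are illustrative placeholders, not my actual settings.
timeout /t 60 /nobreak
"c:\Program Files\NVIDIA Inspector\nvidiaInspector.exe" -setBaseClockOffset:0,0,100
"c:\Program Files\NVIDIA Inspector\nvidiaInspector.exe" -setMemoryClock:0,2,3900
start "" "C:\Program Files\BOINC\boincmgr.exe"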

Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 740030628
RAC: 556

With the Maxwell cards, NVIDIA Inspector works great in Windows for overriding the P2 memory frequency.

Has anyone found a tool or an equivalent way to adjust the P2 memory frequency, or force P3, in Linux when running CUDA apps? I have been searching around on Google but so far have not found anything. When I run Einstein apps, below is what I see in nvidia-settings with Coolbits enabled.

Stef
Joined: 8 Mar 05
Posts: 206
Credit: 110568193
RAC: 0

On Linux I use nvidia-smi to set it to the maximum frequency.
For my card that is "nvidia-smi -ac 3600,1544": the first value is the memory clock (half the DDR data rate), the second the GPU clock.
To show the supported clock combinations, run "nvidia-smi -q -d SUPPORTED_CLOCKS".
For your card I assume "nvidia-smi -ac 3505,1531".
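Put together, the sequence looks roughly like this (run as root; the clock pair here is just the example from above and has to come from the SUPPORTED_CLOCKS list for your own card):

# keep the driver loaded so the application clocks are not lost when no client is running
nvidia-smi -pm 1
# list the valid memory,graphics clock pairs for this GPU
nvidia-smi -q -d SUPPORTED_CLOCKS
# apply application clocks: memory clock first, then GPU clock
nvidia-smi -ac 3505,1531
# to return to the defaults later: nvidia-smi -rac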

Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 740030628
RAC: 556

Quote:
On Linux I use nvidia-smi to set it to the maximum frequency.
For my card that is "nvidia-smi -ac 3600,1544": the first value is the memory clock (half the DDR data rate), the second the GPU clock.
To show the supported clock combinations, run "nvidia-smi -q -d SUPPORTED_CLOCKS".
For your card I assume "nvidia-smi -ac 3505,1531".

That worked perfectly. Both cards are now running at the P3 level and I am able to further overclock the memory past spec.

I did not realize that NVIDIA added Geforce support to nvidia-smi.

Thanks!

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 536564330
RAC: 188300

Interesting that the default memory clock in P2 under Linux was 6.6 GHz, instead of 6.0 GHz under Windows. In the meantime I've read some justification along the lines of "we want to make sure the calculations are correct, hence the downclock". Which, if turned around, would mean the memory is considerably unstable at the default gaming clocks, and that the hardware is more stable under Linux. Or the two driver development teams simply don't know what the other one set.

MrS

Scanning for our furry friends since Jan 2002

Jim1348
Joined: 19 Jan 06
Posts: 463
Credit: 257957147
RAC: 0

FWIW, my GTX 960, 970 and 980 all have this problem on Folding (since it is OpenCL), and increasing the clocks helps some. But the memory controller load (MCL) is generally less than 50%, usually 20% to 35%, and sometimes less than 10%, depending on the work units.

Fortunately the GTX 750 Tis that I use for Einstein do not have this problem. Thanks for pointing it out.

Jacob Klein
Joined: 22 Jun 11
Posts: 45
Credit: 114028485
RAC: 0

ExtraTerrestrial Apes:

Thanks for this thread! I confirmed the problem on my eVGA GTX 970 FTW, which crunches GPUGrid tasks two at a time but sits in the P2 power state at 3005 MHz instead of the expected 3505 MHz, on Windows 10 x64 with the latest 358.59 drivers.

So, I changed my "Max Boost All.bat" file, to which I have a shortcut in:
C:\ProgramData\Microsoft\Windows\Start Menu\Programs\StartUp
(so that it runs every time I log in).

I also have two GTX 660 Ti GPUs in the rig, each overclocked to the "perfect amount" at which no work units fail at all. The GTX 970 is GPU device 2.

So, I changed my startup .bat file...
FROM:
"c:\Program Files\NVIDIA Inspector\nvidiaInspector.exe" -setPowerTarget:0,123 -setBaseClockOffset:0,0,32 -setPowerTarget:1,140 -setBaseClockOffset:1,0,-46 -setPowerTarget:2,110 -setTempTarget:2,1,80 -setBaseClockOffset:2,0,8
TO:
"c:\Program Files\NVIDIA Inspector\nvidiaInspector.exe" -setPowerTarget:0,123 -setBaseClockOffset:0,0,32 -setPowerTarget:1,140 -setBaseClockOffset:1,0,-46 -setPowerTarget:2,110 -setTempTarget:2,1,80 -setBaseClockOffset:2,0,8 -setMemoryClock:2,2,3505

Looking forward to the potential performance increase :-p (I'm not really expecting much).

Thanks,
Jacob

ExtraTerrestrial Apes
Joined: 10 Nov 04
Posts: 770
Credit: 536564330
RAC: 188300

Update from my side: on my GTX 970 a memory clock of 3700 MHz proved not to be stable enough for GPU-Grid, yielding about one or two errors per week. Switching back to 3500 MHz fixed it, and that is still better than the "stock" 3000 MHz. The limit is probably somewhere between 3600 and 3650 MHz for my card.

MrS

Scanning for our furry friends since Jan 2002

Gamboleer
Joined: 5 Dec 10
Posts: 173
Credit: 168389195
RAC: 0

I've discovered something strange apropos to this.

I have an i5 that had a GTX 950 2GB in it. The 950 doesn't show a P2 state available in NVIDIA Inspector.

I also have an older Mac Pro that I wanted to boost as much as possible, so this weekend I bought two 2GB GTX 960s with a small factory overclock, models with 6-pin power inputs so my Pro could run them.

Many card swaps later, I've discovered that the 960 runs at almost exactly the same speed as the 950, both in the Mac under El Capitan and on the Windows 10 PC. In fact, the 950 is a tiny shade faster than the 960, despite the 960 supposedly having roughly 50% more compute capacity.

If I use Inspector to unlock the P2 memory clock on the 960, I can boost its performance by a little over 10%. Given that the card costs at least $50 more than a 950 and consumes 33% more electricity, it's hardly worthwhile, and it's not possible to "fix" in OS X at all.

Thought I would add this to the thread because the 950 does not appear to have an unlockable P2 state.
