How to overclock GPU

Logforme
Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0
Topic 196078

I just got a GTX580 and thought I'd return to crunch some E@H workunits. The CUDA app seems to work OK, but it's only running at 60% GPU utilization. I guess this is how the app is written, but I thought I might speed it up by doing some overclocking.

There seem to be 4 parameters you can overclock on the GTX580: Core Voltage, Core Clock, Shader Clock and Memory Clock. I have fiddled around a little and gotten some speed increase, but I'm curious about which parameter would benefit the E@H CUDA app the most.
I'm not much interested in overclocking, say, the memory, increasing the heat and noise on the card, only to find out that the E@H app doesn't benefit at all from the memory overclock :)

So, my question: Which overclock parameter(s) benefits the E@H app the most?

Bernd Machenschalk
Bernd Machenschalk
Moderator
Administrator
Joined: 15 Oct 04
Posts: 4276
Credit: 245534352
RAC: 11020

I doubt that the E@H app would benefit from overclocking at all. The bottleneck of the computation is the part that's still done on the CPU. If you want to speed up the E@H CUDA apps, get a faster CPU. Besides, expect the rate of invalid results to rise significantly when overclocking the GPU.

BM

Logforme
Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0

Thanks for the reply.

Well, that's a little depressing. I realize the E@H calculations might be hard to run 100% on the GPU, but when I crunched for Milkyway@Home the speed difference between CPU and GPU crunching was amazing. That's why I stopped crunching on the CPU; it felt like a complete waste of electricity.

I read something about running more than one GPU task simultaneously. Would that bring the GPU to 100% utilization? How is that done?

Toobster
Toobster
Joined: 19 Aug 11
Posts: 7
Credit: 13881361
RAC: 0

To run more than one GPU task you need to use a tweak with a file called app_info.xml.
See this thread:

http://einsteinathome.org/node/196075
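
For anyone who can't open the link, here is a minimal sketch of what such an app_info.xml can look like. The app name, plan class and executable name below are placeholders, not the real ones; they have to match the actual E@H CUDA app entries in your client_state.xml, so take the exact values from the linked thread:

    <app_info>
        <app>
            <name>einsteinbinary_BRP4</name>  <!-- placeholder: use the real app name -->
        </app>
        <file_info>
            <name>einsteinbinary_BRP4_1.00_cuda32.exe</name>  <!-- placeholder executable name -->
            <executable/>
        </file_info>
        <app_version>
            <app_name>einsteinbinary_BRP4</app_name>  <!-- must match <name> above -->
            <version_num>100</version_num>            <!-- placeholder version -->
            <avg_ncpus>0.2</avg_ncpus>
            <plan_class>BRP3cuda32</plan_class>       <!-- placeholder plan class -->
            <coproc>
                <type>CUDA</type>
                <count>0.5</count>  <!-- 0.5 GPU per task, so two tasks run at once -->
            </coproc>
            <file_ref>
                <file_name>einsteinbinary_BRP4_1.00_cuda32.exe</file_name>
                <main_program/>
            </file_ref>
        </app_version>
    </app_info>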

tolafoph
tolafoph
Joined: 14 Sep 07
Posts: 122
Credit: 74659937
RAC: 0

Quote:
I read something about running more than one GPU task simultaneously. Would that bring the GPU to 100% utilization? How is that done?

Quote:
To run more than one GPU task you need to use a tweak with a file called app_info.xml.
See this thread:
http://einsteinathome.org/node/196075

For additional information you can also read this thread: http://einsteinathome.org/node/195643 (4 WUs on a GTX480).

ExtraTerrestrial Apes
ExtraTerrestria...
Joined: 10 Nov 04
Posts: 770
Credit: 544744036
RAC: 180736

1st priority: run several WUs simultaneously to make better use of your hardware as-is.

Then:
- if you still want to overclock, it's most probably the shader clock that matters, although on your card this is tied to the core clock anyway. Leave the memory clock alone.
- don't increase the voltage on a GF110 chip (yours); it's usually running hot enough already. You may instead want to try lowering the voltage to save power and reduce heat & noise. Be careful, though: at lower voltages the chip can only reach lower clocks reliably, i.e. you're working directly against overclocking.

MrS

Scanning for our furry friends since Jan 2002

Jeroen
Jeroen
Joined: 25 Nov 05
Posts: 379
Credit: 740030628
RAC: 4

If you run multiple work units at once on your 580, make sure to also have the card installed in an x16 slot. The extra PCI-E lanes do benefit performance with this application. As for overclocking, I run my 580s at 850 MHz with the stock memory frequency. I am still able to undervolt my cards at this frequency without sacrificing stability. I run that in combination with my 920 @ 4.2 GHz. The combination of the two has made it possible to complete three work units at once in around 3200-3300 seconds. GPU utilization is on average 88%.

Logforme
Logforme
Joined: 13 Aug 10
Posts: 332
Credit: 1714373961
RAC: 0

Thanks for all the help guys.

Fred J. Verster
Fred J. Verster
Joined: 27 Apr 08
Posts: 118
Credit: 22451438
RAC: 0

Quote:

I doubt that the E@H app would benefit from overclocking at all. The bottleneck of the computation is the part that's still done on the CPU. If you want to speed up the E@H CUDA apps, get a faster CPU. Besides, expect the rate of invalid results to rise significantly when overclocking the GPU.

BM

Another possibility is running 2 per GPU, but I would not do this unless it's a Fermi (400/500 series) and you are familiar with editing the app_info.xml file. One typo...
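
The critical line for running 2 per GPU is the <count> element inside <coproc> in app_info.xml. A sketch of just that part (0.5 means each task reserves half a GPU, so two run side by side):

    <coproc>
        <type>CUDA</type>
        <count>0.5</count>
    </coproc>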
