All things AMD GPU

Keith Myers
Joined: 11 Feb 11
Posts: 4779
Credit: 17807905573
RAC: 4000038


You don't have to write cc_config.xml.  BOINC writes it automatically if you just modify one of the stock parameters for the file via the Manager.  Just go to Options >> Event Log options in the Manager, toggle the sched_op_debug setting on, and save the settings.

That automatically populates a complete cc_config.xml file with all possible parameters.

Then you can add your GPU exclude statement in the <options> section of the file and re-read the settings via the Manager's Options >> Read config files menu option.

You will need to restart BOINC to pick up the GPU exclude statement, though.
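
As a rough sketch, the relevant part of cc_config.xml would end up looking something like this, assuming device 0 is the iGPU you want to keep Einstein@Home off (check the device numbers reported in your own BOINC event log):

<cc_config>
   <options>
      <exclude_gpu>
         <url>http://einstein.phys.uwm.edu/</url>
         <device_num>0</device_num>
      </exclude_gpu>
   </options>
</cc_config>

Everything else the Manager wrote into the file stays as it is; the <exclude_gpu> block just goes inside the existing <options> section.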

 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3751
Credit: 35754429424
RAC: 39434491


astro-marwil wrote:

Hallo Mikey!

I have very little programming experience, from decades ago. What does the body of an XML file look like? There was no cc_config.xml in my BOINC, so I have to write it completely from scratch. At the moment, it's this:

start
<exclude_gpu>
<url>http://einstein.phys.uwm.edu/</url>
<device_num>0</device_num>
</exclude_gpu>
end

On my system, device 0 is my iGPU and device 1 is my Radeon 6600, according to the BOINC messages.

Even with 'begin' instead of 'start' in the first line, I still get the error message: missing start tag in cc_config.xml.

Kind regards and happy crunching

Martin

If you disable the iGPU in the BIOS, you won't have to mess around with a cc_config file at all.

Just disable it.


astro-marwil
Joined: 28 May 05
Posts: 520
Credit: 442033299
RAC: 819241


Hallo Ian & Steve !

Thank you for your answer.

I want to use the iGPU for my daily work and leave the GPU undisturbed, crunching E@H all the time. I believe this is not possible when the iGPU is disabled in the BIOS.

Kind regards and happy crunching

Martin

Tom M
Joined: 2 Feb 06
Posts: 5768
Credit: 7826214713
RAC: 3466776


astro-marwil wrote:

Hallo Ian & Steve !

Thank you for your answer.

I want to use the iGPU for my daily work and leave the GPU undisturbed, crunching E@H all the time. I believe this is not possible when the iGPU is disabled in the BIOS.

Kind regards and happy crunching

Martin

Seems like there is another parameter in the cc_config.xml file that basically says use only the most powerful GPU for crunching. Set it to "zero" instead of one and BOINC should only use the discrete GPU.

Think it says something like "use all GPUs".

Tom M

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3751
Credit: 35754429424
RAC: 39434491


Tom M wrote:

astro-marwil wrote:

Hallo Ian & Steve !

Thank you for your answer.

I want to use the iGPU for my daily work and leave the GPU undisturbed, crunching E@H all the time. I believe this is not possible when the iGPU is disabled in the BIOS.

Kind regards and happy crunching

Martin

Seems like there is another parameter in the cc_config.xml file that basically says use only the most powerful GPU for crunching. Set it to "zero" instead of one and BOINC should only use the discrete GPU.

Think it says something like "use all GPUs".

Tom M

that parameter needs to be set to 1, not 0.


Tom M
Joined: 2 Feb 06
Posts: 5768
Credit: 7826214713
RAC: 3466776


Ian&Steve C. wrote:

Tom M wrote:

astro-marwil wrote:

Hallo Ian & Steve !

Thank you for your answer.

I want to use the iGPU for my daily work and leave the GPU undisturbed, crunching E@H all the time. I believe this is not possible when the iGPU is disabled in the BIOS.

Kind regards and happy crunching

Martin

Seems like there is another parameter in the cc_config.xml file that basically says use only the most powerful GPU for crunching. Set it to "zero" instead of one and BOINC should only use the discrete GPU.

Think it says something like "use all GPUs".

Tom M

that parameter needs to be set to 1, not 0.

Unless I am confused, the parameter needs to be zero, based on the documentation.

https://boinc.berkeley.edu/wiki/Client_configuration

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3751
Credit: 35754429424
RAC: 39434491


You're confused, and not understanding how BOINC determines what the "most capable" GPU is.

Quote:

<use_all_gpus>0|1</use_all_gpus>

If 1, use all GPUs (otherwise only the most capable ones are used). Requires a client restart.

 

His issue is partly because he lacks this parameter. BOINC thinks that the iGPU is the most capable because it reports more VRAM than his discrete card (12 GB for the iGPU, 8 GB for the RX 6600). VRAM amount has higher priority than "speed" (likely FLOPS) when determining which GPU is most capable in BOINC's logic.

He needs it set to '1', and he will need an exclude (or ignore) statement for the iGPU.
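
Putting the two together, a minimal cc_config.xml for this case would look roughly like the sketch below (device 0 as the iGPU is taken from his BOINC messages; the rest is illustrative, not his actual file):

<cc_config>
   <options>
      <use_all_gpus>1</use_all_gpus>
      <exclude_gpu>
         <url>http://einstein.phys.uwm.edu/</url>
         <device_num>0</device_num>
      </exclude_gpu>
   </options>
</cc_config>

Then restart the client so both settings take effect.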


Tom M
Joined: 2 Feb 06
Posts: 5768
Credit: 7826214713
RAC: 3466776


Yet another technical detail on how BOINC actually works that has flown right over my head.

:(

A Proud member of the O.F.A.  (Old Farts Association).  Be well, do good work, and keep in touch.® (Garrison Keillor)

Keith Myers
Joined: 11 Feb 11
Posts: 4779
Credit: 17807905573
RAC: 4000038


Answered here by Richard Haselgrove. How BOINC chooses the most capable GPU

BOINC tries to use the "best" GPU in a system. BOINC's assessment of "best" is done in the '_compare' functions in coproc_detect.cpp. If I'm reading it right, the priority order for NVidia GPUs is:

1) compute capability
2) CUDA version
3) Available memory
4) Speed

 

Ian&Steve C.
Joined: 19 Jan 20
Posts: 3751
Credit: 35754429424
RAC: 39434491


Keith Myers wrote:

Answered here by Richard Haselgrove. How BOINC chooses the most capable GPU

BOINC tries to use the "best" GPU in a system. BOINC's assessment of "best" is done in the '_compare' functions in coproc_detect.cpp. If I'm reading it right, the priority order for NVidia GPUs is:

1) compute capability
2) CUDA version
3) Available memory
4) Speed

I know you're just quoting Richard's old post, but the link is broken since BOINC uses GitHub now, not Trac.

Also, the referenced coproc_detect.cpp file no longer exists and hasn't been used since around 2012. These functions are now in gpu_nvidia.cpp for NVIDIA and in gpu_amd.cpp for AMD.

AMD-specific priority:
1. double precision support
2. local RAM (VRAM size)
3. speed (peak FLOPS)

