A little under 1 hr 50 mins. After reading the hyperthreading bit I have a question, though: my benchmarks measured integer over 6400, but Einstein only sees one processor. Shouldn't it see two, and get an increase in benchmarks? Is more RAM faster for BOINC, since it's CPU-intensive? It's an oddball, so probably just a windoze driver. Running XP SP1, 220*7 / 1555 MHz, 512 MB RAM. It's a Clawhammer: SSE, SSE2, etc., 64 KB L1, 1 MB L2, Socket 754. Running BOINC 5.2.13, Albert 4.37. Also, under Device Manager / System: numeric data processor (no drivers installed, though driver signing says Microsoft XP...)? Not sure if I shouldn't just leave a good thing running :)
Hyperthreading is a feature of Intel CPUs. You have an AMD. So Einstein
is correct about the number of CPUs.
Michael
Team Linux Users Everywhere
Smack me stoopid.
Are the benchmarks really that far off? I seem to be stomping machines that I shouldn't be.
Thanks!
To put it kindly, the benchmarks are not worth the paper they are written on ...
The problem has been that there was nothing to replace them. There is now, which is why the new SETI@Home application, in conjunction with the later generations of the BOINC Client software, will go to a FLOPS-counting method (well, pseudo-FLOPS counting, as they don't count each and every one). This gives a more stable credit claim, so we no longer have the problem where one participant claims 200 Cobblestones and another, for the same unit of work, claims 25 ...
Current experience is showing a variance well under 5% ... when the new application is fielded, we shall have to see what we actually experience "in the wild", so to speak. This will not immediately solve all the problems, as there will be those who will want to run the oldest version of BOINC they can get away with ... :)
But, with the current averaging, and as more people use the more current versions of the BOINC Client software, this whole nightmare will pass ... at least on SETI@Home ... then we have to, ahem, encourage the other projects to make the changes needed to implement the improved system. And it cannot happen soon enough for me ... :)
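To illustrate the difference between the two claim methods: here is a minimal sketch, not BOINC's actual code. The host speeds, CPU times, and FLOP count are made-up numbers; the only real assumption is the standard Cobblestone definition, as I understand it (1 credit = 1/200 of a day on a reference 1-GFLOPS machine, i.e. 200 credits per GFLOPS-day).

```python
# Sketch of why benchmark-based credit claims diverge while FLOP counting
# is stable. Illustrative only -- not BOINC's implementation.

SECONDS_PER_DAY = 86_400
REFERENCE_FLOPS = 1e9          # 1-GFLOPS reference machine
CREDITS_PER_REF_DAY = 200      # Cobblestone definition: 1/200 day at 1 GFLOPS

def benchmark_claim(cpu_seconds: float, measured_flops: float) -> float:
    """Old method: scale CPU time by the host's (noisy) benchmark score."""
    days = cpu_seconds / SECONDS_PER_DAY
    return days * CREDITS_PER_REF_DAY * (measured_flops / REFERENCE_FLOPS)

def flop_counted_claim(counted_flops: float) -> float:
    """New method: count the (pseudo-)FLOPs the app actually performed."""
    ref_days = counted_flops / (REFERENCE_FLOPS * SECONDS_PER_DAY)
    return ref_days * CREDITS_PER_REF_DAY

# Two hosts finish the same work unit (~1e13 floating-point operations),
# but their benchmarks mis-measure their true speed:
print(benchmark_claim(6_000, 4e9))      # inflated benchmark  -> ~55.6 credits
print(benchmark_claim(22_000, 0.5e9))   # deflated benchmark  -> ~25.5 credits
print(flop_counted_claim(1e13))         # same claim for both -> ~23.1 credits
```

Same work, claims of roughly 55 vs. 25 under the old method; under FLOP counting, both hosts claim the same amount.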
Glad to see progress is being made; can't wait for a level field ... I feel like the V8 commercial.
Well, you can still have a V8 ...
With all the different sizes of WUs in albert, this thread is totally irrelevant. Nobody's figure means anything compared to anyone else's, or even to what they may have next week. May as well close it.
microcraft
"The arc of history is long, but it bends toward justice" - MLK
I disagree; just click on the name on the left to see what they're working with, and compare that to other machines, and their specs relative to their computing time. I have seen a few units that I did in 6k seconds that someone else took 22k on. On the benchmarks, the floating point is usually a good reference, but the measured integer seems to vary a lot.
That only applies within a particular WU. If you click on mine and check a WU, you'll find I do them in 3750 secs, but that doesn't make my rig 60% faster than yours, because I'm working on smaller WUs. You can't compare apples and oranges. The thread was for the old days of Einstein WUs, when they were all the same size. Then it had some significance.
microcraft
"The arc of history is long, but it bends toward justice" - MLK
Takes a little math: you're averaging 165.38 seconds per claimed credit, and I am running 221.13, so that would make you about 1.3x faster than me. That's only done on the last reported work, so more math would make it more accurate, but it works. Nice specs; whatcha running?
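The little bit of math above, as a quick sketch anyone can rerun with their own numbers (the two figures are from the posts; lower seconds-per-credit means faster):

```python
# Compare two hosts by seconds of CPU time per claimed credit.
# Lower sec/credit = faster, so the ratio A/B says how much faster B is.

def speed_ratio(sec_per_credit_a: float, sec_per_credit_b: float) -> float:
    """How many times faster host B is than host A."""
    return sec_per_credit_a / sec_per_credit_b

ratio = speed_ratio(221.13, 165.38)   # numbers from the post
print(round(ratio, 2))                # -> 1.34, i.e. roughly "1.3x faster"
```

As noted in the post, averaging over more reported results (rather than just the last one) would tighten the estimate.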
jl,
Even comparing credit/sec doesn't work in this case. I'm also doing SETI, running an optimized SETI app for faster production there, and using an optimized (higher-benchmarking) BOINC client to keep those credit claims at near-normal levels. Fine, as far as SETI goes: more production should be rewarded with more credit. The stickler comes when Einstein is in the mix. Because there is no optimized Einstein app, the high-marking BOINC client skews my credit claims upward, and thus the claim/sec is out with the dishwater. Other than that, your measure is a good one, and would apply well in ordinary circumstances.
My rig is described here, at Warhawk's request. Its 5 hr Einstein times made it one of the fastest single-proc rigs on Windows.
Regards,
Michael
microcraft
"The arc of history is long, but it bends toward justice" - MLK