S5R3 search strategy ?


Message boards : Cruncher's Corner : S5R3 search strategy ?

Profile Jean Jeener
Send message
Joined: 3 Jun 05
Posts: 32
Credit: 3,655,712
RAC: 599
Message 75573 - Posted: 4 Oct 2007, 10:25:48 UTC

Bruce, Bernd, and colleagues,
Please give some hints or references about the search strategy used in S5R3, which seems quite different from previous strategies. This is to satisfy my curiosity, and that of many other crunchers. Forgive me if this information has already been given without my noticing. Thank you. Jean Jeener.
____________

Profile Reinhard Prix
Project developer
Project scientist
Send message
Joined: 15 Oct 04
Posts: 6
Credit: 696,707
RAC: 106
Message 75618 - Posted: 5 Oct 2007, 9:34:39 UTC - in response to Message 75573.

Ok, let me try to give you a short summary of the "evolution" of our search strategies used in successive runs. For a slightly more general overview of where we currently stand, there is a poster on E@H [presented at a recent conference on pulsar astronomy], which you might find interesting:
G070593-03.pdf

The key step, starting with S5R2, was to move part of the "post-processing" from our server to the E@H hosts: previous searches performed one (or two) "F-statistic" searches on the host before sending back the results. These searches were performed over a number (between 17 and 60 in different runs) of different time stretches ("stacks"), which we combined using a "coincidence scheme" in the post-processing stage on the server. The amount of data (i.e. the number of candidates) that each host is allowed to send back to the server is limited, and it turned out that this was the main factor holding back our achievable sensitivity.
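
In rough Python pseudo-code, the old scheme looked something like this (a toy sketch of my own for illustration, not our actual search code; the function names and the "fstat" scoring function are made up):

    from collections import Counter

    def host_search(stacks, templates, fstat, max_candidates):
        # One F-statistic search per stack; keep only the strongest
        # candidates, because the number of candidates each host may
        # send back to the server is capped.
        results = []
        for stack in stacks:  # 17-60 stacks in the earlier runs
            scored = sorted(((fstat(stack, t), t) for t in templates),
                            key=lambda st: st[0], reverse=True)
            results.append(scored[:max_candidates])
        return results  # this is what gets uploaded to the server

    def server_coincidence(all_results):
        # Server-side post-processing: a template (assumed hashable here)
        # that scores highly in many independent stacks is worth a follow-up.
        counts = Counter()
        for per_stack in all_results:
            for _score, template in per_stack:
                counts[template] += 1
        return counts.most_common()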

The new "Hierarchical" search scheme, used since S5R2, performs F-statistic searches over 84 different stacks, then combines the results by a sophisticated coincidence scheme ("Hough transform") on the host, and only *then* sends back the results to the server. This avoids the data-returning bottleneck of previous runs and substantially increases the expected sensitivity (by about a factor of 6!)

The first Hierarchical search [S5R2] suffered from certain limitations (too technical to go into here ...) in the workunit design, due to the new code and search scheme. These limitations were overcome in S5R3 by splitting the sky into several patches and having each workunit search only one patch at a time, instead of the whole sky at once.
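
To make the splitting concrete, here is a crude illustration (my own made-up equal-angle grid; the actual sky tiling of the search is chosen more carefully):

    import math

    def make_workunits(n_ra, n_dec):
        # Cut the sky into an n_ra x n_dec grid of patches in right
        # ascension / declination; each patch is searched by one workunit,
        # instead of one workunit covering the whole sky.
        workunits = []
        for i in range(n_ra):
            for j in range(n_dec):
                workunits.append(dict(
                    ra_min=2 * math.pi * i / n_ra,
                    ra_max=2 * math.pi * (i + 1) / n_ra,
                    dec_min=-math.pi / 2 + math.pi * j / n_dec,
                    dec_max=-math.pi / 2 + math.pi * (j + 1) / n_dec,
                ))
        return workunits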

The resulting current search is a substantial leap forward for E@H, and promises unprecedented sensitivity to gravitational waves from spinning neutron stars. However, we are already working on future improvements to this scheme, which should allow us to further increase our reach in distance to spinning neutron stars (namely by increasing the range of frequency spin-downs searched over).

Hope this helps clarify a bit of what is going on "behind the scenes".
Best,
Reinhard.

____________

darkpella
Send message
Joined: 11 Sep 05
Posts: 2
Credit: 1,561,167
RAC: 0
Message 75623 - Posted: 5 Oct 2007, 10:42:30 UTC - in response to Message 75618.

...These limitations were overcome in S5R3 by splitting the sky into several patches and having each workunit search only one patch at a time, instead of the whole sky at once...


Hi,

I noticed that S5R2 WUs took considerably longer to crunch than those of earlier runs. Does this patching scheme reduce the time needed to crunch a single WU, or does the increased sensitivity eat up all of the reduction in computational effort required for a single WU?

Thanks

darkpella
____________
Profile Jean Jeener
Send message
Joined: 3 Jun 05
Posts: 32
Credit: 3,655,712
RAC: 599
Message 75629 - Posted: 5 Oct 2007, 14:26:52 UTC

Many thanks to Reinhard Prix for a quick, informative answer. Best regards, Jean Jeener.
____________

DanNeely
Send message
Joined: 4 Sep 05
Posts: 1120
Credit: 189,028,229
RAC: 241,477
Message 75640 - Posted: 5 Oct 2007, 22:32:02 UTC

Interesting: looking at the poster, it appears that detecting any known neutron stars will probably have to wait until Virgo. The Crab pulsar's ~60 Hz signal sits almost directly on top of the largest noise spike in the sensitivity curve.
____________

Profile rbpeake
Send message
Joined: 18 Jan 05
Posts: 230
Credit: 99,589,736
RAC: 303,242
Message 75645 - Posted: 6 Oct 2007, 0:27:34 UTC - in response to Message 75618.

...This avoids the data-returning bottleneck of previous runs and substantially increases the expected sensitivity (by about a factor of 6!)...

Hope this helps clarify a bit of what is going on "behind the scenes".
Best,
Reinhard.

It does indeed, and thank you very much! It certainly answers the question of why we continue to crunch S5 data...because there is a lot more to "mine" out of the data, and discovering the best data-mining techniques is half of what the LIGO search is all about! Fascinating! :)
____________
Regards,
Bob P.
darkpella
Send message
Joined: 11 Sep 05
Posts: 2
Credit: 1,561,167
RAC: 0
Message 75714 - Posted: 8 Oct 2007, 6:41:31 UTC

Weeell,

I didn't get any reply to my first question, so I'll try a second one...

Since several releases of the S5 crunching application (S5R1, S5R2, S5R3, and I guess there will be more to come) have been run under E@H, did the WUs we crunched come from the very same experimental data collected by LIGO, or were different data sets used for different runs? I mean, will you be able to spot the effects (if any) of the different sensitivity levels of the different computational schemes simply by looking at what was found?

Bye

darkpella
____________

Profile Bernd Machenschalk
Volunteer moderator
Project administrator
Project developer
Avatar
Send message
Joined: 15 Oct 04
Posts: 3612
Credit: 128,553,841
RAC: 53,700
Message 75718 - Posted: 8 Oct 2007, 9:57:08 UTC - in response to Message 75714.
Last modified: 8 Oct 2007, 11:35:27 UTC

Since several releases of the S5 crunching application (S5R1, S5R2, S5R3, and I guess there will be more to come) have been run under E@H, did the WUs we crunched come from the very same experimental data collected by LIGO, or were different data sets used for different runs?

I don't have time to look it up, but off the top of my head:
- S5R1 used roughly the same amount of data as our S4 runs, picked from the S5 data that was available at the time, which covered about half a year. S5RI used the same data set, searching for sources with different "spindowns" than those we looked for in S5R1.
- For S5R2 we used more data (and thus a new data distribution scheme), from the first 13 or 14 months of S5 (S5 ultimately lasted 22 months). The parameter ranges we searched in S5R2 were limited by a number of (mostly technical) things (Reinhard mentioned this); in S5R3 we are searching over much larger ranges (of frequency and spindown) in the same data we used for S5R2.
- According to current plans, S5R4 will cover the whole S5 data set, which wasn't available until this month (and pre-processing will still take a while anyway).
I mean, will you be able to spot the effects (if any) of the different sensitivity levels of the different computational schemes simply by looking at what was found?

Sorry, I don't understand that part.
It might help to keep in mind that it isn't (only) the sensitivity that varies between the searches, but also the properties of the GW sources we are looking for.

BM
ExtraTerrestrial Apes
Avatar
Send message
Joined: 10 Nov 04
Posts: 710
Credit: 39,144,144
RAC: 1,239
Message 75737 - Posted: 8 Oct 2007, 22:43:49 UTC

Thanks for the information, Reinhard and Bernd!

MrS
____________
Scanning for our furry friends since Jan 2002
