Download speeds slow for BRP data



Message boards : Cruncher's Corner : Download speeds slow for BRP data

Profile MarkJ
Avatar
Send message
Joined: Feb 28 08
Posts: 214
Credit: 25,075,749
RAC: 513
Message 113982 - Posted 10 Sep 2011 8:30:14 UTC

    I noticed today that the BRP data downloads were very slow and even going into backoff. Is it that E@H has so much GPU power at its disposal that the network is unable to keep up?

    Maybe it's a similar situation to S@H, where there are so many hungry GPU hosts and insufficient network bandwidth to deal with them all at once.

    I wonder if we could come up with some way to reuse some/all of the data files rather than having to download 8 x 4 MB files for every WU (yes, I know this has been asked before).
    ____________
    BOINC blog

    Profile Bikeman (Heinz-Bernd Eggenstein)
    Forum moderator
    Project administrator
    Project developer
    Avatar
    Send message
    Joined: Aug 28 06
    Posts: 3225
    Credit: 73,764,233
    RAC: 27,484
    Message 114007 - Posted 11 Sep 2011 14:10:54 UTC - in response to Message 113982.

      Last modified: 11 Sep 2011 14:14:48 UTC

      Hi

      The BRP4 work unit generator was improved recently and is now doing a much better job of filling volunteers' task queues. So I guess there is a bit more downloading going on right now compared to when the WU generator was too slow.

      Anyway, as long as the downloads complete in time for the WU to run and you have a permanent Internet connection, I personally don't mind them being slow. If you are using some kind of dial-up, or pay by connection time, this is a real pain of course.


      Unlike the GW search data, the input data for the BRP search is unique to each task and is completely consumed by the app: no other volunteer (except for the validating "wingman") will get another copy of the data files your app gets.

      This is because a certain pre-processing step (de-dispersion) of the raw input data happens on the project server, and the volunteer hosts get the de-dispersed data, not the "raw" telescope data.

      Now, if de-dispersion was done on the volunteer hosts as well, the data could be reused across work units. However, the size of the raw input data that would need to be transferred (and the required RAM to process it) is prohibitive for consumer PCs, so the scientists ruled out this possibility for the moment.

      CU
      HB
      ____________
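For readers curious what de-dispersion actually does: a radio pulse arrives later at lower frequencies, so each frequency channel is shifted back by the cold-plasma dispersion delay before the channels are summed. Below is a minimal sketch in Python using the standard textbook delay formula; it is an illustration only, not the project's actual pipeline, and all names and numbers are made up for the example.

```python
# Illustrative sketch of incoherent de-dispersion. NOT the BRP pipeline,
# just the standard idea: undo the frequency-dependent dispersion delay.

K_DM = 4.148808e3  # dispersion constant, MHz^2 s cm^3 / pc

def dispersion_delay(dm, f_mhz, f_ref_mhz):
    """Delay (seconds) of channel f_mhz relative to a reference frequency,
    for dispersion measure dm (pc/cm^3)."""
    return K_DM * dm * (f_mhz ** -2 - f_ref_mhz ** -2)

def dedisperse(channels, freqs_mhz, dm, dt):
    """channels: per-frequency time series of equal length, sample time dt.
    Shift each channel by its delay (in samples) and sum into one series.
    The modulo wrap-around is a simplification for the example."""
    n = len(channels[0])
    out = [0.0] * n
    f_ref = max(freqs_mhz)  # the highest frequency arrives first
    for series, f in zip(channels, freqs_mhz):
        shift = int(round(dispersion_delay(dm, f, f_ref) / dt))
        for i in range(n):
            out[i] += series[(i + shift) % n]
    return out
```

After the shift-and-sum, a pulse that was smeared across channels lines up at one sample, which is why the search can then run on a single de-dispersed series per trial DM.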

      Profile MarkJ
      Avatar
      Send message
      Joined: Feb 28 08
      Posts: 214
      Credit: 25,075,749
      RAC: 513
      Message 114016 - Posted 12 Sep 2011 11:14:39 UTC

        Last modified: 12 Sep 2011 11:15:09 UTC

        Unfortunately, while I have a decent DSL speed it has download limits, so I tend to use the off-peak period for a couple of the machines. The off-peak window is only 6 hours, so download speed affects those machines as they try to replenish their 1-day cache.

        Cheers,
        MarkJ
        ____________
        BOINC blog

        Richard Haselgrove
        Send message
        Joined: Dec 10 05
        Posts: 1334
        Credit: 29,572,747
        RAC: 8,105
        Message 114017 - Posted 12 Sep 2011 13:44:19 UTC - in response to Message 114007.

          Hi

          The BRP4 work unit generator was improved recently ...

          It would be interesting to learn a little bit more about the WU generation process and the stages involved. In particular, it feels to me as if it's a batch process, with blocks of WUs being released in clumps, rather than a steady flow.

          I'm also seeing great variability in download speeds. Sometimes no progress is possible at all, sometimes they crawl down at 5 or 10 KB/sec, and sometimes they come through at up to 100 KB/sec.

          It led me to wonder whether the intermediate datafile storage server had difficulty coping with both high-speed data insertion and high-speed downloading at the same time.

          hotze33
          Send message
          Joined: Nov 10 04
          Posts: 91
          Credit: 75,381,647
          RAC: 19,161
          Message 114019 - Posted 12 Sep 2011 18:41:57 UTC

            For the first time in days I now had normal download speeds. Last week it was around 8-15 kB/s.
            ____________

            Profile Stranger7777
            Avatar
            Send message
            Joined: Mar 17 05
            Posts: 322
            Credit: 95,999,699
            RAC: 69,178
            Message 114098 - Posted 17 Sep 2011 22:05:49 UTC

              I think we are really close to the moment when users' machines will be able to take a whole beam at once: download one large file of raw telescope data, crunch it completely from beginning to end (including pre-processing and post-processing), and return all the data to the server in post-processed form. There are a lot of modern user machines that could already do this job today. They already have rather good Internet connections and can hold big enough databases right in memory. So it would be great to have the opportunity to crunch BIG WUs on capable machines. BOINC already reports machine capabilities to the server, so the server could decide whether a given machine can handle a BIG job at once.

              Profile Steve Dodd
              Send message
              Joined: Mar 20 05
              Posts: 4
              Credit: 5,999,810
              RAC: 751
              Message 114249 - Posted 30 Sep 2011 14:37:48 UTC

                I'm going into backoff for nearly all of my downloads, no matter what kind of job (CPU only). DSL, constant Internet presence. (5 machines are showing this behaviour.)
                ____________

                Richard Haselgrove
                Send message
                Joined: Dec 10 05
                Posts: 1334
                Credit: 29,572,747
                RAC: 8,105
                Message 114250 - Posted 30 Sep 2011 15:06:29 UTC - in response to Message 114249.

                  I'm seeing five different behaviours:

                  1) Failure to connect at all to the download server
                  2) Connect, but an almost immediate (<20 seconds) disconnect
                  3) Connect, but around 5 minutes with no data flow before timeout and disconnect
                  4) Connect and data, but a low sustained transfer rate (<10 KB/sec)
                  5) Connect and data, with a normal/fast transfer rate (~100 KB/sec is 'normal' here)

                  Modes (1)-(4) are all too familiar from that other big CUDA project, SETI@Home, and seem to be symptomatic of an overloaded download link/router/server.

                  The data rates here are prodigious - BRP4 requires roughly four times as much data per unit of computation time as even the fastest 'shorty' SETI work. And with SETI being largely down at the moment, a lot of that data demand will have transferred here.

                  It would be interesting to know if Einstein/AEI has any publicly-accessible network monitors like SETI's Cricket graphs. Even if there's nothing available to the public, it might be worth the project's network support staff checking their internal tools, and seeing if the download system is best tuned to support the sort of volume we're seeing now - those five-minute timeouts must waste a lot of socket memory, for example.

                  Jeroen
                  Send message
                  Joined: Nov 25 05
                  Posts: 314
                  Credit: 338,771,498
                  RAC: 300,243
                  Message 114254 - Posted 30 Sep 2011 17:15:04 UTC - in response to Message 114007.

                    Is there a possibility of using a different compression algorithm for BRP4? Would using LZMA2 for example reduce the size of each task some? I am not sure what the existing compression algorithm is. With each task being 32MB and GPUs being able to process these tasks in as little as 20 minutes, the transfer requirements become significant. This is even more so the case when multiple GPUs are running.

                    FrankHagen
                    Send message
                    Joined: Feb 13 08
                    Posts: 102
                    Credit: 63,762
                    RAC: 78
                    Message 114255 - Posted 30 Sep 2011 17:28:16 UTC - in response to Message 114254.

                      Last modified: 30 Sep 2011 17:28:39 UTC

                      Is there a possibility of using a different compression algorithm for BRP4? Would using LZMA2 for example reduce the size of each task some? I am not sure what the existing compression algorithm is. With each task being 32MB and GPUs being able to process these tasks in as little as 20-minutes, the transfer requirements become significant. This is even more so the case when multiple GPUs are running.


                      well, there are several projects which use 7z http://www.7-zip.org/ to compress data.

                      Richard Haselgrove
                      Send message
                      Joined: Dec 10 05
                      Posts: 1334
                      Credit: 29,572,747
                      RAC: 8,105
                      Message 114256 - Posted 30 Sep 2011 17:39:21 UTC - in response to Message 114255.

                        Is there a possibility of using a different compression algorithm for BRP4? Would using LZMA2 for example reduce the size of each task some? I am not sure what the existing compression algorithm is. With each task being 32MB and GPUs being able to process these tasks in as little as 20-minutes, the transfer requirements become significant. This is even more so the case when multiple GPUs are running.

                        well, there are several projects which use 7z http://www.7-zip.org/ to compress data.

                        and applying 7-zip to a random data file reduces its size from 4,098 KB to 3,979 KB - about 3% compression.

                        I do think the project might have thought of that one, if it was going to be any significant use ;-)

                        FrankHagen
                        Send message
                        Joined: Feb 13 08
                        Posts: 102
                        Credit: 63,762
                        RAC: 78
                        Message 114257 - Posted 30 Sep 2011 17:45:29 UTC - in response to Message 114256.

                          and applying 7-zip to a random data file reduces its size from 4,098 KB to 3,979 KB - about 3% compression.

                          I do think the project might have thought of that one, if it was going to be any significant use ;-)


                          yup - it depends on the kind of data. at least worth a test..

                          Profile Stranger7777
                          Avatar
                          Send message
                          Joined: Mar 17 05
                          Posts: 322
                          Credit: 95,999,699
                          RAC: 69,178
                          Message 114260 - Posted 1 Oct 2011 9:25:40 UTC - in response to Message 114257.

                            Last modified: 1 Oct 2011 9:31:27 UTC


                            yup - it depends on the kind of data. at least worth a test..


                             Well, I did the test. BRP4 files can be compressed to 91% of their original size according to WinRAR. They are partly text files and partly data files with lots of zeroes in there. So it's the data we should do something about, not the network bandwidth.

                            FrankHagen
                            Send message
                            Joined: Feb 13 08
                            Posts: 102
                            Credit: 63,762
                            RAC: 78
                            Message 114270 - Posted 1 Oct 2011 14:20:59 UTC - in response to Message 114260.

                              Well. I did the test. BRP4 can be compressed to 91% of original size according to WinRAR. They are partly text files and partly data files with lots of zeroes in there. So it is the moment to do something with data. Not with the network bandwidth.


                              alexander's winrar probably is not an option because it's payware. 7z is open-source and in my experience often performs better..

                              Profile Stranger7777
                              Avatar
                              Send message
                              Joined: Mar 17 05
                              Posts: 322
                              Credit: 95,999,699
                              RAC: 69,178
                              Message 114274 - Posted 1 Oct 2011 17:27:07 UTC - in response to Message 114270.

                                Well. I did the test. BRP4 can be compressed to 91% of original size according to WinRAR. They are partly text files and partly data files with lots of zeroes in there. So it is the moment to do something with data. Not with the network bandwidth.


                                alexander's winrar probably is not an option because it's payware. 7z is open-source and to my experience often performs better..

                                I think there will be no difference in compression between WinRAR and 7z.

                                Richard Haselgrove
                                Send message
                                Joined: Dec 10 05
                                Posts: 1334
                                Credit: 29,572,747
                                RAC: 8,105
                                Message 114275 - Posted 1 Oct 2011 17:41:35 UTC - in response to Message 114274.

                                  I think there will be no difference in compression between both WinRAR and 7z.

                                  Both WinRAR and 7z can handle multiple compression formats. It isn't the tool you use that matters, it's the compression algorithm you choose, which has to balance compression ratio against server CPU load.

                                  Remember too that the servers run Linux, so whatever tool is chosen, it won't be one with "Win..." in its name ;-)

                                  Profile Bikeman (Heinz-Bernd Eggenstein)
                                  Forum moderator
                                  Project administrator
                                  Project developer
                                  Avatar
                                  Send message
                                  Joined: Aug 28 06
                                  Posts: 3225
                                  Credit: 73,764,233
                                  RAC: 27,484
                                  Message 114277 - Posted 1 Oct 2011 18:43:34 UTC

                                    Hi!

                                    If you try gzip under Linux, the result is not significantly different: around 8% compression, which is not that much.

                                    The work unit generation was kind of a bottleneck before (and compression would be just another, final stage of it), but now that this is solved, one can start to balance the 8% or so bandwidth saving against the "never change a running system" principle :-).

                                    There is even a transparent file decompression feature built into BOINC (> 5.4), so that the client will handle decompression if instructed to by the header response of the HTTP download server. So, if I understand this correctly, there would not even have to be a new client version for this; only server-side compression and config changes on (ALL!) download servers.

                                    Right?

                                    CU
                                    HBE
                                    ____________

                                    Jeroen
                                    Send message
                                    Joined: Nov 25 05
                                    Posts: 314
                                    Credit: 338,771,498
                                    RAC: 300,243
                                    Message 114280 - Posted 1 Oct 2011 20:11:01 UTC - in response to Message 114277.

                                      Last modified: 1 Oct 2011 20:11:19 UTC

                                      I tried a few different compression options with one of the 4MB files:

                                      $ du -sk p2030.20100331.G45.61-01.91.C.b0s0g0.00000_3465.*
                                      3988 p2030.20100331.G45.61-01.91.C.b0s0g0.00000_3465.7z (lzma2)
                                      4100 p2030.20100331.G45.61-01.91.C.b0s0g0.00000_3465.binary
                                      4064 p2030.20100331.G45.61-01.91.C.b0s0g0.00000_3465.binary.bz2 (-9)
                                      3952 p2030.20100331.G45.61-01.91.C.b0s0g0.00000_3465.binary.gz (-9)
                                      3988 p2030.20100331.G45.61-01.91.C.b0s0g0.00000_3465.binary.xz (lzma2)

                                      It looks like gzip has the best compression for this particular file format although the difference is not that much in any case. I tried a couple different filters with xz as well but these did not improve compression.

                                      From reading the BOINC documentation, client 5.4 or newer can handle both deflate and gzip using HTTP content encoding. The files would be compressed on the server side and the boinc client can handle decompression after download. If Apache is being used by the project then mod_deflate can handle compression automatically.

                                      http://boinc.berkeley.edu/trac/wiki/FileCompression
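Anyone can repeat this kind of comparison with Python's standard library alone. The sketch below reports compressed sizes under gzip, bzip2 and LZMA2 (the xz container) for an arbitrary byte string; it is a quick test harness for your own data files, not project code.

```python
import bz2
import gzip
import lzma

def compression_report(data: bytes) -> dict:
    """Size in bytes of `data` under each stdlib codec, plus the raw size."""
    return {
        "raw":  len(data),
        "gzip": len(gzip.compress(data, compresslevel=9)),
        "bz2":  len(bz2.compress(data, compresslevel=9)),
        "xz":   len(lzma.compress(data)),  # LZMA2, as used by 7z/xz
    }

# e.g.:  with open(task_file, "rb") as f: print(compression_report(f.read()))
```

On already-noisy data (like the de-dispersed BRP files) expect only a few percent difference between codecs, matching the numbers reported above.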

                                      Profile SciManStev
                                      Avatar
                                      Send message
                                      Joined: Aug 27 05
                                      Posts: 100
                                      Credit: 5,854,194
                                      RAC: 0
                                      Message 114283 - Posted 1 Oct 2011 21:40:46 UTC

                                        The data rates here are prodigious - BRP4 requires roughly four times as much data per unit of computation time, as even the fastest 'shorty' SETI work. And with SETI being largely down at the moment, a lot of that data demand will have transferred here.


                                        Guilty as charged.

                                        Steve
                                        ____________
                                        Crunching as member of The GPU Users Group team.

                                        Profile Stranger7777
                                        Avatar
                                        Send message
                                        Joined: Mar 17 05
                                        Posts: 322
                                        Credit: 95,999,699
                                        RAC: 69,178
                                        Message 114297 - Posted 2 Oct 2011 9:45:37 UTC

                                          I think we should try it. If it overloads the servers a little, then it's no trouble to roll everything back to normal. Am I right?

                                          Profile dskagcommunity
                                          Avatar
                                          Send message
                                          Joined: Mar 16 11
                                          Posts: 75
                                          Credit: 14,787,355
                                          RAC: 1,875
                                          Message 114300 - Posted 2 Oct 2011 11:09:56 UTC

                                            Last modified: 2 Oct 2011 11:10:30 UTC

                                            Download speeds are very low at the moment. I was away for some hours and only got 7 WUs downloaded; the 20+ others are still loading O.o I had to crunch a PrimeGrid WU before the first Einstein WU was ready. So packing alone doesn't seem to be the cure for the speed problems ^^
                                            ____________
                                            DSKAG Austria Research Team: http://www.research.dskag.at



                                            Profile MarkJ
                                            Avatar
                                            Send message
                                            Joined: Feb 28 08
                                            Posts: 214
                                            Credit: 25,075,749
                                            RAC: 513
                                            Message 114301 - Posted 2 Oct 2011 12:36:18 UTC

                                              I did suggest a change to the way BOINC handles backoffs. It seems Dr A has put it into the too hard basket. If the projects start asking for it then he may look at doing it.

                                              Date: Sat, 01 Oct 2011 10:37:23 -0700
                                              From: David Anderson
                                              Subject: Re: [boinc_dev] Suggested change to project backoffs
                                              To: boinc_dev@ssl.berkeley.edu

                                              It would be possible to do per-server backoff, but it's probably not worth the added complexity.
                                              -- David

                                              Now that Einstein is getting download issues I would suggest a change to the way backoffs work.

                                              Einstein issues different types of work from different download servers, so you get GPU work from one and CPU work from a couple of others. Seti (correct me if I am wrong) serves all types of work from all its download servers, so you could get CPU or GPU work from any one of them.

                                              I notice that in Einstein's case, as the project goes into backoff for GPU work it prevents me from getting CPU work, even though it comes from a different server. So I'd suggest that rather than backing off on a per-project basis we back off on a per-project-server-URL basis (or just a server-URL basis). This would allow Seti work to flow even if one download server is under stress. It would also work for Einstein, in that we could get CPU work while their GPU download server is stressed. It might also avoid the situation where one server gets stressed and the others then get clogged up with downloads that have gone into backoff when they didn't need to.

                                              Over to you David for your thoughts.

                                              Cheers, MarkJ

                                              ____________
                                              BOINC blog
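A rough sketch of what that per-server backoff might look like, keyed on the download host rather than the project, so a stalled GPU-data server would not block CPU-data downloads from a different host. All class names, hosts and numbers here are illustrative assumptions, not BOINC's actual scheduler logic.

```python
import time
from urllib.parse import urlparse

class PerServerBackoff:
    """Exponential backoff tracked per download host (illustrative sketch).

    Failures against one host delay only that host; other hosts of the
    same project stay available.
    """

    def __init__(self, base=60.0, cap=4 * 3600.0):
        self.base, self.cap = base, cap   # first delay and maximum delay, seconds
        self.failures = {}                # host -> consecutive failure count
        self.next_ok = {}                 # host -> earliest retry timestamp

    def _host(self, url):
        return urlparse(url).netloc

    def report_failure(self, url, now=None):
        now = time.time() if now is None else now
        h = self._host(url)
        n = self.failures.get(h, 0) + 1
        self.failures[h] = n
        self.next_ok[h] = now + min(self.cap, self.base * 2 ** (n - 1))

    def report_success(self, url):
        h = self._host(url)
        self.failures.pop(h, None)
        self.next_ok.pop(h, None)

    def may_try(self, url, now=None):
        now = time.time() if now is None else now
        return now >= self.next_ok.get(self._host(url), 0.0)
```

With this shape, a failing transfer from one host backs off only URLs on that host, while requests to every other download server proceed immediately.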

                                              Profile Steve Dodd
                                              Send message
                                              Joined: Mar 20 05
                                              Posts: 4
                                              Credit: 5,999,810
                                              RAC: 751
                                              Message 114306 - Posted 2 Oct 2011 13:58:49 UTC

                                                It's getting to the point now where I'm running out of CPU work on a couple of machines because I can't get work downloaded. Some of that work has a due date that I may not be able to meet if it isn't downloaded soon :(
                                                ____________

                                                Profile Steve Dodd
                                                Send message
                                                Joined: Mar 20 05
                                                Posts: 4
                                                Credit: 5,999,810
                                                RAC: 751
                                                Message 114359 - Posted 5 Oct 2011 0:58:21 UTC - in response to Message 114306.

                                                  looks like the logjam has broken. what, did SETI come back online?
                                                  ____________

                                                  astro-marwil
                                                  Send message
                                                  Joined: May 28 05
                                                  Posts: 277
                                                  Credit: 23,655,042
                                                  RAC: 27,526
                                                  Message 114365 - Posted 5 Oct 2011 8:23:42 UTC

                                                    Last modified: 5 Oct 2011 8:25:15 UTC

                                                    There seems to be a new problem with downloading BRP4-files.
                                                    This morning I got 63 BRP4 files within 3 hours, but all of them errored out. Now the server says my daily quota is reached. The modem doesn't show any malfunction: CRC errors are low (11 within that hour), SNR and data rate OK. I have never seen anything like this in the more than 6 years I've been here. Some of the LAT files sometimes show similar problems, but not at this rate. The last LAT file, a few minutes ago, was OK.
                                                    Kind regards
                                                    Martin
                                                    ____________

                                                    Profile SciManStev
                                                    Avatar
                                                    Send message
                                                    Joined: Aug 27 05
                                                    Posts: 100
                                                    Credit: 5,854,194
                                                    RAC: 0
                                                    Message 114366 - Posted 5 Oct 2011 11:23:55 UTC - in response to Message 114359.

                                                      looks like the logjam has broken. what, did SETI come back online?

                                                      Seti is still having all kinds of issues. Work only trickles out. It has had little effect on the computers that don't produce a great deal of work, but has crippled the top hosts.

                                                      Steve
                                                      ____________
                                                      Crunching as member of The GPU Users Group team.

                                                      Profile MarkJ
                                                      Avatar
                                                      Send message
                                                      Joined: Feb 28 08
                                                      Posts: 214
                                                      Credit: 25,075,749
                                                      RAC: 513
                                                      Message 114409 - Posted 8 Oct 2011 6:24:42 UTC - in response to Message 114301.

                                                        It seems Dr A has had a rethink after a few more emails on the mailing list and is going to look into this.

                                                        I did suggest a change to the way BOINC handles backoffs. It seems Dr A has put it into the too hard basket. If the projects start asking for it then he may look at doing it.

                                                        Date: Sat, 01 Oct 2011 10:37:23 -0700
                                                        From: David Anderson
                                                        Subject: Re: [boinc_dev] Suggested change to project backoffs
                                                        To: boinc_dev@ssl.berkeley.edu

                                                        It would be possible to do per-server backoff, but it's probably not worth the added complexity.
                                                        -- David

                                                        Now that Einstein is getting download issues I would suggest a change to the way backoff's work.

                                                        Einstein issues different types of work from different download servers, so you get GPU work from one and CPU work from a couple of others. Seti (correct me if I am wrong) collects all type of work from all the download servers so you could get CPU or GPU work from any one of them.

                                                        I notice that in Einsteins case as the project goes into back off for GPU work it prevents me from getting CPU work, even though its from a different server. So I'd suggest that rather than backing off on a per-project basis we backoff on a per-project server URL basis (or just server URL basis). This will allow Seti work to flow even if one download server is under stress. It would also work for Einstein in that we could get CPU work when their GPU download server is stressed.. It might also mean that rather than one server getting stressed and then the others get clogged up with downloads that have gone to backoff when they didn't need to.

                                                        Over to you David for your thoughts.

                                                        Cheers, MarkJ
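The per-server-URL idea above could be sketched roughly like this (a hypothetical illustration only, not actual BOINC client code; the class and method names are invented, and real BOINC backoff logic differs in its details):

```python
import time
from collections import defaultdict


class PerServerBackoff:
    """Sketch of tracking download backoff per server URL rather than
    per project, so a stressed GPU-data server does not block
    downloads from the project's other servers."""

    def __init__(self, base=60, cap=4 * 3600):
        self.base = base                    # first backoff delay, seconds
        self.cap = cap                      # maximum backoff delay, seconds
        self.failures = defaultdict(int)    # server URL -> consecutive failures
        self.retry_at = {}                  # server URL -> earliest retry time

    def record_failure(self, url):
        # Exponential backoff, doubling per consecutive failure, capped.
        self.failures[url] += 1
        delay = min(self.base * 2 ** (self.failures[url] - 1), self.cap)
        self.retry_at[url] = time.time() + delay

    def record_success(self, url):
        # A successful transfer clears the backoff for that server only.
        self.failures.pop(url, None)
        self.retry_at.pop(url, None)

    def can_download(self, url):
        # Servers with no recorded failures are always available.
        return time.time() >= self.retry_at.get(url, 0)
```

The point of the sketch is the keying: because state is held per URL, a failure on the GPU-data server leaves `can_download()` returning True for the CPU-data server, which is exactly the behaviour MarkJ is asking for.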

                                                        Fred J. Verster
                                                        Avatar
                                                        Send message
                                                        Joined: Apr 27 08
                                                        Posts: 114
                                                        Credit: 20,817,224
                                                        RAC: 8,000
                                                        Message 114455 - Posted 11 Oct 2011 22:53:17 UTC - in response to Message 114409.

                                                          Last modified: 11 Oct 2011 22:54:09 UTC

                                                          I haven't noticed any download trouble, or missing downloads, at all.
                                                          The largest download queue was SETI's, always on the fastest rigs,
                                                          because they need the most work.
                                                          Thanks, David Anderson, for the update and future plans.
                                                          ____________

                                                          Knight who says Ni N! N!

                                                          Profile Bernd Machenschalk
                                                          Forum moderator
                                                          Project administrator
                                                          Project developer
                                                          Avatar
                                                          Send message
                                                          Joined: Oct 15 04
                                                          Posts: 3274
                                                          Credit: 90,832,091
                                                          RAC: 9,943
                                                          Message 114477 - Posted 13 Oct 2011 8:49:49 UTC

                                                            Is there any other BOINC project that uses more than a single server for download? I would imagine that Einstein@home is really a special case in that respect.

                                                            I do very well understand that implementing a rather complex mechanism in the client to work around a problem of a single project that this project needs to fix anyway is considered not worth the effort.

                                                            BM

                                                            Richard Haselgrove
                                                            Send message
                                                            Joined: Dec 10 05
                                                            Posts: 1334
                                                            Credit: 29,572,747
                                                            RAC: 8,105
                                                            Message 114480 - Posted 13 Oct 2011 13:08:23 UTC - in response to Message 114477.

                                                              Is there any other BOINC project that uses more than a single server for download? I would imagine that Einstein@home is really a special case in that respect.

                                                              I do very well understand that implementing a rather complex mechanism in the client to work around a problem of a single project that this project needs to fix anyway is considered not worth the effort.

                                                              BM

                                                              CPDN (Climate Prediction) uses multiple download servers, but they've taken a different approach. The client is only given one url, but for redirection or redundancy it can be of the form

                                                              <url>http://climateapps2.oucs.ox.ac.uk/cpdnboinc/download/mirror.php?file=/hadam3p_6.14_windows_intelx86.exe</url>

                                                              - I would guess the decision as to exactly which machine handles the load is made by the PHP script running on the server.
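One way such a script could pick a machine is to hash the requested filename and redirect deterministically (purely an illustrative sketch, in Python for consistency; the mirror URLs are invented, and the real CPDN mirror.php may work quite differently):

```python
import hashlib

# Hypothetical mirror pool; these URLs are invented for illustration.
MIRRORS = [
    "http://mirror1.example.org/cpdnboinc/download",
    "http://mirror2.example.org/cpdnboinc/download",
]


def pick_mirror(filename):
    """Map a requested file path to a full URL on one of the mirrors.

    Hashing the name means the same file always resolves to the same
    mirror, while distinct files spread across the pool."""
    h = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return MIRRORS[h % len(MIRRORS)] + filename


# A real mirror.php would then answer the client's request with an
# HTTP 302 redirect to the URL returned by pick_mirror(...).
```

Load-balancing could equally be done by server load or at random; the sketch only shows why a single advertised URL is enough for the client.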

                                                              SETI@home uses a single download URL, but round-robin DNS to distribute the load across two servers. When they needed to distribute vast numbers of identical files, for example during an application roll-out, they used some form of transparent redirection to a proxy cache server, but some ISPs choked on that and prevented some clients from getting the files.

                                                              I guess having multiple servers only really helps for files which can easily (and efficiently) be made available in more than one place at once - applications, and the locality data files that only Einstein uses. But the problem of the client stalling on files required for one application class, preventing the download of files for a different application within the same project, could be more general than just Einstein.


                                                              This material is based upon work supported by the National Science Foundation (NSF) under Grants PHY-1104902, PHY-1104617 and PHY-1105572 and by the Max Planck Gesellschaft (MPG). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the investigators and do not necessarily reflect the views of the NSF or the MPG.

                                                              Copyright © 2014 Bruce Allen