Long WU Runtimes


Message boards : Number crunching : Long WU Runtimes

Author Message
frankhagen
Send message
Joined: 22 Oct 08
Posts: 21
Credit: 61,521
RAC: 1
Message 1973 - Posted: 7 Mar 2012, 19:42:53 UTC

could someone over there pretty please tell Tom how BOINC works?

sending out WUs that run less than a minute is freaky enough.
coming up with a batch of WUs that take hours directly after that is way too much.
and all of that with a deadline close to 24h is really over the top. :(

and that's not even talking about CreditNew yet - it will fry him for that...

Profile Tom
Volunteer moderator
Project administrator
Project developer
Avatar
Send message
Joined: 23 Jun 08
Posts: 481
Credit: 207,490
RAC: 70
Message 1974 - Posted: 7 Mar 2012, 20:48:43 UTC - in response to Message 1973.

Awesome :)

The current batch of workunits was tuned to run for exactly one hour. This was based on runtime data gathered from internal testing. We can scale the work back appropriately if it is running far longer than an hour per workunit on most hosts.

Furthermore, the cognitive models computed through MindModeling.org are contributed from multiple sources. Model composition will vary from job to job, and thus from workunit to workunit, so building a one-size-fits-all scheduling system is difficult. Right now we baseline the model internally and then scale the workunit size from there. However, given that different inputs can affect a model's behavior and runtime, and that the performance of host machines varies, the process won't always be accurate.
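Roughly, the scaling idea could be sketched like this (an illustrative sketch only; the function name, parameters, and numbers here are hypothetical, not MindModeling's actual code):

```python
# Illustrative sketch: pack enough model runs into a workunit that its
# expected runtime lands near a one-hour target, based on an internally
# measured baseline runtime for a single run on a reference host.
# All names and values here are hypothetical.

TARGET_RUNTIME_S = 3600  # desired runtime per workunit (~1 hour)

def runs_per_workunit(baseline_run_s, min_runs=1, max_runs=10000):
    """Given the measured runtime of one model run (seconds),
    return how many runs to pack into a single workunit."""
    if baseline_run_s <= 0:
        raise ValueError("baseline runtime must be positive")
    n = round(TARGET_RUNTIME_S / baseline_run_s)
    # Clamp so pathological baselines never produce empty or huge WUs.
    return max(min_runs, min(max_runs, n))

# e.g. a model measured at ~12 s per run on the baseline host:
print(runs_per_workunit(12))  # -> 300
```

Since the baseline is measured on one reference host, actual runtimes will still spread with host speed and model inputs, which is exactly the inaccuracy described above.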

frankhagen
Send message
Joined: 22 Oct 08
Posts: 21
Credit: 61,521
RAC: 1
Message 1975 - Posted: 7 Mar 2012, 21:04:21 UTC - in response to Message 1974.

> Awesome :)

exactamento!

> The current batch of workunits were intended to be optimized at exactly one hour. This was based off of runtime data gathered from internal testing. We can scale back the work appropriately if it is running far longer than an hour per workunit for most hosts.


and exactly THAT is the first thing you do not want to do if you are running current BOINC server code, unless you have opted out of CreditNew.

but i have to accept that you've got no clue what i am talking about.

have fun, best of luck - i'm out of here until things get sorted out...

Profile cstanley
Volunteer moderator
Project administrator
Project developer
Project scientist
Send message
Joined: 19 Jun 09
Posts: 1
Credit: 13,142
RAC: 0
Message 1976 - Posted: 7 Mar 2012, 21:49:44 UTC - in response to Message 1975.

We hold fpops_est constant, so FLOPS will be perfectly correlated with credits assigned. On top of that, we are tuning the amount of work sent per workunit, so that runtime for each workunit (regardless of which model/job is running) will be about 1 hour.

For a volunteer, this means

[1] Correct amount of credit is assigned for all jobs, and
[2] Average model runtime stays fairly constant across jobs (~ 1 hr).
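The arithmetic behind [1] can be sketched as follows (a hypothetical illustration, using BOINC's nominal conversion of 200 cobblestones per GFLOPS-day; the function name is made up):

```python
# Hypothetical sketch: with rsc_fpops_est held constant, every workunit
# claims the same credit, whatever model/job it contains, because credit
# is derived directly from the FLOP estimate.

COBBLESTONES_PER_GFLOPS_DAY = 200  # BOINC's nominal credit rate

def credit_for(fpops_est):
    """Credit claimed for a workunit with the given FLOP estimate."""
    gflops_days = fpops_est / 1e9 / 86400  # FLOPs -> GFLOPS-days
    return gflops_days * COBBLESTONES_PER_GFLOPS_DAY

# One hour of work on a nominal 1 GFLOPS host ~= 3.6e12 FLOPs:
print(round(credit_for(3.6e12), 2))  # -> 8.33
```

Since fpops_est is the same for every workunit, the claimed credit is a fixed constant per workunit, independent of which model happens to be inside it.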

Seems like a good solution to me. Let me know if you have any thoughts/suggestions.

-Clayton

frankhagen
Send message
Joined: 22 Oct 08
Posts: 21
Credit: 61,521
RAC: 1
Message 1984 - Posted: 8 Mar 2012, 16:11:49 UTC - in response to Message 1976.

> We hold fpops_est constant, so FLOPS will be perfectly correlated with credits assigned. On top of that, we are tuning the amount of work sent per workunit, so that runtime for each workunit (regardless of which model/job is running) will be about 1 hour.
>
> For a volunteer, this means
>
> [1] Correct amount of credit is assigned for all jobs, and
> [2] Average model runtime stays fairly constant across jobs (~ 1 hr).
>
> Seems like a good solution to me. Let me know if you have any thoughts/suggestions.
>
> -Clayton


it does not work - plain and simple.

http://www.mindmodeling.org/beta/results.php?hostid=20129

it has happened on each and every similar project that switched to CreditNew.
but i know everyone simply needs to dig out their very own pitfall.




Copyright © 2014 MindModeling.org