librelist archives

resque around perform ensure block and shutdown

From:
Karl Baum
Date:
2010-10-19 @ 01:59
As I wrote in a previous thread, I am writing a module that caps the
number of concurrent Resque jobs processing a specific user.

http://gist.github.com/631024

It seems to work pretty well; the only problem is that when I kill the Resque
processes, the ensure block, which decrements the running count, is not
run.  As a result, when I start the Resque jobs up again, the number of
jobs running for that user is incorrectly stuck at 5, and that user's jobs
will be blocked forever.
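
[For reference, the pattern in the gist boils down to something like the
following self-contained sketch. An in-memory counter stands in for redis
here, and MAX_PER_USER, RUNNING, and with_user_slot are illustrative names,
not the gist's actual code:]

```ruby
# Per-user concurrency cap, sketched with an in-memory counter in place
# of redis. All names are illustrative.
MAX_PER_USER = 5
RUNNING = Hash.new(0)

def with_user_slot(user_id)
  raise "user #{user_id} is at capacity" if RUNNING[user_id] >= MAX_PER_USER
  RUNNING[user_id] += 1
  begin
    yield
  ensure
    # This is the decrement that never runs when the worker process is
    # killed outright (TERM/INT): the count leaks, and after a restart
    # the user looks permanently "full".
    RUNNING[user_id] -= 1
  end
end
```

[Note that the ensure does protect against exceptions raised by the job
itself; only a hard kill of the process skips it.]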

What is the proper way to handle cleanup of redis on a normal kill (SIGTERM)?

thx

-karl

Re: [resque] resque around perform ensure block and shutdown

From:
Luke Antins
Date:
2010-10-19 @ 02:08
Send SIGQUIT; the worker will shut down after the current job has finished =)

-- 
Luke Antins
http://lividpenguin.com


Karl Baum wrote:
> As i wrote in a previous thread, i am writing a module that caps the
> number of concurrent resque jobs processing a specific user.
>
> http://gist.github.com/631024
>
> Seems to work pretty well, but the only problem is when i kill the
> resque processes, the ensure block, which decrements the running count, is
> not run.  As a result, when i start the resque jobs up again, the number
> of jobs running for that user is incorrectly set to 5 and that user's jobs
> will forever be blocked.
>
> What is the proper way to handle cleanup of redis on a normal kill SIGTERM?
>
> thx
>
> -karl
>

Re: [resque] resque around perform ensure block and shutdown

From:
Karl Baum
Date:
2010-10-19 @ 15:18
So is kill -SIGQUIT the correct way to shut down resque?

Thanks!


On Oct 18, 2010, at 10:08 PM, Luke Antins wrote:

> Send SIGQUIT shutdown after the current job has finished =)
> 
> -- 
> Luke Antins
> http://lividpenguin.com
> 
> 
> Karl Baum wrote:
>> As i wrote in a previous thread, i am writing a module that caps the
>> number of concurrent resque jobs processing a specific user.
>> 
>> http://gist.github.com/631024
>> 
>> Seems to work pretty well, but the only problem is when i kill the
>> resque processes, the ensure block, which decrements the running count, is
>> not run.  As a result, when i start the resque jobs up again, the number
>> of jobs running for that user is incorrectly set to 5 and that user's jobs
>> will forever be blocked.
>> 
>> What is the proper way to handle cleanup of redis on a normal kill SIGTERM?
>> 
>> thx
>> 
>> -karl
>> 

Re: [resque] resque around perform ensure block and shutdown

From:
Adam Tucker
Date:
2010-10-19 @ 15:27
Check out lib/resque/worker.rb for the signals it handles:

From the comments:

    # TERM: Shutdown immediately, stop processing jobs.
    #  INT: Shutdown immediately, stop processing jobs.
    # QUIT: Shutdown after the current job has finished processing.
    # USR1: Kill the forked child immediately, continue processing jobs.
    # USR2: Don't process any new jobs
    # CONT: Start processing jobs again after a USR2

so QUIT is the most graceful way to shut down a worker.
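
[As a sketch of the difference, here is the same idea from Ruby, using a
`sleep` child process as a stand-in for a worker. In practice you would
send the signal from a shell, e.g. `kill -QUIT <pid>`, reading the pid
from wherever your worker records it (a pidfile path would be specific to
your setup):]

```ruby
# Spawn a stand-in "worker" and shut it down with QUIT.
pid = Process.spawn('sleep', '60')   # stand-in for a resque worker process
Process.kill('QUIT', pid)            # graceful for resque: exit after the current job
Process.wait(pid)
# Process.kill('TERM', pid) would stop a resque worker immediately,
# killing the in-flight job along with it.
```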

-Adam

On Tue, Oct 19, 2010 at 11:18 AM, Karl Baum <karl.baum@gmail.com> wrote:

> So is kill -SIGQUIT the correct way to shutdown resque?
>
> Thanks!
>
>
> On Oct 18, 2010, at 10:08 PM, Luke Antins wrote:
>
> > Send SIGQUIT shutdown after the current job has finished =)
> >
> > --
> > Luke Antins
> > http://lividpenguin.com
> >
> >
> > Karl Baum wrote:
> >> As i wrote in a previous thread, i am writing a module that caps the
> >> number of concurrent resque jobs processing a specific user.
> >>
> >> http://gist.github.com/631024
> >>
> >> Seems to work pretty well, but the only problem is when i kill the
> >> resque processes, the ensure block, which decrements the running count, is
> >> not run.  As a result, when i start the resque jobs up again, the number of
> >> jobs running for that user is incorrectly set to 5 and that user's jobs will
> >> forever be blocked.
> >>
> >> What is the proper way to handle cleanup of redis on a normal kill
> >> SIGTERM?
> >>
> >> thx
> >>
> >> -karl
> >>
>
>

How to configure god

From:
Tute
Date:
2010-10-20 @ 13:13
I want to do two things in god/resque: add more workers, maybe ten, and 
maybe the possibility of allowing more workers to be active depending on 
the CPU usage at that moment.
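
[For the first part, god's Ruby DSL handles multiple workers with a plain
loop. The following is a sketch modelled on the example config shipped in
the resque repo (examples/god/resque.god); the directory, rake invocation,
and QUEUE value are placeholders you would replace with your app's:]

```ruby
# Hypothetical god config running ten resque workers.
num_workers = 10

num_workers.times do |num|
  God.watch do |w|
    w.name     = "resque-#{num}"
    w.group    = 'resque'
    w.dir      = '/path/to/app/current'            # placeholder
    w.interval = 30.seconds
    w.env      = { 'QUEUE' => '*', 'RAILS_ENV' => 'production' }
    w.start    = 'rake environment resque:work'
    w.stop_signal = 'QUIT'   # let the current job finish on stop/restart
    w.keepalive
  end
end
```

[For the CPU part: as far as I know, god's cpu_usage condition is built for
restarting a runaway process rather than scaling the pool, so growing the
worker count with load would need tooling outside god itself.]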

Re: [resque] How to configure god

From:
Philippe Lafoucrière
Date:
2010-10-20 @ 13:21
On Wed, Oct 20, 2010 at 3:13 PM, Tute <tute.unique@gmail.com> wrote:
> I want to do two things in god/resque. Add more workers. Maybe ten and
> maybe the posibility of allow more workers active depending in the CPU
> consume in that moment.

Ok !

Re: [resque] How to configure god

From:
Tute
Date:
2010-10-20 @ 13:26
The problem is I have no idea how.

On 10/20/2010 10:21 AM, Philippe Lafoucrière wrote:
> On Wed, Oct 20, 2010 at 3:13 PM, Tute<tute.unique@gmail.com>  wrote:
>    
>> I want to do two things in god/resque. Add more workers. Maybe ten and
>> maybe the posibility of allow more workers active depending in the CPU
>> consume in that moment.
>>      
> Ok !
>    

Re: [resque] resque around perform ensure block and shutdown

From:
Karl Baum
Date:
2010-10-19 @ 15:38
That makes sense.  If a default kill is done with a TERM, are the 
currently running jobs lost forever?

Thanks!
On Oct 19, 2010, at 11:27 AM, Adam Tucker wrote:

> Check out lib/resque/worker.rb for the signals it handles:
> 
> From the comments:
> 
>     # TERM: Shutdown immediately, stop processing jobs.
>     #  INT: Shutdown immediately, stop processing jobs.
>     # QUIT: Shutdown after the current job has finished processing.
>     # USR1: Kill the forked child immediately, continue processing jobs.
>     # USR2: Don't process any new jobs
>     # CONT: Start processing jobs again after a USR2
> 
> so QUIT is the most graceful way to shutdown a worker.
> 
> -Adam
> 
> On Tue, Oct 19, 2010 at 11:18 AM, Karl Baum <karl.baum@gmail.com> wrote:
> So is kill -SIGQUIT the correct way to shutdown resque?
> 
> Thanks!
> 
> 
> On Oct 18, 2010, at 10:08 PM, Luke Antins wrote:
> 
> > Send SIGQUIT shutdown after the current job has finished =)
> >
> > --
> > Luke Antins
> > http://lividpenguin.com
> >
> >
> > Karl Baum wrote:
> >> As i wrote in a previous thread, i am writing a module that caps the
> >> number of concurrent resque jobs processing a specific user.
> >>
> >> http://gist.github.com/631024
> >>
> >> Seems to work pretty well, but the only problem is when i kill the
> >> resque processes, the ensure block, which decrements the running count, is
> >> not run.  As a result, when i start the resque jobs up again, the number
> >> of jobs running for that user is incorrectly set to 5 and that user's jobs
> >> will forever be blocked.
> >>
> >> What is the proper way to handle cleanup of redis on a normal kill SIGTERM?
> >>
> >> thx
> >>
> >> -karl
> >>
> 
> 

Re: [resque] resque around perform ensure block and shutdown

From:
Luke Antins
Date:
2010-10-19 @ 20:10
You're going to lose any jobs currently being processed.
Unless you have some way to re-queue jobs that have not been performed (if you 
care about that), then yes, the jobs will be lost forever.
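
[One way to avoid losing them is to push a job back onto its queue when it
dies mid-run. Here is a minimal in-memory simulation of that idea; the
queue contents and the retry cap of 3 are made up for illustration:]

```ruby
# A worker loop that re-queues a job whose perform raises (standing in
# for a killed child), with a retry cap so a poison job cannot loop forever.
queue    = [[:resize, 1], [:resize, 2]]
done     = []
attempts = Hash.new(0)

until queue.empty?
  job = queue.shift
  attempts[job] += 1
  begin
    # Simulate job 2 dying on its first run.
    raise 'simulated dirty exit' if job == [:resize, 2] && attempts[job] == 1
    done << job
  rescue RuntimeError
    queue.push(job) if attempts[job] < 3   # re-queue instead of dropping
  end
end
```

[With Resque itself you would hang this off the failure machinery, e.g. an
on_failure hook that enqueues the job again, but beware duplicate work: a
job killed mid-run may have partially completed before it died.]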

-- 
Luke Antins
http://lividpenguin.com


Karl Baum wrote:
> That makes sense.  If a default kill is done with a TERM, are the currently 
> running jobs lost forever?
>
> Thanks!
> On Oct 19, 2010, at 11:27 AM, Adam Tucker wrote:
>
>> Check out lib/resque/worker.rb for the signals it handles:
>>
>> From the comments:
>>
>>     # TERM: Shutdown immediately, stop processing jobs.
>>     #  INT: Shutdown immediately, stop processing jobs.
>>     # QUIT: Shutdown after the current job has finished processing.
>>     # USR1: Kill the forked child immediately, continue processing jobs.
>>     # USR2: Don't process any new jobs
>>     # CONT: Start processing jobs again after a USR2
>>
>> so QUIT is the most graceful way to shutdown a worker.
>>
>> -Adam
>>
>> On Tue, Oct 19, 2010 at 11:18 AM, Karl Baum <karl.baum@gmail.com 
>> <mailto:karl.baum@gmail.com>> wrote:
>>
>>     So is kill -SIGQUIT the correct way to shutdown resque?
>>
>>     Thanks!
>>
>>
>>     On Oct 18, 2010, at 10:08 PM, Luke Antins wrote:
>>
>>     > Send SIGQUIT shutdown after the current job has finished =)
>>     >
>>     > --
>>     > Luke Antins
>>     > http://lividpenguin.com <http://lividpenguin.com/>
>>     >
>>     >
>>     > Karl Baum wrote:
>>     >> As i wrote in a previous thread, i am writing a module that caps the
>>     >> number of concurrent resque jobs processing a specific user.
>>     >>
>>     >> http://gist.github.com/631024
>>     >>
>>     >> Seems to work pretty well, but the only problem is when i kill the
>>     >> resque processes, the ensure block, which decrements the running count,
>>     >> is not run.  As a result, when i start the resque jobs up again, the
>>     >> number of jobs running for that user is incorrectly set to 5 and that
>>     >> user's jobs will forever be blocked.
>>     >>
>>     >> What is the proper way to handle cleanup of redis on a normal kill
>>     >> SIGTERM?
>>     >>
>>     >> thx
>>     >>
>>     >> -karl
>>     >>
>>
>>
>

Re: [resque] resque around perform ensure block and shutdown

From:
Karl Baum
Date:
2010-10-19 @ 20:31
Sometimes I see jobs on the error queue with Resque::DirtyExit failures, 
though... aren't those the jobs that were running when we executed the 
kill?


On Oct 19, 2010, at 4:10 PM, Luke Antins wrote:

> Your going to loose any jobs currently being processed.
> Unless you have some way to re-queue jobs that have not been performed (if you 
> care about that) then yes the jobs will be lost forever.
> 
> -- 
> Luke Antins
> http://lividpenguin.com
> 
> 
> Karl Baum wrote:
>> That makes sense.  If a default kill is done with a TERM, are the currently 
>> running jobs lost forever?
>> 
>> Thanks!
>> On Oct 19, 2010, at 11:27 AM, Adam Tucker wrote:
>> 
>>> Check out lib/resque/worker.rb for the signals it handles:
>>> 
>>> From the comments:
>>> 
>>>    # TERM: Shutdown immediately, stop processing jobs.
>>>    #  INT: Shutdown immediately, stop processing jobs.
>>>    # QUIT: Shutdown after the current job has finished processing.
>>>    # USR1: Kill the forked child immediately, continue processing jobs.
>>>    # USR2: Don't process any new jobs
>>>    # CONT: Start processing jobs again after a USR2
>>> 
>>> so QUIT is the most graceful way to shutdown a worker.
>>> 
>>> -Adam
>>> 
>>> On Tue, Oct 19, 2010 at 11:18 AM, Karl Baum <karl.baum@gmail.com 
>>> <mailto:karl.baum@gmail.com>> wrote:
>>> 
>>>    So is kill -SIGQUIT the correct way to shutdown resque?
>>> 
>>>    Thanks!
>>> 
>>> 
>>>    On Oct 18, 2010, at 10:08 PM, Luke Antins wrote:
>>> 
>>>> Send SIGQUIT shutdown after the current job has finished =)
>>>> 
>>>> --
>>>> Luke Antins
>>>> http://lividpenguin.com <http://lividpenguin.com/>
>>>> 
>>>> 
>>>> Karl Baum wrote:
>>>>> As i wrote in a previous thread, i am writing a module that caps the
>>>>> number of concurrent resque jobs processing a specific user.
>>>>> 
>>>>> http://gist.github.com/631024
>>>>> 
>>>>> Seems to work pretty well, but the only problem is when i kill the
>>>>> resque processes, the ensure block, which decrements the running count,
>>>>> is not run.  As a result, when i start the resque jobs up again, the
>>>>> number of jobs running for that user is incorrectly set to 5 and that
>>>>> user's jobs will forever be blocked.
>>>>> 
>>>>> What is the proper way to handle cleanup of redis on a normal kill
>>>>> SIGTERM?
>>>>> 
>>>>> thx
>>>>> 
>>>>> -karl
>>>>> 
>>> 
>>> 
>>