librelist archives

Scaling Resque: multiple Redis servers?

From:
Jonathan Hyman
Date:
2013-06-17 @ 21:07
Hey all,

We've noticed spikes on our Redis server during peak load that cause
Redis slowness in our app, and we're working out how to scale out our
Resque fleet, which generates the bulk of our Redis traffic. I see that
there's an old project called resque_redis_composite
(https://github.com/redsquirrel/resque_redis_composite), but even if it
still works well, it fans out per queue, which seems suboptimal.

Has anyone on this list had to grow Resque to multiple Redis servers? What
sorts of things have you done? Any tips would be appreciated.

Thanks,
Jon

Re: [resque] Scaling Resque: multiple Redis servers?

From:
Jason Amster | BeenVerified
Date:
2013-06-17 @ 21:34
I would try redis_failover (https://github.com/ryanlecompte/redis_failover).
We just did an implementation, but it didn't work for us due to our
environment and its dependency on ZooKeeper, which kept causing
segfaults; if you are running the right rubies, though, you should be
fine. In our case, we just put Redis on a beefier box and stood up Redis
Sentinel in case anything went wrong, to fail over to the slave.


-- 
================================
Jason Amster | BeenVerified, Inc.
Chief Technology Officer
307 5th Avenue, 16th Floor | New York, New York 10016
jamster@beenverified.com | www.beenverified.com

Re: [resque] Scaling Resque: multiple Redis servers?

From:
Jonathan Hyman
Date:
2013-06-17 @ 21:41
Doesn't that just provide master-slave failover? My issue, from what I can
glean, is that so much network traffic is being pushed to/from my Redis box
during bursty traffic that the network buffer causes lag in my application
-- that is, Redis operations are actually taking longer to complete. I have
a separate solution for failover (Sentinel), I just need something to
distribute the load.

Re: [resque] Scaling Resque: multiple Redis servers?

From:
Eoin Coffey
Date:
2013-06-17 @ 21:51
I don't think it's going to be possible to distribute the load.

All of the workers are essentially sitting in a loop trying to RPOP off of
their queues, relying on the fact that Redis will make the atomic,
consistent decision of what worker gets what job.

I don't think you can distribute N worker RPOPs over M redis instances
(what happens if two redis servers give two diff workers the same job?,
etc).

Just my two cents and of course would love to be proved wrong :-)

-Eoin
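[The loop Eoin describes depends on the pop being atomic: each job is
handed to exactly one worker, a guarantee a single Redis instance gives
but two independent instances cannot coordinate between them. A toy
simulation in plain Ruby (no Redis; `Thread::Queue#pop` standing in for
`RPOP`) shows the exactly-once property the scheme relies on:]

```ruby
# Queue#pop is atomic, like Redis's RPOP on a single server:
# concurrent workers can never receive the same item twice.
queue = Queue.new
100.times { |i| queue << i }

seen = Queue.new # thread-safe collector for what each worker popped
workers = 4.times.map do
  Thread.new do
    loop do
      job = queue.pop(true) rescue break # non-blocking pop; stop when empty
      seen << job
    end
  end
end
workers.each(&:join)

popped = []
popped << seen.pop until seen.empty?
# Every job was delivered to exactly one worker.
raise "duplicate or lost delivery!" unless popped.sort == (0...100).to_a
puts "#{popped.size} jobs, each delivered exactly once"
```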

Re: [resque] Scaling Resque: multiple Redis servers?

From:
Jonathan Hyman
Date:
2013-06-17 @ 21:57
The ideal solution is probably something where Resque.enqueue round-robins
or randomly enqueues a job to a different Redis server, and workers also
alternate checking on each of the servers. I don't know if anyone has
written something like that, though. But I think the key is distributing
the enqueuing; worst case, I can have a worker fleet per Redis server.
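[A minimal sketch of that round-robin idea. All names here are
hypothetical rather than Resque API; anything responding to `rpush`
works, so the servers below would be `Redis.new(...)` clients in real
use:]

```ruby
require "json"

# Cycles through a ring of Redis connections so enqueues spread evenly.
class RedisRing
  def initialize(connections)
    @connections = connections
    @index = 0
    @mutex = Mutex.new # enqueues may come from multiple threads
  end

  def next_connection
    @mutex.synchronize do
      conn = @connections[@index % @connections.size]
      @index += 1
      conn
    end
  end
end

# Push a Resque-style payload onto the next server's queue list.
def distributed_enqueue(ring, queue, job_class, *args)
  payload = JSON.dump("class" => job_class.to_s, "args" => args)
  ring.next_connection.rpush("queue:#{queue}", payload)
end
```

[Workers would then either be pinned to one server each or poll the same
ring in turn; either way, each individual queue key still lives whole on
a single server, so the atomic-pop guarantee Eoin mentions is preserved
per server.]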

Re: [resque] Scaling Resque: multiple Redis servers?

From:
Eoin Coffey
Date:
2013-06-17 @ 22:03
Exactly right, Jonathan. That could be an interesting feature for Resque,
but I don't know how people would feel about the complexity tradeoff.

I'd be interested in hearing your implementation thoughts, and details
around worker-fleet-per-redis-instance and if there is anything Resque
could do to make that a little less tedious.

I'm guessing you'd do your own round-robin enqueuing logic, and just "hard
code" the specific redis server into each "fleet"?


Re: [resque] Scaling Resque: multiple Redis servers?

From:
Jonathan Hyman
Date:
2013-06-17 @ 22:16
> I'm guessing you'd do your own round-robin enqueuing logic, and just
"hard code" the specific redis server into each "fleet"?

Yeah, I was thinking of doing something like that. Granted, I haven't put
a lot of thought into it yet. My default plan is to create a Redis
connection factory, abstract my Resque.enqueue calls everywhere into some
class that handles enqueuing jobs, and in there handle rpushing to the
right Redis key. Then I'd use some configuration to point my rake
resque:work workers at the correct server. Since my Resque workers
already work off configuration files (I currently run 7 workers per
server, and my Chef recipe generates a separate config and monit file for
each one), I could do that pretty easily.

My first step is likely going to just be to separate out my Resque Redis
use from my non-Resque Redis use by splitting them into 2 different
servers, so it may be a few weeks before I attempt something like this.
I'll let you all know how it goes, though.

Jon
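[That plan might be sketched roughly like this. The class names, config
shape, and `RESQUE_SHARD` variable are all hypothetical; the only real
API assumed is that each worker process can be pointed at one server,
e.g. via `Resque.redis =` at boot:]

```ruby
require "json"

# Hypothetical connection factory: maps a logical shard name from config
# to a client. The builder block is injectable so this sketch runs
# without a live server; in production it would be Redis.new(url: url).
class ConnectionFactory
  def initialize(config, &builder)
    @config  = config   # e.g. { "shard_a" => "redis://10.0.0.5:6379" }
    @builder = builder
    @clients = {}
  end

  def for(name)
    @clients[name] ||= @builder.call(@config.fetch(name))
  end
end

# Every Resque.enqueue call site goes through one class like this,
# which round-robins jobs across the configured shards.
class JobEnqueuer
  def initialize(factory, shard_names)
    @factory = factory
    @shards  = shard_names
    @i       = 0
  end

  def enqueue(queue, job_class, *args)
    shard = @shards[@i % @shards.size]
    @i += 1
    payload = JSON.dump("class" => job_class.to_s, "args" => args)
    @factory.for(shard).rpush("queue:#{queue}", payload)
    shard # report which shard got the job, handy for logging
  end
end
```

[Each worker box's Chef-generated config would then name a single shard,
so worker startup does something like
`Resque.redis = factory.for(ENV["RESQUE_SHARD"])` before
`rake resque:work` runs.]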

Re: [resque] Scaling Resque: multiple Redis servers?

From:
Eoin Coffey
Date:
2013-06-17 @ 22:20
Awesome, good luck!

Let us know if you need help, or otherwise want to share cool stuff about
it :)


Re: [resque] Scaling Resque: multiple Redis servers?

From:
Ben Golden
Date:
2013-06-17 @ 21:40
We are using http://redis.io/topics/sentinel to scale our Redis
instances, though we haven't hit enough volume yet for me to give you any
insight into how robust it is. It has worked great so far, but our loads
have been very light.
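[For reference, the Sentinel setup Ben describes needs only a few
directives per sentinel process; a minimal sentinel.conf might look like
this, where the master name, IP, and timeouts are illustrative:]

```
# Monitor one master, named "resque-master", with a quorum of 2 sentinels
sentinel monitor resque-master 10.0.0.5 6379 2
# Consider it down after 5s of no reply; allow 60s for a failover
sentinel down-after-milliseconds resque-master 5000
sentinel failover-timeout resque-master 60000
```

[Note that Sentinel provides automatic failover to a slave, not load
distribution: it addresses availability rather than the throughput
problem Jon raises.]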

-- 
Ben Golden
Software Engineer
SendGrid Inc.