mongrel2 server cluster zmq patterns

From:
Maxime
Date:
2013-09-07 @ 13:07
Hello, I've tried reading as much as I can about this but couldn't find
quite the answer to my question in the docs.

My plan is to use multiple mongrel2 servers pointing to a cluster of
handlers that I've written in Python (they are very simple endpoints).
The handlers forward the requests into a cloud of Python actors for
processing via zmq PUSH sockets (so it's a pipeline, not req/rep), and
the response eventually needs to be sent back to the mongrel2 server
that originated the request. As messages transit through the cloud they
keep the original envelope parameters needed to route the response back
to the correct mongrel2 server.
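
To make the shape of the pipeline concrete, here is a minimal pyzmq
sketch of one of those handler endpoints. The addresses and the
multipart framing of the forwarded message are placeholders I made up
for illustration, not anything Mongrel2 mandates.

import zmq

ctx = zmq.Context.instance()

# PULL requests from a Mongrel2 send_spec (Mongrel2 binds its PUSH side,
# so the handler connects).
from_mongrel2 = ctx.socket(zmq.PULL)
from_mongrel2.connect("tcp://127.0.0.1:9997")   # placeholder send_spec

# PUSH work into the actor cloud (pipeline, not req/rep).
to_actors = ctx.socket(zmq.PUSH)
to_actors.bind("tcp://*:5557")                  # placeholder actor endpoint

while True:
    msg = from_mongrel2.recv()
    # Mongrel2 request framing: "UUID CONN_ID PATH SIZE:HEADERS,SIZE:BODY,"
    sender_uuid, conn_id, rest = msg.split(b" ", 2)
    # Keep the envelope (server UUID + connection id) with the payload so
    # a downstream actor can address the response to the right server.
    to_actors.send_multipart([sender_uuid, conn_id, rest])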

The problem I am seeing is that the Mongrel2 servers' response endpoint
is a SUB socket (that's fine) in BIND mode. If it were in CONNECT mode I
would simply point all the connection strings at an XSUB/XPUB device to
do the many-to-many PUB/SUB between the Mongrel2 servers and handlers.
The only workaround I could think of, but could not find an example of
anywhere, is to do multiple "connect_out" calls on the zmq device, one
for each Mongrel2 server. All the device examples I see do a single
bind_in and bind_out call (or connect_), never more than one.
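
In code, what I couldn't find an example of anywhere is simply this (a
pyzmq sketch; the addresses are made up):

import zmq

ctx = zmq.Context.instance()

# One publishing socket, many connects: one per Mongrel2 recv_spec
# (each server's SUB binds, so this side has to be the one connecting).
to_mongrel2 = ctx.socket(zmq.PUB)
for recv_spec in ("tcp://m2-a:9996", "tcp://m2-b:9996", "tcp://m2-c:9996"):
    to_mongrel2.connect(recv_spec)

# A response in Mongrel2's reply format; it carries the originating
# server's UUID and connection id up front, as a handler's reply would.
# (In a long-lived device the socket stays up; a one-off send right
# after connect can be lost to PUB/SUB slow-joining.)
to_mongrel2.send(b"SENDER_UUID 2:42, HTTP/1.1 200 OK\r\n\r\n")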

Maybe I am missing something, maybe it's really a zmq question, but I'd
like to see some sample backend architectures used with Mongrel2
handlers.

Any comments or thoughts?

Thanks

Re: [mongrel2] mongrel2 server cluster zmq patterns

From:
Brian McQueen
Date:
2013-09-07 @ 14:06
That's an interesting design. I think it ought to work, too, by having
the Python actors publish to the SUB queues on the originating mongrel2
host, using the send_ident provided in the originating mongrel2 handler
spec for the request. They'd also have to talk to that originating
host's queue using the mongrel2 protocol, as if they were a mongrel2
handler. The protocol is very simple, so that should be easy to set up.
I don't see how the BIND mode would mess it up, but I haven't studied
that.
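
Roughly, an actor would do something like this (a pyzmq sketch with a
made-up address; the framing follows the documented handler response
format, "UUID SIZE:CONN_ID, BODY"):

import zmq

ctx = zmq.Context.instance()

# PUB connecting to the originating server's bound SUB (its recv_spec).
# In a real actor this socket is long-lived, not opened per response.
to_origin = ctx.socket(zmq.PUB)
to_origin.connect("tcp://m2-a:9996")             # made-up recv_spec

def send_response(sender_uuid: bytes, conn_id: bytes, body: bytes) -> None:
    """Frame an HTTP response the way a Mongrel2 handler would."""
    header = b"%s %d:%s," % (sender_uuid, len(conn_id), conn_id)
    to_origin.send(header + b" " + body)

# The uuid and connection id come from the request envelope that was
# carried through the actor pipeline.
send_response(b"34f9ceee-cd52-4b7f-b197-88bf2f0ec378", b"4",
              b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nhi")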



Re: mongrel2 server cluster zmq patterns

From:
Maxime
Date:
2013-09-07 @ 14:14
Right, that design would work fine with one or a few mongrel2 servers;
it's when I get to 20 servers that it becomes unwieldy. Ideally I'd like
my actors to publish the response to a single endpoint (like an
xpub/xsub device) so that I can centralize the management of the list of
mongrel2 servers (instead of every response actor having to know which
mongrel2 servers are out there). And that's where the bind becomes an
issue: say I do have an xsub/xpub device that all mongrel2 servers and
all response actors know about, how do I get my device to talk to the
servers if all the servers do their own bind (rather than connecting to
the device)?

Maybe I should draw it out to help visualize. :-)
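
In the meantime, here is roughly the device I have in mind, as a pyzmq
sketch (addresses made up): the actors all publish to one well-known
endpoint, and the device's outbound side connects to every server's
bound SUB.

import zmq

ctx = zmq.Context.instance()

# The single well-known endpoint: every response actor's PUB connects here.
frontend = ctx.socket(zmq.XSUB)
frontend.bind("tcp://*:6000")

# Outbound side: one XPUB socket connecting to each Mongrel2 server's
# bound SUB (its recv_spec) -- the "multiple connect_out" in question.
backend = ctx.socket(zmq.XPUB)
for recv_spec in ("tcp://m2-a:9996", "tcp://m2-b:9996", "tcp://m2-c:9996"):
    backend.connect(recv_spec)

# Forward responses downstream and subscriptions upstream, forever.
zmq.proxy(frontend, backend)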


Re: [mongrel2] Re: mongrel2 server cluster zmq patterns

From:
Pat Collins
Date:
2013-09-07 @ 18:43
A danger of relying on a single endpoint is that it's a single point of
failure. If you could rely on configuration management to keep the list
of mongrel2 servers synced across all actors, that could alleviate the
concern.

--
Patrick Collins


Re: mongrel2 server cluster zmq patterns

From:
Maxime
Date:
2013-09-07 @ 19:17
That's a good point, but it felt like the xpub/xsub solution was simply
an extension of the philosophy behind mongrel2's use of SUB to avoid
configuration and connection sprawl, extending "add a handler with no
configuration change" to "add a server with no configuration change".

Seems like the ability to have a mongrel2 server's recv endpoint do a
connect would be a nice feature to have in the future.

Thanks for your answers.


Re: [mongrel2] Re: mongrel2 server cluster zmq patterns

From:
Pat Collins
Date:
2013-09-07 @ 19:22
Let us know what you end up doing!

--
Patrick Collins

Re: [mongrel2] Re: mongrel2 server cluster zmq patterns

From:
Justin Karneges
Date:
2013-09-08 @ 17:28
You can do multiple connects with zmq. That's how all the load balancing /
routing magic comes into play.

FWIW, bind is used for whichever side is considered the known/stable
entity. Mongrel2's designers figured it made more sense to have an
arbitrary number of handler entities connected to a known Mongrel2
instance, rather than the other way around: an arbitrary number of
Mongrel2 instances connected to a known handler. Note that you can
always write little zmq adapter modules if you ever want to invert these
kinds of things. For example, if you did want a Mongrel2 that connects
out, you could make a small program that does connects on both sides (to
Mongrel2 and to your handler).
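
For instance, such an adapter can be very small (a pyzmq sketch with
made-up endpoints): it SUB-connects to wherever the responses come from
and PUB-connects into the local Mongrel2's bound SUB, so from Mongrel2's
point of view it is just another handler.

import zmq

ctx = zmq.Context.instance()

# Connect to the central response device (or handler) on one side...
upstream = ctx.socket(zmq.SUB)
upstream.connect("tcp://response-device:6001")   # made-up endpoint
upstream.setsockopt(zmq.SUBSCRIBE, b"")          # or a UUID prefix filter

# ...and to the local Mongrel2 server's bound SUB (its recv_spec) on the other.
downstream = ctx.socket(zmq.PUB)
downstream.connect("tcp://127.0.0.1:9996")       # made-up recv_spec

# Relay responses verbatim; Mongrel2 just sees a handler publishing to it.
while True:
    downstream.send(upstream.recv())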

I also agree that being able to configure whether a program binds or
connects can be handy. My Zurl program (for outbound HTTP) does this. By
default it binds like Mongrel2, but with a config option you can make it
connect out instead, if a user considers that model more suitable.
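
In code the option amounts to little more than this kind of switch (a
hypothetical helper, not Zurl's or Mongrel2's actual API):

import zmq

def open_response_socket(ctx: zmq.Context, spec: str, mode: str) -> zmq.Socket:
    """Open the SUB side either bound (Mongrel2-style) or connected."""
    sock = ctx.socket(zmq.SUB)
    sock.setsockopt(zmq.SUBSCRIBE, b"")
    if mode == "bind":        # the default: this side is the stable entity
        sock.bind(spec)
    else:                     # "connect" mode, for the inverted topology
        sock.connect(spec)
    return sock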
