SUB_ADDR of Task

From:
William Martin
Date:
2011-10-17 @ 09:42
Hi Loïc,

Do I need to set a different address for each task?
All my tasks are running on the same PC.

Thanks
William MARTIN

Re: [photon.users] SUB_ADDR of Task

From:
Loic d'Anterroches
Date:
2011-10-17 @ 09:45
Hello,

> Do I need to set a different address for each task?
> All my tasks are running on the same PC.

Yes, each address is specific to a given task.
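
To illustrate the idea (this is just the underlying ZeroMQ behaviour, sketched
with the php-zmq extension; the endpoint names are made up, they are not
Photon's real SUB_ADDR values): each task binds its own endpoint, so two tasks
on the same PC still need two distinct addresses:

    <?php
    // Each task owns one endpoint; the paths below are only examples.
    $ctx = new ZMQContext();

    // Task A binds its own endpoint on this machine.
    $taskA = new ZMQSocket($ctx, ZMQ::SOCKET_PULL);
    $taskA->bind('ipc:///tmp/photon-task-a.sock');

    // Task B must use a *different* endpoint, even on the same PC:
    // binding the same address a second time typically fails with
    // "Address already in use" (ZMQSocketException).
    $taskB = new ZMQSocket($ctx, ZMQ::SOCKET_PULL);
    $taskB->bind('ipc:///tmp/photon-task-b.sock');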

loïc

Suggestions

From:
Ulf Byskov
Date:
2011-10-18 @ 04:40
Hi,

I haven't done much with Photon yet, but so far I like what the
framework enables me to do.

For you developers to have more success (I can't measure how much you have
at this time), I believe you could do a few things.

1. Remove all the template/route (URL)/DB stuff from the core and
promote Photon as a pure PHP daemon/server. It is easy to start a PHP
script and connect ZMQ to Mongrel2, but managing the process (PHP
crashes/memory leaks/restart) is much more complicated. What I am saying
is that your approach to the core is really great and solves a host of
problems you would normally have to implement from scratch. I really
think Photon could become the 'Tomcat' or 'JBoss' of PHP, especially
since you already seem to have a head start on everyone else.

2. Implement an addon/extension system to facilitate templating,
routing, databases and other application-specific purposes. People are
always complaining about the choice of template system and databases, so
if you can easily choose something besides the included stuff, it would
be more attractive to some. I can see that I could easily use my own
templating engine, but plugging it into Photon in its current state
somehow feels like a hack as opposed to a proper way to do things.

3. Focus a little more on promotion and documentation. IMHO this project
has great potential but seems to need some more attention (I'll do what
little I can). In particular, I miss examples about the best way to work
with media (images, uploads, etc.), JavaScript (jQuery) and running
multiple instances of Photon (and communicating between them). When I have
done some more coding with the framework I may be able to help out a
little in the documentation area.

I hope you don't find my ramblings offensive or condescending. I only
wish the best for this project.
And of course it is not my project after all, so if you don't like any
of my suggestions...no problem.

Regards
Ulf

Re: [photon.users] Suggestions

From:
Loic d'Anterroches
Date:
2011-10-18 @ 12:49
Hello,

Thanks a lot. I really care about end-user feedback and I can tell
you, I read your email with a lot of attention.

> I haven't done much with Photon yet, but so far I like what the
> framework enables me to do.

This is already great. What would be even better is if you told us what
you like, because those are most likely the strong points of Photon and
knowing them will help us promote it better.

> For you developers to have more success (I can't measure how much you have
> at this time), I believe you could do a few things.

Mongrel2 is still pretty new, so the success is a bit hard to evaluate.
I think it will come when a couple of really innovative applications
are developed on top of Mongrel2. That will bring more interest to
it and thus to the frameworks using it.

> 1. Remove all the template/route (URL)/DB stuff from the core and
> promote Photon as a pure PHP daemon/server. It is easy to start a PHP
> script and connect ZMQ to Mongrel2, but managing the process (PHP
> crashes/memory leaks/restart) is much more complicated. What I am saying
> is that your approach to the core is really great and solves a host of
> problems you would normally have to implement from scratch. I really
> think Photon could become the 'Tomcat' or 'JBoss' of PHP, especially
> since you already seem to have a head start on everyone else.

This is a rabbit hole. Managing the processes is extremely hard and
requires a lot of low level functions which are not really available
from PHP (even with the proc extension).

Take a look at this code, just to correctly manage the processes:
http://projects.ceondo.com/p/diprocd/source/tree/master/lib/utils/process.py

It is insanely hard. In fact, in the current develop branch, all of this
has been removed. But let me explain the reasons a bit more so you can
understand them. I will welcome contradictions and remarks.

1. Correct management of the processes is hard, see above.

2. Mongrel2 is distributed by virtue of ZeroMQ.

You start to benefit from Mongrel2/ZeroMQ when you have Photon processes
handling the Mongrel2 requests all over your VMs, for example on 3 AWS
micro instances. This also means that you can have a task on VM1 and
another on VM2. In other words, the "host" is no longer the "unit of
work". You stop reasoning in terms of "a Photon server answering behind
Mongrel2" and start thinking of a flock of processes answering behind
Mongrel2.

Very nice: http://www.google.com/search?tbm=isch&q=flock+birds

So you have:

a. Mongrel2
b. processes with ZeroMQ end points

You no longer care about the hosts; they are like Heroku's workers.
This is why the diprocd project exists:

http://projects.ceondo.com/p/diprocd/

Its purpose is to make the management of processes across VMs (and even
on a single VM) easy. diprocd is written in Python to take advantage of
an extremely robust library created by some Googlers, so as not to
reinvent the wheel. The status is not yet good enough (especially the
documentation, which is only on my disk) to announce it officially, but
Photon will provide all the input to easily control your processes with
diprocd.

This is also very important because when you start to use
Photon/Mongrel2 you very quickly figure out that sometimes a small
Python or C script (or whatever) is better for implementing a service.
But your Photon project may depend on it, so having a homogeneous way to
manage all these processes is important.

> 2. Implement an addon/extension system to facilitate templating,
> routing, databases and other application-specific purposes. People are
> always complaining about the choice of template system and databases, so
> if you can easily choose something besides the included stuff, it would
> be more attractive to some. I can see that I could easily use my own
> templating engine, but plugging it into Photon in its current state
> somehow feels like a hack as opposed to a proper way to do things.

For the database: if you take a look at the code, especially the current
develop branch, it is at the moment totally database independent.
Everything is done through implicit interfaces. In fact, if you consider
what the core of Photon is, it is just:

A system to answer a Mongrel2 request (including parsing the payload),
route it to the correct view and send back the possible answers.

Nothing more. So templating etc. are shipped with Photon but not used in
the core. You can use the template engine you want and this is not a
hack. The bundled template engine is there because it is very, very
compact (just two files and not that many lines of code).
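
As a rough sketch of what I mean by "not a hack" (the Response class and
my_render() below are placeholders for whatever response class and rendering
engine you actually use, this is not Photon's literal API): a view simply
builds its body with your own engine and hands back a response:

    <?php
    // Sketch only: my_render() stands in for Twig, Mustache, plain PHP
    // includes... anything that turns a template + context into a string.
    function my_render($template, array $context)
    {
        extract($context);
        ob_start();
        include __DIR__ . '/templates/' . $template;
        return ob_get_clean();
    }

    // A view receives the request and the route match, like index($request, $match).
    function hello_view($request, $match)
    {
        $body = my_render('hello.php', array('name' => $match[1]));
        return new Response($body, 'text/html'); // placeholder response class
    }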

Now, the real core of the remark: Extensions.

Yes, I will do it. In fact, I have the getparticles.org domain to allow
this. The goal is to be able to easily publish components which would be
packaged as .phar to be easily downloaded, extracted into your current
project and used right away. So yes, this is planned.

> 3. Focus a little more on promotion and documentation. IMHO this project
> has great potential but seems to need some more attention (I'll do what
> little I can). In particular, I miss examples about the best way to work
> with media (images, uploads, etc.), JavaScript (jQuery) and running
> multiple instances of Photon (and communicating between them). When I have
> done some more coding with the framework I may be able to help out a
> little in the documentation area.

Totally right. Also, the website design could be improved. I will go to
a relatively big PHP conference in France at the end of November where I
will promote Mongrel2/Photon.

> I hope you don't find my ramblings offensive or condescending. I only
> wish the best for this project.
> And of course it is not my project after all, so if you don't like any
> of my suggestions...no problem.

Thanks a lot for your ramblings, they are very welcome!

I am here to help and make Photon better, so do not hesitate to push and
ask. Sometimes the answer may come only after a couple of days, but I
try to answer all the questions.

loïc

Re: [photon.users] Suggestions

From:
Ulf Byskov
Date:
2011-10-21 @ 05:40
Hi,

Just one last question for this thread.

> - one session leader not answering any Mongrel2 requests;
> - 3 handlers answering Mongrel2 requests.

The first thing that comes to my mind is: how do you handle persistence?
Each of the 3 handlers would run as its own process with its own memory
space (I assume you are not using anything fancy like shared memory), so
how do you share state between these processes? Do you send data back and
forth between the individual processes and the process leader, or by some
other means?


Then about the router:

> Just tell me what you need, this does not seem complicated to do anyway
> and, especially, one can do it efficiently from what I can see.

Your suggestion about using a callable is good.
What about something like this:

  $http = function($req, $resp) {
      // Do something with the request/response here.
  };

  return array($http);

Now you would just have to check whether the array item is another array
or a function, and if it is a function, just call it instead of using a
view to generate the response. Also, the callback function would not be
among the 'installed_apps', but would be able to coexist with them.

A callback like this could also be used as a form of custom middleware:
it would allow us to investigate the request before sending it on to the
normal URL router and to add things to the response before it is sent:


    $is_middleware = callback($req, $resp)


Example:

== urls.php ==

$myLogger = function($req, $resp) {
  static $logDetails;

  // When getting the request
  if (is_null($resp)) {
    $logDetails = logTheRequest($req);  // keep the details in the static variable
    return true;  // Tell the server that this callback is middleware
  }

  // When getting the response
  addLogReport($resp, $logDetails); // Modifying the response
  $logDetails = null;
};


$http = array(
    array('regex' => '#^/hello#',
          'sub'   => include __DIR__ . '/apps/helloworld/urls.php'));


return array($myLogger, $http);


== server.php ==

When getting a request:

    // All of this is probably running in a loop

    $handlers = include __DIR__ . '/urls.php';  // the array returned from urls.php

    $handler = array_shift($handlers);

    if ( ! is_array($handler)) {  // Assume we have a callback if we don't have an array
        $is_middleware = $handler($req, null);

        if ($is_middleware) {
            saveCallbackToStack($handler);
        }
    }

    // If the next item is an array...
    $handler = array_shift($handlers);

    if (is_array($handler)) {
        $resp = getResponseFromView($handler, $req);  // I haven't investigated how you use the dispatcher
        $callback = getCallBackFromStack();
        $newResponse = $callback(null, $resp);
    }


My code probably doesn't fit into your implementation, but it should be
sufficient to describe what I was thinking about.

In any case, I like a middleware approach because I can easily remove such
items before putting a site into production.


Regards

Ulf

Re: [photon.users] Suggestions

From:
Loic d'Anterroches
Date:
2011-10-21 @ 06:31
Hello,

> Just one last question for this thread.
> 
>> - one session leader not answering any Mongrel2 requests;
>> - 3 handlers answering Mongrel2 requests.
> 
> The first thing that comes to my mind is: how do you handle persistence?
> Each of the 3 handlers would run as its own process with its own memory
> space (I assume you are not using anything fancy like shared memory), so
> how do you share state between these processes? Do you send data back and
> forth between the individual processes and the process leader, or by some
> other means?

Share nothing. So effectively, nothing is shared, the same way nothing
is shared between your FastCGI processes. But you can create a task
which will store state information. This is what I am doing with the
chat: you have a chat task which basically stores the list of connected
users. A single task can answer 8000 req/s on a single core for simple
requests, so you have room to grow.
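
To make the "task which stores state" idea concrete, here is a hand-rolled
sketch with the php-zmq extension (this is not Photon's real Task API; the
endpoint and message format are made up): one long-lived process owns the
state, and the share-nothing handlers ask it over REQ/REP:

    <?php
    // --- state task: one long-lived process owns the data ---------------
    $ctx   = new ZMQContext();
    $state = new ZMQSocket($ctx, ZMQ::SOCKET_REP);
    $state->bind('ipc:///tmp/chat-state.sock');

    $connected = array(); // e.g. the list of connected chat users

    while (true) {
        $msg = json_decode($state->recv(), true);
        if ($msg['op'] === 'join') {
            $connected[$msg['user']] = time();
        } elseif ($msg['op'] === 'leave') {
            unset($connected[$msg['user']]);
        }
        $state->send(json_encode(array_keys($connected)));
    }

    // --- in any handler process (separate process, separate context) ----
    // $hctx = new ZMQContext();
    // $ask  = new ZMQSocket($hctx, ZMQ::SOCKET_REQ);
    // $ask->connect('ipc:///tmp/chat-state.sock');
    // $ask->send(json_encode(array('op' => 'join', 'user' => 'ulf')));
    // $users = json_decode($ask->recv(), true); // current list of connected users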

> Then about the router:
> 
>> Just tell me what you need, this does not seem complicated to do anyway
>> and, especially, one can do it efficiently from what I can see.
> 
> Your suggestion about using a callable is good.
> What about something like this:
> 
>   $http = function($req, $resp) {
>       // Do something with the request/response here.
>   };
> 
>   return array($http);
> 
> Now you would just have to check whether the array item is another array
> or a function, and if it is a function, just call it instead of using a
> view to generate the response. Also, the callback function would not be
> among the 'installed_apps', but would be able to coexist with them.
> 
> A callback like this could also be used as a form of custom middleware:
> it would allow us to investigate the request before sending it on to the
> normal URL router and to add things to the response before it is sent:
> 
> 
>     $is_middleware = callback($req, $resp)
> 
> 
> Example:
> 
> == urls.php ==
> 
> $myLogger = function($req, $resp) {
>   static $logDetails;
> 
>   // When getting the request
>   if (is_null($resp)) {
>     $logDetails = logTheRequest($req);  // keep the details in the static variable
>     return true;  // Tell the server that this callback is middleware
>   }
> 
>   // When getting the response
>   addLogReport($resp, $logDetails); // Modifying the response
>   $logDetails = null;
> };
> 
> 
> $http = array(
>     array('regex' => '#^/hello#',
>           'sub'   => include __DIR__ . '/apps/helloworld/urls.php'));
> 
> 
> return array($myLogger, $http);
> 
> 
> == server.php ==
> 
> When getting a request:
> 
>     // All of this is probably running in a loop
> 
>     $handlers = include __DIR__ . '/urls.php';  // the array returned from urls.php
> 
>     $handler = array_shift($handlers);
> 
>     if ( ! is_array($handler)) {  // Assume we have a callback if we don't have an array
>         $is_middleware = $handler($req, null);
> 
>         if ($is_middleware) {
>             saveCallbackToStack($handler);
>         }
>     }
> 
>     // If the next item is an array...
>     $handler = array_shift($handlers);
> 
>     if (is_array($handler)) {
>         $resp = getResponseFromView($handler, $req);  // I haven't investigated how you use the dispatcher
>         $callback = getCallBackFromStack();
>         $newResponse = $callback(null, $resp);
>     }
> 
> 
> My code probably doesn't fit into your implementation, but it should be
> sufficient to describe what I was thinking about.
> 
> In any case, I like a middleware approach because I can easily remove such
> items before putting a site into production.

OK, I understand more or less. I am not totally sure it is good to have
both a special callback and normal routing, as it kind of overlaps with
the current middleware approach, but the idea of giving you the ability
to do whatever you want with the request is definitely interesting. For
example, if you want to create a SOAP server where the complete URL tree
is dynamically "loaded" from the state/methods of the underlying objects.

Thanks a lot, I will keep this in mind!
loïc

Re: [photon.users] Suggestions

From:
Ulf Byskov
Date:
2011-10-20 @ 06:28
Hi,

Thanks for taking time to respond to my suggestions.

> This is a rabbit hole

I'm not sure if that means something good or bad !

> Photon will provide all the input to easily control your processes with diprocd.

So how does it currently work? What I am not sure of is what will happen
after I have started Photon with 'hnu server start' and PHP crashes
because of some syntax error. I can see that 4 PHP processes (servers)
and 1 chat server have been started; what mechanism is going to restart
any of these servers after a crash?
Also, I assume (I don't know for sure) that any of the 4 PHP processes
can process incoming Mongrel2 requests. How exactly is the load balanced
between those processes, or is it not balanced at all? You could, of
course, just use the processes as backups, should one or more processes
fail.

I can't find (from the running processes) any running "master" daemon
taking care of this. Mongrel2 is running, of course, but it can't restart
failed PHP processes, although it may be able to choose from a pool of
PHP servers.

I am sorry if I sound a little ignorant, but it is a little hard to dig
this information out of the current documentation. What I am after is to
ensure that, before I invest my efforts in Photon, I can be reasonably
sure that it has fairly high availability (crashing should not make it
unavailable).

> if you take a look at the code, especially the current develop branch, it
> is totally database independent

> So templating etc. are shipped with Photon but not used in the core.

Great. Now about the routing engine: would it be possible to do this so
that I could design my own router, or simply expose the request and
response objects (perhaps with pre-parsed headers and body) to the
urls.php file? From the default apps (when you create the server with
hnu) it seems that I can't access the request (or can I?) before the view
(... index($request, $match)), something that would not be great should I
decide not to use MVC.

> The goal is to be able to easily publish components which would be
> packaged as .phar

Excellent idea. This would be a great way to distribute turn-key
solutions. A .phar file could include SQLite db files, enabling apps that
need storage to be runnable out of the box. If proper DB abstraction is
used, it would then be rather painless to switch to a better DB server
after you have seen that the app is working. If you ever plan on doing
something in this area, you should really take a look at the SQL handler
and ORM of the F3 PHP framework (http://fatfree.sourceforge.net).


Regards

Ulf

Re: [photon.users] Suggestions

From:
Loic d'Anterroches
Date:
2011-10-20 @ 08:09
Hello,

> Thanks for taking time to respond to my suggestions.
> 
>> This is a rabbit hole
> 
> I'm not sure if that means something good or bad !

A rabbit hole is a problem where you start and never see the end. So for
me, not really something I enjoy managing :)

>> Photon will provide all the input to easily control your processes with
>> diprocd.
> 
> So how does it currently work? What I am not sure of is what will happen
> after I have started Photon with 'hnu server start' and PHP crashes
> because of some syntax error. I can see that 4 PHP processes (servers)
> and 1 chat server have been started; what mechanism is going to restart
> any of these servers after a crash?
> Also, I assume (I don't know for sure) that any of the 4 PHP processes
> can process incoming Mongrel2 requests. How exactly is the load balanced
> between those processes, or is it not balanced at all? You could, of
> course, just use the processes as backups, should one or more processes
> fail.
> 
> I can't find (from the running processes) any running "master" daemon
> taking care of this. Mongrel2 is running, of course, but it can't restart
> failed PHP processes, although it may be able to choose from a pool of
> PHP servers.
> 
> I am sorry if I sound a little ignorant, but it is a little hard to dig
> this information out of the current documentation. What I am after is to
> ensure that, before I invest my efforts in Photon, I can be reasonably
> sure that it has fairly high availability (crashing should not make it
> unavailable).

OK, this is a fundamental question, so here is the answer. It requires a
bit of understanding of ZeroMQ, but not that much.

I recommend you first do a simple test: start Mongrel2, access a page of
a Photon application with your browser without starting Photon yet, see
that nothing comes back, then start Photon and see that the answer is
suddenly there. This tells you that if your handler processes are not
available, the client requests are queued and sent to the handlers as
soon as they come up again. It also means you can restart all your Photon
processes without losing a single client connection. This is all managed
by the ZeroMQ inboxes. You do not have to worry about it; this is one
level deeper than where you code, you just have to know it works.

This is in general, for Photon or any kind of ZeroMQ communication.

Now, back to Photon itself: when you run the hnu serve command, at the
moment you have by default:

- one session leader not answering any Mongrel2 requests;
- 3 handlers answering Mongrel2 requests.

The thing is that if one of the 3 processes dies, the session leader
will not restart it. If all the processes die, nobody will take care of
them. Mongrel2 is not in charge of starting/stopping the handler processes.

So, what you need is a robust process manager. There are many of them:
procer (available with Mongrel2) is one, and there is also upstart, etc.
But, going back to Mongrel2 and the handlers, what you have is:

- Mongrel2 with a series of ZeroMQ end points.
- handlers connecting to the ZeroMQ end points.

What is really really important to notice is that you can connect to the
Mongrel2 end points from anywhere as long as the IP:port is accessible.

Basically you have:

+----------+
| Mongrel2 |
+--+--+-+--+
   |  | |
   |  | +----------+
   |  | | handler  |
   |  | +----------+
   |  |
   |  +----------+
   |  | handler2 |
   |  +----------+
   |
   +------------+
   | handler3   |
   +------------+

and you can add many more handlers. Each handler gets requests from
Mongrel2 in a round-robin fashion and, as shown in the little test
before, each one gets them in a very robust way. There are cases where
you can lose some messages, but they are very rare.
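
For the curious, the skeleton of such a handler looks roughly like this (a
hand-rolled sketch with the php-zmq extension, not Photon's actual server
code; the endpoints and UUID are examples from a typical mongrel2 config, and
the request/reply framing is simplified):

    <?php
    // Sketch of a raw Mongrel2 handler: Mongrel2 PUSHes requests to us
    // (we PULL) and reads our replies over PUB/SUB. Use the send_spec,
    // recv_spec and handler UUID from your own mongrel2 config.
    $ctx = new ZMQContext();

    $in = new ZMQSocket($ctx, ZMQ::SOCKET_PULL);
    $in->connect('tcp://127.0.0.1:9997');   // mongrel2 send_spec

    $out = new ZMQSocket($ctx, ZMQ::SOCKET_PUB);
    $out->setSockOpt(ZMQ::SOCKOPT_IDENTITY, '82209006-86FF-4982-B5EA-D1E29E55D481');
    $out->connect('tcp://127.0.0.1:9996');  // mongrel2 recv_spec

    while (true) {
        // Raw request: "UUID CONN_ID PATH SIZE:HEADERS,SIZE:BODY,"
        $raw = $in->recv();
        list($uuid, $connId, $path) = explode(' ', $raw, 4);

        $body = '<h1>hello from handler ' . getmypid() . '</h1>';
        $http = "HTTP/1.1 200 OK\r\nContent-Length: " . strlen($body) . "\r\n\r\n" . $body;

        // Simplified reply framing: "UUID SIZE:CONN_ID, HTTP_RESPONSE"
        $out->send($uuid . ' ' . strlen($connId) . ':' . $connId . ', ' . $http);
    }

Start two or three of these and Mongrel2 spreads the requests over them; kill
one and the others keep answering.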

Now, take a look at Heroku: http://www.heroku.com/how

- Mongrel2 + ZeroMQ do the real routing of client requests to your
handlers.
- Each Photon handler process is a dyno which can be anywhere as long as
it can connect to Mongrel2.
- Now, you need the process management of "Heroku" to
scale/stop/start/reload your "dynos".

Diprocd is this system. Another project I am developing at the moment is
the git integration to deploy the Photon processes on your VMs and
automatically manage the process definition for diprocd.

Yes, this means that you will get a personal Heroku.

But as you can see, adding a process manager at the Photon level means
that you need 2 process managers: one to manage the session leader
(outside of Photon) and one inside Photon to manage the handlers. You do
the work twice for no real benefit. So my approach was: reuse the process
manager routines from Google, which have been used for years in a very
robust project (Ganeti), and build a distributed system on top of them.

>> if you take a look at the code, especially the current develop branch,
>> it is totally database independent
> 
>> So templating etc. are shipped with Photon but not used in the core.
> 
> Great. Now about the routing engine: would it be possible to do this so
> that I could design my own router, or simply expose the request and
> response objects (perhaps with pre-parsed headers and body) to the
> urls.php file? From the default apps (when you create the server with
> hnu) it seems that I can't access the request (or can I?) before the view
> (... index($request, $match)), something that would not be great should I
> decide not to use MVC.

Clone the git repository and, in the develop branch, take a look at
photon/mongrel2.php and photon/server.php, especially line 265.

At line 265 you have the processRequest method. Line 277 is the call to
the current dispatcher/router. Maybe we could allow you to plug your own
in there. I think the $req here should be kept as it is, since it handles
the parsing of the headers and POST payload in a smart and
memory-efficient way, but the way these data structures are pushed to
your code to generate the possible answer could be different to match
your needs.

Maybe simply having 'dispatcher' => callable in the config, where the
callable accepts a request as a parameter and returns array($req,
$response) with the right methods on the response, would be enough.
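
Something along those lines, purely as a sketch of the proposal (the
'dispatcher' key does not exist yet, and Response and $req->path below are
placeholders):

    <?php
    // config.php -- sketch of the *proposed* option, not an existing one.
    return array(
        // ... the usual settings ...
        'dispatcher' => function ($req) {
            // You get the parsed request and decide everything yourself:
            // your own router, a SOAP endpoint, a single callable, whatever.
            if (preg_match('#^/ping#', $req->path)) {       // $req->path assumed
                $resp = new Response('pong', 'text/plain'); // placeholder class
            } else {
                $resp = new Response('Hello from my own router', 'text/html');
            }
            return array($req, $resp); // the pair Photon would send back
        },
    );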

Just tell me what you need, this does not seem complicated to do anyway
and, especially, one can do it efficiently from what I can see.


>> The goal is to be able to easily publish components which would be
>> packaged as .phar
> 
> Excellent idea. This would be a great way to distribute turn-key
> solutions. A .phar file could include SQLite db files, enabling apps that
> need storage to be runnable out of the box. If proper DB abstraction is
> used, it would then be rather painless to switch to a better DB server
> after you have seen that the app is working. If you ever plan on doing
> something in this area, you should really take a look at the SQL handler
> and ORM of the F3 PHP framework (http://fatfree.sourceforge.net).

And this is exactly why I am working very hard not to have a single DB
dependency in Photon. You like your ORM or SQL helper library; these are
the tools you enjoy, so you should be able to use them.

It is always easy to force dependencies on a given ORM, templating
engine, etc.; it is way harder not to do it while keeping the
performance. This is why I am not bundling a massive library like the
Symfony dependency injection component (2848 lines of code) in Photon. I
just carefully allow the configuration of the components as needed.

So yes, you will be able to provide a complete project as a .phar, with
the user just adding a "config.php" in the same folder as the .phar to
customize the project (path to the db, passwords, whatever). Or you can
publish a component which will then be extracted as a collection of
files into your project, and you then create a single .phar out of your
project. I will need to see how a collection of .phar files could be
managed inside a single project. For example, you have a golf management
website: golfmanager.phar, configured in config.php. You can then
provide an extension, handicapcalculator.phar, which, configured in
config.php and stored in the same folder, would allow the website to
calculate the handicap of a player based on previous scores and thus
customize the main application.

This would be the optimal approach.
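
As a sketch of what that layout and the user's config.php could look like
(the file names come from the example above; none of this is an existing
Photon convention yet):

    <?php
    // Hypothetical project folder:
    //   golfmanager.phar          <- the packaged main application
    //   handicapcalculator.phar   <- optional extension, dropped in the same folder
    //   config.php                <- the only file the user writes
    //
    // config.php just returns the user's customisations; the .phar
    // components would read them at startup.
    return array(
        'db_path'    => __DIR__ . '/golf.sqlite',
        'secret_key' => 'change-me',
        // extensions found in this folder that the main application should load
        'installed_extensions' => array('handicapcalculator'),
    );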

I hope this clarifies the goal a bit and shows where I want to take the project.

loïc