Symfony and ReactPHP Series — Chapter 2

Apisearch
3 min read · May 18, 2019


In the first chapter, we built a simple application with Symfony 4, set up a basic Nginx server with PHP-FPM, and ran a small benchmark. Remember the 50th-percentile response time?

232ms

Let’s try to improve that number.

PHP-PM to the rescue

Let’s take a look at the first alternative: start using ReactPHP. There’s a project called PHP-PM that allows you to run several ReactPHP workers: it dispatches each incoming request to an available worker, skips the workers that are busy, and reuses them as soon as they become free.

This is interesting because it lets us start using ReactPHP on our servers without much pain or Promises knowledge, while noticeably improving our response times.

To install PHP-PM, we must include both the php-pm library and the HttpKernel adapter for Symfony:

{
    "require": {
        "php-pm/php-pm": "*",
        "php-pm/httpkernel-adapter": "*"
    }
}

Once Composer updates the dependencies, we will find a nice binary called ppm under the vendor/bin directory. With this single command, you will be able to start a new server for your Symfony 4 application. If it fails, check that --cgi-path points to a real file on your system.

php vendor/bin/ppm start \
--host=0.0.0.0 \
--port=8100 \
--workers=3 \
--bootstrap=symfony \
--app-env=prod \
--debug=0 \
--logging=0 \
--cgi-path=/usr/bin/php-cgi

As you can see, in this case we’re creating 3 workers. That means we will spawn 3 separate ReactPHP services, orchestrated by the principal server. Each one takes care of a Request whenever it is available. Once a Request is accepted by one of the workers, that worker becomes busy. As soon as the request has been handled, the worker becomes available again, ready to manage the next Request.
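The dispatch model above can be sketched with a tiny discrete simulation (an illustration of the idea, not PHP-PM code): N workers, each occupied for a fixed service time per request, with every new request going to the earliest available worker.

```python
# Minimal sketch of PHP-PM's dispatch model: requests go to the first
# free worker; a busy worker is skipped until its current request ends.
# Assumes all requests are already queued and each takes service_ms.
import heapq

def simulate(num_workers, num_requests, service_ms=20):
    """Return the time (ms) at which the last request finishes."""
    # Each heap entry is the time at which a worker becomes free again.
    free_at = [0] * num_workers
    heapq.heapify(free_at)
    finish = 0
    for _ in range(num_requests):
        start = heapq.heappop(free_at)   # earliest available worker
        finish = start + service_ms      # worker stays busy for service_ms
        heapq.heappush(free_at, finish)  # worker becomes available again
    return finish

print(simulate(3, 1000))   # 6680 ms — about 6.7 seconds for 1000 requests
```

With 3 workers and a 20ms request, 1000 queued requests take roughly 6.7 seconds to drain, which matches the benchmark below.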

Let’s check some benchmarks.

Percentage of the requests served within a certain time (ms)
50% 351
66% 357
75% 358
80% 359
90% 361
95% 362
98% 364
99% 1278
100% 1342 (longest request)

Oops. With 3 workers, the 50th percentile is much slower than with the Nginx infrastructure. Interesting.

Now take a look at what we have created here: 3 workers running at the same time, each with a blocking queue. First in, first out. The first 3 requests are sent and the 3 workers become busy, and they will stay busy for about 20ms. Then the next 3 requests are handled, and so on. 1 second / 20ms = 50 requests per second per worker, which means our infrastructure can handle 150 requests per second. 1000 requests means about 6.6 seconds to handle them all.
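The arithmetic in the paragraph above, spelled out (the 20ms comes from the internal curl call mentioned in chapter one):

```python
service_ms = 20                      # duration of the blocking curl call
workers = 3
per_worker = 1000 / service_ms       # requests/second one worker can handle
throughput = per_worker * workers    # total requests/second for the pool
total_s = 1000 / throughput          # seconds to drain 1000 requests
print(per_worker, throughput, round(total_s, 1))  # 50.0 150.0 6.7
```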

Let’s add more workers. 10.

Percentage of the requests served within a certain time (ms)
50% 105
66% 106
75% 107
80% 107
90% 108
95% 109
98% 123
99% 1089
100% 1109 (longest request)

Voilà! By adding 7 more workers, we start seeing better results. 105ms is quite good given these numbers. But… do you think we can do better? Of course: we could keep adding workers, and we could easily turn this 100 into 25 or 30 (remember that we have a curl call inside that lasts 20ms, so 25ms is a good number). But is that what we want? Do we want to improve the application by adding more and more servers? Each server means memory consumption, even when it is not doing absolutely anything.
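A rough model shows why more workers shrink the latency, under the assumption that the benchmark runs at a fixed concurrency (50 here is a guess; the article does not state the actual value): with a fixed 20ms service time, a request waits for roughly ceil(concurrency / workers) rounds of 20ms.

```python
import math

def approx_p50_ms(concurrency, workers, service_ms=20):
    # Each "round" of service lasts service_ms; a request completes
    # after ceil(concurrency / workers) rounds under full load.
    return math.ceil(concurrency / workers) * service_ms

# Assumed concurrency of 50: 10 workers give ~100ms (we observed 105),
# and only with ~50 workers do we approach the 20ms floor.
for w in (10, 25, 50):
    print(w, approx_p50_ms(50, w))
```

This is a back-of-the-envelope sketch, not a queueing-theory result, but it illustrates the trade-off: reaching the 20-25ms floor this way means provisioning one worker per concurrent request.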

Why don’t we just improve the application with the minimum resources? Shouldn’t that be our job?

In fact, what we’re doing here is sizing our application for traffic peaks. We must keep the application ready for the maximum number of concurrent requests we may receive, and depending on our business and traffic, this can easily turn into an infrastructure that requires a lot of memory.

I’m not happy with that. Are you?

I promise that we can improve this number.

You can continue to Symfony and ReactPHP Series — Chapter 3

