Maintaining PHP Apps as Daemons

Mert Simsek
Beyn Technology
Jun 1, 2022

In this post, we will look at the chain of challenges that comes with running PHP services as daemons, using tools such as ReactPHP, Swoole, and Laravel Octane. A daemon is just a process that runs in an endless loop and waits for jobs. Whether a daemon can be helpful for your particular app depends entirely on your app's requirements and what you expect a daemon to do for it. On the other hand, this approach is generally not recommended for PHP. Why? There are some challenges and issues with it, but we can indeed find a solution for each one.

How Does PHP Work?
In short, it works statelessly. PHP is an interpreted language: you write code statements (lines of code), and when a page is requested, the PHP interpreter loads your PHP code, parses it, and then executes it. Finally, it erases all context, objects, and variables from memory.

The problem is here. A daemon is always running, so nothing is rebuilt from scratch on demand. It works the opposite way from the common PHP lifecycle, as the sketch below shows.
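A tiny sketch illustrates the difference (the handler function is hypothetical, just to show the contrast):

// Under PHP-FPM this function always returns 1, because the whole
// process state is erased after each request. Inside a daemon, the
// process never dies, so the static variable keeps growing.
function handleRequest(): int
{
    static $requestCount = 0;
    return ++$requestCount;
}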

Swoole

To run our PHP apps as daemons, we use Swoole. Let’s look at its description on the website:

It’s designed for building large-scale concurrency systems. It is written in C/C++ and installed as a PHP extension, enabling PHP developers to write code in a more efficient manner, taking advantage of event loops and fibers/coroutines, providing you with an easy-to-use coroutine API, and allowing you to use existing libraries within the PHP ecosystem.

Basically, the PHP app is bootstrapped once and stays ready to serve requests on demand. In other words, it’s always alive. This part is critical, so keep it in mind: it’s always alive, and that is the dangerous part, as we’ll see. Let’s go further into the first challenge of this approach.
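As a minimal sketch of what “always alive” means with Swoole (the server and handler below are illustrative, not from a real project):

$server = new Swoole\Http\Server("0.0.0.0", 9501);

// This bootstrap code runs ONCE, when the daemon starts. Everything
// created here (connections, services, config) lives for the whole
// process lifetime.
$bootedAt = date(DATE_ATOM);

$server->on("request", function ($request, $response) use ($bootedAt) {
    // Only this callback runs per request; it shares the state above.
    $response->header("Content-Type", "text/plain");
    $response->end("Process booted at {$bootedAt}\n");
});

$server->start();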

Challenge #1: MySQL server has gone away

The solution is simple: connect (or reconnect) after the connection has been closed. Note that there is no way to explicitly close a PDO connection (you can only drop every reference to the object), which makes it an awkward DB layer for long-running background tasks. The message itself means that the existing connection is not usable anymore: something happened and the connection was closed by the MySQL server. There can be many reasons for this, but for long-running PHP apps it is generally caused by MySQL’s timeout values. When we start our daemon, the PHP connection is made, and if there is no query for some time, the DB connection actually dies. In our case, the MySQL server closes the idle connection after ‘wait_timeout’ (you can inspect it with SHOW VARIABLES LIKE 'wait_timeout'). When MySQL closes the idle connection, PDO does not receive any event, so the next time you run a query it returns the “MySQL server has gone away” error.
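Outside any framework, the reconnect pattern looks like this with plain PDO (a minimal sketch; the helper name, DSN, and credentials are placeholders):

// PDO has no close() method: dropping every reference to the old
// object (e.g. `$pdo = null;` at the call site) is what actually
// closes the socket, after which we can build a fresh connection.
function freshConnection(string $dsn, string $user, string $pass): PDO
{
    return new PDO($dsn, $user, $pass, [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);
}

// At the call site: $pdo = null; $pdo = freshConnection($dsn, $user, $pass);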

- Symfony

For the Symfony framework, we use a “RequestSubscriber” to handle this situation. Basically, we send a sample query to check the existing connection, and if it has been cut off by MySQL, we reconnect to the database. This runs for every HTTP request from the clients; in effect, we reconnect whenever the connection has been lost for any reason. To tell the truth, this is obviously annoying, but it’s the only way of doing it right now.

try {
    // Ping MySQL to verify that the existing connection is still alive.
    $this->entityManager->getConnection()->executeQuery("SELECT 1")->fetchOne();
} catch (\Exception $e) {
    if (str_contains($e->getMessage(), 'gone away')) {
        // MySQL closed the idle connection; drop it and reconnect.
        $this->entityManager->getConnection()->close();
        $this->entityManager->getConnection()->connect();
    }
}
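For context, here is a sketch of how that check might be wired up as an event subscriber so it runs on every request (the class name follows the post; the wiring itself is an assumption based on standard Symfony conventions):

use Doctrine\ORM\EntityManagerInterface;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\Event\RequestEvent;
use Symfony\Component\HttpKernel\KernelEvents;

class RequestSubscriber implements EventSubscriberInterface
{
    public function __construct(private EntityManagerInterface $entityManager)
    {
    }

    public static function getSubscribedEvents(): array
    {
        // Run the connection check on every incoming HTTP request.
        return [KernelEvents::REQUEST => 'onKernelRequest'];
    }

    public function onKernelRequest(RequestEvent $event): void
    {
        try {
            $this->entityManager->getConnection()->executeQuery("SELECT 1")->fetchOne();
        } catch (\Exception $e) {
            if (str_contains($e->getMessage(), 'gone away')) {
                $this->entityManager->getConnection()->close();
                $this->entityManager->getConnection()->connect();
            }
        }
    }
}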

- Phalcon

The same goes for Phalcon. Let’s say you have Phalcon tasks and they’re always alive. In that case, MySQL will eventually close their connections too. When you get the error, you can reconnect explicitly like this:

try {
    return $model::find($where);
} catch (\Exception $e) {
    if (strpos($e->getMessage(), 'MySQL server has gone away') !== false) {
        // MySQL closed the idle connection; reconnect and retry the query.
        $model->getReadConnection()->close();
        $model->getReadConnection()->connect();
        return $model::find($where);
    } else {
        echo PHP_EOL . "[ERROR]:" . $e->getCode() . " ||| " . $e->getMessage() . PHP_EOL;
    }
}

Challenge #2: Redis Idle Timeout (‘read error on connection’)

Opening a connection usually has an operational cost, so the modern best practice is to keep connections open. On the other hand, open connections require resources to manage, so keeping a lot of idle connections open can also be problematic. This trade-off is usually resolved via connection pools. We can think of this situation as the same as the MySQL case.

  • We can set “default_socket_timeout” (not recommended):
    ini_set('default_socket_timeout', -1);
  • We can also set “OPT_READ_TIMEOUT” on the Redis client (not recommended):
    $redis->setOption(Redis::OPT_READ_TIMEOUT, -1);
  • Last but not least, reconnect when the connection has been dropped. I prefer to do it this way:

$rds = new Redis();

try {
    // pconnect() reuses a persistent connection when one is available.
    $ret = $rds->pconnect("127.0.0.1", 6390);
    if ($ret == false) {
        echo "Redis client couldn't be initialized.";
        exit;
    }

    var_dump($rds->get("key_1"));
} catch (Exception $e) {
    // The idle connection was dropped ('read error on connection');
    // open a fresh persistent connection.
    $ret = $rds->pconnect("127.0.0.1", 6390);
}
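A slightly more defensive variant retries the read once after reconnecting (a hypothetical helper, not from the original code):

function getWithRetry(Redis $rds, string $key)
{
    try {
        return $rds->get($key);
    } catch (RedisException $e) {
        // 'read error on connection': the idle link was dropped,
        // so reconnect once and retry the same read.
        $rds->pconnect("127.0.0.1", 6390);
        return $rds->get($key);
    }
}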

I’d also like to share this Redis page for more information:

https://redis.io/docs/reference/clients/

Challenge #3: TCP Connections and “CLOSE_WAIT” States

Let’s say we have the following design, where our app calls external services for each request from the clients:

CLIENTS -> request -> OUR APP -> request -> EXTERNAL SERVICES

If you initialize a cURL handle for each request from the clients, you might end up with a ton of zombie connections, all of them useless for new requests (you can watch them pile up with ss -tan state close-wait).

class HttpService
{
    private $curlHandler;

    public function __construct()
    {
    }

    public function sendRequest()
    {
        // A brand-new cURL handle is created for EVERY request, so the
        // previous connections are left behind in CLOSE_WAIT.
        $this->curlHandler = curl_init();

        $url = "///"; // endpoint redacted

        curl_setopt(
            $this->curlHandler,
            CURLOPT_HTTPHEADER,
            array(
                'Connection: keep-alive',
            ),
        );
        curl_setopt($this->curlHandler, CURLOPT_URL, $url);
        curl_setopt($this->curlHandler, CURLOPT_CUSTOMREQUEST, "POST");
        curl_setopt($this->curlHandler, CURLOPT_POSTFIELDS, []);
        curl_setopt($this->curlHandler, CURLOPT_HEADER, true);
        curl_setopt($this->curlHandler, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($this->curlHandler, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($this->curlHandler, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($this->curlHandler, CURLOPT_VERBOSE, true);

        // Capture cURL's verbose log so we can inspect connection reuse.
        $verbose = fopen('php://temp', 'w+');
        curl_setopt($this->curlHandler, CURLOPT_STDERR, $verbose);
        $response = curl_exec($this->curlHandler);
        dump("[RESPONSE]:" . $response);
        rewind($verbose);
        $verboseLog = stream_get_contents($verbose);
        dump("[VERBOSE]:" . $verboseLog);
        dump("[CURL_INFO]:" . json_encode(curl_getinfo($this->curlHandler)));
    }
}

Clearly, we keep creating new connections while the previous ones linger as useless zombies. Under the PHP-FPM runtime this would make sense, because the handle dies with the request anyway, but in our scenario it is the wrong way. We’re supposed to reuse connections because, as I said, creating new ones has a big cost for our resources and network.

Let’s convert our HttpService to the following.

class HttpService
{
    private $curlHandler;

    public function __construct()
    {
        // Create the cURL handle once; it is reused for every request
        // for the whole lifetime of the daemon.
        if (empty($this->curlHandler)) {
            $this->curlHandler = curl_init();
        }
    }

    public function sendRequest()
    {
        $url = "///"; // endpoint redacted

        curl_setopt(
            $this->curlHandler,
            CURLOPT_HTTPHEADER,
            array(
                'Connection: keep-alive',
            ),
        );
        curl_setopt($this->curlHandler, CURLOPT_URL, $url);
        curl_setopt($this->curlHandler, CURLOPT_CUSTOMREQUEST, "POST");
        curl_setopt($this->curlHandler, CURLOPT_POSTFIELDS, []);
        curl_setopt($this->curlHandler, CURLOPT_HEADER, true);
        curl_setopt($this->curlHandler, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($this->curlHandler, CURLOPT_SSL_VERIFYPEER, false);
        curl_setopt($this->curlHandler, CURLOPT_SSL_VERIFYHOST, false);
        curl_setopt($this->curlHandler, CURLOPT_VERBOSE, true);

        // Capture cURL's verbose log so we can inspect connection reuse.
        $verbose = fopen('php://temp', 'w+');
        curl_setopt($this->curlHandler, CURLOPT_STDERR, $verbose);
        $response = curl_exec($this->curlHandler);
        dump("[RESPONSE]:" . $response);
        rewind($verbose);
        $verboseLog = stream_get_contents($verbose);
        dump("[VERBOSE]:" . $verboseLog);
        dump("[CURL_INFO]:" . json_encode(curl_getinfo($this->curlHandler)));
    }
}

We won’t have CLOSE_WAIT states anymore, because we’re reusing the same cURL handle and the keep-alive mechanism actually works (in the daemon runtime, the service object itself is created once and kept alive, so the handle created in the constructor survives across requests). Let’s check out the output. Previously, I always saw the following output for new requests (I replaced domains and IP information with ///). When my app got a request from a client and sent multiple outgoing requests, it reused the connection and did not create a new one. But when the client sent the next request, my app created a new connection to the external services; it did not reuse the existing cURL handle.

Trying ///…
Connected to /// port 80 (#0)
POST ** HTTP/1.1
Host: ///
Connection: Keep-Alive
Accept: */*
User-Agent: Symfony HttpClient/Curl
Accept-Encoding: gzip
Content-Length: 244
Content-Type: application/x-www-form-urlencoded

upload completely sent off: 244 out of 244 bytes
Mark bundle as not supporting multiuse
HTTP/1.1 200 OK
Date: Sun, 22 May 2022 20:59:59 GMT
Server: Apache/2.2.15 (Red Hat)
X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1
Content-Length: 32
Cache-Control: max-age=0
Expires: Sun, 22 May 2022 20:59:59 GMT
Keep-Alive: timeout=15, max=100
Connection: Keep-Alive
Content-Type: text/plain;charset=ISO-8859-1

Let’s focus here: “Keep-Alive: timeout=15, max=100”. This max value decreases for subsequent requests on the same connection, but it jumps back to 100 whenever a new connection is created. Once we stop calling curl_init() for each request, the “max” value decreases as expected:

Found bundle for host ///: 0x5607b85661c0 [serially]
Re-using existing connection! (#42) with host ///
Connected to /// port 80 (#42)
POST /// HTTP/1.1
Host: ///
Accept: */*
Connection: keep-alive
Content-Type: application/x-www-form-urlencoded
Content-Length: 250

upload completely sent off: 250 out of 250 bytes
Mark bundle as not supporting multiuse
HTTP/1.1 200 OK
Date: Mon, 23 May 2022 19:58:56 GMT
Server: Apache/2.2.15 (Red Hat)
X-Powered-By: Servlet 2.5; JBoss-5.0/JBossWeb-2.1
Content-Length: 30
Cache-Control: max-age=0
Expires: Mon, 23 May 2022 19:58:56 GMT
Keep-Alive: timeout=15, max=89
Connection: Keep-Alive
Content-Type: text/plain;charset=ISO-8859-1

Challenge #4: Resource Usage

PHP is not a language that is sufficiently mature to run for hours, days, weeks, or months. PHP is written in C, and all of the magic it provides has to be handled somewhere. Garbage collection might or might not work, depending on your PHP version and on which extensions you have compiled and used. Let’s compare the lifecycles of the two runtimes; I covered this in more detail in my previous blog post. As long as a daemon runs endlessly, its resource usage keeps growing.

The lifecycle of a PHP-FPM request:

1- Receive the request
2- Load and compile the PHP files and codes
3- Initialize the context, objects, and variables
4- Execute functions
5- Send the response
6- Recycle the resources

All six steps above are handled for each request. The lifecycle of a Swoole request:

1- Receive the request
2- Execute functions
3- Send the response

When we started the daemons, they used around 2 to 2.5 GB of RAM. However, it increased day by day; after a few days of usage it had gone from 2 GB to about 4 GB. Obviously, this is a trade-off and expected behavior. We just need to keep resource usage in mind. To reduce memory usage in daemon mode, we’re supposed to refactor and review our code.
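One common mitigation, as a sketch: Swoole can recycle worker processes after a set number of requests via its max_request setting, which puts an upper bound on how far a single process can drift (the numbers below are illustrative, not from our setup):

$server = new Swoole\Http\Server("0.0.0.0", 9501);

$server->set([
    'worker_num'  => 4,
    // Restart each worker after it has served 10,000 requests,
    // releasing whatever memory it has accumulated along the way.
    'max_request' => 10000,
]);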

To Sum Up

I’d like to point out the challenges of running PHP apps as daemons, built once and always running. There are really critical problems in this runtime, but there are solutions to apply for each of them. I generally prefer to run PHP apps as daemons, because why should I bootstrap them from scratch for each client request? That genuinely bothers me. I hope this article helps you with these challenges in a positive way.


Mert Simsek
Beyn Technology

I’m a software developer who wants to learn more. First of all, I’m interested in building, testing, and deploying automatically and autonomously.