Stefan Koopmanschap: Silex is (almost) dead, long live my-lex (17.11.2017, 10:30 UTC)

SymfonyCon is happening in Cluj, and on Thursday Fabien Potencier announced some important changes during his keynote. One of the most important announcements was the EOL of Silex in 2018.

EOL next year for Silex! #SymfonyCon (@gbtekkie)

Silex

Silex has been, and still is, an important player in the PHP ecosystem. It played an extremely important role in the Symfony ecosystem, showing many Symfony developers that there was more than just the full Symfony stack. It was also one of the first microframeworks to show the PHP community the power of working with individual components, and how you can glue those together into an extremely powerful foundation to build upon, one that includes most of the best practices.

Why EOL?

Now, I wasn't at the keynote, so I can only guess at the reasons, but the decision makes sense to me. When Silex was released, the whole concept of taking individual components to build a microframework was pretty new to PHP developers. The PHP component ecosystem was a lot more limited as well. A huge group of PHP developers was used to working with full-stack frameworks, so building your own framework (even with components) was deemed by many to be reinventing the wheel.

Fast-forward to 2017, and a lot of PHP developers are by now used to individual components; Silex has little left to prove on that topic. With Composer being a stable, proven tool, the PHP component ecosystem growing every day, and now the introduction of Symfony Flex to easily set up and manage projects, maintaining a separate microframework based on Symfony components is just overhead. Using either Composer or Symfony Flex, you can set up a project similar to an empty Silex project in a matter of minutes, as the sketch below illustrates.
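To make that concrete, here is a minimal sketch of such a front controller, assuming the symfony/http-foundation and symfony/routing components have been installed with Composer (file name and route are illustrative):

<?php
// public/index.php - a bare-bones "my-lex" built from two Symfony components

use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;
use Symfony\Component\Routing\Exception\ResourceNotFoundException;
use Symfony\Component\Routing\Matcher\UrlMatcher;
use Symfony\Component\Routing\RequestContext;
use Symfony\Component\Routing\Route;
use Symfony\Component\Routing\RouteCollection;

require __DIR__ . '/../vendor/autoload.php';

$routes = new RouteCollection();
$routes->add('hello', new Route('/hello/{name}', ['name' => 'world']));

$request = Request::createFromGlobals();
$matcher = new UrlMatcher($routes, (new RequestContext())->fromRequest($request));

try {
    // Match the current path and build a response from its parameters
    $parameters = $matcher->match($request->getPathInfo());
    $response = new Response('Hello ' . $parameters['name']);
} catch (ResourceNotFoundException $e) {
    $response = new Response('Not Found', 404);
}

$response->send();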

Constructicons

I have been a happy user of Composer with individual components for a while now. One of my first projects with individual components even turned into a conference talk. I'll update the talk soon, as I have since found a slightly better structure, and if I can make the time for it, I'll also write something about this new and improved structure. I've used it for a couple of projects now and I'm quite happy with it. I also still have to play with Symfony Flex; it looks really promising and I can't wait to give it a try.

So the "my-lex" in the title, what is that about? It is about the choice you now have. You can basically build your own Silex using either Composer and components or Symfony Flex. I would've laughed hard a couple of years ago if you'd said to me that I would say this but: Build your own framework!

Is Silex being EOL'ed a bad thing?

No. While it is sad to see such an important project go, I think the Symfony and PHP ecosystems have by now moved past the point of needing Silex. Does this mean we don't need microframeworks anymore? I won't say that, but with Slim still going strong, the loss of Silex isn't all that bad. And with Composer, Flex and the huge number of PHP components, you can always build a microframework that suits your specific needs.

The only situation where Silex stopping is an issue is for open source projects that are based on Silex, such as Bolt (who already anticipated this), as well as, of course, your personal or business projects based on Silex. While this software will keep on working, you won't get new updates to the core of those projects, so eventually you'll have to put in the effort to rewrite it to something else.

Link
Remi Collet: Fedora 27: changes in httpd and php (17.11.2017, 08:42 UTC)

The Apache HTTP server and PHP configuration have changed in Fedora 27; here are some explanations.

1. Switch of the Apache HTTP server to event mode

Since the first days of the distribution, the server has used the prefork MPM.

For obvious performance reasons, we chose to follow the upstream project's recommendations and to use the event MPM by default.

This change is also required to get the full benefit and features of the HTTP/2 protocol via mod_http2.

2. The problem of mod_php

The mod_php module is only supported when the prefork MPM is used.

In the PHP documentation, we can read:

Warning We do not recommend using a threaded MPM in production with Apache 2.

And, indeed, we already have some bug reports about crashes in this configuration.

So it doesn't make sense to keep mod_php by default.

Furthermore, this module has some annoying limitations:

  • integrated into the web server, it shares the server's memory, which may have some negative security impacts
  • only a single version of PHP can be loaded

3. Using FastCGI

For many years, we have been working to make PHP execution as flexible as possible, using various combinations, without configuration changes:

  • httpd + mod_php
  • httpd + php-fpm (when mod_php is disabled or missing and with a running php-fpm server)
  • nginx + php-fpm

The FPM way has become the default recommended configuration for safe PHP execution (the httpd wiring is sketched after this list):

  • support of multiple web servers (httpd, nginx, lighttpd)
  • frontend isolation for security
  • multiple backends
  • micro-services architecture
  • containers (docker)
  • multiple versions of PHP
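For reference, the Apache HTTP server side of such a setup is wired with a proxy handler along these lines (a sketch; the exact socket path may differ on your system):

 # Pass PHP scripts to the php-fpm pool over a UNIX socket
 <FilesMatch \.php$>
     SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
 </FilesMatch>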

4. FPM by default

In Fedora 27, mod_php ZTS (multi-threaded) is still provided but disabled, so FastCGI is now used by default.

To avoid breaking existing configurations during the distribution upgrade, and to have a working server after installation, we chose to implement some solutions, probably temporary ones:

  • the php package has an optional dependency on the php-fpm package, so it is now installed by default
  • the httpd service has a dependency on the php-fpm service, so it is started automatically

5. Known issues

5.1. Configuration change

After a configuration change, or after installing a new extension, it is now required to restart the php-fpm service.
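With systemd, that means:

 systemctl restart php-fpm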

5.2. Configuration files

With mod_php, it is common to use the php_value or php_flag directives in the Apache HTTP server configuration or in some .htaccess file.

It is now required to use the php_value or php_flag directives in the FPM pool configuration file, or to use some .user.ini file in the application directory, as sketched below.
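For example (option names and values here are purely illustrative):

 ; in the FPM pool configuration, e.g. /etc/php-fpm.d/www.conf
 php_value[memory_limit] = 256M
 php_flag[display_errors] = off

 ; or the equivalent in a .user.ini file in the application directory:
 ; memory_limit = 256M
 ; display_errors = Off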

6. Switching back to mod_php

If you really want to keep using mod_php (temporarily), this is still possible:

  • Switch back to prefork MPM in the /etc/httpd/conf.modules.d/00-mpm.conf file
 LoadModule mpm_prefork_module modules/mod_mpm_prefork.so
 #LoadModule mpm_worker_module modules/mod_mpm_worker.so
 #LoadModule mpm_event_module modules/mod_mpm_event.so
  • Enable the module in the /etc/httpd/conf.modules.d/15-php.conf file. Warning: this configuration is not supported, and no bug reports will be accepted.
 # ZTS module is not supported, so FPM is preferred
 LoadModule php7_module modules/libphp7-zts.so

After this change, the php-fpm package can be removed.

7. Conclusion

Fedora 27 now uses a modern configuration, matching the upstream projects' recommendations. Security and performance are improved.

Any change may raise some small issues, and a lot of gnashing of teeth, but we will try to take care of any difficulties, and to improve what must be improved in the next updates or in the next Fedora versions.

I plan to update this entry according to feedback.

Link
SitePoint PHP: How to Read Big Files with PHP (Without Killing Your Server) (16.11.2017, 18:00 UTC)

It’s not often that we, as PHP developers, need to worry about memory management. The PHP engine does a stellar job of cleaning up after us, and the web server model of short-lived execution contexts means even the sloppiest code has no long-lasting effects.

There are rare times when we may need to step outside of this comfortable boundary --- like when we're trying to run Composer for a large project on the smallest VPS we can create, or when we need to read large files on an equally small server.

It’s the latter problem we'll look at in this tutorial.

The code for this tutorial can be found on GitHub.

Measuring Success

The only way to be sure we’re making any improvement to our code is to measure a bad situation and then compare that measurement to another after we’ve applied our fix. In other words, unless we know how much a “solution” helps us (if at all), we can’t know if it really is a solution or not.

There are two metrics we care about. The first is CPU usage: how fast or slow is the process we want to work on? The second is memory usage: how much memory does the script take to execute? These are often inversely proportional --- meaning that we can cut memory usage at the cost of CPU usage, and vice versa.

In an asynchronous execution model (like with multi-process or multi-threaded PHP applications), both CPU and memory usage are important considerations. In traditional PHP architecture, these generally become a problem when either one reaches the limits of the server.

It's impractical to measure CPU usage inside PHP. If that’s the area you want to focus on, consider using something like top, on Ubuntu or macOS. For Windows, consider using the Linux Subsystem, so you can use top in Ubuntu.

For the purposes of this tutorial, we’re going to measure memory usage. We’ll look at how much memory is used in “traditional” scripts. We’ll implement a couple of optimization strategies and measure those too. In the end, I want you to be able to make an educated choice.

The methods we’ll use to see how much memory is used are:

// formatBytes is taken from the php.net documentation

memory_get_peak_usage();

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

We’ll use these functions at the end of our scripts, so we can see which script uses the most memory at one time.

What Are Our Options?

There are many approaches we could take to read files efficiently. But there are also two likely scenarios in which we could use them. We could want to read and process data all at the same time, outputting the processed data or performing other actions based on what we read. We could also want to transform a stream of data without ever really needing access to the data.

Let’s imagine, for the first scenario, that we want to be able to read a file and create separate queued processing jobs every 10,000 lines. We’d need to keep at least 10,000 lines in memory, and pass them along to the queued job manager (whatever form that may take).

For the second scenario, let’s imagine we want to compress the contents of a particularly large API response. We don’t care what it says, but we need to make sure it’s backed up in a compressed form.
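To give a flavor of that second scenario, a stream-filter sketch (an assumed approach, not the article's own code; the file name is hypothetical) can compress while copying, without ever holding the whole payload in memory:

<?php
// Hypothetical sketch: compress a large file stream-to-stream.
// zlib.deflate produces raw DEFLATE data; zlib.inflate reverses it.

$in = fopen('large-response.json', 'rb');
$out = fopen('large-response.json.deflated', 'wb');

// Anything written to $out is now compressed on the fly.
stream_filter_append($out, 'zlib.deflate', STREAM_FILTER_WRITE, ['level' => 6]);

stream_copy_to_stream($in, $out);

fclose($in);
fclose($out);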

In both scenarios, we need to read large files. In the first, we need to know what the data is. In the second, we don’t care what the data is. Let’s explore these options…

Reading Files, Line By Line

There are many functions for working with files. Let’s combine a few into a naive file reader:

// from memory.php

function formatBytes($bytes, $precision = 2) {
    $units = array("b", "kb", "mb", "gb", "tb");

    $bytes = max($bytes, 0);
    $pow = floor(($bytes ? log($bytes) : 0) / log(1024));
    $pow = min($pow, count($units) - 1);

    $bytes /= (1 << (10 * $pow));

    return round($bytes, $precision) . " " . $units[$pow];
}

print formatBytes(memory_get_peak_usage());

// from reading-files-line-by-line-1.php

function readTheFile($path) {
    $lines = [];
    $handle = fopen($path, "r");

    while(!feof($handle)) {
        $lines[] = trim(fgets($handle));
    }

    fclose($handle);

    return $lines;
}

Truncated by Planet PHP, read more at the original (another 7383 bytes)

Link
Cees-Jan Kiewiet: Extending ReactPHP's Child Processes Part Two (16.11.2017, 00:00 UTC)

react/child-process is very flexible and can work in a lot of ways, but sometimes you don't want to be bothered with the details of how it works and just want a simpler API.

Link
Voices of the ElePHPant: Interview with Joshua Ray Copeland (15.11.2017, 12:30 UTC) Link
Rob Allen: Implementing CORS in Zend Expressive (15.11.2017, 11:03 UTC)

On a recent project, I needed to implement CORS support for my Expressive API. The easiest way to do this is to use Mike Tuupola's PSR-7 CORS Middleware.

As this is a standard Slim-Style PSR-7 middleware implementation, we need to wrap it for Expressive, so we make a factory:

App/Factory/CorsMiddlewareFactory.php:

<?php declare(strict_types=1);
namespace App\Factory;

use Tuupola\Middleware\Cors;
use Zend\Diactoros\Response;
use Zend\Stratigility\Middleware\CallableMiddlewareWrapper;

class CorsMiddlewareFactory
{
    public function __invoke($container)
    {
        return new CallableMiddlewareWrapper(
            new Cors([
                "origin" => ["*"],
                "methods" => ["GET", "POST", "PUT", "PATCH", "DELETE"],
                "headers.allow" => ["Content-Type", "Accept"],
                "headers.expose" => [],
                "credentials" => false,
                "cache" => 0,
            ]),
            new Response()
        );
    }
}

We then register this in our App\ConfigProvider::getDependencies() by adding to the factories key so that it looks something like this:

'factories'  => [
        Action\HomePageAction::class => Action\HomePageFactory::class,
        \Tuupola\Middleware\Cors::class => Factory\CorsMiddlewareFactory::class,
    ],

If you don't want to fully qualify, add use Tuupola\Middleware\Cors; to the top of the file so that you can just use Cors::class here.

Lastly, we register the middleware in config/pipeline.php:

$app->pipe(\Tuupola\Middleware\Cors::class);

Place it somewhere near the top of the list; personally, I place it just after piping the ServerUrlMiddleware::class.
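For instance, the relevant excerpt of config/pipeline.php might then look like this (assuming the default Expressive skeleton, where ServerUrlMiddleware is already piped):

$app->pipe(\Zend\Expressive\Helper\ServerUrlMiddleware::class);
$app->pipe(\Tuupola\Middleware\Cors::class);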

We now have working CORS:

$ curl -X OPTIONS -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: Accept, Content-Type"  \
  -H "Origin: http://localhost" -H "Accept: application/json" http://localhost:8890/
HTTP/1.1 200 OK
Host: localhost:8890
Connection: close
Access-Control-Allow-Origin: http://localhost
Vary: Origin
Access-Control-Allow-Headers: content-type, accept
Content-type: text/html; charset=UTF-8

Failure looks like this:

$ curl -X OPTIONS -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: Accept, Content-Type X-Clacks-Overhead"  \
  -H "Origin: http://localhost" -H "Accept: application/json" http://localhost:8890/
HTTP/1.1 401 Unauthorized
Host: localhost:8890
Connection: close
Content-type: text/html; charset=UTF-8

By default, it doesn't tell you what went wrong, which isn't too helpful.

Providing JSON error responses

To provide a JSON error response, you need to set the error option to a callable and you can then return a JsonResponse:

App/Factory/CorsMiddlewareFactory.php:

<?php declare(strict_types=1);
namespace App\Factory;

use Psr\Http\Message\RequestInterface;
use Psr\Http\Message\ResponseInterface;
use Tuupola\Middleware\Cors;
use Zend\Diactoros\Response;
use Zend\Diactoros\Response\JsonResponse;
use Zend\Stratigility\Middleware\CallableMiddlewareWrapper;

class CorsMiddlewareFactory
{
    public function __invoke($container)
    {
        return new CallableMiddlewareWrapper(
            new Cors([
                "origin" => ["*"],
                "methods" => ["GET", "POST", "PUT", "PATCH", "DELETE"],
                "headers.allow" => ["Content-Type", "Accept"],
                "headers.expose" => [],
                "credentials" => false,
                "cache" => 0,
                "error" => [$this, 'error'],
            ]),
            new Response()
        );
    }

    public static function error(
        RequestInterface $request,
        ResponseInterface $response,
        $arguments) {

        return new JsonResponse($arguments);
    }
}

As you can see, we've created a new error method that returns a JsonResponse. We can then encode $arguments, as it contains the information about what the problem is:

$ curl -X OPTIONS -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: Accept, Content-Type X-Clacks-Overhead"  \
  -H "Origin: http://localhost" -H "Accept: application/json" http://localhost:8890/
HTTP/1.1 401 Unauthorized
Host: localhost:8890
Date: Sat, 21 Oct 2017 10:41:18 +0000
Connection: close
X-Powered-By: PHP/7.1.8
Content-Type: application/json

{"message":"CORS requested header i

Truncated by Planet PHP, read more at the original (another 1587 bytes)

Link
Remi Collet: End of PHP 7.2 FTBFS marathon (14.11.2017, 10:10 UTC)

QA is a very important part of my daily work, and since PHP 7.2 is available in Fedora rawhide, we have to ensure everything works as expected with this new version.

As already explained, Koschei is our QA tool, used to monitor the full PHP stack, including ~60 extensions and ~500 libraries.

After the initial build of PHP 7.2.0RC3 in rawhide (September 29th), we had around one hundred FTBFS packages (Failed To Build From Sources).

Today everything is OK: all FTBFS issues have been fixed.

1. Extensions

Most PHP extensions are now compatible with PHP 7.2, except:

  • XDebug, but version 2.6.0-dev works and a beta should be released soon
  • Timecop; this has been reported upstream, and a fix is being sought.

2. Mcrypt

Lots of packages were broken because they sadly still rely on the old deprecated mcrypt extension.

Although I have been fighting for years to be able to remove it (see about libmcrypt and php-mcrypt), we still need it, so I have created the new php-pecl-mcrypt package from the PECL cemetery. This is obviously only a temporary solution; this extension is deprecated, unmaintained, and should die.

3. Upstream patches

Most PHP projects consider a fix for a new PHP version to be a standard bugfix, which means it can land in a simple minor release, without requiring any major change.

So, some projects have already made the few minor changes needed, but have not yet released a new version including these changes. The work was then only about finding these fixes and applying them to the Fedora packages.

4. Pull Requests

Most projects are not yet monitoring PHP 7.2 (it is not enabled in their Travis configuration), so they were not really aware of the needed changes.

So, of course, the first task was to report these failures upstream, usually providing a possible fix (PR).

Some are already merged, some are still waiting for review.

5. Skip some

For a very few packages, as no really good fix exists for now, we had to temporarily skip some tests with 7.2. Most are related to the session change, which breaks unit tests (session_start() failing) without any real impact on actual usage.

6. Common errors

The most common errors requiring a fix are (see the snippet after this list):

  • count() on a non-countable value (and NULL is not countable)
  • stricter prototype checking, a fix for #73987, originally applied in 7.1.2RC1 and then reverted as it introduced a small BC break
  • object is a reserved keyword
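As an illustration, the first and last of these can be reproduced in a few hypothetical lines:

<?php
// PHP 7.2 emits a warning when counting a non-countable value:
$value = null;
count($value); // Warning: count(): Parameter must be an array or an object that implements Countable

// "object" is now a reserved word, so this class can no longer be declared:
// class object {} // Fatal error: Cannot use 'object' as class name as it is reserved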

7. Conclusion

We are ready for PHP 7.2 in Fedora, and as usual, we did this the Fedora way: upstream first.

I also consider that having most extensions and libraries ready is an important criterion for the adoption of the new version by users.

Link
Jordi Boggiano: PHP Versions Stats - 2017.2 Edition (13.11.2017, 08:34 UTC)

It's stats o'clock! See 2014, 2015, 2016.1, 2016.2 and 2017.1 for previous similar posts.

A quick note on methodology, because all these stats are imperfect as they just sample some subset of the PHP user base. I look in the packagist.org logs of the last month for Composer installs done by someone. Composer sends the PHP version it is running with in its User-Agent header, so I can use that to see which PHP versions people are using Composer with.

PHP usage statistics November 2017 (+/- diff from May 2017)

All versions            Grouped
PHP 7.1.10  11.63%      PHP 7.1  36.63% (+18.99)
PHP 7.0.22   7.95%      PHP 7.0  30.76% (-5.36)
PHP 5.6.31   7.38%      PHP 5.6  23.28% (-8.16)
PHP 5.6.30   7.23%      PHP 5.5   6.11% (-4.5)
PHP 7.0.24   5.45%      PHP 5.4   1.51% (-1.6)
PHP 7.1.11   4.55%      PHP 5.3   0.76% (-0.22)

A few observations: I find it quite incredible that PHP 7.1 is now the most used version, even though Ubuntu LTS does not yet ship with it. I don't know if it means people use Docker or alternative PPAs but regardless it is good news! For the first time since I started these blog posts, the version usage actually matches the order in which they were released, with the older ones having less and less usage. That's also great news. We have a total of 90% of installs done on PHP versions that are still maintained, which is awesome. If you are still on 5.6 or 7.0 though you only have one year of security fixes left so consider upgrading to 7.2 which should come out in the next week or two.

Here is the aggregate chart covering all my blog posts and the last four years.

PHP requirements in Packages

The second dataset is which versions are required by the PHP packages present on packagist. I only check the require statement in their current master version to see what the latest requirement is, and the dataset only includes packages that had commits in the last year to exclude all EOL'd projects as they don't update their requirements.

PHP Requirements - Recent Master - November 2017 (+/- diff from Recent Master May 2017)

5.2   1.28% (-0.24)
5.3  18.75% (-4.4)
5.4  20.29% (-4.12)
5.5  19.07% (-4.63)
5.6  20.4%  (+3.59)
7.0  14.85% (+6.12)
7.1   5.32% (+3.65)
7.2   0.03% (+0.03)

This moves at a decent pace with EOL'd versions slowly being abandoned. I still think it could go faster though ;) Please consider bumping to PHP 7.0 at the very least when you update your libraries.

Link
Stefan Koopmanschap: One year without -m (10.11.2017, 13:10 UTC)

One year ago I blogged about starting a new practice: Not using -m when committing something to Git. -m allows you to directly insert the commit message, which makes the whole process of committing faster, but not necessarily better.

Committing to Git

When you commit your work to Git, you not only make sure the code is in your version control, but you also have an opportunity to document that exact moment in the history of your software. When using the -m option, you're very likely to write a very short message. You're not really encouraged to actually document the current state of your code, because writing longer or even multi-line messages is harder in a console.
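To illustrate the difference (a hypothetical session):

# with -m: fast, but it invites a terse one-liner
git commit -m "fix login"

# without -m: Git opens your editor, with room for a subject line and a body
git commit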

Not using -m anymore

So, about a year ago I stopped using the -m parameter when committing changes to Git. Has it really changed anything?

Yes and no.

Yes, it has changed something in that I now take more time to write the commit message and sometimes take the time to document what is in the change and why the change was made.

No, because all too often I'm still tempted to write a pretty short commit message.

It is still something I need to focus on more: taking the time to write useful commit messages, messages that allow you to reconstruct a timeline of your development from the list of commits. But commit messages have most certainly improved since I made this little change in my process.

Link
Christian Weiske: Post comments to Known with Micropub (10.11.2017, 06:34 UTC)

Last year I wanted to back up a friend's Instagram account, and chose a local Known instance as the target.

Posting "normal" text and photo posts into a blog is standardized now: Simply use Micropub with a client of your choice. I wrote shpub in that time, which is a micropub client for the shell, neatly packaged up into a single .phar file.

With Known and shpub in place, I wrote a script that regularly checked the Instagram account page, extracted the text, images and geo coordinates of new posts, and pushed them into Known.

One thing was missing: Social reactions - comments and likes.

So I sat down and extended Known's Micropub plugin to support receiving likes, comments and RSVPs. By the time the patch got merged, my Instagram backup project had sunk into a deep sleep, and I never got around to making the script import the reactions as well.

Some weeks ago I wanted to write a blog post about the comments-via-Micropub functionality and saw that it... did not work at all. My patch had a serious flaw (a ! in the wrong place, a debug leftover) and nobody had noticed it :/ The problem is now patched in latest Git and will land in the version that follows Known 0.99.

Usage

Instead of sending an h=entry Micropub post type, an h=annotation with a couple of extra parameters has to be sent:

Post a comment

$ curl -X POST\
 -H 'Content-Type: application/x-www-form-urlencoded'\
 -H 'Authorization: Bearer deadbeefcafe'\
 -d 'h=annotation'\
 -d 'url=http://example.org/some-blog-post'\
 -d 'type=reply'\
 -d 'username=barryf'\
 -d 'userurl=http://example.org/~barryf'\
 -d 'userphoto=http://example.org/~barryf/avatar.jpg'\
 -d "content=There is a typo in paragraph 1 - 'Fou' should be 'Foo'"\
 'http://example.org/micropub/endpoint'

Alternatively you can use shpub:

$ ./bin/shpub.php x annotation\
 -x url=http://example.org/some-blog-post\
 -x type=reply\
 -x username=barryf\
 -x userurl=http://example.org/~barryf\
 -x userphoto=http://example.org/~barryf/avatar.jpg\
 -x content="There is a typo in paragraph 1. 'Fou' should be 'Foo'"

Post a like

$ curl -X POST\
 -H 'Content-Type: application/x-www-form-urlencoded'\
 -H 'Authorization: Bearer deadbeefcafe'\
 -d 'h=annotation'\
 -d 'url=http://example.org/some-blog-post'\
 -d 'type=like'\
 -d 'username=barryf'\
 -d 'userurl=http://example.org/~barryf'\
 -d 'userphoto=http://example.org/~barryf/avatar.jpg'\
 'http://example.org/micropub/endpoint'

Post an RSVP

$ curl -X POST\
 -H 'Content-Type: application/x-www-form-urlencoded'\
 -H 'Authorization: Bearer deadbeefcafe'\
 -d 'h=annotation'\
 -d 'url=http://example.org/some-blog-post'\
 -d 'type=rsvp'\
 -d 'username=barryf'\
 -d 'userurl=http://example.org/~barryf'\
 -d 'userphoto=http://example.org/~barryf/avatar.jpg'\
 -d 'content=yes'\
 'http://example.org/micropub/endpoint'
Link