Brandon SavageIntroducing a new error handler for PHP projects (3.3.2015, 19:56 UTC)

Dealing with errors is one of the most frustrating and challenging parts of developing an application. Nobody likes to think about their application in a failure state, and there’s nothing more deflating than writing a bunch of great code, and getting a message that you forgot a semicolon on line 4. After trying a few […]

The post Introducing a new error handler for PHP projects appeared first on

Pádraic BradySecurely Distributing PHARs: Pitfalls and Solutions (3.3.2015, 16:45 UTC)
Maximum Security (comics)

The PHAR ecosystem has become a separate distribution mechanism for PHP code, distinct from what we usually consider PHP packages via PEAR and Composer. However, PHARs still suffer from all of the same problems, namely the persistent whiff of security weaknesses in how their distribution is designed.

What exactly can go wrong when distributing any sort of PHAR?

  • Downloading PHARs from an HTTP URL not protected by TLS.
  • Downloading PHARs from an HTTPS URL with TLS verification disabled.
  • Downloading PHARs which are unsigned by the authors.
  • Downloading any PHAR “installer” unnecessarily.

All of the above introduce an element of risk that the code you receive is not actually the code the author intended to distribute, i.e. it may decide to go do some crazy things that spell bad news when executed. A hacker could mount a Man-In-The-Middle attack on your connection to the PHAR server, or compromise the PHAR server and replace the file, or employ some DNS spoofing trickery to redirect download requests to their server.

I’ve started to distribute a CLI app phar of my own recently for Humbug, so I had to go and solve these problems and make installing, and updating, that phar both simple and secure. Here’s an outline of the solution I’ve arrived at; most of it is fairly self-evident.

  • Distribute the PHAR over HTTPS
  • Enforce TLS Verification
  • Sign your PHAR with a private key
  • Avoid PHAR Installer scripts
  • Manage Self Updates Securely
  • Do all of this consistently

Some details and a discussion on each point…

Distribute the PHAR over HTTPS

If you really don’t already have a TLS-enabled download location, you can avail yourself of a hosting service that supports HTTPS URLs. I’m using this for Humbug’s development builds. You can also use Github Releases for your project and attach the phars there for new versions. If you do need to host the PHAR on your own server, get a TLS certificate for your domain.

Enforce TLS verification

PHP supports TLS verification out of the box, for the most part. It was disabled by default until PHP 5.6. Enforce it! If a user cannot make a simple request to a simple HTTPS URL, then their server is quite obviously misconfigured. That is not your problem, so don’t make it your problem. You use HTTPS, you enforce TLS, and other programmers should be more than capable of fixing their own stuff. Insecure, broken systems are not the lowest common denominator you should be targeting.

Enabling TLS verification for PHP’s stream functions, e.g. file_get_contents(), is basically a disaster waiting to happen because its configuration can be fairly long-winded to get just right. As something of a shim, I’ve created the humbug_file_contents package, which has a ready-to-roll TLS-loving function that can replace file_get_contents() transparently, but only when it detects a PHP version less than 5.6.
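As a rough illustration (this is not the humbug_file_contents implementation, and the CA bundle path and host name are assumptions), the kind of stream-context configuration needed to enforce verification on pre-5.6 PHP might look like this:

```php
<?php
// Sketch of enforcing TLS verification for file_get_contents() on
// pre-5.6 PHP, where peer verification was off by default.
// The CA bundle path and host name below are assumptions; adjust
// them for your system.
$context = stream_context_create([
    'ssl' => [
        'verify_peer'       => true,
        'allow_self_signed' => false,
        'verify_depth'      => 5,
        'CN_match'          => 'example.com', // pre-5.6 hostname check
        'cafile'            => '/etc/ssl/certs/ca-certificates.crt',
    ],
]);

// With the context in place, the download fails rather than silently
// accepting an unverified certificate:
// $phar = file_get_contents('https://example.com/tool.phar', false, $context);
```

The point is simply that none of these options are on by default before 5.6, which is exactly why a shim like humbug_file_contents is useful.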

PHP 5.6 introduced significant TLS improvements which were enabled by default. In certain areas, it actually exceeds what might be expected from other options, and it’s certainly better than any combination of pre-5.6 options can currently achieve.
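To sketch the “sign your PHAR with a private key” point from the checklist above: the author signs the PHAR, publishes the public key, and the installer verifies before executing. Here is a hedged sketch using an OpenSSL detached signature; the function name and file layout are mine for illustration, not Humbug’s actual code (PHP’s Phar class can also sign natively via Phar::setSignatureAlgorithm(Phar::OPENSSL, $privateKey)):

```php
<?php
// Hedged sketch of detached-signature verification for a downloaded PHAR.
// Function name and file layout are assumptions for this example.
function verify_phar_signature($pharPath, $signaturePath, $publicKeyPem)
{
    $data      = file_get_contents($pharPath);
    $signature = file_get_contents($signaturePath);

    // openssl_verify() returns 1 for a valid signature, 0 for invalid,
    // and -1 on error; only accept an explicit 1.
    return openssl_verify($data, $signature, $publicKeyPem, OPENSSL_ALGO_SHA256) === 1;
}
```

The crucial design point is that verification must happen before the PHAR is ever executed or included; a signature checked after the fact protects nothing.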

Truncated by Planet PHP, read more at the original (another 6868 bytes)

Brandon SavageThe conference talk creation process (3.3.2015, 13:04 UTC)

There’s been a lot made in the last 24 hours about the process of submitting and accepting conference talks, including whether or not such talks should be written beforehand. There are many valid points of view on the issue, and here are a few of my thoughts. When it comes to creating conference talks, I […]

The post The conference talk creation process appeared first on

Derick RethansXdebug 2.3: Enhanced xdebug_debug_zval() (3.3.2015, 09:44 UTC)

Xdebug 2.3: Enhanced xdebug_debug_zval()

This is the second article in a series about the new features in Xdebug 2.3, which was first released on February 22nd.

xdebug_debug_zval() has been around for quite some time, to provide correct information about how PHP internally stores a variable. Unlike PHP's built-in debug_zval_dump() function, it does not modify the variable information that it tries to show, because instead of passing in a variable, you pass in its name. Passing a variable into a function can modify the various parameters that are associated with that variable, such as the is_ref and refcount fields.

xdebug_debug_zval() does not suffer from these inadvertent modifications, as you pass in the variable's name, and the function looks up the information about a variable in the symbol tables itself.

The difference becomes clear with the following two examples. With debug_zval_dump():

$a = array(1, 2, 3);
$b =& $a;
$c =& $a[2];

debug_zval_dump($a);

Which outputs (after a little formatting):

array(3) refcount(1){
        [0]=> long(1) refcount(2)
        [1]=> long(2) refcount(2)
        [2]=> &long(3) refcount(3)
}

And with xdebug_debug_zval():

$a = array(1, 2, 3);
$b =& $a;
$c =& $a[2];

xdebug_debug_zval('a');

Which outputs (after a little formatting):

a: (refcount=2, is_ref=1)=array (
        0 => (refcount=1, is_ref=0)=1,
        1 => (refcount=1, is_ref=0)=2,
        2 => (refcount=2, is_ref=1)=3
)

In the debug_zval_dump() example, the refcounts for the array elements are all one too high, and the refcount for the array itself is one too low. The array is also not marked as a reference.

However, before Xdebug 2.3, the xdebug_debug_zval() function would only accept a plain variable name, not any array subscripts or property dereferences, meaning that you couldn't really dump a sub-array. Xdebug 2.3 adds support for dereferencing properties and array elements by reusing the variable name parser of the remote debugger. Hence, you can now do the following:

$a = array(1, 2, 3);
$b =& $a;
$c =& $a[2];

xdebug_debug_zval('a[2]');

Which outputs:

a[2]: (refcount=2, is_ref=1)=3


$a = new StdClass;
$a->prop = [3.14, 2.72];

xdebug_debug_zval('a->prop', 'a->prop[1]');

Which outputs:

a->prop: (refcount=1, is_ref=0)=array (
        0 => (refcount=1, is_ref=0)=3.14,
        1 => (refcount=1, is_ref=0)=2.72
)
a->prop[1]: (refcount=1, is_ref=0)=2.72

Other parts in this series:

Cal EvansInterview with Morgan Tocker (3.3.2015, 06:00 UTC)

Morgan Tocker @morgo

Show Notes

The post Interview with Morgan Tocker appeared first on Voices of the ElePHPant.

Anna FilinaShould conference talks be written in advance? (3.3.2015, 02:54 UTC)

I had a discussion on Twitter today regarding a conference’s selection process. It was suggested that speakers should be forced to write their entire talk before submitting it to a conference, perhaps even presented at a local event. This might work for speakers who give the same talks over and over again, but that’s not the majority.

I organize a conference where I select nearly 150 talks each year. I also speak around the world (South Africa, Spain, Germany, USA, etc.). That gives me a unique perspective on the relationship between speakers and organizers and how they can better collaborate.

How hard is it to write a talk?

It’s extremely time-consuming. I personally spend between 20 and 60 hours to prepare a conference talk, depending on how much code and research is required. It used to take me more when I was still new to the speaking scene.

I submit new abstracts all the time: different abstracts depending on the type of conference. I know these subjects very well, but I can’t possibly write 10 or more talks each year if only a handful of them will be picked. First, it would take too much time. Second, those that will not be selected would be wasted and might not be relevant next year.

Is it fair to demand that speakers potentially waste so much time? After spending hundreds of hours preparing the talks, it would be even more crushing to receive a rejection e-mail from the committee.

How do organizers know if the talk will be good?

We look at both the speaker and the topic. A speaker who has a history of giving great talks will normally deliver subsequent talks of equal or superior quality. So if a speaker is good and has experience to show, we don't need the slides or the code. I have occasionally asked speakers to provide an outline when they had nothing else to show, and that worked out well.

To determine whether the talk is good, we read the title and the abstract, and check whether the speaker has experience related to the topic. We discuss the interest in the topic, and in our case, check how many people voted for it. There are other criteria that are unrelated to this question. I blogged about them earlier here and here.


We know that the job of a speaker is hard enough. Even harder for those who are still trying to get into their first conference. We don’t want to give speakers arbitrary work to counterbalance our own laziness. This is why ConFoo will continue asking only a title and an abstract for each proposal submitted.

Evert PotDropbox starts using POST, and why this is poor API design. (2.3.2015, 21:12 UTC)

Today Dropbox announced in a blogpost titled "Limitations of the GET method in HTTP" that it will start allowing POST requests for APIs that would otherwise only be accessible using GET.

It's an interesting post, and addresses a common limitation people run into when developing RESTful webservices. How do you deal with complex queries? Using URL parameters is cumbersome for a number of reasons:

  1. There's a limitation to the amount of data you can send. Somewhere between 2KB and 8KB apparently.
  2. URL parameters don't allow nearly enough flexibility in terms of how you can define your query. The percent-encoded string doesn't really have a universal way to define in what character set its bytes are,
  3. The URL is not nearly as versatile and expressive as JSON, and let alone XML.

Their solution to this problem is to now allow POST requests on endpoints that traditionally only allowed GET.

Is this the best solution? Well, it's certainly a pragmatic one. We're clearly running into artificial limitations here that are poorly solved by existing technology.

The problem with POST

Switching to POST discards a number of very useful features though. POST is defined as a non-safe, non-idempotent method. This means that if a POST request fails, an intermediate (such as a proxy) cannot just assume they can make the same request again.

It also ensures that HTTP caches no longer work out of the box for those requests.

Using REPORT instead

The HTTP specification has an extension that defines the PATCH request. This spec is picking up some steam, and a lot of people are starting to use it to solve common problems in API design.

In the same vein, there's been another standard HTTP method for a while with the name REPORT, which specifically addresses some of the issues with POST.

The REPORT request:

  1. Can have a request body
  2. Is safe
  3. Is idempotent

It appears in the IANA HTTP Method list and is actually quite great for this use-case. The main reason it's off people's radar, is because it originally appeared in a WebDAV-related spec a long time ago.

However, its semantics are well defined and it works everywhere. I would love to see more people start picking this up and adding it to their HTTP API toolbelt.
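To illustrate what that looks like from a PHP client (the endpoint and query body below are invented for the example), a REPORT request is just an ordinary HTTP request with a non-default method and a body:

```php
<?php
// Hedged sketch: sending a REPORT request with a JSON body via PHP's
// HTTP stream wrapper. The endpoint URL and query shape are made up.
$query = [
    'fields' => ['date', 'total'],
    'filter' => ['after' => '2015-01-01'],
];

$context = stream_context_create([
    'http' => [
        'method'  => 'REPORT',
        'header'  => "Content-Type: application/json\r\n",
        'content' => json_encode($query),
    ],
]);

// Safe and idempotent by definition, so intermediaries may retry it:
// $report = file_get_contents('https://api.example.com/sales', false, $context);
```

Because REPORT is safe and idempotent, a client or proxy that sees the request fail mid-flight is allowed to simply retry it, which POST forbids.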

Using GET with a request body

Whenever this topic comes up on Hacker News, there's almost guaranteed to be a comment about using GET with a request body.

I wondered about this myself (6 years ago now apparently!) and it's my top question on stackoverflow. Clearly a lot of people have the same thinking process and wonder about this.

Using a request body with GET is bad. It might be allowed, but it's specifically defined as meaningless. This means that any HTTP server, client or proxy is free to discard it without altering the semantic meaning of the request, and I guarantee that some of them will.

Furthermore, the benefits of using GET are then completely gone. Caching is not based on request bodies, and these requests are not addressable with a URI.

Literally the only reason why anyone would do this is because GET looks nicer, it's an aesthetic decision, and nothing more.

Why real GET requests are great: addressability

Whether you use POST or the superior REPORT request, you still miss the biggest advantage of using GET requests.

A GET query is always a URI. Anyone can link from it. Parts of your service can link to specific results. Even external services can integrate with you by referring to specific reports.

A POST query cannot be linked, and neither can a REPORT query. All we can do is explain that a certain URI accepts certain HTTP methods with certain media types, but this is not nearly as elegant as a simple URI. Linking rocks.

An alternative approach

One way to solve this issue entirely, and fix all the problems related to it, is to disconnect the query you are doing from its result.

To do this, you could create a /queri

Truncated by Planet PHP, read more at the original (another 1312 bytes)

Nomad PHPMay 2015 – US (2.3.2015, 19:32 UTC)

An In-depth Look at Slim Framework 3.0

Presented By
Josh Lockhart
May 21, 2015 20:00 CDT

The post May 2015 – US appeared first on Nomad PHP.

Paul M. JonesBookdown: DocBook-Like HTML Output From Markdown (2.3.2015, 17:04 UTC)


Bookdown generates DocBook-like HTML output using Markdown and JSON files instead of XML.

Bookdown is especially well-suited for publishing project documentation to GitHub Pages.

Features include:

  • Automatic table-of-contents generation as index pages at each hierarchy level

  • Custom index-page titles via JSON configuration

  • Automatic numbering of page headings

  • Automatic previous/next/up navigation headers and footers

  • Multi-page hierarchical output

  • Loading of remote content files via HTTP

  • Templated output for theming

  • Overridable page processing, especially for rendering

Bookdown can be used as a static site generator, or as a way to publish static pages as a subdirectory in an existing site.

Yes, I know, there’s a ton of static site generators for PHP out there already. Sculpin seems to be the big one (hi Beau!) but it’s not specifically for documentation. Then there’s Couscous (hi Matthieu!), which is for documentation, but it’s not DocBook-like documentation.

By “DocBook-like”, I mean (among other things) numbered headers, auto-generated tables-of-contents on their own pages, hierarchical multi-page presentation, and the next/previous/up linking at the top and bottom of pages. Look at the Solar documentation sites for a better idea; the content on those pages was generated with DocBook.

And frankly, look at the dependency listings on those two projects (Sculpin, Couscous). They’re rather extensive. Is that a bad thing? No, but it’s not my speed. I think we all know at this point that I’m about reducing dependencies as much as possible, and those are just too much for me.

Also: I can’t stand YAML. I don’t like YAML embedded in pages, and I don’t like YAML config files. I much prefer JSON, and I don’t want to add YAML frontmatter on Markdown pages.

So: Bookdown. This scratches my particular itch, with very few dependencies.

Bookdown, although it can be used as a site generator, is only incidentally a site generator. What it really is is a page generator, with the idea that you can integrate the pages into any other site you want.

Additionally, Bookdown allows you to pull content from remote locations. This is especially interesting to me because of the decoupled nature of Aura libraries. I would like very much to keep the manual documentation on each library in the same repo as that library, then publish each alone, and as part of a collection, without having to copy files around. Bookdown remote content should allow for that.

I’m happy with the architecture as well. It took two weekends of experimenting, and then almost exactly a week of dedicated development, to build Bookdown.

The library is fully separated from the project. That means you can either run it as a project on its own, or integrate the core library into your own project and glue its services and commands into your own work.

Everything uses dependency injection through an application-specific container which helps to keep the concerns well-separated. Everything uses factories and builders, which helps to enable the dependency injection.

All the underlying processes are decoupled from each other, which should make it easy to replace them with custom processes. For example, the ConversionProcess currently uses CommonMark, but I find it easy to imagine end-users replacing that with Textile, ReStructuredText, or even a combination of conversions that examines the filename extension.

Finally, the code style is a little bit of a departure for me as well. I have previously used $snake_case variables, b

Truncated by Planet PHP, read more at the original (another 511 bytes)

SitePoint PHPBuilding APIs You Won’t Hate: Review (2.3.2015, 17:00 UTC)

This is a review of Phil Sturgeon’s book Build APIs You Won’t Hate.

Build APIs You Won’t Hate

A bit of an edgy title, isn’t it? It makes sense, though. The potential of a developer hating anything he built given enough time to work on it is enormous. It’s an inverse parabola of sorts - your enthusiasm will grow for a given amount of time, and then proportionally drop until you sink below the starting point of pleasure. If you push through this depression, learn new techniques, and then apply them to your work you get a kind of sine wave in which your enthusiasm again rises until it starts dropping, and so on and so forth.

Continue reading %Building APIs You Won’t Hate: Review%
