SitePoint PHP: Testing APIs with RAML (23.2.2015, 17:00 UTC)

In a recent article I looked at RESTful API Modeling Language (RAML). I provided an overview of what RAML is all about, how to write it and some of its uses.

This time, I’m going to look at some of the ways in which you can use RAML for testing. We’ll start by using RAML to validate responses from an API. Then we’ll look at an approach you could take to mock an API server, using a RAML file to create mock HTTP responses.

Validating API Responses

First, let’s define a simple RAML file for a fictional API. I’ve left out some routes, but it will be enough to demonstrate the principles.

#%RAML 0.8
title: Albums
version: v1
baseUri: http://localhost:8000
traits:
  - secured:
      description: Some requests require authentication
      queryParameters:
        access_token:
          displayName: Access Token
          description: An access token is required for secure routes
          required: true
  - unsecured:
      description: This is not secured
/account:
  displayName: Account
  get:
    description: Get the currently authenticated user's account details.
    is: [secured]
    responses:
      200:
        body:
          application/json:
            schema: |
              { "$schema": "",
                "type": "object",
                "description": "A user",
                "properties": {
                  "id": {
                    "description": "Unique numeric ID for this user",
                    "type": "integer"
                  },
                  "username": {
                    "description": "The user's username",
                    "type": "string"
                  },
                  "email": {
                    "description": "The user's e-mail address",
                    "type": "string",
                    "format": "email"
                  },
                  "twitter": {
                    "description": "User's Twitter screen name (without the leading @)",
                    "type": "string",
                    "maxLength": 15
                  }
                },
                "required": [ "id", "username" ]
              }

Truncated by Planet PHP, read more at the original (another 5627 bytes)

Liip: REST API for a content repository (23.2.2015, 14:50 UTC)

When we launched the Symfony CMF initiative back in 2010, one of the first decisions made was to adopt JCR as the basis for our work, as we felt that one of the biggest shortcomings of CMSes at the time was the hard coupling of the storage and business layers. However, JCR only defines language-level interfaces and APIs; it doesn't define a remoting protocol, let alone a REST API. Thankfully the reference implementation of JCR, called Jackrabbit, did provide a WebDAV-inspired HTTP API with some JSON mixed in. We submitted several patches to improve its performance and reduce round trips. We also actively participated in the definition of the JCR 2.1 version of the spec to make it more useful in a client-server scenario. On top of that we have invested a lot of time in creating Jackalope, a reference implementation of PHPCR, a port of the JCR spec to PHP. In fact, the Symfony CMF runs on Jackrabbit via Jackalope.

Now, however, Jackrabbit Oak has been released, which is a from-scratch rewrite. Jackrabbit is an Apache project, but a lot of the development is done by Adobe. This is especially true for its successor, Jackrabbit Oak, which essentially aims to be the "git of content repositories", both in terms of market share and in terms of internal architecture. Along with that there are now plans to also provide a new, cleaner remoting API. Adobe invited us to their offices in Basel earlier this month to discuss that API and ensure it fits our needs with PHPCR. Of course, nowadays we also do projects with AEM, Adobe's Jackrabbit-based CMS. All the more reason to take Adobe up on its offer.

REST is about resources

Our "delegation" consisted of David and myself along with Alfu, who David and Angela mentored during his thesis adding ACL support to the old Jackrabbit HTTP API. We met up with Adobe developers Angela, one of the long time lead developers of Jackrabbit, and Francesco, who is leading the initative for the new remoting API. The first point of discussion was what is the granularity that we want to expose as resources. The initial idea was to expose nodes (ie. individual documents with their properties) as resources. But when doing the initial design work Francesco realized that in many use cases, remote users will want to modify multiple nodes at once. If one would expose nodes as individual resources it would then become necessary to provide some kind of session/transaction mechanism to submit those changes and have them be applied in an atomic fashion. But such mechanism complicate the client side use, more importantly they hurt in horizontal scalability on the server side.

As such, he proposes not to expose individual nodes as resources, but instead to expose the repository as a whole as the smallest granular unit. This also fits well because, in our experience, not only do most writes affect multiple nodes; in most cases we also wanted to read multiple nodes. For example, one of the first things we added to the old Jackrabbit remote API was the ability to fetch multiple nodes at once. Another feature we heavily used was the ability to automatically fetch children of nodes up to a given depth.

Protocol format

Basically, a write would then be a POST consisting of a series of operations. A delete of a node would, in this logic, also be sent as part of such a POST. As repositories (though this might evolve to actually mean workspaces) are the smallest granular unit, a DELETE would then be used only to delete an entire repository. One of the concerns here is how to make the format compact yet readable. Jackrabbit previously made use of JSOP, but Jackrabbit Oak will likely use a JSON Patch inspired format. That being said, the implementation inside Oak will keep the serialization logic separate, so it should be possible to implement different protocols.
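As a purely hypothetical illustration (the wire format was still under discussion at the time, so paths and operation names here are invented), a batched write in a JSON Patch inspired style, POSTed to the repository and applied atomically, might look something like:

```json
[
  { "op": "add",     "path": "/content/articles/hello",       "value": { "title": "Hello" } },
  { "op": "replace", "path": "/content/articles/intro/title", "value": "Updated title" },
  { "op": "remove",  "path": "/content/articles/outdated" }
]
```

The point is that creates, updates and deletes of many nodes travel together in one request, so the server can apply them as a single atomic change without any session state.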

Reducing round trips

As stated above the decision to make repositories the smallest granular unit has a lot to do with reducing network round trips. In a way if

Truncated by Planet PHP, read more at the original (another 2239 bytes)

SitePoint PHP: Introduction to Silex – A Symfony Micro-framework (20.2.2015, 17:00 UTC)

Silex is a PHP micro-framework based on Symfony components and inspired by the Sinatra Ruby framework. In this article, we are going to get started with the framework and see how it fits our needs.



The best and recommended way to install Silex is through composer:

// composer.json
{
    "require": {
        "silex/silex": "1.3.*@dev",
        "twig/twig": "1.17.*@dev"
    },
    "require-dev": {
        "symfony/var-dumper": "dev-master"
    }
}

Run composer update --dev to load the dependencies and generate the autoloader. We also required twig because we want to use it as our template engine, and the new var-dumper from Symfony as a development dependency - read more about it here.

Creating a Folder Structure

One of the things I like about Silex is that it gives you a bare bones framework that you can organize in any way you want.
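As a quick taste before we get into structure, here is the canonical Silex "hello world" front controller (a minimal sketch; the file path and route are illustrative, not from the article):

```php
<?php
// web/index.php — a minimal Silex application (illustrative sketch).
require_once __DIR__ . '/../vendor/autoload.php';

$app = new Silex\Application();

// Map GET /hello/{name} to a closure; Silex injects the URL parameter.
$app->get('/hello/{name}', function ($name) use ($app) {
    return 'Hello ' . $app->escape($name);
});

// Dispatch the current request and send the response.
$app->run();
```

Everything else — controllers, services, templates — is layered on top of this single `Application` object, which is why the folder layout is entirely up to you.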

Continue reading Introduction to Silex – A Symfony Micro-framework

Evert Pot: HTTP/2 finalized - a quick overview (19.2.2015, 22:49 UTC)

The HTTP/2 specification has been finalized and will soon be released as an RFC. Every major browser will get support for it soon.

This is a major new release of the specification, and builds upon earlier work such as Google's SPDY protocol (which is now being deprecated). It's the first major release of the protocol since 1999, which is 16(!) years ago.

While e-mail may still be the most popular protocol on the internet, HTTP is certainly the most visible and the most relevant to many developers' day-to-day work.

Even the BBC is talking about it!

As many of you develop HTTP-based applications, here are a few things you should know:

HTTP/2 is a new way to transmit HTTP/1.1 messages

HTTP/2 does not make any major changes to how HTTP works. The biggest difference is in how the information is submitted.

HTTP/1.1 (and 1.0, 0.9) sent everything in plain text; HTTP/2 will use a binary encoding.

HTTP/2 still has requests, responses, headers, status codes and the same HTTP methods.

Subtle differences

  1. HTTP/2 encodes HTTP headers as lowercase. HTTP/1.1 headers were already case-insensitive, but not everybody adhered to that rule.
  2. HTTP/2 does away with the 'reason phrase'. In HTTP/1.1 it was possible for servers to submit a human-readable reason along with the status code, such as HTTP/1.1 404 Can't find it anywhere! or HTTP/1.1 200 I like pebbles. In HTTP/2 this is removed.
  3. A new status code! (HTTP geeks love status codes.) 421 Misdirected Request allows the server to tell a client that it received an HTTP request it is not able to produce a response for. For instance, a client may have picked the wrong HTTP/2 server to talk to.

Upgrades to HTTP/2 can be invisible and transparent

Most browsers will start supporting HTTP/2 extremely soon. When a browser makes a normal HTTP/1.1 request in the future, they will include some information that tells the server they support HTTP/2 using the Upgrade HTTP header. For HTTPS, this is done using a different mechanism.

If the server supports HTTP/2 as well, the switch will happen instantly and this will be invisible to the user.
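Concretely, the plaintext upgrade exchange defined by the HTTP/2 specification looks roughly like this (the settings payload is abbreviated here):

```http
GET / HTTP/1.1
Host: example.org
Connection: Upgrade, HTTP2-Settings
Upgrade: h2c
HTTP2-Settings: <base64url-encoded SETTINGS payload>

HTTP/1.1 101 Switching Protocols
Connection: Upgrade
Upgrade: h2c

[ ...connection continues using the HTTP/2 binary framing... ]
```

A server that doesn't understand HTTP/2 simply ignores the Upgrade header and answers over HTTP/1.1, which is what makes the switch transparent.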

Everyone will still use http:// and https:// urls.

If an HTTP client already knows the server will support HTTP/2, it can start speaking HTTP/2 right from the get-go.

Many server-side developers won't have to think much about this. If you are a PHP developer, you can just upgrade to an HTTP server that supports HTTP/2, and the rest will be transparent.

HTTP/2 is probably faster

There are a few major features that improve speed when switching to HTTP/2:

A lot of bytes in HTTP/1.x are wasted on headers being sent back and forth with every message. In HTTP/2, headers can be compressed using the new HPACK compression algorithm.

A big feature that came with HTTP/1.1 was 'pipelining'. This is a feature that allows an HTTP client to send multiple requests on a connection without having to wait for a response. Because of poor and broken implementations, this feature has never really been enabled in browsers. In HTTP/2 this feature comes out of the box. Only one TCP connection is needed per client, and a client can send and receive multiple requests and responses in parallel. If one of the HTTP responses is stalled, this doesn't block the rest of the HTTP responses.

So for applications this can mean:

  1. Fewer HTTP connections open
  2. Less data being sent
  3. Less round-tripping

Server push

In HTTP/2 it's possible to preemptively send a client responses to requests, before the requests were made.

This can seriously speed up application load time. Normally when an HTML application is loaded, a client has to wait to receive all the <img>, <script> and <link> tags to know what else needs to be requested.

Server push allows the server to just send out those resources before the client even requested them.

In the case of a server push, the server actually sends back both the HTTP response, and the actual request that the client would have had to send in order to receive the response. The request is sent in a PU

Truncated by Planet PHP, read more at the original (another 4743 bytes)

Ilia Alshanetsky: ConFoo - Deep Dive into Browser Performance (19.2.2015, 16:07 UTC)
My slides from ConFoo 2015 about "Deep Dive into Browser Performance" are now available for download here.
PHP: Hypertext Preprocessor: PHP 5.4.38 Released (19.2.2015, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 5.4.38. Seven security-related bugs were fixed in this release, including CVE-2015-0273 and mitigation for CVE-2015-0235. All PHP 5.4 users are encouraged to upgrade to this version. For source downloads of PHP 5.4.38 please visit our downloads page; Windows binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
PHP: Hypertext Preprocessor: PHP 5.5.22 is available (19.2.2015, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 5.5.22. This release fixes several bugs and addresses CVE-2015-0235 and CVE-2015-0273. All PHP 5.5 users are encouraged to upgrade to this version. For source downloads of PHP 5.5.22 please visit our downloads page; Windows binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
PHP: Hypertext Preprocessor: PHP 5.6.6 is available (19.2.2015, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 5.6.6. This release fixes several bugs and addresses CVE-2015-0235 and CVE-2015-0273. All PHP 5.6 users are encouraged to upgrade to this version. For source downloads of PHP 5.6.6 please visit our downloads page; Windows binaries can be found on windows.php.net/download/. The list of changes is recorded in the ChangeLog.
Ilia Alshanetsky: ConFoo - Profiling with XHProf (18.2.2015, 19:54 UTC)
My slides from ConFoo 2015 on "Profiling with XHProf" are now available for download here.
SitePoint PHP: API Client TDD with Mocked Responses (18.2.2015, 17:00 UTC)

In parts one and two, we built some very basic functionality and used TDD with PHPUnit to make sure our classes are well tested. We also learned how to test an abstract class in order to make sure its concrete methods worked. Now, let’s continue building our library.

Catch up

I took the liberty of implementing the functionality and the test for the abstract API class’ constructor, requiring the URL to be passed in. It’s very similar to what we did with the Diffbot and DiffbotTest classes.

I also added some more simple methods, and mixed in testing of the different API instantiations and of custom fields for the APIs with dynamic setters and getters using __call. This seemed like work too menial to bother you with, as it's highly repetitive and ultimately futile at this point; but if you're curious, please leave a comment below and we'll go through the part2-end > part3-start differences in another post. You can even diff the various files and ask about specific differences in the forums - I'd be happy to answer them to the best of my knowledge, and also to take some advice regarding their design. Additionally, I have moved the "runInSeparateProcess" directive from the entire DiffbotTest class to just the test that needs an empty static class, which reduced the duration of the entire testing phase to mere seconds.

If you’re just now joining us, please download the part 3 start branch and catch up.

Data Mocking

We mentioned before that we would be data mocking in this part. This might sound more confusing than it is, so allow me to clarify. When we request a URL through Diffbot, we expect a certain result. For example, when requesting a specific Amazon product, we expect to get the parsed values for that product. However, if we rely on this live data in our tests, we face two problems:

  1. The tests become slower by X, where X is the time required to fetch the data from Amazon.
  2. The data can change and break our tests: values our tests relied upon before may suddenly differ in the returned response.

Because of this, it’s best if we cache the entire response to a given API call offline - headers and all - and use it to fake a response to Guzzle (functionality Guzzle has built in). This way, we can feed Diffbot a fake every time during tests and make sure it gets the same data, thereby giving us consistent results. Matthew Setter wrote about data mocking with Guzzle and PHPUnit before here, if you’d like to take a look.
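To show the general shape of this technique (a sketch, not the series' actual code; it uses Guzzle 6's MockHandler, and the series may target a different Guzzle version), a canned response can be queued on the client so no real HTTP request is ever made:

```php
<?php
// Sketch of Guzzle's built-in response mocking (Guzzle 6 style).
// The canned body would normally be loaded from a cached fixture file,
// headers and all; it is inlined here for brevity.
require 'vendor/autoload.php';

use GuzzleHttp\Client;
use GuzzleHttp\Handler\MockHandler;
use GuzzleHttp\HandlerStack;
use GuzzleHttp\Psr7\Response;

// Queue one fake response; each request pops the next one off the queue.
$mock = new MockHandler([
    new Response(200, ['Content-Type' => 'application/json'], '{"title": "Some product"}'),
]);

$client   = new Client(['handler' => HandlerStack::create($mock)]);
$response = $client->request('GET', 'http://api.diffbot.com/v3/product'); // never actually hit

echo $response->getStatusCode(), "\n";                 // 200
echo json_decode((string) $response->getBody())->title; // Some product
```

Feeding Diffbot a client wired up like this gives every test run identical data, which is exactly the consistency we're after.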

To get to the testing level we need, we’ll be faking the data that Diffbot returns. Doesn’t this mean that we aren’t effectively testing Diffbot itself but only our ability to parse the data? Exactly, it does. It’s not on us to test Diffbot - Diffbot’s crew does that. What we’re testing here is the ability to initiate API calls and parse the data they return - that’s all.

Continue reading API Client TDD with Mocked Responses
