SitePoint PHP: Popular Photos, Filters and User Profiles with the 500px API (4.3.2015, 17:00 UTC)

500px is a photo community for discovering, sharing, buying and selling inspiring photography. In this article we are going to explore their API and build a small showcase app. Let’s get started.


What We’re Building

In this article, we are going to build a small Laravel app with three main features. You can check the final result on GitHub.

  • The index page will contain the latest popular photos on the 500px website.
  • A filter will let the user choose a category (popular, upcoming, etc.) and sort the results by votes, rating, and more.
  • A photographer’s profile page will list their available photos.

Setting up

Before using their API, we need to get some access credentials. After signing in to the website you can go to the application settings and register for a new test application. The consumer key and the consumer secret will be used for accessing the API.

We are going to use Guzzle to make HTTP requests, and the Guzzle OAuth subscriber for OAuth authentication. If you are not familiar with these two, you can check out this Guzzle introduction and read about using Guzzle OAuth.

Let’s first add the Guzzle packages to our composer.json and autoload our src folder where we put our authentication class. Run composer update to update our app.

// composer.json
...
"require": {
    ...
    "guzzlehttp/guzzle": "~5.0",
    "guzzlehttp/oauth-subscriber": "0.2.*"
},
"autoload": {
    "classmap": [
        ...
        "app/src"
    ]
}
...

Inside our src/PxOAuth.php file we are going to provide our consumer_key and consumer_secret to build our OAuth object.
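As a rough sketch of what that class might look like (the class name comes from the article; the constructor wiring follows the Guzzle 5 OAuth subscriber API, but the get() helper and the endpoint details are my own illustrative assumptions, not the article's code):

<?php
// src/PxOAuth.php -- a minimal sketch, not the article's final class.

use GuzzleHttp\Client;
use GuzzleHttp\Subscriber\Oauth\Oauth1;

class PxOAuth
{
    private $client;

    public function __construct($consumer_key, $consumer_secret)
    {
        // Sign every outgoing request with our application credentials.
        $oauth = new Oauth1([
            'consumer_key'    => $consumer_key,
            'consumer_secret' => $consumer_secret,
        ]);

        $this->client = new Client([
            'base_url' => 'https://api.500px.com/v1/',
            'defaults' => ['auth' => 'oauth'],
        ]);
        $this->client->getEmitter()->attach($oauth);
    }

    // Perform a GET request against the API and decode the JSON response.
    public function get($endpoint, array $query = [])
    {
        return $this->client->get($endpoint, ['query' => $query])->json();
    }
}

With something like this in place, fetching the index page's popular photos could be as simple as $px->get('photos', ['feature' => 'popular']).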

Continue reading "Popular Photos, Filters and User Profiles with the 500px API" on SitePoint.

Link
Anna Filina: How skiing made me a better developer (4.3.2015, 02:26 UTC)

I’m currently sitting at the T-Bar & Grill in Panorama, British Columbia, Canada. I had a great week of skiing, with another week to come. While ripping down the slopes at ear-popping speeds, my mind was surprisingly free to roam and explore the ideas that led me to this article. I’m going to tell you how skiing made me a better developer. You’d be amazed how much these two activities have in common. I picked up skiing 5 years ago. Since then, I’ve also made significant progress in programming and project management.

Decision-making

When you’re going at great speed on ever-changing terrain, you must develop the ability to make quick decisions. You feel a patch of ice under your feet and your skis start to wobble; you see an exposed rock or tree stump ahead; an inexperienced skier is turning unpredictably below you. You must decide your course of action in a second or risk injury. With enough practice, you can make a rational decision under extreme time pressure. I have noticed this translate into my programming and project-management abilities.

Looking far ahead

I take a lesson at least once a year to reinforce my technique. Usually twice. The first thing that my instructor told me was: “Look far ahead. You don’t need to contemplate the tips of your skis. You need to know what’s coming up and how to tackle it.” That’s what I do in programming. I don’t look at just my current code or what’s trendy today. I look far ahead, as far as my foresight and experience will allow. This is especially useful in project management, and I’ve become significantly better at it.

Overcoming obstacles

One false turn and you end up on the worst slope of your life. Just like today: I ended up on a steep one, with huge bumps, ice in between, rocks everywhere, branches sticking out and narrow passages. There was no way out of it, unless I wanted to venture into a dense conifer forest. You learn not to cry about your bad decisions, because you couldn’t have known better. You can’t undo the decision either. You follow through and overcome the obstacles. You can curse later at the bottom of the chairlift. When you’re in a bad situation, keeping yourself together and staying rational is key to getting off the mountain without hurting yourself. Look at the obstacles ahead and plan your path. This brings me to my next point.

Planning

If you value your life, plan your path. Don’t just charge and see what happens. I’ve heard of enough people dying in avalanches because they ignored the warning signs and didn’t probe before going. Although planning is not a guarantee, skipping this step can be fatal on the mountain, just as it can be fatal to a project. Too many people jump into code without any sort of planning: not a single diagram and not a single piece of documentation. I used to be much more hasty and arrogant, but now I never code without thorough planning.

Security

Being appropriately equipped and watching for warning signs is crucial to success. If you’re not dressed warmly enough, if you don’t wear a helmet, if you go out of bounds without avalanche gear, if you ignore the red and yellow signs, you’re taking risks. Skiing taught me to take more measures in programming to ensure that nothing goes wrong, to watch for the signs and to calculate risks more carefully. This is especially important in project rescue, when you’re trying to mitigate risks and ship projects on time, despite being behind schedule and facing plenty of other constraints.

Confidence

In skiing, if you don’t have confidence, you’ll stay a beginner all your life. You’ll be afraid to go fast, because you might fall and hurt yourself. To counter that, you must gear up properly, watch the experts, push your limits and practice a lot. In programming, confidence is what gives you momentum. I no longer second-guess every decision. I know I might make a few bad ones, but that doesn’t matter, because I know that I have the skill to pull myself together and get back on track. I also know that my ability to recognize a cliff from above will protect me, so I don’t worry all the time.

Rhythm & Endurance

I see some younger skiers give it their all in their first run. They don’t save energy for later. They ride the mountain like it’s their last day on Earth. Many people also ignore rhythm: that’s the thing that makes you turn as if to a beat. If you do it right, you’re effortlessly dancing down the slope. You can go on like that for a whole day and not be sore tomorrow. If you don’t have rhythm, you’ll be tired within an hour and more prone to falling, which can cause injury and put you out of commission for half a week or sometimes months. In programming, I used to push myself too hard. Now I know better and pace myself. I’ve become more productive and almost never get tired from programming anymore.

Conclusion

You can find inspiration everywhere. Having many diverse hobbies will help you grow as a person and as a professional, so I suggest that you pick up as many as you can.

Link
Evert Pot: PSR-7 is imminent, and here's my issues with it. (4.3.2015, 02:22 UTC)

PSR-7 is pretty close to completion. It's a new 'PHP standard recommendation', put out by the PHP-FIG group, of which I'm a member.

It describes how to create PHP representations of an HTTP request and an HTTP response. I think the potential impact of PSR-7 can be quite large. If the authors of large PHP frameworks are on board (and at least some of them are), these interfaces might in the future be used, directly or indirectly, by an extremely large portion of the PHP community.

PSR-7 gets a lot of things right, and is very close to nailing the abstract data model behind HTTP, better than many other implementations in many programming languages.

But it's not perfect. I've been pretty vocal about a few issues I have with the approach. Most of this has fallen on deaf ears. I accept that I might be in the minority in feeling these are problems, but I feel compelled to share my issues here anyway. Perhaps as a last attempt to solicit change, or maybe just to get it off my chest.

If anything, it will allow me to say 'I told you so' when people start using it and run into the edge cases that it doesn't cover well.

PSR-7 doesn't just represent an HTTP request and an HTTP response in PHP; it tells you how to build your HTTP application.

Immutability

More recently in the process, the decision was made to make the objects immutable. This means that after the objects have been created, they are set in stone and cannot be changed.

In practice, this means instead of this:

<?php

$response->setHeader('X-Powered-By', 'Captain Planet');

?>

We need to do:

<?php

$response = $response->withHeader('X-Powered-By', 'Captain Planet');

?>

The difference is small in this isolated example, but the impact is massive.

One obvious issue is that for every change that you want to make to request or response objects, an entirely new instance needs to be created.

This bit of code creates a total of 4 copies of the request.

<?php

$request = $request
    ->withMethod('POST')
    ->withUrl(new Url('http://example.org/'))
    ->withHeader('Content-Type', 'text/plain');

?>

The real impact in 'time spent' was proven to be quite low, so this part of the argument doesn't really bother me. Cloning objects is apparently pretty cheap in PHP.

What bothers me a bit more is that this is a pretty major departure from how we are used to using these objects. Most PHP frameworks have some type of representation of the request and response objects, and many APIs that use them. By forcing immutability, most of these APIs will have to change.

This decision has been made for the sake of robustness. It would apparently "remove a whole class of bugs". Well, I believe the confusion that comes with an unusual API will definitely open the door to a whole new class of bugs as well ;).

Silex

To give you an example of an API that is forced to change, here's an example from Silex. Silex has a set of events that allows a user to alter request and response objects:
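The example itself is cut off in this excerpt, but a minimal sketch of the kind of hook he is referring to, using Silex's real after-middleware API (the header value here is made up for illustration):

<?php
// With Silex's mutable HttpFoundation objects, an after-middleware
// can simply modify the response in place:
$app->after(function ($request, $response) {
    $response->headers->set('X-Powered-By', 'Captain Planet');
});

// With immutable PSR-7 objects this pattern breaks: every listener
// would have to *return* the new response instance, and the framework
// would have to thread that return value through its event system.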

Truncated by Planet PHP, read more at the original (another 12605 bytes)

Link
Brandon Savage: Introducing a new error handler for PHP projects (3.3.2015, 19:56 UTC)

Dealing with errors is one of the most frustrating and challenging parts of developing an application. Nobody likes to think about their application in a failure state, and there’s nothing more deflating than writing a bunch of great code, and getting a message that you forgot a semicolon on line 4. After trying a few […]

The post Introducing a new error handler for PHP projects appeared first on BrandonSavage.net.

Link
Pádraic Brady: Securely Distributing PHARs: Pitfalls and Solutions (3.3.2015, 16:45 UTC)

The PHAR ecosystem has become a separate distribution mechanism for PHP code, distinct from what we usually consider PHP packages via PEAR and Composer. However, they still suffer from all of the same problems, namely the persisting whiff of security weaknesses in how their distribution is designed.

What exactly can go wrong when distributing any sort of PHAR?

  • Downloading PHARs from a HTTP URL not protected by TLS.
  • Downloading PHARs from a HTTPS URL with TLS verification disabled.
  • Downloading PHARs which are unsigned by the authors.
  • Downloading any PHAR “installer” unnecessarily.

All of the above introduce an element of risk that the code you receive is not actually the code the author intended to distribute, i.e. it may decide to go do some crazy things that spell bad news when executed. A hacker could mount a Man-In-The-Middle attack on your connection to the PHAR server, or compromise the PHAR server and replace the file, or employ some DNS spoofing trickery to redirect download requests to their server.

I’ve started distributing a CLI app phar of my own recently for Humbug, so I had to go and solve these problems to make installing and updating that phar both simple and secure. Here’s the outline of the solution I’ve arrived at, which is quite self-evident.

  • Distribute the PHAR over HTTPS
  • Enforce TLS Verification
  • Sign your PHAR with a private key
  • Avoid PHAR Installer scripts
  • Manage Self Updates Securely
  • Do all of this consistently

Some details and a discussion on each point…

Distribute the PHAR over HTTPS

If you really don’t already have a TLS enabled download location, you can avail yourself of Github.io which supports HTTPS URLs. I’m using this for Humbug‘s development builds. You can also use Github Releases for your project and attach the phars there for new versions. If you do need to host the PHAR on your own server, get a TLS certificate for your domain.

Enforce TLS verification

PHP supports TLS verification out of the box, for the most part. It was disabled by default until PHP 5.6. Enforce it! If a user cannot make a simple request to a simple HTTPS URL, then their server is quite obviously misconfigured. That is not your problem, so don’t make it your problem. You use HTTPS, you enforce TLS, and other programmers should be more than capable of fixing their own stuff. Insecure, broken systems are not the lowest common denominator you should be targeting.

Enabling TLS verification for PHP’s stream functions, e.g. file_get_contents(), is basically a disaster waiting to happen, because its configuration can be fairly long-winded to get just right. As something of a shim, I’ve created the humbug_file_contents package, which has a ready-to-roll TLS-loving function that can replace file_get_contents() transparently, but only when it detects a PHP version less than 5.6.
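To give an idea of just how long-winded, here is a sketch of the kind of stream context a pre-5.6 PHP needs to verify TLS properly (the host name and CA bundle path are illustrative; this is not the humbug_file_contents implementation itself):

<?php
// Pre-5.6 PHP: none of this is on by default, and forgetting any one
// option can silently weaken the connection.
$context = stream_context_create([
    'ssl' => [
        'verify_peer'         => true,
        'CN_match'            => 'example.org',  // peer name check, pre-5.6
        'cafile'              => '/etc/ssl/certs/ca-certificates.crt',
        'disable_compression' => true,           // mitigates CRIME
        'ciphers'             => 'HIGH:!SSLv2:!SSLv3',
    ],
]);

$phar = file_get_contents('https://example.org/some.phar', false, $context);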

PHP 5.6 introduced significant TLS improvements which were enabled by default. In certain areas, it actually exceeds what might be expected from other options, and it’s certainly better than any combination of pre-5.6 options can currently achieve.
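The excerpt is truncated before the signing discussion, but for reference, here is one way to sign a PHAR with a private key using PHP's native Phar API (a sketch under my own assumptions; the author's actual build tooling may differ, and file names are illustrative):

<?php
// Requires phar.readonly=0 in php.ini to modify the archive.
$phar = new Phar('some.phar');

// Sign the archive with an OpenSSL private key (never distribute this key).
$privateKey = file_get_contents('/secure/path/private.pem');
$phar->setSignatureAlgorithm(Phar::OPENSSL, $privateKey);

// Ship the matching public key alongside the phar: PHP verifies the
// signature automatically when 'some.phar.pubkey' sits next to it.
$details = openssl_pkey_get_details(openssl_pkey_get_private($privateKey));
file_put_contents('some.phar.pubkey', $details['key']);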

Truncated by Planet PHP, read more at the original (another 6868 bytes)

Link
Brandon Savage: The conference talk creation process (3.3.2015, 13:04 UTC)

There’s been a lot made in the last 24 hours about the process of submitting and accepting conference talks, including whether or not such talks should be written beforehand. There are many valid points of view on the issue, and here are a few of my thoughts. When it comes to creating conference talks, I […]

The post The conference talk creation process appeared first on BrandonSavage.net.

Link
Derick Rethans: Xdebug 2.3: Enhanced xdebug_debug_zval() (3.3.2015, 09:44 UTC)

This is the second article in a series about the new features in Xdebug 2.3, which was first released on February 22nd.

xdebug_debug_zval() has been around for quite some time. It provides correct information about how PHP internally stores a variable. Unlike PHP's built-in debug_zval_dump() function, it does not modify the variable information that it tries to show, because instead of passing in a variable, you pass in its name. Passing a variable into a function can modify the various parameters that are associated with this variable, such as the is_ref and refcount fields.

xdebug_debug_zval() does not suffer from these inadvertent modifications, as you pass in the variable's name, and the function looks up the information about a variable in the symbol tables itself.

The difference becomes clear with the following two examples. With debug_zval_dump():

<?php
$a = array(1, 2, 3);
$b =& $a;
$c =& $a[2];

debug_zval_dump($a);
?>

Which outputs (after a little formatting):

array(3) refcount(1){
        [0]=> long(1) refcount(2)
        [1]=> long(2) refcount(2)
        [2]=> &long(3) refcount(3)
}

And with xdebug_debug_zval():

<?php
$a = array(1, 2, 3);
$b =& $a;
$c =& $a[2];

xdebug_debug_zval('a');
?>

Which outputs (after a little formatting):

a: (refcount=2, is_ref=1)=array (
        0 => (refcount=1, is_ref=0)=1,
        1 => (refcount=1, is_ref=0)=2,
        2 => (refcount=2, is_ref=1)=3
)

In the debug_zval_dump() example, the refcounts for the array elements are all one too high, and the refcount for the array itself is one too low. The array is also not marked as a reference.

However, before Xdebug 2.3, the xdebug_debug_zval() function would only accept a variable name, not array subscripts or property dereferences, meaning that you couldn't really dump a sub-array. Xdebug 2.3 adds support for dereferencing properties and array elements by reusing the remote debugging code's variable name parser. Hence, you can now do the following:

<?php
$a = array(1, 2, 3);
$b =& $a;
$c =& $a[2];

xdebug_debug_zval('a[2]');
?>

Which outputs:

a[2]: (refcount=2, is_ref=1)=3

Or:

<?php
$a = new StdClass;
$a->prop = [3.14, 2.72];

xdebug_debug_zval('a->prop');
xdebug_debug_zval('a->prop[1]');
?>

Which outputs:

a->prop: (refcount=1, is_ref=0)=array (
        0 => (refcount=1, is_ref=0)=3.14,
        1 => (refcount=1, is_ref=0)=2.72
)
a->prop[1]: (refcount=1, is_ref=0)=2.72

Link
Cal Evans: Interview with Morgan Tocker (3.3.2015, 06:00 UTC)

Twitter: Morgan Tocker (@morgo)

The post Interview with Morgan Tocker appeared first on Voices of the ElePHPant.

Link
Anna Filina: Should conference talks be written in advance? (3.3.2015, 02:54 UTC)

I had a discussion on Twitter today regarding a conference’s selection process. It was suggested that speakers should be forced to write their entire talk before submitting it to a conference, perhaps even present it at a local event first. This might work for speakers who give the same talks over and over again, but that’s not the majority.

I organize a conference where I select nearly 150 talks each year. I also speak around the world (South Africa, Spain, Germany, USA, etc.). That gives me a unique perspective on the relationship between speakers and organizers and how they can better collaborate.

How hard is it to write a talk?

It’s extremely time-consuming. I personally spend between 20 and 60 hours preparing a conference talk, depending on how much code and research is required. It used to take me even longer when I was still new to the speaking scene.

I submit new abstracts all the time: different abstracts depending on the type of conference. I know these subjects very well, but I can’t possibly write 10 or more talks each year if only a handful of them will be picked. First, it would take too much time. Second, the talks that aren’t selected would be wasted effort and might not be relevant next year.

Is it fair to demand that speakers potentially waste so much time? After spending hundreds of hours preparing the talks, it would be even more crushing to receive a rejection e-mail from the committee.

How do organizers know if the talk will be good?

We look at both the speaker and the topic. A speaker who has a history of giving great talks will normally write subsequent talks of equal or superior quality. So if a speaker is good and has experience to show, we don’t need the slides or the code. I have occasionally asked speakers to provide an outline when they had nothing else to show, and that worked out well.

To determine whether the talk is good, we read the title and the abstract, and check whether the speaker has experience related to the topic. We discuss the interest in the topic, and in our case, check how many people voted for it. There are other criteria that are unrelated to this question. I blogged about them earlier here and here.

Conclusion

We know that the job of a speaker is hard enough. It’s even harder for those who are still trying to get into their first conference. We don’t want to give speakers arbitrary work to counterbalance our own laziness. This is why ConFoo will continue asking for only a title and an abstract for each proposal submitted.

Link
Evert Pot: Dropbox starts using POST, and why this is poor API design. (2.3.2015, 21:12 UTC)

Today Dropbox announced, in a blog post titled "Limitations of the GET method in HTTP", that it will start allowing POST requests for APIs that would otherwise only be accessible using GET.

It's an interesting post, and addresses a common limitation people run into when developing RESTful webservices. How do you deal with complex queries? Using URL parameters is cumbersome for a number of reasons:

  1. There's a limit to the amount of data you can send, apparently somewhere between 2KB and 8KB.
  2. URL parameters don't allow nearly enough flexibility in how you can define your query. The percent-encoded string doesn't really have a universal way to declare the character set of its bytes.
  3. The URL is not nearly as versatile and expressive as JSON, let alone XML.

Their solution to this problem is to now allow POST requests on endpoints that traditionally only allowed GET.

Is this the best solution? Well, it's certainly a pragmatic one. We're clearly running into artificial limitations here that are poorly solved by existing technology.

The problem with POST

Switching to POST discards a number of very useful features though. POST is defined as a non-safe, non-idempotent method. This means that if a POST request fails, an intermediate (such as a proxy) cannot just assume it can make the same request again.

It also ensures that HTTP caches no longer work out of the box for those requests.

Using REPORT instead

The HTTP specification has an extension that defines the PATCH request; this spec is picking up some steam, and a lot of people are starting to use it to solve common problems in API design.

In the same vein, there's been another standard HTTP method for a while with the name REPORT, which specifically addresses some of the issues with POST.

The REPORT request:

  1. Can have a request body
  2. Is safe
  3. Is idempotent

It appears in the IANA HTTP Method list and is actually quite great for this use case. The main reason it's off people's radar is that it originally appeared in a WebDAV-related spec a long time ago.

However, its semantics are well defined and it works everywhere. I would love to see more people start picking this up and adding it to their HTTP API toolbelt.
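For illustration, here is what issuing a REPORT request with a body could look like from PHP using the curl extension (the endpoint and payload are made up; REPORT is set via a custom method, exactly like any other extension method):

<?php
// A safe, idempotent query with a structured body -- something GET's
// URL parameters handle poorly. Endpoint and payload are hypothetical.
$ch = curl_init('https://api.example.org/search');
curl_setopt_array($ch, [
    CURLOPT_CUSTOMREQUEST  => 'REPORT',
    CURLOPT_POSTFIELDS     => json_encode(['path' => '/photos', 'query' => 'sunset']),
    CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
    CURLOPT_RETURNTRANSFER => true,
]);
$result = curl_exec($ch);
curl_close($ch);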

Using GET with a request body

Whenever this topic comes up on Hacker News, there's almost guaranteed to be a comment about using GET with a request body.

I wondered about this myself (6 years ago now, apparently!) and it's my top question on Stack Overflow. Clearly a lot of people have the same thinking process and wonder about this.

Using a request body with GET is bad. It might be allowed, but it's specifically defined as meaningless. This means that any HTTP server, client or proxy is free to discard it without altering the semantic meaning of the request, and I guarantee that some of them will.

Furthermore, the benefits of using GET are then completely gone. Caching is not based on request bodies, and these requests are not addressable with a URI.

Literally the only reason anyone would do this is that GET looks nicer; it's an aesthetic decision, and nothing more.

Why real GET requests are great: addressability

Whether you use POST or the superior REPORT request, you still miss the biggest advantage of using GET requests.

A GET query is always a URI. Anyone can link from it. Parts of your service can link to specific results. Even external services can integrate with you by referring to specific reports.

A POST query cannot be linked to, and neither can a REPORT query. All we can do is explain that a certain URI accepts certain HTTP methods with certain media types, but this is not nearly as elegant as a simple URI. Linking rocks.

An alternative approach

One way to solve this issue entirely, and fix all the problems related to it, is to disconnect the query you are doing from its result.

To do this, you could create a /queri

Truncated by Planet PHP, read more at the original (another 1312 bytes)

Link