Sammy Kaye Powers - 06: Submitting a PR to php-src (24.7.2017, 13:29 UTC)

We found some untested lines of code and wrote a useful test to cover them, so let's submit our new test to the main php-src repo.

Don't we need to create an RFC to send a pull request? Not for bug fixes and tests, so we're in the clear.

Getting set up on GitHub

If you've never contributed to open source via GitHub before, check out my post, How to contribute to an open source project on GitHub.

First you need to sign up for GitHub and get your SSH keys set up.

We made sure we got the "You've successfully authenticated" message to verify that our SSH keys were set up properly.

$ ssh -T git@github.com
Hi SammyK! You've successfully authenticated, but GitHub does not provide shell access.

Then we made sure to set our name & email in the global git config. You should use the same email address that you use on GitHub.

$ git config --global user.name "Sammy Kaye Powers"
$ git config --global user.email "foo@example.com"

Forking the php-src repo

You'll need to create a fork of the main php-src repo to your account.

We listed our remotes to see that the origin URL was still set to the URL we used to clone the repo.

$ git remote -v
origin  https://github.com/php/php-src.git (fetch)
origin  https://github.com/php/php-src.git (push)

We need to change the origin URL to point to the fork we just created, and then add a new remote called upstream that points to the main php-src repo.

Note: Make sure to replace {your-username} with your GitHub username.

$ git remote set-url origin git@github.com:{your-username}/php-src.git
$ git remote add upstream git@github.com:php/php-src.git

Pushing the changes to our fork

Before we committed our change, we switched to the master branch since we wanted to create a new branch for our change off of master. We named the new branch test-json-depth-error but you can name it whatever you like.

$ git checkout master
$ git checkout -b test-json-depth-error

Then we staged, committed and pushed our new branch up to our fork on GitHub.

$ git add ext/json/tests/json_decode_error001.phpt
$ git commit -m "Add test for json_decode() depth error case"
$ git push origin test-json-depth-error

Send a pull request (PR)

We opened our fork up in GitHub and saw a message asking if we'd like to submit the new branch we created as a pull request back to the main php-src repo. So we clicked the button and created a pull request.

In order to keep our fork and local copy of the repo up to date, we made use of git fetch and git rebase.

$ git fetch upstream
$ git checkout master
$ git rebase upstream/master master
$ git push origin master

Congrats! You're now an official PHP internals contributor!


Resources

Truncated by Planet PHP, read more at the original (another 1140 bytes)

Link
Zeev Suraski - Your PHP Stories (24.7.2017, 10:23 UTC)
Do you have an interesting story that involves PHP?  Something awkward, unexpected or inspiring that happened to you or that you witnessed that was related to PHP and/or its community?  Did PHP help you meet your spouse, otherwise change your life or enable you to change other people's lives?

If you answered any of these questions with a yes, and you'd like to share it with the world - please drop me a note at zeev@php.net or @zeevs (Twitter).

Thanks!

Zeev


Link
Sarfraz Ahmed - Sockets with PHP and Node (21.7.2017, 21:19 UTC)

I was looking to implement a real-time notifications system via sockets without having to use any third-party services such as Pusher. I just wanted to be able to send notifications from the PHP side to the client and instantly show them in the web application, similar to Facebook notifications.

First I came across the Ratchet library. It worked great, but a problem emerged when setting it up over a secure connection (https) on my host (SiteGround). I tried the Apache proxy module and everything else suggested on the internet, but to no avail (it seems SiteGround has a problem with Ratchet on https), so in the end I had to drop Ratchet.

Then I thought of using socket.io with Node and PHP, and in my research I came across the elephant.io library (though it hasn't been updated recently). This one worked wonderfully well on both non-secure and secure protocols, allowing us to send and receive messages from PHP via a Node-based server.

Here are the steps that I followed to get my notification system working.

Install elephant.io

For your PHP application, install elephant.io via composer:

composer require wisembly/elephant.io

Install Node Dependencies

Create a directory in your project root and under it create a file named package.json with these contents:

{
    "name": "elephantIO_example_emitter",
    "version": "3.0.0",
    "main": "server.js",

    "scripts": {
        "start": "supervisor --debug server.js"
    },

    "dependencies": {
        "socket.io": "~1",
        "winston": "*"
    }
}

In the newly created directory, run the command npm install --save. This will install socket.io and the winston logger library.

In the same directory, create a file named server.js with these contents:

var server     = require('http').createServer(),
    io         = require('socket.io')(server),
    logger     = require('winston'),
    port       = 1337;

// Logger config
logger.remove(logger.transports.Console);
logger.add(logger.transports.Console, { colorize: true, timestamp: true });
logger.info('SocketIO > listening on port ' + port);

io.on('connection', function (socket){
    var nb = 0;

    logger.info('SocketIO > Connected socket ' + socket.id);

    socket.on('broadcast', function (message) {
        ++nb;
        logger.info('ElephantIO broadcast > ' + JSON.stringify(message));

        // send to all connected clients
        io.sockets.emit("broadcast", message);
    });

    socket.on('disconnect', function () {
        logger.info('SocketIO : Received ' + nb + ' messages');
        logger.info('SocketIO > Disconnected socket ' + socket.id);
    });
});

server.listen(port);

Run the server.js file through Node by typing node server.js; you should see a message that the server has started on the specified port.

Client Side

Put the following JavaScript code in your application's page/footer:

<script src='//cdnjs.cloudflare.com/ajax/libs/socket.io/1.7.4/socket.io.min.js'></script>

<script>
var socket = io.connect('//127.0.0.1:1337');

socket.on('connect', function () {
    console.log('connected');

    socket.on('broadcast', function (data) {
        //console.log(data);
        //socket.emit("broadcast", data);
        alert(data.text);
    });

    socket.on('disconnect', function () {
        console.log('disconnected');
    });
});
</script>

Sending Notification from PHP

Here is how you can send a message to all connected clients:

require __DIR__ . '/vendor/autoload.php';

use ElephantIO\Client;
use ElephantIO\Engine\SocketIO\Version1X;

$client = new Client(new Version1X('//127.0.0.1:1337'));

$client->initialize();
// send message to connected clients
$client->emit('broadcast', ['type' => 'notification', 'text' => 'Hello There!']);
$client->close();

and that's all there is to it.
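If you send notifications from several places, you will probably want to wrap this in a small helper so the connection details live in one place. Here is a minimal sketch assuming the same local server as above; the Notifier class and its send() method are my own illustration, and only Client, Version1X, initialize(), emit() and close() come from elephant.io itself:

require __DIR__ . '/vendor/autoload.php';

use ElephantIO\Client;
use ElephantIO\Engine\SocketIO\Version1X;

// Hypothetical wrapper around the same elephant.io calls shown above
class Notifier
{
    public function send($text)
    {
        $client = new Client(new Version1X('//127.0.0.1:1337'));
        $client->initialize();
        $client->emit('broadcast', ['type' => 'notification', 'text' => $text]);
        $client->close();
    }
}

(new Notifier())->send('Hello There!');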

Installing and Running Node on Production Site

I was on CentOS 6 and I installed Node by following this guide. Then I created a simple PHP file to be run by cron so that the Node server is automatically started/restarted if it is not running:

$nodePath = 'your node binary path here';
$filePath = 'your server.js file path';
shell_exec($nodePath . ' ' . $filePath);

and then specify that file in cron to run at your specified time intervals.
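If you only want to launch the server when it is not already running, one possible variation is sketched below. It assumes pgrep is available on the host and that the path to server.js uniquely identifies the process; the paths are placeholders.

<?php
$nodePath = '/usr/local/bin/node';     // assumed node binary path
$filePath = '/path/to/your/server.js'; // assumed server.js path

// Check whether a node process is already running this script
$running = trim(shell_exec('pgrep -f ' . escapeshellarg($filePath)));

if ($running === '') {
    // Start it in the background so the cron job itself can exit
    shell_exec($nodePath . ' ' . escapeshellarg($filePath) . ' > /dev/null 2>&1 &');
}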

Important Notes

  • I was having bit

Truncated by Planet PHP, read more at the original (another 868 bytes)

Link
Sammy Kaye Powers - 05: Finding untested code (21.7.2017, 19:15 UTC)

Now that we know how to create tests and debug them when they fail, let's make a useful test that actually covers some untested code.

Finding untested lines of code

The PHP gcov website shows what lines of C code are covered by the test suite.

We took a long tangent to talk about the PHP_FUNCTION macro as well as the two ways Zend parameter parsing (ZPP) works: with the zend_parse_parameters() function and with the multi-line ZEND_PARSE_PARAMETERS_START macro.

Eventually we found some uncovered lines of code in ext/json/json.c for the json_decode() function that check that the value of the depth param is greater than zero.

Creating a new test

We made sure that we had the ext/json extension installed.

$ sapi/cli/php -m | grep json

We tried to create a new test.

$ vi ext/json/tests/json_decode_error.phpt

But we quickly realized that there was already a test with that name, so we created a new variation of the file name.

$ vi ext/json/tests/json_decode_error001.phpt

Then we created our test.

--TEST--
json_decode() - depth error
--CREDITS--
Sammy Kaye Powers me at sammyk dot me
# TestFest Chicago PHP UG 2017-07-18
--SKIPIF--
<?php if (!extension_loaded('json')) die('skip ext/json required'); ?>
--FILE--
<?php
var_dump(json_decode('[]', false, 0));
?>
--EXPECTF--
Warning: json_decode(): Depth must be greater than zero in %s on line %d
NULL

We ran our test to see it pass with flying colors.

$ make test TESTS=ext/json/tests/json_decode_error001.phpt

At this point we could totally send our new test as a PR to the main php-src repo, but we wanted to see that this test actually covered the untested lines.

Generating a code coverage report

Since the PHP gcov website isn't updated regularly, we took Elizabeth Smith's advice and generated the code coverage reports locally.

First we have to install lcov.

$ sudo apt-get update
$ sudo apt-get install lcov

Then we can use the handy config.nice script to run configure again with all the previous flags in addition to any new ones. So we ran it with --enable-gcov since we already ran it with --enable-debug previously.

$ ./config.nice --enable-gcov

Next we had to delete all the previously compiled files with make clean so that everything could be recompiled with the appropriate flags that gcov needs.

$ make clean && make

Finally we wer

Truncated by Planet PHP, read more at the original (another 1981 bytes)

Link
Federico Cargnelutti - [#3331208]: WordPress.com Skimlinks (21.7.2017, 09:14 UTC)

To reproduce this issue:

1) Open this page in “Incognito” mode
2) Refresh the page multiple times and observe how js converts “foo” to “foo}

import {assert} from 'chai';
import sinon from 'sinon';
import mockRequire from 'mock-require';

describe('My module', () => {

    let module; // module under test
    let configMock;

    beforeEach(() => {
        configMock = {
            init: sinon.stub().returns("foo")
        };

        // mock es6 import (tip: use the same import path)
        mockRequire("../../config.js", configMock);

        // require es6 module
        module = require("../../../app/services/content.js");
    });

    afterEach(() => {
        // remove all registered mocks
        mockRequire.stopAll();
    });

    describe('Initialisation', () => {

        it('should have an load function', () => {
            assert.isFunction(module.load);
        });

    });

});

Filed under: Programming
Link
Nomad PHP - Iterators & Generators (21.7.2017, 04:01 UTC)

October 2017 - EU
Presented By

Eli White
October 19, 2017
20:00 CEST

The post Iterators & Generators appeared first on Nomad PHP.

Link
SitePoint PHP - How to Write JavaScript-Style Test Watchers in PHP (20.7.2017, 16:00 UTC)

I didn't start out writing tests for my code. Like many before and since, my "testing" was to write code and refresh the page. "Does it look right?", I'd ask myself. If I thought so, I'd move on.

In fact, most of the jobs I've had have been with companies who don't much care for other forms of testing. It's taken many years, and wise words from people like Chris Hartjes, for me to see the value in testing. And I'm still learning what good tests look like.


I recently started working on a few JavaScript projects which had bundled test watchers.


In the land of JavaScript, it's not uncommon to preprocess source code: developers write in syntax that isn't widely supported yet, and the code is transformed into syntax that is, usually using a tool called Babel.

In order to reduce the burden of invoking the transformation scripts, boilerplate projects have started to include scripts that automatically watch for file changes and then invoke the transformations.

These projects I've worked on have used a similar approach to re-run unit tests. When I change the JavaScript files, these files are transformed and the unit tests are re-run. This way, I can immediately see if I've broken anything.

The code for this tutorial can be found on Github. I've tested it with PHP 7.1.

Setting Up The Project

Since starting to work on these projects, I've started to set a similar thing up for PHPUnit. In fact, the first project I set up the PHPUnit watcher script on was a PHP project that also preprocesses files.

It all started after I added preprocessing scripts to my project:

composer require pre/short-closures

These particular preprocessing scripts allow me to rename PSR-4 autoloaded classes (from path/to/file.php to path/to/file.pre), to opt in to the functionality they provide. So I added the following to my composer.json file:

"autoload": {
    "psr-4": {
        "App\\": "src"
    }
},
"autoload-dev": {
    "psr-4": {
        "App\\Tests\\": "tests"
    }
}

This is from composer.json

I then added a class to generate functions with the details of the current user session:

namespace App;

use Closure;

class Session
{
    private $user;

    public function __construct(array $user)
    {
        $this->user = $user;
    }

    public function closureWithUser(Closure $closure)
    {
        return () => {
            $closure($this->user);
        };
    }
}

This is from src/Session.pre

To check if this works, I've set up a small example script:

require_once __DIR__ . "/vendor/autoload.php";

$session = new App\Session(["id" => 1]);

$closure = ($user) => {
    print "user: " . $user["id"] . PHP_EOL;
};

$closureWithUser = $session->closureWithUser($closure);
$closureWithUser();

This is from example.pre

...And because I want to use the short closures in a non-PSR-4 class, I also need to set up a loader:

require_once __DIR__ . "/vendor/autoload.php";

Pre\Plugin\process(__DIR__ . "/example.pre");

This is from loader.php

This is a lot of code to illustrate a small point. The Session class has a closureWithUser method, which accepts a closure and returns another. When called, this new closure will call the original closure, providing the user session array as an argument.

To run all of this, type into terminal:

php loader.php
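Assuming the ["id" => 1] session array from the example above, the script should print:

user: 1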

As a side-note, the valid PHP syntax that these preprocessors generated is lovely. It looks like this:

$closure = function ($user) {
   print "user: " . $user["id"] . PHP_EOL;
};

...and

public function closureWithUser(Closure $closure)
{
   return [$closure 

Truncated by Planet PHP, read more at the original (another 3644 bytes)

Link
PHP: Hypertext Preprocessor - PHP 7.2.0 Beta 1 Released (20.7.2017, 00:00 UTC)
The PHP development team announces the immediate availability of PHP 7.2.0 Beta 1. This release is the first beta for 7.2.0. All users of PHP are encouraged to test this version carefully, and report any bugs and incompatibilities in the bug tracking system. THIS IS A DEVELOPMENT PREVIEW - DO NOT USE IT IN PRODUCTION!

PHP 7.2.0 Beta 1 builds on previous releases with:

  • The much anticipated Sodium extension
  • Opcache improvements
  • Countable support for DOMNodeList and DOMNamedNodeMap
  • Improved handling for invalid UTF8 in json_decode()
  • And many bugfixes...

For more information on the new features and other changes, you can read the NEWS file, or the UPGRADING file for a complete list of upgrading notes. These files can also be found in the release archive. For source downloads of PHP 7.2.0 Beta 1 please visit the download page; Windows sources and binaries can be found at windows.php.net/qa/. The second beta will be released on the 3rd of August. You can also read the full list of planned releases on our wiki. Thank you for helping us make PHP better.
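As a quick illustration of one of those items, the Countable support means count() now works directly on a DOMNodeList in 7.2. A small sketch, not taken from the announcement:

<?php
$dom = new DOMDocument();
$dom->loadHTML('<p>one</p><p>two</p>');

// As of PHP 7.2, DOMNodeList implements Countable
var_dump(count($dom->getElementsByTagName('p'))); // int(2)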
Link
Federico Cargnelutti - Node.js: How to mock the imports of an ES6 module (18.7.2017, 21:58 UTC)

The package mock-require is useful if you want to mock require statements in Node.js. It has a simple API that allows you to mock anything, from a single exported function to a standard library. Here’s an example:

app/config.js

function init() {
    // ...
}

module.exports = init;

app/services/content.js

import config from '../../config.js';

function load() {
    // ...
}

module.exports = load;

test/services/content_spec.js

import {assert} from 'chai';
import sinon from 'sinon';
import mockRequire from 'mock-require';

describe('My module', () => {

    let module; // module under test
    let configMock;

    beforeEach(() => {
        configMock = {
            init: sinon.stub().returns("foo")
        };

        // mock es6 import (tip: use the same import path)
        mockRequire("../../config.js", configMock);

        // require es6 module
        module = require("../../../app/services/content.js");
    });

    afterEach(() => {
        // remove all registered mocks
        mockRequire.stopAll();
    });

    describe('Initialisation', () => {

        it('should have an load function', () => {
            assert.isFunction(module.load);
        });

    });

});

Filed under: Node.js, Programming
Link
Paul M. Jones - Domain Logic and Email Templates (18.7.2017, 12:00 UTC)

From an email conversation with a reader:

Hi Paul,

I’ve been following your writing and examples about the ADR pattern for some time now. It’s taken me awhile to wrap my head around it but ADR has really added a lot of structure to my code and I’m thankful for your work!

One dilemma that I’ve been struggling with is how to organize emails that are sent out by my service classes. Currently my business logic dictates when to send an email and most of them contain html and text templates. It just feels wrong to include those email templates within the Domain. Do you have any recommendations? Any help you can provide would be greatly appreciated.

In a somewhat related question – Recently I’ve been organizing my “views” folders within the folders of their parents (Http, Mail). I think I based it on your ADR example on github. Do you still prefer this method or do you lean towards organizing within a “resources” folder?

My intuition is that you are right to keep the email templates out of the domain.

In a way, sending an email as part of a web request/response cycle is like sending two responses: the normal HTTP response, and the email response. With that in mind, it might make sense to think of the HTML + Text email templates as part of a presentation layer. Or, as a combination of infrastructure (the email-sending client) plus presentation (the templates). That would be how to think about the separation of concerns there.

Here’s an example of what that separation of concerns might look like in a package directory structure:

resources/
    templates/
        web/
            # web page templates
        email/
            message-1/
                message-1.html
                message-1.txt
            message-2/
                message-2.html
                message-2.txt
src/
    Domain/
        AppService/
            FooAppService.php
            BarAppService.php
        EmailInterface.php
        # ... other domain classes
    Infrastructure/
        DataSource/
            # ... mappers, tables, etc
        Emailer.php # implements Domain\EmailInterface
    Web/
        Foo/
            FooAction.php
            FooResponder.php
        Bar/
            BarAction.php
            BarResponder.php

The specifics of directory structure are not important, as long as you see that the Emailer class is separated from the Domain application services (or use cases, or whatever).

The Emailer class, for its part, might be a facade [1] that coordinates between a “real” emailer (e.g. Swiftmailer or PhpMailer) and a template class, to put together and then send the email. You could configure the Emailer class with the template location (resources/templates/email/*) and inject it into your application service (which depends on the EmailInterface).
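A minimal sketch of what such a facade might look like follows; the interface and collaborator names here are illustrative, not from any particular library:

<?php
namespace Infrastructure;

use Domain\EmailInterface;

// Coordinates a "real" mailer and a template renderer; both collaborators
// are assumed abstractions, not a specific library API.
class Emailer implements EmailInterface
{
    private $mailer;
    private $templates;
    private $templatePath;

    public function __construct($mailer, $templates, $templatePath)
    {
        $this->mailer = $mailer;
        $this->templates = $templates;
        $this->templatePath = $templatePath;
    }

    public function send($to, $message, array $data)
    {
        $base = "{$this->templatePath}/{$message}/{$message}";

        // Render both the HTML and the plain-text parts of the message
        $html = $this->templates->render("{$base}.html", $data);
        $text = $this->templates->render("{$base}.txt", $data);

        $this->mailer->send($to, $html, $text);
    }
}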

Now, sending emails inline as part of the web request might be fine in a lower-traffic situation. But as volume scales up, this kind of separation will make it easy to extract all email-sending to workers. Then the Emailer can queue emails to the workers instead of sending them inline with the web request; the email-sending can become the job of a queue worker, and the template work will go there instead.
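Under that setup, a queueing variation could implement the same interface but push a job instead of sending inline. Again a sketch: the queue collaborator is an assumed abstraction, not a specific queue library.

<?php
namespace Infrastructure;

use Domain\EmailInterface;

// Same interface as the inline Emailer, but the actual rendering and
// sending happen later in a worker process.
class QueueingEmailer implements EmailInterface
{
    private $queue;

    public function __construct($queue)
    {
        $this->queue = $queue;
    }

    public function send($to, $message, array $data)
    {
        $this->queue->push('send-email', [
            'to'      => $to,
            'message' => $message,
            'data'    => $data,
        ]);
    }
}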

As far as where to put templates for views:

The extended example ADR code on GitHub is a few years old at this point. I still think it’s a reasonable setup, especially for people transitioning out of pseudo-MVC toward ADR, but it might do well to receive an update (or some examples of variations).

I don’t prefer any particular method or structure on where to put templates. Sometimes it makes sense to keep templates near the things using them, sometimes it makes sense to collect them all one place. The needs of the system, and the prior experiences of the developer(s), will be the deciding factor as far as I’m concerned. I had PHP-PDS on the brain when I replied, so the pds/skeleton with its “resources” directory was at hand.


[1] A real facade, not a Laravel one.

Link