It’s a recurring joke (or is it?) in the developer community that if you review 10 lines of code you will find 10 issues, but review 500 and everything looks fine. I get it: too much code is too much code, we lose focus, we get tired, we just want to finish the review. There is also the Law of Triviality to consider, although that’s perhaps a subject for another time.
Big Pull Requests, with a lot of files changed, are harder to review. It can also be very difficult for a reviewer to get a general sense of what, how and why something has been changed. The Jira ticket (other issue tracking systems are available) and the PR summary can help, but let’s be honest: we developers tend not to take much care over either of those.
This can result in a review that is rushed and not as in-depth as one would like, simply because there is too much to go through.
Wikipedia has a great explanation for what an atomic commit is in this context. What we want to achieve is a series of small, contained, ordered commits:
By “rely on future commits” we mean that the application as a whole must work at any commit. If one commit relies on something that is only committed later, the application would not work at that point, breaking our requirement of “ordered commits”.
As a high-level example, let’s consider adding a new API for a resource to a Laravel application. There are a few things that need to be done: migrations, model and factory, controller, routes, permissions, etc.
The first thing to do is the migration. This can be your first commit. Then come the model and the factory. These could go into separate commits, but they are tightly related and usually quite small, so they could even go in the same commit as the migration.
Now, the controller has a few “dependencies”: it needs permissions, maybe a couple of form requests, or a job. So we would not work on the controller until those other things have been done.
Permissions seem to be the logical thing to add next, then the form requests and the jobs, if any, and finally the controller. The routes should probably come last, as they are what makes our API available, so everything must be set up and working by that point.
The git history may then look something like the following:
82957de4 Add routes for /api/v1/cars
2058d849 Add CarsController
63d8348e Add MakeCar job
ce45ead6 Add CreateCarRequest and UpdateCarRequest
1940fca4 Add permissions for Car resource
ae8f2248 Add Car model and factory
8295ed92 Add migration for Car resource
Some of this is left to the developer’s judgement. As I said, the migration, model and factory could go in the same commit, and so could the query builder if necessary. The key thing to remember is to keep each commit small and contained, and to order them by their dependencies.
As I said, there are three main properties of an atomic commit: small, contained and ordered.
Small commits reduce the amount of code to review, which speeds up the review and improves its quality.
Contained commits allow the reviewer to focus on the specific, well-defined goal achieved by the commit, which makes it easier for them to understand the context and evaluate the changes, again improving the quality of the review.
But the commits are also ordered, with one commit only depending on the changes from previous commits. This allows the reviewer to stop at any time if they think it’s necessary.
But in my opinion adopting atomic commits will also help teams produce better code. By having contained commits we can write better tests, as we are focusing our attention on one piece of functionality at a time. And by having ordered commits, we are forced to think about how to organise the code and how it interacts with itself and with the rest of the application.
If you’re not using atomic commits yet, this will be a huge change for you. You are probably used to committing multiple times, sometimes just to correct mistakes you made, with commit messages that are practically irrelevant by now.
I am convinced, however, that any team will benefit enormously from this strategy in the long run. You will introduce fewer bugs because code reviews will be more accurate. And you will have better code because during development you will be forced to actively think about how to organise it.
So, let’s consider some of the challenges that you and your team will face and how to overcome them.
Commit messages are now extremely important as they must clearly convey what the changes are about. Having small commits should help with this, as there is less to describe. So no more “quick fix” or “WIP”.
We should also take advantage of the fact that commit messages can span multiple lines. I suggest using the first line for a short description of what the commit is about and the following lines for a more in-depth explanation. Just don’t go overboard and write an essay; two or three lines are enough.
Add MakeCar job
This job is responsible for creating a Car record.
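With plain git, a message like the one above can be written with repeated -m flags: the first becomes the subject line and the second becomes the body paragraph. A minimal sketch, using a throwaway repository purely for demonstration:

```shell
# Demonstration only: create a throwaway repo so the commit has somewhere to live.
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# The first -m is the short subject; the second -m becomes the body.
git commit -q --allow-empty \
    -m "Add MakeCar job" \
    -m "This job is responsible for creating a Car record."

# Show the full message: subject, blank line, body.
git log -1 --format=%B
```

The same shape is what most editors produce when you run a bare `git commit`: first line, blank line, then the body.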
Having atomic commits is actually only important before submitting the PR for review. What I mean is that you are still free to commit however you like, but you must make sure all commits are atomic before sending the PR up for review. Obviously it’s easier if you try to use atomic commits from the start, but it’s not unusual to forget to change something in a file which should have been done in a previous commit.
To reorganise your commits you will have to get familiar with git rebase. This command allows you to reorder commits, squash two or more together, and even edit a commit to change what is actually committed. It is quite a powerful command, but it’s not that hard to master.
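As a sketch of what such a cleanup looks like, here is a hypothetical two-commit history where a stray “fix typo” commit is folded into the commit it amends. GIT_SEQUENCE_EDITOR scripts the interactive todo list so the example runs non-interactively; in day-to-day use you would simply edit the todo list in your editor.

```shell
set -e
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "class Car {}" > Car.php
git add Car.php
git commit -q -m "Add Car model and factory"

echo "class Car { /* typo fixed */ }" > Car.php
git add Car.php
git commit -q -m "fix typo"   # this commit should not survive the review

# Mark the second todo line as "fixup" so that commit is folded into the
# previous one, keeping the first commit's message.
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/fixup/"' git rebase -i --root

# The history is now a single "Add Car model and factory" commit.
git log --oneline
```

Reordering works the same way: move the `pick` lines around in the todo list, and git replays the commits in the new order.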
Sometimes you need to refactor some files just for cosmetic changes, like formatting, or adding return types. These changes should all be in a separate atomic commit, so that they don’t “pollute” the other changes and the reviewer can safely skip the entire commit (if allowed).
It may seem this does not matter that much, but these are the types of changes that can really make a code review more difficult. If these changes are part of other commits we need to navigate through them to find the ones that are not cosmetic, which takes effort and time, and we’re likely to miss some.
It’s important that code reviewers are not afraid to send back a PR because it doesn’t use atomic commits, or because the commits are not correct, before they even look at the code.
For example, if some formatting has been committed together with code changes, the PR should be rejected and the developer should extract those formatting changes into their own commit (or add them to another cosmetic commit).
Or if in a commit we start using a class that will be added in a later commit, the PR should, again, be rejected and the developer should reorder the commits.
At the beginning it will feel odd and maybe petty to reject a PR for those reasons, but you need to remember that this is a process. It will happen more often at the beginning because you are not used to it, but it will soon become second nature and the development and code review processes will become faster, smoother and more accurate.
As I said, this is a big change for any team. To help the transition to atomic commits, your team could start small and get used to the process before applying it to all your work.
For example, you could somehow designate a Jira ticket as “atomic”, maybe the ones that your team envisages will be small. This will let the developer and the code reviewer know that atomic commits must be used for that ticket.
After everybody in your team is comfortable with atomic commits, using git rebase and reviewing atomic PRs, you can think of either expanding the number of “atomic” Jira tickets or just moving to atomic commits across the board.
So I recently embarked on a project to port all my Dusk tests to Cypress. The first step, though, was to set up my Laravel project AND my CI pipeline to run Cypress tests.
My development machine is a Mac, and I use Docker and Laravel Sail, because I don’t want to install many tools on my machine. I like the idea of keeping everything in containers and separate from the host, although I had to make a small exception for this project, as we will see.
The objectives for this project, as I’m sure you can guess, were to be able to run the Cypress Test Runner locally and to run all my Cypress tests during my CI pipeline, which in my case is a GitHub Actions workflow.
The requirements were to do all this in containers, as much as possible, and with the latest version of Cypress, which was 10.0.1 at the time. This meant I didn’t want to install Cypress as a Node module, so no npm install cypress.
There are already Docker images available to run Cypress, but to be able to run its Test Runner you need to do some more work.
To get started I followed the excellent article Run Cypress with a single Docker command by Gleb Bahmutov. It was a great starting point but not quite right for me, especially when it comes to running the Test Runner.
The main difference was that the article suggests running the container directly, while I wanted to add everything to my existing docker-compose.yml file, since I already have one for Laravel Sail.
This is how I have defined the service in docker-compose.yml:
cypress:
    image: 'cypress/included:10.0.1'
    profiles:
        - 'on-demand-only'
    volumes:
        - '.:/e2e'
        - '/tmp/.X11-unix:/tmp/.X11-unix'
    working_dir: '/e2e'
    environment:
        - CYPRESS_baseUrl=http://laravel.test
        - CYPRESS_VIDEO=false
        - DISPLAY=host.docker.internal:0
    networks:
        - sail
    depends_on:
        - laravel.test
    entrypoint: cypress
Let me explain some of the specs.

- profiles avoids spinning up the container when I run sail up, as the cypress container would only be used on demand.
- baseUrl must be the name of the service for your site as defined in the docker-compose.yml file, as both containers run on the same network, sail in this case.
- The DISPLAY environment variable points at the host machine, using the special Docker address host.docker.internal.
- The entrypoint is cypress, so that I can use any Cypress command when starting the container.

Now, following that article, I still needed to install XQuartz on my Mac (this is the little exception I made to my “no local install” policy) and set it to allow connections from network clients. But the rest of the instructions did not work for me.
Every time I tried to run xhost to allow connections I got the /usr/X11/bin/xhost: unable to open display "" error message. It turned out I needed to set DISPLAY before allowing the connection with xhost, not before starting the container.
Not only that, but if I tried to set DISPLAY=$IP:0 I got the /usr/X11/bin/xhost: unable to open display "192.168.0.220:0" error message.
And although DISPLAY=:0 /usr/X11/bin/xhost + $IP did not return any error, it didn’t work either: when I started the container it exited immediately with the Missing X server or $DISPLAY error message.
What instead worked for me was to get rid of $IP altogether and use DISPLAY=:0 /usr/X11/bin/xhost +. This does allow connections from any host, but given that I am not on a public network and that I normally close XQuartz anyway when I am done with development, I didn’t see it as a big security risk.
By the way, the fact it didn’t work may have had something to do with how I set the DISPLAY variable in the docker-compose.yml file, but I didn’t want to spend too much time finding out exactly why. I had a working solution and I was happy with it.
So, in the end, to run the Test Runner on my local development machine, what I need to do is
DISPLAY=:0 /usr/X11/bin/xhost +
sail run -it --rm cypress open --project .
I said earlier that one of my objectives was to be able to run the Cypress tests during my CI pipeline. I use GitHub Actions for that and I already had a workflow set up for my PHPUnit and Dusk tests. My workflow has a job to build a matrix, and another to install packages and build artifacts, which are then cached and reused. My test jobs depend on these two.
This posed a small problem: Cypress needs to be installed before it can run, even when using a Docker image. Since this would happen every time the workflow runs, it made sense to cache it. Cypress is installed as a Node package, so it made sense to cache it with the other Node modules.
So, my steps to cache both my Node modules and Cypress look like the following:
- name: Cache node modules
  id: cache-node-modules
  uses: actions/cache@v3
  with:
    path: |
      ~/.cache/Cypress
      ./node_modules
    key: ${{ runner.os }}-php-${{ matrix.php-versions }}-build-${{ env.node-modules-cache-name }}-${{ hashFiles('**/package-lock.json') }}
- if: steps.cache-node-modules.outputs.cache-hit != 'true'
  run: npm install
- name: Install Cypress
  run: npm i cypress
- name: Verify Cypress
  uses: cypress-io/github-action@v4
  with:
    runTests: false
The additions for Cypress were:

- the ~/.cache/Cypress directory in the cached paths
- the npm i cypress install step
- the verification step using cypress-io/github-action@v4. Note the runTests: false to avoid, well, running the tests

Finally, I was ready to add the job to run the tests. I have removed some of the steps that are not relevant here, like checking out the code and restoring the caches:
cypress-tests:
  steps:
    - name: Run Laravel Server
      run: php artisan serve &
    - name: Run Cypress Tests
      id: cypress-tests
      uses: cypress-io/github-action@v4
      with:
        install: false
        wait-on: 'http://127.0.0.1:8000'
        config: baseUrl=http://127.0.0.1:8000
        config-file: ./cypress.config.js
        record: true
        project: ./
    - name: Upload screenshots
      uses: actions/upload-artifact@v3
      if: failure()
      with:
        name: ${{ github.job }}-screenshots
        path: cypress/screenshots
    - name: Upload videos
      uses: actions/upload-artifact@v3
      if: failure()
      with:
        name: ${{ github.job }}-videos
        path: cypress/videos
Note that:

- install: false is set, as the installation already happened and was cached
- config-file: ./cypress.config.js points at the Cypress configuration file
- baseUrl is overridden, as we are now using a PHP server to serve our site for testing

I’m very happy about how things turned out. I am now able to run the tests locally using the Cypress Test Runner, which is one of my favourite things about Cypress. The tests are also automatically run during my CI pipeline in GitHub.
I have created a GitHub Gist with the content of some of the files.
While helping a friend rebuild their website, one of the requirements was to allow users to “like” a post. They could like a post only once, of course, so I had to store that information somewhere. We don’t have logged-in users, so I had to use cookies for that. And with cookies comes the task of asking for the user’s consent.
My starting point was the cookie-consent package. It was a good starting point but there are a few issues with that, at least for my case:
The last point was crucial for me, and that was why I decided to write my own component.
The idea is simple: if the user has never been asked, show a modal asking for consent. The modal allows the user to either give or refuse their consent. There is also a link to show the cookie policy, which in my case is another modal.
So, the first thing I needed was a service to tell me the status of the cookie.
<?php

namespace App\Services;

use Illuminate\Support\Facades\Cookie;

class CookieConsent
{
    public function cookieExists(): bool
    {
        return !is_null($this->getCookie());
    }

    public function consentHasBeenGiven(): bool
    {
        return $this->getCookie() === $this->getConsentValue();
    }

    public function giveConsent(): void
    {
        Cookie::queue(
            config('cookie-consent.cookie_name'),
            config('cookie-consent.consent_value'),
            config('cookie-consent.consent_cookie_lifetime')
        );
    }

    public function refuseConsent(): void
    {
        Cookie::queue(
            config('cookie-consent.cookie_name'),
            config('cookie-consent.refuse_value'),
            config('cookie-consent.refuse_cookie_lifetime')
        );
    }

    /**
     * @return array|string|null
     */
    private function getCookie()
    {
        return request()->cookie(config('cookie-consent.cookie_name'));
    }

    private function getConsentValue(): string
    {
        return config('cookie-consent.consent_value');
    }
}
As you can see, it’s pretty straightforward. We use some configuration values, which we will see later, although they are quite self-explanatory.
The bit that caught me off guard, and made me spend a couple of hours wondering why it did not work, was using the cookie() helper rather than the Cookie facade. I just had to read the Laravel documentation to understand my error, so please don’t make the same mistake.
Then I created the Livewire component
php artisan livewire:make CookieConsent
and put the following in its controller
<?php

namespace App\Http\Livewire;

use Livewire\Component;

class CookieConsent extends Component
{
    public bool $askForConsent;
    public bool $openConsentModal;
    public bool $openCookieModal = false;

    public function mount(\App\Services\CookieConsent $service)
    {
        $this->askForConsent = !$service->cookieExists();
        $this->openConsentModal = true;
    }

    public function toggleCookieModal()
    {
        $this->openCookieModal = !$this->openCookieModal;
        $this->openConsentModal = !$this->openConsentModal;
    }

    public function giveConsent(\App\Services\CookieConsent $service)
    {
        $service->giveConsent();
        $this->openConsentModal = false;
        $this->askForConsent = false;
    }

    public function refuseConsent(\App\Services\CookieConsent $service)
    {
        $service->refuseConsent();
        $this->openConsentModal = false;
        $this->askForConsent = false;
    }

    public function render()
    {
        return view('livewire.cookie-consent.cookie-consent');
    }
}
Now, for the view: since I will have two modals, I decided to create two more views for them. The main view for the component, resources/views/livewire/cookie-consent/cookie-consent.blade.php, will just include the other two:
<div>
    @if($askForConsent)
        @include('livewire.cookie-consent.consent-modal')
        @include('livewire.cookie-consent.cookie-policy-modal')
    @endif
</div>
The main modal, resources/views/livewire/cookie-consent/consent-modal.blade.php, is:
<div x-data="{ open: @entangle('openConsentModal') }" x-show="open"
class="fixed z-10 w-full h-full top-0 left-0 flex items-center justify-center">
<div class="absolute w-full h-full bg-gray-900 opacity-50 sm:bg-yellow-500"></div>
<div class="bg-white w-auto mx-3 sm:mx-0 rounded shadow-lg z-50 overflow-y-auto">
<div class="py-4 text-left px-6">
<!--Title-->
<div class="mx-auto flex items-center justify-center h-12 w-12 rounded-full bg-gray-900">
<svg class="h-8 w-8 text-white" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M11 5.882V19.24a1.76 1.76 0 01-3.417.592l-2.147-6.15M18 13a3 3 0 100-6M5.436 13.683A4.001 4.001 0 017 6h1.832c4.1 0 7.625-1.234 9.168-3v14c-1.543-1.766-5.067-3-9.168-3H7a3.988 3.988 0 01-1.564-.317z" />
</svg>
</div>
<!--Body-->
<div class="mt-5 text-center text-gray-500 space-y-2 leading-snug">
<p>Your experience on this site will be improved by allowing cookies.</p>
<div>
Learn mode about our cookies'
<button wire:click="toggleCookieModal" class="hover:text-blue-500">
<svg class="h-5 w-5 inline-block" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M13 16h-1v-4h-1m1-4h.01M21 12a9 9 0 11-18 0 9 9 0 0118 0z" />
</svg>
</button>
</div>
</div>
<!--Footer-->
<div class="mt-5 flex flex-col sm:flex-row space-y-2 sm:space-x-2 sm:space-y-0">
<button wire:click="refuseConsent"
class="w-full sm:w-1/2 inline-flex justify-center border border-gray-300 rounded-md shadow-sm px-4 py-2 bg-white text-base font-medium text-gray-700 hover:text-gray-500 focus:outline-none focus:border-blue-300 focus:shadow-outline-blue">
Refuse cookies
</button>
<button wire:click="giveConsent"
class="w-full sm:w-1/2 inline-flex justify-center border border-transparent rounded-md shadow-sm px-4 py-2 bg-gray-900 text-base font-medium text-white hover:bg-gray-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-gray-400">
Accept cookies
</button>
</div>
</div>
</div>
</div>
and the modal for the cookie policy, resources/views/livewire/cookie-consent/cookie-policy-modal.blade.php, is:
<div x-data="{ open: @entangle('openCookieModal') }" x-show="open"
class="fixed z-10 w-full h-full top-0 left-0 flex items-center justify-center">
<div class="absolute w-full h-full bg-gray-900 opacity-50 sm:bg-yellow-500"></div>
<div class="bg-white w-auto mx-3 sm:mx-0 rounded shadow-lg z-50 overflow-y-auto">
<div class="py-4 text-left px-6">
<!--Title-->
<div class="mx-auto flex items-center justify-center h-12 w-12 rounded-full bg-gray-900">
<svg class="h-8 w-8 text-white" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M11 5.882V19.24a1.76 1.76 0 01-3.417.592l-2.147-6.15M18 13a3 3 0 100-6M5.436 13.683A4.001 4.001 0 017 6h1.832c4.1 0 7.625-1.234 9.168-3v14c-1.543-1.766-5.067-3-9.168-3H7a3.988 3.988 0 01-1.564-.317z" />
</svg>
</div>
<!--Body-->
<div class="mt-5 text-center text-gray-500 space-y-2 leading-snug">
<h3 class="text-lg leading-6 font-medium text-gray-900" id="modal-headline">
Cookie Statement
</h3>
<p>Cookies are used to store your personal votes for posts.</p>
</div>
<!--Footer-->
<div class="mt-5 sm:mt-6 sm:grid sm:grid-cols-1 sm:gap-3 sm:grid-flow-row-dense">
<button wire:click="toggleCookieModal"
class="mb-2 w-full inline-flex justify-center rounded-md border border-transparent shadow-sm px-4 py-2 bg-gray-900 text-base font-medium text-white hover:bg-gray-600 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-gray-400">
Close
</button>
</div>
</div>
</div>
</div>
I am sure you have noticed I used Tailwind CSS, which I highly recommend.
The text in all the modals can be changed to whatever you need. All modals are fully responsive.
Now we come to the configuration. Create the config/cookie-consent.php file and add the following:
<?php

return [
    'cookie_name' => 'cookie_consent',
    'consent_value' => 'yes',
    'refuse_value' => 'no',
    'consent_cookie_lifetime' => 60 * 24 * 365,
    'refuse_cookie_lifetime' => 60 * 24 * 30,
];
I strongly recommend changing the name of the cookie to make it unique to your site. You can also adjust the values and the lifetimes (which, as with Laravel cookies in general, are expressed in minutes).
Finally, just add the new component to the pages that need it; most likely that’s all of them, so add it to your layout.
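As a sketch, assuming a standard Livewire v2 setup and a layout file at resources/views/layouts/app.blade.php (your path may differ), the include would look something like:

```blade
{{-- Just before the closing </body> tag of the layout --}}
@livewire('cookie-consent')
```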
And that’s it. Now you can use the CookieConsent service around your code to check whether the user has given consent to store cookies.
The code is also available as a GitHub Gist.
We use GitLab for deployments, and each of our deployments has an environment associated with it. So, my workflow was to click on the deployment job for the site I was interested in, and from there click on the environment link at the top of the page. There GitLab shows all the deployments, so it was just a matter of looking for the latest successful one. And this is where the problem was.
Some time ago, a lot of cancelled jobs started appearing. I never found out exactly why; some colleagues told me it was because with every push to any branch GitLab was creating a job, which was automatically cancelled as the deployment is only manually triggered. Whether that was true or not, or whether it was our setup at fault or GitLab, didn’t matter. What mattered was that it took me too long to find what I was looking for: the latest deployment information. Once I had to go through more than 40 pages! Something had to be done.
I am learning Vue and in one of the lessons I was following, Vuetify was mentioned. It looked very impressive, so I decided to build something in Vue, using Vuetify, to help me in my work.
You can find the result of my efforts in this repository.
I went through a few iterations, as you can see from the tags. Initially, the project was fixed to the one I was, well, doing my work in. But then I realised I could make it more general, and I added a select field for choosing which project to display.
Also, I started by retrieving the information for all environments in the project. We have over 170 environments in that project, and making all those API calls every time didn’t seem the right thing to do. So later I refactored the code and added a toggle for each environment. I’m never interested in all the environments, so just fetching the information I required seemed a good solution.
You may also have noticed that although the table is sortable by various columns, I don’t order the data, and if you refresh the page a few times you’ll see the initial 10 are not always the same. Apart from it being a waste of time to sort them beforehand, since you can sort them with a simple click, I never scroll through the table but instead use the amazing search capability that the <v-data-table> Vuetify component offers.
That’s pretty much it. It’s not a very complicated Vue app, but I am very happy with the result. It looks amazing and it was fun to write. But most importantly, it saves me SO much time.
One of the first things I wanted to get right was CI. I use both CircleCI and Travis CI. Why am I using both? No real reason, other than the fact that I wanted to learn how to use both of them.
It took me some time to get them both working as I wanted, but one thing that kept failing from time to time was Laravel Dusk, with the dreaded message:
Facebook\WebDriver\Exception\SessionNotCreatedException: session not created: This version of ChromeDriver only supports Chrome version 75
(Driver info: chromedriver=75.0.3770.140 (2d9f97485c7b07dc18a74666574f19176731995c-refs/branch-heads/3770@{#1155}),platform=Linux 4.15.0-1028-gcp x86_64)
This had happened a few times before, and I knew how to fix it, using the great dusk:chrome-driver artisan command. However, I now wanted to fix it once and for all.
The basic fix is to get the current version of Google Chrome installed and update the ChromeDriver to the same version. It turns out it’s not that difficult.
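The version-matching logic used in the CI steps below is plain shell string handling. Here it is in isolation, with a hard-coded version string standing in for the real `google-chrome --version` output, so each step is visible:

```shell
# In CI this would come from: CHROME_VERSION="$(google-chrome --version)"
CHROME_VERSION="Google Chrome 75.0.3770.140"

# Drop the product-name prefix...
CHROMEDRIVER_RELEASE="$(echo "$CHROME_VERSION" | sed 's/^Google Chrome //')"
# ...then keep only the major version, using shell parameter expansion
# (%%.* removes the longest suffix starting at the first dot).
CHROMEDRIVER_RELEASE=${CHROMEDRIVER_RELEASE%%.*}

echo "$CHROMEDRIVER_RELEASE"   # 75
```

That major version number is exactly what `php artisan dusk:chrome-driver` expects as its argument.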
For CircleCI, I already have a step to upgrade the ChromeDriver:

- run:
    name: Update Chrome Driver
    command: php artisan dusk:chrome-driver 74
so all I need to do is change that step to

- run:
    name: Update Chrome Driver
    command: |
      CHROME_VERSION="$(google-chrome --version)"
      CHROMEDRIVER_RELEASE="$(echo $CHROME_VERSION | sed 's/^Google Chrome //')"
      CHROMEDRIVER_RELEASE=${CHROMEDRIVER_RELEASE%%.*}
      php artisan dusk:chrome-driver $CHROMEDRIVER_RELEASE
For Travis CI the change is very similar. My current before_script step includes

php artisan dusk:chrome-driver 74

so I just need to add a few more commands before that:
before_script:
  - phpenv config-rm xdebug.ini
  - touch ./storage/logs/laravel.log
  - touch ./database/database.sqlite
  - php artisan migrate --force
  - php artisan passport:install
  - CHROME_VERSION="$(google-chrome-stable --version)"
  - CHROMEDRIVER_RELEASE="$(echo $CHROME_VERSION | sed 's/^Google Chrome //')"
  - CHROMEDRIVER_RELEASE=${CHROMEDRIVER_RELEASE%%.*}
  - php artisan dusk:chrome-driver $CHROMEDRIVER_RELEASE
  - google-chrome-stable --headless --disable-gpu --remote-debugging-port=9222 http://localhost &
  - php artisan serve &
Note I use google-chrome-stable here instead of google-chrome as for CircleCI.
So I decided to look at the main Rollbar package for Laravel and see what that did. My idea was to try and replicate that in Lumen.
The Laravel package I looked at was jenssegers/laravel-rollbar, which is very good if you have a Laravel project. This package doesn’t really do much itself; it relies a lot on the base package, rollbar/rollbar.
So, first of all, you need rollbar/rollbar, so add it to your project with

composer require rollbar/rollbar
Note that this will also install monolog/monolog as a requirement. And because of that, we can configure Rollbar like any other Laravel logging channel: via a config file.
This is my config/logging.php file:
<?php

return [
    'default' => env('LOG_CHANNEL', 'stack'),

    'channels' => [
        'stack' => [
            'driver' => 'stack',
            'channels' => ['rollbar'],
        ],

        'rollbar' => [
            'driver' => 'monolog',
            'handler' => Rollbar\Monolog\Handler\RollbarHandler::class,
            'access_token' => env('ROLLBAR_ACCESS_TOKEN'),
            'level' => 'debug',
        ],
    ],
];
The important part here is the rollbar channel; the rest is up to you, you don’t have to use the stack channel, by default or otherwise. The meaning of each configuration option is explained in the rollbar/rollbar documentation.
DISCLAIMER: I haven’t tried all of the possible configurations, so I cannot guarantee they all work. However, as these are simply passed on to the Rollbar package, I don’t see why they shouldn’t.
The main thing in the jenssegers/laravel-rollbar package is its service provider. It’s quite exhaustive in the way it checks whether it’s safe to use Rollbar, and in how it can bypass configuration settings by checking the environment directly.
But I don’t need all that, so my RollbarServiceProvider looks like this:
<?php

namespace App\Providers;

use Illuminate\Contracts\Config\Repository;
use Illuminate\Support\ServiceProvider;
use Rollbar\RollbarLogger;
use Rollbar\Rollbar;

class RollbarServiceProvider extends ServiceProvider
{
    public function register()
    {
        $this->app->singleton(RollbarLogger::class, function () {
            $config = $this->app->make(Repository::class);

            $defaults = [
                'environment' => app()->environment(),
                'root' => base_path(),
                'handle_exception' => true,
                'handle_error' => true,
                'handle_fatal' => true,
            ];

            $rollbarConfig = array_merge($defaults, $config->get('logging.channels.rollbar', []));

            $handleException = (bool)array_pull($rollbarConfig, 'handle_exception');
            $handleError = (bool)array_pull($rollbarConfig, 'handle_error');
            $handleFatal = (bool)array_pull($rollbarConfig, 'handle_fatal');

            Rollbar::init($rollbarConfig, $handleException, $handleError, $handleFatal);

            return Rollbar::logger();
        });
    }
}
It’s pretty much the same, but stripped down. Now we need to configure it. First, add a new environment variable in your .env file called ROLLBAR_ACCESS_TOKEN. Note that its value should be the post_server_item type of access token from Rollbar.
Finally, now that we have all the pieces, we need to glue them together in the bootstrap/app.php file. So, add the following two lines to it:
$app->register(\App\Providers\RollbarServiceProvider::class);
$app->configure('logging');
And there you have it: Rollbar integration in Lumen.
In pure OOP you would model this with something like the following:
class Vehicle {
    public function getType() {
        return 'vehicle';
    }
}

class Truck extends Vehicle {
    public function getType() {
        return 'truck';
    }
}

class Car extends Vehicle {
    public function getType() {
        return 'car';
    }
}

class Moped extends Vehicle {
    public function getType() {
        return 'moped';
    }
}
Now, as I said, these were all stored in the same vehicles table. My problem was that I wanted to be able to do something like
$vehicles = Vehicle::all();
and get an Eloquent collection where each item is of the proper class, for example
[
    Truck {
        ...
    },
    Car {
        ...
    },
    Moped {
        ...
    },
    Truck {
        ...
    }
]
This is all pseudocode but I hope the issue is clear.
I did some research and found a nice solution that takes advantage of the fact that Laravel now always returns a collection, so we can override the newCollection() method and recast the models. The article suggested creating a new collection and a factory for the recasting:
class VehicleCollection extends Illuminate\Database\Eloquent\Collection
{
public function __construct($items)
{
parent::__construct($items);
$this->recastAll();
}
private function recastAll()
{
$newItems = [];
foreach ($this->items as $model) {
if ($model instanceof Vehicle) {
$newItems[] = VehicleFactory::build($model);
} else {
$newItems[] = $model;
}
}
$this->items = $newItems;
}
}
class VehicleFactory
{
public static function build(Vehicle $model)
{
switch ($model->type) {
case 'truck':
return (new Truck())->setRawAttributes($model->getAttributes(), true);
case 'car':
return (new Car())->setRawAttributes($model->getAttributes(), true);
case 'moped':
return (new Moped())->setRawAttributes($model->getAttributes(), true);
default:
// We should never reach this, but in case we add a new type in the DB and we haven't (yet)
// added the corresponded class, this will prevent an error
return $model;
}
}
}
class Vehicle extends Illuminate\Database\Eloquent\Model {
public function newCollection(array $models = []) {
return new VehicleCollection($models);
}
}
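To see the recasting behave outside of Laravel, here is a dependency-free sketch of the same idea. All class names here are illustrative stand-ins: plain arrays replace Eloquent attributes, and the switch mirrors the factory above.

```php
<?php

// Dependency-free sketch of the factory recast: plain arrays stand in for
// Eloquent attributes, so these names are illustrative only.
class PlainVehicle
{
    public function __construct(public array $attributes = []) {}
}

class PlainTruck extends PlainVehicle {}
class PlainCar extends PlainVehicle {}
class PlainMoped extends PlainVehicle {}

class PlainVehicleFactory
{
    public static function build(PlainVehicle $model): PlainVehicle
    {
        // Pick the concrete class from the `type` attribute; fall back to
        // the generic model for unknown types, as the article does.
        switch ($model->attributes['type'] ?? null) {
            case 'truck': return new PlainTruck($model->attributes);
            case 'car':   return new PlainCar($model->attributes);
            case 'moped': return new PlainMoped($model->attributes);
            default:      return $model;
        }
    }
}

$rows = [
    new PlainVehicle(['type' => 'truck']),
    new PlainVehicle(['type' => 'car']),
    new PlainVehicle(['type' => 'spaceship']), // unknown type stays generic
];

$recast = array_map([PlainVehicleFactory::class, 'build'], $rows);
```

Mapping the factory over the collection is exactly what the recastAll() method above does, just without the Eloquent plumbing.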
This worked quite well at the beginning, but then I had to do it for two more models, one of which was the user model. Having three factory classes and three collection classes was a bit too much; I needed to find a way to be more concise. I decided to use a trait:
trait RecastModel
{
/**
* @param array $models
*
* @return Collection
*/
public function newCollection(array $models = [])
{
$that = $this;
return (new Collection($models))->map(function ($model) use ($that) {
if ($model instanceof self) {
return $that->setNewModel($that->recastModel($model), $model);
} else {
return $model;
}
});
}
protected function setNewModel(Model $newModel, Model $oldModel): Model
{
$newModel->setRawAttributes($oldModel->getAttributes(), true)
->setRelations($oldModel->getRelations());
$newModel->exists = $oldModel->exists;
return $newModel;
}
/**
* This method should return a new model, of a more specific class.
*
* This is where the logic to differentiate between the models is implemented.
*/
abstract protected function recastModel(self $model): Model;
}
This was much simpler. Now any of the three multi-type models just needed to use the trait and implement the recastModel() method.
The setNewModel() method has been declared protected in case your model needs to do something different from the default.
Note also the setting of the exists property on the new model. This is important so that the model is not created again when saved.
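The trait pattern itself can be demonstrated without Eloquent. The sketch below uses hypothetical plain classes: the trait owns the generic "recast each item" loop, while the using class supplies the type decision, just as recastModel() does above.

```php
<?php

// Hypothetical, Eloquent-free sketch of the trait approach: the trait holds
// the shared mapping loop, the class holds the type decision.
trait RecastsItems
{
    public function recastAll(array $models): array
    {
        // Only recast instances of the using class; leave everything else alone.
        return array_map(
            fn ($model) => $model instanceof self ? $this->recastModel($model) : $model,
            $models
        );
    }

    // Each using class decides how to map a generic instance to a subclass.
    abstract protected function recastModel(self $model): self;
}

class BaseVehicle
{
    use RecastsItems;

    public function __construct(public string $type = 'vehicle') {}

    protected function recastModel(self $model): self
    {
        return $model->type === 'truck' ? new BaseTruck($model->type) : $model;
    }
}

class BaseTruck extends BaseVehicle {}

$items = (new BaseVehicle())->recastAll([
    new BaseVehicle('truck'),
    new BaseVehicle('car'),
]);
// $items[0] is now a BaseTruck; $items[1] is left as a BaseVehicle
```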
First of all, let me start by briefly explaining what HAL is. You can find more information here.
HAL stands for Hypertext Application Language, and it is a way to standardise how to pass information to clients who consume your APIs. The data returned is in JSON format and it consists of mainly three parts:
- the state, i.e. the resource's own data
- a _links section with endpoints to get more information about the resource
- an _embedded section with other resources (in HAL format) related to the resource in question

For example, a response from an API to get data about a book could look something like the following
{
"_links": {
"self": {
"href": "/api/v1/books/12345"
},
"author": {
"href": "/api/v1/authors/isac_asimov"
}
},
"id": "12345",
"title": "Foundation",
"author": "Isac Asimov"
}
Since I had at least three resources I needed to provide an API for, and with more to be added in future, I wanted a simple, nice, clean way to add new APIs for new resources. What I chose to do was to create a HAL resource class to help me build the response and a contract that a model should implement if there is an API for that resource.
So, let’s start with the HAL resource. I am going to break it down to make it easier to understand.
As I said, there are three parts, so I have them as protected properties:
class HalResource
{
protected $state = [];
protected $links = [];
protected $embedded = [];
//...
}
Now, I needed to define setter methods for them. The setter methods for the $state
and
$links
properties are quite straightforward:
public function setState(CastToHalContract $state): self
{
$this->state = $state->toJsonHal();
return $this;
}
public function addLink($ref, $href): self
{
$ref = trim(strtolower($ref));
if ($ref != 'self') {
$this->links[$ref] = trim(strtolower($href));
}
return $this;
}
Note the use of the toJsonHal()
method. More on it later.
I chose to have them return the object itself so that I can nicely chain these calls when building the API response.
The setter for the $embedded
property is slightly more complicated. I decided to have two
setter methods: one to add another HalResource
object directly and one to add a collection
of Eloquent models. This way the controller will be so much easier to read and understand.
public function addEmbeddedResource($ref, HalResource $resource)
{
$ref = trim(strtolower($ref));
if (!isset($this->embedded[$ref])) {
$this->embedded[$ref] = [];
}
$this->embedded[$ref][] = $resource;
}
public function addEmbeddedResources($ref, Collection $collection)
{
$collection->each(function ($item) use ($ref) {
$this->addEmbeddedResource($ref, (new self())->setState($item));
});
}
They are not quite the same. As per the HAL specification, an embedded resource is a fully fledged HAL resource, with embedded resources of its own if necessary.
This can be achieved with the addEmbeddedResource() method, but not with the addEmbeddedResources() (note the plural) method. This is because the latter works on Collections of models and creates HAL resources on the fly with only the state set. This was enough for what I needed, so I did not look into a more sophisticated way of adding embedded resources.
Finally, I needed a method to transform this object into an array, ready to be returned by the API.
public function toArray(): array
{
$data = $this->state;
foreach ($this->links as $ref => $href) {
$data['_links'][$ref]['href'] = $href;
}
if (!empty($this->embedded)) {
$data['_embedded'] = [];
foreach ($this->embedded as $ref => $resources) {
$data['_embedded'][$ref] = [];
foreach ($resources as $resource) {
/** @var HalResource $resource */
$data['_embedded'][$ref][] = $resource->toArray();
}
}
}
return $data;
}
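To see the whole thing in action without a framework, here is a condensed, self-contained variant of the class. This is a sketch for illustration, not the original: state is passed as a plain array instead of going through the contract, and the collection helper is omitted.

```php
<?php

// Condensed, framework-free variant of the HalResource idea, for illustration.
class SimpleHalResource
{
    protected array $state = [];
    protected array $links = [];
    protected array $embedded = [];

    public function setState(array $state): self
    {
        $this->state = $state;
        return $this;
    }

    public function addLink(string $ref, string $href): self
    {
        $ref = trim(strtolower($ref));
        if ($ref !== 'self') {           // the self link lives in the state
            $this->links[$ref] = $href;
        }
        return $this;
    }

    public function addEmbeddedResource(string $ref, self $resource): self
    {
        $this->embedded[trim(strtolower($ref))][] = $resource;
        return $this;
    }

    public function toArray(): array
    {
        $data = $this->state;
        foreach ($this->links as $ref => $href) {
            $data['_links'][$ref]['href'] = $href;
        }
        foreach ($this->embedded as $ref => $resources) {
            foreach ($resources as $resource) {
                $data['_embedded'][$ref][] = $resource->toArray();
            }
        }
        return $data;
    }
}

$book = (new SimpleHalResource())
    ->setState(['_links' => ['self' => ['href' => '/api/v1/books/12345']], 'id' => '12345'])
    ->addLink('author', '/api/v1/authors/isaac_asimov')
    ->addEmbeddedResource('reviews', (new SimpleHalResource())->setState(['rating' => 5]));

$payload = $book->toArray();
```

The fluent chaining is exactly why the setters return `$this`, as noted above.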
I’m sure you noticed I have already used the contract I was talking about as a typehint in one of the setter methods’ signatures, so let’s define it. It is extremely simple:
interface CastToHalContract
{
public function toJsonHal(): array;
}
That’s it, just one method. Every model that you would like to return as a HAL resource will need to implement this contract and off you go.
I worked for a magazine at the time, so an issue had articles and contributors associated with it.
I already had the Issue model class defined; all I needed to do was to implement the toJsonHal() method.
class Issue extends Model implements CastToHalContract
{
//...
public function toJsonHal(): array
{
return [
'_links' => ['self' => ['href' => route('api.v1.issue.index', [$this->issueid], false)]],
'id' => $this->issueid,
// Add all the other properties that you wish to return by the API
// ...
];
}
// ...
}
One thing to notice here is the inclusion of the _links
section. You may think this is wrong
as I have defined a setter method for it and I should be using it. And you’re probably right.
The thing is, though, I want to be 100% sure that the link to itself is always present, so instead of relying on me remembering to call the addLink() method, I decided to make my life easier by always including it in the implementation of the toJsonHal() method.
Also notice that the implementation of the addLink()
method does not allow the developer
to add a self
link. This is to prevent accidentally adding the wrong link.
I was now ready to put everything together in the controller.
public function info(LegacyIssue $issue)
{
$issueResource = (new HalResource())->setState($issue);
// Add the previous and next links if any
$prev = $this->issueService->getPreviousIssue($issue->getKey());
if (!empty($prev)) {
$issueResource->addLink('prev', route('api.v1.issue.index', $prev->getKey(), false));
}
$next = $this->issueService->getNextIssue($issue->getKey());
if (!empty($next)) {
$issueResource->addLink('next', route('api.v1.issue.index', $next->getKey(), false));
}
// Add the articles, if any
$issueResource->addEmbeddedResources('articles', $this->issueService->getArticles($issue->getKey()));
// Add the contributors, if any
$issueResource->addEmbeddedResources('contributors', $this->issueService->getContributors($issue->getKey()));
return response()->json($issueResource->toArray());
}
Let’s go through the controller and see what it does.
The first thing is to set the state of the new HAL resource. Notice that this will use the model’s implementation of the toJsonHal() method.
Then I add the endpoint for the previous and next Issue resource, if any, and the embedded resources.
Finally, the HAL resource is transformed into an array and returned as a JSON structure.
The DatabaseMigrations and DatabaseTransactions traits only work on the default database. Or so I thought.
It turned out that the DatabaseTransactions trait uses the connectionsToTransact property to wrap more than one database into a transaction. So, all I had to do was to define that property.
abstract class TestCase extends \Illuminate\Foundation\Testing\TestCase
{
use CreatesApplication, DatabaseTransactions;
protected $connectionsToTransact = ['mysql', 'legacy', 'blog'];
}
Note that mysql, legacy and blog are the connection names I defined in config/database.php
'connections' => [
'mysql' => [
'driver' => 'mysql',
'host' => env('DB_HOST', '127.0.0.1'),
// ...
],
'legacy' => [
'driver' => 'mysql',
'host' => env('LEGACY_DB_HOST', '127.0.0.1'),
// ...
],
'blog' => [
'driver' => 'mysql',
'host' => env('BLOG_DB_HOST', '127.0.0.1'),
// ...
],
// ...
]
Make sure all your databases support transactions, e.g. MyISAM tables in a MySQL database do not support transactions.
This was all hunky-dory for PHPUnit tests, but what about Laravel Dusk? The issue here is that the tests
run in a browser and therefore we cannot use the DatabaseTransactions
trait as it won’t work.
Using the DatabaseMigrations trait was not an option. My legacy database was big, in terms of number of tables, and recreating it every time would take a very long time. Besides, I did not have a set of migrations to run (yes, I could have written them, but as I said, the migrations would have taken too long).
I had to think of something else.
My solution was to use the events that Laravel fires automatically when a model is created to record those models and delete them at the end of the test. This is the general idea.
So, first of all, I created a new event: app\Events\EloquentModelCreated.php
class EloquentModelCreated
{
use SerializesModels;
public $model;
public function __construct(Model $model)
{
$this->model = $model;
}
}
Nothing fancy here. This is the event that will be fired when a model is created, and all it does is store the model itself.
Then I created a listener for that event: app\Listeners\DatabaseTransactionForDusk.php
class DatabaseTransactionForDusk
{
static protected $createdModels = [];
public static function rollback()
{
// Delete the models in reverse order. This prevents errors when deleting
// a record with a foreign key without deleting the parent record first.
// I should say "should prevent errors", as I have not tested it with FKs.
collect(self::$createdModels)->reverse()->each(function ($model) {
$model->delete();
});
self::$createdModels = [];
}
public function handle(EloquentModelCreated $event)
{
self::$createdModels[] = $event->model;
}
}
The purpose of this listener is to record all the models that have been created and to delete them at the end. The rollback method should probably live somewhere else; it’s not really the job of a listener to do that. But I couldn’t find another appropriate place for it, and it’s such a small listener that I didn’t want to spread the code too much.
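The reverse-order deletion can be illustrated with a hypothetical stand-in model, no Laravel involved: children created after their parents get deleted first.

```php
<?php

// Hypothetical stand-in for an Eloquent model, used only to show why the
// listener deletes recorded models in reverse creation order.
class RecordedModel
{
    public static array $deleted = [];

    public function __construct(public string $name) {}

    public function delete(): void
    {
        self::$deleted[] = $this->name;
    }
}

// Models recorded in creation order: parent first, then its child.
$created = [new RecordedModel('parent'), new RecordedModel('child')];

// Rollback deletes in reverse order, so the child row (which may hold a
// foreign key pointing at the parent) goes first.
foreach (array_reverse($created) as $model) {
    $model->delete();
}
// deletion order: child, then parent
```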
Now, all my Eloquent models will need the created
event to be handled by the listener, so to
make my life easier I created a trait: app\Models\ListenToModelCreated.php
trait ListenToModelCreated
{
public function __construct(array $attributes = [])
{
parent::__construct($attributes);
$this->dispatchesEvents = array_merge(
$this->dispatchesEvents,
['created' => EloquentModelCreated::class]
);
}
}
This is used in all my models. It’s true that if I forget to use it in one model then those records will not be rolled back and I will end up eventually with lots of rubbish in my testing database. But for the legacy models I do have a base class that they all extend which uses the trait, so happy days.
Last but not least, I had to register the listener for the new event, so I have modified the boot
method of the EventServiceProvider
class as follows:
public function boot()
{
if ($this->app->environment() == 'dusk') {
$this->listen[EloquentModelCreated::class] = [DatabaseTransactionForDusk::class];
}
parent::boot();
}
I’m sure you have noticed I check for the environment before registering the listener. This is because I want to roll back the databases only when running Laravel Dusk. I could run the tests with APP_ENV=dusk php artisan dusk. However, I am bound to forget to set the APP_ENV variable before running Dusk, thus cluttering my database with rubbish data. Instead, I take advantage of the fact that Laravel Dusk looks for a special .env file to use (read here).
So, I simply create a .env.dusk.local
file (which I had anyway because of other settings) and set the
APP_ENV
there like this
APP_ENV=dusk
...
The last piece of the jigsaw is to make sure that the rollback is actually performed at the end of the test. This is done by defining the $beforeApplicationDestroyedCallbacks property of the DuskTestCase base class
protected $beforeApplicationDestroyedCallbacks = [
[DatabaseTransactionForDusk::class, 'rollback'],
];
And that’s it. Now even when running Laravel Dusk your databases will be rolled back to their original state.