Some changes for me

For the past several years, a lot of my focus has been on the (PHP) community. I've spoken at numerous conferences and usergroups. And although I've been cutting down on the number of conferences, I've done more usergroups in the past year than in the years before that.

In December 2018, I made the decision to cut down on this a bit more. This has nothing to do with not wanting to speak anymore, but more with an opportunity that has arisen that I want to take. I want to put 110% of my effort into it, which means I have to cut down on other activities. Speaking at usergroups and conferences is one of those things.

PHP has been my biggest hobby for the past 20+ years. It is great that I have been able to make it my job as well. A few years ago, I also picked up something I had been interested in for years: I started doing live radio. My first radio show was on the now-discontinued Internet radio station On Air Radio, after which I moved on to another Internet radio station, IndieXL. Both times I did everything from my own little radio studio that I had built at home. It was a lot of fun.

My interest in radio began when I was a teen. A Dutch morning show was also broadcast on TV, so I was "watching radio" every morning. In the 90s, the Dutch radio station KinkFM introduced me to an incredible amount of alternative music. KinkFM was the best radio station I could imagine in terms of music, but also in terms of DJs: people with an incredible passion for and knowledge of music. When the station was shut down by its owner in 2011, I was incredibly sad.

Two years ago, one of the original founders of KinkFM saved the brand name from the company that owned it at the time. While he wasn't planning to restart the station, the response he got was overwhelming, so he started researching his options. I got in touch, and over a year ago I started doing a Spotify playlist for them called KLUB KINK.

Late last year, the announcement came: a new radio station focusing on alternative music would be launched. Since FM is nearly a thing of the past, the name will now simply be KINK.

I have been asked to evolve my Spotify playlist into a podcast and, next to that, to present a radio show. After giving it some thought and looking at my schedule, I have decided to take this opportunity. I love doing radio, and being able to do it for my all-time favorite radio station is amazing. Starting on Thursday, February 7, I will be doing a radio show every Thursday from 7PM to 9PM.

Will I be completely gone from conferences and usergroups? Of course not! But as I mentioned earlier, I really want this to succeed, I want to give it 110% of my effort, and that means making tough choices.

Git hooks on Windows

I was recently asked to add a git hook to our main repository to add the Jira issue number to the commit message in an automated way. Until now we had been handling this really inconsistently, with many commits not having the ticket number at all, while others had it either at the start or the end of the commit message. Since our branches all contain the ticket number (the naming is like `feature/APP-123-this-new-feature`), this should be easy, right?

Searching around, I found that Andy Carter had already published a hook written in Python to do just this. I copied the script and put it in the prepare-commit-msg file. Because I work on Windows these days, I expected #!/usr/bin/env python not to work, so I updated it to #!C:\Program Files\Python\Python.exe.

I started testing my new hook, but it wouldn't work. I would constantly run into an error when committing: error: cannot spawn .git/hooks/prepare-commit-msg: No such file or directory.

After trying many different things, including adding quotes to the shebang and just using python.exe (since Python was added to the PATH), I found out that I had simply been too quick to change the script. It turns out Windows does support #!/usr/bin/env python. So after simply changing the shebang back to its original value, the commit hook worked like a charm!
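For reference, the core of such a hook is quite small. Below is a minimal sketch of a prepare-commit-msg hook in the same spirit (this is not Andy Carter's exact script; the regex and message format are my own assumptions) that pulls the ticket number out of a branch name like feature/APP-123-this-new-feature and prepends it to the commit message:

```python
#!/usr/bin/env python
# Minimal prepare-commit-msg sketch: prepend the Jira ticket number,
# taken from the current branch name, to the commit message.
import re
import subprocess
import sys


def extract_ticket(branch):
    """Return the Jira ticket id (e.g. APP-123) from a branch name, or None."""
    match = re.search(r'([A-Z]+-\d+)', branch)
    return match.group(1) if match else None


# Git invokes this hook with the path to the commit message file as the
# first argument; guard on that so the script does nothing when run bare.
if __name__ == '__main__' and len(sys.argv) > 1:
    commit_msg_file = sys.argv[1]
    branch = subprocess.check_output(
        ['git', 'symbolic-ref', '--short', 'HEAD']).decode().strip()
    ticket = extract_ticket(branch)
    if ticket:
        with open(commit_msg_file, 'r+') as f:
            content = f.read()
            if ticket not in content:  # avoid duplicating the ticket number
                f.seek(0)
                f.write('%s %s' % (ticket, content))
```

Saved as .git/hooks/prepare-commit-msg (and made executable), this would turn a message like "fix the thing" on branch feature/APP-123-this-new-feature into "APP-123 fix the thing".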

Introducing By The Campfire

It's been over a year since I first decided to start a podcast. I was inspired by the Dutch podcast Wilde Haren De Podcast, in which host Vincent Patty has interesting conversations with his guests that offer a lot of information about the person as well as the topics they're interested in, and I decided I wanted to do something similar with guests from my general area of interest. I first started working on a website, made a list of people I'd like to have on as a guest, and then... procrastinated for way too long.

A few weeks ago I finally got off my butt and made a serious attempt at scheduling some dates. And with pride I can say that the first episode is now published! In it, I talk with the awesome Rick Kuipers on a variety of subjects ranging from Fortnite to dancing, from speaking at conferences to freelancing and from chess to self-steering teams.

Everyone, I'd like to introduce By The Campfire. Because the best conversations are had in situations where there is very little distraction, like when you're sitting by the campfire. And yes, this idea is totally inspired by the campfire talks I've had at WeCamp over the years.

I'll be publishing the podcast on an irregular basis (depending on when I can schedule to meet with someone interested in being a guest on the podcast). You can subscribe to the podcast in several services already, although I am still awaiting approval from Apple and Spotify. I've got a list of all places where we've been approved on the About page. And of course, you can simply listen on the website.

Some thank yous are required. For instance, to Stephan Hochdoerfer and the Ingewikkeld crew, who helped with finding the right name for the podcast. Also to someone (I can't remember exactly who) who came up with the idea of asking my guest to come up with a title for the episode, which results in a title such as How Fortnite Dances Can Help Your Speaking Career. To my wife Marjolein, who graciously let me borrow her Zoom H4n recorder. And of course to Rick for being my first guest.

Surface Book 2 Nvidia card not recognized

So I love my Surface Book 2. It is an amazing laptop and tablet hybrid. There are very few issues I have with it. But for the past couple of weeks I've been having issues with the video card. The Surface Book 2 has two video cards: an Intel UHD 620 card for the tablet, and an Nvidia GeForce GTX 1050 for stunning graphics when the Surface is connected to the keyboard section. Especially when playing games, I was recently getting very low FPS, and I had no idea what could be causing it.

When I opened my device manager, I was shocked to see that the Nvidia card was not there. I started searching the Internet, and apparently this is an issue that has been plaguing quite a few Surface Book users. Eventually I found a solution, posted by Philip Aaron, that worked for me!

  1. Open the device manager so you can see which display adapters Windows sees
  2. Disconnect power from the laptop
  3. Detach the Surface from the keyboard base
  4. Wait until the device manager has reloaded its devices
  5. Attach the Surface back to the keyboard base
  6. Wait until the device manager has reloaded its devices
  7. The Nvidia card should now be recognized again; reconnect the power

Having to do this every time I start the computer is rather annoying, so I've contacted Microsoft to see if there is a permanent solution, but for now, this at least solves my FPS issues.

What I learned from the Zend/Rogue Wave acquisition (or: why I'm so excited about the Github/Microsoft deal)

When Zend was acquired in 2015, I was openly and vocally scared for the future of Zend (and PHP). I honestly thought that this deal would mean Zend would be absorbed into Rogue Wave and that would be the last of it. A friend DM'ed me to warn me that doing this so openly might make it a self-fulfilling prophecy, not necessarily because of Rogue Wave but because companies could lose trust in PHP.

Fast-forward three years and PHP is still stronger than ever, and if anything we're getting more instead of less from Zend: their products are still going strong, and ZendCon has been turned into ZendCon & Open Enterprise, broadening the scope of the conference and thereby making it more interesting for developers.

I didn't know Rogue Wave back in those days, which is what made me a bit scared. Basically, I was doing what I always tell people not to do: being scared of the unknown. Getting out of that comfort zone is (mostly) a good thing. I shouldn't have reacted the way I did.

About the Github acquisition

Now I'm seeing a lot of people being scared by Microsoft acquiring Github, and the funny thing is that I feel no fear. Part of that is probably because I know Microsoft, and while they have a past of very bad behaviour when it comes to open source, they've changed a lot in recent years. Sure, I also make jokes about Windows sometimes, although those jokes are always based on what Windows was years ago, but overall their current leadership seems very aware of and open to the concept of open source.

Another reason I'm not scared is that at this point Github does not seem profitable. A good financial injection from a big company that has no problem investing some money in something like Github may be exactly what they need right now. Sure, things will change after the acquisition has closed, but probably only to make Github profitable again. I trust Microsoft to understand what Github is about and how to run the company.

Moving to Gitlab

Now, there are enough reasons to move to Gitlab. For instance, their great CI/CD tooling, their tight integration with Docker, or one of their many other features. The fact that they run a very transparent and open company (including the Gitlab codebase itself) can be another good reason. My company has mostly migrated to Gitlab already because of our great Gitlab/Docker/Rancher setup. Moving because of the acquisition of Github is probably the worst reason, though. Keep in mind that Gitlab has Google VC backing, so moving to Gitlab does not mean you're now hosted by an independent Git hosting company.

Having said that, I hope that all those people who migrated their codebases to Gitlab will find out about the awesome features they offer that Github does not. You'll have to get used to their interface, but Gitlab is awesome.

Back to Github

Many people are predicting doom and gloom for Github: open source repositories should be moved or else..., Microsoft will go and have a look at your private repositories. I see no reason why any of that would happen. Github will still be Github. Of course they will keep your codebase safe and won't look at the contents of your repository: that is their core business. If they did stuff like that, 99% of their customers would be off their service in no time.

So to all developers who are scared after the news of the acquisition broke, I'd say: give Microsoft a chance. The company has changed a lot, and they deserve a chance to prove what they're worth. If Github is working for you, there is no need to move away. As said, there are good reasons to move to Gitlab, but please move to Gitlab for the right reasons, not out of fear of Microsoft.

Surface Book 2 For Development

Over the past month and a half I've been trying to fully switch to a new work machine. Instead of my trusty MacBook Pro, I've mostly been working with a Microsoft Surface Book 2. Here are my lessons from this period.


Let's start with a bit of context. Over the past 10+ years, I've been using Apple laptops exclusively for work. It began when I started my job at Ibuildings and got the opportunity to choose between a PC laptop and a Mac. I'd heard good things about Macs, so I decided to give it a try. When I came home from work after that first day, I told my wife, "If I ever leave Ibuildings, I'm going to have to buy myself a Mac". I was impressed. The ease of use, the intuitiveness and the user experience were all so nice. So much better than Linux, which I'd been using in the years before. Or even Windows, which I'd been using until Windows ME came out and forced me into the stable hands of Linux.

I've been a full-on Apple fanboi ever since. Until a couple of years ago, there was nothing Apple did that would stop me from using their stuff. The platforms they built were stable and because they control both hardware and software, everything was tuned to each other.

But in the past couple of years I've become a bit more unhappy with Apple's decisions. Their platforms are becoming less stable, less reliable, and more than once I've felt their decisions were based mostly on economics, on money, and not on usability, which had been their focus until then (or at least that's how it felt).

When Microsoft first announced the Surface (and later the Surface Book), I was intrigued. A tablet that is also a laptop. Everything in a single device. Powerful enough to work on, yet also easy to bring to a meeting and not have a laptop screen in front of your face. When the first rumours started that Apple was going to introduce an iPad Pro, I sincerely hoped it would be similar: An iPad device running macOS. That would be great!

The announcement of the iPad Pro was a disappointment to me. With it running iOS there was no chance I could do serious development on it. The specs were also disappointing. This was not an option anymore.

My MacBook had slowly been becoming a device that frustrated me instead of a device I felt happy to spend 8-12 hours a day on. So I've been looking around. In February, while visiting a Dutch Mediamarkt store to get some stuff, I noticed a Surface Book and started playing with it. A Mediamarkt employee came by to give me a demo of some of its features, and I was pretty much convinced.

Borrowing a Surface Book

Switching platforms is a big decision, however. I've invested so much time, effort and money in the Apple platform, much of which I'd (at least partially) lose by switching to Windows, that I really wanted to test-drive the Surface Book before actually making a decision. So I started looking for companies that rent out Surface Book devices. I found several, but they were all aimed at renting the device for a couple of days, for events and such. If I were to rent it for 2-3 months (which I'd need to really test-drive the device), I could just as well buy the device. The prices were much higher than anticipated, and long-term rental did not really seem to be a thing. So I reached out to Gerard, one of my contacts inside Microsoft Netherlands, and asked him if he knew of companies that do this. Gerard introduced me to Paul from Microsoft Netherlands, and Paul offered to lend me a device. For free. The only catch was that I'd share my experiences. Given that I was planning on sharing my experiences anyway if I found a rental device, I quickly agreed. This was a great opportunity!

Waiting is hard

After agreeing to borrow the device, the waiting was probably the hardest part. It felt just like that moment when you've ordered your new MacBook Pro on the Apple website and then have to wait for it to be delivered. A shiny new device is coming your way. Luckily, the wait wasn't all that long, because a week later a parcel was delivered. Ooooohhh.

The first time

I unpacked the Surface Book 2 and booted it up. In terms of experience, it definitely felt like unpacking and booting up a new MacBook. A nice wizard helped me set up the basics of the computer, like the user, the wifi, etc. The whole setup could also be done using speech with the Cortana software, but as fancy as that may seem, I somehow dislike microphones constantly listening to what I'm doing and saying, so I quickly turned that off. All in all, the initial setup was done in a couple of steps and a couple of minutes.

Now, to set this up as a development workstation I need some software. The initial list of things I thought I needed was:

  • Firefox
  • Docker
  • Git
  • PHPStorm

That should at least give me a basic setup for doing my development projects. Just like any new computer this is pretty straightforward. Download, run the installer, run the software. Nothing special about that. But I quickly found out I was missing some other things:

  • An SSH key for Github
  • 1Password for my passwords
  • A MySQL client

The first two were done pretty quickly; the last took me a bit more time.

Sequel Pro

Replacing Sequel Pro took some time. Nothing works like Sequel Pro in terms of user experience. My first thought was to try MySQL Workbench, but I quickly concluded that it is not my thing. It just misses any form of user experience. After searching around the Internet for a bit I found HeidiSQL, a free software package for managing MySQL, PostgreSQL and MSSQL databases. It's not as good as Sequel Pro, but it comes really close. The interface is very clear and intuitive. I'd found my Sequel Pro replacement.

Connecting my headphones

The first real issue I ran into was when I wanted to connect my bluetooth headphones (JBL E65BTNC) to the Surface Book. At first, the Surface Book didn't even see my headphones; then, when it did recognize them, it wouldn't connect to them. When I turned bluetooth and my headphones off and on again, they connected. One would jokingly call this "the Windows way", but it seems that after all these years, it actually still is the Windows way. As I used the Surface Book more and more, bluetooth turned out to be its main weak spot: I tried to connect or reconnect several different bluetooth devices during my trial period. Eventually, most devices did connect, but it usually took several tries and turning the devices and the Windows bluetooth functionality off and on before it worked.


Another issue I had was with Docker networking. My initial playing around with Docker for Windows worked fine, but as soon as I wanted to start working on my client project, I had major issues with networking in Docker. We have a pretty complex Docker setup which somehow did not want to work. Luckily, a co-worker who is also using Windows was able to help me out. In the Hyper-V Manager, under Virtual Switch Manager, I needed to create a new external network. I used my wireless controller as the external network for this. The important part was to tick the box 'Allow management operating system to share this network adapter'. Once I had done this and restarted Docker, it worked like a charm.


Another useful tip I got was to use ConEmu. ConEmu is an easy little application allowing you to have multiple Powershell tabs in a single window. I use shells a lot, and with Powershell it is impossible to have several tabs with different shells, but using ConEmu, you can do this. ConEmu is actually quite powerful, because you can configure several different shell configurations. This means you can easily open a new tab with a different configuration, if you have shells for several purposes. Quite useful!

Bash on Windows

At some point I was also pointed towards the option to run an actual Bash on Windows. I tried it out, and it indeed works quite nicely. Since there is no less-like program in Powershell (or I did not find any), the Bash shell makes it a lot easier to quickly check the contents of files or, for instance, tail -f a log file. I had quite a few issues, though, integrating Bash with my Docker setup, because Bash actually runs inside a Linux subsystem on Windows, so it is not fully integrated with Windows. I was pointed to a solution: you need to set export DOCKER_HOST=tcp://localhost:2375 in your .bashrc in the WSL. Then, in the settings for Docker for Windows, you need to tick the checkbox Expose daemon on tcp://localhost:2375 without TLS. Now you can use the docker commands in your WSL Linux (after you've installed the docker and docker-compose packages using apt-get). Unfortunately, this did not fully solve the problem, so I've decided (given that I only have a limited trial period at the moment) to let this rest and just use Docker from Powershell and use Bash for things like tailing, quick file access, etc.
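As a sketch, the WSL side of that setup comes down to a single line in your ~/.bashrc (this assumes you've ticked the Expose daemon on tcp://localhost:2375 without TLS checkbox in the Docker for Windows settings):

```shell
# In your WSL ~/.bashrc: point the Docker CLI at the daemon that
# Docker for Windows exposes on localhost without TLS.
export DOCKER_HOST=tcp://localhost:2375

# Quick sanity check from a new WSL shell: if the client can reach the
# Windows daemon, this prints both client and server versions.
docker version
```
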


Over the previous years I have invested quite heavily in Mac. Not just in terms of money, but also in terms of tooling. One of the major issues I've encountered is the fact that I am a heavy user of the Messages app on my MacBook. It allows me to quickly type iMessages to other people with Apple devices. So a big question for my decision to switch to Windows is: do I want to get rid of iMessage? iMessage is only available on Apple platforms, so there is no way I can have the same integration if I switch to Windows.

In the previous months I've solved this by using WhatsApp Web in Rambox. I was already using Rambox for access to Slack, Gmail/Inbox, Discord and Google Calendar, so it was easy to also add WhatsApp to that. This allowed me the same ease of sending messages, and actually it made it even easier, because I was now also able to communicate that same way with people that do not have Apple devices. Only downside is: It's WhatsApp.


I don't use my MacBook purely for work. At home I also use Steam to install and manage the games that I play on my MacBook. The most important games for me at the moment are Orwell, Eternal, Prison Architect and Football Manager 2018. All of these ran extremely well, with much better graphics, on the Surface Book 2. The touch screen was an added value for some games (like Eternal), because you could literally play by dragging cards onto the playing field. No mouse required anymore! Yay!

And there was more. I could now try out Fortnite, which doesn't really run on OSX. A great game, which I could easily play with all sliders cranked up to the maximum.

Switching was surprisingly easy

Now, this is not really a testament to Windows, Mac or anything like that, but more to the fact that over the previous years I'd switched to so many cloud-based services. Switching to Windows was extremely easy thanks to things like Dropbox, Todoist, Google Calendar, Google Drive, 1Password and OneNote. Thanks to applications like these, switching to another platform is literally just installing the apps on the new platform and logging in, and it works. I immediately had access to the majority of the important files that I had on my Mac, all my notes, my calendar and my todo-list (most of my life is dictated by my todo-list these days; I could not afford to lose this data).

The touch screen and the detachable screen

The touch screen on the Surface Book is quite nice. It works really well. Unfortunately many apps are not yet adapted to work with touch screen, which made it a bit more annoying. Luckily, I also got a Surface Pen with the Surface Book, which allows for more precise aiming.

Where the power of the touch screen really stood out was when I would detach the screen to use it as a tablet, for instance when I went into meetings. When using OneNote, I could use the Pen to write down notes, then select those notes and use the OneNote Ink to text feature to convert my written notes into actual text. The handwriting recognition is impressive! It makes some mistakes here and there but they're small and easy to correct.

As a developer, detaching the screen had some downsides as well though. When the screen is detached, the battery life is (understandably) a lot lower. If you're running several Docker containers, it's hard to even sit through an hour-long meeting without your battery running out. So using the screen as a tablet does add some effort: You'll have to shut down your Docker containers and IDE before going into the meeting. When you turn off such battery-slurping applications though, your battery problems disappear immediately and you can sit through meeting after meeting without ever fearing of running out of battery life.

Where Apple still wins

There are still some things Apple definitely wins at. Mostly this is the default toolset that comes installed. When you get a Mac, you can easily open any file you receive, whether those are images, PDF files, Word documents or spreadsheets. It all just opens. One of the main reasons for this is that every Mac is equipped with a great standard toolset: Preview, Pages, Keynote, Numbers; it's all there and can open just about any file you get. With Microsoft, you get the Office tools installed, but you don't get the license by default, which means you get a read-only view with a very annoying popup asking you to get a license. PDF files are opened in Edge (for some reason), which seems to work, but Edge is not as lightweight as Preview.

Another thing I've noticed is that it is a lot easier to find good apps with a good UX for macOS. There's a lot of software you can find for Windows, but a lot of it simply doesn't seem to have been made with any sense of UX. Even the aforementioned Todoist is a lot less usable on Windows than it is on macOS. In a blog post they promised improvements, but if they have already released those improvements, I don't want to know how bad it was before. Don't get me wrong, Todoist works well on Windows, but the interface is far from as smooth as it is on macOS. Todoist is just an example; I have many similar experiences with other apps.

Having said all that, all those experiences were with apps that already work well under Windows. They could just be better. It's definitely not a reason to stop my move to Windows.


After just under 2 months of using the Surface Book 2 I'm very sad to let it go. This machine is amazing. I'm very positively surprised by Windows these days. When I "left" Windows (during the Windows ME time) it was a horrible operating system for power users, and the times since then that I had to work with Windows were not very good experiences. But since then, a lot has changed. Windows is a serious option again for development work, and with the Surface Book 2 Microsoft has a fantastic and very powerful machine that does well both as development machine and for your occasional gaming pleasure. I know what my next development laptop will be, and it's not a MacBook Pro. It's going to be a Surface Book 2.

Mental Health First Aid

It was at the TrueNorthPHP conference in 2013 that I first saw Ed Finkler speak, giving his Open Sourcing Mental Illness talk. This talk has meant so much to me on so many levels, but one of the things I took away from it was the existence of Mental Health First Aid. Mental Health First Aid is basically the mental illness counterpart of regular first aid. It gives you basic information about how to act when you encounter someone with a mental illness, especially in crisis situations. This includes approaching the person, what to do and not to do when talking to them, and how to make sure people get the right help, including a hand-off to (mental health) medical professionals.

Fast forward a few years, to about two years ago. Doing the MHFA course was still on my wishlist, but there was still no option to do so in The Netherlands. Just as I was looking into options for travelling to the UK for the course, I found out that one of the Dutch regional institutes for mental health (GGZ Eindhoven) was working on bringing MHFA to The Netherlands. I contacted them to see if I could be part of the trial group they were running, but never heard back. I put my focus on some other things I wanted to do and put MHFA on hold again.

Earlier this year, I decided to add a bit more structure to the training programs within my company Ingewikkeld, to better enable my employees to increase their knowledge and skills, but I decided I should definitely also use this new structure myself. I looked up the MHFA options in The Netherlands and found out that there were now many options for taking the course, including in my favorite city, Utrecht. I signed up for the course, and over the past 4 weeks I have had four 3-hour sessions.

The goal

In the first session we were asked about our goals. My main goal was to understand more about mental illness and get some practical guidance on how to act when talking to someone with a mental illness, be it in a crisis situation or not. And over the course of the sessions, this is indeed what happened. I learned a lot about depression, anxiety, psychosis, substance abuse and crises such as suicidal tendencies, self-harm, panic attacks, aggression and more. About what they are and how to handle such situations. When we were asked at the end of today's session to summarize our experience over the past 4 weeks, my answer was:

Goals achieved

It became personal

As I've got issues with depression myself, the first two sessions especially caused a lot of self-reflection as I learned more about what happens with depression. There were a lot of familiar situations in the course material, and it was very interesting to hear more background information on those situations. Our group was a very nice and diverse one, with people from lots of different backgrounds, which gave me a lot of insight into how different people experience different situations.

As the course progressed, though, other mental illnesses were covered that I had no experience with. This was definitely eye-opening. I now have so much more understanding of what can happen in people's heads, and I hope that helps me respond more empathetically to such situations, if I ever encounter them.

Why I recommend more people take this course

Isn't it a bit weird that we find it very normal to take regular first aid courses, but we try to stay away from anything related to mental health? Somehow there is still a taboo on mental health related problems. And yet (at least here in The Netherlands) there are regular news items about people in mental health crises. It seems like this is a growing problem, yet nobody wants to know how to act in such situations?

Taking this course will make you understand more clearly what happens when someone has a mental health issue, how it affects their life, and how to act when you encounter a situation involving a mental health issue. It will help you be more empathetic, not just in crisis situations, but also when simply talking to someone with a mental illness. I also think it will look good on your resume to potential employers. It is still a rare skill to know how to act in these situations, and employers will benefit from you having this knowledge. So check which local organization offers the course and register. I'm pretty sure you won't regret it.

3 weeks without coffee

Three weeks ago I decided that I was going to take a break from coffee. Every once in a while I take a break from certain things, or try to minimize their usage. Some months ago I minimized the amount of soft drinks I was drinking, and three weeks ago it was time to quit coffee. I wanted to break the habit and get rid of my caffeine dependence.

I'd done this before, so I knew what to expect, and it was not very different this time around. The first day was fine except for the habit of getting coffee; I actually accepted a coffee when visiting a client, out of habit. "Do you want coffee?" I was asked, and just like that I said "sure". I didn't realize I had quit coffee until I had already finished half of it. The second and third day I had some headaches and got a lot of urges to get coffee. I resisted the urges, got water or tea instead, and pretty much got over my addiction (or dependence, or habit). From day 4 onward, I had pretty much no urge to get coffee anymore.

The effects? I sleep better. I wake up less tired (well, except when I go to bed really late of course). I am also less tense, feel more relaxed. Long story short, it just feels a bit better.

I intend to keep this up for a long time. Now that I'm used to not drinking coffee, it's not that hard anymore, and the urge to get coffee is gone. I'm also considering doing a similar habit-breaking experiment with alcohol.

If you've got similar experiences with experiments like these, I'd be happy to hear from you.

Installing Bolt extensions on Docker

I'm currently working on a website with the Bolt CMS. For this website, I am using an extension. Now, the "problem" with extensions is that Bolt installs them using Composer, so they end up in the .gitignore'd vendor/ directory. That is fine while developing, because the extension is simply in my local codebase, but once I commit my changes and push them, I run into a little problem.

Some context

Let's start with a bit of context: our current hosting platform is a bunch of DigitalOcean droplets managed by Rancher. We use GitLab for our Git hosting, and GitLab Pipelines for building our Docker containers and deploying them to production.

The single line solution

In Slack, I checked with Bob to see what the easiest way was to get the extensions installed when they're listed in the configuration but missing from vendor/, and the solution was so simple I had not thought of it:

Run composer install --no-dev in your extensions/ directory

So I adapted my Dockerfile to include a single line:

RUN cd /var/www/extensions && composer install --no-dev
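For context, the relevant part of the Dockerfile could look something like this (a sketch: the base image, the Composer COPY line, and the paths are assumptions, only the last RUN line is the actual fix):

```dockerfile
# Illustrative sketch; base image, paths and Composer version are assumptions.
FROM php:7.2-apache

# Get the composer binary from the official Composer image
COPY --from=composer:1 /usr/bin/composer /usr/bin/composer

# Copy the Bolt site into the image
COPY . /var/www
WORKDIR /var/www

# Install the project's own dependencies
RUN composer install --no-dev

# Install the Bolt extensions listed in extensions/composer.json
RUN cd /var/www/extensions && composer install --no-dev
```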

I committed the changes and pushed them; GitLab picked them up and built the new container, Rancher pulled the new container and switched it on, and lo and behold, the extension was there!

Sometimes the simple solutions really are the best solutions.

A rant about best practices

I have yet to talk to a developer who has told me that they were purposefully writing bad software. I think it is simply part of being a developer: you write software that is as good as you can possibly make it within the constraints that you have.

In our effort to write the Best Software Ever (TM), we read up on all the programming best practices: design patterns, refactoring and rewriting code, new concepts such as Domain-Driven Design and CQRS, all the latest frameworks. And of course we test our code until we have decent code coverage, and we sit down with our teammates to do pair programming. And that's great. It is. But it isn't.

In my lightning talk for the PHPAmersfoort meetup on Tuesday, January 9th, 2018, I ranted a bit about best practices. In this blog post, I try to summarize what I ranted about.

Test Coverage

Test coverage is great! It is a useful tool to measure how much of your code is touched by unit (and possibly integration) tests. A lot of developers I talk to tell me that they strive for 100% code coverage, 80%, 50%, or some other arbitrary percentage. What they don't mention is whether they actually look at what they are testing.

Over the years I have encountered so many unit tests that were not actually testing anything. They were written for a sole purpose: to make sure that all the lines in the code were "green", covered by unit tests. And that is useless. Completely useless. You get a false sense of security if you work like this.

There are many ways of keeping track of whether your tests actually make sense. Recently I wrote about using docblocks for that purpose, but you can also use code coverage to help you write great tests. Generating code coverage helps you identify which parts of your code are not covered by tests. But instead of just writing a test to turn a line green, consider what that line of code stands for, what behavior it adds to your code, and write your tests to verify that behavior, not just to add a green line and an extra 0.1% to your coverage. Code coverage is an indication, not proof of good tests.
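To make that concrete, here is a small sketch (the Discount class and its numbers are invented for illustration, not taken from a real project) of the difference between a test written for coverage and a test written for behavior:

```php
<?php
// Hypothetical class under test; names and numbers are made up.
class Discount
{
    public function apply(float $price, float $percentage): float
    {
        if ($percentage < 0 || $percentage > 100) {
            throw new InvalidArgumentException('Percentage must be between 0 and 100');
        }
        return $price - ($price * $percentage / 100);
    }
}

// "Coverage" test: touches the happy path, asserts almost nothing useful.
// The method could return the wrong amount and this would still pass.
function testApplyReturnsAFloat(): void
{
    $discount = new Discount();
    assert(is_float($discount->apply(100.0, 10.0)));
}

// Behavior test: pins down what the method is actually supposed to do.
function testApplyDeductsThePercentageFromThePrice(): void
{
    $discount = new Discount();
    assert($discount->apply(100.0, 10.0) === 90.0);
    assert($discount->apply(50.0, 0.0) === 50.0);
}
```

Both tests make the coverage number go up; only the second one fails when the calculation is wrong.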

Domain-driven design

DDD is a way of designing the code of your application based on the domain you're working in. It puts the actual use cases at the heart of your application and ensures that your code is structured in a way that makes sense to the context it is running in.

Domain-Driven Design is a big hit in the programming world at the moment. These days you don't count anymore if you don't do DDD. And you shouldn't just know about DDD or try to apply it here and there, no: ALL YOUR CODES SHOULD BE DDD!1!1shift-one!!1!

Now, don't get me wrong: there is a lot in DDD that makes way more sense than any approach I've used in the past, but blindly applying DDD to every bit of code you write does not make any sense. Doing something DDD-ish is not that hard, but doing DDD right takes a lot of learning and a lot of effort. And for quite a few of the things people have recently wanted to use full-on DDD for, I wonder whether it is worth that effort.

So yes, dig into DDD. Read the blue book if you want, read any book about it, read all the blog posts, and apply it where it makes sense. Go ahead! But don't overdo it.

Frameworks

I used to be a framework zealot. I was convinced that everyone should use frameworks, all the time. For me it started with Mojavi, then Zend Framework, and finally I settled on Symfony. The approach and structure that Symfony gave me made so much sense that I started using it for every project I worked on. My first step in any project would be to download (and later: install) Symfony. It made my life so much easier.

Using a framework makes a lot of sense in a lot of situations. And I personally do not really care which framework you use, although I see a lot of people saying "You use Laravel? You're such a n00b!" or "No, you have to use Symfony for everything" or "Zend Framework is the only true enterprise framework and you need to use it".

First of all: there is no single framework that is good for every situation. Second of all, why use a pre-fab framework when you can build your own? And sometimes you really don't need a framework at all. Stop bashing other people's solutions and start solving your own problems. Pick the right tool for the job and fix stuff.

Event sourcing + CQRS

Event sourcing is a way of storing and retrieving data in which the current state is not stored as the single truth. Instead, it stores the events that describe changes to your data. At any point in time, you can replay those events to arrive at the current state of your data, but you can also look back into the history for earlier states. It is a great concept for storing data where you need a paper trail (for instance, for audit purposes) or where you need versioning of your data.
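As a sketch of that idea (the bank-account events here are invented for illustration; real event-sourcing libraries add a lot more), deriving state from events is essentially a replay:

```php
<?php
// Illustrative event-sourcing sketch: an account balance derived
// purely from its history of events, never stored directly.
$events = [
    ['type' => 'deposited', 'amount' => 100],
    ['type' => 'withdrawn', 'amount' => 30],
    ['type' => 'deposited', 'amount' => 50],
];

function replay(array $events): int
{
    $balance = 0;
    foreach ($events as $event) {
        if ($event['type'] === 'deposited') {
            $balance += $event['amount'];
        } elseif ($event['type'] === 'withdrawn') {
            $balance -= $event['amount'];
        }
    }
    return $balance;
}

// Current state is just the replay of all events...
$current = replay($events); // 120
// ...and historical state is a replay of a prefix: the paper trail is free.
$afterTwoEvents = replay(array_slice($events, 0, 2)); // 70
```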

CQRS is a method of separating your C from your R, U, and D. In most places where I've seen it applied, it boils down to separating reading data from the data store and writing data to the data store.
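In code, that separation does not have to be exotic. A minimal sketch, with invented interface names:

```php
<?php
// Illustrative CQRS-style split: reads and writes get separate models.
interface UserReadModel
{
    public function find(int $id): ?array;
}

interface UserWriteModel
{
    public function save(int $id, array $data): void;
}

// A naive in-memory implementation of both sides, for illustration only.
class InMemoryUsers implements UserReadModel, UserWriteModel
{
    private $users = [];

    public function find(int $id): ?array
    {
        return $this->users[$id] ?? null;
    }

    public function save(int $id, array $data): void
    {
        $this->users[$id] = $data;
    }
}
```

Code that only displays data depends on UserReadModel; only the handful of places that change data ever see UserWriteModel.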

Both are, like Domain-Driven Design, a big hit in the programming world at the moment, and there's a lot of fanaticism around them. Of course you should do event sourcing, preferably on all your data. Of course you should use CQRS, it is such a great way of separating responsibilities.

And while I agree with the arguments, I don't think they should be applied to every situation. In many projects, a "traditional" relational database will work. Or the previous big hit, document databases, will work as well. And for your average project, separating read and write is not a huge requirement either. Sure, it will add some structure to your code, but also some overhead while developing. As Martin Fowler puts it:

For some situations, this separation can be valuable, but beware that for most systems CQRS adds risky complexity.

Pair programming

Now here's a programming practice that I truly love: pair programming. Sit down with another developer and start coding. One developer is the "driver": they type the code and offer implementations for the route that the "navigator" lays out. The navigator sits next to the driver and comes up with ways of approaching the task at hand.

There is something about this way of working together that makes a lot of sense. My way of looking at a problem is probably different from that of the person sitting next to me, and by combining our approaches and picking the best of both worlds, the solution will be better than anything either of us could have come up with individually.

Having said that, I don't think any developer would say "yes, let's do pair programming full-time". Or if they do, they're not like me.

Pairing full-time would exhaust me. When I do full-day pairing sessions (which I occasionally do), I am completely dead by the end of the day. When I do it a couple of days in a row, I need the whole weekend just to recover, leaving very little time to actually do fun stuff. The amount of social interaction involved in pairing would kill me if I did it full-time. So would the intensity, because pairing is intense. Instead of just having to think of your own solution, you have to combine it with the input of the other half of the pair, and together you have to decide which way to go. And there is such a thing as Decision Fatigue.

Instead, and I've done this several times with great success, you should combine pairing sessions with individual work time. Pair for an hour, or maybe two, then split up and work on parts of the task individually, then come back together to combine your individual work. This still gives you the benefits of working together, but won't burn you out in two weeks' time.

Refactoring + Rewriting

Refactoring is the process of changing parts of your code while keeping the outward behavior the same. It improves the code quality without impacting the code that relies on your code.

Rewriting code is basically refactoring without giving a shit about backward compatibility. It's refactoring YOLO style. You completely replace the old code with new code, and the behavior of the code may change according to your wishes.

Depending on who you're talking to, every bit of legacy code should be refactored or rewritten, as soon as possible.

And while I agree that we should refactor or rewrite legacy code, I probably disagree on the definition of "as soon as possible".

Refactoring and rewriting code are great tools to improve the quality of your codebase, and with that the quality of your application. They are extremely powerful tools, but with great power comes great responsibility. Given unlimited time and funds, I believe any developer in this world would keep refactoring and rewriting their code forever, and never ship a damn thing. Because as we develop our software and our skill set, we find out about new and different ways of solving the same problem. And every time we discover a fancy new way to solve a problem, all the code we have written until then becomes instant legacy code. This is a never-ending cycle.

Legacy code is fine as long as it works, performs, and is secure according to the business specifications and requirements. From a technical point of view, you may want to fix some of the issues the code has, but there has to be a balance between delivering code improvements and delivering functionality. We should not refactor or rewrite parts of the code as we encounter them, but instead keep track of what we have found in a central place and determine, in close collaboration with the business, when to fix what. If you really need a quick solution, encapsulate the legacy in a small layer of better code. That way you can keep using the legacy while having a nice, "modern" interface to it.
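Such an encapsulation can be as small as a thin wrapper. A minimal sketch, with an invented legacy class:

```php
<?php
// Invented legacy code: awkward naming, flag arguments, int return codes.
class LegacyMailer
{
    public function do_send($to_addr, $subj, $body_txt, $is_html = 0)
    {
        // ... imagine 500 lines of battle-tested but ugly code here ...
        return 1; // 1 on success, 0 on failure
    }
}

// Thin modern layer: a clean interface over the legacy, no rewrite needed.
class Mailer
{
    private $legacy;

    public function __construct(LegacyMailer $legacy)
    {
        $this->legacy = $legacy;
    }

    public function send(string $to, string $subject, string $body): bool
    {
        return $this->legacy->do_send($to, $subject, $body, 0) === 1;
    }
}
```

New code talks to Mailer; the legacy keeps working underneath until the business decides a real rewrite is worth it.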

Conclusion

All of the above are just examples of best practices that you need to consider. When writing code you should, of course, keep all the best practices in mind that you can think of, but there is no need to apply them all at the same time. Strike a balance between code quality and speed of development, applying the practices that fit the situation you're in at that point. Best practices are generalized so as to apply to the majority of situations, which also means they may not apply to your situation, or there may be more important factors to weigh in. So read up on all the best practices, keep them in mind, but think before you act. Apply them wisely after weighing all the factors that apply to your situation. And please, please use your common sense.