Multiple GitHub accounts on a single machine

I haven't used GitHub much recently, because we switched to GitLab some time ago. The main reason for using GitHub at this point is client work, but sometimes I still need my account for open source reasons.

For one of the customers I do work for, however, I was asked to create a separate GitHub account. I then ran into some issues: the client's private repositories are shared with this new account, but by default I'm logged into my own account on the command line (well, it picks my default key). Searching around I ended up on the freeCodeCamp site, where Bivil M Jacob shared the solution that ended up working: add a configuration file.
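One of those standard steps is generating a separate key for the client account. A quick sketch (the filename matches the one referenced in my config; the email is a placeholder, and -N "" means no passphrase, so drop it if you want one):

```shell
# Generate a separate ed25519 key for the client account.
# The email is a placeholder; -N "" creates the key without a passphrase.
ssh-keygen -t ed25519 -f ~/.ssh/cf_ed25519 -C "you@example.com" -N ""
```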

The first three steps are pretty standard and not that hard to understand, but the fourth step was new to me. I had no idea this was possible. So I followed Bivil's guidance and ended up with my own .ssh/config file:

# Skoop - the default config
Host github.com
    User git
    HostName github.com
    IdentityFile ~/.ssh/id_rsa

# clientskoop
Host github-clientskoop
    User git
    HostName github.com
    IdentityFile ~/.ssh/cf_ed25519

I've redacted this example to not include my actual client name (the host alias here is a placeholder). Now, if I want to clone a private repository from my client, all I need to do is:

git clone git@github-clientskoop:client/repository.git

In the above command (the repository path is a placeholder too), note that the host name matches the Host alias I defined in the config file. And lo and behold, when I clone this way, it works without a problem: SSH picks the correct key for this specific repository and I can do everything I want: push, pull, etc.
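For a repository that was already cloned against the default host, you can re-point it at the client-specific alias instead of recloning. A sketch (the alias and repository path are placeholders):

```shell
# Re-point an existing clone at the client-specific SSH alias.
# "github-clientskoop" and client/project.git are placeholders.
git remote set-url origin git@github-clientskoop:client/project.git
git remote -v    # verify the remote now uses the alias
```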

Sometimes the solution is simpler than you would've imagined.

Holiday project: Better wifi and cabled network

Almost two years ago we moved to our current house. That was right at the start of the current pandemic, and we ran into some potential issues with the move because it was unclear at first what the rules of the first lockdown were going to be. Luckily, moving was still an option.

In the old house we had set up our wifi network using a Ubiquiti AmpliFi set with a base station and two mesh points. This worked really well: it was an old house, so the wifi signal could pass through floors and walls pretty easily. The house was narrow but long, and we had good wifi coverage throughout.

The new house we moved into, however, had some challenges. It's a newly built house, very well insulated and with floor heating. It's great! However, a mesh setup that relies solely on a wifi signal being passed around quickly turned out to be unreliable. The signal was almost fully blocked by the heating and insulation material, giving great wifi on the ground floor but bad to no coverage on the first and second floor.

I contacted Ricardo Wijthoef of Tweeweg Internet, who was also the man responsible for the wifi on our WeCamp island: a great person, very knowledgeable about setting up wifi and cabled networks. We got together and made a plan for how the network would work, what equipment we needed and how to set it all up. We started with the first floor (since the ground floor had pretty OK coverage with the Ubiquiti gear) and getting the cables up to the second floor. Once the day was over, we agreed to set a new date soon to finish the rest.

Unfortunately, Ricardo got quite ill, and after he had spent some time in the hospital, an email landed in my inbox with the announcement that Ricardo had passed away. While this was sad for our home network, the far greater loss was the fantastic person that Ricardo was: always willing to help people, friendly, and a funny guy.

For quite some time after that, I couldn't bring myself to even think about doing the network without Ricardo, but last year I got to the point where I said to myself: let's finish this thing.

The setup

Because getting a good wifi signal up two floors is almost impossible, the only solution is to pull cables up. Since we already had a cable going from the ground floor to the second floor and the first floor was already fully functional, I just needed the correct access points for the ground floor and the second floor. Ricardo had recommended TP-Link Omada gear, and since we had already installed an EAP235-Wall on our first floor, I ordered two more. Since these are powered by Power over Ethernet (PoE), I ordered a TL-SG1005P switch. This would ensure all access points would have power. To have full control over the network and connect to the modem/router of my fiber provider, I got us a TL-R605 router.

Now there were two more challenges to solve:

  • I wanted most devices in our living room to be connected by cable
  • I did not want to have to configure all devices separately

Cabled connections

The EAP235-Wall has 3 Ethernet ports, which is fantastic because it gives you good wifi as well as cabled connections. But our living room has more than 3 devices that I want cabled: our TV, our set-top box, our PlayStation, our BlueSound and our Apple TV. So I got us an 8-port unmanaged TL-SG108 switch that I connected to the ground floor EAP235-Wall. Problem solved. Once I realised how easy this was, I got another TL-SG108 for the office on the second floor. While wifi is good there, having Ethernet for our computers is even better, and those switches are not that expensive. Excellent!

Configure once

Now for the other thing: I wanted an easy way to manage all these TP-Link devices. Luckily, TP-Link has an easy solution for that in the form of the OC200 hardware controller, a small device powered by PoE that gives you a central place to configure all devices using the Omada Cloud environment. Configure once, then simply roll out the configuration changes to all devices in your network.

Well, that was easy

After connecting everything, the setup was a breeze. It was very easy to connect everything together and get the controller to recognize all the devices. I was a bit shocked at first that the default username/password combination for all devices was admin/admin, but when you log in to the devices you're forced to change that. Also, the controller takes over control of all the devices anyway. So that was easily settled.


Now, what did all of this cost? A quick overview:

  • 3 TP-Link EAP235-Wall access points: 70 euro each
  • 1 TP-Link TL-R605 router: 60 euro
  • 1 TP-Link OC200 hardware controller: 75 euro
  • 2 TP-Link TL-SG108 8-port switches: 30 euro each
  • 1 TP-Link TL-SG1005P 5-port PoE switch: 45 euro

That makes a total investment of 450 euro, plus the cables (but Ricardo was nice enough to just give those to us).


I am extremely happy with this setup. The Omada Cloud web interface and the Omada app are really easy to use and give you a lot of control over your network. The wifi connection throughout the house is now fantastic, and having the data-hungry devices such as the TV, set-top box, PlayStation, BlueSound and Apple TV on a cabled connection has greatly improved their audio and video quality. The investment was really worth it. Plus, it was a nice hobby project over the holidays.

Comference 2021: A free online conference

Last year, as the global pandemic caused us to cancel WeCamp, we felt compelled to do something else. That something else became Comference, an online conference from the comfort of your own chair, couch, bed, or wherever you want to comfortably watch a nice set of interesting talks. In terms of content we wanted to stay close to WeCamp: we felt tech was important, but there are many topics related to the tech world that are equally important and maybe not getting as much attention at tech conferences.

Our go/no-go moment for WeCamp is at the end of December or early January. This is because we have to reserve the WeCamp island and make a down payment on the reservation, but also because we start our search for both sponsors and coaches in January. So early this year we had to make the sad decision that 2021 was simply too early to start planning a new WeCamp. The result of that was, obviously, that we wanted to organize a new Comference.

Tomorrow we kick off that second edition of Comference, and I'm really looking forward to it. We have some amazing talks that I want to highlight here.


As much as developers don't want to hear this, documentation is important. I'm really looking forward to hearing Milana Cap speak about documentation based on the documentation of one of the most important open source projects in the world: WordPress.

Get rid of management

With our company Ingewikkeld we've been working a lot for and with Enrise over the years. It is fair to say that Ingewikkeld wouldn't be where we are today without Enrise. We've worked on some amazing projects with them. Some years ago they switched their whole business to completely self-managing teams and it's been interesting to follow that journey. Now we'll have some people from Enrise telling us about that switch and the new model.


Change can be hard. If you're convinced a change is necessary, but the rest of your team or organization isn't convinced, it'll take a lot of hard work. Jeremy Cook will tell us more about what you can do to get adoption for change without having to become some kind of dictator that pushes change down people's throats.

No More Scrum?

Most companies I come into as a developer or consultant have adopted some form of scrum. Jeroen de Jong is at Comference to tell people to stop doing scrum. What? Yes, he'll share the story of how he told a team to stop doing scrum and how that helped them improve collaboration.

Do you see the light?

Due to the pandemic a lot of people have started working from home more. And as the pandemic (hopefully) leaves this world, remote working seems to have gotten a stronger grip on the workplace. This is a great development, but it also introduces some challenges, such as getting a good home office set up. Camilo Sperberg talks about a topic often forgotten: lighting.


Diversity is a diverse (see what I did there?) topic. Last year we had Lineke talking about diversity from a completely different point of view, and this year Andreas Heigl takes yet another view on diversity: how do we scale diversity on a global level?

Don't believe the hype

APIs are here to stay, but we can certainly improve how we create them. Tim Lytle will be sharing how introducing hypermedia makes an API easier to traverse and causes fewer problems when implementing it.

The fellowship

If you've been following me for some time you know that I simply love the PHP community, and the concept of community in general. I'm really happy that one of the greats of our community, Wasseem Khayrattee, has agreed to do a talk about the community and what community can do for you as well as what you can do for the community.

Isolation on an island?

Last but certainly not least, we dive into the world of open source. Tonya Mork and Juliette Reinders Folmer are going to have a conversation about open source, and about how open source projects are not isolated islands, but together they are a constantly changing ecosystem of related projects.

See you there?

The whole of Comference is free to stream on YouTube (we'll also add the videos to our website). If you have any questions or want to discuss the subjects of the talks with fellow attendees, feel free to join the Comference Discord. That same Discord is also used during the game night to decide what games we're going to play. See you there?

Quick Git trick: Sign your commits after the fact

OK, so this is a situation you may get into, and I'm blogging this as much for myself as for others, because I know I might head into it again:

You start working on a new project for a new customer and forget to set up PGP signing of your commits in your Git configuration. Now you've pushed your branch and made a pull request, but the pull request cannot be merged because your commits are not signed. OH NO! Now what?

The one-liner

Luckily, I'm not the only one to have done this, and a quick search around the web turned up a post showing that it is actually quite easy. First, you configure Git to use your PGP key and set the correct email address. Then you use git log to find the hash of the commit before your first commit. Then you run:

git rebase --exec 'git commit --amend --no-edit -n -S' -i <commit hash>

Where, obviously, you replace <commit hash> with the actual commit hash.

Simply write and quit the editor that comes up describing what the rebase is going to do, and... TADA! All your commits are now signed.

Force push

Yes, you will have to force push your changes to your branch. But since the merging of the pull request was blocked anyway, no one will have pulled your changes. Right? RIGHT?

If anyone pulled your changes before you force pushed, you'll have to warn them and prepare them for a bit of extra work.


And that's it. That's all you need to do to sign your commits after the fact. And next time you start working on a new project, don't forget to configure Git to use your key and sign your commits from the start.
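The configuration step can look something like this (the key ID and email here are placeholders, not my actual values):

```shell
# Tell Git which PGP key to sign with, and sign every commit by default.
# The key ID and email below are placeholders.
git config --global user.signingkey 0xDEADBEEFCAFE1234
git config --global user.email you@example.com
git config --global commit.gpgsign true
```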

Ingewikkeld presents: Comference Summer Sessions

I am very happy to announce a new initiative from Ingewikkeld: Comference Summer Sessions. In three interactive panel discussions we'll be talking about three topics we feel are very important in development:

  • Open Source in your company
  • Facilitating company cultural change
  • The role of the Product Owner in a development team

Each session will last about 1.5 hours, in which the panel will discuss the subject at hand. Through our Comference Discord, you can also submit questions and remarks for the panel to talk about.

After each session, the panelists will also record a 30-minute podcast in which the panel summarizes the most important lessons from the interactive session. So if you miss the stream or if you want to be reminded of what was talked about, the podcast is there to help you out.

For the first session, on Open Source in your company, the panel is already known:

  • Host: Jaap van Otterdijk (Ingewikkeld)
  • Sebastian Bergmann (PHPUnit)
  • Stefan Koopmanschap (Ingewikkeld)
  • Erik Baars (ProActive)

I am very excited about this new series of events. The sessions are free to attend (we're streaming them live on YouTube). So block your calendars for the three dates:

  • Tuesday, June 29
  • Tuesday, July 27
  • Tuesday, August 24

I hope to see you there!

Dear recruiter

Dear recruiter,

This is an open letter to you, the recruiter who focuses on the tech industry. I've had many interesting experiences with recruiters, and I want to share some of them, both positive and negative, in the hope that you might learn something.

In my social bubble, both online and offline, many developers are not happy with you. Perhaps not you specifically, but recruiters are often seen as annoying, rude, lying people. It is therefore especially important for you to make a good first impression when you contact someone. Hopefully, the things I'm going to talk about will help you do that.

Be clear and honest from the start

I have had many interactions with recruiters that were either very unclear or dishonest, or conveniently left out some information in the early communication about a project.

One example was an email I got about a very interesting project. I showed interest, sent a CV and had an interview. From my side, that meant I had already invested several hours; hours that, for me as a freelancer, are unpaid. I invested them because I saw potential in the project.

The customer was interested and so was I. And then the paperwork arrived, which included a 90-day payment period. 90 days. That is three months. When I responded to that, I was told "this is standard for this customer". That might be true, but you knew that and I didn't. And I don't know a lot of freelancers that are OK with, or even capable of handling, a 90-day payment period. If this is standard, you knew about it and could've communicated it in your initial email (or quickly thereafter). That could've saved me (and you!) hours of work and a disappointment in the final stages of the process.

More recently, I got an email about an interesting full-remote project. I specifically communicated in our initial contact that full-remote was an important requirement, and you went ahead and introduced my developer to the customer. When that developer interviewed with the customer, it turned out that because they were recruiting for a new team, there was a 2-day on-site requirement. When I asked about this, you told me you had already heard about it. Why had you not communicated this to me? It could've saved me, my developer and you a lot of time and frustration.

Send the right projects

The number of messages from recruiters on LinkedIn and in my email inbox is pretty big. Which I can accept, because it should mean that we do our work well and you as a recruiter think we could be good potential candidates for the roles your customers have.

Unfortunately, when I wade through the emails, I can delete about 75% of them immediately. With my company we focus on PHP development, architecture, consulting and training (and you know that!), so why do I get so many emails for Python positions, Java positions, etc.? You are wasting a lot of my time, because yes, I go through all of these, since there might be a super cool project in there that we want to work on.

If you send out emails or LinkedIn messages, please make sure you only send messages that actually apply to me. If it's sort of related (like a product owner or scrum master role) that's fine as well, but things that are totally unrelated to our specialty can better be sent to the people that specialize in that.


I only rarely receive emails about projects where the rates are communicated in the first email. Most of the emails contain useless wording such as "good rates" or "competitive rates". Just tell me what you're willing to pay. I'm happy to discuss my rate and, in specific situations, adapt it to fit your client's wishes. But, here we go again, it would save you and me a lot of time if you communicated this upfront. Because if your maximum rate is 80% of my minimum rate, we're never going to make a deal.

Ask before sending my CV

Unfortunately, I've been in situations where I was contacted by you and I sent you my CV, and while we were still discussing specifics it turned out you had already sent my CV to your client. Later in our discussion it then turns out that we can't agree on specifics, and then I get the "but I've already sent your CV to the client and they're really interested, can't we agree on this, because this looks bad on me". Listen, if you've already sent my CV while we are still discussing specifics, that is not my problem. Also, please consider the damage it does to my image. Because I'm pretty sure you're not going to tell your client that you f*cked up. Which means that this specific client, which I may or may not encounter in the future, may have a negative experience with me, even though they have never even spoken to me.

That first impression

The above are only a few examples. I have many more where these came from. So let's talk about making a good first impression. And that really already starts with your initial contact:

  • Send an email first, with as much information as you have on the project: a good description of what the project is, what your client is looking for and why you think I would be or have a good candidate; a maximum rate (possibly accompanied by a preferred rate); a description of the process; and any special requirements or agreements that are important, such as payment terms, hours per week/month, specific work days, required insurances, and anything special that is expected to be included in the rate (specific hardware or software, background checks, etc.)
  • Feel free to call me after a couple of days if I haven't responded yet, but give me some time to consider whether I am (or have) a good candidate for the project and whether I can agree with the wishes and requirements
  • If any new information comes to light after your email or other contact we have, inform me of it. Don't leave it for me to find out later. Because I will (have to) find out.

This is just for that good first impression. If my first impression of you is not good, given the amount of bad apples in the recruitment business, it is very likely that you'll quickly make it to my mental blacklist and I'll ignore you in the future.

After a good first impression, make sure to show that you want to continue a good business relationship. I try to do that as well by contacting you if I feel there might be a good opportunity. Don't spoil it by spamming me with projects that you know I'm not a good fit for. If we agree to work together, make sure you keep up your end of the deal in terms of payments etc. That way we might be able to work together for a long time.

Stand out

In a recruitment world that is dominated by spammers, liars, frustrating communications and an overall negative attitude, it is quite easy to stand out. It is easy to show that you're serious about doing business with me by showing you don't just care about your own profit margin, but about a long-term business relationship. I care deeply about delivering high-quality services to my customers and, if we work together, to your customers. Please show me that you also care about that. If you do, I will not only be happy to work with you for a long time, but I'll also be happy to help you find other possible candidates, which may lead to more money and a better network for you. I'd call that a win/win situation. Please care. I do.

Speaker support

Over all the years that I've been a speaker at PHP conferences, I have been very happy as a speaker. Most conferences I spoke at would reimburse my travel and get me a hotel room for the duration of the conference. Most of the time, a speaker dinner was included as well. It got me to travel the world. I've seen many amazing cities such as San Francisco, Montreal, Verona, Barcelona, Paris, Berlin, Cologne and many more. I would not have seen all those cities without conferences paying for my airfare and hotels. There were even conferences where I'd pay for some of the costs. Either because I just wanted to be there anyway, or because the conference would offer my company a sponsor slot if I covered my own airfare.

Over the past few years, I've been reading more about how accessible speaking is; or rather, the lack of said accessibility. I had not considered this before, since I have been privileged to either have employers paying for part of the costs, or to have a company with a high enough income to cover some of the costs of speaking. There are developers who do not have the luxury of being able to cover those costs, or who are unable to just take the time off work. There are also other situations, such as having to pay for someone to babysit during a talk.

With Comference, and before that with WeCamp, we've always made an explicit effort to aim for a diverse line-up. And this year, we've decided to make an extra effort: for this year's edition of Comference we will compensate speakers for their effort. Especially with online conferences, where you can speak directly from your home or office, we hope this will allow new people to become speakers. Eventually, we hope to expose as many different viewpoints on technology-related subjects as possible, as we believe every viewpoint can be learned from.

We're not stopping at the speaker fee either. We also facilitate different talk lengths: speakers can submit 15-, 30- and 45-minute talks. We hope this enables people to submit a talk of the length they prefer for their subject, because not every talk should be 45 minutes or an hour.

The CfP for Comference is now open and I'd like to extend a warm invitation to anyone who has something to share. Our speaker line-up will be created with a combination of invited speakers and selections from our CfP.

Holiday project: Raspberry Pi NAS

After our Sonos died in early December, we were in the market for a new device. We did some research, got some advice and eventually decided to go for a BlueSound Node 2. While going through the features we noticed the option to play your own digital music library on it (yeah, I know the Sonos could also do that, but we never set it up) and we felt this was a good trigger to do just that. But getting a very expensive NAS (Synology or the like) just for that, especially after investing in the BlueSound, seemed overkill. Of course there are simpler options, so my son and I set out to create a simple NAS using a Raspberry Pi and an external USB harddisk, and made that our end-of-year holiday project.

What we got

My son did the research on what to get and we ordered the following:

Assembling the hardware

When all the hardware was in, it was time to assemble it. And that was surprisingly easy: not only because my son was the one doing the assembling, but also because the case included a little screwdriver, so we didn't have to look for the exact right screwdriver.

So: SD card in Pi board, Pi board into the case, screw the case closed, connect network cable, power and... oh shit. We don't have a micro-HDMI cable. OK. Let's order that one and wait for it to arrive.

Fast-forward a couple of days and the cable arrives: time to connect the Pi to a screen, keyboard and mouse. We had followed a tutorial on how to install OpenMediaVault, but unfortunately the Pi wouldn't boot. The green light blinked 4 times, which means it couldn't find a required file. So we searched around a bit and ended up at the Installing OMV5 On a Raspberry Pi document. This was perfect!

Installing OpenMediaVault

After having found the above document, installation was a breeze:

  • Put Raspberry Pi OS on the SD card
  • Install updates
  • Run the installer script that Ryecoaaron wrote
  • Make OMV recognize the USB storage
  • Set up a network share


Fun project!

This was a fun little project, and pretty simple. By now there are gigabytes of music on the NAS and our BlueSound is already playing it. And since I recently won a Raspberry Pi 400 kit, I'm already thinking about what my next Pi project will be :)

Check your SQL

Recently I've been digging into an internal API for one of our customers because there were some complaints about the performance. Some, and only some, calls would take over 10 seconds to complete. These were all calls that were being used by XHR requests fetching data for display, and the users were (rightfully so) getting annoyed by some pages taking so long to load.

The API itself is, for performance reasons, very light on framework-y stuff. There's a basic request/response handling layer, but beneath that it's basically raw SQL being fed to PDO. If you ask me, this was a great choice. And since the underlying (MySQL) database has some tables with quite a lot of data, I immediately expected the queries on that data to be the problem. So the first thing I did was check the queries that fetch the data to be returned. But those queries were not really a problem: they were pretty optimized, even though they fetch a lot of data through several joins. Their performance was great. So that was not the problem. OK, so what could possibly cause it then?

I decided to pull out Blackfire, a tool for profiling your code (and more). I've used Blackfire in the past to find the performance bottlenecks in the code I was working on and felt I needed to use the same here. That was a good decision.

After installing Blackfire in my local Docker setup (I used the PHP SDK for the API), I sent my first request with Postman, and when checking the function calls the problem immediately became clear to me.

That's a whole lot of time for a single SQL query.

Oh, right. 99.8% of my full request is taken by... a COUNT query? OK, I had not expected that. I had assumed after testing the query that fetches the data that the count query would be OK too. I was wrong. What did people say about assumptions?

OK, so let's have a look at the query. There's a great little bit of functionality to figure out what the problem is, which is EXPLAIN. I took the COUNT query, added EXPLAIN in front of it, and checked what MySQL would tell me.

OK, the COUNT query uses two WHERE clauses, and both fields have an index on them. However, the query only uses a single index. EXPLAIN told me which index, and also told me that using that index, MySQL would have to go through about 20000 records to check whether the other WHERE clause matched. And it apparently took quite a bit of time to do that.

But it is possible to create an index on a combination of two fields. Now, you do have to be careful with creating too many indexes, as your INSERT queries will get slower; it has to be a conscious decision that balances the performance hits on both sides. But in this case, it made sense to add the index. And with effect.
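As a sketch of what this looks like (the table and column names here are made up, not the customer's actual schema):

```sql
-- Hypothetical version of the slow COUNT query:
EXPLAIN SELECT COUNT(*)
FROM orders
WHERE customer_id = 42
  AND status = 'open';

-- EXPLAIN showed only one single-column index being used, with thousands
-- of rows scanned to evaluate the other WHERE clause. A composite index
-- covering both columns lets MySQL satisfy both clauses from one index:
CREATE INDEX idx_orders_customer_status ON orders (customer_id, status);
```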


Optimizations in SQL

When you run into performance problems, the cause is often some form of I/O: file access, databases or external APIs. External APIs are often things you can do little about (aside from caching). For file access, the main solution is "less file access": save stuff in Redis, Memcache or a similar memory-based solution. But for SQL, it can be worth digging into your database schema and queries; there are many possible solutions. Take your time to have a good look at your SQL, and learn to use the functions your database has to offer, such as EXPLAIN, to find the cause of your problems. Oh, and don't make assumptions like I did: use tools like Blackfire to quickly find the cause of the problem. It'll make your life a lot easier and your work more efficient.

symfony 0.6 to Symfony 5: What I learned from the framework

Today I was a speaker at SymfonyWorld, the global online conference for the Symfony framework. In my talk I travelled through my history with the framework and shared some of the lessons I learned over the years, focusing specifically on lessons from using the framework and seeing it evolve. This blogpost shares those same lessons.

I started my journey with Symfony (or actually symfony, back then) at version 0.6.7. The framework back then was truly full-stack: you either used it or you didn't, there was no in-between. The framework was MVC: there were controllers, models and views. The models were still Propel and the views were still plain PHP. And with the code I wrote back then, the controllers were huge. I did not yet know about correctly applying best practices in terms of thin controllers, using separate services, etc.

Take your time to learn

One of the first lessons I learned came before I even really started using the framework. I was working at a company called Dutch Open Projects, and we were building a software-as-a-service project; I don't think the term software as a service even existed yet, but we were doing it. The first version of the project was built on top of the Mambo CMS. Well, I don't know if you can remember Mambo, but at some point we decided that for security and application-structure purposes we wanted to migrate to something else, and frameworks were just starting to pop up as an interesting option instead of using a CMS or custom PHP as a base.

So I started digging into what frameworks were available. I made a list of potential candidates; I can't really remember which ones were on it, except that at least Zend Framework and symfony were serious contenders at that moment. The list had all the pros and cons for each framework. Together with my colleague Peter, I looked at that list of features, and we felt it was hard to decide which framework to use based on the list alone. Zend Framework was a serious candidate, but the features, and especially the solidness and everything symfony had already done that the other frameworks hadn't, made us consider symfony. The fact that we didn't have to write all that boilerplate code just to get started helped a lot.

And then Peter said: why don't we just take a day, just one work day? We sit down, we take symfony and we just start building a project. That's when I learned that it is OK to really take your time to learn about a tool and to see if it is the right fit. So we sat down, downloaded symfony and started building a simple prototype application. At the end of the day we had an actual working prototype. It was very simple and didn't have all the features we needed, of course, but it was enough to convince us that symfony was a good choice. It was impressive that in such a short time we could get so much done and actually have something that worked at the end of the day. That was the deciding factor for us. And it was a great choice, because it started a major journey in my career.


As I worked more and more with symfony, and symfony reached the stable 1.0 release, I really started appreciating the structure the framework gave me. Before Symfony, every time I created a new project I would copy code from the previous project and start altering it, because over time I'd learned new lessons about how to build an application. That meant my applications slowly drifted apart, bit by bit. Going back to a project from three or four applications ago, I'd have to adapt all over again: what am I doing here? How was this structured? Wasn't this done differently? Oh no, that was the project after this one. One of the things I discovered when I started using Symfony was how great it is to have one common structure.

Making connections

As I got more and more excited by Symfony and what it was offering, I pitched the idea to my boss to organize a Symfony conference. My employer back then, Dutch Open Projects, had (and still has) a beautiful villa out in the forest. My boss really liked the idea of organizing a conference at their office. There was a beautiful lawn, there was a swimming pool, and there was enough room to put up some tents and stuff like that. So we ended up organizing SymfonyCamp, the first symfony conference that ever happened. People brought their tents, put them up on the lawn, and we had a big tent where all the talks were held. Quite a few people who were important in the symfony community back then came over: Dustin Whittle, Jonathan Wage, Kris Wallsmith. Fabien was also there, which was amazing to us, because having the man himself at the conference was great. The talks were great, but one of the things I learned as well is that you learn just as much from talking to people and exchanging experiences, ideas and approaches. If you have the opportunity, go to a meetup or a conference and meet other people. Don't just attend the talks; also talk to other people, preferably people you don't know yet, because the connections you make at a conference can sometimes last for years. There are still a couple of people from SymfonyCamp that I talk to on a regular basis. Sometimes I ask them a question, sometimes they ask me one. But we're still connected.

Symfony 2: Separation of concerns

Symfony 2 was, compared to symfony 1, a completely new framework. A lot had changed, and that triggered another lesson. The introduction of components taught me about creating nicely isolated pieces of code, where each part has its own purpose and its own responsibility. You put all the related code in the same namespace and offer some kind of public API; you make sure that public API stays as stable as possible, and then you are free to refactor all the internals. This also made the code much easier to test, because it is no longer one big ball of spaghetti: every class has its own simple methods, and every method can easily be tested. Of course, another lesson I learned was not to make things too abstract, because that adds the risk of way too much complexity in the number of layers you need to dig through just to find out what's happening.
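To make that idea concrete, here is a minimal sketch in plain PHP (the class and method names are made up for illustration): one stable public entry point, with internals that can be refactored freely as long as the public behavior stays the same.

```php
<?php

// A hypothetical small "component": one stable public entry point,
// internals that are free to change.
final class InvoiceNumberGenerator
{
    // Public API: callers rely only on this signature staying stable.
    public function next(int $year, int $sequence): string
    {
        return $this->format($year, $sequence);
    }

    // Internal detail: can be refactored without breaking callers.
    private function format(int $year, int $sequence): string
    {
        return sprintf('%d-%05d', $year, $sequence);
    }
}
```

Because `format()` is private, you can change how numbers are built later without touching any code that calls `next()`, and `next()` itself is trivial to cover with a test.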

Migration should not be an afterthought

Migrations are part of your application planning. At some point, you know that you're going to have to migrate either to a newer version of your framework because the older version isn't supported anymore or maybe to a whole different platform because your platform isn't supported anymore (or whatever other reason).

In symfony 1, most of the code I wrote was completely tied into the framework, so upgrading from symfony 1 to Symfony 2 really meant pulling out all those pieces of code, or even rewriting parts, just to make them work with Symfony 2. In 2013 I went to SymfonyLive London, and my mind was completely blown by two people giving a talk called The Framework as an Implementation Detail. While some elements of that approach are a lot more common these days, it is still worth watching that video, or basically any video on the topic of hexagonal architecture. Keep in mind when watching it that in those days this was a relatively new approach in PHP and in Symfony. To summarize (and I'm going to completely butcher their content): the idea of hexagonal architecture is that you write your code in such a way that your business logic is completely isolated from any other logic. Your business logic should not need to know where your data is stored, where your files are stored, what kind of APIs you connect to, or even which framework you use. If you take that approach, then when you want to migrate from one version of your framework to the next, or even from one platform to another, you can simply take that piece of business logic and copy it over into your new project. The only thing left to do is fix the glue between your business logic and the rest of your application. If I had taken that approach in symfony 1, the upgrade to Symfony 2 would have been a lot less painful.
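A minimal sketch of that idea in plain PHP (all names here are hypothetical): the business logic depends only on an interface (a "port"), and the framework- or storage-specific code lives in an adapter that you can swap out.

```php
<?php

// Port: the business logic only knows this contract.
interface OrderRepository
{
    public function save(array $order): void;
}

// Business logic: no framework code, no database details.
final class PlaceOrder
{
    public function __construct(private OrderRepository $orders) {}

    public function execute(array $order): void
    {
        // ...domain rules would go here...
        $this->orders->save($order);
    }
}

// Adapter: one possible implementation. A Doctrine-backed or
// API-backed adapter could replace it without touching PlaceOrder.
final class InMemoryOrderRepository implements OrderRepository
{
    public array $saved = [];

    public function save(array $order): void
    {
        $this->saved[] = $order;
    }
}
```

Migrating frameworks then mostly means writing a new adapter, while `PlaceOrder` moves over unchanged.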

Symfony 3: Stability

As Symfony 3 came out and we upgraded our applications to the new version, I realized something about stability. The newer versions of Symfony followed a much clearer versioning scheme, one you could almost fully rely on. Small changes and bug fixes, new features, BC breaks: it was pretty much clear what was happening, and the changes were documented really well. The way deprecations were handled was also a lot better: things were first marked as deprecated, then a warning was given, and eventually the deprecated code was removed in the next major release. One of the major things I learned from Symfony at this point was not about development at all. It was about communication.

It was about communicating changes in code in such a way that people understand what is happening inside the code and have time to respond before their code breaks. And of course I try to apply this myself. Not that I maintain a framework with the size or popularity of Symfony; I don't. I don't really maintain a lot of open source in the first place. But I can still apply those lessons to my own code, and especially to things I publish to the outside world. For instance, if you work on an API, you are basically offering the same thing a framework does: a programming interface. Whether people connect to it over HTTP or through plain PHP, it's still the same thing: you offer a programming interface, and people are going to rely on it.

So once you start changing things inside that API, you really need to be careful, especially with the outside behavior. If you're planning changes, or planning to drop support for certain functionality, you should communicate that well. Describe what will be changing or what you will be removing, and in what ways people can still solve their problem. You should help your users keep using your software. And if you really drop support for something and there is no replacement: be clear about that and give users some time to find a better solution. Or better yet, if you don't offer a replacement yourself, tell them how they can still solve the same problem in a different way.
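In plain PHP, a common pattern for this is a silenced E_USER_DEPRECATED notice: existing code keeps working, but tooling can surface the warning before the old method is actually removed. (Symfony packages use a `trigger_deprecation()` helper for this; the sketch below, with made-up class names, sticks to the core language function.)

```php
<?php

final class ReportExporter
{
    /**
     * @deprecated use exportToCsv() instead; will be removed in 2.0.
     */
    public function export(array $rows): string
    {
        // Silenced notice: callers are warned, nothing breaks yet.
        @trigger_error(
            'export() is deprecated, use exportToCsv() instead.',
            E_USER_DEPRECATED
        );

        return $this->exportToCsv($rows);
    }

    public function exportToCsv(array $rows): string
    {
        return implode("\n", array_map(
            fn (array $row) => implode(',', $row),
            $rows
        ));
    }
}
```

The old method keeps working for a full release cycle, while the docblock and the runtime notice both point users at the replacement.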

Symfony 4: A healthy environment

One great addition that really helped to get rid of environment-specific configuration files was the introduction of environment variables. This greatly reduced the risk of accidentally committing the wrong configuration file and then deploying it to production.

And I'm afraid, yes, I am guilty of having done that.

It also made deployment a lot easier, because we no longer needed complex deployment strategies that copied over the correct configuration file during the deployment, and things like that. Instead, things like database credentials and API keys can be registered directly as environment variables in the different environments, and combining that with the way we use Docker these days makes it easier still. In development you can easily configure your environment variables in your docker-compose file. In production, you configure them in whatever you use to run Docker. We use Rancher, which is a nice GUI layer on top of Kubernetes: I can simply click into a specific application, add or change some environment variables, save, and that's it. It works, and I no longer need to update any files inside the container to get it running.
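As a sketch of the development side (all names and values here are examples, not real credentials), the variables simply live in the docker-compose file instead of in a committed configuration file:

```yaml
# docker-compose.yml (development) -- values are examples only
services:
  app:
    image: my-app:latest        # hypothetical image name
    environment:
      DATABASE_URL: "mysql://app:secret@db:3306/app_dev"
      MAILER_DSN: "smtp://mailhog:1025"
```

Inside Symfony configuration these can then be referenced with the `%env(DATABASE_URL)%` syntax, while production supplies different values through its own mechanism (Rancher/Kubernetes in the setup described above).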

Common directory structure

Another big change was the directory structure. We'd had the same directory structure since Symfony 2 and I was quite used to it, but in Symfony 4 things changed for the better: the directory structure Symfony now offers by default is very similar to the Linux file system layout. If you are already used to working with Linux, it is a lot easier to find things. One thing I learned from this is that it is a good idea not to reinvent the wheel: use naming conventions and common structures that people already know. That way, people quickly understand what they are looking at, and they have a much easier time figuring out where to find whatever file they are looking for. This is especially important when onboarding new developers. Now, of course, you can always look for a Symfony developer and hope to find one with experience in the exact version of Symfony you use. But if you just look for a PHP developer with some Linux experience, it's actually quite easy for them to adapt to the Symfony way of doing things. Over the years I've done a lot of different projects, and every time I come back to a Symfony project, especially a recent one, I feel right at home, because I know where to find things without having to think about it much.
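For reference, the default Symfony 4+ skeleton looks roughly like this (trimmed to the main directories):

```
my-project/
├── bin/          # console entry point
├── config/       # all configuration
├── public/       # web root (index.php)
├── src/          # application code
├── templates/    # Twig templates
├── var/          # cache and logs
└── vendor/       # Composer-managed dependencies
```

Each top-level directory has one clear job, much like /etc, /var and /usr do on a Linux system.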

Symfony 5: The release schedule

Now, this lesson is not about something introduced in Symfony 5, but about something I realized around the release of Symfony 5: the clear release schedule is great! At any moment in time it is very clear what the timeline for releases is. Combine this with releases that are clearly marked as long-term support releases, and it becomes very easy to communicate to, for instance, my customers about which versions are a good choice for their project. The current long-term support version is 4.4, and it is supported until November 2023. I know that Symfony 5.0, which came out in November last year, is not supported anymore at this point. The currently supported version is Symfony 5.1, which lasts until January, and Symfony 5.2 is supported until July of next year. I also know that in May we'll get Symfony 5.3, and I can already anticipate that.

This is important for people like me: as a consultant, I regularly have to communicate with people from the business rather than with developers. Those business people can be wary of open source, for instance because open source still has the image of one or two developers working on projects from their bedroom whenever they feel like it. When communicating with the people who have the power to decide which systems to use, it makes things a lot easier to be able to tell them: this is the current version, this is the long-term support version, the next version comes out at that point, and there will be another long-term support version lasting until a certain point in time. So you can invest in a project using this open source framework, knowing for sure that for the coming X amount of time you won't have to invest in big upgrades or backwards compatibility breaks. You can use the power of open source, you can reap the benefits, and you're not at risk of the project being abandoned or some unexpected new version breaking all your software.

This is something that I've found recently that is very important and it makes it a lot easier to talk to customers. Even if you're not talking to customers, just knowing that you can work on this software and you know for sure that you don't really have to worry about backwards compatibility breaks and things like that for the foreseeable future is really nice.

The community

There are also two things I learned over the years that are not tied to one specific version of the framework. The first is the power of the community, and you see this strength everywhere: in the pull requests being sent to Symfony, in how the documentation is written and updated, in the Symfony Slack where people help each other, but also in more open places like Twitter, Facebook and LinkedIn. People can ask a question on Twitter and they will get a response at some point. And it goes beyond Symfony, into the wider PHP community. In the Netherlands we have PHPNL, a Slack network for PHP developers, with a separate Symfony channel where people can ask questions and other people will help. Eventually the problem gets solved.

And this happens in a lot of different places around the world.

And that's just how great it is to have the Symfony community just there for you.

If you're looking for a new job, there's tons of people that will help you find whatever the cool job is that you're looking for. And if you're looking for new developers, there's usually some people that will help you find new developers as well.

I already mentioned, when I talked about SymfonyCamp, how important it is to meet people and make connections. In the Symfony community there are so many people you can meet and connect with, people who may end up being a friend, a business partner, or something like that. This community is amazing, and I want to thank you for being part of it.


Composer

And the last thing I want to mention is the best invention since sliced bread: Composer. Composer has made my developer life so much easier, so much better. Mind you: before Composer, you had to download all the libraries you used and put them into the project yourself. They would usually be committed into your version control, and you were manually responsible for keeping them up to date by downloading a new version and replacing the old one.

The other thing you had to do manually was make sure that all of these libraries were compatible with each other, and that there weren't any conflicts between the different libraries, or between the libraries and the version of PHP you were using and all the PHP extensions needed to run them. Composer really changed all of that, and it made my life so much easier. I'm really grateful, and I want to give a big thank you to Nils and Jordi and everyone else who has worked on Composer over the years. And of course, congratulations on the release of version 2.0 of Composer!
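For anyone who never experienced the pre-Composer era: today all of that bookkeeping is a short declaration in composer.json, and the resolver checks package, PHP and extension compatibility for you. A sketch (package names and version constraints are examples only):

```json
{
    "require": {
        "php": ">=7.4",
        "ext-json": "*",
        "symfony/http-client": "^5.2"
    }
}
```

Running `composer install` resolves a compatible set of versions, writes them to composer.lock, and downloads everything into vendor/ — no manual downloading, copying, or compatibility checking.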