Quick Git trick: Sign your commits after the fact

OK, so this is a situation you might get into, and I'm blogging it as much for myself as for others, because I know I'm likely to run into it again:

You start working on a new project for a new customer, and you forget to set up PGP signing of your commits in your Git configuration. Now you've pushed your branch and opened a pull request, but the pull request cannot be merged because your commits are not signed. OH NO! Now what?

The one-liner

Luckily, I'm not the only one to have done this, and a quick search around the web led me to this post on superuser.com. Look, it is actually quite easy. First, you configure Git to use your PGP key and set the correct email address. Then you use git log to find the hash of the commit before your first commit. Finally, you run the command:

git rebase --exec 'git commit --amend --no-edit -n -S' -i <commit hash>

Where, obviously, you replace <commit hash> with the actual commit hash.

Simply save and quit the editor that comes up describing what the rebase is going to do, and... TADA! All your commits are now signed.
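
Putting it all together, the preparation before the rebase might look something like this (the key ID and email address are placeholders):

git config user.signingkey ABCDEF1234567890
git config user.email you@example.com
git config commit.gpgsign true   # optional: sign all future commits automatically
git log --oneline                # find the hash of the commit before your first commit

After that, the rebase command from above does the actual signing.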

Force push

Yes, you will have to force push your changes to your branch. But since the merging of the pull request was blocked anyway, no one will have pulled your changes. Right? RIGHT?

If anyone pulled your changes before you force pushed, you'll have to warn them and prepare them for a bit of extra work.
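
As a sketch (the branch name is a placeholder), using --force-with-lease is a slightly safer option than a plain --force, because it refuses to overwrite commits someone else pushed in the meantime:

git push --force-with-lease origin my-feature-branch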

Done!

And that's it. That's all you need to do to sign your commits after the fact. And next time you start working on a new project, don't forget to configure Git to use your key and sign your commits from the start.


Ingewikkeld presents: Comference Summer Sessions

I am very happy to announce a new initiative from Ingewikkeld: Comference Summer Sessions. In three interactive panel discussions we'll be talking about three topics we feel are very important in development:

  • Open Source in your company
  • Facilitating company cultural change
  • The role of the Product Owner in a development team

Each session will last about 1.5 hours, in which the panel will discuss the subject at hand. Through our Comference Discord, you can also submit questions and remarks for the panel to talk about.

After each session, the panelists will also record a 30-minute podcast summarizing the most important lessons from the interactive session. So if you miss the stream or want to be reminded of what was talked about, the podcast is there to help you out.

For the first session, on Open Source in your company, the panel is already known:

  • Host: Jaap van Otterdijk (Ingewikkeld)
  • Sebastian Bergmann (PHPUnit, ThePHP.cc)
  • Stefan Koopmanschap (Ingewikkeld)
  • Erik Baars (ProActive)

I am very excited about this new series of events. The sessions are free to attend (we're streaming them live on YouTube). So block your calendars for the three dates:

  • Tuesday, June 29
  • Tuesday, July 27
  • Tuesday, August 24

I hope to see you there!


Dear recruiter

Dear recruiter,

This is an open letter to you, the recruiter who focuses on the tech industry. I've had many interesting experiences with recruiters, and I want to share some of them, both positive and negative, in the hope that you might learn something.

In my social bubble, both online and offline, many developers are not happy with you. Perhaps not you specifically, but recruiters are often seen as annoying, rude, lying people. It is therefore especially important for you to make a good first impression when you contact someone. Hopefully, the things I'm going to talk about will help you do that.

Be clear and honest from the start

I have had many interactions with recruiters that were either very unclear or dishonest, or where some information was conveniently left out in the early communication about a project.

One example was an email I got about a very interesting project. I showed interest, sent a CV and had an interview. That means that from my side I had already invested several hours, hours that, for me as a freelancer, are unpaid. I invest them because I see potential in the project.

The customer was interested and I was interested as well. And then the paperwork came along, which included a 90-day payment period. 90 days. That is three months. When I responded to that, I was told "This is standard for this customer". That might be true, but you know that and I don't. And I don't know a lot of freelancers who are OK with, or even capable of handling, a 90-day payment period. It could've saved me (and you!) hours of work and disappointment in the final stages of the process. And if this is standard, you knew about it and could've communicated it in your initial email (or quickly thereafter).

More recently, I got an email about an interesting full-remote project. I specifically stated in our initial communication that full-remote was an important requirement, and you went ahead and introduced my developer to the customer. When that developer interviewed with the customer, it turned out that, because they were recruiting for a new team, there was a two-day on-site requirement. When I asked about this, you told me you had already heard about it. Why had you not communicated this to me? It could've saved me, my developer and you a lot of time and frustration.

Send the right projects

The number of messages from recruiters on LinkedIn and in my email inbox is pretty big. Which I can accept, because it should mean that we do our work well and that you, as a recruiter, think we could be good potential candidates for the roles your customers have.

Unfortunately, when I wade through the emails, I can delete about 75% of them immediately. With my company we focus on PHP development, architecture, consulting and training (and you know that!), so why do I get so many emails for Python positions, Java positions, etc.? You are wasting a lot of my time, because yes, I do go through all of these, since there might be a super cool project in there that we want to work on.

If you send out emails or LinkedIn messages, please make sure you only send messages that actually apply to me. If it's somewhat related (like a product owner or scrum master role) that's fine as well, but things that are totally unrelated to our specialty are better sent to people who specialize in them.

Rates

I only rarely receive emails about projects where the rate is communicated in the first email. Most of the emails contain useless wording such as "good rates" or "competitive rates". Just tell me what you're willing to pay me. I'm happy to discuss my rate and, in specific situations, adapt it to fit your client's wishes. But, here we go again, it would save you and me a lot of time if you communicated this upfront. Because if your maximum rate is 80% of my minimum rate, we're never going to be able to make a deal.

Ask before sending my CV

Unfortunately, I've been in situations where I was contacted by you and sent you my CV, and while we were still discussing specifics it turned out you had already sent my CV to your client. Later in our discussion it then turns out that we can't agree on specifics, and I get the "but I've already sent your CV to the client and they're really interested, can't we agree on this, because this looks bad on me". Listen, if you've already sent my CV while we are still discussing specifics, that is not my problem. Also, please consider the damage it does to my image. Because I'm pretty sure you're not going to tell your client that you f*cked up. Which means that this specific client, which I may or may not encounter in the future, may have a negative impression of me, even though they have never even spoken to me.

That first impression

The above are only a few examples. I have many more where these came from. So let's talk about making a good first impression. And that really already starts with your initial contact:

  • Send an email first, with as much information as you have on the project: a good description of what the project is, what your client is looking for and why you think I would be (or would have) a good candidate, a maximum rate (possibly accompanied by a preferred rate), a description of the process, and any special requirements or other agreements that are important, such as payment method, hours per week/month, specific work days, required insurances, and anything special that is expected to be included in the rate (specific hardware or software, background checks, etc.)
  • Feel free to call me after a couple of days if I haven't responded yet, but give me some time to consider whether I am (or have) a good candidate for the project and whether I can agree to the wishes and requirements
  • If any new information comes to light between your initial email and any other contact we have, inform me of it. Don't leave it for me to find out later. Because I will (have to) find out.

This is just for that good first impression. If my first impression of you is not good, then given the number of bad apples in the recruitment business, it is very likely that you'll quickly make it onto my mental blacklist and I'll ignore you in the future.

After a good first impression, make sure to show that you want to continue a good business relationship. I try to do that as well, by contacting you if I feel there might be a good opportunity. Don't spoil it by spamming me with projects that you know I'm not a good fit for. If we agree to work together, make sure you hold up your end of the deal in terms of payments etc. That way we might be able to work together for a long time.

Stand out

In a recruitment world that is dominated by spammers, liars, frustrating communication and an overall negative attitude, it is quite easy to stand out. It is easy to show that you're serious about doing business with me by showing that you don't just care about your own profit margin, but about a long-term business relationship. I care deeply about delivering high-quality services to my customers and, if we work together, to your customers. Please show me that you also care about that. If you do, I will not only be happy to work with you for a long time, but I'll also be happy to help you find other possible candidates, which may lead to more money and a better network for you. I'd call that a win/win situation. Please care. I do.


Speaker support

Over all the years that I've been a speaker at PHP conferences, I have been very happy as a speaker. Most conferences I spoke at would reimburse my travel and get me a hotel room for the duration of the conference. Most of the time, a speaker dinner was included as well. It got me to travel the world. I've seen many amazing cities such as San Francisco, Montreal, Verona, Barcelona, Paris, Berlin, Cologne and many more. I would not have seen all those cities without conferences paying for my airfare and hotels. There were even conferences where I'd pay for some of the costs. Either because I just wanted to be there anyway, or because the conference would offer my company a sponsor slot if I covered my own airfare.

Over the past few years, I've been reading more about how accessible speaking is. Or rather, about the lack of said accessibility. I had not considered this before, since I have been privileged to either have employers pay for part of the costs, or to have a company with a high enough income to cover some of the costs of speaking. There are developers who do not have the luxury of being able to cover those costs, or who are unable to simply take the time off work. There are also other situations, such as having to pay for someone to babysit during a talk.

With Comference, and before that with WeCamp, we've always made an explicit effort to aim for a diverse line-up. And this year, we've decided to make an extra effort: for this year's edition of Comference we will compensate speakers for their effort. Especially with online conferences, which can be done directly from your home or office, we hope that this will allow new people to become speakers. Eventually, we hope to expose as many different viewpoints on technology-related subjects as possible, as we believe every viewpoint can be learned from.

We're not stopping at the speaker fee either. We facilitate different talk lengths as well: speakers can submit 15-, 30- and 45-minute talks. We hope this enables people to submit a talk of the length they prefer for their subject. Because not every talk should be 45 minutes or an hour.

The CfP for Comference is now open and I'd like to extend a warm invitation to anyone who has something to share. Our speaker line-up will be created with a combination of invited speakers and selections from our CfP.


Holiday project: Raspberry Pi NAS

After our Sonos died in early December, we were in the market for a new device. We did some research, got some advice and eventually decided to go for a BlueSound Node 2. While going through the features we noticed the option to play your own digital music library on it (yeah, I know the Sonos could also do that, but we never set it up) and we felt this was a good trigger to do just that. But getting a very expensive NAS (a Synology or the like) just for that, especially after investing in the BlueSound, seemed overkill. Of course, there are simpler options. So my son and I set out to create a simple NAS using a Raspberry Pi and an external USB hard disk, and made that our end-of-year holiday project.

What we got

My son did the research on what to get, and we ordered the parts we needed: a Raspberry Pi board, an SD card, a case and an external USB hard disk.

Assembling the hardware

When all the hardware was in, it was time to assemble it. And that was surprisingly easy. Not only because my son was the one doing the assembling, but also because the case included a little screwdriver, so we didn't have to look for the exact right screwdriver.

So: SD card in Pi board, Pi board into the case, screw the case closed, connect network cable, power and... oh shit. We don't have a micro-HDMI cable. OK. Let's order that one and wait for it to arrive.

Fast-forward a couple of days and the cable arrives: time to connect the Pi to a screen, keyboard and mouse. We had followed a tutorial on how to install OpenMediaVault, but unfortunately the Pi wouldn't boot. The green light blinked four times, which means it couldn't find a required file. So we searched around a bit and ended up at the Installing OMV5 On a Raspberry Pi document. This was perfect!

Installing OpenMediaVault

After having found the above document, installation was a breeze:

  • Put Raspberry Pi OS on the SD card
  • Install updates
  • Run the installer script that Ryecoaaron wrote (a rough sketch of the commands follows after this list)
  • Make OMV recognize the USB storage
  • Set up a network share
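
Roughly, the update and installer steps on the Pi looked something like this (the script URL may have changed since, so check the guide linked above for the current one):

sudo apt update && sudo apt full-upgrade -y
sudo reboot
# after the reboot, run the OMV install script from the OpenMediaVault-Plugin-Developers repository
wget -O - https://github.com/OpenMediaVault-Plugin-Developers/installScript/raw/master/install | sudo bash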

DONE!

Fun project!

This was a fun little project, and pretty simple. By now there are gigabytes of music on the NAS and our BlueSound is already playing it. This was great. And since I recently won a Raspberry Pi 400 kit, I'm already thinking about what my next Pi project will be :)


Check your SQL

Recently I've been digging into an internal API for one of our customers because there were some complaints about the performance. Some, and only some, calls would take over 10 seconds to complete. These were all calls that were being used by XHR requests fetching data for display, and the users were (rightfully so) getting annoyed by some pages taking so long to load.

The API itself is, for performance reasons, very low on framework-y stuff. There's a basic request/response handling layer, but beneath that there's basically raw SQL being fed to PDO. If you ask me, this was a great choice. And since the underlying (MySQL) database has some tables with quite a lot of data, I immediately expected the queries on that data to be the problem. So the first thing I did was check the queries that fetch the data to be returned. But those queries were not really a problem. They were pretty well optimized, even though they fetch a lot of data through several joins. Performance of those queries was great. So that was not the problem. OK, so what could possibly cause the problem then?

I decided to pull out Blackfire, a tool for profiling your code (and more). I've used Blackfire in the past to find performance bottlenecks in the code I was working on, and felt I should do the same here. That was a good decision.
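
As an aside: besides the browser extension and the SDKs, Blackfire also ships a CLI that can profile a single HTTP request through its curl wrapper, assuming the CLI and agent are installed (the URL here is made up):

blackfire curl http://localhost:8080/api/some-endpoint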

After installing Blackfire in my local Docker setup (I used the PHP SDK for the API) I sent my first request with Postman, and when checking the function calls, the problem immediately became clear to me.

That's a whole lot of time for a single SQL query.

Oh, right. 99.8% of my full request is taken by... a COUNT query? OK, I had not expected that. I had assumed after testing the query that fetches the data that the count query would be OK too. I was wrong. What did people say about assumptions?

OK, so let's have a look at the query. MySQL has a great little bit of functionality to figure out what the problem is: EXPLAIN. I took the COUNT query, added EXPLAIN in front of it, and checked what MySQL would tell me.
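
As an illustration of the approach (the table and column names are made up, not the customer's actual schema):

mysql -e "EXPLAIN SELECT COUNT(*) FROM my_database.orders WHERE status = 'open' AND customer_id = 42"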

OK, the count query uses two WHERE clauses and both fields have an index on them. However, such a query will generally only use a single index. EXPLAIN told me which index, and also told me that, using that index, it would have to go through about 20,000 records to check whether the other WHERE clause matched or not. And that apparently took quite a bit of time.

But it is possible to create an index on a combination of two fields. Now, you do have to be careful with creating too many indexes, as your INSERT queries will get slower. It has to be a conscious decision that balances the performance hit on both sides. But in this case, it made sense to add the index. And it had the desired effect.
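
As a sketch, with the same made-up names as above, adding the combined index could look like this:

mysql -e "CREATE INDEX idx_status_customer ON my_database.orders (status, customer_id)"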

LOOK! IT WORKED!

Optimizations in SQL

When you run into performance problems, there are often issues with different types of I/O. Whether it is file access, databases or external APIs, these are common causes of performance problems. External APIs are often things you can do little about (aside from caching). For file access, the main solution is "less file access": store things in Redis, Memcache or a similar memory-based solution. But when it comes to SQL, it can be worth digging into your database schema and queries. There are many possible solutions. Take your time to have a good look at your SQL, and learn how to use the tools your database has to offer, such as EXPLAIN, to find the cause of your problems. Oh, and don't make assumptions like I did. Use tools like Blackfire to quickly find the cause of the problem. It'll make your life a lot easier and your work more efficient.


symfony 0.6 to Symfony 5: What I learned from the framework

Today I was a speaker at SymfonyWorld, the global online conference about the Symfony framework. In my talk I travelled through my history with the framework and shared some of the lessons I learned over the years, focusing specifically on the lessons I learned by using the framework and seeing it evolve. This blogpost shares those same lessons.

I started my journey with Symfony (or actually symfony, back then) at version 0.6.7. The framework back then was truly full-stack: you either used it or you didn't, there was no in-between. The framework was MVC: there were controllers, models and views. The models were still Propel and the views were still plain PHP. And with the code I wrote back then, the controllers were huge. I did not yet know about correctly applying best practices in terms of thin controllers, using separate services, etc.

Take your time to learn

One of the first lessons I learned was before I even really started using the framework. I was working at a company called Dutch Open Projects and we were building a software-as-a-service project. I don't think the term software as a service even existed yet, but we were doing it. The first version of the project was built on top of the Mambo CMS. I don't know if you remember Mambo, but at some point we decided that, for security and application-structure reasons, we wanted to migrate to something else, and frameworks were just starting to pop up and become interesting as an option instead of using a CMS or custom PHP as a base.

So I started digging into what frameworks were available. I made a list of the potential candidates; I can't really remember which ones were on it, except that at least Zend Framework and symfony were serious contenders at that moment. The list had all the pros and cons for each framework. Together with my colleague Peter I looked at that list of features, and we felt it was hard to decide which framework to use based on that list alone. Zend Framework was a serious candidate, but the features, and especially the solidness and everything that symfony had already done that the other frameworks hadn't, made us consider symfony. The fact that we didn't have to write all of the boilerplate code just to get started helped a lot.

And then Peter said: why don't we just take a day, just one work day? We sit down, we take symfony and we just start building a project. And that's when I learned that it is OK to really take your time to learn about a tool and to see if the tool is the right fit. So we sat down, we downloaded symfony and we started building a simple prototype application. At the end of the day we had an actual working prototype. It was very simple. It didn't have all the features that we needed, of course, but it was enough to convince us that symfony was a good choice. We were impressed that in such a short time we could get so much done and actually have something that worked at the end of the day. That was the deciding factor for us. And it was a great choice, because it started a major journey in my career.

Structure

As I worked more and more with symfony, and symfony reached the stable 1.0 release, I really started appreciating the structure the framework brought me. Before I started using symfony, every time I created a new project I would copy-paste code from the previous project and then start altering it, because over time I'd learned new lessons about how to build an application. But that meant that over time my applications kept changing bit by bit. And when I then had to go back to an application from three or four projects ago, I'd really have to adapt again. Oh, what am I doing here? How was this structured? Wasn't this done differently? Oh no, that was in the project after this one. One of the things I found out when I started using symfony was that it was great to have one common structure.

Making connections

As I got more and more excited by symfony and what it was offering, I pitched the idea to my boss to organize a symfony conference. My employer Dutch Open Projects had back then (and they still have) a beautiful villa out in the forest. My boss really liked the idea of organizing a conference at their office. There was a beautiful lawn, there was a swimming pool, and there was enough room to put up some tents and stuff like that. So we ended up organizing SymfonyCamp, the first symfony conference that ever happened. People brought their tents and put them up on the lawn, and we had a big tent where all the talks were. A lot of people who were pretty important in the symfony community back then came over: Dustin Whittle, Jonathan Wage, Kris Wallsmith. Fabien was also there, which was amazing to us, because having the man himself at the conference was great.

The talks were great, but one of the things I learned is that you also learn a lot from talking to people and exchanging experiences, ideas and approaches. If you have the opportunity, go to a meetup or a conference and meet other people. Don't just attend the talks, but also talk to other people. Preferably people you don't know yet, because the connections you make at a conference can sometimes last for years. There are still a couple of people from SymfonyCamp that I talk to on a regular basis. Sometimes I ask them a question, sometimes they ask me a question. But we're still connected.

Symfony 2: Separation of concerns

Symfony 2 was, in terms of the framework, a completely new framework. A lot had changed since symfony 1, and that triggered another lesson. The introduction of components taught me about creating nicely isolated pieces of code, where each part of the code has its own purpose, its own responsibility. Put all the related code in the same namespace and offer some kind of public API, where you make sure that the public API stays as stable as possible so you can refactor all the internals if you want to. This also made the code so much easier to test, because it's not a big ball of spaghetti anymore. Every class has its own simple methods and every method can easily be tested. Of course, one of the lessons I also learned was to not make it too abstract, because that adds the risk of way too much complexity in terms of the number of layers of code you need to go through just to find out what's happening.

Migration should not be an afterthought

Migrations are part of your application planning. At some point, you know that you're going to have to migrate either to a newer version of your framework because the older version isn't supported anymore or maybe to a whole different platform because your platform isn't supported anymore (or whatever other reason).

In symfony 1, most of the code I wrote was completely tied into the whole framework, so upgrading from symfony 1 to Symfony 2 really meant taking out all those pieces of code, or maybe even rewriting parts, just to make them work with Symfony 2. In 2013 I went to SymfonyLive London and my mind was completely blown by two people doing a talk called The Framework as an implementation detail. While some elements of that approach are a lot more common these days, it is still good to have a look at this video, or basically any video on the topic of hexagonal architecture. Keep in mind when watching it: in those days this was a relatively new approach in PHP and in Symfony. And just to summarize (and I'm going to completely butcher their content): the idea of hexagonal architecture is that you write your code in such a way that your business logic is completely isolated from any other logic. Your business logic should not need to know where your data is stored, where your files are stored, what kind of APIs you connect to to make it work, or even what framework you use. If you take that approach, then once you want to migrate from one version of your framework to the next, or even from one platform to another, you can just take that piece of business logic and copy it over into your new project. The only thing you need to do is make sure that the glue between your business logic and the rest of your application is fixed. If I had taken that approach in symfony 1, the upgrade to Symfony 2 would have been a lot less painful.

Symfony 3: Stability

As Symfony 3 came out and we upgraded our applications to the new version, I realized something about stability. The newer versions of Symfony had much clearer versioning, which you could almost fully rely on. Small changes and bug fixes, new features, BC breaks: it was pretty much clear what was happening, the changes were documented really well in articles on symfony.com, and the way deprecations were handled was also a lot better. Things were first marked as deprecated, then a warning was given, and eventually the deprecated code was removed in the next major version. One of the major things I learned from Symfony at this point was not about development at all. It was about communication.

It was about communicating changes in code in such a way that people understand what is happening inside the code and have time to respond before their code breaks. And of course I tried to apply this to myself. Not that I maintain a framework of the size or popularity of Symfony; I don't. I don't really maintain a lot of open source in the first place. But I can still apply those lessons to my own code, and especially to things that I publish to the outside world. For instance, if you work on an API, you basically offer a similar thing as what the framework does: a programming interface. And whether people connect to it over HTTP or using just PHP... it's still the same thing, you offer a programming interface and people are going to rely on it.

So once you start changing things inside that API, you really need to be careful about changing things, especially the outside behavior. If you're looking to change things, or if you plan on dropping support for certain functionality, you should communicate that well. You should describe what will be changing or what will be removed, and in what ways people can still solve their problem. You should help your users keep on using your software. And if you really drop support for something and there is no replacement: be clear about that and give users some time to find a better solution. Or better yet, if you don't offer a solution yourself, tell them how they can still solve the same problem in a different way.

Symfony 4: A healthy environment

One great addition that really helped get rid of environment-specific configuration was the introduction of environment variables. This greatly reduced the risk of accidentally committing the wrong configuration file and then deploying it to production.

And I'm afraid, yes, I am guilty of having done that.

It also made deploying a lot easier, because we no longer needed complex deployment strategies that would copy over the correct configuration file during the deployment, and things like that. Instead, things like database credentials and API keys can just be registered directly as environment variables in the different environments, and combining that with the way we use Docker these days makes it a lot easier. In development you can easily configure your environment variables in your docker-compose file. In production, you configure them in whatever you use to run Docker. We use Rancher, which is a nice GUI layer on top of Kubernetes, and I can simply click into the specific application, add or change some environment variables when I need to, save it, and that's it. It works. I don't need to update any files inside the container anymore to get it running.
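
As a minimal sketch of the development side, assuming the docker-compose file passes ${DATABASE_URL} on to the container (the variable name and value are made up):

# docker-compose picks this up from the shell environment (or from a .env file next to the compose file)
export DATABASE_URL="mysql://app:secret@db:3306/app_db"
docker-compose up -d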

Common directory structure

Another big change is the directory structure. We'd had the same directory structure since Symfony 2 and I was quite used to it, but in Symfony 4 things changed for the better, because the directory structure that Symfony now offered by default was very similar to the Linux file system. It was a lot easier to find stuff if you were already used to working with Linux. One thing I learned from this is that it is a good idea not to reinvent the wheel, and to use naming conventions and common structures that people already know. By doing that, people will quickly understand what they are looking at and have a much easier time figuring out where to find whatever file they are looking for. And this is especially important when onboarding new developers. Now, of course, you can always look for a Symfony developer and hope that you will find one with experience with the version of Symfony that you use. But if you just look for a PHP developer with some Linux experience, it's actually quite easy for them to adapt to the Symfony way of doing things. Over the years I've done a lot of different projects, and every time I go back to a Symfony project, especially a recent one, I feel right at home, because I know where to find things and I don't really have to think about it a lot.

Symfony 5: The release schedule

Now, this lesson is not about something introduced in Symfony 5, but it is about something I realized around the release of Symfony 5: the clear release schedule is great! At any moment in time it is very clear what the timeline for releases is. Combine this with releases that are clearly marked as long-term support releases, and it becomes very easy to communicate to, for instance, my customers about which versions are a good choice for their project. The current long-term support version is 4.4 and it is supported until November 2023. I know that Symfony 5.0, which came out in November last year, is not supported anymore at this point. The currently supported version is Symfony 5.1, which lasts until January, and there's Symfony 5.2, which is supported until July of next year. I also know that in May we'll get Symfony 5.3, and I can already anticipate that.

This is important for people like me as a consultant: I regularly have to communicate with people from the business side instead of developers, and those business people can be wary of open source. For instance, because open source has the image of one or two developers in their bedroom working on a project whenever they feel like it. When communicating at that level with the people who have the power to decide which systems to use, it makes it a lot easier to be able to tell them: this is the current version, this is the long-term support version, the next version comes out at that point, and there will be another long-term support version, which will last until a certain point in time. So you can invest in this project using this open source framework, and you know for sure that for the coming X amount of time, you won't have to invest in big upgrades or backwards compatibility breaks. You can use the power of open source and reap the benefits, and you're not at risk of the project being abandoned or some kind of unexpected new version breaking all your software.

This is something I've found to be very important recently, and it makes it a lot easier to talk to customers. Even if you're not talking to customers, just knowing that you can work on this software without really having to worry about backwards compatibility breaks and things like that for the foreseeable future is really nice.

The community

There are also two things I learned over the years that are not tied to one specific version of the framework. The first one is the power of the community, and you see this strength everywhere: in the pull requests that are being sent to Symfony, in how the documentation is being written and updated, in the Symfony Slack where people are helping each other, but also in more open places like Twitter and Facebook and LinkedIn. People can ask a question on Twitter and they will get a response at some point. And even in the wider PHP community: in the Netherlands we have PHPNL, a Slack network for PHP developers. There is a separate Symfony channel in there where people can ask questions and other people will help. And, you know, eventually the problem will be solved.

And this happens in a lot of different places around the world.

And that's just how great it is to have the Symfony community just there for you.

If you're looking for a new job, there's tons of people that will help you find whatever the cool job is that you're looking for. And if you're looking for new developers, there's usually some people that will help you find new developers as well.

I already mentioned, when I talked about SymfonyCamp, how important it is to meet people and to make connections. In the Symfony community there are so many people you can meet and connect with, people who may end up being a friend or a business partner or, you know, things like that. This community is amazing. And I want to thank you for being part of that community.

Composer

And the last thing I want to mention is the best invention since sliced bread: Composer. Composer has made my developer life so much easier, so much better. Mind you: Before Composer you had to download all the libraries that you used and you had to put them into the project yourself. They would usually be committed into your version control and you'd be manually responsible for making sure that they were up to date by downloading a new version and replacing the old version with the new version.

The other thing you needed to do was manually make sure that all of these libraries were compatible with each other and that there weren't any issues between the different libraries, but also between the libraries and the version of PHP you were using and all of the PHP extensions you needed to run those libraries. Composer really changed all of that, and it made my life so much easier. So I'm really grateful, and I want to give a big thank you to Nils and Jordi and everyone else who worked on Composer over the years. And of course, congratulations on the release of version 2.0 of Composer!


Lessons learned from organizing an online conference

Earlier this year we made the tough decision to cancel WeCamp 2020 due to the ongoing COVID-19 pandemic. It was not an easy decision, but looking back at it now, I think we made the right one. We honestly could not have justified letting people travel to an island. WeCamp was supposed to have happened last week. We (at Ingewikkeld) immediately came up with a new idea, however, because we didn't want to sit still and do nothing. One of the things that makes us proud is facilitating the sharing of knowledge. So we came up with Comference.

Comference took place last week, and organizing the event was very different from WeCamp. As such, I learned a few lessons organizing the event that I hadn't with WeCamp. This blogpost is an attempt to document some of those lessons.

Organizing an online event is still a lot of work

Even if an online event requires a lot less logistical planning than a physical event, it is still a lot of work to organize. Of course, getting sponsors is a lot of work, but that's about the same as for a physical event. But there's so much to do. Some of the things we needed to do were:

  • Figuring out the tech stack for streaming the event
  • Inviting speakers and creating the schedule
  • Thinking of fun activities that you can do online

Especially in a world where physical meetings are just about impossible, the communication for all of this also had to happen 100% online.

Also, since this was our first online event, I kept looking for things I had not considered. Our TODO list sometimes kept growing instead of getting smaller because of things we only thought of at a later point.

Testing has to be realistic

Luckily, most of Comference went pretty smoothly, but we had some weird issues, mostly with our audio. This turned out to be due to the (perhaps slightly amateurish) tech setup we had for our event. As far as we have been able to analyse so far, it had to do with the fact that we used two Elgato HD60 S devices connected through a USB hub to our streaming computer. The device itself is fantastic and works really well, but it seems that if you have two of them connected to the same computer and you use them for hours on end, either the device or Windows has an issue with that.

Which leads me to the lesson: Yes, we tested the whole setup before the event. We tested all aspects: Local speakers, remote speakers, getting local speakers' slides into the stream, the audio from our local mics, the audio from remote speakers, etc. But we only tested for 10-15 minutes. We did not do a longer test run.

When we first encountered an issue where the audio dropped, we started analysing the problem and could not find it. A big shout-out to my son Tomas for helping us out here. He was the one who came up with the idea of disconnecting one of the Elgato devices from the hub and connecting it directly to the computer. After that, OBS had input again and we could move on.

I should've known this from my development work, but ah well. It wasn't a major issue (it happened 2 or 3 times and was fixed in a matter of minutes) and our attendees were very forgiving ♥.

A good community tool adds a lot of value to your event

Now, I had already learned this lesson at The Online PHP Conference by the awesome people of ThePHP.cc. They combined Zoom with Slack for their conference and that worked really well. During a talk there was a lot of interaction between attendees in the Slack. For us it was mostly a matter of: which of the available tools do we want to use?

Because we were also organizing a game night and a Dungeons & Dragons session, one of our main requirements was a system that allowed both text chat and group voice chat. Since Slack only supports group voice chat on their paid plan, we decided to go for Discord, even though it is very gamer-oriented. This turned out to be an extra good choice once one of our speakers decided to have breakout sessions with smaller groups doing exercises: the voice chat could be used for that as well.

The choice for a good community tool worked out really well. During the talks there were excellent discussions going on, which in some cases continued for quite a while after the talk had ended.

OBS is awesome software

OBS is awesome software. It's as simple as that. It is amazing that the open source community is able to come up with such a professional video streaming tool. I will be sure to make a donation.

Online conferences do work

I have never really been very excited about the idea of online conferences. But in the current situation, with few or no physical conferences in the foreseeable future, and with me really liking conferences and learning a lot while attending them, we need to find something that works.

When organized well, online conferences really do work. With enough breaks (we did a 15-minute break between each talk, slightly longer if a speaker didn't fully use their 60-minute slot) you don't get tired that much. With a good community system set up to accommodate both the hallway track and solid discussions on the subject at hand, an extra dimension is added that physical conferences don't even really have (I would consider it rude to start talking to each other while a speaker is on stage ;) ). Now I'm actually quite excited about the concept of online conferences, and I may attend more in the future, or maybe even organize more...


Playing with WSL2

Some time ago I got fed up with the performance of Docker on Mac. Especially with the bigger projects I work on, the performance was getting horrible. After trying Windows for a bit, I switched to it because Docker performance was better there.

But as I was using Docker for Windows on a project last year, performance was still horrible. I was sometimes waiting several seconds for a page to load. While Docker for Windows was performing better, it was still not performing the way I wanted it to. After a while I decided to switch to Linux, just for the performance.

Linux and I have a love/hate relationship. Technologically it is far superior, especially for power users such as developers. In terms of UX... I hate it. It's better now than it was 10 years ago, but it's still not good. Especially when things go wrong, too often you have to resort to searching the Internet for a solution that requires executing complex commands and manually editing config files. I understand some people love that, but I've had it with that stuff since, I don't know, ages ago. Basically, since I switched to Mac and stuff "just worked".

So when Microsoft announced WSL2, with a full Linux kernel and support for Docker, I was excited! This could mean my perfect setup could finally happen: the UX of Windows, but with the Docker performance of Linux.

The update came out at the end of May, but I was in the middle of a big project and didn't have time to play with it. Yesterday, I finally set out to try it.

First impressions

My first impression is quite positive. The installation of WSL2 is easy, and the Docker for Windows installer immediately picks up on the fact that WSL2 is available as a backend, which makes the setup of my environment a breeze.

So the first thing I did was clone a Git repository to my Windows system, head into bash and run docker-compose up -d to start the project. The project builds, executes Composer and runs. But I notice that composer install is already quite slow, and once the project is up and running, the pages still take quite a long time to load in the browser. And then it hits me... I checked out the repository on my host system, which means the files basically have to go through the Windows -> WSL2 mount to Linux, and then to Docker. That might slow things down.

Second try

OK, let's try again, but now let's clone the repository directly in the WSL2 filesystem. After cloning, I again type docker-compose up -d and... whoa! Hang on, the build is done already? Composer ran incredibly fast. Let's try the browser... whoa! Again, blazingly fast! This is incredible! This is near-Linux performance for Docker.
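
For reference, that second attempt boiled down to something like this, run inside the WSL2 (Ubuntu) shell; the path and repository URL are made up:

# work on the Linux filesystem instead of the /mnt/c mount
cd ~/php
git clone git@gitlab.com:example/my-project.git
cd my-project
docker-compose up -d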

But now my files are in the WSL2 filesystem, and I want to use PHPStorm and SmartGit to edit the files and commit and push to Gitlab. Is there a way to do that? Why, yes of course there is. The smart people at Microsoft have a solution for everything I want.

It took a bit of searching around, but it is actually quite easy to find the path of the files. As it turns out, in your bash shell you can just type explorer.exe . and it will open a Windows Explorer window in the directory on your WSL2 filesystem. Seriously, the integration between WSL2 and Windows is amazing. The path will look something like this:

\\wsl$\Ubuntu-20.04\home\stefan\php\my-project

But that's not all: When I started PHPStorm and I created a new project from existing files, PHPStorm automatically detected the WSL2 filesystem in the new project window. I can just choose between my C-drive and the WSL filesystem. This is excellent, and so easy to set up.

Concluding

I don't want to make a major conclusion like "Docker with WSL2 is perfect" because I haven't actually done serious development with it, but so far it is looking very good. The performance is amazing, the integration between WSL2 and the Windows operating system is great, and setting up tools such as PHPStorm and SmartGit is a breeze. Yes, this really, truly looks like a game changer for me.


Cancelling WeCamp 2020

A couple of weeks ago we had an energy-draining meeting with the WeCamp 2020 team. While it was technically possible to organize WeCamp 2020, given the current crisis we had an interesting and lengthy discussion about whether it was the right thing to do. We ended the meeting with a very tough decision: To cancel WeCamp 2020. It is hard to justify the risk of attendees travelling to and from our island in a period where an infection can have such dire consequences.

All attendees should have had an email by now with more information. If you have not, please get in touch with me.

We can't sit still and do nothing

One of the many amazing things about the people of Ingewikkeld is that their passion for sharing knowledge and helping people level up is so strong that we can't just sit still and do nothing. We decided that we'd be doing an online event in the same week as WeCamp. Now, the WeCamp experience as it is cannot be translated to an online event, so we decided we should do something different. We're still working on all the details, but expect an event where tech talks are scheduled alongside talks on personal development, where relaxation techniques are discussed, and where you have the opportunity to do a talk yourself as well. We'll most certainly share more details as we confirm them, but for now, we have a name: Comference, the online conference from the comfort of your own home. If you want to stay up-to-date on our announcements, you can subscribe to our mailing list on our website, and you can follow us on Twitter, Facebook and Instagram.

Oh, and the best thing: Comference will be streamed live for free! So while we can't welcome you to our island in August, we'll be happy to welcome you to our online stream!