skoop.dev

  • My first Azure Pipeline

    September 12, 2025
    azure, ci, php, pipeline

    Last week I started at a new customer and they’re fully committed to the Azure DevOps environment. They use their Git repositories, their project boards and also their pipelines. Since the project I’m working on is currently their only PHP project, they did not yet have any experience with setting up pipelines for a PHP project. So I set out to do just that, and found it surprisingly easy.

    Just like many other Git hosting sites, you can configure the pipelines using a YAML file, in this case azure-pipelines.yml, in the root of your project. They do also offer an online editor, but I haven’t actually tried that, preferring the YAML format for configuring pipelines.
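    To give an idea of what that file looks like, a minimal azure-pipelines.yml for a PHP project could be something like the sketch below. The commands are placeholders for whatever your project actually needs.

    # Minimal sketch of an azure-pipelines.yml for a PHP project
    pool:
      vmImage: 'ubuntu-latest'

    steps:
      - script: composer install --no-interaction --prefer-dist
        displayName: 'Install dependencies'

      - script: ./vendor/bin/phpunit
        displayName: 'Run tests'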

    If you have experience with Bitbucket pipelines or Gitlab pipelines, then configuring Azure pipelines will be a breeze. Most things make a lot of sense. There are a few things I did need to take into account while setting up, though, and in this article I want to share those.

    Tasks

    The first thing I found out: the predefined tasks beat creating your own custom scripts any day. Azure Pipelines has a huge list of predefined tasks that can make your life with pipelines a lot easier. For instance, this project uses private Composer packages that are located inside Git repositories. So I have to insert an SSH key into the runner to make sure that composer install can fetch those packages. A pretty common situation. Before I knew about the tasks, I tried to do this manually. But I have very little knowledge of the insides of the runner, so it’s pretty hard to figure out exactly how to do that. Luckily, there is a task for that. This will save you a lot of time.

    Secure files

    Speaking of SSH keys (or other secrets that need to be injected and stored securely), the Secure files feature is the answer. It’s really easy to insert those secure files into your runner context, and tasks such as the earlier mentioned InstallSSHKey@0 task allow you to simply reference the secure file to use. Again, this will save you a lot of time and frustration.
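    To illustrate, this is roughly what that looks like in the YAML. This is a sketch: deploy_key would be the name of the secure file you uploaded, and known_hosts_entry a pipeline variable holding the host key of your Git server.

    steps:
      # Installs the SSH key from the Secure files library into the runner
      - task: InstallSSHKey@0
        inputs:
          knownHostsEntry: $(known_hosts_entry)
          sshKeySecureFile: 'deploy_key'

      # Composer can now fetch the private packages over SSH
      - script: composer install --no-interaction
        displayName: 'Install dependencies'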

    Similar to secure files, there are also variables that you can configure, which are stored securely and can then be used in the azure-pipelines.yml file.

    Conditions

    By default the whole pipeline is triggered on each push of each branch. You can limit when things are run by using triggers. But there is another way of limiting when things are run: by using conditions. Triggers are configured at the top level and so apply to your whole pipeline configuration. But sometimes you want certain stages, jobs or steps to run only in specific situations. For instance, you only want to build and push the image when all previous steps have succeeded and only for your main branch.
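    For reference, a top-level trigger that limits the whole pipeline to a couple of branches looks roughly like this (a sketch; the branch names are placeholders):

    trigger:
      branches:
        include:
          - main
          - releases/*

    A trigger like this still applies to the pipeline as a whole, though.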

    This is where conditions come in. Let’s take the above example of wanting to run a stage only when all previous stages have succeeded and only when the branch is main. You can add a relatively simple condition to do that:

    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))

    By adding the above to a stage, job or step, that item will in this case only be run when previous items have succeeded (succeeded()), and when the branch this runs on (Build.SourceBranch) is main (refs/heads/main).
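    To show where the condition goes, a stage using it could look roughly like this (a sketch; the stage and job names and the build command are placeholders):

    stages:
      - stage: BuildImage
        condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
        jobs:
          - job: Build
            steps:
              - script: docker build -t my-app .
                displayName: 'Build image'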

    And you can start doing pretty fancy stuff with this. There’s a lot of logic you can add, based on variables, attributes of the current build, success or failure of previous steps, etc.

    This is pretty cool

    My experience with Microsoft stuff has always been a bit hit or miss, but so far I must say I’m actually quite impressed by Azure Pipelines and the ease and speed with which it all works. And not just that: their Git repository hosting in general works pretty well (except that pipelines work on branches and are not directly linked to pull requests, which is unfortunate). Their Jira replacement also seems to work pretty well. So yes, I quite like how this works.

  • Explicit code

    August 8, 2025
    code, explicit code, inclusive code, php

    One of the most discussed subjects during code reviews or in projects is not a functional thing but more a style thing: How does the code look? What coding style do we adopt?

    And while it feels like mostly a cosmetic thing, the code style is actually quite important. It determines not just how you write code, but also how you can read it. And since you usually read the code a lot more often than you write it, this is quite important.

    The race for shorter code

    More and more I see that developers opt for the shortest bit of code they can possibly write. And PHP as a language is also changing to make things a lot shorter. Think of the Null Coalescing Assignment Operator or the Pipe Operator, for instance.

    Now in essence, of course, the longing for shorter code can be understood. A lazy developer is a good developer. And if anything, wanting to type fewer characters is a great property of a lazy developer.

    Write once, read often

    Up to a certain point I can follow the idea of a lazy developer. Hell, I’m a lazy developer. However, as I mentioned before, you read the code a lot more than you write it. And if you think about that, it’s much more important to optimize your code for reading than it is for writing.

    Combine that with the fact that modern tooling, with IDEs that help us write the code, makes it very easy to write code. With templates, macros and auto-complete, we don’t actually have to type all the characters ourselves anymore. Our laziness is supported by our tooling. So do we really need to shorten our code that much anymore?

    Optimize for reading

    I don’t know about you, but I spend a lot of time reading code. Whether that is brand new code or older legacy code, whether I need to modernize a codebase or simply implement a new feature in some existing code, I need to read what is there, determine what it does, and then make whatever change I was going to make. And usually this involves stepping into called methods to see what they do, sometimes several levels deep. The most important thing for me, therefore, is to be able to quickly read what code does. How it behaves, how it alters values, what it returns.

    Let’s borrow an example from the pipe operator RFC:

    function getUsers(): array {
        return [
            new User('root', isAdmin: true),
            new User('john.doe', isAdmin: false),
        ];
    }
     
    function isAdmin(User $user): bool {
        return $user->isAdmin;
    }

    So far so good, right? Two functions. Now to figure out how many users are admins, I’d write a pretty simple bit of code:

    $numberOfAdmins = 0;
    
    foreach (getUsers() as $user) {
        if (isAdmin($user) === true) {
            $numberOfAdmins++;
        }
    }

    The new pipe operator would turn this into the following piece of code:

    $numberOfAdmins = getUsers()
        |> fn ($list) => array_filter($list, isAdmin(...)) 
        |> count(...);

    Granted, this is indeed shorter. But does this really make it more readable? I understand that this is subjective, but if you just take a quick glance, which gives you more information in less time, the top or the bottom code snippet?

    Let’s borrow another example from an RFC, this time the earlier referenced Null Coalescing Assignment Operator. This change to the PHP language is a shortening of an earlier syntax shortening. Old man speaking: Back in the old days, we’d write this bit of code as follows:

    if ($this->request->data['comments']['user_id'] === null) {
        $this->request->data['comments']['user_id'] = 'value';
    }

    Earlier on, this was shortened to:

    $this->request->data['comments']['user_id'] = $this->request->data['comments']['user_id'] ?? 'value';

    and now, it’s become:

    $this->request->data['comments']['user_id'] ??= 'value';

    I would like to repeat my question from the last set of code snippets: does this really make it more readable? I understand that this is subjective, but if you just take a quick glance, which gives you more information in less time, the top or the bottom code snippet?

    And it doesn’t stop at that. I still encounter the exclamation mark a lot. For instance:

    if (!$variable) {
        // do something
    }

    That exclamation mark is easily overlooked when quickly scanning over code. And how much more typing does it take to write this instead:

    if ($variable === false) {
        // do something
    }

    Inclusive coding

    Even if you don’t have an issue with reading these shorter versions, it might be good to adopt a more explicit style of coding. Because most probably, you won’t be the only person reading the code. And the shorter the code and the more complex the syntax, the heavier the mental load on the developers reading it. Not just you (or you in six months, after focussing on a lot of other stuff), but also other developers: those under high stress, those dealing with mental health issues, junior developers who just don’t have that much experience yet, and so on. By keeping the code explicit and basic, you help those developers understand what is happening. Doing so will make your code more inclusive.

    Make your code more explicit

    I would love to see more people adopting more explicit coding styles. Keep in mind that someone, possibly you, might be reading this code in a few months or years time, trying to figure out what’s going on. Do you really want them to have to take a long time to understand what is happening?

  • The lost art of training?

    July 4, 2025
    learning, lessons, php, training

    Disclaimer: I deliver courses and organize training sessions myself as part of Ingewikkeld Trainingen. As such, I have written this with a certain bias.

    At the risk of sounding old: when I was young, if you wanted to learn something, you basically had two options: read a book and start self-teaching, or book a training session and learn through that.

    Since then, a lot has changed. And that’s a good thing. Video tutorials, podcasts, blogs, magazines, monthly usergroups, conferences, even LLMs… there are so many ways to learn these days. It’s fantastic! Because not everyone learns well from reading a book or just trying something. Not everyone learns well from attending a training session.

    What I’ve noticed is that all these new ways of learning have caused the original way of learning to almost be lost. Having a teacher come in to teach a course or booking a seat on a classroom training. It’s something that rarely happens anymore. And I can sort of see why: Booking a course means committing one, two or even three(+) days to that course. It costs a set amount of money (either per student or per course). It’s cheaper to recommend some podcasts, get a subscription to Udemy or give someone a ChatGPT subscription. That also takes less time. I get that.

    Advantages of the course

    Actually attending a physical course with a trainer teaching you things does have certain advantages that all the other forms of learning don’t have. Let’s have a look at four of those advantages.

    1. Focus

    A lot of things are happening in your work, your life and the world. There are a lot of things constantly asking for our attention. Whether that is your manager, your social media notifications, your team or the latest BREAKING NEWS, keeping focus on learning is hard. And yet focussing on the topic at hand is instrumental in actually learning something. As Cal Newport wrote:

    To remain valuable in our economy, therefore, you must master the art of quickly learning complicated things. This task requires deep work. If you don’t cultivate this ability, you’re likely to fall behind as technology advances.

    A physical teacher cannot simply be paused every time something asks for your attention. They can also see you and call you out on any distractions they might notice, adding motivation to pay attention and focus on the material you are learning.

    2. Interactivity

    Most of the “modern” training options have no interactivity. There is material, and you can consume that material. You cannot ask questions or discuss certain topics. You get what you get. This can be nice because it’s predictable, but once you don’t understand something or would like to dive deeper into something, you’ll have to figure that out by yourself.

    Having someone that delivers the content also allows you to ask questions, get clarification or even steer the content that you’re given into a certain direction (within the limits of the course).

    This can be invaluable in understanding the topic and not getting stuck somewhere without knowing how to move forward.

    3. Customization

    In line with the previous point, customization is also great. With “standard” content such as a podcast, video tutorial or book, you get what you get. But when you book a training course for your team, the trainer will be able to prepare custom content for your team. If you book a PHPUnit Masterclass and your team has a pre-existing codebase, you can ask the trainer to use that for the exercises on testing untestable code instead of the default exercises. If you book a training on Docker and Kubernetes and your company uses a specific cloud provider, you can ask for the course to be customized for that specific platform. This is priceless: your team is getting exactly the content they need.

    4. The hallway track

    Do not underestimate the informal contact different students have during breaks when attending a training course. While the formal content of the course is of course very important, the social aspect of sharing experiences, lessons learned and also non-course-related topics is also very important. Your team can reflect, learn from each other and also get to know each other better. If you booked seats on an external classroom training, they can also do this with people with completely different backgrounds and experiences, which could make the learning experience even broader.

    Book a course?

    Nope. I’m not going to tell you that you should now immediately book a course. First of all, in recent years it seems that, especially in the programming world, courses have become rarer. They’re not offered as much anymore, unfortunately. But what I do want to ask is that you seriously consider booking a course if one is available for the topic that you or your team wants or needs to learn more about.

  • Migrating to e/OS

    May 30, 2025
    android, apple, e/OS, fairphone, gadget bridge, ios, pinetime, pocket casts, technology

    For as long as I can remember, I’ve been an Apple Fanboi. Well, no, not that long, but ever since I started working at Ibuildings in the ’00s and got my first Mac, I was sold. Unfortunately, at some point the laptops started becoming less interesting for a power user like me (and also way too expensive), but for my phone I stuck with Apple because iOS was far superior to Android in terms of UX and consistency.

    In recent times, however, I’ve been becoming more privacy aware. Aside from that, with the current geopolitical situation and the “mightiest country in the world” being led by an unstable and unpredictable leader, I want to reduce my dependency on (US-based) big tech.

    Some months ago I got introduced to the de-Googled GrapheneOS and e/OS. I got to try out e/OS on a secondary phone for a while and had to admit: Android has improved a lot since the last time I played with it. It compares quite well with iOS these days in terms of usability. So that’s when I started considering the move.

    It’s not an easy move though. Migrating from one platform to the other requires some planning and serious thought. When you’ve invested so many years into a platform, switching to another means figuring out how to replace certain platform-specific features. To my surprise, however, this turned out to not be that hard. Over time I’ve switched mostly to apps that support both iOS and Android, and I’ve been storing files in, for instance, Proton Drive rather than iCloud, so… hey, switching might not be as hard as I thought.

    Fairphone

    The first thing then is to think of which hardware to get. I was using an iPhone 12 mini, which is a very small phone, and quite quickly I came to the conclusion that it wouldn’t be that easy to get a similar device. Especially since I needed to keep in mind the device had to be compatible with e/OS. I preferred a phone that would also have official support instead of community support so that I could use the official installer. A sustainable and European company would also be nice. I’d heard a lot of good things about Fairphone and given their focus is on sustainability and right to repair, and they’re Amsterdam-based… that seemed like a fine choice.

    Installing e/OS

    My previous experience with installing e/OS on an old Samsung device was horrible. Hence my mention of wanting to use an officially supported device. The Fairphone I ordered came with a standard Android instead of e/OS, but hey, I’ve gone through the ordeal of installing it on that Samsung device, I can do this.

    So, connect phone to laptop, go through the setup steps so that I can start using the official installer, go through the first couple of steps… so far so good. One step was confusing: the e/OS documentation mentions having to unlock the bootloader, but as I did that it asked for a code, which the documentation did not mention anywhere. Turns out this is a Fairphone-specific thing. Fairphone offers a tool to calculate that code based on your IMEI and serial number.

    The phone is recognized and I can connect to it. But at the second point where I need to connect with the laptop… nope. It didn’t connect. Searching around a bit, this turned out to be related to the phone being locked. I found this blogpost and by just going through the first couple of steps (up to and including the fastboot flashing unlock_critical step), I got it to work. Now the second connect step of the installer did work.

    One thing to note in the official installer: Your progress is usually at the bottom of the screen, but any errors are shown in tiny letters at the top of the screen. So while you might be waiting for things to finish, there might already be errors. Keep your eyes on the top of the screen as well!

    After this, the installer was able to finish all the way to the end, and I had a Fairphone with e/OS. Yay!

    Another thing I noticed, however, is that I could not find anything in the documentation about putting the lock back on after finalizing the install, using the fastboot flashing lock_critical and fastboot flashing lock commands. I did that regardless.

    App lounge

    After booting into e/OS I had another issue. The App Lounge, the app with which you install other apps, would not load in anonymous mode. It would just keep on loading without being able to do anything. Unfortunately clearing the cache and storage, as recommended by the official documentation, did not solve it either. The official documentation suggested that if it couldn’t be solved, to still use a Google account. Which kind of defeats the purpose of using a de-Googled Android, imho. On the Internet I found some people who suggested just waiting a bit, however, and a friend made the same recommendation. So I waited and lo and behold: it started working. I’m still not sure why it didn’t work before, but who am I to complain.

    Migrating

    Next step: migration. I had to install a whole bunch of apps of course. The first one was my password manager, because after installing all those apps, I would have to log in to most of them. This was a boring but very straightforward task. A big shout-out to Pocket Casts that after I logged in even remembered exactly where I was in the podcast I was listening to. Talk about a seamless migration experience!

    Contacts

    One thing I was quite scared of was how to ensure my contacts were migrated. Turns out that fear was based on nothing. The e/OS documentation offers a very simple tutorial: Download a vcf-file from iCloud, transfer that to the new phone, then import that file into your Contacts app. Everything was transferred!

    The watch

    My trusty iPhone had a trusty companion: the Apple Watch. In my evaluation of my Apple usage, I’d come to the conclusion that while the Apple Watch can do a whole lot of things, all I really used it for on a daily basis was keeping track of my exercise and getting notifications of important things happening on my phone. I don’t really need such a complex watch for that. After looking around I found the PineTime watch: a very simple “sort of smart” watch. After posting on Mastodon about this I got a response from someone one village away who had one lying around that I could test-drive. I’ve been wearing it for three days now and I haven’t even had to charge it yet! And yes, this is not as fancy as an Apple Watch, but it tracks my steps (not all my exercise, sure, but my steps) and after setting up the Gadgetbridge app on my Fairphone I do get notifications. So hey, this works just fine!

    AirPods

    The only thing I still have from the Apple ecosystem is a set of AirPods. I hardly use those, though, since I also own a pair of Bose QC-35 headphones. And the AirPods connect just fine with the Fairphone, so I have no real need to replace them. And even if I wanted to replace them, Fairphone has a solution for that as well.

    Concluding

    I had expected that escaping the Apple ecosystem would be hard. But, some minor setbacks aside, it was pretty much smooth sailing. Whether I will bump into other things, only time will tell of course, but so far my experience with the Fairphone, with e/OS and with the PineTime has been great!

  • Dutch PHP Conference 2025

    March 18, 2025
    amsterdam, conferences, DPC, php

    With only a few more days to go until Dutch PHP Conference 2025, it’s time to look forward to the conference. DPC is always a good conference (and has always been), but I’m going to put the focus on some talks that I’m really looking forward to.

    Sacred Syntax: Dev Tribes, Programming Languages, and Cultural Code

    One of the best and most entertaining talks at DPC last year was by Serges Goma and was titled Evil Tech: How Devs Became Villains. In it, Serges put focus on ethics in software development in a very fun and accessible way.

    With the great way that topic was approached I’m really looking forward to Serges’ take on tribalism, which is the subject of the talk this year!

    Small Is Beautiful: Microstacks Or Megadependencies

    Bert Hubert has been prominent in the media recently, talking about the EU’s dependency on US-based cloud providers, the risks involved, and even the potential unlawfulness of doing that. However, Bert is also someone with a long history in tech, being the founder of PowerDNS.

    In his closing keynote, he will talk about microstacks and megadependencies. I’m really curious to hear about this from Bert.

    Don’t Use AI! (For Everything)

    After my recent blogpost on AI I got a response from Ivo Jansch, one of the organizers of DPC but also a really smart person I’ve enjoyed working with in the past, recommending I attend this talk by Willem Hoogervorst. I am really looking forward to the considerations that Willem will be sharing on when to use AI and when not to.

    Parallel Futures: Unlocking Multithreading In PHP

    Multithreading and PHP is not a good combination? Apparently it is! Florian Engelhardt will be talking about ext-parallel and I do want to hear more. I’m intrigued!

    Our tests instability Prevent Us From Delivering

    I’ve seen this situation with several of my customers: tests that can not be trusted. Either tests turned out green when something was wrong, or tests would be red while everything was fine. I’m looking forward to hearing from Sofia Lescano Carroll on what can lead to this and how to mitigate this.

  • Post-mortem

    March 14, 2025
    accountability, development, incidents, post-mortem, programming

    It is hard to imagine a world where nothing goes wrong. Especially in software development, which is not an exact science, things will go wrong. As far as I am aware, no definitive research has been done on this, and different sources give different numbers: Security Week talks about 0.6 bugs per 1000 lines of code, while Gray Hat Hacking mentions 5-50 bugs per 1000 lines of code. I am sure numbers like these also depend on your QA process. But it’s impossible to write bug-free code.

    So when things inevitably go wrong and your production environment goes down or errors out, it is important to figure out what went wrong. If you know what went wrong, you can figure out how to prevent that issue the next time. Part of that is a good post-mortem. A post-mortem usually includes a meeting where the event is discussed openly and freely, and a written report of the findings (a summary of which you could and should send to your customers).

    In the past days I’ve seen this blogpost from Stay Saasy do the rounds on social media and in communities. As I already said on Mastodon, I couldn’t disagree more. I feel the need to expand on just that statement, so I’ll focus on some statements from the blogpost and why I disagree so much.

    Frequency

    The first thing that I noticed in the blogpost is an assumption that shocked me a bit:

    Many companies do weekly incident reviews

    Hold up. Weekly? I realize the statistics vary wildly, and if you indeed have 50 bugs per 1000 lines of code you’ll have a lot of bugs, but I would hope that you have a QA process that weeds out most bugs. I am used to having several steps between writing code and it going to production. Those may include:

    • Code reviews by other developers
    • Static analysis tools
    • Automated tests (unit tests and functional tests)
    • Manual tests by QA
    • Acceptance tests by customers

    Let’s go from that worst-case number of 50/1000. I would expect, with the above steps, that the majority of bugs are therefore caught before the code even ends up on production servers. If this is true, why would you have weekly incident reviews? I mean, that’s OK if it is indeed needed, but if you need weekly incident reviews, I’d combine looking at the incident with looking at your overall QA process, because something is wrong then.

    Is it somebody’s fault?

    In the blogpost, Stay Saasy states that it is always somebody’s fault.

    it must be primarily one person or one team’s fault

    No. Just no. If you look back at the different ways you can catch bugs that I described earlier, you can already see that it is impossible to blame a single person. One or more developers write the code, one or more other developers review the code, one or more people set up and configured the static analysis tools and one or more people interpreted the results of those, the tests were written, the QA team did manual tests where needed, and the customer did acceptance testing. Bugs going into production is, aside from just being something that happens sometimes, a shared responsibility of everyone. It is impossible and unfair to blame a single person or even a single team.

    Accountability

    It feels like Stay Saasy mixes up blameless post-mortems with non-accountability. But these are two different things, with two different motivations. The post-mortem is not about laying blame. It is about figuring out what went wrong and how we can prevent it in the future. It is a group effort of all involved. The accountability part is something that is best handled in a private meeting between the people who were involved in the cause of the issue. Mixing these two up would indeed be a mistake, which is why blameful post-mortems are such a bad idea.

    On the flip side, if you really messed up, you might get fired. If we said we’re in a code freeze and you YOLOed a release to try to push out a project to game the performance assessment round and you took out prod for 2 days, you will be blamed and you will be fired.

    While I agree up to a certain point with this statement, I think in this case you might also want to fire the IT manager, CTO or whoever is responsible for the fact that an individual developer could even YOLO a release and push it to production during a code freeze. Again, have a look at the process please.

    But yes, even if it is possible to do this on your own, you should not actually do this. So if you do this, it might warrant repercussions up to and including termination of your contract.

    Fear as an incentive

    There is one main incentive that all employees have – act with high integrity or get fired.

    I can’t even. Really. If fear is your only tactic to get people to behave, you should really have a good look at your hiring policy, because you’re hiring the wrong people.

    In every role where I was (partially) responsible for hiring people, my main focus would be to hire people with the right mindset. Skills were not even the main criterion; mindset was. People who are highly motivated to write quality code, who will take the extra effort of double-checking their code, who welcome comments from other developers that will improve the code. People who are always willing to learn new things that will improve their skills. You do not need fear to keep people in check when you hire the right people, because they are already motivated by their own yearning to write good code, to deliver high-quality software.

    So how to post-mortem?

    It might not be a surprise to you, after everything you’ve just read, that I am a big supporter of blameless post-mortems. Why? Because of the goal of a post-mortem. The main goal (in my humble opinion) is to find out what went wrong, and to brainstorm about ways to prevent it from happening again. There are four main phases in a post-mortem process:

    • Figure out what went wrong
    • Think of ways to prevent this from happening again
    • Assign follow-up tasks
    • Document the meeting results

    Figure out what went wrong

    The first phase of the meeting is to figure out what went wrong. This phase should be about facts, and facts alone. Figure out which part of your code or infra was the root cause of the incident. Focus not just on that offending part of your software, but also on how it got there. Reproduce the path of the offending bit from the moment it was written to the moment things went wrong.

    In the first phase, it is OK to use names of team members, but only in factual statements. So Stefan started working on story ABC-123 to implement this feature, and wrote that code or Tessa took the story from the Ready For Test column and started running through testcases 1, 2 and 5. Avoid opinions or blame. Everyone should be free to add details.

    Think of ways to prevent this from happening again

    Now that you have your facts straight, you can look at the individual steps the cause took from the keyboard to your production server, and figure out at which steps someone or something could’ve prevented the cause from proceeding to the next step. It can also be worth it to not just look at individual steps, but also the big picture of your process to identify if there are things to be changed in multiple steps to prevent issues.

    Initially, in this phase, it works well to just brainstorm: put the wildest ideas on the table, and then look at which have the most impact and/or take the least effort to implement. Together, you then identify which steps to take to implement the most promising measures to prevent the issue in the future.

    Let everyone speak in this meeting. Involve your junior developers, your product manager, your architect, your QA and whoever else is a stakeholder or in another way involved in this. You might be surprised how creative people can get when it comes to preventing incidents.

    Assign follow-up tasks

    Now that you have a list of tasks to do to prevent future issues, it’s time to assign who will do what. Someone (usually a lead dev or team lead, sometimes a scrum master, manager or CTO) will follow up on whether the tasks are done, to make sure that we don’t just talk about how to fix things, but we actually do.

    Document the meeting results

    Aside from talking about things and preventing future issues, you should also document your findings. Pretty extensively for internal usage, but preferably also in a summarized way for publication. Customers will notice issues, and even if they don’t notice, they will want to be informed. Honest and transparent communication about the things that go wrong will help your customers trust you more: you show that you care about problems, and that you do all you can to solve them and to prevent them in the future. Things will go wrong, that’s inherent in software development. The way you handle the situation when things go wrong is where you can show your quality. In all documentation, try to avoid blaming as well. That isn’t important. What’s important is that you care and put in effort to prevent future issues.

    So what about accountability?

    Blameless post-mortems do not stop you from also holding people accountable for the things they do. If someone messes up, they should be spoken to directly. But that should not happen in a lynch-mob setting; preferably it happens one-on-one, where two individuals evaluate the situation. And yes, there can be consequences. The most important thing is that the accountability is completely separate from the post-mortem. It is not the focus of a post-mortem to hold someone accountable. That is a completely separate process.

  • On AI

    February 28, 2025
    ai, chatgpt, climate crisis, llm, openai

    AI. It is the next big tech hype. AI stands for Artificial Intelligence, which these days mostly comes in the form of Large Language Models (LLMs). The most popular one these days seems to be ChatGPT from OpenAI, although there are many others already being widely used as well.

    Like with a lot of previous tech hypes, a lot of people have been jumping on the hype train. And I can’t really blame them either. It sounds cool, no, it is cool to play with such tech. Technology does a lot of cool things and can make our life a lot easier. I can not deny that.

    But as with a lot of previous tech hypes, we don’t really stop to think about why we would want to use the tech. And also, why we would not want to use it. Blockchain is a really cool technology which has its uses, but when it became a hype a lot of people and companies started using it for use cases where it really wasn’t necessary. Despite all the criticism of the energy usage of cryptocurrency, new currencies are still being started on a regular basis. And while the concept behind cryptocurrency is really good, the downsides are ignored. And the same is happening now with AI.

    The downsides we like to ignore

    There has already been a lot of criticism of using AI/LLMs. I probably won’t be sharing anything new. But I’d like to summarize for you some reasons why we should really be careful with using AI.

    Copyright

    The major players in the AI world right now have been trained on any data they could find. This includes copyrighted material. There is a good chance that it has been trained on your blog, my blog, newspaper material, even full e-books. When using an image generation AI, there’s a good chance it was trained with material by artists, designers, and other creators that have not been paid for the usage of that material, nor have they given permission for their material to be used. And to make it even worse, nobody is attributed. Which I understand, because when you combine a lot of sources to generate something new, it’s hard to attribute who were the original sources. But they are taking a lot, and then earning money on it.

    Misinformation

    Because of the way the big players scraped basically all information from the Internet and other sources, they’ve also been scraping pure nonsense. When the input is inaccurate, the output will be as well. There have been tons of examples on social media and in articles about inaccuracies, and even when you confront the AI with an incorrect answer, it’ll come up with more nonsense.

    In a world where misinformation is already a big issue, where science is no longer seen as one of the most accurate sources of information, but rather people rely on information from social media or “what their gut tells them”, we really don’t need another source of misinformation.

    Destroying our world

    I am sorry to say this, but AI is destroying our world. Datacenters for AI are using an incredible amount of power, and with that they are both contributing to an energy crisis and causing a lot of emissions. And there is more, because it’s not just the power. It’s also the resources that are needed to make all the servers that run the AI, the fuel that is needed to get the fuel for the backup generators to the datacenters, the amount of water being used by datacenters, and the list goes on. Watch this talk from CCC for a bit more context.

    There have been claims that AI would solve the climate crisis, but one can wonder if this is true. Besides, we’re a bit late now. Perhaps if we had made this energy investment some decades ago, it might have been a wise one. We’re in the middle of a climate crisis and we should really be focusing on lowering emissions right now. There is not really room for investments like this at this point.

    If we’re even working on reviving old nuclear power plants simply to power AI datacenters, something is terribly wrong. Especially when it’s a power plant that almost caused a major nuclear disaster. And while nuclear power is often seen as a clean power solution, that opinion is highly contested due to the need for uranium (which has to be mined in a polluting process) and the nuclear waste is also a big problem. Not to speak of the problems in case of a disaster.

    Abuse

    One thing I had not seen before until I started digging into this is the abuse of cheap labor. This does make me wonder: How many other big players in AI do this? It is hard to find out, of course, but this is something we should at least weigh into the decision whether to use AI.

    So should we stay away from AI?

    It is easy to just say yes to that question, but that would be stupid, because AI does have advantages. Certainly, there is nuance in this decision.

    AI can do things much faster

    A good example of AI doing things a lot faster is the way AI has been used in medicine. A trial in Utrecht, for instance, found that medical personnel spend a lot less time on analysis, experience their work as more enjoyable, and the cost goes down as well. And it gets even better, because there are also AI tools that can even predict who gets cancer. These specialized tools trained for specifically this purpose can only be seen as an asset to the field of medicine.

    Productivity increases

    Several people I’ve talked to who use AI in their work as software developers have mentioned the productivity boost they get from their respective tools. Although most do not think you should let AI generate code (as there have been a lot of issues with AI-generated code so far), it can be used for other things such as debugging but also summarizing documentation or even writing documentation. The amount of time you have to invest in checking the output of an AI is a lot less than actually having to do the work yourself.

    I do want to add a bit of nuance to this, however. The constant focus on productivity is, in my very humble opinion, one of the biggest downsides of (late stage) capitalism, where “good things take time” becomes less important. From a company point of view, this makes sense: If you can lower the cost of work by paying a relatively small amount of money for tools that increase productivity, that makes the financial situation of the company better. However, these costs never include the cost to the environment. If more companies would make more ethical decisions when it comes to the resources they use, this would make the decision process very different.

    Less repetition and human errors

    This goes for any form of automation of course: when you automate repetitive tasks, your work gets more enjoyable. Also, potentially related to this, you’re less prone to making errors. AI can do that automation. I recently saw a news item about how a medical insurance company in The Netherlands uses AI to “pre-process” international claims. International claims don’t follow their normal standards, so they used to have to process all those claims manually. Now, those claims go through an AI that analyses the claim and tries to identify all the important data, so that the people handling the claims only have to check whether the AI identified it correctly. After that, they can focus on handling the claim. This reduces a lot of human errors and repetition of boring work.

    So now what?

    Of course everyone has to come to their own conclusion on what to do with this. And there’s a lot more resources out there with information to consider. I have come to some conclusions for myself. Let me share those for your consideration.

    Specialized AI over generic AI

    AI can be really useful. Think of the medical examples I mentioned above. Because of the specialization, those systems have a lot lower energy consumption for training and running purposes. The main problem with generic AI is that because it is unclear what it should do, it is trained to do and understand everything. A lot of the training time needed might never actually be used.

    While for me the climate crisis is the most important reason to be critical of AI, I can also not ignore the copyright issues, and the problems with misinformation. AI may seem smart, but is actually quite dumb. It will not realize when it says something stupid. So with a high energy (and other pollution) investment, you get relatively unreliable results. And that’s without even mentioning potential bias. The model needs to be trained, but someone decides on which data it is getting trained.

    Experimentation is worth it

    Just like with any other technology, it is worth experimenting with AI. Last year I did experiment (yes, with ChatGPT) to get an idea of how AI can help. And while I found the usage helpful, I came to the conclusion that most of what it did for me at that point was make my life slightly easier. It did add value, but not enough for me to justify the problems I described above.

    What I did not experiment with, though, but where I do see a potential, is in small, energy-efficient models that you can run locally. But that again also comes back to my previous point: Those can be trained only for a specific purpose.

    Is there another option?

    One of the most important questions everyone considering AI should ask themselves is: Are there no other options?

    One thing I’ve seen happen a lot lately is the “everyone is jumping on the AI bandwagon, so we must also do that!” attitude, without thinking about the why of implementing AI. A lot of applications of AI that I have seen have been to cover up other flaws in software. Instead of using AI to find data in your spaghetti database, you could also implement better search functionality using tools that use less energy and that have been made specifically for search. ElasticSearch, Algolia and MeiliSearch come to mind, for instance. Some of those have been implementing some AI as well, but again: that is very specific and specialized.

    I’ve also heard people say “task x is hard right now to accomplish and AI can help”. In some situations, sure, AI is a good solution. But in a lot of situations, it might be your own application that just needs improving. Talk to your UX-er and/or designer and see how you can improve on your software.

    An important question to factor into your considerations on whether or not to use AI is: what is the impact on the rest of the world? You’ve read some of the downsides now; factor those into your decision on whether you need AI.

    Long story short: Always be critical and never ever implement AI just because everyone is doing it.

    Concluding

    Please, do not start using AI for everything you need. Be very critical in every situation where the idea comes up to implement AI. If you do, ensure you keep in mind not just the cost to your company, but also the cost to the rest of the world. When using AI, focus on specialized and optimized versions, preferably those that can be run locally on energy-efficient computers. And always, but really… always be critical of AI input and output.

  • Expanding on our training offering

    November 1, 2024
    ddd, docker, git, ingewikkeld, knowledge sharing, kubernetes, laravel, symfony, symfonycon, training

    I don’t know if it matters that I’m the son of two teachers, but training is in my DNA, knowledge sharing is in my DNA. From the moment I started learning PHP based on a book and scripts I downloaded from the Internet, I’ve tried to help other people who ran into issues that I might have the solution for. I was involved in the founding of the Dutch PHP Usergroup and later PHPBenelux, and local usergroups like PHPAmersfoort and PHP.FRL.

    When I started working for Dutch Open Projects, we started organizing events there. First PHPCamp and later SymfonyCamp. I did my first conference talk at Dutch PHP Conference on Symfony, and have since then delivered talks and workshops at several conferences including SymfonyDay Cologne, Dutch PHP Conference and more. We’ve done in-house training courses from Ingewikkeld at several companies in The Netherlands and even outside of The Netherlands.

    Mike, Marjolein and I were brainstorming about Ingewikkeld and what direction to take, and training popped up as something we could take some steps in. So we did. We partnered up with the fantastic In2it and started Ingewikkeld Trainingen. Our next step in offering training courses to companies. With topics such as Symfony, Laravel, Docker/Kubernetes and Git, I think we’re offering a solid base for many development teams.

    One of the courses we’re offering is the Symfony and DDD in practice course that I’m also doing at SymfonyCon. It was sold out earlier but they were able to squeeze in a few extra seats (so grab those tickets there if you want to attend at SymfonyCon). If you can’t or don’t want to attend at SymfonyCon, you can also book this course as on-site training for your team. Get in touch with us if you’re interested in that!

    If you had a look at our offering but you can’t find what you’re looking for, then do get in touch with us! We’d love to discuss your needs and see if we can do a custom course for your team!

    I’m really, really excited about this next step in our training offering.

  • Quick tip: Rector error T_CONSTANT_ENCAPSED_STRING

    October 15, 2024
    error, php, rector, T_CONSTANT_ENCAPSED_STRING

    OK, here’s a quick one that got me stuck for some time. I made a change to a file and, you’ll never guess it: I made a little boo-boo. I didn’t notice it, but our Rector did. It gave me the following error:

    [ERROR] Could not process
             "path/to/file.php" file,
             due to:
             "Syntax error, unexpected T_CONSTANT_ENCAPSED_STRING415". On line: 82

    Given this error, I started checking the file at line 82. But there was nothing bad there. It all seemed fine. I didn’t understand why Rector was erroring. I made some other changes to the same file, but the error kept occurring on line 82. I was stumped!

    Until I noticed the error changing. Not the mentioned line number, but the actual error: the number in T_CONSTANT_ENCAPSED_STRING415 changed. And that’s when it clicked: the number there is the actual offending line number. I suspect line 82 is the place in the Rector code where the exception is being thrown. The 415 is the actual line number. And indeed, when I checked that line, I immediately saw the boo-boo I made.
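    To give an idea of the kind of slip that triggers this error: a string literal in a spot where PHP expects a comma or an operator. The snippet below is a made-up example, not the actual code from that file:

    $roles = [
        'admin',
        'editor'       // the missing comma here...
        'subscriber',  // ...makes PHP report this line as "unexpected T_CONSTANT_ENCAPSED_STRING"
    ];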

  • API Platform Con 2024: About that lambo…

    September 21, 2024

    It’s over again. And it’s a bittersweet feeling. I got to meet many friends again, old and new. I’ve learned quite a few things. But it’s over, and that’s sad. Let’s have a look at the highlights of the conference.

    Laravel

    This was obviously the biggest news out there: During the keynote of Kevin Dunglas it was announced that API Platform, with the release of version 4 which is available now, not only supports Symfony, but also supports Laravel. And there are plans to start supporting more systems as well in the future. So it could well be that API Platform can become available for Yii, CakePHP, perhaps even WordPress or Drupal. All API Platform components are available as a stand-alone PHP package, so in theory it could be integrated into anything. And to jump immediately to the last talk of the conference as well, self-proclaimed “Laravel Guy” Steve McDougall compared for us what he used to need to do in Laravel to create an API, and what he can now do. And apparently, if you use Laravel, you get a Lambo. Not my words 😉

    This was by far the biggest news for the conference. And it surely brought on some interesting conversations. I for one am happy that API Platform is now also on Laravel, as I work with Laravel every once in a while as well. This will make creating APIs a lot easier.

    Xdebug, brewing binaries and consuming APIs

    Some other interesting talks were given by Derick Rethans, Boas Falke and Nicolas Grekas. Derick updated us on recent Xdebug developments by demoing a lot of stuff. Boas showed us in 20 minutes how he can brew binaries using FrankenPHP. Check out how he did it! And Nicolas showed some interesting stuff for using Symfony’s HTTP Client.

    Meeting friends

    The hallway track is important at every conference, and so it is at API Platform Con. I was so happy to be able to talk to people like Sergès Goma, Hadrien Cren from Sensio Labs, Łukasz Chruściel, Florian Engelhardt, Allison Guilhem, Derick Rethans. Special mention: Francois Zaninotto. We hadn’t spoken in ages, it was so good to meet him again.

    Big thanks

    Big thanks to the organizers of the API Platform Con. It has been amazing. I feel very welcome here in the beautiful city of Lille. Hopefully see you next year!

    Up next

    Up next is unKonf, which is in Mannheim in two weeks’ time. This promises to be fun!
