skoop.dev

  • Dutch PHP Conference 2025

    March 18, 2025
    amsterdam, conferences, DPC, php

    With only a few more days to go until Dutch PHP Conference 2025, it’s time to look forward to the conference. DPC has always been a good conference, but here I want to highlight some talks that I’m really looking forward to.

    Sacred Syntax: Dev Tribes, Programming Languages, and Cultural Code

    One of the best and most entertaining talks at DPC last year was by Serges Goma and was titled Evil Tech: How Devs Became Villains. In it, Serges focused on ethics in software development in a very fun and accessible way.

    Given how well that topic was approached, I’m really looking forward to Serges’ take on tribalism, the subject of his talk this year!

    Small Is Beautiful: Microstacks Or Megadependencies

    Bert Hubert has been prominent in the media recently, talking about the EU’s dependency on US-based cloud providers: the risks, and even the potential unlawfulness, of that dependency. Bert also has a long history in tech, as the founder of PowerDNS.

    In his closing keynote, he will talk about microstacks and megadependencies. I’m really curious to hear about this from Bert.

    Don’t Use AI! (For Everything)

    After my recent blogpost on AI, Ivo Jansch, one of the organizers of DPC and a really smart person that I’ve enjoyed working with in the past, recommended that I attend this talk by Willem Hoogervorst. I am really looking forward to the considerations Willem will share on when to use AI and when not to.

    Parallel Futures: Unlocking Multithreading In PHP

    Multithreading and PHP are not a good combination? Apparently they are! Florian Engelhardt will be talking about ext-parallel, and I do want to hear more. I’m intrigued!
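    I haven’t tried ext-parallel myself yet, but a minimal sketch of what it enables might look like this (assuming a ZTS build of PHP with the parallel extension installed):

    ```php
    <?php
    // Minimal ext-parallel sketch: run a closure in a separate thread.
    // Requires a ZTS (thread-safe) PHP build with ext-parallel installed.
    use parallel\Runtime;

    $runtime = new Runtime();

    // run() schedules the closure on a new thread and returns a Future.
    $future = $runtime->run(function (): int {
        // Heavy work would go here; the main thread keeps running meanwhile.
        return array_sum(range(1, 1000));
    });

    // value() blocks until the thread has finished and returns the result.
    echo $future->value() . PHP_EOL; // 500500
    ```

    I’m curious to hear from Florian what the real-world caveats are.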

    Our tests instability Prevent Us From Delivering

    I’ve seen this situation with several of my customers: tests that cannot be trusted. Either tests turned out green when something was wrong, or tests were red while everything was fine. I’m looking forward to hearing from Sofia Lescano Carroll about what can lead to this and how to mitigate it.

  • Post-mortem

    March 14, 2025
    accountability, development, incidents, post-mortem, programming

    It is hard to imagine a world where nothing goes wrong. Especially in software development, which is not an exact science, things will go wrong. As far as I am aware, no definitive research has been done on this, and different sources give different numbers: Security Week talks about 0.6 bugs per 1000 lines of code, while Gray Hat Hacking mentions 5-50 bugs per 1000 lines of code. I am sure numbers like this also depend on your QA process. But it is impossible to write bug-free code.

    So when things inevitably go wrong and your production environment goes down or errors out, it is important to figure out what went wrong. If you know what went wrong, you can figure out how to prevent that issue the next time. Part of that is a good post-mortem. A post-mortem usually includes a meeting where the event is discussed openly and freely, and a written report of the findings (a summary of which you could and should send to your customers).

    In the past days I’ve seen this blogpost from Stay Saasy do the rounds on social media and in communities. As I already said on Mastodon, I couldn’t disagree more. I feel the need to expand on just that statement, so I’ll focus on some statements from the blogpost and why I disagree so much.

    Frequency

    The first thing that I noticed in the blogpost is an assumption that shocked me a bit:

    Many companies do weekly incident reviews

    Hold up. Weekly? I realize the statistics vary wildly, and if you indeed have 50 bugs per 1000 lines of code you’ll have a lot of bugs, but I would hope you have a QA process that weeds out most of them. I am used to having several steps between writing code and shipping it to production. That may include:

    • Code reviews by other developers
    • Static analysis tools
    • Automated tests (unit tests and functional tests)
    • Manual tests by QA
    • Acceptance tests by customers

    Let’s go with that worst-case number of 50/1000. With the above steps, I would expect the majority of bugs to be caught before the code even ends up on production servers. If that is true, why would you have weekly incident reviews? That’s OK if it is indeed needed, but if you need weekly incident reviews, I’d combine looking at the incident with looking at your overall QA process, because something is wrong there.

    Is it somebody’s fault?

    In the blogpost, Stay Saasy states that it is always somebody’s fault.

    it must be primarily one person or one team’s fault

    No. Just no. If you look back at the different ways to catch bugs that I described earlier, you can already see that it is impossible to blame a single person. One or more developers write the code, one or more other developers review it, one or more people set up and configured the static analysis tools and interpreted their results, the tests were written, the QA team did manual tests where needed, and the customer did acceptance testing. Bugs going into production is, aside from simply being something that happens sometimes, a shared responsibility of everyone involved. It is impossible and unfair to blame a single person or even a single team.

    Accountability

    It feels like Stay Saasy mixes up blameless post-mortems with non-accountability. But these are two different things, with two different motivations. The post-mortem is not about laying blame. It is about figuring out what went wrong and how to prevent it in the future. It is a group effort of everyone involved. The accountability part is something that is best handled in a private meeting between the people who were involved in the cause of the issue. Mixing these two up would indeed be a mistake, which is why blameful post-mortems are such a bad idea.

    On the flip side, if you really messed up, you might get fired. If we said we’re in a code freeze and you YOLOed a release to try to push out a project to game the performance assessment round and you took out prod for 2 days, you will be blamed and you will be fired.

    While I agree up to a certain point with this statement, I think in this case you might also want to fire the IT manager, CTO or whoever is responsible for the fact that an individual developer could even YOLO a release and push it to production during a code freeze. Again, have a look at the process please.

    But yes, even if it is possible to YOLO a release on your own, you should not actually do it. And if you do, it might warrant repercussions up to and including termination of your contract.

    Fear as an incentive

    There is one main incentive that all employees have – act with high integrity or get fired.

    I can’t even. Really. If fear is your only tactic to get people to behave, you should really have a good look at your hiring policy, because you’re hiring the wrong people.

    In every role where I was (partially) responsible for hiring people, my main focus was to hire people with the right mindset. Skills were not even the main focus; mindset was. People who are highly motivated to write quality code, who put in the extra effort of double-checking their work, who welcome comments from other developers that improve the code. People who are always willing to learn new things that improve their skills. You do not need fear to keep people in check when you hire the right people, because they are already motivated by their own yearning to write good code and to deliver high-quality software.

    So how to post-mortem?

    It might not be a surprise to you, after all you have read above, that I am a big supporter of blameless post-mortems. Why? Because of the goal of a post-mortem. The main goal (in my humble opinion) is to find out what went wrong and to brainstorm about ways to prevent it from happening again. There are four main phases in a post-mortem process:

    • Figure out what went wrong
    • Think of ways to prevent this from happening again
    • Assign follow-up tasks
    • Document the meeting results

    Figure out what went wrong

    The first phase of the meeting is to figure out what went wrong. This phase should be about facts, and facts alone. Figure out which part of your code or infrastructure was the root cause of the incident. Focus not just on that offending part of your software, but also on how it got there. Reproduce the path of the offending bit from the moment it was written to the moment things went wrong.

    In the first phase, it is OK to use names of team members, but only in factual statements. So Stefan started working on story ABC-123 to implement this feature, and wrote that code or Tessa took the story from the Ready For Test column and started running through testcases 1, 2 and 5. Avoid opinions or blame. Everyone should be free to add details.

    Think of ways to prevent this from happening again

    Now that you have your facts straight, you can look at the individual steps the cause took from the keyboard to your production server, and figure out at which steps someone or something could’ve prevented it from proceeding to the next step. It can also be worthwhile to look not just at individual steps, but also at the big picture of your process, to identify whether changes in multiple steps could prevent issues.

    Initially, in this phase, it works well to just brainstorm: put the wildest ideas on the table, and then look at which have the most impact and/or take the least effort to implement. Together, you then identify which steps to take to implement the most promising measures to prevent the issue in the future.

    Let everyone speak in this meeting. Involve your junior developers, your product manager, your architect, your QA and whoever else is a stakeholder or in another way involved in this. You might be surprised how creative people can get when it comes to preventing incidents.

    Assign follow-up tasks

    Now that you have a list of tasks to do to prevent future issues, it’s time to assign who will do what. Someone (usually a lead dev or team lead, sometimes a scrum master, manager or CTO) will follow up on whether the tasks are done, to make sure that we don’t just talk about how to fix things, but actually do them.

    Document the meeting results

    Aside from talking about things and preventing future issues, you should also document your findings: pretty extensively for internal usage, but preferably also in a summarized form for publication. Customers will notice issues, and even if they don’t notice, they will want to be informed. Honest and transparent communication about the things that go wrong will help your customers trust you more: you show that you care about problems, and that you do all you can to solve them and to prevent them in the future. Things will go wrong; that’s inherent to software development. The way you handle the situation when things go wrong is where you can show your quality. In all documentation, try to avoid blaming as well. That isn’t important. What’s important is that you care and put in effort to prevent future issues.

    So what about accountability?

    Blameless post-mortems do not stop you from also holding people accountable for what they do. If someone messes up, they should be spoken to directly. But it should not be a lynch-mob setting; preferably it is a one-on-one setting where two individuals evaluate the situation. And yes, there can be consequences. The most important thing is that the accountability is completely separate from the post-mortem. It is not the focus of a post-mortem to hold someone accountable. That is a completely separate process.

  • On AI

    February 28, 2025
    ai, chatgpt, climate crisis, llm, openai

    AI. It is the next big tech hype. AI stands for Artificial Intelligence, which these days mostly comes in the form of Large Language Models (LLMs). The most popular one these days seems to be ChatGPT from OpenAI, although there are many others already being widely used as well.

    Like with a lot of previous tech hypes, a lot of people have been jumping on the hype train. And I can’t really blame them. It sounds cool; no, it is cool to play with such tech. Technology does a lot of cool things and can make our lives a lot easier. I cannot deny that.

    But as with a lot of previous tech hypes, we don’t really stop to think about why we would want to use the tech, or why we would not want to use it. Blockchain is a really cool technology that has its uses, but when it became a hype a lot of people and companies started using it for use cases where it really wasn’t necessary. Despite all the criticism of the energy usage of cryptocurrency, new currencies are still being started on a regular basis. And while the concept behind cryptocurrency is really good, the downsides are ignored. The same is now happening with AI.

    The downsides we like to ignore

    There has already been a lot of criticism of using AI/LLMs. I probably won’t be sharing anything new. But I’d like to summarize for you some reasons why we should really be careful with using AI.

    Copyright

    The major players in the AI world right now have been trained on any data they could find, including copyrighted material. There is a good chance that they have been trained on your blog, my blog, newspaper material, even full e-books. When using an image generation AI, there’s a good chance it was trained on material by artists, designers, and other creators who have not been paid for the usage of that material, nor have they given permission for their material to be used. And to make it even worse, nobody is attributed. Which I understand, because when you combine a lot of sources to generate something new, it’s hard to attribute the original sources. But they are taking a lot, and then earning money on it.

    Misinformation

    Because the big players scraped basically all information from the Internet and other sources, they’ve also been scraping pure nonsense. When the input is inaccurate, the output will be as well. There have been tons of examples on social media and in articles of inaccuracies, and even when you confront the AI with an incorrectness, it will come up with more nonsense.

    In a world where misinformation is already a big issue, where science is no longer seen as one of the most accurate sources of information, but rather people rely on information from social media or “what their gut tells them”, we really don’t need another source of misinformation.

    Destroying our world

    I am sorry to say this, but AI is destroying our world. Datacenters for AI use an incredible amount of power and with that both contribute to an energy crisis and cause a lot of emissions. And there is more, because it’s not just the power. It’s also the resources needed to make all the servers that run the AI, the fuel needed to get the fuel for the backup generators to the datacenters, the amount of water used by datacenters, and the list goes on. Watch this talk from CCC for a bit more context.

    There have been claims that AI will solve the climate crisis, but one can wonder if this is true. Besides, we’re a bit late now. Perhaps if we had made this energy investment some decades ago, it might have been a wise one. We’re in the middle of a climate crisis, and we should currently focus on lowering emissions. There is not really room for investments like this at this point.

    If we’re even working on reviving old nuclear power plants simply to power AI datacenters, something is terribly wrong. Especially when it’s a power plant that almost caused a major nuclear disaster. And while nuclear power is often seen as a clean power solution, that opinion is highly contested due to the need for uranium (which has to be mined in a polluting process), and nuclear waste is also a big problem. Not to speak of the problems in case of a disaster.

    Abuse

    One thing I had not seen until I started digging into this is the abuse of cheap labor. It does make me wonder: how many other big players in AI do this? It is hard to find out, of course, but it is something we should at least weigh into the decision whether to use AI.

    So should we stay away from AI?

    It is easy to just say yes to that question, but that would be stupid, because AI does have advantages. There is certainly nuance in this decision.

    AI can do things much faster

    A good example of AI doing things a lot faster is the way AI has been used in medicine. A trial in Utrecht, for instance, found that medical personnel spend a lot less time on analysis, experience their work as more enjoyable, and that costs go down as well. And it gets better, because there are also AI tools that can predict who gets cancer. These specialized tools, trained specifically for this purpose, can only be seen as an asset to the field of medicine.

    Productivity increases

    Several people I’ve talked to who use AI in their work as software developers have mentioned the productivity boost they get from their respective tools. Although most do not think you should let AI generate code (as there have been a lot of issues with AI-generated code so far), it can be used for other things, such as debugging, summarizing documentation, or even writing documentation. The amount of time you have to invest in checking the output of an AI is a lot less than doing the work yourself.

    I do want to add a bit of nuance here, however. The constant focus on productivity is, in my very humble opinion, one of the biggest downsides of (late-stage) capitalism, where “good things take time” becomes less important. From a company point of view, this makes sense: if you can lower the cost of work by paying a relatively small amount of money for tools that increase productivity, the financial situation of the company improves. However, these costs never include the cost to the environment. If more companies made more ethical decisions about the resources they use, the decision process would look very different.

    Less repetition and human errors

    This goes for any form of automation, of course: when you automate repetitive tasks, your work gets more enjoyable. Also, potentially related to this, you’re less prone to making errors. AI can do that automation. I recently saw a news item about how a medical insurance company in The Netherlands uses AI to “pre-process” international claims. International claims don’t follow their normal standards, so they used to have to manually process all of them. Now those claims go through an AI that analyses the claim and tries to identify all the important data, so that the people handling the claims only have to check whether the AI identified it correctly. After that, they can focus on handling the claim. This removes a lot of human errors and repetition of boring work.

    So now what?

    Of course everyone has to come to their own conclusion on what to do with this. And there’s a lot more resources out there with information to consider. I have come to some conclusions for myself. Let me share those for your consideration.

    Specialized AI over generic AI

    AI can be really useful. Think of the medical examples I mentioned above. Because of their specialization, those systems consume far less energy for training and running. The main problem with generic AI is that because it is unclear what it should do, it is trained to do and understand everything. A lot of that training might never actually be used.

    While for me the climate crisis is the most important reason to be critical of AI, I can also not ignore the copyright issues and the problems with misinformation. AI may seem smart, but it is actually quite dumb. It will not realize when it says something stupid. So for a high investment in energy (and other pollution), you get relatively unreliable results. And that’s without even mentioning potential bias: the model needs to be trained, but someone decides which data it is trained on.

    Experimentation is worth it

    Just like with any other technology, it is worth experimenting with AI. Last year I did experiment (and yes, I experimented with ChatGPT) to get an idea of how AI can help. And while I found it helpful, I came to the conclusion that most of what it did for me at that point was make my life slightly easier. It did add value, but not enough to justify the problems I described above.

    What I did not experiment with, but where I do see potential, is small, energy-efficient models that you can run locally. But that again comes back to my previous point: those can be trained for one specific purpose.

    Is there another option?

    One of the most important questions everyone considering AI should ask themselves is: Are there no other options?

    One thing I’ve seen happen a lot lately is the everyone is jumping on the AI bandwagon, so we must also do that! without thinking about the why of implementing AI. A lot of the AI applications I have seen were there to cover up other flaws in software. Instead of using AI to find data in your spaghetti database, you could also implement better search functionality using tools that use less energy and that were made specifically for search. ElasticSearch, Algolia and MeiliSearch come to mind, for instance. Some of those have been implementing some AI as well, but again: that is very specific and specialized.

    I’ve also heard people say “task x is hard to accomplish right now and AI can help”. In some situations, sure, AI is a good solution. But in a lot of situations, it might be your own application that just needs improving. Talk to your UX-er and/or designer and see how you can improve your software.

    An important factor in your considerations on whether or not to use AI should be: what is the impact on the rest of the world? You’ve now read some of the downsides; factor those into your decision on whether you need AI.

    Long story short: Always be critical and never ever implement AI just because everyone is doing it.

    Concluding

    Please, do not start using AI for everything. Be very critical in every situation where the idea comes up to implement AI. If you do use it, keep in mind not just the cost to your company, but also the cost to the rest of the world. When using AI, focus on specialized and optimized models, preferably ones that can run locally on energy-efficient computers. And always, but really… always be critical of AI input and output.

  • Expanding on our training offering

    November 1, 2024
    ddd, docker, git, ingewikkeld, knowledge sharing, kubernetes, laravel, symfony, symfonycon, training

    I don’t know if it matters that I’m the son of two teachers, but training is in my DNA, knowledge sharing is in my DNA. From the moment I started learning PHP based on a book and scripts I downloaded from the Internet, I’ve tried to help other people who ran into issues that I might have the solution for. I was involved in the founding of the Dutch PHP Usergroup and later PHPBenelux, and local usergroups like PHPAmersfoort and PHP.FRL.

    When I started working for Dutch Open Projects, we started organizing events there. First PHPCamp and later SymfonyCamp. I did my first conference talk at Dutch PHP Conference on Symfony, and have since then delivered talks and workshops at several conferences including SymfonyDay Cologne, Dutch PHP Conference and more. We’ve done in-house training courses from Ingewikkeld at several companies in The Netherlands and even outside of The Netherlands.

    Mike, Marjolein and I were brainstorming about Ingewikkeld and what direction to take, and training popped up as something we could take some steps in. So we did. We partnered up with the fantastic In2it and started Ingewikkeld Trainingen: our next step in offering training courses to companies. With topics such as Symfony, Laravel, Docker/Kubernetes and Git, I think we’re offering a solid base for many development teams.

    One of the courses we’re offering is the Symfony and DDD in practice course that I’m also doing at SymfonyCon. It was sold out earlier but they were able to squeeze in a few extra seats (so grab those tickets there if you want to attend at SymfonyCon). If you can’t or don’t want to attend at SymfonyCon, you can also book this course as on-site training for your team. Get in touch with us if you’re interested in that!

    If you had a look at our offering but you can’t find what you’re looking for, then do get in touch with us! We’d love to discuss your needs and see if we can do a custom course for your team!

    I’m really, really excited about this next step in our training offering.

  • Quick tip: Rector error T_CONSTANT_ENCAPSED_STRING

    October 15, 2024
    error, php, rector, T_CONSTANT_ENCAPSED_STRING

    OK, here’s a quick one that got me stuck for some time. I made a change to a file and, you’ll never guess it: I made a little boo boo. I didn’t notice it, but our Rector did. It gave me the following error:

    [ERROR] Could not process "path/to/file.php" file, due to:
    "Syntax error, unexpected T_CONSTANT_ENCAPSED_STRING415". On line: 82

    Given this error, I started checking line 82 of the file. But there was nothing wrong there. It all seemed fine. I didn’t understand why Rector was erroring. I made some other changes to the same file, but the error kept occurring on line 82. I was stumped!

    Until I noticed the error changing. Not the mentioned line number, but the actual error: the number in T_CONSTANT_ENCAPSED_STRING415 changed. And that’s when it clicked: the number there is the actual offending line number. I suspect line 82 is the place in the Rector code where the exception is thrown; 415 is the actual line number in my file. And indeed, when I checked that line, I immediately saw the boo boo I had made.
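    For illustration, here’s a hypothetical example of the kind of boo boo that triggers this token error (the statement and line numbers are made up, not the ones from my actual file):

    ```php
    <?php
    // Hypothetical example: a missing concatenation dot between two string
    // literals. PHP's parser chokes on the second literal and reports
    // "unexpected T_CONSTANT_ENCAPSED_STRING" for this line.
    $greeting = 'Hello ' 'world'; // should have been: 'Hello ' . 'world'
    ```

    If a mistake like this sits on line 415 of your file, the line number ends up glued onto the token name, giving you T_CONSTANT_ENCAPSED_STRING415.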

  • API Platform Con 2024: About that lambo…

    September 21, 2024

    It’s over again. And it’s a bittersweet feeling. I got to meet many friends again, old and new. I’ve learned quite a few things. But it’s over, and that’s sad. Let’s have a look at the highlights of the conference.

    Laravel

    This was obviously the biggest news out there: during Kevin Dunglas’s keynote it was announced that API Platform, with the release of version 4 (available now), not only supports Symfony but also Laravel. And there are plans to support more systems in the future. So it could well be that API Platform becomes available for Yii, CakePHP, perhaps even WordPress or Drupal. All API Platform components are available as stand-alone PHP packages, so in theory it could be integrated into anything. And to jump immediately to the last talk of the conference as well: self-proclaimed “Laravel Guy” Steve McDougall compared what he used to need to do in Laravel to create an API with what he can do now. And apparently, if you use Laravel, you get a Lambo. Not my words 😉

    This was by far the biggest news for the conference. And it surely brought on some interesting conversations. I for one am happy that API Platform is now also on Laravel, as I work with Laravel every once in a while as well. This will make creating APIs a lot easier.

    Xdebug, brewing binaries and consuming APIs

    Some other interesting talks were given by Derick Rethans, Boas Falke and Nicolas Grekas. Derick updated us on recent Xdebug developments by demoing a lot of stuff. Boas showed us in 20 minutes how he can brew binaries using FrankenPHP. Check how he did it! And Nicolas showed some interesting stuff for using Symfony’s HTTP Client.

    Meeting friends

    The hallway track is important at every conference, and so it is at API Platform Con. I was so happy to be able to talk to people like Sergès Goma, Hadrien Cren from Sensio Labs, Łukasz Chruściel, Florian Engelhardt, Allison Guilhem, Derick Rethans. Special mention: Francois Zaninotto. We hadn’t spoken in ages, it was so good to meet him again.

    Big thanks

    Big thanks to the organizers of the API Platform Con. It has been amazing. I feel very welcome here in the beautiful city of Lille. Hopefully see you next year!

    Up next

    Up next is unKonf, which is in Mannheim in two weeks’ time. This promises to be fun!

  • API Platform Con is coming

    September 16, 2024
    apiplatform, apiplatformcon, conferences, ddd, DPC, laravel, lille, php, sylius, symfony, xdebug

    Last year was my first API Platform Con, and it was an amazing experience. I heard good things about FrankenPHP and was able to get my first container migrated to FrankenPHP at the conference. It was that simple. I was blown away by the talk on the decentralized web and inspired by the possibilities it presents. I hope the stuff Pauline Vos presented there will become more mainstream soon. The great talks, combined with the beautiful city of Lille and the very friendly community, made it an amazing conference. I was quite happy to be invited back this year and am looking forward to being there again later this week.

    So let me share the stuff that I’m looking forward to.

    The subject of the opening keynote is still a surprise. The website states The subject of this conference will be announced shortly and I’m curious what it will be. Kevin is a great speaker though, so this will be good regardless.

    Sergès Goma on devs becoming villains

    I’ve seen this talk at Dutch PHP Conference and it is fantastic. It makes you think. I like it when talks make you think. It makes you laugh. I love it when speakers are able to put humour into a talk. Yes, this is one not to miss. Even if I’ve seen it before already.

    The performance dilemma

    Then it’s hard to pick. Both Allison Guilhem speaking on real-time message handling and notifications and Mathias Arlaud speaking on optimizing serialization in API Platform are really interesting subjects. I think I’ll wait until the last minute to decide which one to actually check out.

    After lunch we’ll have another mystery keynote. I guess we should be there as well!

    Admin generator!

    Admin generators have been around since the early days of symfony (yes, with a lowercase s back then) 1.0. And Francois Zaninotto has been around just as long. I’m really looking forward to Francois speaking about the API Platform admin generator!

    Something something DDD

    Yeah, next up I’m on stage, so I can’t check out the other talk in that slot. It’s in French anyway, so I probably would’ve skipped it: my French is not much better than croissant, chauffeur, Disneyland Paris, baguette.

    Xdebug

    I am, still, a very happy VDD (var_dump/die)-driven developer. I know I should do more with Xdebug. And who better to hear it from than Derick himself?

    After the panel discussion it’s time for the community party. And given we’ll have a party, I’m not sure if I’ll be in time for the first session on Friday. I’ll try though because while I’ve played with FrankenPHP a bit, I’m not extremely familiar with Caddy, and in the first session Matt Holt will be talking about Caddy. So that could be very interesting!

    One Billion?!

    Florian Engelhardt will be doing some fun stuff with optimizing PHP in his experiment on processing a 1 billion row test file with PHP. That’s a lot!

    It’s alive!

    I’ve already mentioned playing with FrankenPHP, and Boas Falke will be talking about building stand-alone binaries using FrankenPHP. That is awesome!

    Consume!

    I work with a lot of APIs in all my projects, so I’m curious if Nicolas Grekas will be able to teach me what I’m doing wrong. He’s presenting his talk Consuming HTTP APIs in PHP The Right Way, so I’m expecting a lot of shaking my head about how I’ve been doing this wrong the whole time.

    Sylius!

    My favorite e-commerce framework is also present at API Platform Con, in the form of Łukasz Chruściel speaking on how they handled migrating to API Platform 3. I’m working on a big migration project myself (although without API Platform), so it’s good to learn how they approached this.

    Laravel?

    API Platform is a Symfony-based system, right? RIGHT? Well, apparently not. Perhaps the talk I’m looking forward to most is Steve McDougall’s talk on using API Platform within a Laravel application. I work a lot with Symfony, but I also work quite a bit with Laravel. Having a powerful tool such as API Platform available on Laravel projects is an excellent way to make Laravel projects even more fun. So yes, I’m really really looking forward to this one.

    Lille

    Having said all that, I’m also looking forward to visiting the beautiful city of Lille again. Last year on the Saturday after the conference I took a long walk through the city before driving back home. Lille is a fantastic city. Will I see you there?

  • A quick-follow up about the generated fields

    September 12, 2024
    doctrine, generated fields, json, mysql, symfony

    Since I posted about the generated fields I’ve learned some interesting new information that I wanted to share as well. There is an interesting detail that may lead to issues.

    In my previous example I used the following to map to fields inside the JSON:

    price_info->"$.salesprice"

    The -> here is the interesting part: there are two variations, the aforementioned -> and a second one, ->>.

    -> simply extracts the value from the JSON. This is the best choice to keep the correct type for basically anything but strings. If you have, for instance, an integer or a boolean in your JSON and you map it to a correctly typed column, you’ll get an error if you use ->>, because that always casts the value to a string. Why? I’ll get back to that. For now: if your value is not a string, best use ->.

    ->> does more than simply extract the value: it also unquotes it. This is useful for strings, because in JSON they are always wrapped in ". If you use -> on a string value, you get that value including the quotes, such as "value". If you use ->>, you get value, without quotes. So ->> is the best one to use for string values. However, as mentioned before, this does mean it always produces a string. If your value is not a string, it will be turned into one, and if your generated column is not of a string type, this will give you errors about typing.
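    To make the difference concrete, here is a small sketch. Assume a row where price_info contains {"salesprice": 1200, "currency": "EUR"} — the currency key is a hypothetical example, not from my actual table:

    ```sql
    SELECT
        price_info->"$.salesprice"  AS sales_json,    -- 1200, keeps the JSON type
        price_info->>"$.salesprice" AS sales_string,  -- '1200', always a string
        price_info->"$.currency"    AS cur_quoted,    -- '"EUR"', quotes included
        price_info->>"$.currency"   AS cur_unquoted   -- 'EUR', quotes stripped
    FROM articles;
    ```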

  • Using generated fields in MySQL

    August 23, 2024
    doctrine, generated fields, json, mysql, php, symfony

    Recently I have been working on a migration of data from an old application to a brand spanking new Symfony 7 application. This migration included getting rid of the old Couchbase document database and moving to MySQL. MySQL has had a special JSON field type for quite some time now, and it is actually pretty cool. And using Doctrine, you don’t even notice it’s JSON, because it is automatically transformed to and from a PHP array structure.

    I was looking for the right way to set indexes on specific fields inside the JSON structure for easy querying, and as I was doing so I learned about MySQL generated fields. This is pretty cool stuff, as it will automatically create virtual fields in your table with the data as it exists inside the JSON. And those fields can be part of an index!

    Step 1 is obviously creating the column. So let’s add a column to a table called articles:

    ALTER TABLE articles 
    ADD sales_price INT AS (price_info->"$.salesprice"),
    ADD cost_price INT AS (price_info->"$.costprice");

    In this example, price_info is the JSON field. Inside the JSON structure it has a field with key salesprice and a field with key costprice, and here I add those as separate columns sales_price and cost_price to the articles table.

    I can now add the above query to a Doctrine migration.
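    And since being able to index these fields was the whole point for me, a minimal sketch of what that could look like (the index name is just an example):

    ```sql
    -- the generated columns can be indexed like any regular column
    CREATE INDEX idx_articles_sales_price ON articles (sales_price);
    ```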

    Now I also need to tell Doctrine that these fields exist on the entity. But they are special fields, because they are read-only (you update the value by updating the JSON). So you need to configure that for your entity. For instance:

    /**
     * @Column(type="json", name="price_info")
     */
    protected $priceinfo;
    
    /**
     * @Column(type="integer", nullable=true, insertable=false, updatable=false)
     */
    protected $sales_price;
    
    /**
     * @Column(type="integer", nullable=true, insertable=false, updatable=false)
     */
    protected $cost_price;

    Notice that I set insertable and updatable to false. Also important: I set nullable to true. Why? Because in a JSON field there is no guarantee that a value is present. If a value is missing, MySQL will set the generated column to NULL, and if your field is not nullable, writing the record to the database will fail.

    Now, if you create a new Article and set JSON that contains the costprice and salesprice fields, then as soon as you persist it to the database, the values of $cost_price and $sales_price are automatically populated based on the values in the JSON.

    Since they are now regular properties of your entity, you can also use them in the where clauses of your queries. They are, for all intents and purposes, simply regular fields in your table. The only thing you cannot do is set the value on your entity, persist it, and expect it to stick. If you need to change the value, you need to update the JSON.
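    On the SQL side, such a query could look like this (the price threshold is made up for the example):

    ```sql
    -- generated columns work in WHERE clauses like any other column
    SELECT *
    FROM articles
    WHERE sales_price > 1000;
    ```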

    I just wanted to share this because I think this is extremely cool and useful stuff. As for a practical example: One use case I know of at least is to store data structures from external systems in exactly the structure you receive it, and let MySQL sort everything out for you. Especially when you may also need that structure to communicate back to the external system, it is good to keep the structure the way you received it.

  • RabbitMQ in your Bitbucket pipeline

    August 15, 2024
    bitbucket, pipelines, rabbitmq, service

    Last week one of my tasks was getting our Behat tests to run successfully in the Bitbucket pipeline. Now, we have an interesting setup with several vhosts, exchanges and queues. Our full definition of our RabbitMQ setup is in a definitions-file. So that should be easy, right? Well, think again.

    Unfortunately, Bitbucket does not allow volumes for services in your pipeline, so there is no way to actually get our definitions file into the RabbitMQ service. After searching for ways to solve this, with the express wish of not having to build my own RabbitMQ image, I ended up coming to the conclusion that the solution would be to… well, build my own image.

    Creating the image was very simple. As in, literally two lines of Dockerfile:

    FROM rabbitmq:3.12-management
    
    ADD rabbitmq_definitions.json /etc/rabbitmq_definitions.json

    I build the image and push it to our registry. So far so good. Now, I needed to alter our service definition in bitbucket-pipelines.yml. This was also not that hard:

    services:
        rabbitmq:
            image:
                name: <registry-url>/rabbitmq-pipeline:latest
                username: $AZURE_USERNAME
                password: $AZURE_PASSWORD
            environment:
                RABBITMQ_DEFAULT_USER: <rmq-user>
                RABBITMQ_DEFAULT_PASS: <rmq-pass>
                RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/etc/rabbitmq_definitions.json"

    The trick in this definition is the environment variable RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS. This basically tells RabbitMQ to load the definitions file we baked into the image in the first step. That sets up our whole RabbitMQ configuration, so the code executed during the Behat tests can connect as usual. RabbitMQ will be available in your pipeline on 127.0.0.1.
