Last week one of my tasks was getting our Behat tests to run successfully in the Bitbucket pipeline. Now, we have an interesting setup with several vhosts, exchanges and queues. The full definition of our RabbitMQ setup lives in a definitions file. So that should be easy, right? Well, think again.
Unfortunately, Bitbucket does not allow for volumes for services in your pipeline, so there is no way to actually get our definitions file into our RabbitMQ service. After searching for ways to solve this, with my express wish to not have to build my own RabbitMQ image, I ended up coming to the conclusion that the solution would be to… well, build my own image.
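For reference, a RabbitMQ definitions file is a JSON export of the broker's state. The one below is only an illustrative sketch with made-up names (an "app" vhost, an "events" topic exchange bound to an "events" queue); a real file would normally be exported from a configured broker, and exports contain a password_hash rather than a plain password:

```json
{
  "vhosts": [{ "name": "app" }],
  "users": [{ "name": "rmq-user", "password": "rmq-pass", "tags": "administrator" }],
  "permissions": [
    { "user": "rmq-user", "vhost": "app", "configure": ".*", "write": ".*", "read": ".*" }
  ],
  "exchanges": [
    { "name": "events", "vhost": "app", "type": "topic", "durable": true, "auto_delete": false, "arguments": {} }
  ],
  "queues": [
    { "name": "events", "vhost": "app", "durable": true, "auto_delete": false, "arguments": {} }
  ],
  "bindings": [
    { "source": "events", "vhost": "app", "destination": "events", "destination_type": "queue", "routing_key": "#", "arguments": {} }
  ]
}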
Creating the image was very simple. As in, literally two lines of Dockerfile:
FROM rabbitmq:3.12-management
ADD rabbitmq_definitions.json /etc/rabbitmq_definitions.json
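Building and pushing is then a matter of the standard docker commands, roughly like this (the registry URL placeholder is the same as in the pipeline config below, and you may need a docker login against your registry first):

```shell
# Build the image from the two-line Dockerfile in the current directory
docker build -t <registry-url>/rabbitmq-pipeline:latest .

# Push it to the private registry so the pipeline can pull it
docker push <registry-url>/rabbitmq-pipeline:latest
```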
I built the image and pushed it to our registry. So far so good. Next, I needed to alter our service definition in bitbucket-pipelines.yml. This was also not that hard:
services:
  rabbitmq:
    image:
      name: <registry-url>/rabbitmq-pipeline:latest
      username: $AZURE_USERNAME
      password: $AZURE_PASSWORD
    environment:
      RABBITMQ_DEFAULT_USER: <rmq-user>
      RABBITMQ_DEFAULT_PASS: <rmq-pass>
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: -rabbitmq_management load_definitions "/etc/rabbitmq_definitions.json"
The trick in this definition is the environment variable RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS. It tells RabbitMQ to load the definitions file we baked into the image in the first step. That sets up all of our vhosts, exchanges and queues, so the code executed during the Behat tests can connect as usual. RabbitMQ will be available in your pipeline on 127.0.0.1.
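One gotcha worth guarding against: the pipeline step can start running before RabbitMQ has finished booting and importing the definitions. A small readiness wait before kicking off the tests avoids flaky failures. A minimal sketch in Python, assuming the default AMQP port 5672 on 127.0.0.1:

```python
import socket
import time


def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful TCP connect means the broker is accepting connections
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Not up yet; back off briefly and retry
            time.sleep(0.5)
    return False
```

Calling `wait_for_port("127.0.0.1", 5672)` at the start of the test step (or its shell equivalent with netcat) makes the pipeline wait for the broker instead of failing on the first connection attempt.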