Tower anti-patterns

Hopefully you find that the “spaghetti code” anti-pattern is something from the past (if not, you should do yourself a favour and find another project 🙂). Does that mean our code is now perfect? And does it mean it is so transparent that everyone instantly understands the motivation and purpose behind each line? I’m not going to answer that (rhetorical) question. But I wondered: what are the new “bad practices”? What do they look like? So I gathered my experiences and tried to find and name them accordingly, in some self-declared tower anti-patterns.

Huge disclaimer: I tried coming up with new names for the misfits I found and poured them into the so-called “tower anti-patterns”. This is just for fun; I’m sure you can find a thing or two that doesn’t match its title, or is even faulty logic on my side. For example, the anti-patterns are not only seen from a technical point of view, but also from an organizational perspective.

I was glad I couldn’t find examples of total failures or laughable situations in my recent experience. But I could find some uncomfortable situations in code bases and their company surroundings that just felt wrong; they just weren’t fit for their purpose. That doesn’t mean the code itself was that bad, but the combination of project methodology, budget, code architecture and purpose just wasn’t aligned. All these things need to be in balance. If they aren’t, it’s not a fit.

There are many possible situations of imbalance within the force. The big difference with the “big ball of mud” / “spaghetti code” is that the following anti-patterns are not necessarily equal to total failure. It IS possible to recover from these (from a technical standpoint), or to prevent even worse things from happening. So here are the tower anti-patterns:

Tower of Pisa

The analogy to a software project is that it starts great, but eventually it becomes hard or risky to extend. The good news is that this is not all too bad; if you keep it small, it can still be great. Don’t extend it too much; instead, build a “tower” next to it. Think microservices.

Jenga tower

I borrowed this analogy from an ex-colleague of mine, who used it during a business meeting with a customer. Just like in the Jenga game, it’s a matter of time until everything collapses due to a bad foundation. Every time someone touches the code, the risk of breaking stuff increases. Recovering from this is really hard, but it can be contained. Just don’t touch the code anymore. Put everything in an immutable container and leave it be. Don’t fix it if it ain’t broken.

Trump Tower

Like a baboon making a lot of noise, this compares to a violent, volatile development environment (keywords: many stakeholders, large investments and tight deadlines). There is much money poured into these kinds of prestige projects. This is also the seed of their decay, because in such an environment barely anything can grow in a sustainable manner. Eventually it will show cracks and go down because of stupid mistakes and distrust. If you’re in such a project you won’t be able to speak your mind, because there is probably too much baboonism. Although there might be parts which are quite OK, they tend not to excel. Once the shit hits the fan, the baboons are suddenly gone. From a technical point of view there might be some parts worth saving. Considering your own well-being, I strongly advise you to just walk away before the baboon does. When that happens, think twice about who is left to blame…

Tower of Babel

This is about a monolithic software project that no one fully understands. It has become so large that it’s hard to get a grasp of every component. Think enterprise applications after many years of development. Usually there are a few architects within the company who still know the entire landscape, but they will slowly become a single point of failure. It is not impossible to recover from this situation, but it will take time. One needs to cut the application into pieces and create hard separations by implementing microservices. Then it becomes easier to replace parts, and it is not necessary to know all the ins and outs of every component.

Secure web proxy with Symfony

For the past few months I got caught up in Puppet, monitoring and other DevOps stuff. Whether or not security is part of DevOps (honestly, I don’t care, as long as security is kept in mind when developing), I was also tasked with securing some backend systems for the company I work for, as part of ISO 27001 certification. One of them was a PHP-based application with a code base so old, so spaghetti bolognese, that it could wake you up in the middle of the night soaked in sweat when thinking about its security design. Nevertheless it was a critical application that couldn’t easily be migrated to a new code base like Symfony or another framework. So how to improve its security?

If it had been built in Symfony, I could easily set up a firewall, add NelmioSecurityBundle, maybe add a 2-factor bundle and get on with it. But yeah, it wasn’t. Maybe I could just go into the spaghetti and implement some meatballs with Composer and make the best of it? Meh… There would always be the risk of open holes; maybe a “require 'check_login.php';” is forgotten, or the login method might be vulnerable to a good old SQL injection. So this was not an option.

The app was used publicly by some of our customers, so IP whitelisting was not an option. It just needed public protection from vulnerability scanners, so I wanted it to be wrapped in a secure package. More like a portal. What if I wrapped the entire application in a secured Symfony environment? But that would incur another session_start from the old application, so it needed to be more separated than that. And at that point the idea of a Symfony web proxy was born. The Symfony layer would handle all the security, after which every request is proxied to the old PHP application. So the requirements were:

  • A Symfony application @ my.symfony.proxy which functions as a login portal
  • After logging in, every request is forwarded to another site. Something like https://my.symfony.proxy —> http://old_application_at_private_location:8080

Here’s how to do it with Symfony 4.x (using Flex) on PHP 7.2.

Setting it up

First we create a new project and install and set up the needed libraries.
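The exact package list depends on your needs; a plausible set under Symfony Flex (the package selection here is an assumption based on what the rest of this post uses) would be:

```shell
# Create the project skeleton (Symfony 4 / Flex)
composer create-project symfony/skeleton my-symfony-proxy
cd my-symfony-proxy

# Security, Doctrine and Twig for the login portal
composer require symfony/security-bundle symfony/orm-pack symfony/twig-bundle annotations

# Guzzle for proxying requests to the old application
composer require guzzlehttp/guzzle
```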

Now that we have installed everything we need, we can configure our application.


Let’s start with my favourite and most complex part of our application (while we’re still sharp). First we configure our security.yaml.

Now comes the interesting part. I wanted to use the same user and password database provided by the old native PHP app. So I need to create an entity that maps the existing table, and add it as a user provider in security.yaml.

Because the old app uses plain md5 hashing for passwords, without any salt or iterations, I needed to configure the encoder to these lower standards so it integrates seamlessly with the old scheme. Obviously this is not recommended security. If this is your exact case and you would like to improve it, I recommend adding an extra salt column and rehashing the password on login, with salt and iterations, using the default encoder. On the login event you only need to check whether there is already a salt, to pick the right encoder.
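As a sketch, the legacy-compatible encoder configuration could look like this (the entity class name is an assumption). A message-digest encoder with base64 encoding disabled and a single iteration produces the plain hex md5 the old app stores:

```yaml
# config/packages/security.yaml (fragment)
security:
    encoders:
        App\Entity\User:
            algorithm: md5
            encode_as_base64: false
            iterations: 1
```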

Now we can configure the firewall, which is fairly standard. Change it according to your needs. Don’t forget to reference the right provider.
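A minimal firewall and user-provider sketch, assuming the entity class and route names used throughout this post:

```yaml
# config/packages/security.yaml (fragment)
security:
    providers:
        legacy_users:
            entity:
                class: App\Entity\User
                property: username
    firewalls:
        dev:
            pattern: ^/(_(profiler|wdt)|css|images|js)/
            security: false
        main:
            anonymous: ~
            provider: legacy_users
            form_login:
                login_path: login
                check_path: login
            logout:
                path: logout
```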

Last but not least, add the access control.
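A sketch of what that access control could look like: everything except the login page requires an authenticated user, so the proxied legacy app is never reachable anonymously.

```yaml
# config/packages/security.yaml (fragment)
security:
    access_control:
        - { path: ^/login, roles: IS_AUTHENTICATED_ANONYMOUSLY }
        - { path: ^/,      roles: ROLE_USER }
```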

User Entity

In my case I wanted to reuse the existing user database (within the old application), which is served from MySQL. So I created an entity which reflects this table, to be able to reuse the data. This is what my entity looks like:
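A sketch of such an entity — the table and column names here are assumptions; map them onto the legacy schema:

```php
<?php
// src/Entity/User.php — hypothetical mapping onto the legacy users table
namespace App\Entity;

use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Security\Core\User\UserInterface;

/**
 * @ORM\Entity()
 * @ORM\Table(name="users")
 */
class User implements UserInterface
{
    /**
     * @ORM\Id()
     * @ORM\Column(type="integer")
     */
    private $id;

    /** @ORM\Column(type="string") */
    private $username;

    /** @ORM\Column(type="string") */
    private $password;

    public function getUsername() { return $this->username; }
    public function getPassword() { return $this->password; }
    public function getSalt()     { return null; } // legacy md5 hashes are unsalted
    public function getRoles()    { return ['ROLE_USER']; }
    public function eraseCredentials() {}
}
```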


We need a few methods; first the login and logout methods, which are pretty straightforward.
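These are the standard Symfony form-login glue methods; a sketch (controller and template names are assumptions):

```php
<?php
// src/Controller/SecurityController.php
namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Routing\Annotation\Route;
use Symfony\Component\Security\Http\Authentication\AuthenticationUtils;

class SecurityController extends AbstractController
{
    /**
     * @Route("/login", name="login")
     */
    public function login(AuthenticationUtils $authenticationUtils)
    {
        return $this->render('security/login.html.twig', [
            'last_username' => $authenticationUtils->getLastUsername(),
            'error'         => $authenticationUtils->getLastAuthenticationError(),
        ]);
    }

    /**
     * @Route("/logout", name="logout")
     */
    public function logout()
    {
        // Never reached: the firewall's logout handler intercepts this route.
    }
}
```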

Then finally, the core of this whole blog: a catchall controller. This method is called for every other request (the catchall magic is handled in the routing part further on). It simply reads the current request and passes everything in an HTTP request (via Guzzle) to the old application that needs to be protected. In my case I placed the application behind nginx on port 8080, shielded by a firewall and only accessible from localhost. Change the URL to your own “old application”.
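A sketch of such a catchall action — the backend URL and the header handling are assumptions, not the post's exact code:

```php
<?php
// src/Controller/ProxyController.php — forwards requests to the shielded legacy app
namespace App\Controller;

use GuzzleHttp\Client;
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

class ProxyController extends AbstractController
{
    public function catchall(Request $request)
    {
        // Only reachable from localhost; shielded by a firewall
        $client = new Client(['base_uri' => 'http://127.0.0.1:8080']);

        $headers = $request->headers->all();
        unset($headers['host']); // let Guzzle set the Host header for the backend

        // Replay method, URI, headers and body against the legacy app
        $backendResponse = $client->request(
            $request->getMethod(),
            $request->getRequestUri(),
            [
                'headers'     => $headers,
                'body'        => $request->getContent(),
                'http_errors' => false, // pass 4xx/5xx responses through untouched
            ]
        );

        return new Response(
            (string) $backendResponse->getBody(),
            $backendResponse->getStatusCode(),
            $backendResponse->getHeaders()
        );
    }
}
```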

So now we have basically created a proxy application. You can now serve any existing site under your own custom domain (don’t use this for bad stuff, mkay?).

Security listener

Underneath all of this, we still need a little more custom logic to take over the login process. Instead of logging in directly with the application’s original login, I wanted to use the Symfony login. This is done with a security listener. Be sure to change the code to an actually working login.
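A rough sketch of such a listener is below. The legacy login endpoint, the field names and the service wiring are all assumptions; the point is only to show where the hand-off to the old application happens.

```php
<?php
// src/EventListener/LoginListener.php — runs after a successful Symfony login.
// Register it in services.yaml with:
//   tags: [{ name: kernel.event_listener, event: security.interactive_login }]
namespace App\EventListener;

use GuzzleHttp\Client;
use Symfony\Component\Security\Http\Event\InteractiveLoginEvent;

class LoginListener
{
    public function onSecurityInteractiveLogin(InteractiveLoginEvent $event)
    {
        $user = $event->getAuthenticationToken()->getUser();

        // Hypothetical: replay a login against the legacy app so it creates
        // its own session; replace with whatever the old login actually needs.
        $client = new Client(['base_uri' => 'http://127.0.0.1:8080']);
        $client->post('/login.php', [
            'form_params' => ['username' => $user->getUsername()],
        ]);
    }
}
```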


There are four routes needed for the controller. Besides the security-related routes, there is one route which is particularly interesting: the catchall route, which should be defined last so it catches all routes not matched by the fixed security-related ones. This is done by providing a regex of just .* to match anything.
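A sketch of the essential routing (route and controller names are assumptions; with form_login, the check path can reuse the login route):

```yaml
# config/routes.yaml — the catchall MUST come last
login:
    path: /login
    controller: App\Controller\SecurityController::login

logout:
    path: /logout
    controller: App\Controller\SecurityController::logout

catchall:
    path: /{path}
    requirements:
        path: ".*"
    controller: App\Controller\ProxyController::catchall
```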


The template is a fairly basic login template based on a simple Twitter Bootstrap theme, placed under security/login.html.twig.

Wrapping it up

To conclude: it’s relatively easy to set up a web proxy with modern security standards to protect your old (irreplaceable) spaghetti app. With Symfony at its base, some configuration and the right libraries, we only need a few files for a satisfying result.

I created a repository which contains all of the above and which you can use out of the box. It can be found at my GitHub account @

CopperheadOS review

Sadly, CopperheadOS died (at least for me it did). The director of CopperheadOS found himself way too important, with his dick move to seize ownership of the entire company, leaving the other 50% owner (the main author behind CopperheadOS) empty-handed.

Goodbye from CopperheadOS

I guess the following still applies to other ROMs. As Daniel Micay advised, just use plain AOSP without GApps. For me this is CarbonROM with F-Droid and Yalp, while waiting for a more security- and privacy-centered ROM.

Unlike many, who apparently have nothing to hide, I’m a bit keen on my privacy and security. This probably has something to do with my daily job as a DevOps engineer, but it should nevertheless be interesting for anyone who wants to be back in control of their data. This CopperheadOS review is about taking that control.

One approach to security is knowing where your weak spots are. For me this was my Android phone. It seems like everyone is downloading all kinds of binary apps and trusting them blindly. This feels like the Windows 95/98 era, when many would just download and run any *.exe without hesitation. Another argument was the obscure Google Play services: not knowing what they do and don’t send back to Google. And even when I do know, I don’t have any reasonable opt-out for sending that data. In my quest for an alternative ROM with better security and privacy I found CopperheadOS, which I’ll be reviewing here.

CopperheadOS is plain AOSP with additional hardening and a focus on open source apps, without any binary sending my data to the cloud. This was somewhat of a leap of faith, as I would be obliged to part with some binary apps I rely on in my daily life (although not entirely, but read along). But as it turned out, there are lots of open source alternatives which offer the same or nearly the same functionality.

App repository

Because CopperheadOS is stripped of the Google Play Framework (hooray!), it comes with F-Droid as app installer. F-Droid offers only open source apps, so this is my primary source for finding new apps before switching to an alternative.

Binary apps

My main focus is to stay functional, not to become some paranoid hermit with a tin foil hat. With a healthy feeling of unwillingness I had to install WhatsApp to stay connected with family and friends; I couldn’t convince them to use Conversations (which requires registering a Jabber account). And some banking apps don’t come open source either. This is why I installed Aptoide, to be able to install trusted binary apps. You should (must, actually) only install trusted apps here. Although I think Aptoide’s malware checking flow could be better concerning signature checking: checking a signature against other marketplaces is kind of vague to me. So be on your guard.


As already mentioned above, I’d like to use Conversations. It offers the same functionality as WhatsApp, but with great encryption on a federated network, which is IMHO the best kind of privacy AND security you can get nowadays without compromise. A Jabber account is free to register. Once you have registered a Jabber account and installed Conversations, switch over to OMEMO-encrypted messaging.


CopperheadOS comes with Chromium as the default browser. Although I like Chromium very much on my desktop (not to be confused with the closed source Chrome), it doesn’t allow extensions on Android. Therefore I switched to Firefox, to be able to install an ad blocker (uBlock Origin) and a cookie wall (self-destructing cookies for inactive tabs).

Search engine

Sorry to say, Google: you gave me great search results, but I don’t want to live in your filter bubble anymore. After a period of time I started to trust the quality of DuckDuckGo’s search results. I’m not saying this company is great for total anonymity, but it is far better than Google. And in combination with self-destructing cookies, searching becomes even more anonymous.


Another great thing about CopperheadOS is its SMS app Silence. It offers a standard SMS app with the possibility of encrypting your messages. Before first use, a key needs to be exchanged with someone who also runs Silence, after which all text messages are encrypted from that point forward.

Cloud backup

Another Google thing that needed to be untied was the backup procedure. For this I installed a Nextcloud server on a VPS and the DAVdroid app on Android, which syncs my contacts, calendar and tasks to Nextcloud. Another nice feature is the automatic backup of newly taken photos.

Other open source apps and alternatives

CSipSimple: excellent VOIP client
DAVdroid: syncs my contacts, calendar and tasks with my Nextcloud server
OpenVPN for Android: which I use on public wifi hotspots
OsmAnd~: excellent navigation app based on OpenStreetMap



Nothing in the world comes for free; Google needs to earn something to be able to provide all the services they offer. Although I don’t want this to be a Google rant, the ties between stock Android and the Google cloud are becoming very diffuse, and that gave me an itchy feeling. This is why I wanted a clean cut between my data and Google, instead of paying with my privacy and not knowing how much I’ll be paying in the end. With CopperheadOS you can make that clean cut and be back in control of your own data without any significant compromise. Installation was pretty straightforward on a Nexus, so it’s easy to just give it a try. I’m sure that (if you have the same considerations I have) you won’t be disappointed.

WordPress CVE scanner using WP-CLI

In previous posts I already wrote about a WordPress CVE scanner (part 1, part 2). It kept haunting me with failures and disappointment. It began with blackbox scanning (slow and performance-killing), which moved to whitebox scanning with Wordstress, which proved to be buggy. So it needed to be addressed one more time (hopefully).


A whitebox WordPress vulnerability scanner getting its CVEs from an online database, and simple to use… I decided to write my own as a WP-CLI extension.


WP-CLI is an awesome tool for managing your WordPress installation from the command line, and it recently started supporting extensions. So I created one and wrote some documentation for it. Installation is done with:
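The exact package name was not preserved here, so the following uses a placeholder repository URL and subcommand — substitute the real ones from the documentation:

```shell
# Install the extension as a WP-CLI package (URL is a placeholder)
wp package install https://github.com/example/wp-cli-vulnerability-scanner.git

# Then run a scan from the root of a WordPress installation
wp help
```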

Documentation can be found at

It doesn’t need to be more complex than this.

Wordstress: a whitebox CVE scanner for WordPress

A better whitebox scanner is covered in a more recent post of mine.

A.k.a. WordPress security monitoring 2. In my previous article about WordPress vulnerability monitoring, the tool wpscan was used. This tool is a blackbox scanner, which gave us too many false positives and generated a great deal of load on our server, which is somewhat a waste of resources. So I went searching for a whitebox scanner, to get better results and make more efficient use of server resources. In this quest I stumbled upon Wordstress.

It came as a surprise to me that this tool isn’t actively used or downloaded, which is a pity. I think Wordstress is greatly undervalued, because it addresses some issues which are fundamental if you take WordPress security seriously. Let me explain why by describing the architecture, which will make the benefits clear.

The Wordstress project consists of two parts:

– A WordPress plugin which exposes the versions of core, plugins and themes. Only viewable with a certain key, through a GET request.
– The Wordstress Ruby gem, which fetches all versions from the installed plugin (so you can scan remotely). The found versions are checked against the online WordPress CVE database.

Console output of the ruby gem:

If you cron the check and report the output (mail, monitoring tool, chat), you’re automatically notified of new vulnerabilities in your site. We added our output to a Grafana dashboard, displaying the number of vulnerabilities for each of our WordPress setups.

Recently I forked the gem, updated it to use the new v2 API and implemented some extra output methods (like Nagios, with appropriate exit codes). I also did some bug fixes, adding HTTPS support and improving error messaging. Adding new features wasn’t that hard, so please submit your missing features as PRs; Paolo will be happy to merge them.

I don’t have exact numbers here, but the majority of hacks worldwide are due to outdated software. This is why Wordstress, which addresses exactly this problem, is so important. Instead, many people prefer to use only a WAF plugin like Sucuri or iThemes. Don’t get me wrong, these tools are very helpful, and I think you should use a WAF in some form, but it doesn’t address the core problem. Good security always starts with keeping software up to date.

Code nostalgia with Quick Basic

My wife and I were discussing the discipline of making backups, and in doing so I wondered how old my oldest file would be. How many years could I go back in time? I’m not talking about backup retention schemes, but about my oldest creation, the oldest modification date. So, exploring the crypts of dusty ARJ archives and looking at the bits and pieces I stored, I found some old code of mine, so dusty and old that it made me smile and filled me with nostalgia. I wanted to share this code nostalgia with you.

Facts and stats

Last modification date: 18 December 1991 (previous century :-))
My age back then: 11 years
Platform: MS-DOS
Language: Quick Basic
Project: Programming the mastermind game
Status: buggy, but some parts work

The code

Surprised by the ramblings of my 11-year-old self, I came up with the idea of reviving the code. With some fiddling with the DOSBox emulator it was surprisingly easy to make it work again! I put a recording at the end of this post. It’s somewhat magical to see 25-year-old code working on a current computer.

Useless for sure, and it contains a bug concerning the black pins, but it doesn’t matter; I was happy that night 🙂



And the screen capture:

code nostalgia

Implementing a custom monitoring stack with Nagios, InfluxDB and Grafana

At Netvlies we had been using Nagios as our monitoring stack for quite some time. It was a hell of a job to maintain (let alone set up). Nevertheless it did its job quite well, until it just wasn’t enough. Its interface is crappy and configuration was a mess; manually editing config files in vim is not an inviting job. Furthermore, we wanted to extend our monitoring stack to monitor not just hardware and middleware metrics, but application and security metrics as well. From that point a search began for new tools and setups. These were our requirements:

  • an app (Android/iOS) for alerting
  • possibility for writing custom checks
  • easy configuration for adding/altering checks
  • monitoring data must be accessible through API or database
  • possibility to assign alerts to someone
  • no SAAS/cloud solution. Our data is our own.

There are many, many tools; to name a few: Nagios, Icinga, Oculus, Sensu, New Relic, Flapjack, Zabbix, Cacti, Graphite, Grafana, Kibana… and many more. But none of them seemed to have all the desired features. I realized that I needed to create a stack of different tools to meet all of the requirements. Many tools also overlap with each other, so I decided to split up responsibilities for the different tasks, following the separation of concerns principle. While googling around, I found this paradigm already seems to be (partly) adopted by some companies (like Elastic and InfluxData). Combining their toolsets and others, and looking at what I needed, a distillation into different layers was quickly made.

The monitoring stack

  • checks — the gathering of metrics
  • storage — plain/raw result storage
  • aggregation — enrichment, calculation or humanizing of values
  • view — displaying the aggregated results
  • alerting — parallel to the view layer; action may be required by someone
  • reporting — needed for long-term decision making

With this stack in mind, it’s just a trip to the candy store to find the best solution for each layer. Below is our resulting stack, with some explanation of why we chose each specific solution.

I quickly fell in love with the ELK stack (Elasticsearch, Logstash, Kibana) and the TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor). But there is also a new kid on the block called Grafana (which is actually a fork of Kibana). If you’ve already tried Grafana yourself, you won’t need me to tell you that it is just awesome. Grafana is an easily configurable dashboard with multiple datasources to show all kinds of near-realtime metrics. Have a look at Grafana and you’ll know what I’m writing about. So the view layer was quickly chosen.

Checks and storage
We didn’t have that much time to set up a completely new stack for all of our ~50 servers, so I wanted to reuse the Nagios checking mechanisms through NRPE and store their metrics in an acceptable format. I found InfluxDB the most suitable candidate for this. It could also have been Elasticsearch, but InfluxDB seemed better suited for purely metrical data, and it has default retention schemes out of the box (and I’m lazy). Furthermore, Grafana has great support for InfluxDB, with user-friendly query editors.

After choosing the storage backend, we needed to transfer Nagios data to InfluxDB. For this, Graphios turned out to be the missing puzzle piece. Graphios is great, but it missed storing the service state (OK, Critical, Warning, Unknown). For this reason I forked the repo; my fork stores the service state in a field called “state_id”. You can check it out if you’re interested.

Staying in Nagios land still left us with the configuration mess. To ease the pain, we installed NConf to manage the Nagios configuration. In NConf every check, host, command, etc. is managed through a web interface, and it has some powerful templating as well. Configuration is generated with a single click and validated in a pre-flight mode, after which it can be deployed to the Nagios runtime. It took some time to migrate all the existing configuration into NConf, but it was worthwhile now that we have the possibility to add a new server with default checks in just a few seconds.

We used the aNag app for alerting, which is an excellent app for the Android platform. Unfortunately there is nothing like it for iOS. Furthermore, no actions can be seen or discussed, so a kind of chat client would be easier. For this we found HipChat very useful: alerts can be dumped there and delegated to the right person, or replied to with “false positive”, “working on it”, etc. We used HipSaint to hook HipChat up to Nagios.

Currently we don’t have use cases where aggregation is useful yet, but once we do, I guess I’ll be looking into Logstash. Reporting is also not used yet, but should be easy once requested, as there are client libraries for InfluxDB in many languages.



Grafana is just awesome to see in action, and easy to sell, as it is more tangible than something abstract like InfluxDB. I’m also very enthusiastic about the TICK and ELK stacks, as both of them apply some kind of separation of concerns. The one tool that does it all doesn’t exist, and if any tool came close it would be way too fat (and expensive as well). The best way to handle monitoring is to accept that it should be seen as a stack; implementing your own will give you the right tool for the job.

WordPress vulnerability monitoring

Due to recurring security issues with WordPress, I wanted some kind of vulnerability monitoring for our WordPress implementations. The current monitoring setup is implemented with good old fossil Nagios. Despite the many better alternatives, Nagios still does a great job of alerting me through the aNag app. In this post I’ll describe a simple Nagios setup for continuously monitoring WordPress vulnerabilities. It’s pretty straightforward if you already know Nagios. Nevertheless, the scripts I wrote to wrap things up should be (more or less) re-usable in any other setup (ELK?).

Scanning WordPress is done with the wpscan tool. The output of this tool is stored, transformed and displayed the Nagios way. As a prerequisite you need to install this tool on your server (hint: use the RVM method) and have PHP/MySQL installed for the different subcommands that will be called.

Nagios configuration

First we’re going to write the command, service and service template in Nagios. The check interval is set to once every 24 hours, and by using a parent service (template), you can easily create more checks for other WordPress instances. I copied the configuration from my generated config, which is created by NConf, but it shouldn’t make any difference to use this config directly.

The actual command(s)

When you look closely at the configured command, you will see a lot of piping. I could have put everything into one command, but that wouldn’t be re-usable in a possible future ELK setup. It also makes it a bit easier to adapt these scripts to your needs, as each one’s purpose is more obvious.

The actual scanning is done with /opt/wpscan/wpscan.rb --update -u $ARG1$. The output of wpscan is intended to be read by humans, so we need to convert it into something that Nagios understands. Note that this feature should be released in the future, as it has been announced for the 3.0 release of wpscan. Until then we use our own transformation filter, defined in $USER1$/wpsecurity_filter. This filter processes the wpscan output into a JSON string, which is passed to $USER1$/wpsecurity_message, which in turn exits with the right exit code and message for Nagios. See the gists below for this filtering and messaging:
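Putting the pieces together, the configured Nagios command boils down to a pipeline along these lines (a sketch; $ARG1$, $USER1$, $HOSTNAME$ and $SERVICEDESC$ are the usual Nagios macros):

```shell
# Scan, transform to JSON, optionally store, then emit a Nagios status + exit code
/opt/wpscan/wpscan.rb --update -u "$ARG1$" \
  | $USER1$/wpsecurity_filter \
  | $USER1$/wpsecurity_store "$HOSTNAME$" "$SERVICEDESC$" \
  | $USER1$/wpsecurity_message
```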

If you’ve read closely, you might have noticed a missing command, namely $USER1$/wpsecurity_store $HOSTNAME$ $SERVICEDESC$. The reason to name this separately is that it is optional. From the outside it just moves STDIN to STDOUT, seemingly doing nothing. Internally, it stores the output in a MySQL database, which can be accessed with a custom PHP script from the Nagios dashboard (using the “notes_url” configuration directive). See the gist below for this storage:

By using different subcommands we can easily replace the filter later on, when wpscan 3.0 is released. Also, wpsecurity_store is optional, as it outputs exactly what it receives (so you can leave it out if you don’t want storage; if you do, also remove the notes_url from the Nagios config).

Vulnerabilities overview

In the Nagios dashboard there is only room for one sentence about the state of a service, and just naming the number of found vulnerabilities isn’t enough. That’s why we stored the output in MySQL. With a notes_url config we can link to a separate PHP file which displays the output generated by wpscan. Please adjust to your needs 🙂 as it is just a bare minimum. The example below uses two PHP includes which can be downloaded from the SensioLabs repo.

Nagios dashboard

If all went well (see /var/log/nagios/nagios.log for errors), you should see something like the screenshot below. Watch for the document icon to the right of the service description; this link leads to the second screenshot.



wordpress vulnerability monitoring

RabbitMQ and PHP consumers

Are you using RabbitMQ and PHP, and trying to consume messages in PHP? You might have encountered some difficulties when trying to daemonize a PHP script (maybe you even used supervisord?… yuck). If not, think about it: have you ever seen a successful implementation? PHP isn’t built for this; it’s against its nature. I’m getting distracted here, but there is an excellent article about this subject: PHP is meant to die. Maybe it’s wishful thinking, but I think PHP really needs to get more mature about this “nature” and its memory management. Anyway, what about a successful implementation with RabbitMQ and PHP consumers?

In this post I’ll describe some common problems with consuming messages in PHP, which I encountered during development and solved with some third-party packages and a bit of configuration. I’m assuming you are already familiar with RabbitMQ and PHP, so I’ll leave that out. You might already have used the RabbitMQBundle or the underlying php-amqplib by videlalvaro. These packages are not tied to this article, but are worth mentioning if you intend to create messages in PHP and/or set up the fabric (automatic exchange and queue creation).


Daemon or cron run

Periodically checking for available messages is impossible in RabbitMQ without actually fetching from the queue, so you don’t know the message count up front. In the cron scenario this creates a problem when processing time exceeds the cron period (the next cron run overlaps the previous one). So we definitely want a daemon, but not in PHP.



Separation of concerns

Seen from an architectural point of view, there is a message consumption part and a message processing part. In an ideal world these should be separated, each responsible for its own part. This way debugging becomes a lot easier, because it will be clear whether a problem lies in the RabbitMQ consumption or in your application’s processing. Another benefit of separating is that the interface of the processing part becomes re-usable, and not necessarily dependent on RabbitMQ. And from a testing point of view, your processing part is probably more testable when it is separated from the consumption part.

And above all (for me) is the improvement of the deployment procedure. Because a daemon like this is a simple pass-through, it never has to be restarted (unless your message consumption configuration changes). Normally, when updating your application, only domain logic within the processing part changes. So after deployment, messages are automatically processed “the new way” without needing to restart some consumption service.



When I discussed this with ricbra during a PHP Benelux conference, it came as a surprise to me that he had already solved this by addressing these exact same problems and writing a consumer in Go. This separate consumer just consumes a message, which is passed to a pre-configured processing command. The exit code of your command is then used by the consumer to requeue or ack the message.
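Such a processing command can be plain, short-lived PHP: read the message from STDIN, do the work, and communicate the result through the exit code. A sketch (the JSON payload shape is an assumption):

```php
<?php
// processing command invoked by the consumer for every message:
// body arrives on STDIN, the exit code tells the consumer to ack or requeue.
$message = stream_get_contents(STDIN);
$payload = json_decode($message, true);

if ($payload === null) {
    fwrite(STDERR, "Malformed message\n");
    exit(1); // non-zero: the consumer requeues (or rejects) the message
}

// ... do the actual domain work here ...

exit(0); // zero: the consumer acks the message
```

Because the PHP process dies after every message, memory leaks and long-running-process quirks never accumulate, which is exactly the “PHP is meant to die” philosophy mentioned above.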


RabbitMQ and PHP


Supervisord is used to ensure that the daemon is running and is started at boot. An example config for supervisord would be something like this:
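A minimal supervisord sketch — the program name, binary path and flags are placeholders, not the actual consumer’s CLI:

```ini
; /etc/supervisor/conf.d/rabbitmq-consumer.conf
[program:rabbitmq-consumer]
command=/usr/local/bin/consumer --config=/etc/consumer/queue.conf
autostart=true
autorestart=true
stdout_logfile=/var/log/consumer.out.log
stderr_logfile=/var/log/consumer.err.log
```

After placing the file, reload supervisord (`supervisorctl reread && supervisorctl update`) and the consumer is kept alive across crashes and reboots.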