CopperheadOS review

Unlike the many who apparently have nothing to hide, I'm quite keen on my privacy and security. This probably has something to do with my day job as a DevOps engineer, but it should nevertheless be interesting for anyone who wants to be back in control of their data. This CopperheadOS review is about taking that control.

One approach to security is knowing where your weak spots are. For me that was my Android phone. It seems like everyone downloads all kinds of binary apps and trusts them blindly. It feels like the Windows 95/98 era, when many would just download and run any *.exe without hesitation. Another argument was the obscure Google Play services: not knowing what it sends back to Google and what it doesn't. And even when I do know, there is no reasonable way to opt out of sending that data. In my quest for an alternative ROM with better security and privacy I found CopperheadOS, which I'll be reviewing here.

CopperheadOS is plain AOSP with additional hardening and a focus on open source apps, without any binary sending my data to the cloud. This was somewhat a leap of faith, as I would have to part with some binary apps I rely on in my daily life (although that's not entirely true, read along). But as it turned out, there are lots of open source alternatives which offer the same or nearly the same functionality.

App repository

Because CopperheadOS is stripped of the Google Play Framework (hooray!) it comes with F-Droid as app installer. F-Droid offers only open source apps, so it is my primary source for finding new apps before switching to an alternative.

Binary apps

My main focus is staying functional, not becoming some paranoid hermit with a tin foil hat. With a healthy feeling of unwillingness I had to install WhatsApp to stay connected with family and friends; I couldn't convince them to use Conversations (which requires registering a jabber account). And some banking apps don't come open source either. This is why I installed Aptoide, to be able to install trusted binary apps. You should (must, actually) only install trusted apps. Although I think Aptoide's malware checking flow could be better when it comes to signature checking: checking signatures against other marketplaces is kind of vague to me. So be on your guard here.


Messaging

As mentioned above, I'd like to use Conversations. It offers the same functionality as WhatsApp, but with strong encryption on a federated network, which is IMHO the best combination of privacy AND security you can get nowadays without compromise. Registering a jabber account is free. Once you have registered one and installed Conversations, switch over to OMEMO encrypted messaging.


Browser

CopperheadOS comes with Chromium as the default browser. Although I like Chromium very much on my desktop (not to be confused with the closed source Chrome), it doesn't allow extensions on Android. Therefore I switched to Firefox, to be able to install an adblocker (uBlock Origin) and a cookie wall (Self-Destructing Cookies for inactive tabs).

Search engine

Sorry to say, Google: you gave me great search results, but I don't want to live in your filter bubble anymore. After a while I started to trust the quality of the search results from DuckDuckGo. I'm not saying this company is great for total anonymity, but it is far better than Google. And in combination with self destructing cookies, searching becomes even more anonymous.


SMS

Another great thing about CopperheadOS is its SMS app Silence: a standard SMS app with the possibility of encrypting your messages. Before first use a key needs to be exchanged with someone who also runs Silence, after which all text messages between you are encrypted.

Cloud backup

Another Google thing that needed to be untied was the backup procedure. For this I installed a Nextcloud server on a VPS and the DAVdroid app on Android, which syncs my contacts, calendar and tasks to Nextcloud. Another nice feature is the automatic backup of newly taken photos.

Other open source apps and alternatives

CSipSimple: excellent VOIP client
DavDroid: syncs my contacts, calendar and tasks with my Nextcloud server
OpenVPN for Android: which I use on public wifi hotspots
OsmAnd~: excellent navigation app based on OpenStreetMap




Nothing in this world comes for free; Google needs to earn something to be able to provide all the services they offer. Although I don't want this to be a Google rant, the ties between stock Android and the Google cloud have become very diffuse, and that gave me an itchy feeling. This is why I wanted a clean cut between my data and Google, instead of paying with my privacy without knowing how much I would be paying in the end. With CopperheadOS you can make that clean cut and be back in control of your own data, without any significant compromise. Installation was pretty straightforward on a Nexus, so it's easy to just give it a try. I'm sure that (if you have the same considerations I have) you won't be disappointed.


WordPress CVE scanner using WP-CLI

In my previous posts I already wrote about a WordPress CVE scanner (part 1, part 2). It kept haunting me with failures and disappointments. It began with blackbox scanning (slow and performance killing), which moved to whitebox scanning with Wordstress, which in turn proved to be buggy. So it needed to be addressed one more time (hopefully).


What I wanted was a whitebox WordPress vulnerability scanner that gets its CVEs from the online WordPress CVE database and is simple to use… I decided to write my own in WP-CLI.


WP-CLI is an awesome tool for managing your WordPress installation from the command line, and it recently started supporting extensions. So I created one and wrote some documentation for it. Installation is done with:

wp package install markri/wp-sec

And documentation can be found at

It doesn’t need to be more complex than this.
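Running a scan afterwards is a single subcommand as well. I'm quoting the subcommand below from memory of the wp-sec documentation, so verify it against the docs linked above:

```shell
# Run from the WordPress root: checks core, plugins and themes
# against the CVE database and reports known vulnerabilities.
wp wp-sec check
```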

Wordstress, a whitebox CVE scanner for WordPress

Update: a better whitebox scanner can be found in a more recent post of mine.

A.k.a. WordPress security monitoring, part 2. In my previous article about WordPress vulnerability monitoring the tool wpscan was used. That tool is a blackbox scanner, which gave us too many false positives and generated a great deal of load on our server, which is somewhat a waste of resources. So I went searching for a whitebox scanner, to get better results and make more efficient use of server resources. In this quest I stumbled upon Wordstress.

It came as a surprise to me that this tool isn't actively used or downloaded, which is a pity. I think Wordstress is greatly undervalued, because it addresses some issues which are fundamental if you take WordPress security seriously. Let me explain why by describing the architecture, which will make the benefits clear.

The Wordstress project consists of two parts:

– A WordPress plugin which exposes the installed versions of core, plugins and themes, only viewable through a GET request with a secret key.
– The Wordstress Ruby gem, which fetches all versions from the installed plugin (so you can scan remotely). The found versions are checked against the online WordPress CVE database.
Console output of the Ruby gem:

mdekrijger@knutsel:~/Projects/wordstress/lib$ wordstress -k 31fa45ebbc3f13ac38b0d832a29d179010dfa327
| Scan summary |
| Wordstress version | 0.72.0 |
| Scan started | 2016-05-27 10:27:06 +0200 |
| Scan duration | 1.706 sec |
| Target | |
| WordPress version | 4.5.2 |
| Scan status | Scan completed successfully |
| Vulnerabilities found |
| WordPress version | 0 |
| Plugins installed | 0 |
| Themes installed | 0 |

If you cron the check and report the output (mail, monitoring tool, chat), you're automatically notified of new vulnerabilities in your site. We added our output to a Grafana dashboard, displaying the number of vulnerabilities for every one of our WordPress setups.
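To give an idea of such a setup, a cron-plus-mail variant could be as simple as the entry below (schedule, key and mail address are placeholders; we feed the output into Grafana instead):

```
# Hypothetical crontab entry: scan daily at 06:30 and mail the report
30 6 * * * wordstress -k YOUR_PLUGIN_KEY | mail -s "wordstress report" ops@example.com
```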

Recently I forked the gem and updated it to use the new v2 API, and implemented some extra output methods (like Nagios output with appropriate exit codes). I also did some bugfixes, adding https support and improving the error messaging. Adding new features wasn't that hard, so please submit your missing features as PRs; Paolo will be happy to merge them.

I don't have exact numbers here, but the majority of hacks worldwide are due to outdated software. This is why Wordstress is so important: it addresses exactly this problem. Instead, many people prefer to use only a WAF plugin like Sucuri or iThemes. Don't get me wrong, these tools are very helpful, and I think you should use a WAF in some form, but it doesn't address the core problem. Good security always starts with keeping software up to date.

Code nostalgia with Quick Basic

With my wife I was discussing the discipline of making backups, and in doing so I started to wonder how old my oldest file would be. How many years could I go back in time? I'm not talking about backup retention schemes, but about my oldest creation, the oldest modification date. So, exploring the crypts of dusty ARJ archives and looking at the bits and pieces I stored, I found some old code of mine, so dusty and old that it made me smile and filled me with a feeling of nostalgia. I wanted to share this code nostalgia with you.

Facts and stats

Last modification date: 18 December 1991 (previous century :-))
My age back then: 11 years
Platform: MS-DOS
Language: Quick Basic
Project: Programming the mastermind game
Status: buggy, but some parts work

The code

Surprised by the ramblings of my 11-year-old self, I got the idea of reviving the code. With some fiddling with the DOSBox emulator it was surprisingly easy to make it work again! I put a recording at the end of this post. It's somewhat magical to see 25-year-old code working on a current computer.

Useless for sure, and it contains a bug concerning the black pins, but it doesn't matter; I was happy that night 🙂


GOTO mastermind

COLOR 0, 2

FOR t = 1 TO 300
i = RND
IF i < .25 THEN
  kleur = 1
  IF i < .5 THEN
    kleur = 1
    IF i < .75 THEN
      kleur = 6
      kleur = 9
    END IF

lr = INT(RND * 600)
bb = INT(RND * 300)
LINE (lr, bb)-(lr + 8, bb + 8), kleur, BF

COLOR 14, 2
LOCATE 12, 27
PRINT "Welkom in Mastermind"
LOCATE 15, 20
PRINT "Geprogrammeerd door Marco de Krijger @"
PLAY "l8 a< b< c> d> e> f> l4 g "
FOR flash = 1 TO 400
  NEXT flash
  LOCATE 22, 30
  COLOR 5, 2
  PRINT "Druk een toets"
  FOR flashing = 1 TO 400
  NEXT flashing
  LOCATE 22, 30
  COLOR 10, 2
  PRINT "Druk een toets"

COLOR 10, 2
LOCATE 2, 30
LOCATE 25, 5
PRINT "Druk 'H' voor Help"
COLOR 12, 2

'bepalen toevalsgetallen tussen 0 en 9
FOR i = 1 TO 4
  x$(i) = MID$(STR$(INT(10 * RND)), 2, 1)

beurt = 1

nummer = 1
beurt$ = MID$(STR$(beurt), 2)
IF beurt <= 9 THEN LOCATE , 5
PRINT " ";
PRINT "("; beurt$; ")    ";

toets$ = INKEY$
IF toets$ = "" THEN GOTO l2

IF toets$ < "0" OR toets$ > "9" THEN GOTO l2
poging$(nummer) = toets$
PRINT toets$; "    ";
IF nummer < 4 THEN
  nummer = nummer + 1
  GOTO l2
PRINT "        ";

zwart = 0
wit = 0
'beoordelen 1
FOR i = 1 TO 4
  bron(i) = 0
  doel(i) = 0
FOR i = 1 TO 4
  IF poging$(i) <> x$(i) THEN GOTO nxt1
  zwart = zwart + 1
  doel(i) = 1
  bron(i) = 1

'beoordelen 2
FOR i = 1 TO 4
  FOR j = 1 TO 4
    IF zwart + wit = 4 THEN GOTO nxt2
    IF i = j THEN GOTO nxt2
    IF doel(j) THEN GOTO nxt2
    IF bron(i) THEN GOTO nxt2
    IF pogint$(i) <> x$(j) THEN GOTO nxt2
    wit = wit + 1
    doel(j) = 1
    bron(i) = 1
  NEXT j

IF zwart = 4 THEN GOTO klaar

'1: niets goed
IF zwart = 0 AND wit = 0 THEN
  PRINT "                          [niets]"
  GOTO daarna

'2: alleen wit
IF zwart = 0 THEN
  FOR i = 1 TO wit
    PRINT CHR$(63); "  ";
  NEXT i
  GOTO daarna

'3: alleen zwart
IF wit = 0 THEN
  FOR i = 1 TO zwart
    PRINT CHR$(33); "  ";
  NEXT i
  GOTO daarna

'4: zwart en wit
FOR i = 1 TO zwart
  PRINT CHR$(33); "  ";
FOR i = 1 TO wit
  PRINT CHR$(63); "  ";

IF beurt < 10 THEN
  LOCATE , 5
  beurt = beurt + 1
  GOTO l1
  LOCATE , 6
  COLOR 14, 2
  PRINT "        ";
  FOR i = 1 TO 4
    PRINT x$(i); "    ";
  NEXT i
  PRINT "Is de juiste combinatie."
  GOTO einde4

FOR i = 1 TO 4
  PRINT CHR$(33); "  ";
COLOR 14, 2
PRINT "         geraden!"

COLOR 14, 2
LINE (220, 90)-(410, 160), 5, BF
LINE (219, 90)-(410, 161), 8, B
LINE (233, 100)-(397, 150), 9, BF
LINE (246, 111)-(385, 140), 12, B
LOCATE 9, 32
PRINT "Nog een keer j/n?"
LOCATE 10, 32
PRINT "                 "
LOCATE 10, 40
INPUT "", keuze$
IF keuze$ = "n" THEN
GOTO banana
IF keuze$ = "j" THEN
GOTO mastmind
LOCATE 10, 32
PRINT "Ongeldig antwoord"
FOR wacht = 1 TO 1500
NEXT wacht
GOTO einde



And the screen capture:

code nostalgia

Implementing a custom monitoring stack with Nagios, InfluxDB and Grafana

At Netvlies we had been using Nagios as our monitoring stack for quite some time. It was a hell of a job to maintain (let alone set up). Nevertheless it did its job quite well, until it just wasn't enough. Its interface is crappy and its configuration was a mess; manually editing config files in vim is not an inviting job. Furthermore, we wanted to extend our monitoring stack to cover not just hardware and middleware metrics, but application and security metrics as well. From that point of view a search began for new tools and setups. These were our requirements:

  • an app (Android/iOS) for alerting
  • possibility for writing custom checks
  • easy configuration for adding/altering checks
  • monitoring data must be accessible through API or database
  • possibility to assign alerts to someone
  • no SAAS/cloud solution. Our data is our own.

There are many, many tools. To name a few: Nagios, Icinga, Oculus, Sensu, New Relic, Flapjack, Zabbix, Cacti, Graphite, Grafana, Kibana… and many more. But none of them seemed to have all the desired features. I realized that I needed to create a stack of different tools to meet all of the requirements. Many tools also overlap with each other, so I decided to split up the responsibilities into different tasks, following the separation of concerns principle. While googling around I found that this paradigm already seems to be (partly) adopted by some companies (like Elastic and InfluxData). Combining their toolsets and others with what I needed, a distillation into different layers was quickly made.

The monitoring stack

– checks: the gathering of metrics
– storage: plain/raw result storage
– aggregation: enrichment, calculation or humanizing of values
– view: displaying the aggregated results
– alerting: parallel to the view layer; action may be required by someone
– reporting: needed for long-term decision making

With this stack defined, it's just a trip to the candy store to find the best solution for each layer. Below is our resulting stack and some explanation of why we chose each specific solution.

I quickly fell in love with the ELK stack (Elasticsearch, Logstash, Kibana) and the TICK stack (Telegraf, InfluxDB, Chronograf, Kapacitor). But there is also a new kid on the block called Grafana (which is actually a fork of Kibana). If you've already tried Grafana yourself, you won't need me to tell you that it is just awesome: an easily configurable dashboard with multiple datasources, showing all kinds of near-realtime metrics. Have a look at Grafana and you'll know what I'm writing about. So the view layer was quickly chosen.

Checks and storage
We didn't have that much time to set up a completely new stack for all of our +/- 50 servers, so I wanted to reuse the Nagios checking mechanisms through NRPE and store their metrics in an acceptable format. I found InfluxDB the most suitable candidate for this. It could also have been Elasticsearch, but InfluxDB seemed better suited for purely metrical data, and it has default retention schemes out of the box (and I'm lazy). Furthermore, Grafana has great support for InfluxDB, with user friendly query editors.

After choosing the storage backend we needed to transfer Nagios data to InfluxDB. For this, Graphios turned out to be the missing puzzle piece. Graphios is great, but it didn't store the service state (OK, Warning, Critical, Unknown). For this reason I forked the repo; my fork stores the service state in a field called “state_id”. You can check it out if you're interested.
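To make that concrete: each Nagios perfdata sample ends up as a point in InfluxDB's line protocol. The sketch below is my own illustration of that shape; the field and tag names are illustrative, not necessarily the exact ones Graphios emits.

```shell
# Format one perfdata sample as an InfluxDB line-protocol point:
# measurement = check name, tag = host, fields = value and state_id
# (0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN).
to_line_protocol() {
  host=$1; check=$2; value=$3; state_id=$4
  printf '%s,host=%s value=%s,state_id=%s\n' "$check" "$host" "$value" "$state_id"
}

# Example: a critical load check on web01
to_line_protocol web01 check_load 0.42 2   # prints: check_load,host=web01 value=0.42,state_id=2
```

Having state_id as a regular field is what makes it trivial to graph OK/Critical states in Grafana later on.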

Staying in Nagios land still left us with the configuration mess. To ease the pain we installed NConf to manage the Nagios configuration. In NConf every check, host, command, etc. is managed through a web interface, and it has some powerful templating as well. Configuration is generated with a single click and validated in a pre-flight mode, after which it can be deployed to the Nagios runtime. It took some time to migrate all existing configuration into NConf, but it was worthwhile now that we have the possibility to add a new server with default checks in just a few seconds.

We used the aNag app for alerting, which is an excellent app for the Android platform. Unfortunately there is nothing like it for iOS. Furthermore, alerts can't be seen or discussed by the whole team, so some kind of chat client would be easier. For this we found HipChat very useful: alerts are dumped into a room, where they can be delegated to the right person or replied to with “false positive”, “working on it”, etc. We used HipSaint to hook HipChat up to Nagios.

Currently we don't have use cases where aggregation is useful yet, but once we do, I guess I'll be looking into Logstash. Reporting is also not used yet, but should be easy to add once requested, as there are client libraries for InfluxDB in many languages.



Grafana is just awesome to see in action, and it's an easy sell because it is more tangible than something abstract like InfluxDB. I'm also very enthusiastic about the TICK and ELK stacks, as both of them apply some kind of separation of concerns. The one tool that does it all doesn't exist, and any tool that came close would be way too fat (and expensive as well). The best way to handle monitoring is to accept that it should be seen as a stack; implementing your own will give you the right tool for each job.

WordPress vulnerability monitoring

Due to recurring security issues with WordPress I wanted some kind of vulnerability monitoring for our WordPress implementations. Our current monitoring setup is implemented with that good old fossil Nagios. Despite the many better alternatives, Nagios still does a great job of alerting me through the aNag app. In this post I'll describe a simple Nagios setup for continuously monitoring WordPress vulnerabilities. It's pretty straightforward if you already know Nagios. Nevertheless, the scripts I wrote to wrap things together should be (more or less) reusable in any other setup (ELK?).

Scanning WordPress is done with the wpscan tool. Its output is stored, transformed and displayed the Nagios way. As a prerequisite you need to install wpscan on your server (hint: use the RVM method) and have PHP/MySQL installed for the different subcommands that will be called.

Nagios configuration

First we're going to write the command, service and service template in Nagios. The check interval is set to once every 24 hours, and by using a parent service (template) you can easily create more checks for other WordPress instances. I copied this configuration from my generated config, which is created by NConf, but it shouldn't make any difference if you use such a config directly.
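For illustration, a setup along these lines looks roughly like the sketch below. This is not my actual NConf output; the command, template, host and service names are made up for the example.

```
# Illustrative sketch, not the generated NConf config
define command {
    command_name    check_wpsecurity
    command_line    $USER1$/wpsecurity_check.sh $ARG1$
}

define service {
    name                    tpl-wpsecurity      ; parent template
    use                     generic-service
    check_interval          1440                ; once every 24 hours
    register                0
}

define service {
    use                     tpl-wpsecurity
    host_name               web01
    service_description     WP security www.example.com
    check_command           check_wpsecurity!https://www.example.com
}
```

Adding a check for another WordPress instance is then just one more service definition using the same template.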

The actual command(s)

When you look closely at the configured command you will see a lot of piping. I could have put everything into one command, but that wouldn't be reusable in a possible future ELK setup. Separate scripts are also a bit easier to adapt to your needs, as each one's purpose is more obvious.

The actual scanning is done with /opt/wpscan/wpscan.rb --update -u $ARG1$. The output of wpscan is intended to be read by humans, so we need to convert it into something that Nagios understands. Note that a machine-readable format has been announced for the 3.0 release of wpscan. Until then we use our own filter for the transformation, which is defined in $USER1$/wpsecurity_filter. This filter processes the wpscan output into a JSON string, which is passed to $USER1$/wpsecurity_message; that script exits with the right exit code and message for Nagios. See the gists below for this filtering and messaging:

If you've read closely, you might have noticed a missing command, namely $USER1$/wpsecurity_store $HOSTNAME$ $SERVICEDESC$. The reason for naming it separately is that it is optional. From the outside it just moves STDIN to STDOUT, seemingly doing nothing. Internally, it stores the output in a MySQL database, which can be accessed by a custom PHP script from the Nagios dashboard (using the “notes_url” configuration directive). See the gist below for this storage:
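The contract of the final step is simple: wpsecurity_message only has to map a vulnerability count onto a Nagios plugin result (a one-line message on STDOUT plus the matching exit code). Below is a minimal sketch of that idea; it is my own reconstruction, not the actual gist, and the messages are placeholders.

```shell
# Sketch of the wpsecurity_message contract: translate a vulnerability
# count into a Nagios status line and exit code (0 = OK, 2 = CRITICAL).
wpsec_message() {
  count=$1
  if [ "$count" -eq 0 ]; then
    echo "OK - no known vulnerabilities found"
    return 0
  fi
  echo "CRITICAL - $count known vulnerabilities found"
  return 2
}

wpsec_message 0   # prints: OK - no known vulnerabilities found
```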


By using separate subcommands we can easily replace the filter later on, when wpscan 3.0 is released. The wpsecurity_store step is optional, as it outputs exactly what it receives (so you can leave it out if you don't want storage; remove the notes_url from the Nagios config as well if you do so).

Vulnerabilities overview

In the Nagios dashboard there is only room for one sentence describing the state of a service, and just naming the number of found vulnerabilities isn't enough. That's why we stored the output in MySQL. With a notes_url configured we can redirect to a separate PHP file, which displays the output generated by wpscan. Please adjust it to your needs 🙂 as it is the bare minimum. The example uses two PHP includes which can be downloaded from the SensioLabs repo.

Nagios dashboard

If all went well (see /var/log/nagios/nagios.log for errors), you should be seeing something like the screenshot below. Watch for the document icon to the right of the service description; this link will lead you to the second screenshot.



wordpress vulnerability monitoring

RabbitMQ and PHP consumers

Are you using RabbitMQ and PHP and trying to consume messages in PHP? Then you might have encountered some difficulties when trying to daemonize a PHP script (maybe you even used supervisord?… yuck). If not, think about it: have you ever seen a successful implementation? PHP isn't built for this; it's against its nature. I'm getting distracted here, but there is an excellent article about this subject: PHP is meant to die. Maybe it's wishful thinking, but I think PHP really needs to get more mature about this “nature” and about memory management. Anyway, what about a successful implementation with RabbitMQ and PHP consumers?

In this post I'll describe some common problems I encountered when consuming messages in PHP during development, and how I solved them with some third party packages and a bit of configuration. I'm assuming you are already familiar with RabbitMQ and PHP, so I'll leave that out. You might already have used the RabbitMQBundle or the underlying php-amqplib by videlalvaro. These packages are not tied to this article, but are worth mentioning when you intend to produce messages in PHP and/or set up the fabric (automatic exchange and queue creation).


Daemon or cron run

RabbitMQ offers no way to periodically check for available messages without actually fetching them from the queue, so you don't know the message count up front. In the cron scenario this creates a problem when the processing time exceeds the cron interval (the next run overlaps the previous one). So we definitely want a daemon, but not one written in PHP.
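For completeness: the overlap itself could be suppressed with a lock, e.g. flock(1), but that still leaves you polling instead of consuming, so it doesn't change the conclusion. Path and schedule below are placeholders.

```
# Hypothetical crontab entry: -n makes a run exit immediately if the
# previous run still holds the lock, preventing overlapping cron runs.
*/5 * * * * flock -n /tmp/consume.lock php /var/www/app/consume.php
```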



Separation of concerns

Seen from an architectural point of view, there is a message consumption part and a message processing part. In an ideal world these are separated, each responsible for its own concern. That way debugging becomes a lot easier, because it will be clear whether a problem lies in the RabbitMQ consumption or in your application's processing. Another benefit of separating is that the interface of the processing part becomes reusable and not necessarily dependent on RabbitMQ. And from the testing point of view, your processing part is probably more testable when it is separated from the consumption part.

And above all (for me) there is the improvement of the deployment procedure. Because a daemon like this is a simple pass-through, it never has to be restarted (unless your message consumption configuration changes). Normally, updating your application only changes domain logic within the processing part, so after a deployment messages are automatically processed “the new way” without restarting any consumption service.



When I discussed this with ricbra during a PHP Benelux conference, it came as a surprise to me that he had already solved this, addressing these exact same problems by writing a consumer in Go. This separate consumer just consumes a message and passes it to a pre-configured processing command. The exit code of your command is then used by that same consumer to requeue or ack the message.


RabbitMQ and PHP


Supervisord is used to ensure that the daemon is running and is started at boot. An example supervisord config would look something like this:

[program:portfolio-create-thumb]
command=/usr/bin/rabbitmq-cli-consumer -e "/var/www/html/portfolio-app/current/app/console portfolio:create-thumb --env=prod --no-debug" -c "/var/www/html/portfolio-app/current/app/config/rabbitmq/thumbing.cfg"
autostart=true
autorestart=true
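The consumer itself is configured through the .cfg file referenced by the -c flag. From memory, rabbitmq-cli-consumer uses an INI-style file roughly like the sketch below; check the project's README for the exact keys, as I'm writing these down from memory and the values are placeholders.

```
; Sketch of thumbing.cfg (key names from memory, verify against the
; rabbitmq-cli-consumer README)
[rabbitmq]
host = localhost
username = guest
password = guest
vhost = /
queue = thumbing

[logs]
error = /var/log/rabbitmq-cli-consumer/error.log
info = /var/log/rabbitmq-cli-consumer/info.log
```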