The DroidWiki project operates several wikis based on the MediaWiki software. These wikis are hosted on my own infrastructure, which consists of two nodes used as app servers running PHP 7 with FPM, plus a load-balancing frontend webserver (nginx) that terminates the user-facing requests. This setup requires that the actual source code of the PHP application MediaWiki is present on both servers, ideally with the same version of all files at the same time.
In order to allow updates, a more or less stable update process with supporting tools has been in use for a long time, updating MediaWiki in a more or less automatic way. This process still required a human (me) to orchestrate all of the scripts and react to individual errors and problems. The idea behind the deployment is mostly borrowed from the Wikimedia Foundation, which uses a similar approach to update the software of the Wikipedias. This allowed me to keep the DroidWiki wikis updated with the wmf-versions of the MediaWiki software, which are rolled out on a weekly basis to the Wikipedias as well as to the DroidWiki servers.
In order to reduce the manual effort of deploying and maintaining the DroidWiki wikis, so that the focus can go back to actually maintaining and writing content, I decided to invest some time in making the update process even more automated. As I already used the build and deployment automation tool Concourse CI in other projects, the decision to use this powerful tool here, too, was an easy one. Setting up the CI infrastructure was pretty easy using a docker-compose file.
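For reference, such a docker-compose setup can be sketched roughly like this (a minimal sketch following the official Concourse quickstart; the credentials and URL are placeholders, not my actual configuration):

```yaml
version: "3"
services:
  concourse-db:
    image: postgres
    environment:
      POSTGRES_DB: concourse
      POSTGRES_USER: concourse_user
      POSTGRES_PASSWORD: concourse_pass

  concourse:
    image: concourse/concourse
    command: quickstart
    privileged: true
    depends_on: [concourse-db]
    ports: ["8080:8080"]
    environment:
      CONCOURSE_POSTGRES_HOST: concourse-db
      CONCOURSE_POSTGRES_USER: concourse_user
      CONCOURSE_POSTGRES_PASSWORD: concourse_pass
      CONCOURSE_POSTGRES_DATABASE: concourse
      CONCOURSE_EXTERNAL_URL: http://localhost:8080
      CONCOURSE_ADD_LOCAL_USER: admin:admin
      CONCOURSE_MAIN_TEAM_LOCAL_USER: admin
```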
Automatically discover new MediaWiki versions
Concourse itself is triggered by so-called resources, which are periodically checked for new versions. A resource is basically a simple set of shell scripts; a so-called check script is called at a configurable interval to discover new versions. What happens inside this script is not predefined and is up to the resource, which should know itself how to discover new versions. The most common example is the git resource, which discovers new commits or branches.
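As a rough illustration of the resource interface (a hypothetical minimal resource, not one of the scripts used later in this post): the check script receives the resource's source configuration and the last known version as JSON on stdin, and must print the list of newly discovered versions as JSON on stdout.

```shell
#!/bin/sh
# check: invoked by Concourse at a configurable interval. Concourse
# passes {"source": {...}, "version": {...}} on stdin and expects the
# list of newly discovered versions as a JSON array on stdout.
check() {
  payload=$(cat)   # the source/version JSON handed over by Concourse
  # A real resource would inspect $payload and query some external
  # system here; this hypothetical one always reports one version.
  echo '[{"version":"1"}]'
}

check <<'EOF'
{"source": {}, "version": null}
EOF
```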
In order to update MediaWiki to a recent wmf-version, I could have used the git resource as well: the wmf-versions are pushed to the Wikimedia Gerrit, so tracking them would be kind of easy. However, until now I used to wait until a wmf-version was rolled out to all Wikipedias of the Wikimedia Foundation, in order to ensure that production issues had already been discovered and fixed. To understand the problem a bit better, here is a short explanation of how the update cycle works (all in the same week):
- Mon: Nothing (deployment-wise) happens usually
- Tue: The branch for the next wmf-version is created and deployed to the so-called group0 wikis (e.g. mediawiki.org, test.wikipedia.org, …)
- Wed: Deployment to the so-called group1 wikis (non-Wikipedias), if no issues were discovered so far
- Thu: Deployment to all remaining Wikipedias (group2), if no issues were found so far
- Fri: Nothing (deployment-wise) happens usually
Usually, on Friday, I started the deployment of this week's wmf-version to the DroidWiki wikis. I kind of like this safety net, so the automated deployment needed to fulfill this requirement as well. That unfortunately made the git resource hard to use and no longer well suited, so I decided to create a new resource on my own.
The resulting mediawiki-version-resource is a nodejs script that checks the installed MediaWiki version through a specified MediaWiki API URL (in my case the API of the German Wikipedia) and returns a new version whenever a new one has been deployed there. This is usually the case on Wednesday, which is safe enough for me to take the version onto the DroidWiki wikis as well. So, checkmark for discovering new MediaWiki wmf-versions automatically in Concourse. *yay*
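The check the resource performs can be sketched in shell as well (the real resource is the nodejs script mentioned above; the API endpoint and field names are MediaWiki's real siteinfo API, while `parse_generator` is a hypothetical helper for this sketch):

```shell
#!/bin/sh
# A wiki reports its installed version in query.general.generator of the
# siteinfo API, e.g. "MediaWiki 1.36.0-wmf.16". In the pipeline this
# string would come from something like:
#   curl -s 'https://de.wikipedia.org/w/api.php?action=query&meta=siteinfo&siprop=general&format=json'
# extracted from the JSON response (e.g. with jq -r .query.general.generator).

# parse_generator: strip the "MediaWiki " prefix to get the bare version.
parse_generator() {
  printf '%s\n' "$1" | sed 's/^MediaWiki //'
}

parse_generator "MediaWiki 1.36.0-wmf.16"   # prints 1.36.0-wmf.16
```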
Deploying new versions on my own infrastructure
The deployment of new versions of MediaWiki was done so far by ssh-ing into the deployment host server, which holds a special directory where a new version of MediaWiki is put together:
- Check out the new files of MediaWiki core
- Check out the new files of each wmf-branched extension
- Update the i18n cache files
- Update the git info cache
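The steps above can be sketched roughly as follows (the staging path and helper names are hypothetical; the `wmf/<version>` branch naming and the `rebuildLocalisationCache.php` maintenance script are the real MediaWiki conventions):

```shell
#!/bin/sh
set -e
STAGE=/srv/mediawiki-staging          # hypothetical staging directory

# wmf branches are named wmf/<version> in the Wikimedia Gerrit.
branch_for() { printf 'wmf/%s\n' "$1"; }

# Steps 1 + 2: check out core and each wmf-branched extension.
checkout_repo() {                     # usage: checkout_repo <path> <version>
  git -C "$STAGE/$1" fetch origin "$(branch_for "$2")"
  git -C "$STAGE/$1" checkout -f FETCH_HEAD
}

# Step 3: rebuild the i18n cache via MediaWiki's maintenance script
# (step 4, the git info cache, is omitted in this sketch).
rebuild_i18n() {
  php "$STAGE/core/maintenance/rebuildLocalisationCache.php"
}

branch_for "1.36.0-wmf.16"            # prints wmf/1.36.0-wmf.16
```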
This now needed to be done in Concourse. Unfortunately, this way of deploying a new MediaWiki version is not really stateless, which is a basic idea Concourse is built upon. Instead of creating a whole new way of deploying MediaWiki to my servers, I accepted the tradeoff of leaving the state of my pipeline on the ssh server from which I deployed so far. For that, I needed a Concourse resource that runs tasks over an ssh connection. There is already a resource which does exactly that, the ssh-resource:
However, as it turns out, the resource has a hardcoded timeout of 10 minutes, and deploying the result of putting all the MediaWiki parts together can take some time, sometimes (not often, but often enough) longer than that. Risking an incomplete deployment because of an ssh timeout was not something I was willing to accept. A bit of Go "hacking" later, an updated fork (a pull request has unfortunately not been merged as of today :() was ready to be used.
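Put together, the pipeline wires these pieces up roughly like this (a hedged sketch: the resource names, image repositories, host and script names are placeholders, not my actual configuration):

```yaml
resource_types:
- name: mediawiki-version
  type: docker-image
  source: {repository: example/mediawiki-version-resource}   # placeholder image
- name: ssh
  type: docker-image
  source: {repository: example/ssh-resource}                 # the forked resource

resources:
- name: wmf-version
  type: mediawiki-version
  source:
    url: https://de.wikipedia.org/w/api.php    # API checked for new versions
- name: deploy-host
  type: ssh
  source:
    host: deploy.example.org                   # placeholder
    user: deploy
    private_key: ((deploy_key))

jobs:
- name: update-mediawiki
  plan:
  - get: wmf-version
    trigger: true
  - put: deploy-host
    params:
      script: ./build-and-deploy.sh            # hypothetical deployment script
```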
This is basically it: the pipeline has been in use for some weeks now and is working like a charm. However, I have not yet enabled fully automatic deployments to my production system, not because I do not trust my pipeline and my deployment tools, it's just a feeling. I kind of need to get "warm" with a deployment that happens completely without my interaction 😉
The deployment tools and the pipeline configuration used for upgrading the MediaWiki version of the DroidWiki wikis are hosted publicly on github.com:
If anyone finds something that can be done better, feel free to open a new issue or pull request! 🙂