Using WebStorm with asdf in WSL

I recently started using WebStorm on Windows to work on a Vue.js-based project, and ran into an irritating issue when trying to run things with WSL and asdf:-

C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 --exec /bin/sh -c "export PATH=\"/home/andys/.asdf/shims:$PATH\" && 
cd /mnt/c/Users/andys/PycharmProjects/abpower-ui && /home/andys/.asdf/shims/node /home/andys/.asdf/lib/node_modules/npm/bin/npm-cli.js
run 'start:dev' --scripts-prepend-node-path=auto"

throw err;

Error: Cannot find module '/home/andys/.asdf/lib/node_modules/npm/bin/npm-cli.js'
at Function.Module._resolveFilename (node:internal/modules/cjs/loader:923:15)
at Function.Module._load (node:internal/modules/cjs/loader:768:27)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12)
at node:internal/main/run_main_module:17:47 {
requireStack: []

Process finished with exit code 1

After spending a bunch of time fruitlessly searching for an answer while resisting the urge to throw things out the window, I figured I’d try something else.

Things to note:-

  • I am not an expert when it comes to node, npm, asdf or anything involving JavaScript
  • This may not be a great way to do it, I don’t know if this will make other things explode

When you first select the node interpreter under Run/Debug Configurations, WebStorm will run which node and get the path that way. When using asdf, this comes back as ~/.asdf/shims/node:-

This sets the Package manager option to something that doesn’t exist. This in turn apparently confuses the shit out of WebStorm, because it then tries to find the node_modules directory based on this path, fails, barfs and makes a mess everywhere.

Figuring that the way asdf shims interpreters was the cause, I went digging to see if I could find where npm-cli.js was under my ~/.asdf directory:-

[andys@longview]$ find ~/.asdf | grep npm-cli.js

Hmm, we might be onto something here.

[andys@longview]$ cd /home/andys/.asdf/installs/nodejs/15.9.0
[andys@longview]$ ls
LICENSE bin include lib share
[andys@longview]$ cd bin
[andys@longview]$ ls
node npm npx
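For illustration, the shims-vs-installs split can be recreated in a throwaway directory — the paths and the 15.9.0 version number below just mirror what was on my machine, so treat them as a sketch rather than gospel:

```shell
# Rebuild the asdf directory layout in a temp dir so the find above
# can be reproduced anywhere (paths are illustrative only).
ASDF="$(mktemp -d)"
mkdir -p "$ASDF/shims" \
         "$ASDF/installs/nodejs/15.9.0/bin" \
         "$ASDF/installs/nodejs/15.9.0/lib/node_modules/npm/bin"
touch "$ASDF/shims/node" \
      "$ASDF/installs/nodejs/15.9.0/bin/node" \
      "$ASDF/installs/nodejs/15.9.0/lib/node_modules/npm/bin/npm-cli.js"

# The shim is just a wrapper; the real node and npm-cli.js live under installs/:
find "$ASDF" -name npm-cli.js
```

If you have asdf to hand, asdf where nodejs should print the real install directory directly, without the spelunking.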


Okay, so what happens if we give WebStorm this path instead?

The Package manager path now looks right at least… but does it work?

C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 --exec /bin/sh -c "export PATH=\"/home/andys/.asdf/installs/nodejs/15.9.0/bin:$PATH\" && 
cd /mnt/c/Users/andys/PycharmProjects/abpower-ui && /home/andys/.asdf/installs/nodejs/15.9.0/bin/node 
/home/andys/.asdf/installs/nodejs/15.9.0/lib/node_modules/npm/bin/npm-cli.js run 'start:dev' --scripts-prepend-node-path=auto"

> abpower-ui@1.0.0 start:dev
> webpack serve --mode=development --host=

ℹ 「wds」: Project is running at
ℹ 「wds」: webpack output is served from /
ℹ 「wds」: Content not from webpack is served from /mnt/c/Users/andys/PycharmProjects/abpower-ui/dist


As I said before, I’m far from an expert when it comes to anything involving JavaScript so I have no idea if this is the right way to do things. But it works for me, so if you’re running into the same problem then this might help.

Pushing to Dokku with GitLab CI/CD

For running your own PaaS, Dokku is great. I could write a whole post on what you can do with Dokku, but in brief it works as a sort-of self-hosted Heroku – you create an app, push your code to it with Git and it’ll deploy it. I’m also a big fan of GitLab CE. I use it for my personal projects, mostly for things I don’t want to publish on GitHub for various reasons.

Wouldn’t it be nice if I could make the two of them work together, so that when I push some code to a repo in GitLab, it in turn pushes it to Dokku for deployment?

Well, you can.



To make this work, you will need:-

  • A Dokku app
  • A GitLab install, with GitLab Runner configured
  • A GitLab repo
  • A brand new SSH key
  • A way to tell GitLab to do things when a repo is updated


Creating a new Dokku app and a new GitLab repo is out of the scope of this post, so let’s assume:-

  • your Dokku app is called awesome-sauce, it can be found at, and the Git remote for it is
  • your GitLab repo is also called awesome-sauce, and the Git remote for it is

The first thing we need to do is to create a new SSH key for GitLab to use to push to Dokku:-

workstation$ ssh-keygen -P '' -C 'GitLab/Dokku Integration' -f gitlab
Generating public/private rsa key pair.
Your identification has been saved in gitlab.
Your public key has been saved in

Copy the public key (in our example, to your Dokku host, and tell Dokku to accept it:-

dokku-host$ sudo dokku ssh-keys:add "GitLab/Dokku Integration" /tmp/
SHA256:<...SHA256 hash...>

Next, go to the CI/CD settings in GitLab for your repo (for example,, and expand the Environment variables section. Create a new entry named SSH_PRIVATE_KEY, and for the value paste in the contents of the gitlab file:-

Now that this is done, we can start tying things together.

Integrating, continuously

The .gitlab-ci.yml file is used by GitLab to determine what tasks it should run when the repo is updated. It lets you do some pretty complex things, but for this example we’re going to keep it simple. In the local copy of the repo, create a .gitlab-ci.yml file with the following:-

before_script:
  - mkdir -p ~/.ssh
  - echo "$SSH_PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
  - chmod 600 ~/.ssh/id_rsa
  - ssh-keyscan -H '' >> ~/.ssh/known_hosts

stages:
  - deploy

deploy_to_dokku:
  stage: deploy
  script:
    - git checkout master
    - git pull
    - git push master

Let’s break this down:-

  • The before_script section does some housework before we start. It creates an .ssh directory, adds our new SSH key (that we added through the GitLab web UI), sets some permissions and then pre-populates the known_hosts file with the public key of the Dokku server so that we don’t get prompted for it later.
  • The stages section groups together different deployment sections. In this example we’ve only got the one stage, named deploy.
  • Finally, the deploy_to_dokku section is what pushes our code to Dokku. It ensures we’re on the master branch (git checkout master), makes sure we’re up to date (git pull) and then pushes it to Dokku (git push). We mark it as being part of the deploy stage.
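One detail from the before_script worth a closer look is the tr -d '\r'. A key pasted into a web form can pick up Windows-style CRLF line endings, which ssh will reject. You can see the carriage return being stripped with a fake one-line "key":

```shell
# A fake key line with a Windows-style CRLF ending:
printf 'FAKE-KEY-LINE\r\n' | wc -c               # 15 bytes, including the \r
# After tr -d '\r', only the \n remains:
printf 'FAKE-KEY-LINE\r\n' | tr -d '\r' | wc -c  # 14 bytes
```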

The git checkout and git pull steps seem at first glance to be redundant, but I found when setting this up that GitLab seems to check out the branch in a detached state, which confuses the Dokku server (which is expecting the master branch). Explicitly switching to the master branch and making sure it’s up to date overcomes this.
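The detached checkout is easy to reproduce in a scratch repo, no GitLab required — the repo, branch and commit below are all throwaway:

```shell
# Build a scratch repo, detach HEAD the way a CI checkout does, then re-attach.
REPO="$(mktemp -d)"
cd "$REPO"
git init -q
git checkout -q -b master                      # make sure the branch is called master
git -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m 'initial commit'

git checkout -q "$(git rev-parse HEAD)"        # check out the commit, not the branch
git symbolic-ref -q HEAD || echo 'detached HEAD'   # prints: detached HEAD

git checkout -q master                         # what the CI script does explicitly
git symbolic-ref --short HEAD                  # prints: master
```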

Of course, this assumes that the master branch is the one that you’re working on – you can adjust this to suit if needed.

Push it real good

All that’s left to do is to push our code to our GitLab repo:-

$ git remote add origin
$ git push origin master
Enumerating objects: 25, done.
Counting objects: 100% (25/25), done.
Delta compression using up to 4 threads
Compressing objects: 100% (17/17), done.
Writing objects: 100% (17/17), 1.68 KiB | 1.68 MiB/s, done.
Total 17 (delta 10), reused 0 (delta 0)
de8e665..cb34843 master -> master

Go back to the GitLab web UI, and go to the CI/CD -> Jobs page (, and you should see a job running, pushing your code to Dokku!

What next?

This is a really simple example which pretty much just re-pushes the contents of the repo to Dokku. You’ll probably want to run tests before pushing to Dokku, and it’s definitely worth taking a look at the GitLab CI/CD Pipeline Configuration Reference to see what else you can do with the .gitlab-ci.yml file.

Installing multiple versions of Python with Pyenv

I write a lot of stuff in Python – some of it useful, some of it… not so useful. But as a relative newcomer to Python, it’s almost always in Python 3. Most modern Linux distributions come with Python 3, so what’s the problem?

Although yes, a lot of distributions do come with a version of Python 3, it’s often an older version. It’s not uncommon to see 3.4 installed, which can be a problem if you’re relying on the newer ways of using async in 3.5+. “Okay”, you say. “I can just build the newer version of Python 3 and install it alongside the system-provided one”. At that point, a large hand appears out of nowhere and slaps you across the side of the head. A loud voice, reminiscent of Death from the Discworld series, booms:-


Jack Nicholson in The Shining
Terrifyingly accurate representation of what yum will do if you change the version of Python on it

You look at your cup of coffee and wonder if someone slipped something into it when you weren’t looking, but there’s a serious point here – messing around with the system-installed versions of Python is a recipe for a world of hurt. There’s a fair few system utilities that are written in Python, and the distribution maintainers are doing their job by ensuring that the correct version of Python gets installed along with them. While it’s definitely possible to install a newer version of Python system-wide, it’s often not worth risking the potential for subtle fuckery if your newly-installed version of Python ends up being used for something that didn’t expect it.

Side note: at this point I should clarify that the imaginary situations I’m referring to here are when you need a particular version of Python for development. Making a new version of Python available for services deployed in production is definitely a problem you might run into, but is a topic for another post.

So what to do? Luckily, there’s a relatively straightforward way to make pretty much any version of Python available to you – and only you. Enter pyenv.

pyenv lets you easily switch between multiple versions of Python. It’s simple, unobtrusive, and follows the UNIX tradition of single-purpose tools that do one thing well.

This project was forked from rbenv and ruby-build, and modified for Python.

I won’t go over the installation instructions because I’d just be repeating the clear and concise instructions the pyenv project already has, but here’s what it’ll let you do:-

  • have multiple versions of Python installed at once – both Python 2 and Python 3
  • have multiple implementations of Python installed at once – for instance, Jython or Pypy alongside the standard CPython
  • set the version of Python in use on a per directory basis
  • set the global – for you – version of Python in use if you don’t otherwise specify a version

It does this by using shims – that is, inserting itself in your path and making sure that when you – or your application – calls the Python interpreter it will get the version that you intended it to get. It’s a lot safer than installing a version of Python system-wide, it doesn’t need root[0] and there’s no endless manual fucking around with PATH or LD_LIBRARY_PATH. It’s so useful that I even have the following in my dotfiles repo[1] I drag around between systems I work on:-


function install-pyenv() {
  if [[ ! -d ~/.pyenv ]]; then
    echo "Downloading pyenv..."
    git clone ~/.pyenv
    . ~/.bashrc
  else
    echo "~/.pyenv already exists!"
  fi

  if [[ -d ~/.pyenv ]]; then
    echo "~/.pyenv found, initialising pyenv..."
    export PYENV_ROOT="$HOME/.pyenv"
    export PATH="$PYENV_ROOT/bin:$PATH"

    eval "$(pyenv init -)"
  fi
}

This lets me just run install-pyenv on a system I haven’t already installed it on, and it will pull pyenv from GitHub and then do the things it needs to make it available in my session. Far nicer than being slapped around the face by a character from a fantasy novel series.
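To make the shim idea concrete, here’s a toy version of the mechanism: a fake “interpreter” under versions/ and a wrapper under shims/ that dispatches to it. Every name here is made up for the demo — real pyenv shims are rather more involved:

```shell
# Toy pyenv-style layout: shims/ goes on $PATH, real interpreters sit under versions/.
DEMO="$(mktemp -d)"
mkdir -p "$DEMO/shims" "$DEMO/versions/3.9/bin"

# A stand-in "interpreter" that just announces itself:
printf '#!/bin/sh\necho "python 3.9 (from versions/3.9)"\n' \
    > "$DEMO/versions/3.9/bin/python"

# The shim: a wrapper that forwards to the currently-selected version:
printf '#!/bin/sh\nexec "%s/versions/3.9/bin/python" "$@"\n' "$DEMO" \
    > "$DEMO/shims/python"

chmod +x "$DEMO/shims/python" "$DEMO/versions/3.9/bin/python"

# With shims/ first on $PATH, "python" resolves to the shim:
PATH="$DEMO/shims:$PATH" python
# prints: python 3.9 (from versions/3.9)
```

Swap which target the shim execs and every caller of python gets a different interpreter, without their $PATH ever changing — that’s the whole trick.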

[0] while you don’t need to be root to install pyenv, since it will build the version of Python from source you may need to install additional packages (such as gcc) if they aren’t already installed.
[1] one of these days I will tidy it up enough to make it publicly-available, but it doesn’t do anything that these public projects already do, and probably do better.

Resetting user passwords on Mastodon

I recently installed Mastodon (yay!), didn’t enable email sending, then promptly forgot my password (duh!). Since Mastodon is using Rails, it’s pretty straightforward to reset it.

Log into the docker container

I’m running Mastodon in Docker with docker-compose (see here for more details). If you are too, you’ll need to log in to the web container to do this. If you’re not, you can skip this.

First, find the name of the web container. It’s probably mastodon_web_1, but check with:-

$ docker ps | grep mastodon_web | awk '{ print $NF }'

…and the output should look something like this:-


Log in to the container with the name you just got:-

$ docker exec -ti mastodon_web_1 bash
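In case the awk in the docker ps pipeline above looks opaque: $NF is the last whitespace-separated field on each line, which for docker ps output is the container name. With a made-up line of docker ps output:

```shell
# awk splits each line on whitespace; $NF is the last field (the name column).
printf 'abc123  mastodon_web  Up 2 days  mastodon_web_1\n' | awk '{ print $NF }'
# prints: mastodon_web_1
```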

Use the rails console to reset the user’s password

Start the rails console with:-

bash-4.3$ rails c
Default type scope order, limit and offset are ignored and will be nullified
Creating scope :cache_ids. Overwriting existing method Notification.cache_ids.
Chewy console strategy is `urgent`
Loading production environment (Rails 5.2.1)

Get the Account object for the username of the account whose password you want to reset. Its details will also be echoed to the screen:-

irb(main):001:0> account = Account.find_by(username: 'myuser')
=> #<Account id: 1, username: "myuser", do...

Next, get the associated User object:-

irb(main):002:0> user = User.find_by(account: account)
=> User(id: integer, email: string, created_at: datetime, updated_at: datet...

Again, the details of the User object will be echoed to the screen as well as placed in the user variable. Now we can change the password attribute on the User record:-

irb(main):003:0> user.password = 'dontforgetitthistime'
=> "dontforgetitthistime"

…and then save it:-

[ActiveJob] Enqueued ActionMailer::DeliveryJob (Job ID: 8975893d-ba20-3453-b5ed-2911e846276a) to Sidekiq(mailers) with arguments: "UserMailer", "password_change", "deliver_now", #<GlobalID:0x00005573acbdb148 @uri=#<URI::GID gid://mastodon/User/1>>
=> true

And you’re done! If you haven’t tried Mastodon, feel free to head over to and sign up for an account.

Dokku and KVM on Ubuntu quickstart

Install prerequisites

Install qemu and supporting utilities:-

sudo apt install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virtinst

Download Ubuntu

At the time of writing, 18.04 is the latest. I usually store ISOs in a separate directory to the VM images, so feel free to adjust the below to suit.

wget

Create a guest

Create a guest VM. Again, the below settings are really a minimum requirement, so adjust to your tastes.

virt-install --name=dokku --vcpus=1 --memory=1024 --cdrom=ubuntu-18.04.1-live-server-amd64.iso --disk size=10

This will open up virt-viewer so you can see the console. Alternatively, you can use the --graphics vnc option to enable the VNC server.

Install Ubuntu

Install Ubuntu. You can accept the defaults. You might need to enable the universe repository, which you can do by editing /etc/apt/sources.list (see here for more information).

Install Dokku

Download the Dokku installer:-

wget

Have a look at it before running it, making sure nobody’s snuck anything nefarious in there. If you’re feeling particularly paranoid, the installer itself pulls the Docker install script, so you might want to grab that too and give it a once-over before continuing.

Run the installer:-

chmod +x
sudo ./

Configure Dokku

Open http://<your vm IP>/ in a browser, and tell Dokku:-
  • your SSH public key
  • a hostname
  • whether you want to enable virtualhost naming
Once you hit Finish Setup, Dokku will shut down the web server and redirect you to, giving you the first steps to deploying apps to Dokku.

Two new (unrelated) Docker Compose definitions


docker-elk, a fork from deviantony/docker-elk. This fork includes an rsyslog container running on port 10514 so you can pump syslog into Elasticsearch out of the box.


docker-teamspeak, a fork of the now-defunct overshard/docker-teamspeak. I’ve updated this to support Teamspeak 3.3.0.