27 Jul 2019, 01:36

basename and dirname in Rust

I recently did some minor file name munging in Rust, and was reminded that one of the hard parts about learning a new language is the differences in vocabulary.

In UNIX, there are two command line tools, basename and dirname. They take a pathname as an argument and print a modified pathname to stdout, which is really handy for shell scripts. Several other languages copied that naming convention, and so I was really surprised to find that googling for rust dirname didn’t return anything useful1.

Here’s a usage example: say you have the pathname /etc/ssh/sshd.config. Running dirname on it prints /etc/ssh, and basename prints sshd.config. Ruby, Python, and Go all follow a similar pattern (OK, Go calls its functions Dir and Base). Rust does not - it calls them something else2.

In Rust, the functions live under the Path struct and are called parent (the dirname equivalent), and file_name (the basename equivalent).

These names make sense! They’re just way outside the range of vocabulary I’m used to.
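To illustrate (a quick sketch; note that both functions return an Option, since not every path has a parent or a final component):

```rust
use std::ffi::OsStr;
use std::path::Path;

fn main() {
    let path = Path::new("/etc/ssh/sshd.config");

    // parent() is the dirname equivalent; the root path "/" has
    // no parent, hence the Option.
    assert_eq!(path.parent(), Some(Path::new("/etc/ssh")));

    // file_name() is the basename equivalent; a path ending in
    // ".." has no file name, hence the Option here, too.
    assert_eq!(path.file_name(), Some(OsStr::new("sshd.config")));
}
```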

  1. Maybe now that this post is published, it will!
  2. Rust used to have functions under these names, up until late 2014-early 2015, but then the “Path reform” happened, which normalized the API a great deal and renamed a bunch of functions.

28 Nov 2018, 01:50

Memorizing passwords with Anki & 1Password

Recently, I started using Anki, a spaced repetition scheduler1, to learn French with the Fluent Forever method. While there have been setbacks, it’s been a pretty great experience overall, and it seems to be super useful for memorizing and retaining all sorts of information! Since I have to memorize all sorts of passwords anyway (phone unlock code, laptop login password, gym locker combination), why not use Anki to help me retain them?

Why not, indeed!

Well, for one, there’s the problem of trust: you should always be very careful about where you put your passwords. At the moment, I trust my local macOS keychain and 1Password to store (most of) my secrets safely and protect me from breaches. If the anki-web servers were ever breached, attackers could read my (unencrypted) passwords - I don’t trust the anki-web app that much.

So, since I trust 1Password, is there a way to maybe link from Anki to 1Password so I can see and check the password? Turns out, there is! 1Password has its own URL scheme, pretty well-documented here2.

Why do this? What passwords would you even memorize?

You, the reader, will likely have an answer to the question of “which passwords even.” There’s the obvious one: your “one” password, the one that unlocks your password vault - you definitely don’t want to forget it, so it’s a natural candidate for SRS. The same goes for everything you’d have to unlock to bootstrap your identity after catastrophic data loss - think backup encryption passwords (I use Arq) that you don’t type very often but that secure the rest of your data. You might have saved them in a keychain somewhere, but what happens if the computer with that keychain blows up? You’ll want them in your brain, too.

Last but not least, I use this to memorize the kinds of everyday secrets where it’s kind of annoying to take out my phone or computer to look them up - credit card and ATM PIN codes, gym locker codes, that sort of thing.

Of course, you should make your memorization job as easy as possible - use diceware passwords whenever you can3. They’ll be much easier to remember that way.

How to set up that study deck

Setting up the deck is really not that much work, especially as I’ve pre-made an (empty) Anki deck with a card type that you can import & then start filling out with references to things you need to memorize. Here’s a step-by-step guide:

  1. In order to use the 1Password URL scheme effectively, you have to set an “advanced” option in 1Password first. In the settings, check the setting to enable “Copy UUID”:

the "advanced settings" window in 1Password

  2. Now, identify the passwords that you need to memorize. Ideally, that list is very short. I tagged them “memorize” in 1Password so they all appear in one place, but whatever works for you is best.

  3. Download this anki deck and import it into Anki. It should appear as “Memorize Passwords with 1Password” and contain a single example card.

  4. Click that deck in the Anki overview and click “Browse”. In the new window, hit Enter to display the example card.

  5. There, you see it has a name and a UUID. That UUID identifies the 1Password entry that you wish to memorize, but isn’t itself secret - in the example card, it belongs to a test password that I created for this blog post.

  6. Delete that card (it’s useless to you, after all!) and close the card browser:

right-click the item and select "Delete"

  7. Click “Add” in Anki’s deck view and give the new entry a name; then, in 1Password, open one of the entries you want to memorize and select “Copy UUID”:

The 1Password context menu for a password with "Copy UUID" selected

  8. Back in Anki, paste that UUID into the “UUID” field.

Repeat steps 7 and 8 for all the secrets you want to memorize. Then, let’s study!

How to study these entries

Now comes the magic part: When you study that deck, Anki will ask you what the password is for the name you have given the card (say, your disk encryption password). Then, when you reveal the answer, it gives you a link that takes you to 1Password, where you can reveal the password and check that your answer is right. Then, go back to Anki and tell it how well you did. (Got it wrong? Got it right? Was it too easy?)

This works in both the iOS and macOS versions of Anki and 1Password; I haven’t tested the Windows versions yet, but I suspect/hope they’ll work, too - let me know!

Here’s how it looks in macOS for me:

All that app switching is a bit of a hassle, but I believe it’s the best we can do for now! It sure feels better than storing important credentials in plaintext, and definitely is better than forgetting them!

  1. Also called “SRS” for short
  2. That URL scheme works in both the iOS and the macOS apps!
  3. 1Password has a diceware password generator, use the “Words” password generator mode!

27 Oct 2018, 01:50

Editing rustdoc comments in emacs

I’ve been writing a bunch of rust code lately, and it’s been a pretty great experience! The thing I enjoy most about it is that the documentation looks just so extremely good.

Which brings me to my major point of frustration with my rust-writing setup: writing doc comments in emacs’s otherwise excellent rust-mode is a pain. You always have to insert the doc comment character sequence at the start of every line, and writing doctest examples is even worse: you write rust code, inside markdown, inside rust comments. Add smartparens and other helper packages, and editing gets really annoying pretty fast.

So, I decided to look around for solutions, and found something pretty cool: Fanael’s edit-indirect is an emacs package that will take lines from the current buffer, put them into a new buffer, transform them, apply a major mode, and then let you edit them. When you’re done, you apply the changes back to the original buffer. If this sounds like org-edit-src-code, that’s because it’s directly inspired by it. (-:

So I wrote this piece of elisp glue to help my rustdoc editing experience, and so far it’s pretty great: Navigate to a rustdoc comment, hit C-c ' (the same keys you’d use in a literate org file), up pops a buffer in markdown-mode; edit that and then hit C-c ' again to apply the changes back to the original buffer. Easy!

If you write rust in emacs, I hope you’ll try this out and if you do, let me know how it works for you!

19 Nov 2017, 16:00

Enabling the F4 key in macOS

This problem has been a mystery to me, and I figure to a bunch of other people, too: If you hit F4 in Mac OS X (or macOS) since Lion, it does not have any effect. What.

It appears that the key (when hit without modifiers) is disabled for some reason: I mainly rely on the Function keys on my mechanical keyboard to switch windows in tmux, and e.g., if you hit shift-F4 (the same thing, according to the terminal), it actually works.

There’s a bunch of forums that advise deleting ~/Library/Preferences/com.apple.symbolichotkeys.plist, which also removes all your custom app shortcuts. I have a bunch of those and would prefer to keep them, thank you!

Turns out you can skip deleting it and still get the desired behavior:

A milder fix

The main insight that led me to this fix is outlined on this post1: The symbolic hotkeys plist is a mapping of key codes to some parameters. So, after some experimentation, I cooked up this command line (which, if you try it, make sure you create a backup of the ~/Library/Preferences/com.apple.symbolichotkeys.plist file!)

defaults write ~/Library/Preferences/com.apple.symbolichotkeys.plist AppleSymbolicHotKeys -dict-add 96 '{enabled = 1; value = {parameters = (96); type = standard; }; }'

This, I think, does the following: It adds key 96 to the plist (96 stands for F4, according to the krypted blog post), with a parameter that I can only guess makes it send the 96 keycode (and if it doesn’t, at least doesn’t do harm), as a “standard” key, and enables that key.

After logging out and back in, pressing my F4 key unmodified works, and all my custom app shortcuts are still there. Win!

Do let me know if this works for you!

  1. This post does not have any attribution on it, but it appears that it is written by Charles Edge. Thanks, Charles!

26 May 2017, 18:37

Something obvious (in retrospect) about ES6 promises

I’ve been pretty excited about the new features of EcmaScript 6 (ES6, or just “modern JavaScript”) for a while, but yesterday it really struck me how entirely different some of them make the experience of writing JS code!

First: A promise1

Promises are one thing that’s new in ES6. They encode, in a neat little state machine, how an asynchronous action might progress. From the mozilla docs:

A Promise is in one of these states:

  • pending: initial state, not fulfilled or rejected.
  • fulfilled: meaning that the operation completed successfully.
  • rejected: meaning that the operation failed.

A short example of using the (equally new) fetch function (and the equally, equally new arrow function syntax) for accessing HTTP content:

    fetch("/index.xml"). // assumed path to this blog's Atom feed
    then((response) => response.text()).
    then((txt) => console.log(txt.split("\n")[0]));

Which would make an HTTP request to this blog’s Atom feed, returning a promise; then when that promise resolves with a response, we request the response body, which returns a promise in turn. When the second promise resolves, we print the first line of the body.

At first glance, this is much easier to follow than the callback hell we all had to deal with before. But wait - there’s more!

As you’d expect from a properly asynchronous tool, you can call .then on promises even if they’re resolved. (Because things might happen faster or slower than your computer can execute the next JS statement, of course!)2

And that brings us to the neat thing that I saw for the first time yesterday.


I was pair-programming with somebody yesterday, and we were musing about chaining HTTP requests. We’d written a thing that was firing off all sorts of requests using fetch simultaneously, and waited for them all to resolve using Promise.all. However, we wanted to fire the requests off one after the other.

So, without blinking, my pair writes this code:

    // urls is the array of URLs to request, in order:
    urls.reduce(
        (p, url) =>
            p.then(() => fetch(url).then(handleResponse)),
        Promise.resolve());

What. Uh. This does the right thing, but huh? A bunch of insights led to this short piece of code:

  • Promise.resolve() returns a promise that is already in the resolved state. But as mentioned before, it can still have .then called on it3. And so can every promise returned by fetch.

  • .then in turn returns a promise, which lets us chain them together.

  • .reduce will run a function across an array’s contents and the previous function’s return value.

And so, using the resolved promise as a zero element, this piece of code gathers up requests, one after the other.

Kinda amazing.
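To see the pattern in isolation, here’s a sketch with a stand-in for fetch (a timer-based task, so it runs anywhere):

```javascript
// Chain async tasks sequentially with reduce, using a resolved
// promise as the zero element.
const order = [];
const work = (name) =>
  new Promise((resolve) =>
    setTimeout(() => {
      order.push(name);
      resolve(name);
    }, 5));

["a", "b", "c"]
  .reduce((p, name) => p.then(() => work(name)), Promise.resolve())
  .then(() => console.log(order.join(","))); // prints "a,b,c": strictly in sequence
```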

Suddenly, Burritos

I made a promise, but allow me to drift off into maths appreciation briefly: promises, combined with some algebra (and operators like reduce that take advantage of the algebraic nature of things), let you express really cool things in a tidy way.

I’d encourage you to go forage in the mozilla docs for more ES6 features (fetch alone is worth a lot)! Look for Object.assign and other gems!

But more importantly, think about what you could build if you had a sensible and well-integrated state machine abstraction for your most complex software task.

  1. The promise is that I won’t use the word “monad”4
  2. It’s worth noting that .then also returns a promise.
  3. it’s “thenable”, in ES6 parlance, which I find hilarious.
  4. Well, oops. This time doesn’t count. Also, you’re reading footnotes.

10 Dec 2016, 16:47

Configuring iTerm2 for mosh: URLs

I use a Mac as my main typing/character-displaying computer, and on macOS, iTerm2 is the best terminal emulator that I’ve found so far. In addition to iTerm2, I also use mosh, the mobile shell, to get a fast, interactive and disconnection-resistant SSH-like connection to hosts on which I need to use the commandline.

So, in order to make getting to these hosts fast, I’ve made something that sets up bookmarks which open a new terminal window for me: The ruby gem ssh_bookmarker runs in a LaunchAgent anytime my ~/.ssh/known_hosts or ~/.ssh/config files change and drops a bunch of bookmarks in a directory that gets indexed by spotlight.

Now, whenever I want to open a remote shell, I use spotlight and type the host name. Very handy! (You can also use open ssh://my.cool.server.horse and get a new iTerm tab with the SSH session in it, and that’s exactly what goes on in the background.)

That works perfectly for SSH (to see how to set this up, see the FAQ and search for “handler for ssh://”), but I’d like to do this with mosh or other custom URL schemes, too! This is not as readily available as ssh:// URL handling, but it can be done.

For about 5 years now, I’ve had to look up how to do this and cobble together a solution from various rumors, stackoverflow articles and digging through source code. No more! This time I’m blogging the solution so future-me can have an easier time of it.


First, you’ll need iTerm2 - I use version 3.0.12, but the newer the better. Then, you’ll need mosh - I install it from homebrew, and the program location is /usr/local/bin/mosh.

Throughout this post, we’ll also be using the jq and duti tools, you can get them from homebrew, too.

The iTerm profile and its GUID

First, you’ll need an iTerm profile dedicated to mosh-ing. Any settings you want are ok, but you need to set this as the command: /usr/local/bin/mosh $$HOST$$

Now that you have this profile, you’ll need its GUID. This is easiest by exporting your new profile as JSON from iTerm’s Profiles preferences:

  1. Select the Mosh profile you just created,
  2. Open the “Other Actions” gear menu below the profile list.
  3. Select “Copy Profile as JSON”:

"Copy Profile as JSON" in the mosh profile's "Other Actions" menu

To figure out the profile’s GUID, run:

pbpaste | jq '.Guid'

This should print a UUID in double quotes. Make a note of that string! We’re going to use it as THEGUID below.

URL handling - LaunchServices

URL handling in macOS comes in two steps: First when you run open somescheme://host/, LaunchServices looks up what program handles the given URL scheme. To set iTerm2 up as the handler for mosh:// URLs, I use duti:

duti -s com.googlecode.iterm2 mosh

At this point, running open mosh://my.cool.server.horse should open a new iTerm tab, but it won’t open a mosh connection yet. What else do we need to do?

URL handling on iTerm’s end

Once iTerm gets instructed to open a mosh:// URL, it looks up the URL scheme in its scheme<>profile mapping. Since mosh is not in there yet, let’s fix this (replace THEGUID with the output from jq in the GUID section):

defaults write com.googlecode.iterm2 URLHandlersByGuid -dict-add mosh THEGUID

And then restart iTerm2.


If all this worked correctly and all the IDs line up, running open mosh://my.cool.server.horse should open a new iTerm window running mosh, attempting to open a connection to a cool example server.

Next steps

You can save yourself the trouble of keeping track of these GUIDs, especially if you use some sort of management tool (like ansible) to automatically set up your Macs. I have started experimenting with Dynamic Profiles and specifying GUIDs as host names, and that might have some pleasing results, too. I’ll post an update when I get this fully working.

Also, this doesn’t yet work for mosh:// URLs with a user name specified (or rather, the user name gets ignored and only the host part gets passed to mosh). It’s likely that you’ll have to wrap the mosh tool with another tool in order to get that to work.

In the meantime, I hope you enjoy.

30 Apr 2016, 13:18

Some things I learned about dealing with RSI

The first time I had a painful RSI attack was in 2003. It was as if my world collapsed: I’d dealt with hand weirdness since the late 90s (twitches, tingles), but I didn’t recognize that as symptoms of RSI. When both my hands started hurting and even everyday chores like folding laundry turned painful, I started doubting whether I could continue my career in technology.

As it turns out, it is possible to deal with RSI, not be in pain and have a career that involves a lot of typing.

Things I (thought I) had gotten right before this started

I have always been kind of an ergonomics nerd, even before I knew that what I felt was RSI; so it was twice as hurtful that I was in pain when it struck, and everything seemed even more hopeless than it would already have been.

Before all this started, I’d invested in an expensive “ergonomic” keyboard (Kinesis Ergo Elan – I even took this to work with me), a trackball and a good swiveling chair that could be adjusted to fit my body; my desk was the best height I could get it, and the monitor was positioned such that my neck could be straight when I sat down.

I’m sure these things kept me going for a bit longer than if I hadn’t bought them (and hey, spending money on expensive stuff feels better if you can tell yourself it’s for your long-term health), however:

Mistakes I’d made (so please don’t repeat them yourself)

I sat in front of this setup day and night, working and writing. I did this for so long that I couldn’t sit up straight and had to pull my knees up to my chin so my body would not slump forward - my neck and shoulders were very unhappy with this, but this let me stay online for two hours longer. In the end, the only parts of my body not practically immobilized were my fingers and hands.

I had set up custom key combinations in apps and editors that resulted in my fingers stretching and reaching across keys a lot, and I used these very often. At times, my thumb and pinkie finger would be at opposite ends of their keyboard halves, which is pretty far apart on the keyboard I used most often.

Since I was so overworked (doing sysadmin work at a part-time job 3+ days a week, working on university courses the other 4 days, working on side projects, plus chatting on IRC and on forums), I didn’t think I could afford to take breaks or play any sports or work out, and so I just stayed at the keyboard.

What happened then

After one particularly stressful week of 3 homework assignments plus work plus an exam, I woke up with pain in my fingers, palms and arms. That pain didn’t go away for a week, which is when I saw a doctor. They told me what I’d suspected & feared: these were symptoms of RSI, and I would have to step back and stop working for a while. Argh!

They gave me some cortisone cream and a wrist brace, and told me to go to a therapy center to apply heat and electric pulses for the pain. None of these things helped.

I was out of commission for a month; while I wasn’t able to work on anything, I read through an 800-page volume on the atrocities of capitalism, so you can probably imagine the cheery mood that this set me in.

After that month, it was time to go back to work, but my fingers still hurt. So I put on the wrist brace while working and just typed less. Turns out you aren’t meant to do that, and doing it both breaks the brace and increases your pain.

What did help in the end

The real turning point in this ordeal was when I started actually reading about what people suffering from RSI can do about it themselves. These two books really helped me understand what I’d been doing wrong and how I could stop doing it & start doing something better instead:

Both have a great mix of background and exercises that help un-cramp the tiny muscles you shouldn’t strain, and mobilize the ones that should actually do the moving. I highly recommend reading them both.

Here are the things I learned and did that helped me the most:

  • The small muscles in your hands really are meant for small high-precision movements. Instead of using them for everything, use the larger muscles in your shoulders, your back, and in your arms to move your hands in place on the keyboard. It’ll feel better and look more elegant - if you feel like you’re playing a piano, you’re doing it right.

  • Make sure you have good posture. At rest, you should have right angles in your hips and elbows. Shoulders back and back&neck straight. Look straight ahead and slightly down at your monitor. Laptops make this hard, so you may have to invest in a stand or a monitor, and in an external keyboard.

  • Take regular typing breaks. Get the computer to remind you about these breaks. On the Mac, Timeout 2 is good. On Windows and Linux, I recommend Workrave.

  • Set your break reminder to interrupt you briefly every 7-10 minutes, and use the break to sit back and relax for a few seconds.

  • Set a longer break every 45 minutes to 1.5 hours, and take the time to stand up, walk and stretch. Get up and drink some water, your eyes will thank you, too.

  • Stretch in your spare time. Anything that opens up your chest or stretches your neck muscles is great. These Aikido wrist stretches feel amazing. As with anything else, do not overdo the stretches. You should feel a gentle stretch, never any pain or cramping elsewhere.

  • All this will mean you type a little less, so you’ll be very deliberate about what you type (this happens automatically!), and you’ll make fewer mistakes.

  • Once your pain is receding, build upper body and arm strength. It doesn’t matter what you do - hit the gym, do the 100 pushup challenge, do yoga, go rowing - anything goes. As long as you exercise your back, shoulders and arms, they will get stronger and you will be better able to keep a good posture, and make those large motions that help you type the right way. The key here is to not overdo exercise, especially in the beginning. You will feel like you have something to catch up to, but the way to improvement is by steadily putting in a little work so you get stronger. If anything starts hurting or going numb, stop and take a few days’ break and start doing something more gentle.

  • Get more sleep. Chances are you haven’t been sleeping well since the pain started, so you will need to sleep more to help your body heal. Exercise helps tire you out, so do some of that! (But again, don’t overdo it; or focus on muscle groups unaffected by RSI – e.g., go on bike rides.)

  • It’ll take a few weeks to months, and it’s very frustrating at first, but you will feel better.

You’ll notice that I don’t mention much specific equipment here. This is because you can achieve good posture and typing habits no matter what you type on. That said, if anything you use regularly gives you trouble, replace it! If your desk is too high, get a footrest and a chair that goes up enough so your arms are at a right angle. Find the setup that works well for you, and then build habits around that setup.

What happened to me since then

In the (gosh) 13 years I’ve worked in IT since then, there have been long periods where I was in no pain, but there were also stressful periods when the RSI returned. These are the worst - not only am I in pain when this happens, but the whole set of old thoughts and habits comes flooding back: is this all worth it, will I be able to keep working on this thing that I love, let me just finish this one large project, etc.

When this happens, it’s good to take a step back and figure out what happened, then adjust any habits that I have formed. So far, there was always something that I could adjust that would help me feel better after a little while - most often, that’s a combination of sleep, exercise, and stretches.


If you suffer from it, RSI may seem like an unavoidable thing that can end your career. It is not. You can beat this, and you will feel a lot better by helping your body move the way it wants to move.

Please take care of yourself. <3

21 Jan 2016, 18:57

Better filters for gmail with google apps scripts

At my workplace, we use github pretty extensively, and with github, we use organization teams. They allow assigning permissions on different repos to groups of people, but they’re also a really great way of @-mentioning groups of people. This is wonderful, but sadly, github doesn’t make it easy for gmail filters to tell the difference between a notification email you got because it was interesting to you and one you got because somebody sent a heads-up @-mention to a team you’re on.

I thought that was impossible to solve, but I was so wrong!

The setup: github notification email basics

Github makes it relatively easy to opt into getting all sorts of notifications that might interest you. Sadly, it doesn’t make it easy to stop it from notifying you about things that aren’t of interest to you anymore: Either you can’t turn off a notification in the first place, or you have to visit every single thing that it notifies about and hit “Unsubscribe”. Not optimal!

In theory, it should be easier to filter github’s notification emails by relevance than it is to filter on their webface; at least with emails, you can use third-party filtering tools, right?1

If you’re using gmail, you’re shaking your head now (as I did): All the criteria that you could usefully use in gmail filters (From address, Subject, To address) are the same across all sorts of notifications you get from github. Ugh.

However, they do set a header field, X-Github-Reason: it is set to team_mention if the sole reason you’re getting an email is that somebody mentioned one of your teams (not because you subscribed to an issue on purpose, say). But there’s a snag: gmail can’t match on that header with its default filters.

Fortunately, Lyzi Diamond has written up a wonderful, and completely working solution to this problem using a mechanism that I was vaguely aware of in the past, but didn’t look at in detail: Google Apps Scripts.

(Go on, read her article; I’ll wait.)

Google Apps Scripts?!

Some time ago, Google made Google Docs, and for some reason they added a feature where you can edit JavaScript software projects (it’s mostly ok; the editor is no Emacs, but you can get by). They also added a facility that lets you trigger those scripts at regular intervals, say once a minute. And they added lots and lots of bindings into their Apps For Business product suite, with much better functionality than they expose in their user-facing APIs2.

In effect, Apps Scripts are really powerful cron jobs that google runs for you, which can process your email.

My current github notification filter setup

So, as you may have gathered above, I have Opinions on how a notification should affect my life:

  • If a person in the work org writes in about one of “my” issues or pull requests, I would like to know immediately (this means, the email should go into my inbox).

  • Same if they @-mention me personally. This probably means they’re blocked, or need help or are asking for a review.

  • If somebody @-mentions only a team I’m on, the email should be available under a label, but not go into my inbox.

I’ve modified Lyzi’s script for my purposes (also, I made it parse simple RFC822 headers, but not multi-line ones). The resulting script is in this gist.
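The header-parsing part can be sketched as a pure function (an illustration of the idea, not the actual gist code; in Apps Script you’d feed it message.getRawContent()):

```javascript
// Extract a single-line header value from a raw RFC822 message.
// Deliberately naive: no support for folded (multi-line) headers.
function headerValue(raw, name) {
  var headers = raw.split(/\r?\n\r?\n/)[0]; // headers end at the first blank line
  var lines = headers.split(/\r?\n/);
  for (var i = 0; i < lines.length; i++) {
    var m = lines[i].match(new RegExp("^" + name + ":\\s*(.*)$", "i"));
    if (m) return m[1];
  }
  return null;
}

// headerValue(rawMessage, "X-Github-Reason") === "team_mention" tells us
// the mail only exists because a team was @-mentioned.
```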

Setting this up in your gmail account

This is a pretty manual process, sorry there’s no shell script you can pipe into bash (-:

  1. Create a gmail filter to match from:notifications@github.com that assigns a label (mine is _github_incoming) and archives the email. (The google apps script will send github notifications to your inbox according to the criteria above!)
  2. Create a new script project and copy/paste the script from my gist as indicated in Lyzi’s blog post. It has screenshots! It’s great!
  3. Adjust the variables at the top to reflect the labels that you want email to be tagged with.
  4. Set up a trigger: I set mine up like this to call processMessages once a minute:

trigger to call `processMessages` once per minute

  5. Set up notifications for that trigger: if anything should go wrong (I have a bug, there was a syntax error while pasting), you should get a notification. Click on “notifications” and set up a notification to email you hourly (or immediately, if you like to get lots of email in case something goes wrong).

That’s it! Now your inbox should accumulate much less clutter!

I am pretty impressed with the things that Apps Scripts can let you do; my dream is a thing that cleans out email in small batches during off-hours (since bulk-deleting hundreds of thousands of messages can render your account unusable for hours). Maybe I’ll experiment with this soon!

  1. For my purposes, I’m focusing only on filtering out notifications that I’m getting solely because a team name that I’m on is @-mentioned in a pull request; you could imagine all sorts of other, more complex criteria!
  2. Just look at the meager offerings in the public API for managing gmail filters; you can create filters… and that’s it. I could go on about this API for days.

04 Jan 2016, 19:03

Deptyr, or how I learned to love UNIX domain sockets

Let’s say you have a program that needs to do I/O on a terminal (it draws really nice ascii graphics!), but it usually runs unsupervised. If the program crashes, you want a tool like s6 or systemd to restart it. The problem here is the terminal I/O: since most process supervision tools redirect standard I/O to a log file, the wonderful terminal graphics just end up as non-ascii chunder that confuses you if you try to tail the log file.

My usual approach would have been to start the program under screen (screen -D -m if you’re interested), but that way you lose part of your process supervision tools’ capabilities: There’s a process in between the supervisor and your actual program, so you can’t send e.g. SIGKILL with your standard tools (e.g., svc -k /svc/your-tool) to force it to exit.

However, this approach is generally what I want – I’d like the crashy program to run under a pseudo terminal like screen to have its I/O be available elsewhere, and also make the pseudo-terminal’ed process be a direct child of the process supervisor. One feels reminded of a cake that is had & eaten.

I searched up and down, and besides some djb announcement in the early 90s of a tool that might be made to do what I want (which doesn’t compile under modern OSes anymore, and is also fantastically underdocumented), I didn’t find anything. screen -Dm was my best bet, but ugh! Time to see if we can do something hilarious with UNIX semantics. Spoiler: We totally can.

First: Pseudo Terminals - how do they work?

Pseudo Terminals (aka pseudo TTYs or PTYs) are a fun and kinda horrible facility in UNIX: A process can allocate a PTY, and gets a controlling and a client end1. If you’re writing a terminal-emulation program like xterm, it would keep the controlling end - this is what allows it to read what’s being written to the client end and send text to the client, as if that text appeared in a real terminal. Your terminal emulator would pass the client end to a shell session and then read what the shell sends to stdout or stderr.2

The one thing you really need to know about PTYs here is that the controlling and the client end both come as UNIX file descriptors. They’re a number attached to a process, much like file handles, sockets or other silly things you can use with read/write.
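A minimal sketch of this in Python (whose os.openpty wraps the same facility): output written to the client end pops out on the controlling end.

```python
import os

# Allocate a PTY pair; os.openpty() returns the (controlling, client)
# file descriptors described above.
controlling, client = os.openpty()

# A program writing to the client end thinks it's printing to a
# real terminal...
os.write(client, b"hello from the client end\n")

# ...and whoever holds the controlling end reads that output, just
# like a terminal emulator would. (The line discipline may translate
# the newline to \r\n along the way.)
print(os.read(controlling, 1024))

os.close(client)
os.close(controlling)
```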

So, my thinking goes: Let me write a little UNIX tool that sets up a new PTY, sends the controlling end to another process, retains the client end for itself, and calls exec to start my crashy program. Calling exec doesn’t change the process hierarchy - it’s exactly what other tools do to start programs under process supervision.

A diagram of the process tree with a wormhole

If only there was a way to send that controlling end elsewhere…

But… uh, can you send the controlling end of a PTY to another process? Turns out you can!

UNIX domain sockets3 are a socket family (the “Internet” sockets you know are another family). These are file-like objects that behave almost exactly like real network sockets to localhost - they have two ends, and you can send and receive data via sendmsg and recvmsg - but they have a few extra functions! One is that one end can query the other end’s user ID and other authentication data.
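As a sketch of that credentials-querying function (Linux-specific; the SO_PEERCRED socket option returns the peer’s struct ucred):

```python
import os
import socket
import struct

# A connected pair of UNIX domain sockets, both ends in this process.
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# SO_PEERCRED yields the peer's credentials as a struct ucred:
# three native ints (pid, uid, gid).
creds = a.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                     struct.calcsize("3i"))
pid, uid, gid = struct.unpack("3i", creds)
print(pid, uid, gid)

a.close()
b.close()
```

Since both ends live in the same process here, the reported credentials are our own.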

Another cool function of UNIX domain sockets is that you can send structured data like file descriptors over them. Remember file descriptors? Both ends of a PTY are file descriptors!

Yay! Just send the controlling end of the PTY through a UNIX domain socket to a process that’s running under a terminal emulator like screen! We can do this!
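Here’s the whole trick in miniature - Python 3.9+ wraps the sendmsg/recvmsg-with-SCM_RIGHTS dance as socket.send_fds and socket.recv_fds, and a socketpair stands in for the two connected processes:

```python
import os
import socket

# Stand-ins for the supervised process and the "head" running under
# a terminal emulator, connected by a UNIX domain socket.
supervised, head = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

controlling, client = os.openpty()

# Ship the controlling end across the socket.
socket.send_fds(supervised, [b"here, have a pty"], [controlling])
msg, fds, _flags, _addr = socket.recv_fds(head, 1024, 1)
received = fds[0]

# The received descriptor is a fresh fd number on the receiving side,
# but it refers to the very same controlling end:
os.write(client, b"hello, head\n")
data = os.read(received, 1024)
print(data)
```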

Oh right: Prior art & introducing deptyr!

My amazing colleague Nelson had already written a tool called reptyr, which does almost exactly what I wanted, just in reverse: It uses ptrace to attach to a process running under another terminal and forces it to set up a new PTY; it then makes the process send the controlling end to reptyr through a UNIX domain socket, so reptyr can proxy your input and the process’s output.

Since reptyr’s code base is geared towards doing just that re-PTY-ing of existing programs (it’s really not my pun), I decided to rearrange it into a new tool for starting processes headlessly, called deptyr.

Deptyr has two modes of operation: One is to act as the “head”: It’s the thing that receives the controlling end of a PTY and acts as a proxy for your program’s output & any user input.

The other mode is the one that runs under process supervision - it sets up a PTY, connects to the “head” deptyr, and then execs your program with stdin/stdout redirected.
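In rough strokes, the supervised mode does something like the following (a hypothetical Python sketch, not deptyr’s actual C code; the one-message socket protocol is made up):

```python
import os
import socket

def run_supervised(argv, head_socket_path):
    """Run argv under a fresh PTY, shipping the controlling end to
    the "head" process listening on head_socket_path."""
    controlling, client = os.openpty()

    # Hand the controlling end to whoever will proxy our terminal I/O.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(head_socket_path)
        socket.send_fds(sock, [b"pty"], [controlling])
    os.close(controlling)

    # Point our own stdin/stdout/stderr at the client end of the PTY.
    os.dup2(client, 0)
    os.dup2(client, 1)
    os.dup2(client, 2)
    if client > 2:
        os.close(client)

    # exec replaces this process in place: the PID stays the same, so
    # the process supervisor still supervises its direct child.
    os.execvp(argv[0], argv)
```

The key property is that last exec: no intermediate process is left between the supervisor and the program, unlike with screen -D -m.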

Once I’ve got the original thing I wanted working, I’ll post an update with the config I used to actually run it under supervision. Initial experiments point to yes, but we’ll see (-:

  1. the standard terminology for the controlling and client ends is the “master” and “slave” ends. I find the standard terms extremely distasteful; in addition to the extreme lack of taste, they don’t even correctly convey what’s going on, so controlling/client ends it is.
  2. This is what tools like screen and xterm do! It’s pretty interesting to learn about this in detail – it’s pretty easy to run into a situation where you want to control a tool like a terminal emulator would. Sadly, I don’t know a lot of literature on PTYs. Send me your favorites!
  3. Beej has a pretty good intro to programming UNIX domain sockets!

02 Jan 2016, 20:24

Hosting my blog on Google App Engine with Letsencrypt

Editing my last post in Octopress was such a pain that I decided to switch the blog over to Hugo. While doing that, I decided that the yak stack wasn’t deep enough and that I should be moving my blog to https in the process. Here is my story (and links to automation shell scripts!)

(This is what happens when you give me a pot of black tea on New Year’s Day after 6 hours of sleep!)

The Yaks

I was hosting this blog on Amazon S3 - it’s static files, so that seemed reasonable. However, S3 can only host non-https sites - to get https, you have to put CloudFront in front of it, and then CloudFront would have to talk to S3 over plain http - that’s pretty ridiculous.

My colleague Carl found a great solution, though: If you write a tiny amount of configuration, and a go file containing package dummy, you can get Google App Engine (GAE) to host your weblog’s static files on their infra, with a reasonable HTTPS story!
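The configuration really is tiny - something in this shape (a sketch from memory of the old Go runtime’s app.yaml, not this blog’s actual file):

```yaml
# Hypothetical app.yaml for a static Hugo site on GAE's old Go runtime.
application: my-blog
version: 1
runtime: go
api_version: go1

handlers:
- url: /
  static_files: public/index.html
  upload: public/index.html
  secure: always
- url: /(.*)
  static_files: public/\1
  upload: public/\1
  secure: always
```

The go file next to it contains nothing but `package dummy` - just enough to convince GAE it’s deploying a Go app.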

All that you need now is an SSL certificate, and hey - letsencrypt gives you free certificates with reasonable (and most importantly, automatable) processes - perfect!

Getting that SSL Certificate

The default letsencrypt client expects to run on your web server as root. Google App Engine, however, doesn’t give you any of that - you get no web server, no code exec, and most certainly no root.

This sounds displeasingly impossible, but thankfully, we don’t have to use the letsencrypt client, except to set up an account. Once I had the private key file, I used letsencrypt.sh by Lukas Schauer to automate the SSL certificate issuance process.


This is how letsencrypt operates (they have a really, really good technical document too, so feel free to skip this section): They first check that you have access to the domain you’re requesting the certificate for, by giving you a challenge URL and a response body that they expect to get back when they hit that URL. Once they see the right response (within a timeout), they issue a certificate for your private key.
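Concretely, the challenge/response shapes are roughly these (a sketch; the real ACME protocol carries more fields than this):

```python
def challenge_url(domain: str, token: str) -> str:
    # Let's Encrypt fetches this well-known path on your domain.
    return f"http://{domain}/.well-known/acme-challenge/{token}"

def key_authorization(token: str, account_key_thumbprint: str) -> str:
    # The response body they expect back: the challenge token joined
    # with a thumbprint of your account key.
    return f"{token}.{account_key_thumbprint}"

print(challenge_url("example.com", "sOmEtOkEn"))
```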

The Automation Caper

With Google App Engine, we can deploy web apps, so I initially wrote a little go program that would respond to these requests and kept it under source control. This wasn’t great for a number of reasons, the biggest being that I had to copy/paste the challenge tokens back and forth - a toilsome process.

Now, letsencrypt.sh has a “hook” facility for the certificate issuance process: It calls a shell script or function for every step of the challenge/response flow. Writing the script to do the right thing was pretty trivial, and this is what it does (follow the links if you like bash scripts):

All this is held together by a kinda convoluted Makefile - here are the most important targets:

  • make deploy calls this script to generate the latest HTML, and deploy the app to GAE.
  • make certificates calls letsencrypt.sh with the right arguments and should allow me to renew the certificates that I created once they are closer to expiring (2016-03-31!)

Annoying Things That Cost Me Way Too Much Time

Two things in this setup were really pretty frustrating:

One, letsencrypt.sh requires a perl program to extract your regular letsencrypt client’s private key into a usable format (they store its RSA parameters in JSON; everything else under the sun expects the key to be in PEM format).

This perl program requires Crypt::OpenSSL::Bignum and ::RSA, which were serious pains to install under El Capitan. What I ended up doing was installing openssl from Homebrew and linking the headers (which it places out of the way) into place so that the install process could find them, like so:

ln -sf /usr/local/opt/openssl/include/openssl/ /usr/local/include/openssl

With the symlink in place, these two modules could install, and I could finally convert the private key to the right format. (Finding the right combination of cpan and file system things took me about an hour, ugh.)

Conclusion: letsencrypt, your client’s private key format sucks & converting it into anything remotely useful is annoyingly difficult.

The second frustrating/unfamiliar thing that cost me time was that if you have two GAE apps (one for a live blog and one for a “test” blog) and a certificate that covers both blogs’ domains, you have to upload the same certificate to both apps so that the GAE custom domain picker can even refer to it.

Conclusion: The GAE SSL cert upload form is convoluted and annoying, and I really want an API for this.

How well does it work?

I got my blog up under SSL in less than 4 hours, and that included a bunch of hacking. If you use the automation scripts and the pitfall-avoidance tricks I mentioned above, you should be able to get this running in far less time (I hope)!1

My weblog’s git repo is here. If you do use this, please let me know how it goes!

  1. I’ll probably write an update full of screams of frustration if cert renewal time comes and everything fails.2
  2. …but you won’t be able to read that update because my blog’s SSL config will be broken. So it goes! (-: