Anthony J. Martinez

Mobile Linux - Update 2

This is a short one: just buy an iPhone or whatever Android thing you want.

SSH at Scale with OpenSSH Certificates - SmartCard Backed Keys

Many weeks ago, when I prematurely declared a final note on the SSH at Scale topic, I was left wondering if I could go further and loop my Librem Key into the mix. This post details what I found hiding just below the surface of OpenSSH and OpenSC to let one maintain all necessary secrets on a SmartCard and still use OpenSSH Certificates for login.

First find your pkcs11 module (on Linux)

$ PKCS11_MODULE=$(pkcs11-tool | sed -n 's|.*(default:\(.*\))|\1|p;')
$ echo "${PKCS11_MODULE}"

Using the PKCS#15 interface, export the public parts of the signing and authentication keys. In the case of my Librem Key, the relevant IDs are 01 and 03:

Signature Pubkey:

$ pkcs15-tool --read-ssh-key 01 -o sig_key.pub
Using reader with a card: Purism, SPC Librem Key (000000000000000000009BB4) 00 00

$ cat sig_key.pub
ssh-rsa AAAAB3N...DVUhQ== Signature key

Authentication Pubkey:

$ pkcs15-tool --read-ssh-key 03 -o auth_key.pub
Using reader with a card: Purism, SPC Librem Key (000000000000000000009BB4) 00 00

$ cat auth_key.pub
ssh-rsa AAAAB3N...fuzQ== Authentication key

Now that both public keys are on disk, one can use the corresponding SmartCard-backed private keys in a normal OpenSSH Certificate flow. The first, and least intuitive, difference is in the call to ssh-keygen. When the CA private key is on disk, one passes its path after -s and that key is used as the Certificate Authority. In the SmartCard case, the path to the CA public key is given after -s and the path to the pkcs11 module is given after -D. An example follows, signing the exported authentication public key to create a certificate:

$ ssh-keygen -D ${PKCS11_MODULE} -s sig_key.pub -n ${USER} -I EXAMPLE_CERT -z $(date +%s) -V +15m auth_key.pub
Enter PIN for 'OpenPGP card (User PIN (sig))': 
Signed user key id "EXAMPLE_CERT" serial 1662515192 for amartinez valid from 2022-09-06T20:45:00 to 2022-09-06T21:01:32

$ ssh-keygen -Lf auth_key-cert.pub
        Type: user certificate
        Public key: RSA-CERT SHA256:V2KMVlJjPOn86z6a2srEcnMQj78OujEXJ597PJ6+wyY
        Signing CA: RSA SHA256:HoXa4G9gmsln+8gOUPEeKNmcCA0cppiUlmUuEjt8joA (using rsa-sha2-512)
        Key ID: "EXAMPLE_CERT"
        Serial: 1662515192
        Valid: from 2022-09-06T20:45:00 to 2022-09-06T21:01:32
        Critical Options: (none)

Using the generated cert with a private key on the SmartCard requires that the target host be configured to trust the key used as a CA above. An earlier post details how such a configuration can be accomplished. Assuming your target host is properly configured, all that is left is telling ssh where to find the pkcs11 module (-I) and which certificate to use (-o CertificateFile=...). Another example follows doing just that:

$ ssh -I ${PKCS11_MODULE} -o CertificateFile=auth_key-cert.pub beaglebone
Enter PIN for 'OpenPGP card (User PIN)': 

Last login: Wed Sep  7 00:25:47 2022 from ...
amartinez@beaglebone:~$ sudo grep -i cert /var/log/auth.log | tail -3 | grep -v grep
Sep  7 01:52:21 beaglebone sshd[1344]: Accepted publickey for amartinez from ... port 40728 ssh2: RSA-CERT SHA256:V2KMVlJjPOn86z6a2srEcnMQj78OujEXJ597PJ6+wyY ID EXAMPLE_CERT (serial 1662515192) CA RSA SHA256:HoXa4G9gmsln+8gOUPEeKNmcCA0cppiUlmUuEjt8joA

The scenario used in these examples was exceedingly simple; in production one should use an entirely separate SmartCard (or other pkcs11 device, like a TPM) to act as the CA. If, like me, you tend to use your SmartCard with gpg rather than pcscd, note that the certificate works just fine without the -I option to ssh, provided the associated key is available through gpg-agent serving as your ssh-agent. Signing, as far as I can tell, does require first stopping the local user's gpg-agent and then starting pcscd globally to use pkcs11. As a final note, the above works exactly the same way on Windows, with the exception of the format of the paths given.

Cross-Compiling with Debian Multiarch

About a year ago, I created a project to keep notes on how I cross-compiled one of my Rust crates for some legacy systems on both ARMv7 and i686. At the time it was sufficient to statically link the entire C-runtime into these binaries. I left it at that and continued along my merry way without a care in the world for cross-compilation of any higher complexity.

Fast-forward to a few weeks ago, and I again needed to cross-compile a Rust binary at work for an ARMv7 target with a dependency on the target system's libpcap. Time and time again my attempts, based on my original experiences with cross-compiling Rust for ARMv7, were met with frustrating failure. The linker could not find -lpcap.

Searching the vast series of tubes brought no direct joy; most guides were missing critical steps. A distant memory, however, reminded me of dpkg --add-architecture. This was the magic I sought, and if you seek to do similar work the finer points follow:

  1. If you have dependencies on libraries that differ between architectures, as many do, then a Debian-based system may help you get the headers your cross-build toolchain needs.
  2. Use dpkg --add-architecture <TARGET_ARCH> to add your target architecture
  3. Run apt update to get the list of available packages for that target.
  4. Once you know which libraries you depend on, make note of the specific version present in your release. libpcap-dev itself has no armhf installation candidate, but apt search libpcap-dev will show you that libpcap0.8-dev (in Debian Stretch) is the package you really want and it does have an installation candidate.
  5. Continuing the libpcap-dev example, run apt -y install libpcap0.8-dev:armhf to install development headers specific to the armhf architecture.
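Put together, the finer points above look something like this on a Debian build host. The armhf target and Stretch-era libpcap0.8-dev come from my case; the cross toolchain and Rust target triple are what I would expect for ARMv7, so adjust to taste:

```
# run with root privileges on a Debian-based build host
$ dpkg --add-architecture armhf
$ apt update

# cross toolchain plus target-architecture headers and libraries
$ apt -y install gcc-arm-linux-gnueabihf libpcap0.8-dev:armhf

# point cargo at the cross linker, then build for the target triple
$ export CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_LINKER=arm-linux-gnueabihf-gcc
$ rustup target add armv7-unknown-linux-gnueabihf
$ cargo build --release --target armv7-unknown-linux-gnueabihf
```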

Doing this has saved quite a bit of time as I can now compile on a much more powerful system than the embedded industrial PC that is my actual target (or the Beaglebone Black that has the same processor).

SSH at Scale - Revisited

The final note in my series on secure operation of SSH at scale will be brief:

Make sure to pay attention to MaxStartups.

Setting this too high will likely cause major performance issues as the CPUs on your servers peg, and stay pegged. Setting it too low will negatively impact the systems trying to connect to your server. The setting itself controls how many connections can be in a "startup" state, i.e. prior to having completed authentication. Be sure to consider all expected use that sshd may answer, including client probes that merely verify the server is up. If such probes are driving a need to increase MaxStartups, try running a separate service specifically to handle them, deconflicting ports as needed.
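For reference, MaxStartups also takes a three-number form that enables random early drop rather than a hard cutoff. The values below are illustrative, not a recommendation:

```
# sshd_config fragment (illustrative values)
# Begin refusing 30% of new unauthenticated connections once 10 are
# pending, scaling linearly to 100% refused at 100 pending.
MaxStartups 10:30:100
```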

Mobile Linux - Update

TL;DR - Librem 5 USA is in, PinePhone Pro is out.

Having now had both devices side by side for several weeks, the time has come to make some decisions about what I continue using. While I have a vast array of tools aimed at a growing set of tasks, I do try to keep to using the best tool for the job. This often means that as I upgrade, or in some cases simplify, there comes a time at which I need to let go of one tool in favor of another.

In this case, I will send the PinePhone Pro to a friend who helped me in the early days of my Linux use. That will put the device into the hands of someone else who can work with upstream projects to advance the state of mobile device support in Linux. For myself, I will continue to work towards improvements through the Librem 5 USA on PureOS.

Over the last few weeks, I have already found a number of cases where it has been the smarter play to dock my phone and get something done rather than boot my Librem 14. Both, after all, run the same software. Since my last post I have also worked through a number of setup and maintenance tasks on the phone.

Beyond that set of tasks, I have also started looking at writing simple GTK4 applications in Rust. Eventually, I intend to create one adaptive application that serves as a landing page of buttons for simple tasks that themselves have no graphical interface. It has been quite the learning experience, as I had never even attempted to write a GUI application before.

Mobile Linux

Yesterday, I finally received the Librem 5 USA, quite a long while after my original Librem 5 order (and upgrade). Now I have two phones that run mobile Linux of one flavor or another. As both use the Phosh environment, using either is more or less the same, with the key differences coming from the hardware itself. I hope to use some of my free time to identify the core functional differences and assist the developers in bridging the gaps. With both Mobian and PureOS being based on Debian, and a fair deal of cross-pollination in the developer base, I think there is a reasonable chance both devices obtain daily-driver status this year. Neither is quite there, in my opinion, but neither is immensely far off either.

Right now, only the Librem 5 has functional convergence with the ThinkPad USB-C Dock Gen 2 that I have been using for a while. It works perfectly with my Librem 14, so it is not hugely surprising that it works with the Librem 5 as well. The hub functions work just fine on my PinePhone Pro, but I get no video output at all and the monitor does not appear to be detected. On the Librem 5 I do have to do a bit of a song and dance to get a usable display. While driving my 4K panel at 60Hz is possible from my Librem 14, it is not from my Librem 5. Using it at 4K30 is technically possible, but the device is not usable at that resolution due to extreme mouse and keyboard lag. Use at 1440p is not much better, and video playback is choppy. If I drop all the way down to 1080p, all is pretty well and good.

The smartcard slot might be my favorite feature of the device so far. I already have two other OpenPGP smart cards, and have used one of them via a USB-C adapter in my PinePhone Pro, but having one built right into the Librem 5 is a nice touch. For most people this is probably irrelevant, but I do enjoy being able to safely and securely handle authentication, signing, and encryption tasks right from a device that is slightly easier to carry than a laptop. The Librem 5 is heavy.

As time marches on, I intend to develop some applications I can use across my fleet of Linux computers that allow me to easily do repetitive tasks. These will likely be Rust/GTK4. Maybe they will even be configurable enough to have general utility for anyone who feels like using them. Time will tell.

PinePhone Pro - Update

The last few weeks have been... busy. Some of what has been keeping me occupied has been time spent giving various tasks a shot on my PinePhone Pro. To start with, I was running postmarketOS with the phosh UI. Given enough encouragement, I could do things like send and receive SMS, MMS, and even phone calls. It was the encouragement that troubled me. The (understandable) choice to base the OS on Alpine Linux was not, in my view, helping me arrive at solutions any faster.

As one does when having run a variety of Linux flavors over the decades, I elected to try another mobile-focused distro: Mobian. There was nearly a chorus of angels when I booted and immediately had functional calling without having to offer any encouragement to my sound settings. Configuring MMS to work properly took very little effort, and did not require a reboot once I figured out how to make a recent NetworkManager bug stop ruining my IPv6 life. While some elected to roll back that particular package, I elected to simply disable IPv6 entirely on my mobile connection.

Convergence, or the ability to plug the device into a USB-C dock and use it like a regular computer just like I do with my laptop, does not yet work. There are ongoing efforts within a pretty spectacular libre software community, and while you definitely do not want me writing your kernel, I can certainly break it in exciting new ways and provide detailed feedback on what happened. To that end, I have been much more active in various development channels to help with testing and review of pull requests. Seemed like a good place to start!

PinePhone Pro

For a long while, I have wished for a pocketable Linux computer that itself had the power to accomplish some meaningful tasks. Being in the form factor of a phone has never been a real priority for me, and in the past I have accomplished this task reasonably well with BeagleBone Black and Raspberry Pi based devices. Adding mobile data to these devices is, of course, possible but I would not go as far as to call it enjoyable. This is where the interest in mainline Linux phones was born, for me.

After ordering, and then never receiving, a Librem 5 (even after upgrading to the Librem 5 USA), I saw an interesting press release from Pine64 announcing the Explorer Edition of their newly released PinePhone Pro. Given that this device boasts more power, an LTE module with which I am already intimately familiar, and just so happens to cost half what the Librem 5 does, I decided to click order. The device arrived, and out of the box it was able to handle most of my use cases.

There are teething problems, but since I actually have the device in hand I can be part of the solution to these problems.

PinePhone Pro Box

PinePhone Pro SSH

PinePhone Pro Web

PureBoot's Missing Feature


Purism's PureBoot offering on their products misses a critical feature:

Inspection and validation of the EC firmware.

Purism claims this is not a vulnerability, but rather a feature not yet implemented. According to Purism's CSO, it is on a roadmap without dates that could be shared.

Read the full report below.

As Reported to Purism

The Full Report
Lack of EC validation risks undetected hardware backdoor in the Purism EC


The discussion around this vulnerability started when Anthony J. Martinez
forked the Librem EC project and created a branch to add numpad
functionality. Upon successfully building, flashing, and using the newly
crafted EC firmware both Anthony and John Marrett were concerned by the lack
of notification or warning from PureBoot.

The integration of the keyboard firmware into the EC code seemed a promising
avenue for attacks. The keyboard firmware being based on QMK made it easy to
explore the code base and look at prior art.

Considering the possibilities, we identified two promising avenues of attack:

 - Command injection where a common keypress is used to launch an attack
 - A keylogging attack

Command Injection

The QMKhuehuebr [1] project on GitHub implements an attack where, when the
user presses Super+L (the default lock key on PureOS), a sequence of commands is
executed before the lock.

This attack could be used to fetch a shell script or binary from a web
source and run it to gain persistent access to the user account.


Keylogging

Making use of the Dynamic Macro [2] feature of QMK, an attacker could program
the firmware to record the first characters typed after boot. This would
include disk and OS passphrases entered during boot up. This information
could easily be persisted in RAM as long as the PC was powered on, possibly
including standby. More concerted development could allow the attacker to
store the recorded keystrokes in the EC firmware space, though this would be
more complex to implement.

The recorded keystrokes could be recovered by the attacker through physical
access to the device, or transmitted to a remote server by combining the
keylogging attack with the command injection attack described above.


It's not clear to us how this attack can be prevented. The best method would
be for the EC to be validated before being loaded or by PureBoot; however, we
are not certain that this can be done. Our understanding is that the EC
handles boot processes prior to these systems being initialized.

There are a few important questions in this space as well:

Can this attack be stopped if the user makes use of the Librem [3] key? We can
still flash the EC, though it would require disassembling the laptop to
access the EC flash chip as discussed in this blog post [4] on EC booting

Based on Purism response: The Librem Key will not prevent EC access.

Can you prevent booting from USB on the Librem 14? Does the Librem default,
when purchased, to a configuration requiring a password to boot from USB?

Based on Purism response: Will not be implemented. Restricts user freedom.

It will be possible to disable EC writing using DIP switches [5], which
would protect against a remote attacker compromising the firmware. We
question the effectiveness of this as a means of protection, as this
class of vulnerability is best suited to an evil-maid type of attack.

Based on Purism response: Being implemented, with no clear timeline.

Proof of Concept

We have not developed a proof of concept for this attack, but we may do so
at a later point in time.

Vendor Response

Purism is working on enabling the DIP switches to lock EC writing and suggests
the use of glitter nail polish [6] to make tampering evident. They state that
this issue is common to the entire industry and not specific to their product
line. They also prioritize user freedom over solutions that would lock the
user out of the EC.

Purism has declined to request a CVE to identify this vulnerability, but does
acknowledge that an actual vulnerability exists.


From our perspective:

 - Modification of the EC will allow an attacker to subvert the mechanisms
   that follow it, including PureBoot functionality
 - Requiring a password to boot from USB would increase the difficulty and
   evidence of this attack at low cost
 - The use of the DIP switches may be the only technically feasible solution
   that is possible at this time
 - Respecting user freedom is extremely important
 - Cryptographic validation of the EC, ideally allowing the user to sign their
   own firmware, is the only completely effective solution to confirm that the
   security of the device has not been compromised


Timeline

2021-12-05  Initial communication of vulnerability to Purism security contact
2021-12-06  Initial response from Purism team, discussion follows, all delays
            on researcher side
2021-12-20  Final response from Purism team
2021-12-27  Revision of vulnerability write up based on discussions


Personal Note

It is no secret that I have been a vocal supporter of Purism and their efforts to bring Open and Secure systems to the masses. This often comes at a price premium which is understood to actively support the high quality development of libre software and systems. Omitting, in design, a check so critical while simultaneously posting with great frequency about the extreme level of security offered does not sit well with me. Attempts to contribute resolutions to other identified gaps have gone nowhere beyond making this vulnerability clear. Then there is the Librem 5 debacle, which further erodes confidence in the company itself. As my hardware already in hand works, I will not turn it into eWaste. I just cannot, in good faith, add to it or suggest it to anyone else.

SSH At Scale With OpenSSH Certificates - Final points

This is the third post in a series on using OpenSSH Certificates to secure access to large numbers of similar devices. The first post can be found here, and the second can be found here.

The practical example left the means of scaling to the imagination, but there is one thing that is not obvious without looking at the source code of ssh-keygen itself:

A certificate may not have more than 255 principals

To use certificates at scale, where we will assume on the order of 2^16 or more targets, one needs to split a target queue into chunks of no more than 255 devices. Once split into these chunks, it is rather simple to fetch certificates granting access to the devices in each chunk and map them to their corresponding target lists. Simple set and dictionary objects can handle this well within Python, for example, and can be used to feed a worker pool. The AsyncSSH library in Python supports certificates exceptionally well, and would make a good basis from which one could build both an SSH CA and client tooling capable of highly concurrent and secure access to a very large number of similar targets.
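The chunking itself needs nothing fancy. A minimal sketch with coreutils, where the file names are hypothetical and the 600-device inventory is simulated:

```shell
#!/bin/sh
# Sketch: split a large device inventory into certificate-sized pieces,
# since OpenSSH allows at most 255 principals per certificate.
set -e
mkdir -p chunks-demo

# Stand-in for a real inventory: 600 fake device IDs, one per line.
seq -f "device_%g" 600 > chunks-demo/targets.txt

# Chunk into files of at most 255 lines (chunk_aa, chunk_ab, chunk_ac).
split -l 255 chunks-demo/targets.txt chunks-demo/chunk_

# Build one comma-joined principal list per chunk, ready to hand to
# `ssh-keygen -n` when requesting that chunk's certificate.
for f in chunks-demo/chunk_??; do
    paste -sd, "$f" > "$f.principals"
done
```

Each resulting `.principals` file maps directly back to its chunk's target list, which is all a worker pool needs.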

It is worth noting that when using certificates and eliminating password usage, an emergency hatch is necessary. My example used a single CA key, and the loss (or compromise) of that key would be Very Bad News™. One should not depend on a single point of failure, so consider a rotational scheme where your devices know up front about a possible set of keys. Perhaps, if you are in an embedded environment, your base image contains one common CA that will always be available in an emergency but is hopefully never needed in production. Such a key should ideally be stored away in an airgapped HSM with strict, and audited, access policies governing its use. Another set of CA keys may be defined during device provisioning, and could correspond to keys available from a certificate service available over some network connection.

Finally, when using certificates in the ways discussed in this series, client keys can be ephemeral. The certificate authority grants, to any public key it signs, the power to access the systems that trust it. If this is combined securely with an external auth provider trusted by the CA, then any client tooling created can utilize per-job key material that is itself never exported to disk. When validity periods are kept to a minimum, this greatly reduces the potential for abuse and narrows the window of opportunity for attacks.
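A minimal sketch of the ephemeral pattern using nothing but ssh-keygen; the paths, the principal, and the five-minute window are illustrative, and a real CA key would sit behind an HSM or signing service rather than on disk:

```shell
#!/bin/sh
set -e
mkdir -p ephemeral-demo

# Throwaway CA for the sketch only.
ssh-keygen -q -t ed25519 -N '' -f ephemeral-demo/ca

# Fresh per-job client key that never needs to outlive the job.
ssh-keygen -q -t ed25519 -N '' -f ephemeral-demo/job

# Certificate scoped to a single principal and valid for five minutes.
ssh-keygen -q -s ephemeral-demo/ca -I ephemeral-job -n device_id1 \
    -V +5m ephemeral-demo/job.pub

# Inspect the minted certificate.
ssh-keygen -Lf ephemeral-demo/job-cert.pub
```

Once the job completes, the key and certificate can simply be deleted; nothing about them needs to be revoked server-side.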

SSH At Scale With OpenSSH Certificates - Practical Example

As promised in my last post, here is an example setup showing how one might use machine-specific data to shape SSH access with OpenSSH Certificates. To avoid too much irritation on my local system, I created a simple test container using podman and the following Dockerfile:

# syntax=docker/dockerfile:1

# Alpine base image, restored here since apk is used below
FROM alpine:3

RUN apk --no-cache add openssh-server bash && \
    adduser -s /bin/bash -D -u 1001 demo && \
    echo "demo:$(dd if=/dev/urandom bs=1 count=32 2>/dev/null | base64)" | chpasswd -c sha512 && \
    mkdir -p /opt/ssh/config

WORKDIR /opt/ssh

CMD ["/bin/bash", "/opt/ssh/config/run"]

The content of /opt/ssh/config/run is as follows:


#!/bin/bash

set -e

HOST_KEY=/opt/ssh/config/ssh_host_ecdsa_key
CONF=/opt/ssh/config/sshd_config

if [ ! -e "${HOST_KEY}" ]; then
   ssh-keygen -t ecdsa -b 256 -N '' -q -f "${HOST_KEY}"
fi

/usr/sbin/sshd -D -e -h "${HOST_KEY}" -f "${CONF}"

The reference sshd_config is:

# Setting some core values that are helpful for use in a
# system using Certificates.

HostKey /opt/ssh/config/ssh_host_ecdsa_key
HostCertificate /opt/ssh/config/ssh_host_ecdsa_key-cert.pub

LoginGraceTime 10s
PermitRootLogin no
StrictModes yes
MaxAuthTries 3
MaxSessions 10

PasswordAuthentication no
PubkeyAuthentication yes

AuthorizedKeysFile	 none

TrustedUserCAKeys /opt/ssh/config/ssh_ca_keys
AuthorizedPrincipalsCommand /opt/ssh/config/auth_principals %u
AuthorizedPrincipalsCommandUser nobody

# override default of no subsystems
Subsystem	sftp	/usr/lib/ssh/sftp-server

The magic happens in auth_principals:


#!/bin/bash

set -e

case "${1}" in
	demo)
		echo "${HOSTNAME}-demo"
		;;
esac

While auth_principals is a fairly trivial Bash example, the key points are that:

  1. AuthorizedPrincipalsCommand needs to return a string matching one of the principals encoded on the presented certificate when a user tries to log in with a given username

  2. This can call upon anything the machine knows about itself and can programmatically access. The use of HOSTNAME is just an example. As an administrator you can do as you like. Be creative!

Running the Example

After building the test container, fire it up:

$ podman run --rm -it --hostname=$(openssl rand -hex 8) -p 9022:22 -v ./config:/opt/ssh/config:Z ssh-cert-example
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.


Note the volume mount of a config directory which itself contains:

  1. The scripts, and configs, shown above: sshd_config, run, and auth_principals
  2. The HostKey and HostCertificate
  3. A hello script that I will set as the ForceCommand on a sample user certificate
  4. An ssh_ca_keys file, referenced in sshd_config, containing the public key of the SSH key used as the CA signing key.

Given that I created the container with a random HOSTNAME value, the container needs a little inspecting before I can proceed:

$ podman inspect strange_heisenberg | jq '.[0].Config.Hostname'
"cac0e8fd0d329d7e"

Minting a Certificate

From the information above we know that a user, demo, can access the system by presenting an OpenSSH Certificate with a principal matching ${HOSTNAME}-demo. Given that the HOSTNAME variable expands to cac0e8fd0d329d7e, let us sign a certificate accordingly:

$ ssh-keygen \
	-I "" \
	-V $(date +%Y%m%d%H%M%S):$(date --date="+15 minutes" +%Y%m%d%H%M%S) \
	-z $(date +%s) \
	-n cac0e8fd0d329d7e-demo \
	-O force-command=/opt/ssh/config/hello \
	-s ../ssh_ca \
	demo.pub
Signed user key id "" serial 1642122125 for cac0e8fd0d329d7e-demo valid from 2022-01-13T19:02:05 to 2022-01-13T19:17:05

Checking the contents:

$ ssh-keygen -Lf demo-cert.pub
        Type: user certificate
        Public key: ECDSA-CERT SHA256:qfxed1FR8kXtXMXBWTjEjwPLjBKWz0nKbthaFGGVO/E
        Signing CA: ECDSA SHA256:Tb4fK9xMEtZRnxHlXsvXaPoPj1A8vtxNXvWkb1Wpju8 (using ecdsa-sha2-nistp384)
        Key ID: ""
        Serial: 1642122125
        Valid: from 2022-01-13T19:02:05 to 2022-01-13T19:17:05
        Critical Options: 
                force-command /opt/ssh/config/hello

Logging In

The easiest part of all, logging in as a client:

$ ssh -i demo -o CertificateFile=demo-cert.pub -p 9022 demo@localhost
Welcome to cac0e8fd0d329d7e! OpenSSH Certificates are cool huh?
Shared connection to localhost closed.

What trickery is this? No prompt to accept a random key fingerprint into my known_hosts?? Surely you jest!? No, I just added the CA public key to ~/.ssh/known_hosts as a @cert-authority entry:

@cert-authority [localhost]:9022,[::1]:9022 ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMGDesyChnteRlL3/fkcFUQk+qDuL5dnbFPeT8oejuaDOv4UT3yLU/2bXJZlEjbknztORXuy3ViqCBQskqPkfPglyv0Uqpn4VhRbh9j1fK6MzcPg50OWDw1hioCohazx7w==

Checking Server Access

In the world of distributed SSH access without certificate use, and with an industry worst-practice of shared accounts with shared credentials, no one ever has any clue who logged in as demo. Maybe it was someone authorized to do so. Maybe it was someone who left an organization a decade ago.

The output from my test container, for each access, looks like this:

Accepted publickey for demo from port 53930 ssh2: ECDSA-CERT SHA256:qfxed1FR8kXtXMXBWTjEjwPLjBKWz0nKbthaFGGVO/E ID (serial 1642122125) CA ECDSA SHA256:Tb4fK9xMEtZRnxHlXsvXaPoPj1A8vtxNXvWkb1Wpju8
Received disconnect from port 53930:11: disconnected by user
Disconnected from user demo port 53930
Accepted publickey for demo from port 53934 ssh2: ECDSA-CERT SHA256:qfxed1FR8kXtXMXBWTjEjwPLjBKWz0nKbthaFGGVO/E ID (serial 1642122125) CA ECDSA SHA256:Tb4fK9xMEtZRnxHlXsvXaPoPj1A8vtxNXvWkb1Wpju8
Received disconnect from port 53934:11: disconnected by user
Disconnected from user demo port 53934

And after waiting for my 15-minute validity period to expire:

Certificate invalid: expired
maximum authentication attempts exceeded for demo from port 53940 ssh2 [preauth]
Disconnecting authenticating user demo port 53940: Too many authentication failures [preauth]


The building blocks for OpenSSH certificate use are simple and accessible to admins of all skill levels. Substantial benefits exist over the use of LDAP, authorized_keys, or shared credentials:

  1. Certificate auth is lightning fast
  2. One need only maintain TrustedUserCAKeys on servers
  3. One need only maintain @cert-authority entries, which can be scoped to hostnames, IPs, etc., on client systems
  4. Certificates are portable. If you have a system in an airgapped bunker, one can mint a certificate with a limited validity period attached to an ephemeral private key that will allow access to the system to someone physically present. Try that with LDAP.
  5. It is clear who accessed what and when.

While the examples given were simple and manually executed on a Bash shell, there are a number of ways one could build a highly-available (and secure) web service CA. Python and Rust both have appropriate libraries, and I am certain other languages do as well. With a little imagination, and a lot of attention to detail, you too can have secure SSH access that is both easy to deploy and easy to maintain.

There are a few limitations to be aware of, and I will cover these in another post.

SSH At Scale with OpenSSH Certificates

The Issue

Service and maintenance of widely deployed Linux based systems can be a challenging task. This often requires distributed global support personnel with varying levels of system access. A means of auditing when and where that access is used should be a strict requirement. In fleets of IoT devices where base configurations are common, but resources are limited, one might find a need to balance system simplicity and complex access models for support teams.

An Ideal Solution with OpenSSH Certificates

OpenSSH Certificates provide a means of gating access to Linux systems with extremely minimal overhead on either the client or server, and are supported in almost every version of OpenSSH released in the last decade. If your systems are not severely outdated, this solution can work for you.

Configuration for use of certificates is quite simple, and requires no more than an understanding of a few parameters in sshd_config:
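As a sketch, the parameters in question are the certificate-related ones; the paths below are illustrative:

```
# sshd_config fragment (paths are illustrative)

# File of CA public keys; certificates signed by any of them are trusted
TrustedUserCAKeys /etc/ssh/ssh_ca_keys

# Program that prints the principals a given user is allowed to present
AuthorizedPrincipalsCommand /etc/ssh/auth_principals %u
AuthorizedPrincipalsCommandUser nobody

# With certificates in play, per-user static key lists can be disabled
AuthorizedKeysFile none
```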

Example Flow
Example IdentityToken
	"user_id": "",
	"principals": ["device_id1", "device_id2", "device_idN"],
	"nbf": 1641016800,
	"exp": 1641017100
Example Certificate

A number of ways exist by which one might mint an OpenSSH Certificate, ssh-keygen included.

Assuming a CA is running some process that accepts JWTs, validates the signing JWK, and verifies claim fields against some input validation defined by organizational needs, the creation of a certificate for the IdentityToken shown above might look like:

ssh-keygen \
	-I ${CERT_ID} \
	-s ${CA_KEY_PATH} \
	-n device_id1,device_id2,device_idN \
	-z 12345678 \
	-V $(date --date=@1641016800 +%Y%m%d%H%M%S):$(date --date=@1641017100 +%Y%m%d%H%M%S) \
	${USER_PUBKEY_PATH}

Abstracted use case

The resulting certificate from the example above can then be returned to the user, who may use it to access any system that both trusts the CA and maps one of the encoded principals to the requested login name.

When such access occurs, the authorization logs will record the certificate's key ID and serial, tying the session to a specific user and certificate issuance.


OpenSSH versions from any non-deprecated distribution have supported certificate login for several years. A simple, and robust, solution exists for accessing distributed systems at scale. With some creativity, and a toolbox of open standards, one can provide secure and auditable access to systems over SSH. In a later post, I will share samples showing how one might configure clients and servers for OpenSSH Certificate use.

A Truly Functional Dock for the Librem 14

When I previously wrote about the joys of using a functional USB-C dock with the Librem 14, I spoke too soon. The dock I originally purchased, along with the power supply I bought with it, caused more trouble than they were worth: random shutdowns while on AC power, unreliable charging, and video output that would drop without warning.

As one might imagine, this drove me insane and I quickly stopped using that dock at all. Several months later, the issuance of a new work laptop (Lenovo X13) included the option to get a Lenovo Thinkpad USB-C Dock Gen 2.

This thing just works. In more than two weeks of use, my laptop has not randomly shut off on AC power. When I plug the USB-C cable into the right-side port, it starts charging if charging is needed. Every time. Video never randomly stops working either. Audio is great, the ethernet device can fully saturate my home network, and I have plenty of ports to use so the only thing plugged into the laptop itself is the USB-C cable.

If you do happen upon one of these docks, do note that updating its firmware requires a Windows PC that supports the dock. That requirement, for me, is satisfied by my work-issued X13. This will have to do until there is a more open option available that is also proven to work reliably with the Librem 14.

Fun with Emacs

Somewhere around a year ago I bailed on vi(m) as my primary editor. I did this after two decades of faithfully carrying the vi(m) torch in the holy flame wars of "no myyyy editor is better." For the first few weeks of GNU/Emacs use, I tried to use Emacs natively with its own keybindings. This was, to an old vi(m) user, maddening. I quickly found myself using EVIL mode, and proclaiming that Emacs is actually the best version of vi(m) in existence (and yes, I tried NeoVim and all the rest).

In a given week, I frequently find myself connected to remote hosts on which Emacs is not installed and this has left me with plenty of time to use my old friends from the vi family. With some twenty years of familiarity it is not like any of the keybindings or wizard-like motions have fled my mind. At some point, quite probably because it is often difficult to reason about which mode one might be in when accessing a system over a remote link best described as "glacial", I started to get pretty tired of smacking Esc all the time. I confess to frequently abusing sed -i when I know exactly what I want to change and where.

There is an easier way that I keep forgetting even exists: TRAMP.

This very post was written using TRAMP to:

  1. Connect to my server as my normal user
  2. Change to the user under which my site's process runs
  3. Create and edit the markdown from which the post is rendered
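Those three steps collapse into a single TRAMP multi-hop file name. A sketch, where the user names, host, and path are placeholders for my real values:

```
C-x C-f /ssh:me@myserver|sudo:siteuser@myserver:/var/site/posts/this-post.md
```

TRAMP opens the SSH connection as the first user, then hops to the site's user with sudo, all transparently behind a normal find-file.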

Some of the more advanced extensions I run in Emacs for use as a rather powerful Python and Rust IDE appear to conflict with TRAMP, but running in a minimal config is trivial and functional so I may not even mess with figuring out exactly where the failure is induced. Over the next few weeks I may mess with seeing how this works in my day to day job where I must frequently access remote hosts just to edit a text file or two. Doing it all from Emacs has some appeal.

Even with TRAMP, I still find myself annoyed with changing modes to do a number of things. EVIL was removed from my init.el and several modes are now less hampered by binding conflicts. While it's only been a day, I am enjoying it rather a lot. Adding a little enjoyment back to computing is worth it.

Adding GNU/Emacs Info to Debian-based Systems

For reasons explained here, the standard documentation of GNU/Emacs does not ship alongside the application itself in Debian-based systems without use of the non-free repositories. Those of us wishing to have the in-depth documentation that typically ships with GNU/Emacs will need to tell apt how to get it. The process is largely identical to that described in my previous post where I enabled bluetooth on my Librem 14. In fact, I only added one more Package stanza as follows:


Package: emacs-common-non-dfsg 
Pin: origin
Pin-Priority: 501

With this added, all that was left was an apt update and an apt install emacs-common-non-dfsg.

My GNU/Emacs now contains the documentation it was supposed to have to begin with.

Enabling Bluetooth on the Librem 14

As the Librem 14 ships with PureOS it lacks the non-free firmware required for the onboard Bluetooth to function out of the box. Fortunately, this is a trivial problem to solve. Purism does not control how you configure your hardware, you do.

The Process

Note: the following assumes you are running PureOS 10

  1. Create /etc/apt/sources.list.d/bullseye-nonfree.list with the following contents:
# Debian Bullseye non-free for firmware-atheros
deb bullseye non-free
deb-src bullseye non-free
  2. Create /etc/apt/preferences.d/bullseye-nonfree with the following contents:
Package: *
Pin: origin
Pin-Priority: 1

Package: firmware-atheros
Pin: origin
Pin-Priority: 501
  3. Update apt:
sudo apt update
  4. Install the non-free drivers:
sudo apt install firmware-atheros
  5. Toggle the hardware kill-switch for the WiFi/Bluetooth, and enjoy.

Life Below Gig

For the last two years or so, I have lived in The Netherlands and enjoyed gigabit downstream at home. Unfortunate family circumstances have me back home in Texas, and the step backwards, having lost some 80% of my downstream, is remarkable. Almost more alarming is the drop in upstream (6 Mbps vs 40 Mbps).

It's clear that when I return I will have to pay whatever extortion Xfinity requires as they're the monopoly power in my zip code. It's also clear that A Rust Site Engine likely needs some new features like pagination to make it easier to consume this site if you don't have super fast internet.

Software Projects @

For quite some time I have wondered how I would manage support for user input, contributions, or support in my various libre projects. With the recent addition of Spaces support to both Synapse and Element, I have created a space full of rooms for each of my projects here.

Individual rooms are as follows:

As I update these projects, I will use these rooms to make appropriate announcements. For now this will be a manual process, but later I may write a bot to manage these announcements for me.

Basic Tails Setup

The following mini-guide will take you down the path to a basic Tails install with one important extra feature: support for offline USB HSM use.


Getting Tails

Here you have two choices, which are well described here, but boil down to:

HSM Support

Once you have a base Tails install the rest is quite simple.

  1. Boot your new Tails USB
  2. Connect to Tor
  3. Hit Super and start typing "Configure persistent volume"
  4. Create your passphrase to encrypt the persistent storage volume
  5. Click the Create button
  6. When the feature list appears, enable "Additional Software"
  7. Reboot
  8. Unlock your persistent storage in the Welcome Screen
  9. Under "Additional Settings" on the Welcome Screen expand the options and choose "Administration Password"
  10. Connect to Tor
  11. Open a terminal and run sudo apt update && sudo apt --yes install opensc libengine-pkcs11-openssl
  12. Tails will update and ask if you want to persist this Additional Software. Tell it yes, you want the additional software available every time you unlock your Persistent Storage

At this point, if you reboot and unlock your persistent storage your Tails system will be able to use any USB HSM supported by OpenSC. Installation of software from the persistent storage does not require an administration password, and for added security it is probably best to avoid setting one unless your workflow requires administrative rights for some reason. After your software finishes installing from persistent storage you are ready to use your HSM directly with tools like:

Signing Example

# Here 20 is the key ID of a signing key on a Nitrokey HSM 2
amnesia@amnesia:~$ openssl dgst -engine pkcs11 -keyform e -sign 20 -out special.sig special.img
engine "pkcs11" set
Enter PKCS#11 token PIN for UserPIN (MY-MAGIC-KEY):

# And now to verify the resulting signature
amnesia@amnesia:~$ openssl dgst -engine pkcs11 -keyform e -verify 20 -signature special.sig special.img
engine "pkcs11" set
Verified OK
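One nice property of this scheme is that verification needs no HSM at all, only the exported public key. A self-contained sketch of the same sign/verify flow, using a throwaway RSA key on disk in place of the Nitrokey (file names mirror the example above):

```shell
# Throwaway keypair standing in for the on-card key
openssl genrsa -out priv.pem 2048
openssl rsa -in priv.pem -pubout -out pubkey.pem

# Sign (on the HSM this is the -engine pkcs11 step shown above)
echo "demo" > special.img
openssl dgst -sha256 -sign priv.pem -out special.sig special.img

# Verify anywhere, with just the public key
openssl dgst -sha256 -verify pubkey.pem -signature special.sig special.img
```

The last command prints "Verified OK" on success, exactly as in the HSM-backed example.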

Librem Key in Tails 4.21

While Purism has upstreamed their changes to Nitrokey libraries, those changes haven't trickled down to the masses quite yet. If, like me, you happen to have a Librem Key and also keep a Tails stick handy this little tip should let you use the two together at least as far as gpg is concerned.

  1. Boot Tails and set an admin password
  2. Create /etc/udev/rules.d/40-libremkey.rules containing
ATTR{idVendor}=="316d", ATTR{idProduct}=="4c4b", ENV{ID_SMARTCARD_READER}="1", ENV{ID_SMARTCARD_READER_DRIVER}="gnupg", GROUP+="plugdev", TAG+="uaccess"
  3. Run sudo udevadm control --reload-rules
  4. Plug in your Librem Key
  5. Verify it now shows up when you run gpg --card-status

CrossBuild Initial Release

CrossBuild provides a Dockerfile to build a container that supports cross compilation for ARMv7 and i686 targets from x86-64 hosts.

This project was created out of a need to have a single-shot process for building a few Rust projects with stripped binaries featuring static C-runtimes for:

While cross exists, and is far more full featured, the project lags behind in updating the base OS, GCC, and QEMU versions used (as of this writing).

An example exists that handles my specific use case for one project, and might serve as inspiration for modifications to suit more complicated needs.


Using podman since I do not have docker itself installed:

# Assuming you have cloned this repository and are in the repo
$ cd docker
$ podman build -t crossbuild:dev -f ./Dockerfile


Again using podman and assuming you're in the repository:

$ podman run --rm \
	-e REPO_URL="" \
	-e BIN_NAME="connchk" -v ./example/:/opt/build \
	crossbuild:dev /opt/build/

# ...

$ tree example
example
├── armv7-unknown-linux-gnueabihf_connchk
└── i686-unknown-linux-gnu_connchk

$ file example/*_connchk
example/armv7-unknown-linux-gnueabihf_connchk: ELF 32-bit LSB executable, ARM, EABI5 version 1 (GNU/Linux), statically linked, BuildID[sha1]=00811fa9637b6abf243fb707a8970b0cea43ba4f, for GNU/Linux 3.2.0, stripped
example/i686-unknown-linux-gnu_connchk:        ELF 32-bit LSB executable, Intel 80386, version 1 (GNU/Linux), statically linked, BuildID[sha1]=6e1d92bb01aed5b0adb673b1dfbe5fb28cf5da18, for GNU/Linux 3.2.0, stripped

Quick update on Librem Key Usage

In another post, I noted the steps I took to get my Librem Key working for my cryptography needs within Pure OS. This covers:

One thing I noted in the last post was that if you wanted to use the pkcs11 interface of the smartcard, it was necessary to kill off gpg-agent first. What I failed to notice myself was that on a normal boot the pkcs11 interface grabs the device first. Since I was not frequently booting to run straight into SSH, I was generally unplugging my Librem Key and only plugging it back in when I needed to SSH. This allowed gpg-agent to snag the device and make it seem like nothing was wrong to me.

Fast forward to today, when I'm using SSH immediately almost every time I start my machine. Since I don't wish to wear out the USB plug in short order, and since clearly software is to blame, I set forth on finding a solution:

sudo systemctl disable pcscd && sudo systemctl stop pcscd

There you have it: gpg-agent now reigns supreme over your smartcard, and you can go about your business just fine. Should you wish to use the pkcs11 interface, just stop your local gpg-agent and start pcscd:

systemctl --user stop gpg-agent && sudo systemctl start pcscd

Use the latest Rust

Given the risks associated with CVE-2021-29922 anyone using A Rust Site Engine should make sure to build with Rust 1.53 or later. Generally, ARSE should only be used behind a reverse proxy that mitigates the risks but safety is a short rustup update && cargo install arse away.
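The class of bug behind CVE-2021-29922 is octal-ambiguous IP address parsing: a leading zero makes different parsers disagree about which address a string names, which can defeat allow/deny lists. A quick Python illustration of the ambiguity itself:

```python
# "010" is 10 to a strict decimal parser but 8 to a C-style octal one,
# so "010.8.8.8" may be 10.8.8.8 to your filter and 8.8.8.8 on the wire.
octet = "010"
print(int(octet, 10))  # 10 (strict decimal)
print(int(octet, 8))   # 8  (octal interpretation)
```

The fixed Rust stdlib (and patched parsers in other languages) simply rejects leading zeros, removing the ambiguity entirely.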

Smooth Sailing on PureOS

After some eleven weeks of full time use of my Purism Librem 14 using Qubes OS, I have decided to give the native Pure OS a shot. While this will not provide the same degree of isolation I had in Qubes OS, the primary use cases for this machine do not necessarily require that degree of separation. When I do not wish to leave a trace, or otherwise find that the native attack surface is too great, I have a Tails stick I can boot.

As I write this, my laptop has been running Pure OS smoothly for two days. Almost the entirety of that time has passed while using the USB-C hub that was giving me fits in Qubes OS. In fact, right now I've got a USB-C SSD plugged in to send a Qubes backup across the ocean to my backup server at home. This very same operation failed somewhat regularly if I used the USB-C hub in Qubes, but has not been problematic yet in Pure OS.

Migrating my data from multiple qubes, into a single yet still "reasonably secure" Pure OS install was fairly simple thanks to Borg Backup. For those maybe curious about how such a thing can be accomplished, this is what I did:

With all of this done, I can use the system just as I was previously for management of my private network and for development purposes. While I am no longer using Split GPG and Split SSH, my private key material is not directly on the system and can be accessed only when my Librem Key is plugged in and unlocked. As I noted before, if one wishes to execute cryptographic operations using the pkcs11 interface it is still necessary to first stop gpg-agent. An alternative, however, is just to encrypt to yourself using gpg directly: gpg -se -r yourname <filename>.

So far, the only thing I have installed from a non-purism repository is syncthing. This actually exists in the default repositories, but the version is extremely stale so I added the developer's repository and stable branch to my sources.list.d and pinned the package to come from there in all cases. Long time users of Debian-based systems will not be surprised by this at all.

More on the Librem 14

While I still need to do a longer test, it seems many of the USB-C issues I have observed on my Librem 14 while running Qubes OS may be the fault of the OS. A few hours of operation using the USB-C hub in Pure OS, the OS that shipped with the laptop, did not show any of the problems I have had running in Qubes.

Given that I still have another laptop also running Qubes OS, and that I more often use it for security tasks, it might even make sense to just install Pure OS and run the Librem 14 that way as my primary system. Before I do that, I'll spend some more time running the OS as a live system off a USB stick.

The audio jack does not work in any OS, and that is a bit disappointing. Thankfully, my USB-C hub does have an audio jack that works and in Pure OS I can also pair my bluetooth headset. I am hopeful that future releases of the EC from Purism will address the audio jack.

USB-C Hub is Out

The USB-C PD hub I have been using with my Librem 14 and Lenovo X390 just does not want to behave when connected to the Librem 14. The device itself shuts down at random, and while I am curious to find out exactly why, that investigation is not going to be a priority for a while.

Running the Librem 14 on USB-C power directly, and using the onboard HDMI port for video output, has proven rock solid but when involving the hub things are a bit iffy. I do wish I had remembered to bring the barrel-connector power supply that came with the laptop along for the trip across the Atlantic. I do have a USB-C to DisplayPort cable, so perhaps I will see if I have any video instability issues over that port by itself.

A new theory... Librem 14 + USB-C Woes

Given that today has been a day full of compiling Buildroot on the Librem 14, and that my battery has been stable at 98% the entire time, it might just be the now-removed USB-C hub/dock causing my troubles. If it is not that, it could also be the very small amount of RAM given the sys-usb qube in my Qubes OS system.

One thing I have noticed is that sometimes the screen just shuts off while I'm using the dock, though it only ever does this on the Librem 14. This never happens on my work laptop, but my work laptop is also not running Qubes with sys-usb in play and limited to 300MB of RAM.

When my builds complete, I will bump the RAM up on my sys-usb qube and see if that makes any difference. My unusual choices bite me in the backside sometimes!

TLS Implementation Failures

By now we have all attempted to access a website in any modern browser and found ourselves reading a warning that proceeding is dangerous. These tend to pop up when one encounters self-signed certificates, which themselves are not inherently evil, rather than certificates issued by one of the many globally trusted root certificate authorities. Failures in TLS implementation are not necessarily due to the use of self-signed certificates, but could rest in a failure to add the signing certificate to the appropriate trust store after having verified the signer is who they say they are.

Everyone verifies certificates, right? Failing to do so extinguishes any real benefit of transport layer security, and exposes an extraordinarily large attack surface in the multitude of RESTful APIs and chat services that make the world of IoT tick. If, for whatever reason, your service does not mandate client certificates how safe can you be if you are not certain your clients are checking certificates? Since it requires more work to ignore certificate checking (examples below) surely no one is going the extra mile to do it wrong...

Unfortunately, ignoring certificate checks is fairly normal in some circles (looking at you, IoT) and if you want to know if a device on your network is guilty the process for finding out is trivial. This, of course, also means that a malicious attack is just as easy. So is preventing such attacks: always check certificates.

Are you curious if the brand new IoT widget you just received is Doing It Right™? By now we know every one of these devices is constantly phoning home to the mothership about your every move, but how can you check if this is done securely? Glad you asked!

If a picture is worth a thousand words...

No time to watch an ASCII Cast?

  1. bettercap to gather information on network hosts, and ARP spoof
  2. sslsplit to forge TLS certs on the fly
  3. An iptables pre-routing NAT rule to direct TLS traffic through sslsplit
  4. tshark to inspect the raw traffic, and anything intercepted by sslsplit
  5. Five minutes of your time

Final Thoughts

If the answer to "are you verifying certificates?" is no, then you are doing it wrong and putting both sides of your communications at risk. If you are a developer, and you do not know if you are checking certificates go take a look at your libraries and find out which extra options you need to use to disable checking. Search your source for these options. If you find them, file a bug and fix it. Immediately!
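As a concrete example of what to grep for, Python's ssl module makes the right thing the default and the wrong thing deliberately explicit. The sketch below shows exactly which knobs a guilty codebase will have touched:

```python
import ssl

# The default context verifies the certificate chain AND the hostname.
good = ssl.create_default_context()
assert good.verify_mode == ssl.CERT_REQUIRED
assert good.check_hostname is True

# Doing it wrong takes two explicit opt-outs - search your source for these.
bad = ssl.create_default_context()
bad.check_hostname = False        # first knob turned off
bad.verify_mode = ssl.CERT_NONE   # second knob turned off
```

Equivalent opt-outs exist in most HTTP libraries (verify=False, -k/--insecure, and friends); any of them appearing in production code is the bug to file.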

Librem 14 USB-C Charging in Qubes OS - During Qubes Backup

While still plugged in, and fully charged, I kicked off a Qubes Backup. In my last post, I noted how horribly inefficient that process is so I thought it might reveal something. Clearly, the first few minutes do actually decrease the charge of my battery even while plugged in. Fortunately, it appears that with two threads at work the charging profile is able to keep the decrease to a minimum.

For my next test, I think I will do something like run a huge DispVM and compile an extra bloated Linux kernel.

Librem 14 USB-C Charging in Qubes OS - Baseline

Sometime either last week, or the week before, my Purism Librem 14 gave me a bit of a scare. While transferring a large amount of data across the internet, and plugged in, it just died. It was fairly hot here in The Netherlands, and in the loft my office is in it was even hotter. My initial belief, or really fear, was that the brand new laptop was dead. Such is my luck in most cases, and after more than a little time in combat I am conditioned to expect the worst. Heat kills, and it would not be the first time I have seen it happen.

Fortunately, the device itself was not dead as in brick but was dead as in 0% charge remaining. Even though the USB-C power supply I am using, and the USB-C hub I was powering through at the time can handle plenty of wattage, it appears that the power requested was not greater than or equal to the power consumed. As our dear friend Newton explained quite a long time ago there ain't no such thing as a free lunch. I ran out of juice. Charging at 10W might have something to do with it:

Now the above is just my first sample of the charging profile, and the data were taken while the laptop was mostly idle. The data were collected the same way I did it before, parsed with our friend Python, and plotted with Bokeh.

For the next test(s), I will throw some load at the system and see what I can determine from the battery stats. What I am looking for now is whether or not I see the indication of charging whilst the battery energy continues to decrease minute over minute at full load. What is most interesting to me so far is that I have in fact been able to use the full power of the system for hours on end, and it appears that my decision to execute a backup in Qubes (itself an extraordinarily inefficient process) with low battery life may have had some impact on the situation. It should not have mattered, but what should be and what is are rarely unified in practice.

On the upside, and as a topic to detail in another post, the scare did provide the necessary push towards actually configuring Borg Backup scripts for my most critical VMs. Now, in addition to my full VM backups there are also de-duplicated and encrypted Borg backups of those VMs that I can run in seconds to my backup server back in the US.

The Python responsible for the chart above
from glob import glob

import pandas as pd

from bokeh.plotting import figure
from bokeh.embed import autoload_static
from bokeh.resources import CDN

samples = glob("*.txt")

charge_profile = { "energy": [], "rate": [] }

for sample in samples:
    with open(sample, 'r') as f:
        for line in f:
            # upower-style samples, e.g. "    energy:  41.2 Wh"
            if "energy:" in line:
                charge_profile["energy"].append(float(line.split()[1]))
            if "energy-rate:" in line:
                charge_profile["rate"].append(float(line.split()[1]))

df = pd.DataFrame(charge_profile)

p = figure(x_axis_label="Time (m)", y_axis_label="Wh (Navy), W (Firebrick)", title="Librem 14 Charging Profile")

p.multi_line([df.index, df.index], [df.energy, df.rate], color=["navy", "firebrick"])

js, tag = autoload_static(p, CDN, '/tech/ext/librem-14-charging.js')

# Then write the js and tag out...

Librem 14 + USB-C PD Hub in Qubes OS

A little more than a week ago, I picked up a few USB-C items to try out with my Librem 14 laptop...

Librem 14 displaying 4K video over a VAVA VA-UC020 Hub

The BatPower P120B works great for powering both my Librem 14, and my work laptop. The two USB-A power ports are great as I've now freed up space on my power strip that was previously occupied by a dual-port USB-A charger. It has also lightened my load on the road, as I no longer require a laptop charger and USB chargers for other devices.

All of the features of the VAVA UC020 are working in Qubes OS. Both my USB keyboard and mouse are plugged into my 4K monitor, which is connected to one of the VAVA's USB-A ports and its HDMI port. Early in the boot process, the panel came alive and has stayed that way ever since. As soon as I plugged in my wired headset, the audio device was available for assignment.

Switching between my work laptop and the Librem 14 is now as easy as swapping one cable!

SmartCards and Fedora

Attempting to use my second Librem Key with Fedora presented some challenges in dealing with pcscd. The root cause is that polkit does not allow normal users access to pcsc or the smartcard itself. This can be resolved with a single rule:

In /etc/polkit-1/rules.d/42-pcsc.rules:

polkit.addRule(function(action, subject) {
  if ((action.id == "org.debian.pcsc-lite.access_pcsc" ||
       action.id == "org.debian.pcsc-lite.access_card") &&
      subject.isInGroup("wheel")) {
    return polkit.Result.YES;
  }
});
For the subject.isInGroup condition, I used the group wheel as I am the only member of that group on the system in question. Use your own discretion here, or use an even more specific condition to allow only one user, like subject.user == "foo".

Additional Points

While this does allow access through pkcs11 and pkcs15 tools or gpg, I have not yet found the magic potion that will allow me to use both. Whichever tools are used first have a monopoly on the device. That said, on a modern Linux distro just using pkcs11 ought to do the trick.

Update: 2021-06-18

You can simply kill gpg-agent if you wish to use the pkcs11 interface after gpg takes a greedy lock on the device.


Use -engine pkcs11 with openssl subcommands that support it:

openssl rsautl -engine pkcs11 -keyform e -inkey <KEY_ID> -encrypt -in <INPUT> -out <OUTPUT>


Use "pkcs11:id=%<KEY_ID>?pin-value=<PIN>" as the identity file argument for ssh either on the command line, or in an ssh_config file. You will likely wish to get the PIN value itself from somewhere so it's not just in plaintext in your history:

ssh -i "pkcs11:id=%03?pin-value=123456" user@host

Or in an ssh_config file:

Host host
  IdentityFile "pkcs11:id=%03?pin-value=123456"
  User user

Adding SSH Agent Support to Split GPG

Split GPG is a very cool feature of Qubes OS but it leaves out one critical feature: enabling SSH support so the GPG backend qube can make use of an authentication subkey. There are a few different ways to solve this, and this guide provided some of the inspiration for what follows.

The Landscape

Here are the requirements for what follows:

Qubes RPC Policy

The first step is to configure an appropriate Qubes RPC Policy. A basic, and generally sane option, is to use a default configuration that asks the user to approve all requests and allows any qube to target any other qube with such a request. In my own configuration there are explicit allow rules for specific qubes where I use SSH frequently for admin purposes.

In dom0 create /etc/qubes-rpc/policy/qubes.SshAgent:

admin personal-gpg allow
@anyvm @anyvm ask

Actions in the Split GPG VM

The following actions all take place in the qube configured to act as the GPG backend for a Split GPG configuration.

Enable SSH support for gpg-agent:

$ echo "enable-ssh-support" >> /home/user/.gnupg/gpg-agent.conf

Update .bash_profile to use the gpg-agent socket as SSH_AUTH_SOCK by appending:

if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]; then
	export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
fi
export GPG_TTY=$(tty)
gpg-connect-agent updatestartuptty /bye >/dev/null

Create /rw/config/qubes.SshAgent with the following content, and make it executable:

#!/bin/sh
# Qubes Split SSH Script

# Notification for requests
notify-send "[`qubesdb-read /name`] SSH Agent access from: $QREXEC_REMOTE_DOMAIN"

# SSH connection
socat - "UNIX-CONNECT:$(gpgconf --list-dirs agent-ssh-socket)"

Update /rw/config/rc.local appending the following:

ln -s /rw/config/qubes.SshAgent /etc/qubes-rpc/qubes.SshAgent

Sourcing .bash_profile and /rw/config/rc.local should put the qube in a state where a GPG authentication subkey, if one exists, is served by ssh-agent:

Example from my system:

[user@personal-gpg ~]$ ssh-add -l
4096 SHA256:V2KMVlJjPOn86z6a2srEcnMQj78OujEXJ597PJ6+wyY (none) (RSA)

Template VM Modifications

For my tastes it made the most sense to make a systemd service available to all qubes using my f33-dev template, and then start that service from /rw/config/rc.local on qubes I want to use the new feature.

In the appropriate Template VM create a service file (the client side starts it as split-ssh, so /etc/systemd/system/split-ssh.service fits) similar to the following, but replace personal-gpg with the name of your Split GPG backend qube.


[Unit]
Description=Qubes Split SSH

[Service]
Environment="AGENT_SOCK=/run/user/1000/SSHAgent" "AGENT_VM=personal-gpg"
ExecStart=socat "UNIX-LISTEN:${AGENT_SOCK},fork" "EXEC:qrexec-client-vm ${AGENT_VM} qubes.SshAgent"


Once this has been added run the following, and shut the template qube down:

sudo systemctl daemon-reload

The Client Side

In the actual SSH client qubes, there are a few actions required to complete the loop.

Append the following to .bashrc - make sure this matches the AGENT_SOCK in your systemd service:

### Split SSH Config
export SSH_AUTH_SOCK="/run/user/1000/SSHAgent"

In /rw/config/rc.local append the following to start the service:

systemctl start split-ssh

Source .bashrc and /rw/config/rc.local and with the split GPG backend qube running test that your key is available:

[user@admin ~]$ ssh-add -l
4096 SHA256:V2KMVlJjPOn86z6a2srEcnMQj78OujEXJ597PJ6+wyY (none) (RSA)

Since my Qubes RPC policy allows the admin qubes to reach personal-gpg without my confirmation, a system notification appears stating:

[personal-gpg] SSH Agent access from: admin


With a few simple steps the power of Split GPG can be extended to include SSH Agent support. As a result, network-attached qubes used for administration of remote assets no longer directly store the private key material used for authentication and the attack surface is that much smaller. There are a few ways to get the pubkey to add to remote ~/.ssh/authorized_keys but the easiest way is probably ssh-add -L.

Librem 14, Librem Keys, and Qubes OS

With the arrival of my second Librem Key, I thought now would be a good time to go over how I use Qubes OS features along with some more products from Purism for various signing, encryption, and authentication tasks.

The Landscape

Here are the various components at play:

The base of all but one of my qubes is Fedora.

Security Device

Getting Started

The new Librem Key needs to have its PIN set, and since my Qubes OS configuration uses a USB qube it will be necessary to give my running disposable VM access to the key itself:

In dom0, where my target vm is disp4632 and my BACKEND:DEVID is sys-usb:2-1:

$ qvm-usb attach disp4632 sys-usb:2-1

In the disposable VM run:

[user@disp4632 ~]$ gpg --card-status
Reader ...........: Purism, SPC Librem Key (000000000000000000009BB1) 00 00
Application ID ...: D276000124010303000500009BB10000
Application type .: OpenPGP
Version ..........: 3.3
Manufacturer .....: ZeitControl
Serial number ....: 00009BB1
Name of cardholder: [not set]
Language prefs ...: de
Salutation .......: 
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 64 64 64
PIN retry counter : 3 0 3
Signature counter : 0
KDF setting ......: off
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]

Change the PINs

  1. gpg --card-edit in the disposable VM
  2. admin at the gpg/card> prompt
  3. passwd at the gpg/card> prompt
  4. Select 1 and follow the prompts, where the first PIN is the default: 123456
  5. Select 3 and follow the prompts, where the first Admin PIN is the default: 12345678
  6. Select q and quit.

Initialize the New Librem Key

Remove the new Librem Key

In dom0 run:

qvm-usb detach disp4632 sys-usb:2-1
Insert the original Librem Key

In dom0 run:

# Assuming you plugged the original key into the same port
qvm-usb attach disp4632 sys-usb:2-1
Insert and mount the Librem Vault

In dom0 find the appropriate block device, and attach it to the disposable VM:

qvm-block list

qvm-block attach disp4632 sys-usb:sdb1

In the disposable VM find the attached disk (likely /dev/xvdi)

[user@disp4632 ~]$ lsblk
--- SNIP ---
xvdi    202:128  1 28.9G  0 disk

Then mount the disk:

[user@disp4632 ~]$ udisksctl mount -b /dev/xvdi
Mounted /dev/xvdi at /mnt/removable

Note that I did not sudo mount /dev/xvdi /mnt/removable as the operation does not require root, and we do not use powers we do not need, do we?!

Extract the encrypted backup from the Librem Vault
[user@disp4632 ~]$ cp /mnt/removable/gpg-backup/backup* .
Unmount and Remove the Librem Vault
[user@disp4632 ~]$ udisksctl unmount -b /dev/xvdi
Unmounted /dev/xvdi.

In dom0:

qvm-block detach disp4632 sys-usb:sdb1
Decrypt the backup

This assumes you have installed opensc and have pkcs15-tool and pkcs11 drivers.

First, find the Key ID for the encryption key on your existing Librem Key:

[user@disp4632 ~]$ pkcs15-tool -D
Using reader with a card: Purism, SPC Librem Key (000000000000000000009BB4) 00 00
PKCS#15 Card [OpenPGP card]:
        Version        : 0
        Serial number  : 000500009bb4
        Manufacturer ID: ZeitControl
        Language       : de
        Flags          : PRN generation, EID compliant


Private RSA Key [Encryption key]
        Object Flags   : [0x03], private, modifiable
        Usage          : [0x22], decrypt, unwrap
        Access Flags   : [0x1D], sensitive, alwaysSensitive, neverExtract, local
        Algo_refs      : 0
        ModLength      : 4096
        Key ref        : 1 (0x01)
        Native         : yes
        Auth ID        : 02
        ID             : 02 <-- THIS ID
        MD:guid        : ee23dccc-fc38-2dc2-3bc8-bb5f859168d4


Now use it to decrypt the pbkdf2 key used to encrypt the GPG backup tarball itself. This hybrid encryption scheme allows securely storing data of arbitrary size: the tarball is encrypted with pbkdf2 using a randomly generated secret, and that secret is in turn encrypted with the Librem Key's encryption key.
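For completeness, the encryption side of this hybrid scheme might look like the following sketch. The post only shows decryption, so this is an assumed reconstruction: pubkey.pem stands in for the Librem Key's exported RSA encryption public key, and file names mirror those used below.

```shell
# Random password for the symmetric layer
openssl rand -base64 32 > backup.key

# Symmetric bulk encryption of the (arbitrarily large) tarball
openssl enc -chacha20 -pbkdf2 -pass file:backup.key \
	-in backup.tar.gz -out backup.tar.gz.enc

# Asymmetric encryption of the small password file to the card's pubkey
openssl rsautl -encrypt -pubin -inkey pubkey.pem \
	-in backup.key -out backup.key.enc
```

Only backup.tar.gz.enc and backup.key.enc land on the Librem Vault; the plaintext backup.key should be shredded once the encrypted copy is verified.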

Decrypting the pbkdf2 password file with the Librem Key:

[user@disp4632 ~]$ openssl rsautl -engine pkcs11 -keyform e -decrypt -inkey 02 -in backup.key.enc -out backup.key
engine "pkcs11" set.
Enter PKCS#11 token PIN for OpenPGP card (User PIN):

Decrypting the GPG backup with the pbkdf2 password file:

[user@disp4632 ~]$ openssl enc -chacha20 -pbkdf2 -pass file:backup.key -d -in backup.tar.gz.enc -out backup.tar.gz
Extract the backup
[user@disp4632 ~]$ tar xf backup.tar.gz
Verify the keyring is intact
[user@disp4632 ~]$ gpg -k
pub   rsa4096 2021-05-08 [C]
uid           [ultimate] Anthony J. Martinez <>
sub   rsa4096 2021-05-08 [S]
sub   rsa4096 2021-05-08 [E]
sub   rsa4096 2021-05-08 [A]

Remove the original Librem Key

In dom0:

qvm-usb detach disp4632 sys-usb:2-1
Insert the new Librem Key again

In dom0:

qvm-usb attach disp4632 sys-usb:2-1
Export the signing, encryption, and authentication subkeys to the Librem Key

Edit the key in expert mode:

[user@disp4632 ~]$ gpg --expert --edit-key FCBF31FDB34C8555027AD1AF0AD2E8529F5D85E1

In the gpg> prompt select each subkey and use the keytocard command.

Example, using the signing key (key 1):

gpg> key 1

sec  rsa4096/0AD2E8529F5D85E1
     created: 2021-05-08  expires: never       usage: C   
     trust: ultimate      validity: ultimate
ssb* rsa4096/A2206FDD769DBCFC <-- NOTICE THE * HERE - this key is selected
     created: 2021-05-08  expires: never       usage: S   
ssb  rsa4096/6BE6910237B3B233
     created: 2021-05-08  expires: never       usage: E   
ssb  rsa4096/FD94BDD7BED5E262
     created: 2021-05-08  expires: never       usage: A   
[ultimate] (1). Anthony J. Martinez <>
[ultimate] (2)  Anthony J. Martinez <>

gpg> keytocard
gpg> key 1 <-- this is to deselect key 1

Repeat the above for keys 2 and 3.

Verify the card status
[user@disp4632 ~]$ gpg --card-edit

Reader ...........: Purism, SPC Librem Key (000000000000000000009BB1) 00 00
Application ID ...: D276000124010303000500009BB10000
Application type .: OpenPGP
Version ..........: 3.3
Manufacturer .....: ZeitControl
Serial number ....: 00009BB1
Name of cardholder: [not set]
Language prefs ...: de
Salutation .......: 
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: rsa4096 rsa4096 rsa4096
Max. PIN lengths .: 64 64 64
PIN retry counter : 3 0 3
Signature counter : 0
KDF setting ......: off
Signature key ....: C9ED 41D4 EB62 80BB E61F  0E59 A220 6FDD 769D BCFC
      created ....: 2021-05-08 11:43:52
Encryption key....: 335D C8BC E4A6 8FFF B9B5  CBEF 6BE6 9102 37B3 B233
      created ....: 2021-05-08 11:44:54
Authentication key: D157 68B9 CCCF 4FB5 6FC2  971E FD94 BDD7 BED5 E262
      created ....: 2021-05-08 11:45:39


From here the new Librem Key is configured, and the disposable VM is of no further use. Since disposable VMs are destroyed when the application they were created to run is stopped, the only cleanup necessary is to close the terminal to the disposable VM.

Additional Notes

On my system, I also have vault and personal-gpg qubes. These are both network-isolated and function much the same way the physical key does. The personal-gpg qube holds the very same subkeys as both Librem Keys, and through the use of Split GPG allows smartcard-like use of the qube from my other qubes. In a later post, I will detail how I use QubesRPC in personal-gpg to also serve as my ssh-agent, letting things like my admin qube use the authentication subkey without needing dozens of copies of my SSH private keys everywhere. The vault qube is home to the master secret key, and as such never has any data fed into it.

The process used to decrypt data can be reversed to encrypt data as well. I will leave that as an exercise for the reader, but the short version is: instead of the decrypt option(s) for the openssl tools, use their encrypt counterparts. If you wish to generate a random secret to use with pbkdf2, the following should do the trick:

openssl rand -base64 -out secret.key 32
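
Sketching that reversal out end to end (demo stand-ins only: a real run would use your actual backup tarball, and the final asymmetric step would go through the pkcs11 engine against key 02 on the card rather than a public key file on disk):

```shell
# Work in a scratch directory; the stand-ins below replace the real
# backup tarball and the card's exported public encryption key
cd "$(mktemp -d)"
tar czf backup.tar.gz --files-from /dev/null
openssl genrsa -out demo-priv.pem 2048
openssl rsa -in demo-priv.pem -pubout -out enc-pubkey.pem

# 1. Generate a random secret for the symmetric pbkdf2 step
openssl rand -base64 -out backup.key 32

# 2. Encrypt the tarball symmetrically, deriving key material via pbkdf2
openssl enc -chacha20 -pbkdf2 -pass file:backup.key \
    -in backup.tar.gz -out backup.tar.gz.enc

# 3. Encrypt the small secret file to the RSA encryption key
#    (with the real card: -engine pkcs11 -keyform e -inkey 02 instead)
openssl rsautl -encrypt -pubin -inkey enc-pubkey.pem \
    -in backup.key -out backup.key.enc

# 4. Remove the plaintext secret; only the private key holder can
#    recover it now
shred -u backup.key
```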

Another week with the Librem 14

Another week has passed, and I am liking the Librem 14 quite a lot overall. Having now mostly pounded the keyboard into submission, I find it much more tolerable. The A and S keys are the most disobedient of the bunch, and the oddly located right Shift results in some random profanity when I end up a line higher in my menu or code than I intended.

Most of what I worked on was simplifying my home network. Thanks to the presence of a physical LAN port, and some of the finer points of Qubes OS NetVMs, it was easy to set up multiple VMs, each assigned my wired interface. This allowed me to verify tagged and untagged VLAN settings on a switch whose configuration I forgot long ago. All of this took place on battery, with me sitting on the floor of my closet, without much fear that I would soon need to race across the house to get my charger.

For much of the last week, I also tested out using awesomewm with Qubes OS. Basic tiling was fine, and helped me handle the lower 1080p resolution offered by the Librem 14. In the end, I went back to XFCE with some tiling enabled by keyboard shortcuts.

No buyer regret here. This machine does everything I need it to do and it does it much better than the machine it replaced. When I return to The Netherlands that machine will change roles and continue life, but my main machine will definitely be this Librem 14 for several years to come.

Librem 14 Battery Life While Running Qubes OS

Here is a first look at the kind of battery life one might expect while using a Purism Librem 14 as shipped with the 4-cell battery while running a normal workload in Qubes OS.

TL;DR - The Chart


Given a steamy day in Texas, one Librem 14 running Qubes OS, and a few general tasks to accomplish I set out to find out how long I might expect to spend away from a wall outlet if I:


Once the system was running, I started a loop to give me battery stats every 60s:

while true; do
    upower -i /org/freedesktop/UPower/devices/battery_BAT0 > $(date +%s)_bat-info.txt
    sleep 60
done

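Once collected, those snapshots reduce to a simple time series for charting. A quick sketch (it assumes upower prints a "percentage: NN%" line, and uses demo stand-in files here in place of real snapshots):

```shell
# Work in a scratch directory; demo stand-ins for two upower snapshots
# named <epoch-timestamp>_bat-info.txt by the collection loop
cd "$(mktemp -d)"
printf '    percentage:          95%%\n' > 1625000000_bat-info.txt
printf '    percentage:          91%%\n' > 1625000060_bat-info.txt

# Reduce each snapshot to "timestamp,percentage" and sort by time;
# battery.csv then holds one row per sample, ready for plotting
for f in *_bat-info.txt; do
    ts="${f%%_*}"
    pct=$(sed -n 's/.*percentage:[[:space:]]*\([0-9.]*\)%.*/\1/p' "$f")
    echo "${ts},${pct}"
done | sort -n > battery.csv
```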
From here, I just went about my business. TemplateVMs on my system are primarily based on Fedora. As a result, very nearly every boot means there are updates available. Updating was probably the most strenuous task executed during the test run. In fact, I do not recall the fans turning on for anything else throughout the day, except perhaps the one time I started one of the more substantial VMs I use for development. For the most part, I had at least seven VMs running, one of which maintained a WireGuard VPN connection to my cloud environment. General tasks amounted to:


This machine lasts at least as long with light use as my previous system. No one runs towards Qubes OS with the hopes of marathon battery life, and I am pretty happy with near 5hrs of battery runtime on WiFi with a persistent VPN to my cloud resources. Next time, I may shut off WiFi and see how much purely local heavy use I can squeeze out before the battery dies.

First Impressions - Librem 14

For my birthday last year, I ordered the Purism Librem 14 to serve in place of my aging Lenovo T460s. Slightly less than a year later I got my new laptop, and a few weeks after delivery I was able to fly home on vacation to finally get started using it. To most of my friends, waiting a year for a laptop to be delivered is utter madness, but for me there are almost no new laptops I am even willing to consider. Having a physical RJ45 jack is a hard requirement for me, and today it seems this typically requires a willingness to carry a "laptop" that weighs more than a healthy newborn human. Give me power, give me robust networking options, and give me the RAM I need to run Qubes OS to its fullest. The Librem 14 offers this all, on paper, and over the coming month I will find out exactly how that all pans out in reality.

First Boot

My machine was ordered with the following specs:

Since there are no good defaults for installing Qubes OS on an OEM device, one needs to install it on their own. Knowing this, I performed my first boot into Pure OS using encryption passphrases and user passwords no one should ever use on a system they plan to actually use. The process for using PureBoot was clear and straightforward, and the Librem Key that shipped with the Librem 14 flashed green as expected when I made my first boot. It also flashed red, as expected, when I booted the second time having updated the kernel.

My time in Pure OS was limited to two tasks:

  1. Generating new GnuPG keys to store on the Librem Key
  2. Making a USB boot drive from the latest Qubes OS release

The Next 20 Boots

My initial installation of Qubes OS was missing one critical piece: an encrypted root partition. Use of PureBoot requires an unencrypted /boot, and that is fine given that I am notified of any changes and can opt to sign them if they were expected after an update. Leaving the rest of the disk unencrypted is not an acceptable scenario for me, so I reinstalled. And reinstalled again. And again. And again.

Since I am writing this on vacation, I will have to come back to a determination of the root cause later. For now, just know that selecting the defaults in the Qubes OS installer - which regrettably includes wasting 15GB of disk on swap I will never use - was the only partition scheme that resulted in an encrypted root partition. To be clear, I do not think this has anything to do with the Librem 14. When I do find the root cause, the appropriate project will get a detailed bug report. At any rate, once I was up and running, the next task was restoring my qubes (VMs) from a backup taken on my T460s right before shutting it down last. This was done from a USB3 SSD, and the speed was outstanding. The restore completed much faster than the backup was made, likely owing to both faster USB controllers and the gap between Intel's i5-6300U and i7-10710U.

Normal Use

So far, I have not done much more than restore my qubes and fix a few remote issues I caused late last week that made SELinux angry. Everything is running smoothly, and the few times I have run a CPU intensive task like Rust compilation I have been very pleased with the performance of the machine. The only thing I do not like is the keyboard. I am far from the only person who regards older Thinkpad keyboards as being top of class, and coming from such a keyboard to this does not please me much. While typing this, my backspace key (and vim motions) have been flexed. My a, s, b, and l keys seem to particularly hate their existence. For some, this could be grounds for a return, but I have had other machines in the past with keyboards I found infuriating at first. Usually, the force of my typing tends to smooth things out in short order. If that is the case here, all will be well. If not, it will probably also be fine since most of my use is at a desk plugged into a Drop ALT.

Qubes and extended battery life are rarely thoughts one has concurrently, and I am pleased to note that the Librem 14 managed more than 4 hours on WiFi doing a mixed load of tasks. These included spawning at least 40 Disposable VMs, upgrading all of my Template VMs, compiling two different versions of the engine rendering and serving this page several times, and for reasons I still do not understand, playing the official music video for Enya's "Only Time" which I have not linked lest anyone else inexplicably end up with it stuck in their head.

So far so good. Tomorrow, I head for the mountains. When I return, I will put the Librem 14 through its paces as I work on some personal Rust projects.

For Next Time

I will try to: