Adding SSH Agent Support to Split GPG
Split GPG is a very cool feature of Qubes OS, but it leaves out one critical piece: SSH support, which would let the GPG backend qube serve an authentication subkey to SSH clients. There are a few different ways to solve this, and this guide provided some of the inspiration for what follows.
The Landscape
Here are the requirements for what follows:
- A system running Qubes OS
- A network-isolated vault qube configured for Split GPG. Mine is called personal-gpg.
- One or more qubes wanting to use a GPG authentication subkey with SSH clients. Ex: admin.
Qubes RPC Policy
The first step is to configure an appropriate Qubes RPC policy. A basic and generally sane option is a default rule that asks the user to approve all requests and allows any qube to target any other qube with such a request. In my own configuration there are also explicit allow rules for the specific qubes from which I use SSH frequently for admin purposes.
In dom0, create /etc/qubes-rpc/policy/qubes.SshAgent:
admin personal-gpg allow
@anyvm @anyvm ask
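On Qubes 4.1 and later the same rules can instead go in the newer policy format; a sketch, assuming a file such as /etc/qubes/policy.d/30-split-ssh.policy (the filename is illustrative):
qubes.SshAgent  *  admin   personal-gpg  allow
qubes.SshAgent  *  @anyvm  @anyvm        ask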
Actions in the Split GPG VM
The following actions all take place in the qube configured to act as the GPG backend for a Split GPG configuration.
Enable SSH support for gpg-agent:
$ echo "enable-ssh-support" >> /home/user/.gnupg/gpg-agent.conf
Update .bash_profile to use the gpg-agent socket as SSH_AUTH_SOCK by appending:
unset SSH_AUTH_SOCK
if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]; then
  export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
fi
export GPG_TTY=$(tty)
gpg-connect-agent updatestartuptty /bye >/dev/null
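After re-sourcing the profile, SSH_AUTH_SOCK should point at the agent's SSH socket. On a typical Fedora-based qube the path looks something like:
[user@personal-gpg ~]$ gpgconf --list-dirs agent-ssh-socket
/run/user/1000/gnupg/S.gpg-agent.ssh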
Create /rw/config/qubes.SshAgent with the following content, and make it executable:
#!/bin/sh
# Qubes Split SSH Script
# Notification for requests
notify-send "[`qubesdb-read /name`] SSH Agent access from: $QREXEC_REMOTE_DOMAIN"
# SSH connection
socat - UNIX-CONNECT:$SSH_AUTH_SOCK
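Since /rw/config is root-owned, creating the script and marking it executable will typically require sudo. For example:
$ sudo chmod +x /rw/config/qubes.SshAgent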
Update /rw/config/rc.local by appending the following:
ln -s /rw/config/qubes.SshAgent /etc/qubes-rpc/qubes.SshAgent
Sourcing .bash_profile and running /rw/config/rc.local should put the qube in a state where a GPG authentication subkey, if one exists, is offered through ssh-agent.
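Concretely, something like the following (rc.local needs root):
[user@personal-gpg ~]$ source ~/.bash_profile
[user@personal-gpg ~]$ sudo /rw/config/rc.local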
Example from my system:
[user@personal-gpg ~]$ ssh-add -l
4096 SHA256:V2KMVlJjPOn86z6a2srEcnMQj78OujEXJ597PJ6+wyY (none) (RSA)
Template VM Modifications
For my tastes it made the most sense to make a systemd service available to all qubes using my f33-dev template, and then start that service from /rw/config/rc.local on the qubes where I want the new feature.
In the appropriate Template VM, create a service similar to the following, replacing personal-gpg with the name of your Split GPG backend qube.
/etc/systemd/system/split-ssh.service:
[Unit]
Description=Qubes Split SSH
StartLimitIntervalSec=500
StartLimitBurst=5
[Service]
Type=simple
User=user
Group=user
Restart=on-failure
RestartSec=5s
WorkingDirectory=/home/user
Environment="AGENT_SOCK=/run/user/1000/SSHAgent" "AGENT_VM=personal-gpg"
ExecStart=socat "UNIX-LISTEN:${AGENT_SOCK},fork" "EXEC:qrexec-client-vm ${AGENT_VM} qubes.SshAgent"
[Install]
WantedBy=multi-user.target
Once this has been added, run the following and then shut the template qube down:
sudo systemctl daemon-reload
The Client Side
In the actual SSH client qubes, there are a few actions required to complete the loop.
Append the following to .bashrc - make sure the path matches AGENT_SOCK in your systemd service:
### Split SSH Config
export SSH_AUTH_SOCK="/run/user/1000/SSHAgent"
In /rw/config/rc.local, append the following to start the service:
systemctl start split-ssh
Source .bashrc, run /rw/config/rc.local, and then, with the Split GPG backend qube running, test that your key is available:
[user@admin ~]$ ssh-add -l
4096 SHA256:V2KMVlJjPOn86z6a2srEcnMQj78OujEXJ597PJ6+wyY (none) (RSA)
Since my Qubes RPC policy allows the admin qube to reach personal-gpg without asking for confirmation, a system notification appears stating:
[personal-gpg] SSH Agent access from: admin
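If the key doesn't show up on the client, the service and its socket (the paths assume the unit above) are the first things to check:
[user@admin ~]$ systemctl status split-ssh
[user@admin ~]$ ls -l /run/user/1000/SSHAgent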
Conclusion
With a few simple steps the power of Split GPG can be extended to include SSH agent support. As a result, network-attached qubes used for administering remote assets no longer store the private key material used for authentication directly, and the attack surface is that much smaller. There are a few ways to get the public key onto a remote host's ~/.ssh/authorized_keys, but the easiest is probably ssh-add -L.
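For example, ssh-copy-id will offer the keys held by the agent (remote-host is a placeholder):
[user@admin ~]$ ssh-copy-id user@remote-host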