tazjin's blog

Reverse-engineering WatchGuard Mobile VPN

Update: WatchGuard has responded to this post on Reddit. If you haven't read the post yet, I'd recommend reading it first so that the response has the proper context.

One of my current clients uses the WatchGuard Mobile VPN software to provide access to their internal network.

Currently WatchGuard only provides clients for OS X and Windows, neither of which I am very fond of. An OpenVPN configuration file is also provided, but it quickly turned out that this was only one piece of the puzzle.

The problem is that this VPN setup is secured using 2-factor authentication (good!), but it does not use OpenVPN's default challenge/response functionality to negotiate the credentials.

Connecting with the OpenVPN config that the website supplied caused the VPN server to send a token to my phone, but I simply couldn't figure out how to supply it back to the server. In a normal challenge/response setting the token would be supplied as the password on the second authentication round, but the VPN server kept rejecting that.

Other possibilities were various combinations of username & password (I've seen a lot of those around), so I tried a whole bunch - for example $password:$token or even sha1(password, token) - to no avail.

At this point it was time to crank out Hopper and see what's actually going on in the official OS X client - which uses OpenVPN under the hood!

Diving into the client

The first surprise came up right after opening the executable: It had debug symbols in it - and was written in Objective-C!

Debug symbols

A good first step when looking at an application binary is going through the strings that are included in it, and the WatchGuard client had a lot to offer. Among the most interesting were a bunch of URIs that looked important:

Some URIs

I started with the first one and simply curled it on the VPN host, replacing the username and password fields with bogus data and the filename field with client.wgssl - another string in the executable that looked like a filename.

To my surprise this endpoint immediately responded with a GZIPed file containing the OpenVPN config, the CA certificate, and the client certificate and key - which I had previously thought were only accessible after logging in to the web UI. Oh well.
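Reproducing that request is straightforward. Below is a hedged sketch of how such a download URL might be assembled; the path and query parameter names are assumptions for illustration (the real ones come from the strings embedded in the binary), so treat them as placeholders:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class WgDownloadUrl {
    // Builds a download URL of the shape described above. The
    // "action", "username" and "password" parameter names are
    // hypothetical stand-ins, not the client's actual strings.
    static String buildDownloadUrl(String host, String user,
                                   String password, String filename) {
        return String.format(
            "https://%s/?action=sslvpn_download&username=%s&password=%s&filename=%s",
            host,
            URLEncoder.encode(user, StandardCharsets.UTF_8),
            URLEncoder.encode(password, StandardCharsets.UTF_8),
            URLEncoder.encode(filename, StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        // Bogus credentials, as in the experiment described above:
        System.out.println(
            buildDownloadUrl("vpn.example.com", "bogus", "bogus", "client.wgssl"));
    }
}
```

Fetching that URL (e.g. with curl) is then all it takes to retrieve the configuration bundle.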

The next endpoint I tried turned out to be even more interesting:


Inserting the correct username and password into the query parameters actually triggered the process that sent a token to my phone. The response was a simple XML blob:

<?xml version="1.0" encoding="UTF-8"?>
  <chaStr>Enter Your 6 Digit Passcode </chaStr>

Somewhat unsurprisingly, the chaStr field is the challenge string displayed in the client when logging in.
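That XML blob is easy to pick apart with the JDK's built-in DOM parser. A minimal sketch, assuming a `<resp>` root element (the real response carries further fields alongside chaStr):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ChallengeParser {
    // Extracts the text of the <chaStr> element from the server response.
    static String parseChallenge(String xml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
            return doc.getElementsByTagName("chaStr").item(0)
                .getTextContent().trim();
        } catch (Exception e) {
            throw new RuntimeException("unparseable challenge response", e);
        }
    }

    public static void main(String[] args) {
        String response = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            + "<resp><chaStr>Enter Your 6 Digit Passcode </chaStr></resp>";
        System.out.println(parseChallenge(response));
    }
}
```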

This was obviously going in the right direction, so I proceeded to the procedures making use of this string. The first step was a relatively uninteresting function called -[VPNController sslvpnLogon], which formatted the URL, opened it and checked whether the logon_status was 4 before proceeding with the logon_id and chaStr contained in the response.

(Code snippets from here on are Hopper's pseudo-Objective-C)


It proceeded to the function -[VPNController processTokenPrompt] which showed the dialog window into which the user enters the token, sent it off to the next URL and checked the logon_status again:

(r12 is the reference to the VPNController instance, i.e. self).


If the logon_status was 1 (apparently "success" here) it proceeded to do something quite interesting:


The user's password was overwritten with the (verified) OTP token - before OpenVPN had even been started!

Reading a bit more of the code in the subsequent -[VPNController doLogin] method revealed that it shelled out to openvpn and enabled the management socket, which makes it possible to remotely control an openvpn process by sending it commands over TCP.

It then simply sent the username and the OTP token as the credentials after configuring OpenVPN with the correct config file:


... and the OpenVPN connection then succeeds.
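For reference, OpenVPN's documented management interface expects credentials as two commands, `username "Auth" <user>` and `password "Auth" <pass>`, once it signals that Auth credentials are needed. A minimal sketch of what the client presumably writes to the socket (the surrounding I/O handling and any quoting/escaping of special characters are omitted here):

```java
public class ManagementAuth {
    // Formats the two commands used to supply credentials over
    // OpenVPN's management interface. With the WatchGuard scheme,
    // the "password" is the already-verified OTP token.
    static String authCommands(String username, String otpToken) {
        return "username \"Auth\" " + username + "\n"
             + "password \"Auth\" " + otpToken + "\n";
    }

    public static void main(String[] args) {
        // After the socket reports >PASSWORD:Need 'Auth' username/password,
        // the client would write something like:
        System.out.print(authCommands("someuser", "123456"));
    }
}
```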


Rather than using OpenVPN's built-in challenge/response mechanism, the WatchGuard client validates user credentials outside of the VPN protocol and then passes on the OTP token, which seems to be in a temporarily 'blessed' state after verification, as the user's password.

I didn't check how much verification of this token is performed (does it check the source IP against the IP that performed the challenge validation?), but this certainly seems like a security issue - considering that an attacker on the same network would, with the right timing, only need your username and 6-digit OTP token to authenticate.

Don't roll your own security, folks!


The whole reason I set out to do this was to be able to connect to this VPN from Linux, so this blog post wouldn't be complete without a solution for that.

To make this process really easy I've written a little tool that performs the steps mentioned above from the CLI and lets users know when they can authenticate using their OTP token.

Make Object <T> Again!

A few minutes ago I found myself debugging a strange Java issue related to Jackson, one of the most common Java JSON serialization libraries.

The gist of the issue was that a short wrapper using some types from Javaslang was causing unexpected problems:

public <T> Try<T> readValue(String json, TypeReference type) {
  return Try.of(() -> objectMapper.readValue(json, type));
}

The signature of this function was based on the original Jackson readValue type signature:

public <T> T readValue(String content, TypeReference valueTypeRef)

While happily using my wrapper function, I suddenly got an unexpected error telling me that Object is incompatible with the type I was asking Jackson to de-serialize, which led me to re-evaluate the type signature above.

Let's look for a second at some code that will happily compile if you are using Jackson's own readValue:

// This shouldn't compile!
Long l = objectMapper.readValue("\"foo\"", new TypeReference<String>(){});

As you can see, we ask Jackson to decode the JSON into a String as enclosed in the TypeReference, but assign the result to a Long. And it compiles. And it fails at runtime with java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Long. Huh?

Looking at the Jackson readValue implementation it becomes clear what's going on here:

@SuppressWarnings({ "unchecked", "rawtypes" })
public <T> T readValue(String content, TypeReference valueTypeRef)
    throws IOException, JsonParseException, JsonMappingException
{
    return (T) _readMapAndClose(/* whatever */);
}

The function is parameterised over the type T; however, the only places where T occurs in the signature are the generic type parameter declaration itself and the function's return type. Java will happily let you use generic functions and types without specifying their type parameters:

// Compiles fine!
final List myList = List.of(1,2,3);

// Type is now myList : List<Object>

This means that those type parameters default to Object. In the code above, Jackson additionally casts the return value of its inner function call to T explicitly.

What ends up happening is that Java infers the expected return type from the call site of readValue and then happily uses the unchecked cast to fit that return type. If the type hints at the call site aren't strong enough, we simply get Object back.
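The mechanism is easy to reproduce without Jackson. The following sketch has the same shape: T occurs only in the return position and is satisfied by an unchecked cast, so any assignment context compiles and the failure moves to runtime:

```java
public class InferenceDemo {
    // T occurs only in the return type, just like in Jackson's readValue.
    @SuppressWarnings("unchecked")
    static <T> T pretendToConvert(Object value) {
        return (T) value; // unchecked cast, erased at runtime
    }

    public static void main(String[] args) {
        try {
            // T is inferred as Long from the assignment context,
            // but the value is really a String:
            Long l = pretendToConvert("foo");
            System.out.println(l);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as with Jackson: "
                + e.getMessage());
        }
    }
}
```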

So what's the fix for this? It's quite simple:

public <T> T readValue(String content, TypeReference<T> valueTypeRef)

By making T also appear in the TypeReference parameter, we "bind" T to the type enclosed in the type reference. The cast can then also be safely removed.

The cherries on top of this are:

a) that @SuppressWarnings({ "rawtypes" }) explicitly disables a warning that would've caught this

b) that the readValue implementation using the less powerful Class class to carry the type parameter does this correctly: public <T> T readValue(String content, Class<T> valueType)

The big question I have about this is why Jackson does it this way. Obviously the warning did not just appear there by chance, so somebody must have thought about this?

If anyone knows what the reason is, I'd be happy to hear from you.

PS: Shoutout to David & Lucia for helping me not lose my sanity over this.

Fully automated TLS certificates with Kubernetes

Recently, one of my favourite ways to tackle an infrastructure issue has been to write a Kubernetes controller that deals with it.

The idea behind a controller in Kubernetes is quite simple. Your Kubernetes API server contains a description of a desired target state. To get to that target state, a set of controllers constantly run reconciliation loops to take care of whatever small bit of that state is their responsibility.
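The reconciliation idea can be sketched as a tiny diff step. All names here are illustrative and not part of any real Kubernetes client API; a real controller would observe resources via the API server, diff them against the desired state, converge, and loop forever:

```java
import java.util.HashSet;
import java.util.Set;

public class Reconciler {
    // One reconciliation step: given the desired state and the observed
    // state, compute what still needs to be created.
    static Set<String> missing(Set<String> desired, Set<String> actual) {
        Set<String> result = new HashSet<>(desired);
        result.removeAll(actual);
        return result;
    }

    public static void main(String[] args) {
        Set<String> desired = Set.of("cert-for-www.mydomain.com");
        Set<String> actual = Set.of();
        // A controller would wrap this in an endless
        // observe -> diff -> converge -> sleep loop.
        System.out.println(missing(desired, actual));
    }
}
```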

Recently I've wanted to have a fully automated way of retrieving TLS certificates from Let's Encrypt. This seemed like a perfect fit for a Kubernetes controller, so I got to work and am now presenting release 1.1 of the Kubernetes Letsencrypt Controller.

One feature of Let's Encrypt is their support for DNS-based challenges: to verify domain ownership, you add a specific TXT record, which Let's Encrypt then validates.
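For reference, the ACME dns-01 challenge publishes that TXT record under a fixed, well-known prefix of the domain being validated:

```java
public class AcmeDns {
    // ACME dns-01 challenges are published as TXT records under the
    // fixed "_acme-challenge." prefix of the validated domain.
    static String challengeRecordName(String domain) {
        return "_acme-challenge." + domain;
    }

    public static void main(String[] args) {
        System.out.println(challengeRecordName("www.mydomain.com"));
    }
}
```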

My controller makes use of that feature and currently implements validation support for both Google Cloud DNS and Amazon Route53. Head over to the repository's README for details on how to set it up.

Basically the process to get a certificate is now as simple as:

  1. Add an annotation acme/certificate: www.mydomain.com to any of your Service resources.
  2. Wait a few minutes until you find your certificate in a Secret resource called www-mydomain-com-tls.
  3. That's it!
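For step 1, the annotated Service might look like this (a minimal sketch; everything apart from the annotation is illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    acme/certificate: www.mydomain.com
spec:
  selector:
    app: my-app
  ports:
    - port: 443
```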

This way you don't have to deal with routing temporary challenge URLs on your webserver or any of that stuff. It just works!

Feedback (and contributions!) are very welcome.