SSL implementations compared

I reviewed several SSL implementations for coding style: OpenSSL, NSS, GnuTLS, JSSE, Botan, MatrixSSL and PolarSSL. I looked at how buffers are handled in parsers and writers. Of all of them, I think only JSSE, i.e. pure Java, can be trusted to be free of buffer overflows. This suggests that a good webserver for security-critical applications would be Tomcat, without native extensions.

In OpenSSL, the Heartbleed patch itself is a good example of what not to do:

    /* Read type and payload length first */
    if (1 + 2 + 16 > s->s3->rrec.length)
        return 0; /* silently discard */
    hbtype = *p++;
    n2s(p, payload);
    if (1 + 2 + payload + 16 > s->s3->rrec.length)
        return 0; /* silently discard per RFC 6520 sec. 4 */
    pl = p;

Bounds checks are rolled up into an obscure calculation, then the code proceeds to memcpy() straight out of the buffer.
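To make the shape of the problem concrete, here is a condensed sketch of the patched flow, modeled on (not copied from) the OpenSSL code; the function name, the output buffer, and its extra length check are my own additions:

```cpp
#include <cstring>
#include <cstddef>

// Sketch of the patched heartbeat flow. The rolled-up bound
// 1 (type) + 2 (length field) + payload + 16 (padding) must not
// exceed the record length before the payload is echoed back.
// Returns the number of bytes copied into 'out', or 0 on discard.
static size_t echo_heartbeat(const unsigned char *p, size_t rec_len,
                             unsigned char *out, size_t out_len)
{
    if (1 + 2 + 16 > rec_len)
        return 0;                                 /* silently discard */
    ++p;                                          /* hbtype */
    size_t payload = (size_t(p[0]) << 8) | p[1];  /* n2s(p, payload) */
    p += 2;
    if (1 + 2 + payload + 16 > rec_len)
        return 0;                                 /* silently discard */
    if (payload > out_len)
        return 0;
    memcpy(out, p, payload);    /* copy straight out of the record */
    return payload;
}
```

The whole safety of the final memcpy() rests on that one arithmetic comparison several lines earlier; nothing at the copy site itself rechecks the bound.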

NSS has a similar style. For example, ssl3_ComputeDHKeyHash() has:

    bufLen = 2*SSL3_RANDOM_LENGTH + 2 + dh_p.len + 2 + dh_g.len + 2 + dh_Ys.len;

In GnuTLS, a similar example of precalculation can be found in a Heartbleed-inspired patch:

    response = gnutls_malloc(1 + 2 + data_size + DEFAULT_PADDING_SIZE);

PolarSSL was also similar in style.
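The hazard with this style is that the size expression and the writes it guards live apart and must be kept in sync by hand. A hypothetical writer in the same idiom (not taken from any of the libraries above) makes the coupling visible:

```cpp
#include <cstdlib>
#include <cstring>
#include <cstddef>

// Precalculation style: the size expression mirrors the write
// sequence below it. If a later edit adds a field to one but not
// the other, the buffer overflows with no local check to catch it.
// Returns a malloc'd record, or NULL on allocation failure.
static unsigned char *build_record(const unsigned char *data, size_t data_len,
                                   size_t *out_len)
{
    size_t len = 1 + 2 + data_len;        /* type + length + body */
    unsigned char *buf = (unsigned char *)malloc(len);
    if (buf == NULL)
        return NULL;
    buf[0] = 0x17;                        /* hypothetical type byte */
    buf[1] = (unsigned char)(data_len >> 8);
    buf[2] = (unsigned char)(data_len & 0xff);
    memcpy(buf + 3, data, data_len);      /* must match the sum above */
    *out_len = len;
    return buf;
}
```

Every constant in the allocation line corresponds to one write below it; an auditor has to verify that correspondence by eye, for every record type.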

An embedded, open-core SSL library called MatrixSSL demonstrates that there are alternatives to precalculation: its bounds checks are distributed throughout the parser. Before each read, the remaining length is calculated from the cursor position and compared with the read length:

    if (end - c < SSL2_HEADER_LEN) {
        return SSL_PARTIAL;
    }
    if (ssl->majVer != 0 || (*c & 0x80) == 0) {
        if (end - c < ssl->recordHeadLen) {
            return SSL_PARTIAL;
        }
        ssl->rec.type = *c; c++;
        ssl->rec.majVer = *c; c++;
        ssl->rec.minVer = *c; c++;
        ssl->rec.len = *c << 8; c++;
        ssl->rec.len += *c; c++;
    } else {
        ssl->rec.type = SSL_RECORD_TYPE_HANDSHAKE;
        ssl->rec.majVer = 2;
        ssl->rec.minVer = 0;
        ssl->rec.len = (*c & 0x7f) << 8; c++;
        ssl->rec.len += *c; c++;
    }

This makes auditing easier, and should be commended. However, it still falls a long way short of following CERT secure coding guidelines, such as using a safe string library. And since this is an embedded library, it would have been nice to see a backend free of dynamic allocation, which would allow verification tools like Polyspace Code Prover to provide strong guarantees.
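The MatrixSSL idiom generalizes into a small cursor type: every read first checks `end - c`, so the bound travels with the pointer instead of being computed once up front. A sketch (the `cursor` type and the function names are mine, not MatrixSSL's):

```cpp
#include <cstdint>
#include <cstddef>

struct cursor {
    const unsigned char *c;    /* current read position */
    const unsigned char *end;  /* one past the last valid byte */
};

/* Each reader checks the remaining length before touching the buffer,
 * in the same way MatrixSSL compares end - c against the read length. */
static int read_u8(cursor *cur, uint8_t *out)
{
    if (cur->end - cur->c < 1)
        return -1;             /* SSL_PARTIAL, in MatrixSSL terms */
    *out = *cur->c++;
    return 0;
}

static int read_u16(cursor *cur, uint16_t *out)
{
    if (cur->end - cur->c < 2)
        return -1;
    *out = (uint16_t)((cur->c[0] << 8) | cur->c[1]);
    cur->c += 2;
    return 0;
}
```

With this shape, a read past the end of the record is a local, mechanically checkable error return rather than a property of arithmetic performed somewhere else in the function.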

Botan is written in C++ by a single author who claims to be aware of security issues, which sounded very promising. In C++, implementing bounds checks should be trivial. However, the code didn’t live up to my expectations. It does pass around std::vector<byte> instead of char* (often by value!), but the author makes extensive use of a memcpy() wrapper to actually read and write those vectors. For example:

    std::vector<byte> Heartbeat_Message::contents() const
       {
       std::vector<byte> send_buf(3 + m_payload.size() + 16);
       send_buf[0] = m_type;
       send_buf[1] = get_byte<u16bit>(0, m_payload.size());
       send_buf[2] = get_byte<u16bit>(1, m_payload.size());
       copy_mem(&send_buf[3], &m_payload[0], m_payload.size());
       // leave padding as all zeros

       return send_buf;
       }

The functions are short, with short code paths between length determination and buffer use, which gives it auditability similar to MatrixSSL. But the potential security advantages of using C++ over C were squandered, and the frequent copying and small dynamic allocations mean that performance will not be comparable to the C libraries.
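For contrast, the same serializer can be written using only size-driven vector operations, with no raw-pointer copy at all. This is my sketch of what such a function could look like, not Botan's code; the free function stands in for the member function, and it still allocates per call, so the performance point stands:

```cpp
#include <vector>
#include <cstdint>

using byte = std::uint8_t;

// Bounds-safe variant of the serializer: push_back() and insert()
// grow the buffer to fit, and insert() takes its copy length from the
// source range, so no hand-written length can disagree with the copy.
static std::vector<byte> heartbeat_contents(byte type,
                                            const std::vector<byte> &payload)
{
    std::vector<byte> send_buf;
    send_buf.reserve(3 + payload.size() + 16);
    send_buf.push_back(type);
    send_buf.push_back(static_cast<byte>(payload.size() >> 8));
    send_buf.push_back(static_cast<byte>(payload.size() & 0xff));
    send_buf.insert(send_buf.end(), payload.begin(), payload.end());
    send_buf.resize(send_buf.size() + 16, 0);   // padding as all zeros
    return send_buf;
}
```

Nothing here is clever; it is simply that every operation that writes also establishes the space it writes into, which is the property the copy_mem() style gives up.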

So it seems that in every case, the authors of C and C++ SSL libraries have found unbounded memory access primitives like memcpy() too tempting to pass up. Thus, if we want an SSL library with implicit, pervasive bounds checking, apparently the only option is a library written in a language which forces bounds checking. The best example of this is surely JSSE, also known as javax.net.ssl. This SSL library is written in pure Java; the implementation can be found in OpenJDK, under sun.security.ssl. As I noted in my introduction, it is used by Tomcat as long as you don’t use the native library. The native library gives you “FIPS 140-2 support for TLS/SSL”; that is to say, it links to a library that probably has undiscovered buffer overflow vulnerabilities.

7 Comments

  1. Steffen Ullrich says:

    You should probably change your title to “..compared regarding buffer overflows”. There have been lots of flaws recently because of wrong certificate verification (e.g. Apple’s implementation and GnuTLS) which were also fatal, even if they did not have this mass impact.

    • Tim says:

      Logic errors in cryptographic libraries have the potential to have greater impact than logic errors in other network software — for example, attacks on software distribution systems may lead to arbitrary execution. But I think it’s fair to say that a trivially remotely exploitable buffer overflow like Heartbleed is more severe than a certificate verification error — the attacker owns the endpoint, rather than just the communications between endpoints.

  2. wilx says:

    > … and frequent copying and small dynamic allocations means that performance will not be comparable to the C libraries.

    While at first sight it seems that the return value will be copied, in reality optimized builds will not have the copy there, because `send_buf` will be constructed in the stack frame of the caller thanks to the return value optimization. IME, people who know a bit about C++ often code with RVO in mind. Also, with C++11, the vector can be moved, so no additional allocation and copying will happen even without RVO.

    • Tim says:

      Thanks for that wilx, I stand corrected. I’ve read about RVO and move constructors, but it hadn’t quite filtered down to a coding style recommendation. But it is probably still correct to say that Botan has more allocator calls than OpenSSL, since Botan’s small functions allocate small buffers for output. Also, when the function allocates memory instead of the caller, there is less opportunity for buffer sharing and reuse.

  3. Kevin Riggle says:

    > I reviewed several SSL implementations for coding style: OpenSSL, NSS, GnuTLS, JSSE, Botan, MatrixSSL and PolarSSL. I looked at how buffers are handled in parsers and writers. Of all of them, I think only JSSE, i.e. pure Java, can be trusted to be free of buffer overflows. It suggests that a good webserver for security-critical applications would be Tomcat, without native extensions.

    But Java without native extensions can’t be trusted not to swap your unencrypted in-memory secrets out to disk.

    • Tim says:

      That would be very concerning for a laptop, but I think it is not so concerning for a server. On a server, it’s usually possible to disable swap entirely. And if the server is colocated, there is less risk of malicious physical access, so the privilege separation boundary preventing direct access to swap is comparable to the boundary preventing direct access to RAM.
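      Beyond disabling swap system-wide, a server process can also pin its own pages so the kernel never writes them out. A sketch using POSIX mlockall() (this needs CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK, so the failure case is reported rather than fatal):

```cpp
#include <sys/mman.h>
#include <cerrno>

/* Pin all current and future pages of the process into RAM so that
 * key material can never be written to swap. Returns 0 on success,
 * or errno on failure (typically EPERM/ENOMEM without CAP_IPC_LOCK). */
static int lock_process_memory()
{
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        return errno;
    return 0;
}
```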

  4. Will Sargent says:

    There are a couple of reasons why Java never became popular for TLS termination. JSSE is set up for compatibility first and security second, and as such it did things like prefer the client’s choice of cipher suite. Then there’s the use of JKS over PKCS12, the odd export policy, the SPI interfaces that meant the actual encryption code was Sun-specific and hidden until it was finally open sourced, the hardcoded 1024-bit length on DHE, the DHE bug that would cause a small percentage of connections to just randomly fail… It’s only now with JDK 1.8, with modern-day support for ECC and preferring the server’s cipher suite, that Java’s really suitable.