ECDSA and RSA certificate in parallel with NGINX and Let's Encrypt

On Fri, 03 Jun 2016 18:19:19 +0200 by Falco Vennedey - Write a comment

With version 1.11.0 of NGINX it is now possible to serve content via https using RSA and ECDSA certificates in parallel.

ECDSA is another approach to cryptographically signing messages and comes with some advantages compared to RSA. According to a comparison made by the BSI, an ECDSA key with a length of 256 bit provides about the same security as an RSA key of 2048 to 3072 bit (which corresponds to roughly 128 bit of symmetric key strength). The smaller ECDSA key requires less computing power for generating message signatures during TLS connections. This advantage becomes measurable on systems with thousands of concurrent TLS sessions.
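You can see the size difference directly by generating one key of each type with OpenSSL (the file names here are just examples):

```shell
# Generate an RSA 2048 bit key and an ECDSA P-256 key, then compare their sizes
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out demo.rsa.key 2>/dev/null
openssl ecparam -name prime256v1 -genkey -noout -out demo.ecdsa.key
wc -c demo.rsa.key demo.ecdsa.key
```

The PEM-encoded ECDSA key weighs in at only a few hundred bytes, a fraction of the RSA key's size.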

Happily, Let's Encrypt has been issuing certificates for ECDSA keys since the 10th of February, so it is possible to get a free ECDSA certificate accepted by all major browsers. Unfortunately, these certificates are still signed by the Let's Encrypt Authority X3 intermediate certificate, which is an RSA certificate. The ability to get end-entity certificates signed with an ECDSA intermediate certificate by Let's Encrypt is scheduled for this year. But since the certificate chain is validated on the client, this has no impact on the server's TLS performance (apart from the larger intermediate certificate sent to the client).

An ECDSA key and CSR can be generated similarly to its RSA equivalent using OpenSSL.

user@host:~$ openssl ecparam -name <curve> -genkey -noout -out service.ecdsa.key
user@host:~$ openssl req -new -sha256 -key service.ecdsa.key -subj "/" -out service.ecdsa.csr

The <curve> parameter specifies the elliptic curve used for the key. openssl ecparam -list_curves will print a list of all curves built into OpenSSL.

According to the Let's Encrypt forum and some tests I ran against the Let's Encrypt staging server, certificates are only issued for keys using the curves prime256v1 and secp384r1. secp521r1 seemed to be under discussion for some time but is not enabled at the moment. This is not a very satisfying situation, since the supported curves appear to have some flaws and are not universally considered secure. Hopefully, Let's Encrypt will support more trustworthy curves soon.

The generated CSR can be submitted to Let's Encrypt the same way as done when using RSA keys. Here is an example using acme-tiny.

user@host:~$ python acme_tiny.py --account-key letsencrypt.key --csr service.ecdsa.csr --acme-dir acme-challenge/ > service.ecdsa.crt

If the domain validation was successful, this will leave you with a signed ECDSA certificate in service.ecdsa.crt.

user@host:~$ openssl x509 -noout -in service.ecdsa.crt -text
        Subject Public Key Info:
            Public Key Algorithm: id-ecPublicKey
                Public-Key: (384 bit)
                ASN1 OID: secp384r1

To add multiple certificates to your NGINX configuration, the ssl_certificate and ssl_certificate_key directives can be specified multiple times.

server {
        listen [::]:443 ssl;

        # RSA
        ssl_certificate /etc/nginx/tls/service.rsa.crt;
        ssl_certificate_key /etc/nginx/tls/service.rsa.key;

        # ECDSA
        ssl_certificate /etc/nginx/tls/service.ecdsa.crt;
        ssl_certificate_key /etc/nginx/tls/service.ecdsa.key;
}

Make sure that the Let's Encrypt Authority X3 intermediate certificate is appended in exactly one of the files referenced by the ssl_certificate directives. If the intermediate certificate is present in both files, NGINX will send it twice, which might result in errors on the client side.

Restart NGINX and check whether you can connect with ECDSA ciphers enabled.

user@workstation:~$ openssl s_client -connect -status -tlsextdebug -tls1_2 -cipher ECDHE-ECDSA-AES128-SHA256 </dev/null
Certificate chain
 0 s:/
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
Server public key is 384 bit
    Cipher    : ECDHE-ECDSA-AES128-SHA256

Try the same with an RSA cipher.

user@workstation:~$ openssl s_client -connect -status -tlsextdebug -tls1_2 -cipher ECDHE-RSA-AES128-SHA256 </dev/null
Certificate chain
 0 s:/
   i:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
 1 s:/C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
   i:/O=Digital Signature Trust Co./CN=DST Root CA X3
Server public key is 4096 bit
    Cipher    : ECDHE-RSA-AES128-SHA256

Which of the two certificates is actually used depends on the order of the ciphers listed in the ssl_ciphers directive and on the capabilities of the connecting client. For example, the cipher recommendations given in the Mozilla wiki prefer ECDSA ciphers over their RSA counterparts.
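Such an ordering could look like the following fragment (an illustrative subset, not a complete cipher recommendation):

```nginx
        # ECDSA suites listed before their RSA counterparts
        ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
        ssl_prefer_server_ciphers on;
```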

After configuration, go ahead and test your whole TLS setup with public tools like the Qualys SSL Server Test. How to improve your TLS configuration for NGINX is also documented in my article „Secure webspaces with NGINX, PHP-FPM chroots and Let's Encrypt“.

Temporary trashmail addresses with postfix

On Mon, 09 May 2016 21:32:36 +0200 by Falco Vennedey - Write a comment

Warning: The information given in this article is outdated. See the project's website for up-to-date information.

If you need temporary mail addresses (trashmail addresses) in postfix to keep spam out of your inbox, you might be interested in the solution I found for this problem.

The idea is to generate an e-mail address from a date and a secret. Everyone who knows the secret (you and your mail server) will be able to generate the e-mail address for a given date. This address can then be used to sign up for a service, or generated dynamically and published on your website, and will only be valid for the current day.

Here is an example PHP code

<?php
$secret = "<Your secret>";
# "<Your domain>" is a placeholder; the original domain part was omitted here
echo substr(md5(date("Ymd").$secret), 0, 8).'@<Your domain>';

This will print a different e-mail address every day.

One problem with this technique is that someone who starts writing an e-mail at 23:59 likely won't be finished before the address he uses expires. To solve this, two temporary addresses are always valid: the one generated from the current date and the one generated from yesterday's date.
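The same hash can be computed from the shell, which also illustrates the two-address window (the secret and the domain are placeholder values):

```shell
# Print both currently valid temporary addresses:
# one derived from today's date, one from yesterday's
SECRET="mysecret"
for day in today yesterday; do
    datestr=$(date -d "$day" +%Y%m%d)   # GNU date
    localpart=$(printf '%s%s' "$datestr" "$SECRET" | md5sum | cut -c1-8)
    printf '%s@example.org\n' "$localpart"
done
```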

To make this work with postfix, we need a way to map the temporary addresses to a permanent address. Looking at the Postfix Lookup Table Overview, I decided to write a small script utilizing socat (since the traditional netcat package has its flaws and the OpenBSD version of netcat did not perform very well) to implement a simple TCP lookup table for postfix. This is probably not a solution for highly frequented mail servers, but it is sufficient for personal use.

The script is just about 40 lines of code and can be found on GitHub. On Debian 8 you will need the socat package.

root@mailhost:~# apt-get install socat git
root@mailhost:~# git clone

You have to open the actual script postfix-trashmail to configure some parameters:

SECRET="<Your secret>"
LENGTH=8
DOMAIN="<Your domain>"
MAILDROP="<Your permanent address>"
INTERFACE="127.0.0.1"
PORT="<Port>"

SECRET sets the secret as described above and will be used to generate the temporary e-mail addresses, which consist of LENGTH hexadecimal characters followed by @ and DOMAIN.

MAILDROP sets the address that is mapped to the temporary addresses. This is where mail sent to temporary addresses should be delivered to.

INTERFACE and PORT set the IP address and the port for the TCP lookup table. If this script is run on the same machine as the postfix daemon, a local interface should be used.

To test the configuration, run the script without arguments first. This will print the currently valid temporary addresses.

root@mailhost:~/postfix-trashmail# ./postfix-trashmail

Now start the script in listen mode with -l.

root@mailhost:~/postfix-trashmail# ./postfix-trashmail -l

and from another terminal use the postmap command to query for the real address.

root@mailhost:~# postmap -q tcp:

If you query for a currently valid temporary address, the permanent address as specified in MAILDROP should be returned. Querying for any other address should return an empty result.
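Under the hood this exchange follows postfix's tcp_table(5) protocol: postfix sends `get <key>` followed by a newline and expects a reply of the form `200 <value>` (found) or `500 <message>` (not found). The lookup logic can be sketched as follows (simplified — real tcp_table keys and values are additionally %XX-encoded; the secret, domain and maildrop address are example values, not the real configuration):

```shell
SECRET="mysecret"
DOMAIN="example.org"
MAILDROP="me@example.org"

# Answer tcp_table "get <address>" requests read from stdin
lookup() {
    today=$(printf '%s%s' "$(date +%Y%m%d)" "$SECRET" | md5sum | cut -c1-8)
    yesterday=$(printf '%s%s' "$(date -d yesterday +%Y%m%d)" "$SECRET" | md5sum | cut -c1-8)
    while read -r verb key; do
        case "$key" in
            "$today@$DOMAIN"|"$yesterday@$DOMAIN") printf '200 %s\n' "$MAILDROP" ;;
            *) printf '500 no result\n' ;;
        esac
    done
}

printf 'get somebody@example.org\n' | lookup   # prints "500 no result"
```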

To install the script and start it automatically on boot before postfix, copy the script to some commonly used path and install the systemd configuration. On Debian 8 you can do this with

root@mailhost:~# cp postfix-trashmail/postfix-trashmail /usr/local/bin
root@mailhost:~# chown postfix:postfix /usr/local/bin/postfix-trashmail
root@mailhost:~# chmod 700 /usr/local/bin/postfix-trashmail
root@mailhost:~# cp postfix-trashmail/postfix-trashmail.service /etc/systemd/system
root@mailhost:~# systemctl daemon-reload
root@mailhost:~# systemctl start postfix-trashmail
root@mailhost:~# systemctl enable postfix-trashmail
Created symlink from /etc/systemd/system/postfix.service.wants/postfix-trashmail.service to /etc/systemd/system/postfix-trashmail.service.

Then edit your /etc/postfix/ to add the lookup table to the virtual_alias_maps.

virtual_alias_maps = ... tcp:

Reload postfix and test if you receive mails to the addresses printed by the postfix-trashmail command.

root@mailhost:~# postfix reload && postfix-trashmail

NSCD socket in a PHP-FPM chroot leaks user database

On Wed, 20 Apr 2016 20:54:08 +0200 by Falco Vennedey - Write a comment

If you configure your web server to serve PHP scripts with PHP-FPM, you have the option to execute the interpreter in a chrooted environment to prevent scripts from accessing files outside a given directory tree. While this is a great security feature, it comes with some drawbacks if the PHP scripts need resources from outside the chroot to do their work.
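In a PHP-FPM pool configuration, the chroot is enabled per pool with the chroot directive (the pool name and path below are examples):

```
[u000]
chroot = /home/www/u000/chroot
```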

A common example is DNS resolution. If you try to resolve a hostname from inside a PHP-FPM chroot, it won't succeed until you make certain parts of the file system available inside the chroot.

A common workaround suggested by some tutorials is to bind the NSCD socket into the chroot.

root@webhost:~# mkdir -p /home/www/u000/chroot/var/run/nscd
root@webhost:~# touch /home/www/u000/chroot/var/run/nscd/socket
root@webhost:~# mount --bind /var/run/nscd/socket /home/www/u000/chroot/var/run/nscd/socket

This will solve the problem, and DNS resolution will work from inside the chroot.

But since NSCD provides name resolution not just for domain names, but also for the system's usernames and groups, this configuration might also reveal sensitive information to the chroot.

As a proof of concept I wrote some code to demonstrate the possibility of querying user information from NSCD out of a PHP-FPM chroot.

	<?php
	# PoC code to demonstrate querying user information from NSCD
	# out of a PHP-FPM chroot.
	header('Content-Type: text/plain');
	$u = isset($_GET['u']) ? $_GET['u'] : 'root';
	$s = fsockopen('unix:///var/run/nscd/socket', -1, $en, $es);
	$s || die("Err $en:$es");
	# NSCD request: int32 version (2), int32 type (0 = GETPWBYNAME),
	# int32 key length, then the key itself (machine byte order)
	fwrite($s, pack('lll', 2, 0, strlen($u) + 1).$u."\0");
	$r = fread($s, 2048);
	# Walk the response backwards, printing the NUL-separated string fields;
	# two consecutive NULs mark the end of the string area
	$b = '';
	for ($i = strlen($r) - 1; $i >= 0; $i--) {
		if ($r[$i] === "\0") {
			if (isset($r[$i + 1]) && $r[$i + 1] === "\0") break;
			echo strrev($b)."\n";
			$b = '';
		} else {
			$b .= $r[$i];
		}
	}

Upload this script into a PHP-FPM chroot with an available NSCD socket and open it in your browser. Optionally, you can pass ?u=<username> to the script to get information on a certain user. If no username is supplied, root will be used.

This will output the passwd entry of the user:


To close this leak, you can configure NSCD to not serve any database except the hosts database by setting enable-cache to no for most entries in /etc/nscd.conf. However, this might have a negative impact on your overall system performance, since user information then has to be retrieved from the backing database directly.
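A corresponding fragment of /etc/nscd.conf could look like this (the database names are those used by glibc's nscd):

```
enable-cache	passwd		no
enable-cache	group		no
enable-cache	services	no
enable-cache	netgroup	no
enable-cache	hosts		yes
```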

A better alternative is to bind the required NSS libraries and /etc/resolv.conf into the PHP-FPM chroot instead.

For Debian 8:

root@webhost:~# mkdir -p /home/www/u000/chroot/lib/x86_64-linux-gnu /home/www/u000/chroot/etc
root@webhost:~# touch /home/www/u000/chroot/lib/x86_64-linux-gnu/ /home/www/u000/chroot/etc/resolv.conf
root@webhost:~# mount --bind /lib/x86_64-linux-gnu/ /home/www/u000/chroot/lib/x86_64-linux-gnu/
root@webhost:~# mount -o "remount,ro" /home/www/u000/chroot/lib/x86_64-linux-gnu/
root@webhost:~# mount --bind /etc/resolv.conf /home/www/u000/chroot/etc/resolv.conf
root@webhost:~# mount -o "remount,ro" /home/www/u000/chroot/etc/resolv.conf

This will allow the interpreter to resolve domain names from inside the chroot and won't reveal the user database.


On Tue, 15 Mar 2016 15:17:48 +0100 by Falco Vennedey - Write a comment

Today, I spent some time rolling out DNSSEC for some of my zones to enable clients to verify DNS responses from my nameservers. I also added TLSA records (see RFC 6698) for some domains and for my mail server to provide an additional indication that the TLS certificates sent to the client come from me.

To benefit from DNSSEC and DANE in daily internet life, you can install the DNSSEC/TLSA Validator add-on in your browser. It will provide you with two additional indicators beside your address bar. It will give you information on

  • Whether the domain name of the URL you are visiting is secured by DNSSEC.
  • Whether there is a TLSA record available for the domain, and whether the TLSA record corresponds to the TLS certificate or public key of the web server.

In the extension's preferences you can enable the option “Cancel HTTPS connection when TLSA validation fails”. This might protect you from a man-in-the-middle attack even if the attacker was able to obtain a certificate for the domain from a trusted authority.

If you would like to secure your zone with DNSSEC, you first have to find out whether your registrar allows you to upload your KSK to the registry. Unfortunately, this is not common nowadays. If you don't find an option in the registrar's control panel, open a support ticket and ask for it.

Then, you need to generate your keys, sign your zones and supply TLSA records for your certificates. Since the keys used along with DNSSEC are comparatively short, you are well advised to rotate them regularly.
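For a TLSA record of the commonly used type "3 1 1" (end-entity certificate, public key, SHA-256), the digest can be computed from the certificate with OpenSSL. The sketch below generates a throwaway self-signed certificate just to demonstrate the pipeline; with a real certificate only the last two commands are needed (the host name is an example):

```shell
# Create a throwaway self-signed certificate for demonstration purposes
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 \
    -keyout tlsa-demo.key -out tlsa-demo.crt -nodes -days 1 -subj "/CN=example.org" 2>/dev/null

# TLSA 3 1 1: SHA-256 over the DER-encoded SubjectPublicKeyInfo
digest=$(openssl x509 -in tlsa-demo.crt -noout -pubkey \
    | openssl pkey -pubin -outform DER \
    | openssl dgst -sha256 | awk '{print $NF}')
echo "_443._tcp.example.org. IN TLSA 3 1 1 $digest"
```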

Here are some recommendations on how to achieve all this with bind9 and its „inline signing“ feature.