Mounting NFS Through SSH Tunnel


How to Mount an NFS Through an SSH Tunnel

(actually, two SSH tunnels connected end-to-end)

(more specifically, how to mount an NFS that is on a separate subnet, behind a gateway/router/NAT box)

(even more specifically, this applies to an NFS server running 64-bit CentOS, with the clients running 32-bit Debian)

NOTE TO SELF: some parts of this, such as using 1 tunnel instead of 2, should be rewritten now that I have a clue as to what I'm doing.

BEFORE YOU BEGIN: maybe a virtual private network (VPN) is more to your liking? See the write-up here: How To Configure VPN.

Introduction

Let's say you want to mount an NFS on your machine (the NFS client), but are facing the problem that the NFS server is behind some sort of gateway, router, or other network address translation (NAT) device. You can't mount the NFS directly because it isn't visible to your network, so what are you going to specify to your mount -t nfs command, huh? Not the usual, that's what. You see, you have to go through the gateway first, and have the gateway forward the NFS traffic between your client machine and the NFS server.

But how do we set that up? Well, assuming that your gateway/router/NAT box is actually a Linux box that is set up as a gateway/router/NAT using iptables (such as I described in Linux Gateway And Router), we can establish two SSH tunnels - one connecting your client machine and the gateway, and another connecting the gateway and the NFS server. There are probably other ways to do this, but this one is mine.

I will illustrate this with the power of an ASCII diagram with some "fake" IP addresses:


            /-----------------------------\
            |  your machine (NFS client)  |
            |      IP = 72.14.207.99      |
            \-----------------------------/
                          /\
                         /  \
                          ||
              TUNNEL 1    ||      the Internet (or some
                          ||        other network)
                          ||
                         \  /
                          \/
       /-------------------------------------------\
       |  externally-visible IP = 169.229.131.109  |
       |                                           |
       |                  gateway                  |
       |                                           |
       |      subnet-visible IP = 192.168.1.1      |
       \-------------------------------------------/
                          /\
                         /  \
                          ||
              TUNNEL 2    ||
                          ||      the 192.168.1.0/24 subnet
                          ||
                         \  /
                          \/
                 /---------------------\
                 |      NFS server     |
                 |  IP = 192.168.1.24  |
                 \---------------------/

Fancy, huh? OK, let's roll...

Configure the NFS server (note the assumption that it's running CentOS)

First of all, I'm assuming the NFS server is at least working properly on the 192.168.1.0/24 subnet. That is, you can mount the NFS on the subnet machines. If you haven't even gotten that far, check out the section on how to set up an NFS in Cluster RAID.

Let's say we are going to share the directory /nfs/ on our NFS server. We must add the following to /etc/exports:

  /nfs/  127.0.0.1(rw,sync,insecure)

and feel free to append other NFS options that are necessary (you might want to mount read-only, for example).

No, you're not seeing things. We are actually telling the NFS server that it should be able to mount one of its own local directories onto itself via NFS (the address 127.0.0.1 is localhost, accessible through the lo network interface, aka loopback). Why? Oh, you'll see...

Also note that it's exported insecure, which means mount requests originating from ports higher than 1024 will be honored. Why? Once again, you'll see. This is bad practice in general, but because it applies to mount requests coming from localhost only, we are relatively safe. After all, if a hacker is already on localhost (or has managed to tunnel into it via SSH, like we're about to), we have much worse problems.

Make sure the NFS service reloads the newly updated /etc/exports file by doing:

$ exportfs -ra
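
To sanity-check that the export actually took effect, you can list the live export table (both commands are standard on CentOS, though their exact output format varies a bit between versions); you should see /nfs/ exported to 127.0.0.1:

$ exportfs -v

$ showmount -e localhost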

You must also add the following permissions to the /etc/hosts.allow file:

  portmap: 127.0.0.1
  lockd: 127.0.0.1
  mountd: 127.0.0.1
  rquotad: 127.0.0.1
  statd: 127.0.0.1

Yeah, most of those aren't necessary... yet... but they may be later.

One important note: personally, I never let the gateway mount the NFS (so the gateway's subnet-visible IP, 192.168.1.1, should not be in the /etc/hosts.allow file on the NFS server). If I did, we could use just one tunnel instead of two. But it would also mean that if anyone broke into the gateway, they could mount and screw with the NFS, and I really don't want that. The whole point of putting in a gateway to separate out your subnet is to protect it from the Internet and keep the NFS inaccessible by casual means.

OK, now open up the config file /etc/sysconfig/nfs (or create it if it doesn't exist) and set the variable MOUNTD_PORT to some high, static value, such as 32323. This is necessary so that, when the /etc/rc.d/init.d/nfs script starts or restarts the NFS service, it will not use portmap (or some such thing) to assign rpc.mountd a dynamic port. We really need rpc.mountd to listen on a specific, fixed port, because SSH tunnel 2 will go to that specific, fixed port on the NFS server. The nfsd port is usually fixed at 2049, so we don't have to worry about setting that manually.
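
For reference, the relevant line in /etc/sysconfig/nfs would look something like this (32323 is just the arbitrary high port chosen above):

  MOUNTD_PORT=32323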

Let's restart the portmap and NFS services to make them reload all the latest changes:

$ service portmap restart

$ service nfs restart
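
You can then confirm that rpc.mountd really is sitting on the fixed port (and nfsd on 2049) by querying the portmapper:

$ rpcinfo -p localhost

Look for mountd entries listing port 32323 and nfs entries listing port 2049.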

Open SSH tunnel 2 (from gateway to NFS server)

On the gateway, as any user that can SSH into the NFS server, do this:

$ ssh username@192.168.1.24 -L 2222:localhost:2049 -f sleep 600m

$ ssh username@192.168.1.24 -L 3333:localhost:32323 -f sleep 600m

This will open up SSH tunnels between local ports 2222 and 3333 and remote ports 2049 and 32323, respectively, on the NFS server. What does this mean? Well, you have just opened a regular SSH connection between the gateway and the NFS server, except your local ssh process is now listening on ports 2222 and 3333. If a local service sends traffic to those ports, ssh will encapsulate it (encrypted) in the SSH protocol and send it to the remote host (the NFS server). When the sshd daemon on the NFS server receives this traffic, it will deliver it to ports 2049 and 32323 on the NFS server itself. So, whatever you send to ports 2222 and 3333 on the gateway will go through SSH tunnel 2 and come out on ports 2049 and 32323 on the NFS server, where it will appear to have come from localhost. And that is why we specified localhost in the /etc/exports file on the NFS server! The NFS server will think that the mount attempt is coming from itself (via the loopback interface, 127.0.0.1). The reason for the insecure option is that the mount attempts will originate from sshd on the NFS server, which uses a high port. Currently, I have no idea how to fix that, but it would be nice.
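If you want to double-check that the tunnel endpoints are up on the gateway, something like the following should show ssh listening on ports 2222 and 3333 (netstat flags vary a little between distributions):

$ netstat -tlnp | grep -e 2222 -e 3333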

The -f sleep 600m just keeps the SSH connection open in the background, without a terminal, until we can mount something through it; once the mount is in place, the connection will stay open for as long as the NFS is mounted.
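
A handy side effect of this trick is that the tunnels are easy to find and tear down later. One blunt way, since pkill -f matches against the full command line (be careful it doesn't catch unrelated processes):

$ pkill -f 'sleep 600m'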

Open SSH tunnel 1 (from Debian box, i.e. NFS client, to gateway) and mount the NFS

Just as above, we use (from the client, of course):

$ ssh username@169.229.131.109 -L 2222:localhost:2222 -f sleep 600m

$ ssh username@169.229.131.109 -L 3333:localhost:3333 -f sleep 600m

So, just as with SSH tunnel 2, any local connection we make on the client machine to ports 2222 and 3333 will go through the above tunnel to the gateway. sshd on the gateway will receive it and open a matching local connection to ports 2222 and 3333 on the gateway via the loopback interface, so the traffic appears to originate on the gateway itself... and those ports, remember, are being forwarded through SSH tunnel 2 to the NFS server.

Now we mount the NFS to local directory /mnt/nfs/ using:

$ mount -v -t nfs -o port=2222,mountport=3333,tcp localhost:/nfs/ /mnt/nfs/

Make sure you specify tcp as the protocol (of course, we assume your NFS server is recent enough to do TCP), otherwise you may get an error such as this, which made my life a living hell:

  [user@nfs_client]# mount -v -t nfs -o port=2222,mountport=3333 localhost:/home/ nfs/
  mount to NFS server 'localhost' failed: possible invalid port.
  RPC Error: 15 ( Program not registered )

The filesystem should be mounted now. Of course, nothing is stopping you from adding this to /etc/fstab as well to make the mount occur at boot time.
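
For reference, a sketch of what such an /etc/fstab entry might look like, reusing the mount options from above (keep in mind both tunnels must already be up when the mount runs, so a boot-time mount also needs boot-time tunnels):

  localhost:/nfs/  /mnt/nfs/  nfs  port=2222,mountport=3333,tcp  0  0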

NOTE TO SELF: add note about pinging to make sure the tunnel doesn't collapse!
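
One standard OpenSSH option that helps here is a client-side keepalive, which sends a probe through the encrypted channel every N seconds so the connection never sits completely idle; for example, the first tunnel 2 command could become:

$ ssh -o ServerAliveInterval=60 username@192.168.1.24 -L 2222:localhost:2049 -f sleep 600m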

But... why TWO tunnels?

Well... we could just have one tunnel, coordinated by sshd on the gateway, by means of running something like this from the client machine:

$ ssh username@169.229.131.109 -L 2222:192.168.1.24:2049

To the NFS server, it would appear that the gateway (192.168.1.1 on the subnet side) is trying to mount the NFS, because the gateway's sshd daemon is now forwarding traffic from port 2222 on the client machine to the NFS server. But there are two problems with that: (1) you must add the gateway to /etc/hosts.allow on the NFS server, which is less secure because the NFS can then be mounted from an Internet-visible host, and (2) you must add the insecure option for it as well (because the outgoing port from the gateway will be a high port number), which means that any user on the gateway will be able to mount the NFS (since only root can bind low port numbers), including one that might not be allowed to. The combination of (1) and (2) makes one want to use two tunnels.

Stephen Carrier says: I think the two tunnels can be avoided if you use iptables to support a direct ssh connection from the client to the server -- see my note IPTablesTunnel.

NOTE TO SELF: yeah, Steve's right, there's gotta be an easier and equally secure way to do this... must think about it.

Can you mount an NFS through an SSH tunnel on Mac OS X (or another BSD derivative)?

NO! You can't! So don't waste your time trying. Use a VPN instead, such as the one written up here: How To Configure VPN.

Why can't you? Well, because Mac OS X (and, as far as I know, all BSD-derived operating systems) requires using RPC to get the mountd port from portmap. So you can't, for example, use the port and mountport options in the command:

$ mount -v -t nfs -o port=2222,mountport=3333,tcp localhost:/nfs/ /mnt/nfs/

because apparently that would just be toooo hard for Mac OS X, wouldn't it? I could not find any evidence that there is a way around this, but if there is, please let me (Andrew Uzilov) know.

But all kidding aside, there's probably a good reason why Mac OS X does this. I would just love to know what that is.

---

-- Andrew Uzilov - 28 Mar 2006