File access in ESXi 4.1 (NFS and FTP)

I’ve had numerous occasions where I’ve needed to upload files to the actual file systems on an ESXi host. The only ‘proper’ method is the abysmal Virtual Infrastructure Client, and since I work mainly on a Mac that means running Windows under VMware Fusion just to run the client and connect to the server (overkill). It is possible to enable SSH access to the server using the tech support menu, which gives access to the underlying hypervisor and its file systems, so files can be copied over with scp; again this is quite slow and overkill because of the encryption involved. Also, because the SSH service is provided by dropbear there is no SFTP support, which means you can’t mount the filesystems à la FUSE and sshfs.

I should say at this point that the goal of all this was to keep all my ISOs on one server and be able to access them from anywhere. I also wanted a PXE server that could access the ISOs, loopback mount them and present their contents via NFS to the installers started by PXE.
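
As a rough sketch of what I was aiming for on the PXE server side (the paths, ISO name and export options are purely illustrative), a loopback-mounted ISO exported over NFS looks something like this:

[root@pxe]# mount -o loop,ro /srv/isos/example.iso /srv/nfs/example
[root@pxe]# echo '/srv/nfs/example *(ro,no_subtree_check)' >> /etc/exports
[root@pxe]# exportfs -ra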

So looking around I found some FTP binaries that should work on ESXi. Given that console access on ESXi is provided by busybox there is no file command to determine what type of binaries the files are, so I had no easy way of knowing which binaries I could actually run inside ESXi. This all worked fine following the instructions located on the vm-help.com website here, although a few of the instructions are a little bit incorrect, such as the path to tcpd in the inetd configuration; I’ll leave you to fix that. On the PXE server I then used FUSE again, this time with curlftpfs, to mount the filesystem, and this revealed a glaring problem as soon as I loopback mounted the first ISO. Unfortunately curlftpfs stores each file in memory as it downloads it for access by FUSE, so trying to open a 4GB DVD ISO quickly exhausted my PXE server’s memory and the machine became unresponsive. Great.
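
For reference, the curlftpfs side of that attempt looked roughly like the following (the host name, credentials and mount points are placeholders):

[root@pxe]# curlftpfs ftp://user:password@esxi-host /mnt/esxi
[root@pxe]# mount -o loop,ro /mnt/esxi/vmfs/volumes/datastore1/isos/dvd.iso /mnt/iso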

Further research turned up a blog post about someone trying to use unfs to enable NFS sharing between two ESXi boxes; more specifically, it mentioned that Linux binaries would work fine in the ESXi service console. One thing that was slightly confusing is that although ESXi is x86_64 (64-bit), the binaries you use in the service console have to be 32-bit, otherwise you’ll get a confusing error claiming the binaries can’t be found when you try to run them, thanks to busybox’s odd handling of errors. I present below the binaries required for NFS in ESXi :-

nfs binaries for x86
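
Since there is no file command on ESXi itself, a quick sanity check is to run file against the extracted binaries on an ordinary Linux machine before copying them over and confirm they are 32-bit (the file names here are just examples):

[user@linux]$ file sbin/unfsd sbin/portmap
sbin/unfsd:   ELF 32-bit LSB executable, Intel 80386, ...
sbin/portmap: ELF 32-bit LSB executable, Intel 80386, ...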

These are pretty easy to use: scp the tarball over to ESXi and untar it in /tmp, then all that’s left is to copy the files in /tmp/sbin into /sbin and the files in /tmp/etc into /etc. The /etc/exports file contains one entry giving access to /vmfs/volumes, which means that accessing the NFS share will give you the UUID paths for the datastores containing VMs and ISOs. To start the NFS server, start portmap first and then start unfsd, which should be launched as (unfsd -d &); this is because unfsd is unable to detach from the console on start up (something to do with busybox, I assume).
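
Put together, the steps on the ESXi host look roughly like this (the tarball name is a placeholder, so adjust it and the tar flags to suit the actual download):

[root@ESXi]# cd /tmp
[root@ESXi]# tar xzf nfs-binaries-x86.tar.gz
[root@ESXi]# cp /tmp/sbin/* /sbin/
[root@ESXi]# cp /tmp/etc/* /etc/
[root@ESXi]# /sbin/portmap
[root@ESXi]# /sbin/unfsd -d &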

One final note: once another machine connects to the NFS share, portmap starts using 50%-70% CPU and needs stopping and starting before further NFS clients can connect. I’m still looking into this, however a cron job to restart the process every few minutes should do the job (see the sketch below).
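
A minimal sketch of such a cron entry, assuming busybox crond is running and the root crontab lives at /var/spool/cron/crontabs/root (both worth verifying on your build), would be something like:

*/5 * * * * kill $(pidof portmap) ; /sbin/portmap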

SSH tunnels

Sometimes getting to various servers, especially virtualised systems, can be a nightmare due to firewall rules restricting access to the physical machine or just down to the network architecture itself. For this example we’ll use two virtual machines located behind NAT’d firewalls on two different physical hosts; the firewalls permit outbound SSH access and that is it.

[PHYS_A [VM_A:5901]]  <–/–>  [PHYS_B [VM_B]]

VM_A needs to run a VNC server that will bind to VM_A:5901; however, with no access to the firewall etc. there is no way to set up any port forwarding to this internal VM. We could use iptables on VM_A and then again on PHYS_A to forward 5901 from VM_A’s IP to PHYS_A (roughly as sketched below), however we would still be behind a firewall.
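
For completeness, that rejected iptables approach on PHYS_A would look something like the following (the address is a placeholder, and it still leaves us stuck behind the external firewall):

[root@PHYS_A]# iptables -t nat -A PREROUTING -p tcp --dport 5901 -j DNAT --to-destination <VM_A_IP>:5901
[root@PHYS_A]# iptables -A FORWARD -p tcp -d <VM_A_IP> --dport 5901 -j ACCEPT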

To accomplish this sharing, a server running SSH is required; the location of this server is completely irrelevant as long as it’s accessible with a standard user account. This server will be called SSH and both machines can access it through the firewall.

[PHYS_A [VM_A:5901]]  <—> [SSH] <—>  [PHYS_B [VM_B]]

The next step is to push the port on VM_A to the SSH server using the following command:

[user@VM_A]$ ssh -R5901:127.0.0.1:5901 -C user@SSH

This will open a session that creates a listener on port 5901 on the server SSH; this can be confirmed by running netstat -a on the server SSH and seeing that 5901 is now listed as a listening TCP port.
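
For example (the exact output format will vary between operating systems):

[user@SSH]$ netstat -an | grep 5901
tcp        0      0 127.0.0.1:5901          0.0.0.0:*               LISTEN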

[PHYS_A [VM_A:5901]]  <—> [SSH:5901] <—>  [PHYS_B [VM_B]]

The next step is to pull the port on SSH down to VM_B, where we have the client software (vncviewer). The following command pulls the port from the remote host and binds it to a local port on VM_B.

[user@VM_B]$ ssh -L5901:127.0.0.1:5901 -C user@SSH

Port 5901 now exists on VM_B and tunnels through SSH to VM_A.

[PHYS_A [VM_A:5901]]  <—> [SSH:5901] <—>  [PHYS_B [VM_B:5901]]

The user on VM_B can now use the service as if it were actually running on the local host itself.

[user@VM_B]$ vncviewer localhost:5901

Notes for SSH flags:

-R   [port to bind to on the remote host] : [local IP] : [local port]

-L   [local port to use] : [remote IP] : [remote port]

-C (adds compression)