This is a 32-bit binary which I think requires a fairly old kernel version, hence it only works on ESXi 4.0; I will try to get an updated release out for 4.1. (Note: ESXi 5 ships with sftp-server already.)
I came across something interesting while fiddling earlier, after spending about two hours building a static release of the OpenSSH server that was going to replace dropbear. I had gotten to the point where I could build i386 binaries with no stray library dependencies, and sshd would start and listen on the port defined in /etc/ssh/sshd_config. Unfortunately, starting sshd in debug mode revealed numerous glibc errors during connections, which explained why I couldn't connect. At this point I don't think there is any realistic way of replacing dropbear with a complete OpenSSH solution, even with static linking. Even the OpenSSH sftp binary I had compiled couldn't cope with a system call not returning UIDs correctly, meaning it would report a FATAL error and close every time.
Given that OpenSSH wasn't going to replace dropbear, I researched whether there was a newer version of dropbear, perhaps with sftp support; unfortunately there isn't. Eventually I came across notes on a blog mentioning that dropbear "supports" the OpenSSH sftp-server. After restoring ESXi back to its default filesystem settings (ssh enabled), attempting to sftp to ESXi returns the following error.
After compiling a slightly older version of OpenSSH (statically), I found a build of sftp-server that, once placed in /sbin on ESXi, allows full use of sftp (including sshfs mounting). The binary is below.
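A minimal sketch of installing the binary and verifying it from a client, assuming the ESXi host is reachable as "esxi" and ssh is already enabled (the hostname and mount point are placeholders):

```shell
# Copy the statically linked sftp-server onto the ESXi host
# and make sure it is executable in /sbin:
scp sftp-server root@esxi:/sbin/sftp-server
ssh root@esxi chmod +x /sbin/sftp-server

# dropbear hands the sftp subsystem to /sbin/sftp-server, so both
# plain sftp and FUSE/sshfs mounts now work from a client:
sftp root@esxi
mkdir -p /mnt/esxi
sshfs root@esxi:/vmfs/volumes /mnt/esxi
```

These commands require a live ESXi host, so they are shown as a procedure rather than a runnable script.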
I've had numerous occasions where I've needed to upload files to the actual filesystems on an ESXi system. The only 'proper' method is the abysmal Virtual Infrastructure Client, and working mainly on a Mac means I need VMware Fusion running Windows just to run the client and connect to the server (overkill). It is possible to enable ssh access to the server using the tech support menu, which gives access to the underlying hypervisor and its filesystems, so you can scp files to them; but again this is quite slow and overkill due to the encryption being used. Also, because dropbear provides the ssh service, there is no sftp, which means you can't mount the filesystems via FUSE and sshfs.
I should say at this point that the goal of all this was to let me keep all my ISOs on one server and be able to access them from everywhere. I also wanted a PXE server to be able to access the ISOs, loopback mount them, and then present their contents via NFS to the installers started by PXE.
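The ISO-to-installer step on the PXE server can be sketched as follows; the paths, ISO name, and export options are illustrative, not from the original setup:

```shell
# Loopback mount an ISO read-only on the PXE server:
mkdir -p /srv/isos/example
mount -o loop,ro /srv/isos/example.iso /srv/isos/example

# Export the mounted contents over NFS so PXE-booted installers
# can reach them (subnet and options are placeholders):
echo '/srv/isos/example 192.168.0.0/24(ro,no_subtree_check)' >> /etc/exports
exportfs -ra
```

This fragment needs root and an NFS server installed, so it is a configuration sketch rather than a tested script.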
So looking around, I found some ftp binaries that should work on ESXi. Given that console access on ESXi is provided by busybox, there is no file command to determine the binary type of a file, so I had no way of knowing which binaries I could run inside ESXi. This all worked fine following the instructions located on the vm-help.com website here, although a few of the instructions are slightly incorrect, such as the path to tcpd in inetd; I'll leave you to fix that. So on the PXE server I used FUSE again, this time with curlftpfs, to mount the filesystem, and this revealed a glaring problem as soon as I loopback mounted the first ISO. Unfortunately, curlftpfs stores each file in memory as it downloads it for access by FUSE, so trying to open a 4GB DVD ISO quickly exhausted my PXE server's memory and the machine became unresponsive. Great.
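For reference, the curlftpfs approach that hit the memory problem looked roughly like this (hostname, credentials, and ISO path are placeholders):

```shell
# Mount the ESXi datastore over ftp via FUSE:
mkdir -p /mnt/esxi-ftp
curlftpfs ftp://user:pass@esxi/ /mnt/esxi-ftp

# The failure mode: curlftpfs buffers the file in memory as FUSE
# reads it, so loopback mounting a large ISO pulls the whole image
# into RAM on the PXE server.
mount -o loop,ro /mnt/esxi-ftp/isos/big-dvd.iso /mnt/iso
```

Shown as a procedure only, since it requires the ftp-enabled ESXi host to run.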
Further research turned up a blog post about someone trying to use unfs to enable NFS sharing between two ESXi boxes; more specifically, it mentioned that Linux binaries work fine in the ESXi service console. One thing that was slightly confusing is that ESXi is x86_64 (64-bit), yet binaries for the service console have to be 32-bit, otherwise you'll get a confusing error that the binaries can't be found when you try to run them, due to busybox's odd handling of errors. I present below the binaries required for NFS on ESXi:
These are pretty easy to use: scp the tarball over to ESXi and untar it in /tmp; all that's left is to place the files in /tmp/sbin into /sbin and the files in /tmp/etc into /etc. The /etc/exports file contains one entry giving access to /vmfs/volumes, which means that accessing the NFS share will give you the UUID paths for the disks containing VMs and ISOs. To start the NFS server, start portmap first and then start unfsd, which should be started the following way (unfsd -d &); this is because unfsd can't detach from the console on startup (something to do with busybox, I assume).
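The steps above can be sketched as a command sequence on the ESXi console; the tarball name is a placeholder for the download above:

```shell
# After scp'ing the tarball to /tmp on the ESXi host:
cd /tmp
tar xzf unfs-esxi.tgz
cp /tmp/sbin/* /sbin/
cp /tmp/etc/* /etc/

# /etc/exports from the tarball exposes the datastores via
# a single entry for /vmfs/volumes.

# Start portmap first, then unfsd. The -d flag keeps unfsd in the
# foreground (it cannot daemonize under busybox), so background
# it with & instead:
portmap
unfsd -d &
```

As with the other fragments, this needs the ESXi host itself, so it is a procedure rather than a runnable script.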
One final note: once another machine connects to the NFS share, portmap starts using 50%-70% CPU and needs stopping and starting before other NFS clients can connect. I'm still looking into this; in the meantime, a cron job restarting the process every few minutes should do the job.
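A possible crontab entry for that workaround, assuming busybox crond on the ESXi host (the interval and the kill/restart commands are illustrative):

```shell
# Restart portmap every 5 minutes to clear the CPU-spin problem:
*/5 * * * * kill $(pidof portmap); portmap
```

This is a configuration fragment for the root crontab, not a standalone script.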