Using esxtop and being presented with garbled output?
Then change your TERM type and run it again:
# esxtop
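The fix sketches out as follows; xterm is a safe guess for a terminal type that esxtop's curses-style display can handle (ansi also tends to work):

```shell
# dropbear sessions often arrive with an unhelpful TERM setting;
# xterm (or ansi) works for esxtop's curses-style display.
TERM=xterm
export TERM
# now re-run esxtop
```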
This is a 32-bit binary which I think needs a fairly old kernel version, hence it only works on 4.0; I will try to get an updated release for 4.1. (*Note: ESXi 5 comes with sftp-server already.)
I came across something interesting while fiddling earlier, after spending about two hours building a static release of the OpenSSH server to replace dropbear. I'd got to the point where I could build an i386 release of the binaries with no stray library dependencies, and sshd would start and listen on the port defined in /etc/ssh/sshd_config. Unfortunately, starting sshd in debug mode revealed numerous glibc errors during connections, which explained why I couldn't connect. At this point I don't think there is any realistic way of replacing dropbear with a complete OpenSSH solution, even with static linking. Even testing the compiled OpenSSH sftp binary showed that it couldn't cope with a system call not returning UIDs correctly, so it would repeatedly report a FATAL error and close.
Given that dropbear wasn't going to be replaced with OpenSSH, I researched whether there was a newer version of dropbear, perhaps with sftp support; unfortunately not. Eventually I came across notes on a blog mentioning that dropbear "supports" OpenSSH's sftp-server. After restoring ESXi back to its default filesystem settings (with ssh enabled), attempting to sftp to ESXi returned an error.
After compiling a slightly older version of OpenSSH (statically), I found that its sftp-server binary, once placed in /sbin on ESXi, allows full usage of sftp (including sshfs mounting). The binary is below.
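A sketch of the installation; the host name esxi-host and the client mount point are my placeholders, not from the original setup:

```shell
# On a workstation: copy the static sftp-server binary across
# (host name and paths are assumptions -- adjust for your setup).
scp sftp-server root@esxi-host:/sbin/sftp-server

# On the ESXi console: make sure it is executable.
chmod 755 /sbin/sftp-server

# Back on the client: sftp, and therefore sshfs mounting, should now work.
sftp root@esxi-host
sshfs root@esxi-host:/vmfs/volumes /mnt/esxi
```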
I've had numerous occasions where I've needed to upload files to the actual filesystems on an ESXi system. The only 'proper' method is using the abysmal Virtual Infrastructure Client, and working mainly on a Mac means I need VMware Fusion running Windows just to run the client to connect to the server (overkill). It's possible to enable ssh access to the server using the tech support menu, which gives access to the underlying hypervisor and its filesystems, so you can scp files over; again, this is quite slow and overkill due to the encryption being used. Also, because dropbear is used for ssh, there is no sftp, which means you can't mount the filesystems à la FUSE and sshfs.
I should say at this point that the goal of all this was to let me keep all my ISOs on one server and access them from everywhere. I also wanted a PXE server to be able to access the ISOs, loopback mount them, and present the contents via NFS to the installers started by PXE.
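On the PXE server side, that idea sketches out roughly like this; all paths and the subnet are examples of mine, not from the original setup:

```shell
# Loopback mount an ISO and export its contents over NFS (run as root;
# paths and network below are illustrative placeholders).
mkdir -p /srv/isos/centos
mount -o loop /srv/isos/centos.iso /srv/isos/centos

# Export read-only to the PXE install network, then reload exports.
echo '/srv/isos/centos 192.168.0.0/24(ro)' >> /etc/exports
exportfs -ra
```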
So, looking around, I found some ftp binaries that should work on ESXi. Given that console access for ESXi is provided by busybox, there is no file command to determine the binary type, so I was unaware of which binaries I could run inside ESXi. This all worked fine following the instructions on the vm-help.com website here, although a few of the instructions are slightly wrong, such as the path to tcpd in the inetd configuration; I'll leave you to fix that. On the PXE server I used FUSE again, this time with curlftpfs, to mount the filesystem, and this revealed a glaring problem as soon as I loopback mounted the first ISO. The problem is that curlftpfs stores each file in memory as it downloads it for access by FUSE, so trying to open a 4GB DVD ISO quickly exhausted my PXE server's memory and made it unresponsive. Great.
Further research turned up a blog post about someone trying to use unfs to enable NFS sharing between two ESXi boxes; more specifically, it mentioned that Linux binaries work fine in the ESXi service console. One thing that was slightly confusing is that ESXi is x86_64 (64-bit), yet binaries for the service console have to be 32-bit; otherwise you'll get a confusing error that the binaries can't be found when you try to run them, due to busybox's odd handling of errors. I present below the binaries required for NFS in ESXi:
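Since busybox provides no file command, the practical workaround is to check candidate binaries on an ordinary Linux box before copying them over; anything reporting ELF 64-bit will trigger busybox's misleading "not found" error on the console. The binary names below are examples:

```shell
# Run on any normal Linux machine, not on ESXi (which lacks `file`).
# A usable service-console binary reports "ELF 32-bit".
file -L ./unfsd ./portmap

# No `file` either? Byte 5 of an ELF header encodes the class:
# 01 = 32-bit (usable), 02 = 64-bit (will not run on the console).
od -An -tx1 -j4 -N1 ./unfsd
```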
These are pretty easy to use: scp the file over to ESXi and untar it in /tmp; all that's left is to place the files from /tmp/sbin into /sbin and the files from /tmp/etc into /etc. The /etc/exports contains one entry giving access to /vmfs/volumes, which means that accessing the NFS share will give you the UUID paths for the disks containing VMs and ISOs. To start the NFS server, start portmap first and then start unfsd, which should be started as `unfsd -d &`; this is because unfsd can't detach from the console on startup (something to do with busybox, I assume).
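The sequence on the ESXi console looks roughly like this; the tarball name is a placeholder for the download above:

```shell
# After scp'ing the tarball to /tmp on the ESXi host:
cd /tmp
tar xzf nfs-binaries.tgz        # placeholder name for the tarball above
cp /tmp/sbin/* /sbin/
cp /tmp/etc/* /etc/

# portmap must be up before unfsd can register with it.
portmap
# unfsd cannot detach from the busybox console, so background it yourself.
unfsd -d &
```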
One final note: once another machine connects to the NFS share, portmap starts using 50%-70% CPU and needs stopping and starting before other NFS clients can connect. I'm still looking into this, but a cron job restarting the process every few minutes should do the job.
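As a sketch, the workaround crontab entry could look like the following; the kill/pidof approach and the portmap path are my assumptions about what busybox provides, so adjust to taste:

```
# root crontab entry -- restart portmap every five minutes (illustrative)
*/5 * * * * kill $(pidof portmap); /sbin/portmap
```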
For a point release, I'm not sure why VMware decided to completely change the layout of files on the CD ISO along with various system files, but they have. One change is beneficial: it improves the method for creating a USB stick, which for previous versions of ESXi is documented here. They have also changed console and ssh access to the hypervisor, which can now be enabled from the orange console screen under TSM (Technical Support Mode) settings.
Writing to a USB stick:
The ISO now contains a single file called imagedd.bz2 in its root, which just needs bunzip2 to decompress it and then dd'ing to a USB stick as documented before.
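The whole write comes down to two commands; /dev/sdX is a placeholder for your USB device, so verify it (e.g. with dmesg) before running dd:

```shell
# Decompress the raw image from the root of the 4.1 ISO, then write it out.
bunzip2 imagedd.bz2                # produces imagedd
dd if=imagedd of=/dev/sdX bs=1M    # /dev/sdX is YOUR stick -- check first!
```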
At the moment there is nothing about this on the internet, so it was a case of going through a few files to find it. Previously, /etc/pam.d/common-password contained all of the password complexity requirements, as documented in the VMware KB. Now, however, all of the password requirements are located in /etc/pam.d/system-auth, so this is the file to edit if you don't want insane password requirements for all users.
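For reference, the line to look for in /etc/pam.d/system-auth is the pam_passwdqc one. The values below are illustrative, not a recommendation: min=N0,N1,N2,N3,N4 sets the required length for passwords built from one to four character classes, with "disabled" forbidding that class count entirely.

```
# /etc/pam.d/system-auth -- adjust pam_passwdqc's requirements here.
# Example values only; pick lengths appropriate for your environment.
password   requisite   /lib/security/$ISA/pam_passwdqc.so retry=3 min=8,8,8,7,6
```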
As previously mentioned, I've always been a Xen advocate for the hypervisor sitting on the physical machine, given the ready availability of a paravirtualised kernel for my Linux VMs. However, a requirement to get to grips with VMware has led me to deploy ESXi on my systems so that I can have a proper look at the OS and how it manages virtual machines. I've got disks all over the place, but the server I use for all my testing has an existing disk setup (which has reached capacity), meaning I can't use those disks. I found an old IDE disk to install in there, but fiddling around with oem.tgz (explained another time) never worked for me at this point. So I picked up a USB key for €8 and decided to do a USB boot with the hypervisor on it.
This is a pretty straightforward task and can be accomplished in one of two ways: either botching the install halfway through, or pulling the image from the install CD and doing a raw write to the USB device. I opted for pulling the image from the CD and dd'ing it onto my USB key as follows:
1. Acquire VMware ESXi 4.0 from VMware
2. Mount the CD (in Linux with mount -o loop <path to ISO> <mount point>, or by double clicking in OS X)
3. Copy install.tgz from the CD and extract it in a working location, which should give you a directory structure.
4. bunzip2 /usr/lib/Vmware/install/VMware-VMvisor-big-164009-x86_64.dd.bz2 (or the equivalent file)
5. dd if=<path to .dd file> of=<path to USB device>
6. Change BIOS settings to boot from USB and boot up.
7. Set IP address, download VSphere client and off you go.
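Steps 2-5 above can be sketched as a small script; the ISO name and the USB device are assumptions to adjust for your download, and the extracted path follows the file name from step 4:

```shell
#!/bin/sh
# Sketch of steps 2-5; verify every path and ESPECIALLY the USB device.
ISO=VMware-VMvisor-ESXi-4.0.iso    # your downloaded ISO (name assumed)
USB=/dev/sdX                       # your USB key -- double-check first!
MNT=$(mktemp -d)
WORK=$(mktemp -d)

mount -o loop "$ISO" "$MNT"
tar xzf "$MNT/install.tgz" -C "$WORK"
bunzip2 "$WORK/usr/lib/Vmware/install/VMware-VMvisor-big-164009-x86_64.dd.bz2"
dd if="$WORK/usr/lib/Vmware/install/VMware-VMvisor-big-164009-x86_64.dd" of="$USB" bs=1M
umount "$MNT"
```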
Refer to http://www.vm-help.com for any issues.
This may seem a truly useless idea to a lot of people, but I've always found that having a 'lab' at home capable of reproducing pretty much every system scenario is very useful. Dealing daily with VMware ESX servers and VMs in a production environment means that I can never "fiddle" around, get to grips with what's under the hood, or play with unsupported or hidden functionality. My Xen server has allowed me to create pretty much every scenario I might need: Oracle RAC clusters, interoperability between various operating systems, and various development environments. When I first received the server I use for my environments, my first choice was going to be a VMware ESX setup; however, the hardware requirements restrict most installations to a subset of hardware configurations, meaning I couldn't install it. Originally it would have been impossible to install under a Xen HVM because the virtualised network adapters are unsupported by ESX; luckily, from 3.4.0 onwards the xen-tools have been updated to allow use of the e1000/e100 network devices.