This is a section from my web pages: Musings/Experiments With A Virtual Data Center
Installing The Virtual Hosts
I have been using VMware for about 10 years. When it was first introduced I jumped on it as a way to build a class demonstration network that lived inside my laptop, and I used it on the road when I taught my Sendmail classes. Back then laptops had small 2 GB disk drives, so I could not have multiple hosts with even minimal Linux installs. Drawing on my background with Sun diskless clients, I made a custom Red Hat install that split the root and user file systems: each VMware host had its own root partition of about 100 MB, and they all shared an NFS-mounted /usr partition. Now, I could not install these clients in the normal way. I had to clone them: take a generic VMware host from a tar image, rename the directory and files, and muck around a bit in the host.vmx file. A script installed in the generic host would then reconfigure the host with a specific hostname and network configuration. This was useful for my class since it allowed me to build new hosts in a matter of minutes.
I decided to use the same approach for this configuration. I started with a single Linux host with 64 MB of memory, a 1.2 GB root partition, and a 128 MB swap partition. Not a lot of memory, you say? Well, all of my servers are minimal web servers with no X Windows and no unneeded services, so they should run fine with 64 MB of memory. I did a minimal CentOS software install, de-selecting everything. This became the vmhost that I cloned.
A note about OS installs
With almost 25 years of experience managing UNIX/Linux servers, I have come to the conclusion that most (if not all) server installs include too much software. I am a fan of the security philosophy that "you are immune to bugs in software you do not run". I am also a fan of its corollary: "you are even more immune to bugs in software that is not even installed".
Why do you need X Windows on a server? How often do you walk up to a server's console and log in to do extended work? We manage these servers over the network. When I go to a data center, one of the first things I do is fire up my laptop and connect to the network. Yes, some work needs to be done on the physical console, but most of the time this is command-line work, often in single-user mode. Bye-bye, X Windows.
I used to install a bunch of server software packages "just in case I might need it". With yum or some other package manager, installing additional software is a snap: yum install package. So now I deselect all of the software packages I can and then use yum to add back what I really want. The problem with allowing the install process to include Apache or MySQL is that quite often the install process will add a bunch of other stuff "just in case". Yes, it is an annoyance to add individual packages manually, but yum does a good job of automatically installing the other packages that are required. At times you have to futz around when things are missing, like a Perl module, but that is part of the development process: you figure it out once, document it, and then you know it. It also gives you a much better understanding of what is what in your OS. A side benefit of a minimal install is that it cuts down the number of security/bug fixes you have to install. I find that my servers need only about 1/4 of the software patches that my Linux desktop/laptop hosts need.
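The add-back step amounts to a couple of commands. The package names here are just examples of what a minimal web server might need; yum resolves and installs the hard dependencies (and nothing speculative) automatically:

```shell
# add back only what this server actually needs;
# yum pulls in required dependencies on its own
yum install httpd
yum install mysql-server
```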
Note: This minimal installation of CentOS needed only 22 patches in the year after Red Hat RHEL 5.2's release. As more server packages are added, this number will increase.
Now, I am disgusted with a minimal Red Hat OS install. I deselected everything and the OS install is still over 500 MB. What I am really disgusted about is that this "minimal" OS install no longer includes Perl. What is in that 500 MB of stuff?
Now one of my goals with this project is to get cfengine to install the packages for me using yum. But we will get to that when we get there.
Cloning the servers
Now that I have my generic CentOS 5.2 server image installed, it is time to start cloning. The first step is to tar up the vmhost:
tar cvfz LinuxCentOS5.2.tar.gz LinuxCentOS5.2
- Procedure:
- Un-tar the vmhost:
tar xvfz LinuxCentOS5.2.tar.gz
- Rename the directory and the .vmx and .vmxf files:
mv LinuxCentOS5.2 server_name
mv server_name/LinuxCentOS5.2.vmx server_name/server_name.vmx
mv server_name/LinuxCentOS5.2.vmxf server_name/server_name.vmxf
- Edit the server_name/server_name.vmx file, changing the displayName and extendedConfigFile settings to reflect the server_name
- In the VMware console, change the network hardware configuration to connect the host to the right networks
- Start the new virtual server
- On the VMware host console, log in as root
- Edit the /etc/sysconfig/network file to change the hostname and the local network gateway (as needed)
- Edit the ifcfg-eth# files in the /etc/sysconfig/network-scripts directory, changing BOOTPROTO to static and setting IPADDR and BROADCAST to the correct IP addresses
- If you want each host to have a separate set of ssh keys, you need to remove the existing ones:
rm /etc/ssh/*key*
They will be regenerated when the host is rebooted
- Reboot the puppy
- Check that you can reach it over the network
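The un-tar/rename/edit steps above are mechanical enough to script. Here is a sketch; the clone_vm function name is my own, and the demo at the bottom fabricates a dummy template tarball so the commands can be exercised without a real VMware image. The sed edit assumes the old name appears in the displayName and extendedConfigFile lines of the .vmx file, as it does in my generic image:

```shell
#!/bin/sh
set -e

clone_vm() {
    template="$1"   # e.g. LinuxCentOS5.2
    name="$2"       # the new server_name
    tar xzf "$template.tar.gz"                    # un-tar the generic vmhost
    mv "$template" "$name"                        # rename the directory
    mv "$name/$template.vmx"  "$name/$name.vmx"   # rename the config files
    mv "$name/$template.vmxf" "$name/$name.vmxf"
    # point displayName and extendedConfigFile at the new name
    sed -i "s/$template/$name/g" "$name/$name.vmx"
}

# Demo: build a throw-away template tarball, then clone it as "www1"
mkdir -p LinuxCentOS5.2
printf 'displayName = "LinuxCentOS5.2"\nextendedConfigFile = "LinuxCentOS5.2.vmxf"\n' \
    > LinuxCentOS5.2/LinuxCentOS5.2.vmx
: > LinuxCentOS5.2/LinuxCentOS5.2.vmxf
tar czf LinuxCentOS5.2.tar.gz LinuxCentOS5.2
rm -rf LinuxCentOS5.2

clone_vm LinuxCentOS5.2 www1
grep displayName www1/www1.vmx
```

The network and ssh-key steps still have to happen inside the guest, so they are not part of this script.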
From this point on, I am going to do all configuration remotely over the VMware host-only networks. I will use the VMware web GUI only to start, stop, and monitor the virtual hosts.
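For reference, after the network edits in the procedure above, the two files end up looking something like this. The hostname, gateway, and addresses here are purely illustrative:

```shell
# /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=www1.example.com
GATEWAY=192.168.10.1

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.10.11
NETMASK=255.255.255.0
BROADCAST=192.168.10.255
```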
Odds and Ends with installing the VMware hosts
When I went to install the VMware Tools, I discovered that their setup script requires Perl, and since I had deselected everything when I installed Linux, Perl was not included. So I mounted the CentOS ISO image and installed Perl using rpm. My VMware hosts do not route to the Internet, so I could not just use yum.
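The ISO workaround looks roughly like this. The ISO file name, mount point, and Perl package version are illustrative and will vary by release and architecture:

```shell
# loopback-mount the install media and install perl with rpm
mkdir -p /mnt/centos
mount -o loop,ro CentOS-5.2-i386-bin-DVD.iso /mnt/centos
rpm -ivh /mnt/centos/CentOS/perl-*.rpm
umount /mnt/centos
```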