harker at harker dot com
2010072501
Puppet is a configuration management tool
Puppet decouples the syntax of the configuration management tool from the syntax of the underlying OS and applications
This allows you to define a high-level idea like a user, application, or service, and Puppet translates it into the commands required by the client OS
Configuration information is specified in "recipes"
Recipe files end with .pp
The recipes define what the system should look like (be configured as)
Puppet then issues OS- and application-specific commands to change the configuration to match the desired result
Recipes are written in Puppet's own declarative language (Puppet itself is written in Ruby)
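A quick sketch of that declarative style; the package and user names here are just illustrations:
# declare the desired state; Puppet picks the right provider for the OS
package { "httpd": ensure => installed }   # on RHEL/CentOS this becomes a yum install
user { "jdoe": ensure => present }         # on Linux this becomes a useradd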
Puppet has a passive server called the "puppet master":
Runs the XML-RPC/HTTPS server listening on port 8140
Acts as the certificate authority for the puppet clients
Has recipes the clients can download
Has repositories of files the clients can download
Puppet clients pull configuration information from the puppet master
Client first collects local host configuration information using facter
Client then requests a master recipe to configure the client
This master recipe then pulls in additional recipes based on the client's configuration
The puppet client then translates this information into host specific commands to run
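The facts facter gathers show up as variables inside recipes, so a recipe can adapt to the client; a minimal sketch (the motd content is just an illustration):
# $hostname and $operatingsystem are standard facter facts
file { "/etc/motd":
    content => "$hostname runs $operatingsystem\n",
}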
Configuration components are organized into resources
Resources are grouped into collections
Resources are made up of a type, title and a series of attributes:
file { "/etc/hosts": owner => "root"; group => "root: }
Type is file
Title is /etc/hosts
The attributes set the owner and group to root
I use RHEL/CentOS so I use the Fedora Project EPEL repository
rpm -ivh http://download.fedora.redhat.com/pub/epel/5/x86_64/epel-release-5-3.noarch.rpm
I then use yum on the puppet master:
yum install puppet puppet-server
And on each client I install just the client:
yum install puppet
Make sure the puppet user exists:
id puppet
The puppet master is configured in two places:
/etc/puppet/puppet.conf
/etc/puppet/manifests/site.pp
You do not typically need to change the puppet.conf file
The site.pp pulls in all of the puppet recipes
The simplest site.pp file is:
file { "/etc/hosts": owner => "root", group => "root", mode => "644", }
chkconfig puppetmaster on
service puppetmaster start
Look for errors in:
/var/log/puppet/masterhttp.log
Normally the puppet client runs as a daemon
You can run it manually
puppetd -v --test
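On the very first run the client submits a certificate request; unless autosigning is enabled, sign it on the puppet master (the hostname follows the earlier example):
puppetca --list
puppetca --sign puppet.kvm.harker.com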
If we break the hosts file:
chmod 664 /etc/hosts
And run puppet in verbose mode:
puppetd -v --no-daemonize
We should see:
notice: Starting Puppet client version 0.25.5
info: Caching catalog for puppet.kvm.harker.com
info: Applying configuration version '1279238728'
notice: //sysfiles/File[/etc/hosts]/mode: mode changed '664' to '644'
notice: Finished catalog run in 0.02 seconds
import "classes" import "modules" import "nodes" # The filebucket option allows for file backups to the server filebucket { main: server => 'puppet.kvm.harker.com' } # Set global defaults - including backing up all files to the main # filebucket and adds a global path File { backup => main } Exec { path => "/usr/bin:/usr/sbin/:/bin:/sbin" }
Puppet has three types of recipe files: classes, node definitions, and modules
Puppet classes define how to install and configure files, applications, services, etc...
A class is defined with:
class title { }
Resources are then added to the class
A class can have multiple resources:
# /etc/puppet/manifests/classes/sysfiles.pp
class sysfiles {
    file { "/etc/hosts":
        owner => "root",
        group => "root",
        mode  => "644",
    }
    file { "/etc/passwd":
        owner => "root",
        group => "root",
        mode  => "644",
    }
}
This class is then included in a node definition with:
include sysfiles
The node definitions specify which classes and modules are applied to which hosts
Typically the node definitions are put in a nodes.pp file:
/etc/puppet/manifests/nodes.pp
The default node is used if there is no match for a specific host
A simple nodes.pp file that includes the sysfiles class:
# /etc/puppet/manifests/nodes.pp
node default {
    include sysfiles
}
A complex node can be configured by inheriting a simpler node
I start with a basenode that all hosts inherit
This includes things I want done on all nodes:
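A sketch of that pattern; basenode and the web01 hostname are illustrative:
node basenode {
    include sysfiles
    include ntp
}
node 'web01.kvm.harker.com' inherits basenode {
    # web01 gets everything in basenode plus its own classes
    include httpd    # hypothetical web server class
}
One class I include on every node is ntp: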
# /etc/puppet/manifests/classes/ntpd.pp
class ntp {
    package { ntp: ensure => present }
    file { "/etc/ntp.conf":
        owner   => root,
        group   => root,
        mode    => 444,
        backup  => false,
        source  => "puppet:///files/etc/ntp.conf",
        require => Package["ntp"],
    }
    service { "ntpd":
        enable    => true,
        ensure    => running,
        subscribe => [Package[ntp], File["/etc/ntp.conf"]],
    }
}
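The puppet:///files/etc/ntp.conf source assumes a [files] mount on the master; a minimal /etc/puppet/fileserver.conf sketch (the allow pattern is an example):
[files]
    path /etc/puppet/files
    allow *.kvm.harker.com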
A puppet module is a portable collection of classes, configuration resources, templates and files that configures a particular application or function
You typically make a modules sub-directory:
mkdir -p /etc/puppet/modules
Then a sub-directory for each module:
mkdir -p /etc/puppet/modules/sudo
Puppet manifest files go in a manifests sub-directory:
mkdir -p /etc/puppet/modules/ntpd/manifests
You would then add the class file as init.pp so the module autoloader can find it:
mv /etc/puppet/manifests/classes/ntpd.pp /etc/puppet/modules/ntpd/manifests/init.pp
You can also have a module install files:
/etc/puppet/modules/ntpd/files
Modules can also edit files on the fly:
/etc/puppet/modules/ntpd/templates
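The resulting module tree, as a sketch (init.pp and the two ntp.conf copies follow the examples above):
/etc/puppet/modules/ntpd/
    manifests/init.pp
    files/ntp.conf
    templates/ntp.conf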
In /etc/puppet/modules/ntpd/templates/ntp.conf:
# /etc/ntp.conf, configuration for ntpd
. . .
fudge 127.127.1.0 stratum <%= local_stratum %>
. . .
includefile /etc/ntp.server.conf
includefile /etc/ntp.client.conf
The ntp.pp recipe file:
. . .
$local_stratum = $ntp_local_stratum ? {
    ''      => 13,
    default => $ntp_local_stratum,
}
config_file { "/etc/ntp.conf":
    content => template("ntpd/ntp.conf"),
    require => Package[$ntp_package],
}
. . .
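$ntp_local_stratum can be set per node before the class is evaluated; a hypothetical node entry:
node 'ntp1.kvm.harker.com' {
    $ntp_local_stratum = 10   # overrides the default of 13
    include ntp
}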
Why I dislike Kickstart/Jumpstart
The real problem:
OK, I have kickstarted the system. Now I want to:
And the answer is: We then run [ssh|script|rsync|cfengine|puppet]...
If you are going to maintain the system with a tool after you kickstart the server, why not simply use that tool for all of the configuration after a minimal core kickstart install?
One of the problems with configuration management systems is that they are not always used
You deploy the server with puppet, then some time later, in an emergency, you ssh to the box and make a change.
Unless you update the puppet configuration, you can no longer use puppet to rebuild the box if the server crashes.
Lay out partitions so the box can be rebuilt
Rebuild your servers periodically
This is painful the first dozen times because people cheat
Doing this regularly keeps you honest
The wrong time to discover that puppet cannot rebuild your server is after that server has crashed
Much easier to do it in a planned maintenance window
By giving developers and staging the same environment as production, you avoid gremlins like missing packages, different libraries, different kernel versions, and different patches
Development and staging may have additional requirements beyond production.
Add them to the puppet configuration
Rebuild staging and development build/test servers before each (major?) code deploy.