Coffee & Beer

Rantings and Ravings of the technical sort

Building a Brand New Multi-master Puppet4 Infra

I’d say one of the “hardest” parts of deploying puppet is building out the initial infrastructure. I say “hardest” because it’s really not that hard, but if it’s a clean-room environment, with no config management to use as a jumping-off point, it can involve lots and lots of manual config and trial and error, and who really wants any of that when you’re trying to build something to automate everything else?

I recently, after some testing/playing around in a multi-vagrant environment, came up with what I’ve found to be a pretty good recipe, and since there doesn’t seem to be a TON out there for detailed, from-scratch howtos, I figured I’d write one up.

What I ended up with looks, at the high level, like this:

  • Centralized PuppetCA system (master of masters, one might say)
  • Git control-repo, roles/profiles/hiera heavy, no in-house modules at this point.
  • Consul, for service registration/health checks; clients use its DNS to find masters (scales and is health-check based; there’s a quick lookup sketch right after this list)
  • Consul, for deployment (since we can deploy to all masters in parallel, and consul knows which systems are masters)
  • 99% automated spin-up of new masters (dns_alt_names are really all that’s not automatable)
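
To give a flavor of the DNS side: here’s a hypothetical sketch of how a client could ask Consul’s DNS interface for a healthy master. Port 8600 is Consul’s default DNS port; the service name “puppet” and the localhost agent are assumptions for illustration, not the final design.

#!/usr/bin/ruby
# Hypothetical: look up healthy puppet masters via Consul DNS (SRV records).
require 'resolv'

dns = Resolv::DNS.new(nameserver_port: [['127.0.0.1', 8600]])
records = dns.getresources('puppet.service.consul', Resolv::DNS::Resource::IN::SRV)
records.each do |r|
  puts "#{r.target}:#{r.port}"   # only instances passing their health checks show up
end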

Odds are this is going to get pretty long, and might even get split into multiple posts, but, the high-high level steps are:

  • Build your git (control)repo
  • Build first master/ca from ^^^
  • Build consul server(s) from ^^^
  • Build Jenkins from ^^^
  • Setup Jenkins build/deploy via consul exec
  • Build a new master
  • Build puppetdb

Tmux + Ssh-agent

So I just wanted to get a quick one out there, before I forgot about this little hack I came up with.

Problem: At work we’ve got an ssh key we use to get into most things. It’s got a nice, beefy passphrase on it that no one can remember, so the normal operating mode is to fire up ssh-agent, load the key (which involves some hackery since it’s owned by root and we need to load it into user environments), and then connect with that key. I tend to operate out of a long-running remote tmux session, so I wanted the ability to fire up tmux, load the key, and have ssh-agent use that on any and all new windows I open within that tmux session. If I close the tmux session, sure, I need to reload the key, but so long as I keep it open, I can come and go as I please and open/close windows while keeping the key loaded.

Solution: So this involved a couple of hacky bits to get going.

First, in .tmux.conf I set:

set -g update-environment -r
setenv -g SSH_AUTH_SOCK $HOME/.ssh/ssh_auth_sock.$HOSTNAME

So: copy my environment on new sessions, and set an environment variable SSH_AUTH_SOCK to $HOME/.ssh/ssh_auth_sock.$HOSTNAME. That’s because we’re going to force ssh-agent to use that socket when we load it up for the first time in a given tmux instance.

In my .zshrc, I created a function:

sa () {
#if we have a pid AND are working
  if [ `pgrep -u $USER ssh-agent` ];then
    export SSH_AGENT_PID=`pgrep -u $USER ssh-agent`
    export SSH_AUTH_SOCK=$HOME/.ssh/ssh_auth_sock.$HOSTNAME
  else
    `ssh-agent -k`
    rm -rf $SSH_AUTH_SOCK
    unset SSH_AUTH_SOCK
    unset SSH_AGENT_PID
    export SSH_AUTH_SOCK=$HOME/.ssh/ssh_auth_sock.$HOSTNAME
    `killall -u $USER ssh-agent`
    ssh-agent -a $SSH_AUTH_SOCK -s
    export SSH_AGENT_PID=`pgrep -u $USER ssh-agent`
  fi
}

Told you it was hacky. Basically, if ssh-agent is already running, fix/make sure the env vars are set correctly. Otherwise, make sure ssh-agent is really dead, unset all its vars, set them to what WE want, and fire it back up, forcing it to use them.

When starting a new tmux session, I run the sa function in the first window. Any windows after that will get those env vars and I get to keep my ssh-agent!

Bonus: Per-host history

export HISTFILE="$HOME/.zsh_history_$HOSTNAME"
setopt inc_append_history
setopt share_history

Back From the Dead!

So what, 4 years or something close to it without updates? YEASH.

A lot has changed, but I’m hoping to pick this back up a little bit with some updates on things I’ve been up to. But first, figured I’d do the quick quick quick update of what’s happened the last few years:

  • Worked @ Harvard FAS Research Computing (was there ‘06 -> '14) as Research Computing Specialist

  • Left, went to the Broad Institute as Senior Systems Admin; about 5 months in, a DevOps “team” was formed

  • Spent a little over 1 year @ the Broad, working on:

    • Openstack (mostly Nova) for a scalable VM compute farm (lsf+SGE8+Univa)
    • Puppet (tore down and rebuilt a misused PE environment into Puppet Open Source, built out Puppet in GCE)
    • Docker, lots and lots of docker, integrated with Puppet onsite and in GCE.
    • Jenkins and software dev-ci in general (lots of docker builds/publishes/deploys)
    • Consul, built a few datacenter clusters and connected them, used for service registration
  • Left the Broad to return to Harvard, this time as Senior DevOps HPC Engineer @ Harvard Medical School

Now that I’m back at Harvard, we’re working on a bunch of things, mainly focused around a new HPC cluster build, but lots and lots of other small and big projects as well.

The most exciting part is getting to do a bunch of stuff I’ve done before, but this time “right” or, at least, more planned out.

Stay tuned. I’ve got some posts planned already:

  • Bootstrapping a Puppet 4 (puppetserver 2.x) environment using almost nothing but puppet itself
  • Our version of roles+profiles
  • Why I’m not a fan of Ansible, at least for config mgmt (deploys and structured remote exec/orch though are great!)
  • Gems and puppet modules I’ve started publishing

and hopefully more!

Rc_whatis, Finger for Systems

As I’ve mentioned, at work we’ve got a lot of systems, physical and virtual. All sorts of different hardware, specs, ages, etc. It’s hard to keep track of, and harder to quickly say “This is what that is” with confidence. Puppet and the ability to collect and query facts from systems has been a HUGE help with this, of course. We use Foreman to provide a nice shiny web interface to this, but it’s not the fastest, and most of us live in the CLI day in and day out. So, I wanted a way to quickly, from any system, find out about any other system in our infrastructure. So, rc_whatis was born.

It’s really just a hacky bit of ruby. I’ve opened up the /facts REST endpoint on our puppet masters so that any of our systems can get the facts of any other system, ssl cert or no. We don’t have any secrets in this info.

[root@nichols2tst ~]# rc_whatis --help
Usage: whatis [options] <hostname>
-j, --json                       JSON output
-y, --yaml                       YAML output
-p, --pp                         Pretty Print output
-a, --all                        Use all facts

As you can see, it’s pretty straightforward to call. It even provides a few serialized forms of output so other scripts can call this (more on that in a few!). Output looks like:

[root@nichols2tst ~]# rc_whatis nichols2tst
Hostname: nichols2tst
Born_on: 2012-08-24
Manufacturer: Red Hat
Productname: KVM
Serialnumber: Not Specified
Operatingsystem: CentOS
Operatingsystemrelease: 6.3
Architecture: x86_64
Processor0: QEMU Virtual CPU version (cpu64-rhel6)
Processorcount: 1
Memorytotal: 996.77 MB
Kernelrelease: 2.6.32-279.5.2.el6.centos.plus.x86_64
Ipaddress: 10.X.X.X
Macaddress: 00:16:3E:XX:XX:XX
Vlan: 375
Location_row: virtual
Location_rack: virtual
Location_ru: virtual
Uptime: 10 days
Virtual: kvm
Hypervisor: kvm03a

That’s the default output, of a select set of facts. -a, --all would get everything, of course. I’ve already mentioned born_on in another post. location_* comes from a hacky little interface (yaml) to racktables we have (this system is virtual, so, no physical location). And hypervisor is a conditional query based on a fact we populate on production hypervisors that marks them as such, as well as a fact listing the VMs running on a given hypervisor.

The coolest bit is it now exists in root’s $PATH on ALL of our systems, so this info, for any host, is now a few keystrokes away all the time!

Even better, our Nagios alerts now call this when crafting the emails they send us, so when a system drops, there is no question as to what it is/where/etc. It’s all right there in the email, along with a link to its full Foreman page and entry in Nagios, of course along with the normal alert info!
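
Since the output has those serialized forms, wiring it into something like those alert emails is just a JSON.parse away. Here’s a rough, purely hypothetical sketch of a wrapper (the summary format is made up) that turns rc_whatis -j into a one-liner for a notification:

#!/usr/bin/ruby
# Hypothetical consumer of `rc_whatis -j` output, e.g. for a notification helper.
require 'json'

host = ARGV[0] or abort "usage: #{$0} <hostname>"
facts = JSON.parse(`rc_whatis -j #{host}`)

puts "#{facts['hostname']} | #{facts['operatingsystem']} #{facts['operatingsystemrelease']}" \
     " | #{facts['productname']} | row: #{facts['location_row']} rack: #{facts['location_rack']}"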

Source? Here’s your source!

(rc_whatis.rb)
#!/usr/bin/ruby

require 'optparse'
require 'yaml'
require 'puppet'
require 'puppet/node'
require 'puppet/node/facts'
require 'pp'
require 'json'
require 'yaml'


#Setup
#
#List of values we want, unless we call w/ --all
values = ["hostname","born_on","manufacturer","productname","serialnumber","operatingsystem","operatingsystemrelease","architecture","processor0","processorcount","memorytotal","kernelrelease","ipaddress","macaddress","vlan","location_row","location_rack","location_ru","uptime","virtual"]

#Puppet Server Info:
puppet_server="puppet"
puppet_port="8140"
puppet_url="https://" + puppet_server + ":" + puppet_port

options={}
OptionParser.new do |opts|
        opts.banner = "Usage: whatis [options] <hostname>"

        opts.on("-j","--json","JSON output") do |j|
                options[:json] = j
        end
        opts.on("-y","--yaml","YAML output") do |y|
                options[:yaml] = y
        end
        opts.on("-p","--pp","Pretty Print output") do |p|
                options[:pp] = p
        end
        opts.on("-a","--all","Use all facts") do |a|
                options[:all] = a
        end
end.parse!

if ARGV.length != 1
        puts "Please pass a hostname, see --help"
        exit
else
        host = ARGV[0]
end

if host.match(".edu")
        fqdn = host.to_s
else
        fqdn = host.to_s + ".domain.edu"
end

fact_url = puppet_url + "/production/facts/" + fqdn
fact_cmd = "curl -s -k -H \"Accept: yaml\" " + fact_url
rawfacts = `#{fact_cmd}`
if rawfacts.match("Could")
        puts rawfacts
        exit
end
rawfacts = rawfacts.sub("!ruby/object:Puppet::Node::Facts","")
rawfacts = YAML::parse(rawfacts)
rawfacts = rawfacts.transform

#We can now access things like:
# rawfacts["values"]["virtual"]

facts = Hash.new
rawfacts["values"].each_pair do |a,b|
        facts[a] = b
end

#Okay, we have a hash of all facts.
#Make second hash of specific facts


facts2 = Hash.new
if options[:all] == true
  facts2 = facts
else
values.each do |val|
  facts2[val] = facts[val]
end
end

#Lets see if it is virtual so we can add a fact about where it is running...
if facts2["virtual"] == "kvm"
        hypervisor_url = puppet_url + "/production/facts_search/search?facts.kvm_production=true"
        hypervisor_cmd = "curl -s -k -H 'Accept: YAML' " +  hypervisor_url
        hypervisor_yaml = `#{hypervisor_cmd}`
        hypervisors = YAML::load(hypervisor_yaml)
        hypervisors.each do |hyp|
                hyp_facts_url = puppet_url + "/production/facts/" + hyp
                hyp_facts_cmd = "curl -s -k -H \"Accept: yaml\" " + hyp_facts_url
                hyp_facts = `#{hyp_facts_cmd}`
                hyp_facts = hyp_facts.sub("!ruby/object:Puppet::Node::Facts","")
                hyp_facts = YAML::parse(hyp_facts)
                hyp_facts = hyp_facts.transform
                vms = hyp_facts["values"]["kvm_vms"].to_a
                vms.each do |vm|
                        if vm.match(facts2["hostname"])
                                facts2["hypervisor"] = "#{hyp}"
                        end
                end
        end
        #Add "hypervisor" to the list of values we care about
        values.push("hypervisor")
end

#output time
if options[:json] == true
        puts facts2.to_json
elsif options[:yaml] == true
        puts facts2.to_yaml
elsif options[:pp] == true
        pp facts2
else
  if options[:all] == true
        pp facts2
  else
        values.each do |val|
                puts "#{val.capitalize}: #{facts2[val]}"
        end
  end
end

Could it be written better? Yep. But it’s quick and it’s a start!

Born on Dates for Systems

Wrote this fact a while ago but thought it was worth throwing up here.

We’ve got a lot of systems. Our inventory is slightly lacking, and many were built a long long time ago. Many times we’ve found ourselves asking “When the hell was X system built?” or maybe “rebuilt?”. Thus, for RHEL/CentOS systems at least, we can get a fact for that:

(born_on.rb)
#!/usr/bin/ruby
require 'facter'

begin
          Facter.operatingsystem
rescue
          Facter.loadfacts()
end
os = Facter.value("operatingsystem")
if os.match(/CentOS|RedHat/) then
  unless  `rpm -q --last basesystem`.empty?
      Facter.add("born_on") do
          setcode do
              date = `rpm -q --qf '%{INSTALLTIME}' basesystem`
              born_on = `date --date=@#{date} +%F`.chomp
              born_on
          end
      end
  end
end

Giving us:

born_on => 2011-11-03

Updated: Puppet Facts for Puppet Classes

Just the other day on Google+ I got a comment from someone who had found my old “Puppet facts about puppet classes” post and had used it. Sadly, I had gone through a few revisions after that post and never followed up. There was a bit of a memory leak, and I decided I wanted things done a little differently. Instead of creating a fact per class (and having no fact if the class wasn’t used), I’d rather have a list of the classes, as one fact, that I can regex/etc on. Our group has recently started creating facts like this as JSON arrays so we can parse the data easily later, and it’s a bit more readable even if not.

(puppet_classes_2.rb)
#!/usr/bin/ruby
#Get puppet classes, from /var/lib/puppet/classes.txt
#
require 'facter'
require 'json'
begin
        Facter.hostname
rescue
        Facter.loadfacts()
end
hostname = Facter.value('hostname')

classes_txt = "/var/lib/puppet/classes.txt"

if File.exists?(classes_txt) then
        f = File.new(classes_txt)
        classes = Array.new()
        f.readlines.each do |line|
                line = line.chomp.to_s
                line = line.sub(" ","_")
                classes.push(line)
        end
        classes.delete("settings")
        classes.delete("#{hostname}")
        Facter.add("puppet_classes") do
                setcode do
                        classes.to_json
                end
        end
end

Thus, a node would have a fact like:

puppet_classes => ["base","salt::minion","ssh::service"]
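
Because that fact is a JSON array, anything downstream can turn it back into a real list instead of regex-ing a string. A minimal, hypothetical example of reading it back on a node (this assumes the custom fact is somewhere facter can load it from):

#!/usr/bin/ruby
# Hypothetical: parse the puppet_classes fact back into an array and act on it.
require 'facter'
require 'json'

classes = JSON.parse(Facter.value('puppet_classes') || '[]')
puts "salt minion lives here" if classes.include?('salt::minion')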

Getting Racktables Location Info Into Puppet

At work we have had Racktables (http://racktables.org/) for a while for tracking where things are. It’s… okay. It’s not the best, but, eh, it works. We need to do a better job with clean data etc., but it works.

One thing we don’t like, however, is its current lack of an API. We can query the db directly, but that’s kinda clunky. So, the other night I had an idea. A few hours later, it was basically done. A YAML API (well, a cheap man’s API) for racktables!

So, let me set this up. I want a yaml document for each host, with location info. This way, facter, or anything else, can pull down that location info. When someone changes something in racktables, the yaml document should be updated. I don’t need real time, but let’s say 30 minutes. Sounds like a cron job…

So, a script, running from cron, reading the racktables database and spitting out YAML documents of the data on a per-host basis. Okay. What data? Well, I want Row (for us this is datacenter+row), Rack, RU, and Height of the system (how many RUs does it take up?). Since racktables does have some asset tags, might as well pull that too so we can compare to puppet/foreman while we’re at it.

So, a yaml document like:

(rackfact_example.yaml)
---
ru: "16"
row: "DataCenterB Row 6"
rack: "1"
height: "1"
asset: "326859"

And I want things at a url like

http://server/rackfacts/systems/HOSTNAME

Also, at the request of a co-worker, just the endpoint /systems will return ALL of the systems.

So, after a BUNCH of digging into the racktables DB and dusting off my SQL, I came up with:

(rack2yaml.rb)
#!/usr/bin/env ruby
require 'yaml'
require 'mysql'
path="/var/www/rackfacts/"

my = Mysql::new("racktables","rackuser","rackpass","racktables")

rackobjs=my.query("select distinct RackObject.name,RackSpace.unit_no,Rack.name,RackRow.name,RackObject.asset_no,RackObject.id from RackObject,RackSpace,Rack,RackRow where RackObject.id = RackSpace.object_id AND Rack.id = RackSpace.rack_id AND RackRow.id = Rack.row_id AND RackObject.objtype_id = 4;")

objects=Array.new
rackobjs.each do |row|
        obj=Hash.new
        obj["name"] = row[0].to_s.downcase.strip.delete(' ').delete('#').delete('/').delete('"')
        obj["ru"] = row[1].to_s.strip.delete('"')
        obj["rack"] = row[2].to_s.strip
        obj["row"] = row[3].to_s.strip
  obj["asset"] = row[4].to_s.strip
  obj["id"] = row[5]
        objects.push(obj)
end

#Need to get the height of a given system...
objects.each do |obj|
  height = my.query("SELECT COUNT(distinct unit_no) FROM `RackSpace` WHERE object_id = #{obj['id']};")
  obj["height"] = height.fetch_row[0]
end
#Writing Systems, so lets do this in /systems/

path = path + "systems/"

#Lets clean the existing ones, so stale thigns are removed.
clean = "rm -rf #{path} && mkdir #{path}"
%x[ #{clean} ]
objects.each do |thing|
        fpath = path+thing["name"]
        yobj=Hash.new
        yobj["ru"]=thing["ru"]
        yobj["rack"]=thing["rack"]
        yobj["row"]=thing["row"]
  yobj["asset"]=thing["asset"]
  yobj["height"]=thing["height"]
        f=File.open(fpath,'w')
        f.write(yobj.to_yaml)
        f.close
end

allpath=path + "index.html"
all=File.open(allpath,'w')
all.write(objects.to_yaml)
all.close

So, this ruby script:

  • Sets up some stuff
  • connects to mysql
  • runs a query to get most (not height) of the system info.
  • Height is a second query, as racktables doesn’t know about height directly, but rather has a single object occupying multiple RUs in a rack…
  • So, we query for that for each system, counting the times a given object appears in a Rack, distinct on unit_no’s, as racktables also has a front/back/middle format (so a 4U system that goes front to back might have 12 entries!)
  • We then merge all this data together in an array of hashes
  • Clean out our path
  • dump all the yaml documents
  • dumps out the whole array for the ALL systems bit

Ta Da!
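
And since the ALL-systems endpoint my co-worker asked for is just one big YAML array, it’s easy to script against too. A quick, hypothetical sanity check (the “racktables” hostname and the served path are assumptions about the web server setup) that lists anything missing an asset tag:

#!/usr/bin/env ruby
# Hypothetical consumer of the ALL-systems YAML document written above.
require 'net/http'
require 'yaml'

body = Net::HTTP.get('racktables', '/rackfacts/systems/')
YAML::load(body).each do |obj|
  puts obj['name'] if obj['asset'].to_s.empty?
end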

Okay, so now we have those yaml documents, and every 30 minutes this gets regenerated, so anything we clean up/remove gets reflected as well. Now what? Let’s pull that in as some facts!

(rackfacts.rb)
#!/usr/bin/env ruby

require 'facter'
require 'net/http'
require 'yaml'
require 'timeout'
begin
  Facter.hostname
rescue
  Facter.loadfacts()
end

if Facter.value('is_virtual') == "true"
  virtual = "virtual"
  Facter.add("location_ru") do
      setcode do
          virtual
      end
  end
  Facter.add("location_rack") do
      setcode do
          virtual
      end
  end
  Facter.add("location_row") do
      setcode do
          virtual
      end
  end
  Facter.add("location_height") do
      setcode do
          virtual
      end
  end
else   
  hostname = Facter.value('hostname')
  rackfact_host = "racktables"
  rackfact_dir  = "/rackfacts/systems/#{hostname}"
  unknown = "unknown"
  begin
      Timeout::timeout(2) {
          rescode=Net::HTTP.get_response rackfact_host,rackfact_dir
          if (rescode.code =~ /^[23]\d{2}$/)
              rackfact = YAML::load(rescode.body)
              ru = rackfact["ru"]
              rack = rackfact["rack"]
              row = rackfact["row"]
              height = rackfact["height"]
              Facter.add("location_ru") do
                  setcode do
                      ru
                  end
              end
              Facter.add("location_rack") do
                  setcode do
                      rack
                  end
              end
              Facter.add("location_row") do
                  setcode do
                      row
                  end
              end
              if height != nil
                  Facter.add("location_height") do
                      setcode do
                          height
                      end
                  end
              end
          else
              Facter.add("location_ru") do
                  setcode do
                      unknown
                  end
              end
              Facter.add("location_rack") do
                  setcode do
                      unknown
                  end
              end
              Facter.add("location_row") do
                  setcode do
                      unknown
                  end
              end
              Facter.add("location_height") do
                  setcode do
                      unknown
                  end
              end
          end
      }
  rescue Timeout::Error
      Facter.add("location_ru") do
          setcode do
              unknown
          end
      end
      Facter.add("location_rack") do
          setcode do
              unknown
          end
      end
      Facter.add("location_row") do
          setcode do
              unknown
          end
      end
      Facter.add("location_height") do
          setcode do
              unknown
          end
      end
  end
end

Now, we can query the nice puppet/foreman APIs for location data! Better yet, I can use these with storedconfigs to do things like add location info to Ganglia! Or have systems in one data center get specific configs (dns? puppet master? AD?)

Our config management system is now location aware!

Proper Kvm Xml Backups With Ruby

So, while kvm keeps the xml describing a running domain in /var/run/libvirt/qemu, it turns out this isn’t exactly the cleanest thing to back up. I HAD been doing this, with a simple rsync, but realizing that, I decided we should do it properly, with the libvirt ruby bindings and all. So, here it is:

(dump_xmls.rb)
#!/usr/bin/env ruby

require 'libvirt'
hostname=`hostname --short`.chomp
conn = Libvirt::open('qemu:///system')
vms = Hash.new
conn.list_domains.each do |domid|
        dom = conn.lookup_domain_by_id(domid)
        vms[dom.name] = dom.xml_desc
end

destination = "/n/kvm_stor01/xml/#{hostname}/"
vms.each do |dom,xml|
  file_dest = "#{destination}#{dom}" + ".xml"
  puts "writing xml for #{dom} to #{file_dest}"
  f=File.open(file_dest, "w")
  f.write(xml)
  f.close
end

Pretty simple. Figure out the hostname (this is deployed to multiple kvm hosts), get the list of domains running here, iterate over each one to get its name and xml, and populate a hash with these. Then, iterate that hash, dumping the xml into a file on our shared storage, which is in turn checkpointed and backed up. Ta da.
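
The payoff of having clean XML dumps is that bringing a guest back (or standing it up on another hypervisor) is just handing the XML back to libvirt. A hypothetical restore sketch using the same bindings (the file path here is made up):

#!/usr/bin/env ruby
# Hypothetical: re-define (and optionally start) a guest from a backed-up XML file.
require 'libvirt'

xml = File.read('/n/kvm_stor01/xml/kvm03a/someguest.xml')
conn = Libvirt::open('qemu:///system')
dom = conn.define_domain_xml(xml)  # persistent definition
dom.create                         # start it, if that's what you're after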

Clean Puppet Up After a Rebuild Automatically With Cobbler Triggers

Being a shop that is mostly hpc, our compute nodes are pretty disposable, so we rebuild them from time to time. We’re coming up on a push to normalize them a bit, and will be looking to rebuild a bunch in big batches. One of the headaches, that isn’t REALLY a headache, is cleaning up the puppet certs when a system is rebuilt. We autosign puppet certs, so the new ones will come in just fine, but you’ve got to remember to clean the old ones during/before the rebuild. Add storedconfigs to this, and salt minion keys, and there is a good bit of cleanup to get done during a rebuild.

So, first, I wrapped the 3 things we want to clean up, in a script:

(puppet_rebuild.rb)
#!/usr/bin/env ruby

def printusage(error_code)
  puts "Usage: #{$0} [ list of hostnames as stored in hosts table ]"
  exit(error_code)
end

printusage(1) unless ARGV.size > 0

ARGV.each { |hostname|
        system("puppet cert clean #{hostname}")
        system("puppetstoredconfigclean.rb #{hostname}")
        system("salt-key -d #{hostname}")
}

So, pretty obviously, that cleans the puppet cert, the storedconfigs db entry, and the salt key (puppet master = salt master).

Okay, so, one stop shopping there, but I want this automatic. Well, we use Cobbler to build systems/define kickstarts, and one of the last things in all of our kickstarts is:

wget "http://cobbler/cblr/svc/op/trig/mode/post/system/SOME_HOSTNAME" -O /dev/null

Which lets cobbler know the build is done. This can optionally trigger scripts in /var/lib/cobbler/triggers/install/post, so I added one:

(clean_puppet.sh)
#!/bin/bash

#$1 = type
#$2 = system name (NOT DNS/FQDN)
#$3 = IP

name=$2
hostname=`curl -s -x "" http://localhost:3000/hosts?format=yaml  | grep $name | sed -e 's/  - //g'`

hostname_fixed=${hostname//[[:space:]]}

/usr/bin/puppet_rebuild_host $hostname_fixed

So, it’s passed 3 arguments: the object type (system), the system name, and the IP. I take the name, query our Foreman API for the FQDN (we have a few domains so I can’t assume hostname.my.domain.com), and then call the script above to clean out everything for that host!

So, when it comes to puppet/salt certs, we don’t care now. New systems are automatically accepted, and if you rebuild, the old ones are removed and new ones accepted, just like that!