RabbitMQ on Ubuntu via Puppet?

Ubuntu has a certain really annoying property. Alright it has several, but the one that I’m talking about right now is the insistence that it start services upon installation. While I’m never a fan, there are certain times when it chafes more than normal.

Here’s the deal. I’m using the PuppetLabs RabbitMQ module, and I’m trying to spin up an instance in Vagrant. The initialization is simple enough:

class { 'rabbitmq':
  port                  => '5672',
  service_manage        => true,
  environment_variables => {
    'RABBITMQ_NODENAME' => 'server',
  },
}

This works – mostly. The package gets installed, but when puppet tries to manage the service, it fails:

[Screenshot: the Puppet run failing to start the rabbitmq-server service]


The reason for the failure is that the port is already in use. If you connect to the machine and try to start the service manually, you get this:

vagrant@precise64:~$ sudo /etc/init.d/rabbitmq-server start
 * Starting message broker rabbitmq-server
 * FAILED - check /var/log/rabbitmq/startup_{log, _err}

When you check the startup_log, the error description confirms it: the broker can't start its listener because the port is already in use!

vagrant@precise64:~$ sudo lsof -P | grep LISTEN | grep 5672
beam.smp 2211 rabbitmq 16u IPv6 11556 0t0 TCP *:5672 (LISTEN)

Sure enough, ‘ps auxf’ shows this:

rabbitmq 2211 0.4 8.3 1073128 31160 ? Sl 10:07 0:02 \_ /usr/lib/erlang/erts-5.8.5/bin/beam.smp -W w -K true -A30 -P 1048576 -- -root /usr/lib/erlang -progname erl -- -home /var/lib/rabbitmq -- -pa /usr/lib/rabbitmq/lib/rabbitmq_server-3.2.3/sbin/../ebin -noshell -noinput -s rabbit boot -sname rabbit@precise64 -boot start_sasl -kernel inet_default_connect_options [{nodelay,true}] -sasl errlog_type error -sasl sasl_error_logger false -rabbit error_logger {file,"/var/log/rabbitmq/rabbit@precise64.log"} -rabbit sasl_error_logger {file,"/var/log/rabbitmq/rabbit@precise64-sasl.log"} -rabbit enabled_plugins_file "/etc/rabbitmq/enabled_plugins" -rabbit plugins_dir "/usr/lib/rabbitmq/lib/rabbitmq_server-3.2.3/sbin/../plugins" -rabbit plugins_expand_dir "/var/lib/rabbitmq/mnesia/rabbit@precise64-plugins-expand" -os_mon start_cpu_sup false -os_mon start_disksup false -os_mon start_memsup false -mnesia dir "/var/lib/rabbitmq/mnesia/rabbit@precise64"

As it turns out, of course, starting the service upon installation is by design.
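For what it's worth, Debian and Ubuntu do ship an escape hatch for this behavior: invoke-rc.d consults /usr/sbin/policy-rc.d (if it exists) before acting on a service, and an exit code of 101 means "action forbidden". A minimal sketch of the contract, written to the current directory here just for illustration (in real use the script lives at /usr/sbin/policy-rc.d):

```shell
# Sketch of Debian's policy-rc.d mechanism: invoke-rc.d runs this script
# before starting/stopping a service; exit code 101 forbids the action.
cat > policy-rc.d <<'EOF'
#!/bin/sh
exit 101
EOF
chmod +x policy-rc.d

# invoke-rc.d would call it roughly like this and honor the exit code:
./policy-rc.d rabbitmq-server start
echo "policy-rc.d exit code: $?"
```

With that script installed for the duration of the package installation (and removed afterward), apt-get would install rabbitmq-server without starting it.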

So here’s my question. Presumably, other people have done this. The module is documented as tested under Ubuntu 12.04, which is what I’m running. How do you make this work? The only thing I’ve found that makes the puppet run not fail is to set 'service_manage => false', and I don’t want that. I want the service to be managed, and I want the initial default instance that runs right after installation to die.

What’s the right way to do that? For what it’s worth, I searched on ServerFault and found someone else with the same problem, but there hadn’t been an answer after a month yet, which is why I brought it here.

Can you give me a hand? Thanks in advance!


Got it! I popped onto the #rabbitmq IRC channel on Freenode and started asking questions. I was going down the completely wrong line of thinking when one of the local denizens, bob235, asked me a crucial question:

so when you install other services (e.g. webservers) they don’t start up automatically? any idea how they manage to do that?

Right. Good call. Why doesn’t it fail when I install Apache via puppet? Well, it’s because Puppet checks to see if Apache is running before it tries to start it, by running ‘/etc/init.d/apache2 status’ and learning that the service is already running. So why wasn’t that happening here?
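To make the mechanics concrete, here's a toy sketch of the decision Puppet's init provider makes. The variable stands in for capturing `$?` from a real status call; the exit codes follow the LSB convention (0 = running, 3 = not running):

```shell
# Stand-in for: /etc/init.d/apache2 status; status_rc=$?
# LSB convention: 0 = service running, 3 = program not running.
status_rc=3

if [ "$status_rc" -eq 0 ]; then
  echo "already running - skip start"
else
  echo "not running - issue start"
fi
```

So as long as the status command reports the truth, Puppet never tries to start a service that's already up. The failure above means the status check must be lying.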

I booted a generic vagrant instance, and manually installed rabbitmq using apt-get. It ran after installation, as I expected it would. Running ‘/etc/init.d/rabbitmq-server status’ showed that it was running:

root@precise64:~# /etc/init.d/rabbitmq-server status
Status of node rabbit@precise64 ...
                        {mnesia,"MNESIA  CXC 138 12","4.5"},
                        {os_mon,"CPO  CXC 138 46","2.2.7"},
                        {sasl,"SASL  CXC 138 11","2.1.10"},
                        {stdlib,"ERTS  CXC 138 10","1.17.5"},
                        {kernel,"ERTS  CXC 138 10","2.14.5"}]},
 {erlang_version,"Erlang R14B04 (erts-5.8.5) [source] [64-bit]
[smp:2:2] [rq:2] [async-threads:30] [kernel-poll:true]\n"},

Then I killed the vagrant instance, turned on the puppet rabbitmq manifest, and started it up. Puppet failed during the run because it couldn't start the service, just as it always had. So I connected in and ran '/etc/init.d/rabbitmq-server status', and I got this:

root@precise64:~# /etc/init.d/rabbitmq-server status
Status of node server@precise64 ...
Error: unable to connect to node server@precise64: nodedown

nodes in question: [server@precise64]

hosts, their running nodes and ports:
- precise64: [{rabbit,45074},{rabbitmqctl2835,51120}]

current node details:
- node name: rabbitmqctl2835@precise64
- home dir: /var/lib/rabbitmq
- cookie hash: ovNnSahEs2CWYKS80bcf5w==

Yeah, it definitely doesn't see it running. But wait, did you catch it? There's a subtle difference.

In the output from installing it manually, the first line was "Status of node rabbit@precise64", but after the puppet run, it was "Status of node server@precise64". Puppet is changing the node name on disk, but the running instance is still under the original name!
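In other words: the broker that apt started registered itself under the old name, while rabbitmqctl computes its target from the env file Puppet just wrote. A toy sketch of the mismatch, with the names taken from the output above:

```shell
# What beam.smp registered at boot, before Puppet wrote rabbitmq-env.conf:
running_node="rabbit@precise64"
# What rabbitmqctl derives from RABBITMQ_NODENAME after the Puppet run:
target_node="server@precise64"

# rabbitmqctl asks the Erlang distribution for target_node; since no node
# by that name exists, the status check reports it as down.
if [ "$running_node" != "$target_node" ]; then
  echo "unable to connect to node $target_node: nodedown"
fi
```

So the status command returns non-zero, Puppet concludes the service is stopped, tries to start a second broker, and that one dies because port 5672 is already taken.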

I edited /etc/rabbitmq/rabbitmq-env.conf to change the node name back to 'rabbit', then ran /etc/init.d/rabbitmq-server status, and sure enough, it showed identical output to when I installed it manually.

To finish it off, I edited the puppet config to change the node name back to 'rabbit', then restarted the vagrant box. It started perfectly:

[Screenshot: a clean Puppet run with the rabbitmq-server service starting successfully]

End result: If you want to use the RabbitMQ module from puppetlabs, my advice is to not change the node name on the server from 'rabbit'.
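For reference, a minimal sketch of a declaration that should work under that constraint (the same parameters as above, minus the node-name override):

```
class { 'rabbitmq':
  port           => '5672',
  service_manage => true,
  # No RABBITMQ_NODENAME in environment_variables: the node stays 'rabbit',
  # so the status check matches the instance started at package install.
}
```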

  • What’s the exit code when you run /etc/init.d/rabbitmq-server status? I always forget what the analogous property would be with Puppet, but at least with Chef, you can specify a status command if /etc/init.d/rabbitmq-server status returns non-zero for whatever reason (something like netstat -luptn | grep 5672 or ps auxwww | grep rabbitmq)

  • Alexander Fortin

    If I had to fix this today, I think I’d clone the package into my internal APT repository and rebuild the DEB with no post-install script (or whatever is doing the restart). Btw, I really love the Debian project and have been a happy long-time user, but the auto-start policy is a real PITA

  • I feel you, it’s not limited to Ubuntu; RPM packages screw you the same way.
    For the last week I’ve been trying to have chef configure a rabbitmq cluster without any stupid errors.

  • Chris J

    The RabbitMQ startup script should have built-in logic to not start rabbitmq if it’s already running. So, if someone manually ran the start script 5 times in a row, the script would gracefully exit on runs 2 through 5.

    Whenever Puppet runs a startup script, it should actually test the service status beforehand to verify the state, and then determine whether or not the startup script should be invoked.

  • Quick and dirty:

    class { 'rabbitmq':
      port                  => '5672',
      require               => Exec['nc -l 5672'],
      before                => Exec['killall nc'],
      service_manage        => true,
      environment_variables => {
        'RABBITMQ_NODENAME' => 'server',
      },
    }

    or something like that. This way, rabbitmq won't start right after the installation, because port 5672 will be busy. After the installation, "nc" gets killed, and Puppet will be free to manage the service.

  • I think Ian P is on the right track. If the service is running, puppet should not try to start it again. Sounds like the wrong status code is returned.

    You can try setting hasstatus => false and using a hacky way to detect the running process with a custom status => command.
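    A sketch of what that override might look like (the pgrep pattern is a guess, and this assumes you manage the service resource yourself instead of letting the module do it):

    ```
    service { 'rabbitmq-server':
      ensure    => running,
      hasstatus => false,
      # Hypothetical process check standing in for the init script's status:
      status    => 'pgrep -f "beam.*rabbit" > /dev/null',
    }
    ```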

  • Atom Powers

    That Ubuntu/Debian always start services upon installation is a huge pet-peeve of mine. Why in the world would I want to start a service before it has been configured or secured?

    My research suggests that this will prevent services from auto-starting but I’ve been too timid to put it into production because I don’t know what other triggers might be disabled.

    file {'/etc/dpkg/dpkg.cfg.d/no-triggers': content => 'no-triggers' }