Conference News (LISA and PICC and more!)

This is apparently the “time to schedule your conference trips” part of the year, because there is news on the SysAdmin conference front.

First, and most pressing, the LISA10 conference schedule has been released! I’ve got to say, I’m digging the theme of the website, too. More important, though, is the content. Interestingly, all sessions and tutorials are available in half-day increments this year. This means that you can attend the first half of one session, then migrate to another session after lunch. I’ve got mixed feelings about this, but I’m interested in how it will pan out. More flexibility is nice, though, and sometimes the first half of a session is really just review (though there are a lot of arguments against that, too).

As always, there are discounts available for certain groups, and you do get a lower admission price if you’re a member of LOPSA, USENIX, or SAGE.

Check out the registration page for the fees. There’s an early-bird special going on until October 18th, so make sure you register soon. The return on investment for this conference is amazing.

I’m going to be there as a conference blogger, along with Matthew Sacks, Ben Cotton, and Marius Ducea. We’ll be publishing entries on the USENIX blog (which I’ll be linking to from here as well, of course).

Come to LISA and have a great time. And if you do decide to come, find me and say hello. I always love meeting readers.

Shifting gears a little bit, I’m sure you remember the PICC conference that LOPSA-NJ hosted. Well, we had a blast, and last year’s conference chair, William Bilancio, did an amazing job. It’s a bit much to do that twice in a row, though, so he was looking for someone to take responsibility for this year’s conference. After running it through my head for a while, I decided that I’d take the job if he thought I’d do alright. Here’s his email announcing it:

It is with a great sigh of relief that Matt Simmons has decided to be
the Program Chair for PICC ’11.

Last year Matt was the head of the marketing team and did a great job
at getting the word out about the conference and was a key person in
making last year's conference a success.

Tom and I feel that he will do a great job as the Program Chair and
will make PICC ’11 a great conference.

In other news, I will be getting in contact with the hotel to get the
date locked in over the next few weeks, and then we can start really
working on the conference.

Please start thinking about sponsor ideas as well as any new people
you think will be able to help make PICC ’11 another great conference.

Again, thank you Matt for taking the PICC '11 Program Chair job, and good luck.


I want to thank William and everyone who was involved with last year’s conference. Everyone I’ve talked to had a great time and has been looking forward to this coming year. I’m going to work hard to try to improve on William’s example and really grow the community of system administrators in New Jersey and the rest of the northeast. I’m going to need help, though, so if you helped out last year, I’ll be calling on you now. If you weren’t involved last year, now is a great time to start. Drop me an email or comment on this story to let me know that you’re interested in volunteering. We can definitely use the help.

In addition, I was talking to Lee Damon, who let me know about a SysAdmin conference called “Cascadia IT Conference” (aka “CasITConf”), and it’s happening in the Pacific Northwest. It’s being put on by SASAG, the Seattle-Area System Administrators’ Guild.

So there you go. Three sysadmin conferences in one post. It’s going to be a busy year for everyone, so get involved and lend a hand to someone in your area!

On the road again…

My datacenter migration (or renovation, as I’m referring to it) includes a fair amount of added virtualization. We’ll be maxing out the memory and processor power of three machines at each site, and those will act as a VMware HA cluster (we’re buying the vSphere Essentials Plus license kit for each site).

Of course, I’ve got to have some VMs to run. I could reinstall all of my machines using cobbler (which would invoke the gods of trial and error, not to mention incur Murphy’s Wrath), or I could convert the machines that already exist from physical to virtual (p2v). That second option sounds much less error-prone.

That being said, converting a physical machine to a VM isn’t exactly a fast process. Hoping to get it done the weekend of the move would be foolish, so I need to get it done beforehand. That’s why I’m driving to Philadelphia today.

Last week, I threw a couple of terabyte SATA drives into a spare PowerEdge 1950 server, upped the RAM a bit, and installed a freshly minted copy of vSphere Hypervisor 4.1 (formerly known as ESXi). I’m trucking this machine down to our secondary data site today so that I can begin the p2v conversion process. I’ve got enough disk space that I won’t run out (I’m only putting the root partitions in the VMs, since all the data is stored on the SAN), and I don’t need to actually run the machines, so RAM won’t be a problem. This will just be a holding tank until I get the VM hosts set up during the conversion weekend.

The actual conversion will be done using VMware Converter, a free tool from VMware that I’ve been really impressed with. It does want an ESXi…err…vSphere Hypervisor server to connect to, but that’s free too.

Once this is down there, I’ve got some decisions to make. Namely, I need to decide how long to wait until I do the conversion. Not a lot of data changes on the root partition. It’s going to be limited to logs, really (since I haven’t gotten a centralized syslog server running yet). The exception to this rule is the domain controller at that site. That needs to be the absolute last machine I convert, and once I do, I’ve got to turn off the source, because if the image gets too far out of sync, well…that’s sort of like crossing the streams.

So, has anyone else pre-converted VMs like this in preparation for a move? Any advice or caveats to watch for?

Fixed the mistaken Ghostbusters quote. Did I seriously say “crossing the beams”? I am disappoint.

My take on DevOps

Alright, several people have asked me why I haven’t weighed in on the current “devops” movement. Mostly, it’s because no two people can agree on what DevOps is. I’m outside of that particular community, although I read a lot of the blogs of the key members, so maybe I’m in a good position to offer my perspective.

First, let’s define DevOps. If you strip away all of the touchy-feely stuff that gets associated with the name, at its core, DevOps is an increased interaction and interdependency between developers and operations staff, whether that operations staff is system administrators specifically or something else.

This means that the people who develop code no longer have willful ignorance of operational environments, and the people who operate the environments can’t do so in a vacuum of knowledge about the software itself. This increased communication and reliance IS DevOps. That’s it. Nothing more. It’s a methodology. It’s not a panacea and it’s not for everyone. How can you tell if it’s for you?

Let’s answer some questions…

  1. Does your organization have programmers?

     Developers are necessary for the DevOps relationship…otherwise you’ve just got Ops.

  2. Do you provide Software as a Service?

     DevOps grew up in the web world, around places like Flickr, that provide applications over the web. Other people may just think of them as websites, but in actuality, they’re applications with incredibly large code bases. Since a solid application depends on well-developed code running in a known, stable environment, it’s natural that this kind of ecosystem would produce methodologies like DevOps.

  3. Do you release software updates frequently?

     If you’re in an environment where something is broken and gets fixed immediately, then you can say yes here, but it’s not just bug fixes. Features get rolled out, pulled in, and switched around. Agility of this nature isn’t possible without everyone working from the same playbook. It’s also not possible in an environment that can’t change rapidly to match the code.
If you’re among the 90% of companies without that particular environment, you probably aren’t using DevOps, and that’s fine, because there’s almost nothing it can do for you. Especially if you don’t have programmers. Because hey, no dev, right?

You’ll notice that nowhere in the preceding text did I mention the tools that DevOps uses. That’s because the tools are completely separate. Using puppet doesn’t mean you subscribe to the DevOps methods (or even the mentality), and although DevOps may not be necessary for your environment, you might find puppet extremely useful. Let me say that again: using the same tools that DevOps shops use does not tie you to the DevOps methodology.

As alluded to in the last answer above, the shops that run DevOps need environments that can change quickly and absolutely. They need tools that can do that, because you can’t manually change hundreds of application servers. Because of the need to change that many machines, and have it happen nearly instantaneously, tools to automate this kind of change were developed and implemented.
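To make the scale argument concrete, here is a minimal sketch (in Python, with a hypothetical `apply_change` stand-in) of why fan-out automation beats doing it by hand. Real tools like puppet work declaratively rather than by imperative scripting, so this is only an illustration of the parallelism, not of how those tools operate:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fleet of 200 application servers.
HOSTS = ["app%03d.example.com" % i for i in range(1, 201)]

def apply_change(host):
    # Placeholder for real work (e.g., pushing a config file over SSH).
    # Here we just pretend the change succeeded on every host.
    return (host, "ok")

def run_everywhere(hosts, workers=32):
    # Fan the change out in parallel. Done serially by hand, 200 hosts at
    # even 30 seconds apiece would take over an hour and a half.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(apply_change, hosts))

if __name__ == "__main__":
    results = run_everywhere(HOSTS)
    failures = [h for h, status in results.items() if status != "ok"]
    print("%d hosts updated, %d failures" % (len(results) - len(failures), len(failures)))
```

Even this toy version shows the essential property: adding a host to the list costs nothing, while adding a host to a manual checklist costs a sysadmin's time every single change.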

Other technologies that get lumped into DevOps, cloud computing and virtualization, are also natural off-shoots of the type of environment where you have hundreds of application servers. Of course that kind of environment is going to be heavily into virtualization (if they’ve got an existing large infrastructure) or cloud computing (if they don’t).

Again, DevOps doesn’t “own” these technologies. They just use them (and advance them by writing tools to improve them, in many cases).

So there, that’s my take. For the people who can use it, DevOps is developing into an exciting methodology to ensure increased availability and stability of IT resources.

It’s not for everyone, but you owe it to yourself to take a look at the tools that too many people have misbranded as “DevOps”. There’s a lot of functionality there, and they can decrease the amount of time you spend slogging through administrative tasks.

It looks like I’m not the only one who’s been thinking about this, too. Benjamin Smith wrote his take as well, and it seems like we agree quite a bit.