View from the other side…

So, this whole “quitting my job” thing has definitely been an experience…I’ve got another month before the end, but I have to say, it’s really given me a new perspective, and it’s pushed me to do things I should have done a long time ago.

Automation is big for most sysadmins. We’re inherently lazy, so the idea of pushing a button and making programs work for us? Appealing. I always wrote scripts to do things for me, but only since I told everyone that I was quitting did I really start to re-evaluate what I did by hand. As it turns out, there was plenty that I had been taking for granted, when I should have been scripting.

The first sign of this was probably back when I started to change the way I added tablespace in Oracle. (Our version and DB config of Oracle are terrible, so please skip over the particulars in the next couple of paragraphs and just go with the flow.) Adding tablespace was scripted, but I was having to do it more and more often…until recently, when I wised up, monitored usage, graphed it over time, and then set up crontabs to add space for me. As it stands right now, it’s completely automated, which is the right way.
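Just to give a flavor of it, here’s a stripped-down sketch of the sort of cron job I mean. The tablespace name, paths, and threshold below are made up for illustration, and it assumes the oracle account can connect “/ as sysdba”; it is not the actual script.

```bash
#!/bin/bash
# Nightly cron job (sketch): add a datafile when a tablespace runs low on free space.
# Assumes ORACLE_HOME/ORACLE_SID are already set in the environment and that we
# can connect as sysdba. All names and sizes here are examples.
TABLESPACE="USERS"                                      # tablespace to watch
THRESHOLD_MB=2048                                       # add space below this much free
NEW_FILE="/u01/oradata/PROD/users_$(date +%Y%m%d).dbf"  # example datafile path

# How much free space is left in the tablespace, in MB?
FREE_MB=$(sqlplus -S / as sysdba <<EOF | tr -d '[:space:]'
set heading off feedback off pages 0
select trunc(sum(bytes)/1024/1024) from dba_free_space
where tablespace_name = '${TABLESPACE}';
exit
EOF
)

# If we're under the threshold (or the query came back empty), add a datafile.
if [ "${FREE_MB:-0}" -lt "$THRESHOLD_MB" ]; then
  sqlplus -S / as sysdba <<EOF
  alter tablespace ${TABLESPACE} add datafile '${NEW_FILE}' size 2g;
  exit
EOF
fi
```

Drop something like that into cron, log the output somewhere sensible, and the by-hand step goes away.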

Then I realized that when the “maximum number of files exceeded” error popped up (around every 32 datafiles), I had to go through a big rigmarole that involved manually shutting down databases, syncing control files, and so on. So I sucked it up and spent an entire day writing a script that reliably does all of that for me.
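I won’t paste the whole thing here, but heavily simplified, the core of it boils down to something like the following. This assumes a reasonably modern Oracle with an spfile, 400 is just an example value, and the real script also does the control file housekeeping I mentioned, which is too site-specific to show:

```bash
#!/bin/bash
# Very rough outline only: raise the datafile limit and bounce the instance.
# The actual procedure in my environment also involves syncing control files,
# which I'm leaving out here.
sqlplus -S / as sysdba <<'EOF'
alter system set db_files=400 scope=spfile;
shutdown immediate
startup
exit
EOF
```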

Today I wrote a relatively brief script that iterates through users, adding clients’ public keys to their keyrings, which I had been doing manually each time up to now.
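Roughly speaking, and assuming for illustration that these are SSH keys headed for each user’s authorized_keys (the usernames and key directory below are placeholders, not the real ones), it looks something like this:

```bash
#!/bin/bash
# Sketch: append each client's public key to every listed user's authorized_keys,
# skipping keys that are already present. Run as root.
KEYDIR=/srv/client-keys            # directory full of *.pub files (example path)
USERS="alice bob carol"            # accounts that need the keys (example names)

for user in $USERS; do
  home=$(getent passwd "$user" | cut -d: -f6)
  [ -n "$home" ] && [ -d "$home" ] || continue

  mkdir -p "$home/.ssh"
  touch "$home/.ssh/authorized_keys"

  for key in "$KEYDIR"/*.pub; do
    # only append a key if that exact line isn't already there
    grep -qxF "$(cat "$key")" "$home/.ssh/authorized_keys" \
      || cat "$key" >> "$home/.ssh/authorized_keys"
  done

  chown -R "$user": "$home/.ssh"
  chmod 700 "$home/.ssh"
  chmod 600 "$home/.ssh/authorized_keys"
done
```

A few minutes of scripting versus doing that by hand for every new client…there’s no contest.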

There’s no reason I couldn’t (or shouldn’t) have done these before; it’s just that my perspective changed. Maybe more than anything, I was ashamed that I hadn’t written the scripts, and I didn’t want to have to explain that to the next admin.

Not automating things because you’re lazy is the wrong kind of lazy.

A month or so of writing scripts and automating the boring stuff away is a month well spent.

Playing with a WAN Optimizer

2 years ago, I asked for advice on WAN Acceleration. Let it not be said that I never follow up on anything ;-)

Even with all of the happenings and goings-on, I’m still somehow doing actual work, and one of my projects is to evaluate some WAN optimizers for our NYC office. It seems likely that all of our employees will once again be in the same office before the end of the year, and even at the new site, 20-30 people browsing huge file shares, running huge SQL queries, and so on will eat up the available bandwidth without much effort.

To stem the flood of traffic, our CEO approved my suggestion that we look into WAN optimization strategies that would let us expand throughput without a matching growth in recurring bandwidth costs. I looked at Riverbed’s Steelhead appliances, and they came in several versions of expensive, but the real deal breaker for me was that the appliances are licensed based on the number of sessions supported at one time. As it turns out, in addition to network data deduplication, they also have application-specific enhancements.

What that last point means is that certain applications get protocol-specific enhancements to increase speed and responsiveness, so, say, CIFS file shares fly and Exchange mailboxes go really quickly. [edit] These TCP-level protocol enhancements use things like window sizing, as well as upper-layer protocol mimicry (if that’s the right word?), to leverage the fact that the clients are really making connections locally, rather than to the remote hosts they think they’re talking to. By the way, thanks to Jwiz for setting me right in the comments: I had mistakenly believed that Riverbed didn’t do compression for non-enhanced protocols, but that is not the case.

So, after asking around on twitter, I got in touch with Silver Peak, who make several solutions in this space. Although they don’t do the application-specific TCP tricks, I talked with their sales and engineering guys, and they agreed to send out some demo units so I could see whether there was any improvement on my network. They came yesterday, so today I spent an hour or two on the phone with the technician getting them configured. I’m hoping to have them in place before the end of next week.

The whole idea of WAN optimization is pretty cool. It’s essentially network-based deduplication. You have a box at one end of a connection and a box on the other end of the connection, each functioning (in my case, anyway) as an invisible bridge between the switch and the gateway router. Here’s an overview of how the two will function:

A user requests something from a server. The server fills the request and sends the data back to the user. Meanwhile, the WAN accelerators at each site are watching this happen and caching the data to their hard drives. That way, when a second user requests the same data, the remote server receives the request, and if the data is unchanged, then instead of the server-side machine sending the whole file again, it sends a reference to it (or really, a reference to the byte streams that make up the file), with the understanding that the references are smaller than the contents they refer to. In effect, the optimizers are talking to each other, saying “Hey, Bob wants X, Y, and Z bytes, but you’ve already got them. Send them from your side instead”, which makes the data transfer the equivalent of two LAN transfers rather than a WAN transfer.
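To make the idea concrete, here’s a toy, rsync-flavored version of it in shell. It is nothing like what the appliances actually do (they work in-line on byte streams, transparently), and the peer host name and cache path are invented, but it shows the “only send what the other side doesn’t already have” trick:

```bash
#!/bin/bash
# Toy chunk-level dedup: only ship chunks the far side hasn't seen before.
# Purely illustrative; real WAN optimizers do this transparently on the wire.
FILE=$1
REMOTE=wan-peer.example.com        # hypothetical far-end host
CACHE=/var/cache/dedup             # hypothetical chunk store on the far end

TMP=$(mktemp -d)
split -b 65536 "$FILE" "$TMP/chunk."   # carve the file into 64 KB chunks

for chunk in "$TMP"/chunk.*; do
  hash=$(sha256sum "$chunk" | awk '{print $1}')
  if ssh "$REMOTE" "test -f $CACHE/$hash"; then
    echo "$hash already cached remotely; sending only the reference"
  else
    scp -q "$chunk" "$REMOTE:$CACHE/$hash"
  fi
  echo "$hash" >> "$TMP/manifest"      # the far side rebuilds the file from this list
done

scp -q "$TMP/manifest" "$REMOTE:/tmp/$(basename "$FILE").manifest"
rm -rf "$TMP"
```

The second time a mostly-identical file goes across, nearly every chunk hits the cache and only the tiny manifest travels over the WAN.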

If you’re unsure of why this is a good thing, make sure to read this Numbers Everyone Should Know post carefully.

As you can see, the WAN accelerators are in-line. When the engineer first described this to me, I was a bit concerned: I did not want a failed box to cause me downtime. He explained that the ethernet cards in the box are designed to fail open and revert to bridge mode. Very cool. I tested it, and sure enough, even without the power plugged in, traffic passes. There’s an audible click when the power is brought up or down, because apparently there’s a relay inside that triggers. In any event, it makes me feel better, even though I’m pretty sure not every failure mode is accounted for. I didn’t have time to talk with the engineer about that, but I’ll be asking more questions as I go through testing.

The one part of the experience that wasn’t great was the server that came with the boxes and collates statistics and the like. It’s loud. Loud, like, wow, I can’t be in the same room as this, loud. It’s a rebranded SuperMicro (at least, it’s virtually identical to a SiliconMechanics machine that I bought a while back, and they use SuperMicro, to the best of my knowledge). Anyway, it’s loud. Really loud. Unfortunately, the shared-space basement in that building isn’t the ideal place to put a multi-thousand-dollar machine you have on loan, but it looks like that’s where it’s going to have to live for now. I’m not happy about it, but I didn’t make the call. It should be fine until we actually buy it.

I’ll make sure to write about what I find. I’ve already been playing with the statistics, and there’s a ton of good information available from the machine, even if it isn’t actually doing any compressing yet. I can’t wait to see how it does when it actually starts improving our throughput.

“Developing” on the Mac

You know, developers aren’t the only ones who like to have compilers…some of us just want to build software from source.

Macs never really “came” with the development toolkit (things like autoconf, make, gcc, etc.), but adding it used to be really easy. You would stick the OSX install disk in the machine, browse to the Xcode directory, and double-click the .pkg file. If you didn’t have the CD with you, you just went to the Apple Developer site and downloaded a 2+GB dmg file. That’s all in the past, sadly.
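(If you preferred the command line, you could also install it straight off the disc with something like the command below; the exact volume and package names varied by release, so treat the paths as approximate.)

```bash
# Install Xcode from the OS X install media without clicking through the GUI.
# The volume and package paths here are from memory and may differ by OS version.
sudo installer -pkg "/Volumes/Mac OS X Install DVD/Optional Installs/Xcode.mpkg" -target /
```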

I found out today that it’s a lot harder than it used to be to download Xcode, which contains not only Apple’s proprietary software libraries, but also replaces gcc with their version of LLVM (I can only assume it’s derived from the open-source LLVM project).

If you go to the Xcode page, you’ll see a link that says “Download Xcode”, but what they really mean is, “Download Xcode (after you pay us $5 for the software, or pay us $99 a year to be an OSX or iOS developer)”. Of course, the only version that they offer on that site is Xcode 4. What they do not tell you is that Xcode 3.2 is still available, but you can’t just go download it.

No, you have to register as an Apple Developer to do that. Because we wouldn’t want those precious compilers just being installed willy-nilly, right? Right.

Thanks to @lusis on twitter for sending me the following link…it’s completely obvious, and I don’t know how I missed it: Go there, register as a Free (I think they actually call it “undecided”) developer, and you can then have the privilege of downloading the 4.1GB tarball that contains the software you need to actually compile programs on your computer.

I don’t think I need to tell you that this is a huge pain in the ass. What really surprises me, though, is that they’re trying so hard to charge for it. Xcode should absolutely be available through the App Store, but making people pay to download it? That’s ridiculous. Make people pay for the IDE. Make them pay for the debugger. But don’t make them pay for the stupid compiler.

Of course, at this point it becomes easier to find a way around the onerous limitations artificially placed on the system. I’d suggest installing a compiler through an alternate method, but the two biggest alternative software repositories for OSX (Fink and MacPorts) both require that the developer tools already be installed!

Honestly, I never thought that I’d be tempted to pirate a free compiler.

This goes a long way toward reinforcing my belief that OSX is a fine operating system as long as you don’t treat it like Unix.

What are your thoughts on this? Is there any justifiable excuse for Apple’s behavior? I’m interested in hearing it, if there is.