Tag Archives: cloud

Spinning up a quick cloud instance with Digital Ocean

This is another in a short series of blog posts that will be brought together like Voltron to make something even cooler, but it’s useful on its own. 

I’ve written about using a couple of other cloud providers before, like AWS and the HP cloud, but I haven’t actually mentioned Digital Ocean yet, which is strange, because they’ve been my go-to cloud provider for the past year or so. As you can see on their technology page, all of their instances are SSD-backed, they’re virtualized with KVM, they’ve got IPv6 support, and there’s an API for when you need to automate instance creation.
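For those who do want to script it, droplet creation boils down to a single authenticated POST against the v2 API. Here’s a minimal sketch in Python (assuming the requests library is installed; the token, region, size, and image slugs are placeholder values, so check the current API docs for what’s actually valid):

```python
# Minimal sketch of creating a droplet through Digital Ocean's v2 API.
# Assumes the "requests" library and a personal access token in DO_TOKEN.
# The region, size, and image slugs below are examples only.
import os
import requests

DO_TOKEN = os.environ["DO_TOKEN"]
API = "https://api.digitalocean.com/v2"
HEADERS = {"Authorization": f"Bearer {DO_TOKEN}"}

payload = {
    "name": "quick-test",          # hostname shown in the control panel
    "region": "nyc3",              # example region slug
    "size": "s-1vcpu-1gb",         # example size slug; older slugs differ
    "image": "ubuntu-22-04-x64",   # example image slug
    "ssh_keys": [],                # key IDs or fingerprints already on the account
}

resp = requests.post(f"{API}/droplets", headers=HEADERS, json=payload)
resp.raise_for_status()
droplet = resp.json()["droplet"]
print(f"Created droplet {droplet['id']} (status: {droplet['status']})")
```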

To be honest, I’m not automating any of it. What I use it for is one-off tests. Spinning up a new “droplet” takes less than a minute, and unlike AWS, where there are a ton of choices, I click about three buttons and get a usable machine for whatever I’m doing.

To get the most out of it, the first step is to generate an SSH key if you don’t have one already. If you don’t set up key-based authentication, you’ll get the root password for your instance in your email, but ain’t nobody got time for that, so create the key using ssh-keygen (or if you’re on Windows, I conveniently covered setting up key-based authentication using pageant the other day – it’s almost like I’d planned this out).

Next, sign up for Digital Ocean. You can do this at DigitalOcean.com or you can get $10 for free by using my referral link (and I’ll get $25 in credit eventually).  Once you’re logged in, you can create a droplet by clicking the big friendly button:

This takes you to a relatively limited number of options – but limited in this case isn’t bad. It means you can spin up what you want without fussing over most of the details. You’ll be asked for your droplet’s hostname (which is used to refer to the instance in the Digital Ocean interface and is also set as the hostname of the created machine), and you’ll need to specify the size of the machine you want (at the moment, here are the prices:)

The $10/mo option is conveniently highlighted, but honestly, most of my test stuff runs perfectly fine on the $5/mo size, and it rarely runs for more than an hour; 7/1000 of a dollar seems like a good deal to me. Even if you screw up and forget about it, it’s $5/mo. Just don’t set up a 64GB monster and leave that bad boy running.

Next there are several regions. For me, New York 3 is automatically selected, but I can override that default choice if I want. I just leave it, because I don’t care. You might care, especially if you’re going to be providing a service to someone in Europe or Asia.

The next options are for settings like Private Networking, IPv6, backups, and user data. Keep in mind that backups cost money (duh?), so don’t enable that feature for anything you don’t want to spend 20% of your monthly fee on.

The next option is honestly why I love Digital Ocean so much. The image selection is so painless and easy that it puts AWS to shame. Here:

You can see that the choice defaults to Ubuntu current stable, but look at the other choices! Plus, see that Applications tab? Check this out:

I literally have a GitLab install running permanently in Digital Ocean, and the sum total of my effort was 5 seconds of clicking that button, and $10/mo (it requires a gig of RAM to run the software stack). So easy.

It doesn’t matter what you pick for spinning up a test instance, so you can go with the Ubuntu default or pick CentOS, or whatever you’d like. Below that selection, you’ll see the option for adding SSH keys. By default, you won’t have any listed, but there’s a link to add a key, which pops open a text box where you can paste your public key text. The key(s) that you select will be added to the root user’s ~/.ssh/authorized_keys file, so that you can connect without knowing the password. The machine can then be configured however you want. (Alternatively, when selecting which image to spin up, you can choose a previously-saved snapshot, backup, or old droplet that you’ve already configured to do what you need.)
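Keys can also be added through the API rather than the web form. Here’s a hedged sketch (Python with the requests library; the token, key path, and key name are placeholders) that registers a public key and prints the ID you’d pass in the ssh_keys field when creating a droplet:

```python
# Sketch: registering a public key with the account via the v2 API, so it can
# be referenced by ID in the "ssh_keys" field when creating droplets.
# The DO_TOKEN environment variable, key path, and key name are placeholders.
import os
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}
pub_key = open(os.path.expanduser("~/.ssh/id_rsa.pub")).read().strip()

resp = requests.post(
    "https://api.digitalocean.com/v2/account/keys",
    headers=HEADERS,
    json={"name": "my-laptop", "public_key": pub_key},
)
resp.raise_for_status()
key = resp.json()["ssh_key"]
print(f"Key ID {key['id']}, fingerprint {key['fingerprint']}")
```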

Click Create Droplet, and around a minute later, you’ll have a new entry in your droplet list that gives you the public IP to connect to. If you spun up a vanilla OS, SSH into it as the root user with one of the keys you specified, and if you selected one of the apps from the menu, try connecting to it over HTTP or HTTPS.
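If you’re scripting this instead of watching the droplet list, a small polling loop against the same API gets you the public IP once the droplet goes active. A rough sketch, with the token and droplet ID as placeholders:

```python
# Sketch: polling a new droplet until it is active, then printing its public
# IPv4 address. DROPLET_ID would come from the create call; the token is a
# placeholder.
import os
import time
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}
DROPLET_ID = 12345678  # hypothetical ID returned by the create call

while True:
    resp = requests.get(
        f"https://api.digitalocean.com/v2/droplets/{DROPLET_ID}", headers=HEADERS
    )
    resp.raise_for_status()
    droplet = resp.json()["droplet"]
    if droplet["status"] == "active":
        public = [n for n in droplet["networks"]["v4"] if n["type"] == "public"]
        print("ssh root@" + public[0]["ip_address"])
        break
    time.sleep(10)
```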

That’s really about it. In an upcoming entry, we’ll be playing with a Digital Ocean droplet to do some cool stuff, but I wanted to get this out here so that you could start playing with it, if you don’t already. Remember, though: whenever you’re done with your machine, you need to destroy it, rather than just shut it down. Shutting it down makes it unavailable, but keeps the data around, and that means you’ll keep getting billed for it. Destroying it erases the data and removes the instance, which is what actually stops the billing.
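And if you created the droplet through the API, destroying it is just as short. Another sketch (token and droplet ID are placeholders):

```python
# Sketch: destroying a droplet via the v2 API. A successful destroy returns
# HTTP 204 and, unlike a shutdown, stops the billing. Token and ID are
# placeholders.
import os
import requests

HEADERS = {"Authorization": f"Bearer {os.environ['DO_TOKEN']}"}
DROPLET_ID = 12345678  # hypothetical ID

resp = requests.delete(
    f"https://api.digitalocean.com/v2/droplets/{DROPLET_ID}", headers=HEADERS
)
resp.raise_for_status()  # expect 204 No Content
print("Droplet destroyed; billing for it stops here.")
```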

Have fun, and let me know if you have any questions!

Behold Microsoft, Harbinger of the Future

Yeah, that’s right. You heard me.

There are a ton of people who are very, very mad at Microsoft because of their recent TechEd announcement. Basically, Microsoft is concentrating, in a large way, on being a managed service provider. They’re still selling software, but they’re concentrating on honing their hosted service offerings, and in the meantime, a lot of system administrators have expressed concern that Microsoft is saying “Screw You” to sysadmins, because they feel like Microsoft is intentionally making it unnecessary to have sysadmins doing the same things they have been for 20 years.

I hate to say it, but “suck it up”, because this is totally what’s happening, and it’s going to be better for (almost) everyone this way.

Look, I wrote over a year ago that the profession of system administration is fracturing. There are going to be the architectural people who do big-picture things, and there will be the physical infrastructure admins who deal with the lower-layer issues. You’ll notice that I didn’t say anything in that article about mid-level Exchange admins, and that’s because, in the future, roles like that won’t exist like they do now. I’m not picking on Exchange. I could just as easily have said “MySQL admins”.

It’s certainly not that people will stop using the services; it’s just that the administration of them will be largely abstracted away, and any other administration of the services will be done in bulk by the Exchange admin at the MSP, not by someone at your company. Sure, someone at your company might be responsible for writing the software that ties your local infrastructure into the MSP’s infrastructure, but it won’t be a “sysadmin” as you currently recognize them. It’ll be a programmer who knows operations, directed to do that by the infrastructure architect.

At best, infrastructures of a large enough scale will run their own internal abstractions (“clouds”), and administrators will use APIs to deploy instances, diving into the actual administration of services only when necessary, and only for issues requiring extremely deep knowledge. If you don’t think so, take some time and research OpenStack or even Microsoft’s recent Azure Pack. The idea is that it becomes seamless for you to have infrastructure on your equipment, then migrate it to an external provider.
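To make the “deploy instances through APIs” part concrete, here’s roughly what that looks like against an OpenStack private cloud, sketched with the openstacksdk Python library. The cloud name, image, flavor, and network names are placeholders, and credentials are assumed to already be configured in clouds.yaml:

```python
# Rough sketch of deploying an instance onto an internal OpenStack cloud
# through its API, using the openstacksdk library. The cloud name, image,
# flavor, and network names are placeholders; credentials are assumed to be
# configured in clouds.yaml or environment variables.
import openstack

conn = openstack.connect(cloud="internal-cloud")

image = conn.compute.find_image("ubuntu-22.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("internal-net")

server = conn.compute.create_server(
    name="api-deployed-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(f"{server.name} is {server.status}")
```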

This model won’t ever contain 100% of the infrastructures out there. For a lot of people, it still makes sense to run their own physical infrastructures in-house and to have the “normal” IT staff supporting them, and it’ll continue to be that way for a long time. But as new companies come online, this new way of operating IT will become more and more common.

The bottom line is that it doesn’t make sense to build out a new generic physical infrastructure anymore, beyond whatever is necessary to support your users’ desktop machines. Using a hosted cloud provider for your business is better in almost every way, so if you want to future-proof yourself, learn AWS, learn Azure, learn Rackspace. Learn to code. Learn configuration management. Learn to let go of the way that things have always been, because they won’t always be like that in the future.

Reader Question: Cloud vs Virtual Farm?

I really enjoy getting mail from readers. I know that I sometimes don’t answer in what you might consider a timely fashion, but that’s just because you’re not thinking geologically.

On occasion, though, I do get some downtime to discuss things with my readers, and the other day, I got a good question that I don’t think I’ve written about, so I thought I’d share with everyone else.

Here’s the email:

Matt,

This might sound like a nooby question, but I will sally forth…

What is the difference between using a Private cloud solution and a Virtual Machine environment?

I am planning to update my test environment and, if possible, use a personal cloud-based solution for testing various software platforms. Based on my limited knowledge of cloud-based platforms, the VM and cloud-based environments are pretty much the same, except when you shut down an instance of a cloud-based machine, it loses its configuration.

I have a VM environment right now, but would like to be able to provision new machines at will (depending on hardware restrictions) while keeping some of these machines as configured.

Thanks in advance

Here was my answer:

Hi!

I don’t think it’s a “nooby” question. It’s one that a lot of people struggle with, because there isn’t a clear definition, and the marketing in the industry tries to confuse the matter at every turn.

I’ve thought about it pretty extensively over the past couple of years, and I actually have a definition in my mind of “Cloud” that I think is logical and based on the origination of the term. Remember that the word cloud came from network diagram icons that indicated, for the purposes of that particular diagram, it didn’t matter what specific machines or software filled that particular space, only that there was a resource available that provided a service. At its core, it is a term of abstraction.

In my mind, cloud is still a relative term. Amazon’s AWS cloud is absolutely not a ‘cloud’ to the people who work on the discrete components of it. It *IS* a cloud to you and me, though, because when we interface with it, we see no distinction between the particular constituent machines. We only see the interface to the API that controls the creation and management of the resources that are hosted within the cloud. We don’t know which actual machine or machines inside the Amazon cloud are handling our resources, and we don’t care.

It should also be noted that there are different *types* of Cloud, too, and it all depends on what you want to abstract. If you want to create entire machines and network devices, there is Infrastructure as a Service (IaaS). Sometimes, you just want to write software and not worry about administering the web server that it runs on…then you want a Platform as a Service (PaaS). It’s also possible to just want to rent a service, like hosted Email, in which case you want Software as a Service (SaaS). But since you mentioned building virtual labs, IaaS is what we’re talking about here.

So, this brings us to your question about a private, internal cloud. It IS possible to build an internal cloud such that you can interface with it in an abstracted way, never knowing or caring (other than for debugging purposes, since you’re not only running things on the cloud, you’re running the cloud itself) where the resources are hosted. But for the immediate future, it’s almost always overkill in a smaller shop.

The Eucalyptus Project is devoted to creating private cloud platforms that can be administered using an abstracted API – in fact, it’s the basis for the new HP Cloud that’s currently in beta.

That being said, is it worth setting up for a lab environment? I guess the answer is, “It depends”. It would be an interesting exercise to build a cloud, just for the sake of doing it, but only you know whether you have enough machines available with the (admittedly low) required resources.

It may be possible to configure your current virtual environment to deploy new images quickly and without much manual intervention, but without knowing more details, I can’t be sure of the implementation. I would, at the very least, research the API for your virtualization platform and check out something like Cobbler.
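As a concrete example of what scripting your virtualization platform’s API can look like, here’s a rough sketch against a KVM host using the libvirt Python bindings. This assumes the libvirt-python package is installed and you can reach qemu:///system; the VM name is hypothetical:

```python
# Sketch: talking to a local KVM/libvirt host through its API to see what is
# defined and to start a machine that is currently shut off. Assumes the
# libvirt-python bindings are installed and the user can reach qemu:///system.
import libvirt

conn = libvirt.open("qemu:///system")

for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "shut off"
    print(f"{dom.name()}: {state}")

# Start a defined-but-inactive test VM by name (hypothetical name).
test_vm = conn.lookupByName("lab-test-01")
if not test_vm.isActive():
    test_vm.create()  # boots the domain

conn.close()
```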

I hope this answered any questions you have. Please feel free to follow up if you have any other questions. I’m happy to help.

Thanks!

–Matt

Confusion and misconceptions about what a “cloud” is run rampant because, for a very long time, it was essentially an undefined marketing term. As the concept gels, we get a better idea of what to expect from the service providers.

Also, you’ll note that I said running a private internal cloud is overkill for the immediate future. That’s because I suspect more and more software will be developed with the inherent idea of “cloudiness” built in. I’d be surprised if software like Tomcat didn’t have server abstraction built in during the next couple of releases, where you install Tomcat on a few machines, and then you deploy your application to a central controller, which then deals with the “cloud” instances. There are already commercial services for this, but there’s no reason it shouldn’t be added into the software itself.

Likewise with an abstraction layer above KVM, so you just launch new VMs onto an array of hardware, and you don’t know or care which machine they run on. This is what Eucalyptus does, of course, but right now Eucalyptus is the exception, whereas in a few years it’s going to be par for the course and built in. More and more administration will be done through APIs, I’m telling you…

Of course, I could be wrong! Let me know what you think in the comments below.