Cloud Computing: The SmugMug Approach to Using Amazon's EC2 and S3
SkyNet Lives! (aka EC2 @ SmugMug)
By: Don MacAskill
Aug. 21, 2008 06:00 AM
Don MacAskill's Blog
The architecture basically consists of three software components: the rendering workers, the batch queuing piece, and the controller. The rendering workers live on EC2, and both the queuing piece and the controller live at SmugMug. We don't use SQS for our queuing mechanism, for a few reasons.
Render Workers

Our render workers are totally "dumb". They're literally bare-bones CentOS 5 AMIs (you can build your own, use RightScale's, or whatever you'd like) with a single extra script on them, which is executed from /etc/rc.d/rc.local. What does that script do? It fetches intelligence.
When that script executes, it sends an authenticated request to get a software bundle, extracts the bundle, and starts the software inside. That’s it. Further, the software inside the bundle is self-aware and self-updating, too, automatically fetching updated software, terminating older versions, and relaunching itself. This makes it super-simple to push out new SmugMug software releases - no bundling up new AMIs and testing them or anything else that’s messy. Simply update the software bundle on our servers and all of the render workers automatically get the new release within seconds.
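As a rough illustration, the whole bootstrap could fit in a few lines like the sketch below. The bundle URL, token, and paths are hypothetical; the post only says the script makes an authenticated request, unpacks the bundle, and starts what's inside.

```python
#!/usr/bin/env python3
# Hypothetical bootstrap of the kind rc.local might run: fetch an
# authenticated software bundle, unpack it, and hand off control.
# The URL, token, and paths here are invented for illustration.
import subprocess
import tarfile
import urllib.request

BUNDLE_URL = "https://software.example.internal/render-worker.tar.gz"
AUTH_TOKEN = "worker-shared-secret"
INSTALL_DIR = "/opt/render-worker"

request = urllib.request.Request(
    BUNDLE_URL, headers={"Authorization": "Bearer " + AUTH_TOKEN})
with urllib.request.urlopen(request) as resp, \
        open("/tmp/bundle.tar.gz", "wb") as out:
    out.write(resp.read())

with tarfile.open("/tmp/bundle.tar.gz") as bundle:
    bundle.extractall(INSTALL_DIR)

# From here the bundled software is on its own: it self-updates,
# terminates old versions, and relaunches itself.
subprocess.Popen([INSTALL_DIR + "/start.sh"])
```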
Of course, worker instances might have different roles or be assigned to different SmugMug clusters (test vs. production, for example), so we have to be able to give them instructions at launch. We do this through the "user-data" launch parameter you can specify for EC2 instances - it gives the software all the details needed to choose a role, get software, and launch it. Reading the user-data couldn't be easier. If you haven't done it before, just fetch http://169.254.169.254/latest/user-data from your running instance and parse it.
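Reading it really is a fetch plus a parse. The metadata URL below is the fixed EC2 address from the paragraph above; the key=value format is just one hypothetical convention for passing the role and cluster details:

```python
import urllib.request

# The metadata service address is fixed on EC2; only the key=value;key=value
# format parsed below is a hypothetical convention for illustration.
raw = urllib.request.urlopen(
    "http://169.254.169.254/latest/user-data", timeout=2
).read().decode("utf-8")

params = dict(pair.split("=", 1) for pair in raw.split(";") if "=" in pair)
role = params.get("role", "render")      # which kind of worker to become
cluster = params.get("cluster", "test")  # e.g. test vs. production
```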
Once they're up and running, they simply ping the queue service with a "Hi, I'm looking for work. Do you have any?" request, and the queue service either supplies them with work or gives them some other directive (shut down, update software, take a short nap, etc). Once a job is done (or has generated an error), the worker stores the result on S3, notifies the queue service that the job is done, and asks for more work. Simple.
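In code, the worker's life is just a poll loop. This is only a sketch: the queue endpoints, the message format, and the two stand-in helpers are hypothetical, not SmugMug's internal service.

```python
import json
import time
import urllib.request

QUEUE_URL = "https://queue.example.internal"  # hypothetical internal endpoint

def call(path, payload=None):
    """POST JSON to the queue service and return the decoded reply."""
    req = urllib.request.Request(
        QUEUE_URL + path, data=json.dumps(payload or {}).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def process(job):
    """Stand-in for the real render work (resize, rotate, watermark, ...)."""
    return b"rendered-bytes"

def store_on_s3(key, data):
    """Stand-in; the real worker PUTs the finished result to S3."""

while True:
    reply = call("/get-work")                 # "Hi, I'm looking for work."
    directive = reply.get("directive", "work")
    if directive == "shutdown":
        break
    if directive == "nap":
        time.sleep(reply.get("seconds", 30))  # queue says: take a short nap
        continue
    job = reply["job"]
    result = process(job)                     # do the work (or record an error)
    store_on_s3(job["result_key"], result)    # the result lives on S3
    call("/complete", {"job_id": job["id"]})  # report done, then loop for more
```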
Queuing Service

This is your basic queuing service, probably very similar to any other queuing service you've seen before. Ours supports job types (new upload, rotate, watermark, etc) and priorities (Pros go to the head of the line, etc) as well as other details. Upon completion, it also logs historical data such as time to completion. It also supports time-based re-queueing in the event of a worker outage, miscommunication, error, or whatever; a toy sketch of those semantics follows below. I haven't taken a really hard look at SQS in quite some time, but I can't imagine it would be very difficult to implement this on SQS for those of you starting fresh.
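To make those semantics concrete, here is a minimal in-memory sketch of priorities, leases, and time-based re-queueing. It illustrates the behavior described above, not SmugMug's implementation; the lease window and message shapes are assumptions.

```python
import heapq
import time

class JobQueue:
    """Toy queue with priorities and time-based re-queueing (a sketch,
    not SmugMug's code). Lower numbers go to the head of the line."""

    LEASE_SECONDS = 300  # assumed window before a silent worker's job requeues

    def __init__(self):
        self._heap = []    # (priority, sequence, job) tuples
        self._leased = {}  # job id -> (deadline, priority, job)
        self._seq = 0

    def put(self, job, priority):
        self._seq += 1     # sequence keeps FIFO order within a priority
        heapq.heappush(self._heap, (priority, self._seq, job))

    def get(self, now=None):
        now = time.time() if now is None else now
        # Reclaim any leased job whose worker missed its deadline.
        for job_id, (deadline, priority, job) in list(self._leased.items()):
            if deadline < now:
                del self._leased[job_id]
                self.put(job, priority)
        if not self._heap:
            return None
        priority, _, job = heapq.heappop(self._heap)
        self._leased[job["id"]] = (now + self.LEASE_SECONDS, priority, job)
        return job

    def complete(self, job_id):
        # Also the natural place to log time-to-completion history.
        self._leased.pop(job_id, None)

q = JobQueue()
q.put({"id": 1, "type": "watermark"}, priority=5)
q.put({"id": 2, "type": "upload"}, priority=1)  # uploads jump the line
job = q.get()                                   # -> job 2 comes out first
q.complete(job["id"])
```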
Controller (aka SkyNet)
For me, this was the fun part. Initially we called it RubberBand, but we had an unusual partial outage one day which caused it to go berserk and launch ~250 XL instances (~2,000 normal EC2 instances) in a single call. Clearly, it had gained sentience and was trying to take over the world, so we renamed it SkyNet. (We've since corrected the problem and given SkyNet more reasonable thresholds and limits. And yes, I caught it within the hour.)
SkyNet is completely autonomous - it operates with zero human interaction, neither watched nor given interactive guidance. No one at SmugMug even pays attention to it anymore (and we haven't for many months) since it operates so efficiently. (Yes, I realize that means it's probably well on its way to world domination. Sorry in advance to everyone killed in the forthcoming man-machine war.)
Roughly once per minute, SkyNet makes an EC2 decision: launch instance(s), terminate instance(s), or sleep. It has a lot of inputs - it checks anywhere from 30 to 50 pieces of data to make an informed decision. One reason for that is we have a variety of different jobs coming in, some of which (uploads) are semi-predictable. We know that lots of uploads come in every Sunday evening, for example, so we can begin our prediction model there. Other jobs, though, such as watermarking an entire gallery of 10,000 photos with a single click, aren't predictable in any useful way, and we can only respond once the load hits the queue. (A bare-bones sketch of the decision follows the list below.)
A few of the data points SkyNet looks at are:
… and the list goes on.
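To give that decision a concrete shape, here's a bare-bones sketch. All names and thresholds are invented, and the real SkyNet weighs its 30 to 50 inputs plus the predictive model; this only shows the launch/terminate/sleep skeleton.

```python
# Minimal sketch of a once-a-minute scaling decision. Every name and
# threshold here is invented for illustration.
def decide(queued_jobs, jobs_per_instance_minute, running_instances):
    demand = queued_jobs / jobs_per_instance_minute  # instances needed now
    target = demand * 1.25                  # carry ~25% slack (see below)
    if running_instances < target:
        return ("launch", round(target - running_instances))
    if running_instances > target * 1.2:    # hysteresis band: don't flap
        return ("terminate", round(running_instances - target))
    return ("sleep", 0)

print(decide(queued_jobs=4000, jobs_per_instance_minute=50,
             running_instances=80))  # -> ('launch', 20)
```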
Our goal is to keep enough slack around to handle surges of unpredictable batch operations, but not so much that it drains our bank account. We've settled on roughly 25% excess compute capacity, averaged over a full 24-hour period, and SkyNet keeps us remarkably close to that number. We always err on the side of more excess (so we get faster processing times) rather than less when we have to make a decision. It's great to save a few bucks here and there that we can plow back into better customer service or a new feature - but not if photo uploads aren't processing, consistently, within 5-30 seconds of upload.
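For a feel of the arithmetic (numbers invented), measuring that slack is just averaging the idle fraction over the day's per-minute samples:

```python
# Hypothetical monitoring-side check of the ~25% slack target: given
# per-minute samples of (provisioned, busy) capacity over 24 hours,
# compute the average fraction of capacity sitting idle.
def average_excess(samples):
    fractions = [(provisioned - busy) / provisioned
                 for provisioned, busy in samples if provisioned > 0]
    return sum(fractions) / len(fractions)

# A day spent running 100 instances with 75 busy averages 25% excess,
# right on target; 95 busy would be 5%, and SkyNet should be launching.
print(average_excess([(100, 75)] * 1440))  # 1440 minutes per day -> 0.25
```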
Our workers like lots of threads, so SkyNet does its best to launch c1.xlarge instances (Amazon calls these "High-CPU Instances"), but it is smart enough to request equivalent capacity in other instance sizes (2 x Large, 8 x Small, etc) in the event it can't allocate as many c1.xlarge instances as it would like. Our application doesn't care how big or small the instances are, just that we get lots of CPU cores in aggregate. (We were in the beta for the High-CPU feature, so we've been using it for months.)
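A greedy sketch of that fallback, counting capacity in small-instance equivalents (an XL counts as 8 smalls, a Large as 4, matching the ratios above); the availability map and the strategy itself are assumptions:

```python
# Prefer the biggest instances, then fill any shortfall with combinations
# that add up to the same aggregate CPU. Weights are in small-instance
# equivalents; the availability numbers are hypothetical.
SIZES = [("c1.xlarge", 8), ("m1.large", 4), ("m1.small", 1)]

def plan(units_needed, available):
    """available: {instance type: max launchable right now} -> {type: count}."""
    order = {}
    for itype, units in SIZES:  # biggest first
        n = min(available.get(itype, 0), units_needed // units)
        if n:
            order[itype] = n
            units_needed -= n * units
    return order

# Want 24 units but only 2 XLs are available: top up with 2 Larges.
print(plan(24, {"c1.xlarge": 2, "m1.large": 10}))
# -> {'c1.xlarge': 2, 'm1.large': 2}
```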
One interesting thing we had to take into account when writing SkyNet was EC2 startup lag. Don't get me wrong - I think EC2 starts up reasonably fast (~5 minutes max, usually less), but when SkyNet is making a decision every minute, you could launch too many instances if you don't take recent actions into account to cover the startup lag (and, conversely, you need to start instances a little earlier than you might actually need them, otherwise you get behind).
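One way to handle that, sketched below with invented names: count recently requested instances as capacity already in flight, so consecutive one-minute ticks don't each re-launch the same shortfall.

```python
# Sketch of boot-lag accounting: instances requested within the last
# ~5 minutes count as capacity in flight, so five consecutive one-minute
# decision ticks don't each launch the same shortfall again.
BOOT_LAG_MINUTES = 5

def launch_shortfall(target, running, launch_log, now_minute):
    """launch_log: list of (minute_requested, count) for recent launches."""
    in_flight = sum(count for minute, count in launch_log
                    if now_minute - minute < BOOT_LAG_MINUTES)
    return max(0, round(target - running - in_flight))

# Tick 1 launches 20; a minute later those 20 are still booting, so the
# shortfall is already covered and tick 2 launches nothing.
log = [(0, 20)]
print(launch_shortfall(target=100, running=80,
                       launch_log=log, now_minute=1))  # -> 0
```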
SmugMug is a profitable business, and we like to keep it that way. The secrets to efficiently using EC2, at least in our use case, are as follows:
Why No Web Servers?
I get asked this question a lot, and it really comes down to two issues, one major and one minor:
Let me be very clear here: I really don't want to operate datacenters anymore, despite the fact that we're pretty good at it. It's a necessary evil because we're an Internet company, but our mission is to be the best photo-sharing site. We'd rather spend our time giving our customers great service and writing great software than managing physical hardware. I'd rather have my awesome Ops team interacting with software remotely for 100% of their duties (and mostly just watching software like SkyNet do its thing). We'll get there - I'm confident of that - we're just not there yet.
Until then, we'll stick with a hybrid approach.