
Loadtesting on EC2 – in all cloud++

Recently at work, we had a need to rerun some load-testing numbers, but got stuck since our internal servers all had builds we were looking at, or weren’t set up, or yada, yada… so we turned to EC2, with overall pretty positive results.

Some background – we build a web-based portal application that runs on a lightweight J2EE stack (Tomcat, Spring, Liferay) and do most of our testing with JMeter. So, the process of setting up to load-test on Amazon looked like:

  • Build AMIs using CentOS + our applications (this took a bunch of hours, coming up to speed with Amazon’s tools for images – mostly the usual self-signed cert woes) – this was much easier since we have a one-click build for probably 80% of our full-machine footprint (that last 20% is a doozy, though)
  • Add a local LDAP server (OpenDS) to the image – this was faster than troubleshooting some odd connectivity issues back to our usual infrastructure
  • Fire up some Amazon instances
  • Grab one of the existing public AMIs with Linux + JMeter (Ubuntu in this case)
  • Upload our scripts and start running (a minimal sketch of this last step follows this list)
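That last step is just JMeter’s non-GUI mode. If you want to script it rather than type it by hand, a small Java wrapper looks something like the sketch below – the JMeter install path, working directory, and plan name are placeholders, not our actual scripts.

```java
import java.io.File;

// Minimal sketch: kick off a non-GUI JMeter run and wait for it to finish.
// /opt/jmeter, the working directory, and portal-smoke.jmx are placeholders.
public class RunJMeterPlan {
    public static void main(String[] args) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "/opt/jmeter/bin/jmeter",
                "-n",                           // non-GUI mode
                "-t", "portal-smoke.jmx",       // test plan to execute
                "-l", "portal-smoke.jtl");      // where to write results
        pb.directory(new File("/home/ubuntu/loadtest"));
        pb.inheritIO();                         // stream JMeter's console output
        int exitCode = pb.start().waitFor();
        System.exit(exitCode);
    }
}
```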

Net cost: ~ $40

Thoughts on the experience:

  • Running JMeter inside EC2 is probably one of the things that makes this economically viable. Since our load runs generate GBs and GBs of traffic per minute in peak scenarios, paying per-GB bandwidth rates can rapidly get expensive – but we were able to keep all the traffic in-cluster
  • The fact that AMIs don’t have persistent storage by default (though EBS or newer AWS features get around this – for more $$$) is actually a plus. We can run a test, shut the image down, bring it back up, and have a clean image again for re-verification runs
  • We were looking for stress/soak/smoke testing – general benchmarks. For this need we weren’t yet at the stage of verifying performance on particular hardware or in a specific environment; we just needed rough, approximate numbers
  • We chose not to use Amazon’s load-balancing infrastructure, but might in the future
  • EC2 actually let us run all our test scenarios (we have 3-4 main ones) in parallel on spun-up instances – this cut our time to completion from something like 24-48 hours running them serially to closer to 5 hours after setup was done – a big savings, especially across multiple runs (a sketch of spinning up one driver per scenario follows this list)
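To give a flavor of that last point: today the per-scenario drivers could be launched from the current AWS SDK for Java (at the time we just used Amazon’s command-line tools by hand). The sketch below is illustrative only – the AMI ID, key pair, instance type, and file paths are placeholders, not our real setup.

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.RunInstancesRequest;

import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.Base64;
import java.util.List;

// Sketch only: launch one JMeter driver instance per test scenario, passing the
// scenario's plan name via user data. All identifiers below are placeholders.
public class LaunchScenarioDrivers {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        List<String> scenarios = Arrays.asList("smoke", "soak", "stress");

        for (String scenario : scenarios) {
            // Boot script for the driver: run the matching plan in non-GUI mode.
            String userData = "#!/bin/bash\n"
                    + "/opt/jmeter/bin/jmeter -n -t /home/ubuntu/" + scenario + ".jmx"
                    + " -l /home/ubuntu/" + scenario + ".jtl\n";

            RunInstancesRequest request = new RunInstancesRequest()
                    .withImageId("ami-xxxxxxxx")      // placeholder JMeter AMI
                    .withInstanceType("m1.large")     // placeholder size
                    .withKeyName("loadtest-key")      // placeholder key pair
                    .withMinCount(1)
                    .withMaxCount(1)
                    .withUserData(Base64.getEncoder().encodeToString(
                            userData.getBytes(StandardCharsets.UTF_8)));
            ec2.runInstances(request);
        }
    }
}
```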

All in all, it was a positive experience, with some next steps we’re looking to incorporate:

  1. Automate building AMIs – we did this by hand, and it was far and away the most time-expensive part (8 hours); it also meant that when we hit errors early on, we were immediately suspicious of configuration mistakes
  2. Incorporate this into our Continuous Integration system (Hudson) – if building the AMIs were automated (and Amazon’s tooling looks easy to automate), we could schedule this to run automatically on a nightly or weekly basis, which would be nice since right now it’s a pretty heavily manual process. Also, paying by the hour, the overall cost is a lot less than trying to manage dedicated systems (and coordinate scheduling for them). A rough sketch of the kind of call we’d script follows this list.
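For the AMI-automation piece, the call a Hudson job would end up scripting is roughly the one below. This is a sketch against the current AWS SDK for Java, not what we actually used (we ran Amazon’s bundling tools by hand, and CreateImage only applies to EBS-backed instances); the instance ID and image name are made up for illustration.

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateImageRequest;
import com.amazonaws.services.ec2.model.CreateImageResult;

import java.time.LocalDate;

// Sketch only: snapshot a configured (EBS-backed) instance into a reusable AMI.
// The instance ID and naming scheme are illustrative; a real job would take them
// from the nightly build (e.g. a Hudson parameter).
public class BuildLoadTestAmi {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        CreateImageRequest request = new CreateImageRequest()
                .withInstanceId("i-0123456789abcdef0")           // placeholder
                .withName("portal-loadtest-" + LocalDate.now())  // e.g. portal-loadtest-2009-12-20
                .withDescription("Nightly load-test image built from CI");

        CreateImageResult result = ec2.createImage(request);
        System.out.println("Registered AMI: " + result.getImageId());
    }
}
```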

2 thoughts on “Loadtesting on EC2 – in all cloud++”

  1. jayshao Dec 31, 2009 1:12 pm

    From Email – will def. check it out

    Hi Jay

    Just read your recent blog post (http://jay.shao.org/2009/12/20/loadtesting-on-ec2-in-all-cloud/) about load testing in the cloud, and noticed your “next steps” comment regarding automating the creation of images. For that, you might want to take a look at jclouds, which now supports creation and provisioning of images across a couple of clouds (incl. EC2) and has a bunch of Ant tasks which should make integration with Maven and/or Hudson easier…

    http://code.google.com/p/jclouds/
    http://code.google.com/p/jclouds/source/browse/#svn/trunk/aws/core/src/main/java/org/jclouds/aws/ec2

    This is a link to the Terremark docs – not EC2 but should give you a flavour of what the API will be like.

    http://code.google.com/p/jclouds/wiki/QuickStartTerremark

    Regards

    Andrew

  2. Pingback: OpenDS helps load testing in the cloud. « Ludovic Poitou's Blog
