
AWS outage

AWS had another outage last week and they posted their analysis of the event. A few things really jumped out at me. Granted, it’s easy to be a Monday morning quarterback, not to mention hindsight bias or the fact that there are far more complexities in that system than can be understood from the outside. Still…

  1. The original issue was caused by a memory leak bug – that type of thing is very difficult to catch. I can see how the bug went unnoticed; it’s the things that followed that were the real problem.
  2. Failed DNS update – assuming that DNS is the “rug that ties the room together” (as it should be), updates and their propagation are absolutely key to the stability of the system. That can and should be monitored; it’s not an emergent property of the system (see the first sketch after this list).
  3. Memory exhaustion – in retrospect it’s easy to say that memory should have been monitored per-process, even though that isn’t a baseline monitor most people configure. However, given that they said EBS servers dynamically consume all available memory, they effectively had no memory monitoring at all, since every server would report close to maximum consumption at all times. Still, that’s probably easier to see in hindsight (a per-process sketch follows this list).
  4. The system was not able to find enough healthy servers for failover – to me that’s one of the biggest failures of this outage, and it shouldn’t have been that difficult to predict. This is what Kitchen Soap was talking about in their post on automation, and something I’ve posted about as well. What’s especially troubling is that this was essentially the cause of the outage last year. The failover rate should have been throttled automatically based on the availability of healthy volumes, with some break points; that’s how the thundering herd/cascading effect could have been avoided (see the last sketch after this list). They already throttle API calls, so they are clearly aware of the potential problems with this kind of behavior.
  5. API Throttling – that was actually the right idea with a slightly wrong policy. They were too aggressive with the throttling, but it’s a fine line to tread and hard to get exactly right.
  6. Multi-AZ RDS problem – another bug that would have been very difficult to predict, in a complicated solution. This will happen in a complex system, but it’s yet another example of why you should verify everything and not make assumptions about availability or reliability. If your system is critical, it should not rely on the magic of Multi-AZ RDS, or it at least needs a contingency plan.
  7. ELB based on EBS – that was news to me. Perhaps it was a fair assumption beforehand, but I don’t recall Amazon explicitly stating that’s the case. Yet another reason to look at HAProxy instead.
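
On the DNS point above: whether an internal DNS update actually took effect and propagated is directly checkable, so it doesn’t have to be treated as an emergent property. Here is a minimal sketch using dnspython, with entirely made-up resolver addresses and record names, that compares what each resolver is serving against the expected answer:

    # Hypothetical sketch: verify that a DNS update has propagated to a set of
    # resolvers and matches the expected answer. Uses dnspython.
    import dns.resolver

    EXPECTED = {"10.0.0.15"}                      # value the update should have produced
    RESOLVERS = ["10.0.0.2", "10.0.1.2"]          # hypothetical internal resolvers
    RECORD = "ebs-control.internal.example.com"   # hypothetical record name

    def check_resolver(resolver_ip, name, expected):
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [resolver_ip]
        r.lifetime = 2.0                          # fail fast; a hung resolver is also a finding
        try:
            answers = {rr.address for rr in r.resolve(name, "A")}
        except Exception as exc:                  # NXDOMAIN, timeout, SERVFAIL, ...
            return False, str(exc)
        return answers == expected, answers

    for ip in RESOLVERS:
        ok, detail = check_resolver(ip, RECORD, EXPECTED)
        print(f"{ip}: {'OK' if ok else 'STALE/FAILED'} ({detail})")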
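
On the memory point: per-process memory is cheap to collect even if it isn’t a monitor most people configure by default. A toy sketch with psutil that at least shows which process is actually holding the memory, so a leak shows up as growth in one process rather than “the box is near its limit”:

    # Toy sketch: report per-process resident memory. On a server that
    # deliberately consumes all free memory, host-level usage tells you nothing;
    # per-process RSS at least shows where the growth is.
    import psutil

    def top_rss(n=10):
        procs = []
        for p in psutil.process_iter(["pid", "name", "memory_info"]):
            mem = p.info.get("memory_info")
            if mem is None:                # process died or access was denied
                continue
            procs.append((mem.rss, p.info["pid"], p.info["name"]))
        for rss, pid, name in sorted(procs, reverse=True)[:n]:
            print(f"{rss / 1024 / 1024:8.1f} MB  pid={pid}  {name}")

    if __name__ == "__main__":
        top_rss()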
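
And on the failover point: gating the re-mirroring rate on available healthy capacity isn’t exotic. The following is purely illustrative Python with invented names and thresholds, not how EBS works internally, but it shows the shape of the idea: shrink each failover wave as healthy capacity shrinks, and stop entirely below a break point instead of stampeding.

    # Illustrative only: throttle failover/re-mirroring based on how much healthy
    # capacity is actually available, with a hard break point below which automated
    # failover halts and a human gets paged. Names and numbers are invented.
    import time

    FAILOVER_BATCH = 50          # max volumes to re-mirror per wave
    MIN_HEALTHY_RATIO = 0.20     # break point: below this, stop automated failover

    def run_failover(pending_volumes, get_healthy_ratio, remirror):
        while pending_volumes:
            healthy = get_healthy_ratio()
            if healthy < MIN_HEALTHY_RATIO:
                print(f"healthy capacity at {healthy:.0%}; halting automated failover")
                return pending_volumes          # leave the rest for operators
            # Scale the wave size down as healthy capacity shrinks.
            batch_size = max(1, int(FAILOVER_BATCH * healthy))
            batch, pending_volumes = pending_volumes[:batch_size], pending_volumes[batch_size:]
            for vol in batch:
                remirror(vol)
            time.sleep(5)                       # pacing between waves
        return []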

Cloud Load Balancing, Part II

I wrote a post yesterday somewhat criticizing a statement by Radware. There was a very legitimate comment by the author asking for details, which I hadn’t provided. This response is a separate post because it will get long. With the caveat that my knowledge of their product is minimal, here are the details:

  • “key differences between a shared, cloud load balancer instance – offered by virtually all cloud providers (i.e. Amazon ELB, Rackspace CLB)” – that statement is misleading at best. AFAIK, Amazon hasn’t disclosed the architecture of its ELBs. It’s quite possible that they are shared, but that can’t be claimed for certain, and in any case it’s irrelevant: what matters is performance and features. ELB features are bare bones, whereas the performance is debatable and any argument in that regard should be test/data driven.
  • “When a load balancer fails, a new one with an identical configuration takes over” – that really depends on the distinction between DR and high availability. Amazon doesn’t provide SLAs for its ELBs, but you can run a single ELB across multiple availability zones, multiple ELBs, or ELBs in different regions. In those cases failover would typically be handled by DNS, either by distributing multiple A records up front or by updating DNS on failure (although that depends on your RTO tolerance); there are other, more obscure DNS methods as well. If you’re going with an HAProxy approach, then your failover method likely includes a monitoring daemon (watching logs, service state, etc.) kicking off an API call that at a minimum includes DisassociateAddress and AssociateAddress (see the sketch right after this list).
  • “a failure induced by any tenant can cause a broader failure impacting multiple tenants, (i.e Amazon ELB failure June-29th, 2012)” – in theory this might be true, but in practice it’s false. In almost any major cloud you’re in a shared environment, and another tenant can certainly affect your workload’s performance, but if one tenant’s failure can take down other tenants, that’s a security/isolation breach of gigantic proportions. More importantly, Amazon’s ELB failure on June 29th had nothing to do with shared tenancy whatsoever; at least as far as ELB is concerned, it’s a false statement. It was a bug in AWS, and that can happen with anyone’s offering.
  • “the need to redesign the application due to lack of advanced…functionality” – compared to ELB, that’s true (if you need those features). However, as with most AWS services, they don’t claim to do more than they actually do. If you use HAProxy, nginx or another load balancer you’ll get all of this functionality and more, and if you’re willing to pay the price you could even run a NetScaler.
  • “lack of control over the load balancer performance and capacity” – needs proof with tests/data. Again, with something like HAProxy you have full control, though the performance you get is then your own responsibility.
  • “inability to define custom health monitoring” – with ELB the functionality is limited, though you can point it at an HTTP page that runs a custom-written, more sophisticated health check (see the sketch at the end of this post). That does require more work. Again, I might sound like a broken record, but HAProxy and others will load balance whatever you want, with very complex checks.
  • “inability to load balance and optimize application delivery across multiple data centers” – this goes back to an old debate about GSLB and all the issues associated with it; however, it is largely true. Balancing across regions can get complicated, but in my experience that’s driven more by the application’s data model than by the load balancing itself.
  • “The ADC’s enterprise features alleviate all the shortcomings of cloud based load balancer” – honestly, I’m not even sure what to say here. FUD multiplied by marketing-talk.
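
To make the DisassociateAddress/AssociateAddress point above concrete, here is a rough sketch of the failover action for a self-managed HAProxy pair: move the Elastic IP from the dead primary to the standby. It uses the boto3 client for brevity (the underlying EC2 API actions are the two named above), all the IDs are made up, and in practice the monitoring daemon would call this once it declares the primary dead.

    # Hypothetical failover action for a self-managed HAProxy pair: move the
    # Elastic IP from the failed primary to the standby. IDs below are invented.
    import boto3

    ALLOCATION_ID = "eipalloc-0123456789abcdef0"    # the Elastic IP (made up)
    STANDBY_INSTANCE = "i-0fedcba9876543210"        # the HAProxy standby (made up)

    def promote_standby():
        ec2 = boto3.client("ec2")
        # Find the current association of the Elastic IP, if any.
        addr = ec2.describe_addresses(AllocationIds=[ALLOCATION_ID])["Addresses"][0]
        assoc_id = addr.get("AssociationId")
        if assoc_id:
            ec2.disassociate_address(AssociationId=assoc_id)
        # Point the address at the standby; traffic follows the IP.
        ec2.associate_address(AllocationId=ALLOCATION_ID, InstanceId=STANDBY_INSTANCE)
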
To be perfectly fair, I don’t want to come off as a staunch defender of AWS. It has a number of significant shortcomings, some of which I’ve written about before. By no means is it perfect for everyone, and anyone considering deploying a significant presence there (or with any other cloud provider) should do their own research. That said, there are enough legitimate criticisms that there is no need to resort to FUD. I haven’t touched a Radware product in close to a decade, but I do recall having a good impression of them. You could take an Alteon VA, run it in a colo, in your own datacenter or on a virtualized platform, and load balance your cloud presence; that’s a valid approach and may work for a lot of people. My guess, though, is that most customers would be better served by detailed technical analysis, performance tests and data, and well-thought-out technical whitepapers and diagrams.
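
As for the custom health monitoring bullet earlier, the usual workaround with ELB is an HTTP page that performs the real check and only exposes 200 vs 503 to the load balancer. A bare-bones sketch with the standard library, where check_database() is just a placeholder for whatever your application actually depends on:

    # Minimal sketch of a "smart" health check page: the load balancer only sees
    # HTTP 200 vs 503, but behind it you can check anything you like.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def check_database():
        # Placeholder: e.g. open a connection and run "SELECT 1" with a short timeout.
        return True

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/health":
                self.send_error(404)
                return
            healthy = check_database()           # add more checks as needed
            self.send_response(200 if healthy else 503)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"OK\n" if healthy else b"FAIL\n")

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8081), HealthHandler).serve_forever()

The same URL can then be polled by ELB’s HTTP health check or by HAProxy’s option httpchk, so one page serves either setup.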

 

TripAdvisor’s architecture

A long and very informative post about what TripAdvisor found when they tested AWS for their infrastructure. There are a lot of interesting tidbits in there, some of which are hard to analyze without seeing precise numbers. What I find interesting is that they essentially ported their existing datacenter setup to AWS. Granted, their stated goal was to look at cost/performance without changing the operational model. However, in my experience with AWS, simply reusing your datacenter architecture isn’t sufficient and will likely lead to a lot of disappointment. A couple of things stood out that would likely have improved their experience:

  • “Cloudwatch/monitoring was sufficient” – that was said with the caveat that it was enough for scaling decisions and that detailed monitoring would be more helpful. I would disagree: even in their own results they couldn’t see inside the JVM, so they didn’t have enough visibility to figure out what was wrong with GC. As far as scaling decisions go, it depends on the complexity of the application and the underlying architecture. If you can make the decision based simply on the CPU load of a given instance, then CloudWatch is great; in a lot of cases, though, you need far more detail to understand which tier to scale and whether scaling will even help (see the first sketch after this list). Also, depending on the availability tolerance of your application, 5-minute intervals might not be good enough.
  • Log collection – that seems to be done in a pretty antiquated way; it’s clearly not real time and is heavily dependent on local instance storage. Something like Graylog2/Logstash or Flume/Hadoop is far better (a toy shipper follows this list).
  • Configuration management – they use a custom in-house solution with a naming database. That is usually very difficult to change for historical reasons, but something along the lines of Puppet/Chef/Salt would give better results. The process is also somewhat reversed, with an instance responsible for figuring out what it needs to be, though it’s arguable which approach is better.
  •  Use of ELB – ELB is relatively cheap and pretty fast. Using something like HAProxy would give them far more granularity, visibility and better balancing overall.
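
On the monitoring point above: one way to get past basic host metrics without leaving CloudWatch is to publish your own application-level numbers, such as JVM heap usage, as custom metrics. A sketch with boto3; the namespace, metric name and the source of the heap number are all assumptions:

    # Sketch: publish an application-level metric (here, a made-up JVM heap
    # number) to CloudWatch so scaling/alerting can key off something more
    # meaningful than instance CPU. Namespace and metric names are invented.
    import boto3

    def publish_heap_used(instance_id, heap_used_bytes):
        cloudwatch = boto3.client("cloudwatch")
        cloudwatch.put_metric_data(
            Namespace="MyApp/JVM",                       # hypothetical namespace
            MetricData=[{
                "MetricName": "JvmHeapUsed",
                "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
                "Value": float(heap_used_bytes),
                "Unit": "Bytes",
            }],
        )

    # heap_used_bytes would come from wherever you already expose it
    # (JMX, a /metrics page, a GC log parser, etc.).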
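
And on log collection: the simplest step away from batching files off local instance storage is to follow the log and push each new line to a central collector as it’s written. A toy version, assuming a Logstash (or similar) TCP input listening on a made-up host and port; a real shipper would also need reconnects, buffering and log rotation handling:

    # Toy log shipper: follow a log file and forward new lines over TCP to a
    # central collector (e.g. a Logstash tcp input). Host, port and path are
    # made up.
    import socket
    import time

    COLLECTOR = ("logs.internal.example.com", 5000)   # hypothetical tcp input
    LOGFILE = "/var/log/app/access.log"               # hypothetical log file

    def follow(path):
        with open(path, "r") as f:
            f.seek(0, 2)                              # start at end of file
            while True:
                line = f.readline()
                if not line:
                    time.sleep(0.5)
                    continue
                yield line

    def ship():
        sock = socket.create_connection(COLLECTOR)
        for line in follow(LOGFILE):
            sock.sendall(line.encode("utf-8"))

    if __name__ == "__main__":
        ship()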

In any case, it’s a worthy read if you’re considering AWS.

 

OSCON 2012

I’ve been reading through the presentations that have been posted and found a few pretty interesting ones:

Go Daddy Compute Cloud – light on details, but somewhat interesting.

Comparing Open Source Private Clouds – not bad. It’s an overview of the major players in the space, like Eucalyptus, OpenStack, etc. He does mention OpenNebula but doesn’t include it in the comparison.

MySQL advanced replication – from Oracle. Mostly focused on the newer versions of MySQL, so if you’re stuck on 5.0/5.1 for whatever reason you’re SOL. No mention of Tungsten Replicator, which can do awesome things.

Reliability and scale in AWS – a very good presentation. Succinct and to the point.

Apache HTTPD 2.4.0 – overview of what’s new. Sounds intriguing, though I haven’t tried it out myself.

Cloud & Architectures

This is a great post about how the world of IT is changing, though I would somewhat disagree in one area: I don’t think it’s so black and white between the datacenter and “cloud” approaches. Even if you’re running your own metal, you’re still moving in the direction of the cloud. Most organizations, even those with their own DCs, will still run a hypervisor of some sort and manage their IT infrastructure dynamically via APIs. The real question is whether you’ll run on a private cloud, a public cloud, or a hybrid of the two.

http://highscalability.com/blog/2012/5/7/startups-are-creating-a-new-system-of-the-world-for-it.html