Tag: systems

Do Things that Don’t Scale

I was reading the latest Paul Graham essay, “Do Things That Don’t Scale”, and it got me thinking about devops, infrastructure automation and why “tools” teams fail. I’m sure I’m suffering from a bit of confirmation bias here, but the essay applies just as easily to a team or a culture within a company. If you see yourself as a startup, you have to think about who the users/customers/stakeholders of your product are going to be and what you need to do to delight them. That’s a path to success.

The quote that jumped out to me the most was this:

If you can find someone with a problem that needs solving and you can solve it manually, go ahead and do that for as long as you can, and then gradually automate the bottlenecks. It would be a little frightening to be solving users’ problems in a way that wasn’t yet automatic, but less frightening than the far more common case of having something automatic that doesn’t yet solve anyone’s problems.

I think that ties right back into the devops philosophy of people over process over tools and why a simple “automate all the things” approach is sometimes not enough.


AWS outage

AWS had another outage last week and they posted their analysis of the event. A few things really jumped out at me. Granted, it’s easy to be a Monday-morning quarterback, not to mention hindsight bias or the fact that there are far more complexities in that system than can be understood from the outside. Still…

  1. The original issue was caused by a memory leak bug – that type of thing is very difficult to catch. I can see how the bug went unnoticed, but it’s the things that followed that were the real problem.
  2. Failed DNS update – assuming that DNS is the “rug that ties the room together” (as it should be), updates and propagation are absolutely key to the stability of the system. That can and should be monitored; it’s not an emergent property of the system. (There’s a rough sketch of such a check after this list.)
  3. Memory exhaustion – in retrospect, it’s easy to say that memory should have been monitored per-process, though that’s not standard practice and generally isn’t a baseline monitor that people configure. However, given that they said EBS servers dynamically consume all available memory, they effectively had no memory monitoring at all, since every server would report close to maximum consumption at all times. Still, that’s probably easier to see in hindsight. (A sketch of a per-process check also follows the list.)
  4. The system was not able to find enough healthy servers for failover – to me, that’s one of the biggest failures during this outage, and it shouldn’t have been that difficult to predict. This is what Kitchen Soap was talking about in their post on automation, and something that I’ve posted about as well. What’s especially troubling is that this was essentially the cause of the outage last year. The failover rate should have been throttled automatically, based on the availability of healthy volumes, with some break points (sketched after this list). That’s how the thundering herd or cascading effect could have been avoided. They already throttle API calls, so they are clearly aware of the potential problems with this behavior.
  5. API Throttling – that was actually the right idea, just with slightly the wrong policy. They were too aggressive with the throttling, but it’s a fine line to tread and hard to get exactly right.
  6. Multi-AZ RDS problem – another difficult-to-predict bug in a complicated solution. This will happen in a complex system, but it’s yet another example of why you should verify everything and not make assumptions about availability or reliability. If your system is critical, it should not rely on the magic of Multi-AZ RDS, or at least it needs a contingency plan.
  7. ELB based on EBS – that was news to me. Perhaps it was a fair assumption beforehand, but I don’t recall Amazon explicitly stating that’s the case. Yet another reason to look at HAProxy instead.
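
To make point 2 above a bit more concrete, here is roughly what I mean by monitoring DNS propagation instead of treating it as an emergent property. This is only a sketch using the dnspython library; the record name, expected value and resolver addresses are placeholders, and in practice the result would feed whatever alerting you already run.

```python
# Sketch: verify that a DNS update is visible from a few public resolvers.
# Record name, expected value and resolver IPs below are hypothetical.
import dns.resolver

EXPECTED = "203.0.113.10"             # hypothetical new A record value
RESOLVERS = ["8.8.8.8", "1.1.1.1"]    # resolvers to spot-check

def record_propagated(name: str) -> bool:
    for server in RESOLVERS:
        resolver = dns.resolver.Resolver(configure=False)
        resolver.nameservers = [server]
        try:
            answers = resolver.resolve(name, "A")
        except Exception:
            return False
        if EXPECTED not in {rr.address for rr in answers}:
            return False
    return True

if __name__ == "__main__":
    if not record_propagated("internal.example.com"):
        print("ALERT: DNS update has not propagated everywhere")
```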
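
For point 3, per-process memory monitoring doesn’t have to be elaborate. Here’s a rough sketch using psutil; the process name and threshold are made up, and a real leak detector would track growth over time rather than take a single snapshot.

```python
# Sketch: alert when a specific process's RSS crosses a fixed threshold.
import psutil

RSS_THRESHOLD_BYTES = 4 * 1024 ** 3    # hypothetical alarm line, tune per host
WATCHED_PROCESS = "ebs-agent"          # hypothetical process name

def over_threshold():
    """Return (pid, rss) for every watched process above the threshold."""
    alerts = []
    for proc in psutil.process_iter(["pid", "name", "memory_info"]):
        if proc.info["name"] != WATCHED_PROCESS or proc.info["memory_info"] is None:
            continue
        rss = proc.info["memory_info"].rss
        if rss > RSS_THRESHOLD_BYTES:
            alerts.append((proc.info["pid"], rss))
    return alerts

if __name__ == "__main__":
    for pid, rss in over_threshold():
        print(f"ALERT: pid {pid} rss {rss / 1024 ** 2:.0f} MiB over threshold")
```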
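
And for point 4, this is the kind of throttling logic I mean: the failover budget shrinks as the pool of healthy volumes shrinks, with hard break points. The numbers are purely illustrative and not anything AWS has published.

```python
# Sketch: tie the failover rate to the fraction of healthy capacity left.
BREAK_POINTS = [
    # (minimum healthy fraction, max failovers allowed per minute) - illustrative
    (0.75, 100),
    (0.50, 25),
    (0.25, 5),
    (0.00, 0),   # below 25% healthy: stop and get a human involved
]

def allowed_failover_rate(healthy: int, total: int) -> int:
    """Return the per-minute failover budget for the current pool health."""
    if total == 0:
        return 0
    healthy_fraction = healthy / total
    for threshold, rate in BREAK_POINTS:
        if healthy_fraction >= threshold:
            return rate
    return 0

# Example: with 300 of 1000 volumes healthy, the budget drops to 5 per minute
# instead of letting every degraded volume trigger re-mirroring at once.
print(allowed_failover_rate(healthy=300, total=1000))  # -> 5
```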

Automation

This post about automation drew my attention. It’s well written and tries to address some of the problems with automation and the general “automate all the things” attitude. However, I don’t think the problem is with automation itself. This goes back to the root problem of complex systems that develop emergent properties, resilience engineering and “black swan” events. The author himself has a great post on this topic.

When automating a repetitive task, the chance of error – and, more importantly, the chance of a disproportionately significant impact – is very low. When you’re using automation to walk through a complex logic tree, the impact of an error increases considerably. The problems with automating for rare events that involve multiple components are:

  1. Especially as it applies to complex systems, it is very difficult to predict every variation. Inevitably something will be missed.
  2. When your automation doesn’t work as expected, the best-case scenario is that you simply didn’t handle a particular condition. The worst-case scenario is that you’ve introduced another (significant) problem into the environment which exacerbates the original. The result is often a cascading failure or a domino effect. There are hundreds of examples, with the GitHub outage and the EC2 outage from last year being just two of them. In my personal experience, I’ve seen dozens of cases like this.
  3. I would argue that the problem often gets worse with time. As the automation logic evolves and gets more complex, you believe it’s getting better: you start accounting for edge cases, you learn from experience and so on. Unfortunately, as your timeline moves forward, the chance of a “black swan” event gets higher and higher. And when it does happen, the impact will be proportionally magnified.

So, I think this is the wrong way to talk about the problem. Automation is a secondary factor that amplifies existing problems with system complexity. Here are some guidelines for designing around it:

  1. KISS. Can’t say that often enough. Too frequently, architecture discussions start too far down the complexity chain. The desire to do something off the charts on the “wickedly awesome” scale leads down the same path. If your architecture and processes look like this, you’re going in the wrong direction.
  2. Hire people who understand systemic thinking.
  3. Compartmentalize your application into self-sustaining tiers. If something fails, try to have enough resiliency to continue operating at reduced capacity/functionality (a toy example follows this list).
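
To illustrate point 3, here’s a toy circuit breaker that keeps serving a page when a non-critical tier is down. The tier name and thresholds are made up; the point is only that the failure stays contained and the rest of the application keeps operating at reduced functionality.

```python
# Sketch: contain a failing, non-critical tier behind a simple circuit breaker.
import time

class CircuitBreaker:
    def __init__(self, failure_limit=3, reset_after=30.0):
        self.failure_limit = failure_limit   # illustrative threshold
        self.reset_after = reset_after       # cool-off period in seconds
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        # While the breaker is open and still cooling off, skip the call entirely.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            self.opened_at = None
            self.failures = 0
        try:
            result = func()
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_limit:
                self.opened_at = time.monotonic()
            return fallback()

# Hypothetical usage: the recommendations tier is down, the page still renders.
recommendations = CircuitBreaker()

def fetch_recommendations():
    raise ConnectionError("recommendations tier unreachable")

page_extras = recommendations.call(fetch_recommendations, fallback=lambda: [])
print(page_extras)  # -> [] : reduced functionality instead of a failed request
```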


A couple of relevant articles that are really talking about the same thing:

1. An example from aviation, which has been dealing with complexity and resilience for a long time. The title is very fitting: “Want to build resilience? Kill the Complexity”. Equally applicable in almost every field.

2. Architecture of Robust, Evolvable Networks. That’s the abstract, and the actual paper is here. He talks about the internet as a whole, but smaller networks are often a microcosm of the very same thing.