AutomaCon 2015 Recap

by Thursday Bram | Wednesday, October 28, 2015

Automation has a long way to go before it’s mainstream; during his keynote at AutomaCon, Luke Kanies said that less than 15 percent of organizations use automation. While that lack of buy-in may make many engineers feel like they’re just treading water, the reality is that we’re only just starting to see the opportunities that the right infrastructure can offer.

We need to get to the point where half of the organizations that could benefit from automation have implemented something, even if it’s just a tiny container testing out the concept — until we reach that point, it’s tough to shake out which options are the best and which are merely average. Selling the broader world on automation may be a challenge, but as Michael Stahnke said, “I’ve never seen anyone let go because automation worked.” Conferences like AutomaCon are the first step to bridging that gap.

Automation Must Be a Repeatable Process

In talk after talk at AutomaCon, we heard about the importance of development workflows — it’s practically a mantra when we’re meditating on the theme of Infrastructure as Code. Joe Damato went so far as to suggest that infrastructure as code may be literally impossible: code does things outside of your frame of reference unless you’ve read every line, all the way down. (You haven’t, Damato noted.)

The reality most of us face is that infrastructure code decays quickly; as Fletcher Nichol put in his talk, “Infrastructure code must be executed frequently to prevent it from degrading over time.” We see symptoms of this decay constantly:

  • bloated / complex code
  • frequent change / code churn
  • widely dispersed changes
  • numerous interfaces or entry points

This sort of decay hinders the continuing process of development, forcing a sort of pain-driven development (hat tip to James Fryman for that phrase). We need to invest in creating not only development workflows that support automation, but also cultures that allow us to reap the most benefit from our work. Adam Jacob’s keynote covering dev ops kung fu showcased one approach to building these systems. Jacob was clear that this approach, especially as used at Chef, is only one approach — anyone can fork the dev ops kung fu materials or start over from scratch, just like in martial arts.

Small changes in culture pay big dividends. Even creating a basic expectation that all code gets reviewed before it’s pushed to production — and sticking to it, even for code that doesn’t seem to need an extra pair of eyes — can make a fundamental difference. If your infrastructure is truly code, you need to apply the same workflow philosophy you would to code produced for any other purpose, if not an even more rigorous one.

Automation Shouldn’t Be a Bespoke Process

Right now, automating any part of your infrastructure requires at least some hand-crafting of a custom solution — but reaching mainstream adoption requires that automation as a whole evolve toward a more plug-and-play approach. That requirement is especially true for operational tools. Frankly, building tools and infrastructure that just scratch our own itches may not be the best move we can make.

Consider Docker, as James Turnbull did during his talk. Turnbull pointed out that Docker became successful so quickly because it “solved real problems relatively elegantly.” Docker wasn’t built for its developers and, as a result, there are a lot of things Docker isn’t capable of. Instead, Docker’s designers used empathy to discover what their customers really needed in order to create and manage better distributed applications. Honestly, what’s under the hood doesn’t matter — if the best possible solution had been humans flipping switches by hand, Docker would have found a way to build that tool effectively instead. Creating a tool that customers can actually use has to take priority over just about everything else if our goal is to see automation adopted by the majority of organizations out there.

Automation is an Evolving Process

As automation spreads out across more organizations, different concerns will become more important. In a lot of ways, we sit in an echo chamber right now, because most of the organizations already on board with automation have a heavy focus on development. But that’s not necessarily true of many organizations that could benefit from these tools.

Seeing examples from the real world really drove home some evolutionary necessities that are otherwise easy to miss. Dr. James Cuff walked the audience through how research computing is scaling. He builds computing clusters for Harvard, dealing with 60K+ load-balanced CPUs (and that number keeps climbing). Dealing with the energy needs of that sort of data center highlighted the need for both sustainable power sources and ways to green the underlying infrastructure. Energy efficiency is about to become very important across the board:

  • product longevity
  • algorithm design
  • data center design
  • resource allocation
  • operating systems
  • virtualization
  • configuration

Many of the speakers from AutomaCon have posted their slides online. You should definitely check out all the slide decks, but if you have to prioritize, start with these three:

Whether or not you were able to attend AutomaCon, you’ll be hearing about the ideas discussed at the conference for years to come.

NOTE: this is a guest post (the first ever on the Heavy Water blog \o/) by Thursday Bram, who received a free media pass to attend AutomaCon in exchange for her candid assessment of the event, above. No changes were made to Thursday’s post with the exception of replacing links to slide decks with links to the actual presentations, which we have finally made available (see: AutomaCon 1.0 Release Notes!).