Abstractions should be Extracted, not Designed

I attended RailsConf in 2019, and of everything I heard there, only the quote above has stuck with me since. David Heinemeier Hansson (DHH) said it in his keynote. Of all people, the creator of Ruby on Rails should have something to say about abstractions.

I’ve understood this to mean that when I’m writing code that will eventually require an abstraction, I should write a first implementation without any abstractions (even if this means repeating some code). Then, once I’ve repeated myself two or three times, I should extract the abstraction in a refactor. This is as opposed to designing the abstraction ahead of time and doing all the coding within that framework.

The logic here is that if I design the abstraction without digging into the implementations, I’m very likely to get things wrong. For example, I thought each implementation of the abstraction would need to configure a certain field, but really they can all share one. Or I thought some value would always be an enum, but it really needs the flexibility of a string. Now, to make these changes, I have to change the abstraction as well as the implementations.

I follow this pattern religiously now. The underlying insight is there’s no better way to research and design the abstraction than by getting your hands dirty with some implementations. The design is the extraction.

I realized yesterday this pattern works for building a company as well. One of my friends co-founded Rentroom. He described how they built their application by automating the painful parts of their brick-and-mortar real estate management company. They are their own first, best customer. And, naturally, the problems they were solving (easier online payments, repair task management, security deposit holding) were being experienced by other landlords too. They extracted their SaaS product from their implementation.

My former company Flexport is another great example of this. Flexport started as a traditional freight forwarder (and in many ways still is one) but focused relentlessly on automating and streamlining the painful parts of the shipping process. Now Flexport needs to turn these internal automations into sellable SaaS products.

Finally, AWS might be the canonical example. Amazon realized they were duplicating much of their DevOps/infrastructure work across different business units, with different teams each managing their own servers and deployments. So Amazon created a team to offer DevOps as a service: just ask DevOps for a server or a database, and get back an endpoint and some credentials in return. This duplicated work was extracted from the implementers, and the details are now abstracted away from them. And since it turns out just about every other web-connected person on the planet would also like to abstract away these details, AWS now accounts for over 75% of Amazon’s profit.

Deployment with Docker and AWS Fargate

I’ve intended to write about our deployment stack with AWS Fargate for a while but kept putting it off. We’ve gotten tremendous value from Fargate, and there’s a serious dearth of approachable material online. I think these are related: Fargate is scary to write about. It’s an abstraction over your entire deployment, so there’s necessarily a lot of magic going on under the hood. The documentation is filled with buzzwords like orchestration and serverless and – as with all AWS docs – refers you to an exponentially increasing number of other AWS docs for acronyms like ELB, EC2, VPC, and EBS. But without being experts we’ve managed to use Fargate to set up continuous, rolling deployment of multiple applications. These have been running for two months now without any downtime. So what follows is a beginner’s guide to Fargate, written by a beginner. Let’s start by establishing some background.


Deploying is the process of getting the web applications you run locally during development running on the public internet (in this article, on AWS). This is harder than it sounds for a number of reasons.

  • Resources: When running locally you take for granted your computer’s resources like CPU, RAM, and disk. These all have to be provisioned on some machine in the cloud. Traditionally this meant provisioning an EC2 instance.
  • Operating System: Again taken for granted locally, but your provisioned instance needs an operating system – usually some Linux distro. This OS needs to be compatible with the technologies your application runs on.
  • Publishing and running the code: you need to get your code onto the instance, either as the raw source or a compiled binary. Then you need to compile and run this application. And you want to seamlessly roll the new deploy over the old one, without any downtime. On top of all this you might have multiple applications you need to do this for.
  • Reliability: your production deployment needs to keep running indefinitely. If some intermittent error occurs that crashes one of your applications you need that process to restart automatically or you’ll have downtime.
  • Services: your application will almost certainly use some database like Postgres and maybe many others like Redis. These services need to be installed and run somewhere your instance can access them.
  • Networking: when running the code locally all of our processes are running on the same machine making communication trivial. This will not be the case in the cloud so we have to manage how they’ll talk to one another from different machines.
  • Security: a deployed application is accessible to the world. All of our processes’ endpoints and internal communication need to be secure.
  • Secrets: your applications will likely hold many API keys and tokens to authenticate with other services. These need to be available on each instance, but they are highly sensitive and so should not be transferred frequently or over insecure channels.

I’m sure there are many more that I’m missing, but this is already a daunting list. Traditionally each of these steps involved configuring something in the AWS Console UI or CLI for each service. In addition to being a huge pain in the ass, this is dangerous: it amounts to an enormous amount of manually managed state. You have no easy way to track, let alone revert, changes made in the UI. There’s no way to test changes before making them. If you need to scale you have to manually provision new machines, take the old ones offline, and redo all the network and secrets configuration. It’s almost impossible to do this without some scheduled downtime.

Serverless Deployment

AWS Fargate uses a different paradigm called Serverless Deployment. This is a bit of a misnomer since plenty of servers are still involved. But what’s meant here is that no EC2 server instances are ever provisioned or configured manually. Instead you describe in code what infrastructure and configuration you want, pass this code to Fargate, and let AWS handle the provisioning and setup.

There are huge benefits to this arrangement. Because the configuration is now code that lives in version control, you can manage and audit changes through your normal PR review process. You can easily review and roll back any changes. You can set up tests to run in your CI to ensure nothing breaks.

More philosophically, we’ve switched from an imperative to a declarative paradigm. Instead of making a series of commands (imperatives) in the AWS Console that created a huge amount of state to manage, we’re now simply declaring once “This is the correct state of the world”. A new deployment (or a rolled back deployment) is as straightforward as declaring the old configuration.

The code that declares all of this lives in two places: one or more Dockerfiles and one or more Task Definitions.


Docker

Docker is an incredible tool; entire books have been written about it. The short version is that Docker enables containerization: packaging your source code along with the requirements to run it. In a file called a Dockerfile, you declare in code a virtual environment for your application to run in (for example, Linux with Python installed), any service dependencies like Postgres, and the steps to build and run your application. Any containerized application can then be run simply with docker run. This is an extremely powerful abstraction: it enables container orchestration tools like Kubernetes and Fargate to run and manage deployments of multiple applications without knowing anything about the internals of those apps.

Practically speaking, here’s what one of our Dockerfiles to deploy our Rust backend looks like:

FROM rustlang/rust:nightly-stretch
WORKDIR /usr/src/sheets/server
COPY Cargo.toml ./
COPY server ./server
RUN cargo build --release -p server
RUN cargo install --path server
EXPOSE 7000 8000
CMD ["/usr/local/cargo/bin/server"]

In order:

  1. Use a container image with the latest Rust nightly build installed. This includes a Debian install and other basic dependencies.
  2. Set up a working directory in the container
  3. Copy in the needed source code to the container
  4. Compile the Rust code to a binary
  5. Install the Rust binary
  6. Expose two ports (one for Websockets and one for HTTPS)
  7. Run the Rust binary

Now Fargate can deploy this application. Further, other developers can run it themselves without worrying about installing anything on their machines or having mismatched dependencies.

Task Definitions

We use a task definition to define how Amazon Elastic Container Service (ECS) should run our Docker containers. This means defining most of the deployment steps from our original list not handled by the Dockerfile.

You can find plenty of templates in the official documentation and I’ve uploaded a redacted version of ours (we use the NestJS framework, hence the names). Most of it is boilerplate, but to highlight the interesting parts:

  "containerDefinitions": [{
      "portMappings": [{
          "hostPort": 80,
          "protocol": "tcp",
          "containerPort": 80
      }],
      "environment": [{
          "name": "NEST_PORT",
          "value": "80"
      }],
      "secrets": [{
          "name": "ASM_GOOGLE_SECRET",
          "valueFrom": "arn:aws:secretsmanager:us-west-1:12345:..."
      }],
      "image": "12345.dkr.ecr.us-west-1.amazonaws.com/repo:latest"
  }],
  "memory": "512",
  "cpu": "256",
In order, we are:

  1. Defining how to map our container ports
  2. Setting environment variables
  3. Setting up our secrets using AWS Secrets Manager
  4. Defining what container image to use (using Amazon Elastic Container Registry)
  5. Defining what resources the machine we deploy to should have (CPU, memory, etc.)

These are the steps that, in the old deployment methodology, we’d have to do manually each time we wanted to set up a new machine. We would need to manually provision an EC2 instance, set up the networking, and copy over the secrets and environment variables to that machine. Instead we declare all these steps in code and Fargate handles them for us.

Additional Benefits

This level of automation is hugely valuable on its own. But Fargate also gives us plenty of additional benefits “for free.”

Because Fargate entirely understands how to deploy machines we can configure Fargate to provision additional machines automatically as necessary. So if our site suddenly comes under tremendous load (say because of a press push) Fargate can automatically add new resources to handle the scale. This is an incredible feature for preventing downtime and slowness.

Fargate also does safe, rolling deployments. When we deploy new code there is no downtime; Fargate handles taking down the old version and only does so once the new deployment is running safely. If the new deployment fails its health check, the old code stays up, again preventing major downtime.


Our Fargate experience has been amazing. We’ve been doing continuous deployment, including adding new resources and services, without any downtime for months. Our code deploys every time a change merges; deploys take twelve minutes. We’ve been saved from downtime multiple times by the Fargate guard rails. We deploy with confidence, even right before important demos.

I fully recommend this deployment stack to anyone, even novice AWS users. Though the setup seems daunting, you derive a huge amount of value from the effort; getting all of the features mentioned above by hand would require far more manual work. With this upfront cost paid, we’re ready to scale easily for the foreseeable future.

Sam Harris’ Waking Up and The Prestige

(Note: contains spoilers for the 2006 Christopher Nolan film The Prestige)

Currently I’m reading Waking Up by the terrific Sam Harris. I use his identically named app for meditation religiously – and was surprised to realize from reading the book that he’s the same atheist author I read fervently in my edgy, nihilist youth.

Waking Up is about spirituality. But Harris uses this word carefully and, unsurprisingly, approaches spirituality and mysticism from a secular, rationalist perspective. He draws a careful line between Eastern and Western religions. While Western monotheistic religions are mythic, dogmatic, and irrational, Harris views much of Eastern spirituality as fundamentally rational. “Religious” practices such as meditation are ways to understand the human mind through experimentation.

The great reveal at the end (or beginning) of such contemplation is that there is no self. What we think of as our “self” is in fact a very persistent illusion. Enlightenment is being able to truly grok this fact and see the one-ness of all things.

Harris interrogates this through examples in philosophy and science; for example, the split-brain behavior exhibited by patients whose left and right brain hemispheres have been surgically disconnected. If a single person can have two selves, then what can we do with our identification as one single self?

I’m enjoying working through more of these thought experiments on my own. For example, if we invented a machine that could teleport you from one planet to another by disassembling and re-assembling all the molecules that compose you, would you have “survived” that teleportation? To me the answer is clearly no. A copy has been created and the original destroyed.

This is the same question considered in Nolan’s The Prestige. A magician named Robert Angier (Hugh Jackman) is able to perform an incredible teleporting act. The reveal of the movie is that he has not been teleporting. He’s been using a machine to create a clone of himself on the other side of the room; the original is dropped into a tank and drowned. Every time he has performed the trick he has committed suicide. Reflecting, Angier admits that every time he performed it he wondered, “Would I be the man in the box or the prestige?”

Of course the answer is: both. The two men would be identical down to the molecule. Since memories, personality, and the illusion of self are all themselves composed of molecules they would be identical, too. Both men would feel they had lived continuously until that moment, when one continued and one drowned.

This doesn’t feel right. We have a strong belief that we have lived one continuous narrative in our life. But this too is an illusion created by our brains. Consider dreaming. Each night we spend the better part of eight hours living as someone else. Yet we wake up and – usually – feel that our life narrative is continuing from when we finally drifted off the evening before. What was the self doing during that time? And what of the self that was living in our dreams?

Static Types are Non-Negotiable

The choice between a statically typed and a dynamically typed language faces every engineer designing a new system. This is a well-worn debate, and one that reasonable people can disagree on. The correct choice frequently depends on the constraints of the project. However, having recently chosen TypeScript for our latest project and used it for a few weeks, I realized I’ve decisively come down on one side.

Statically typed languages are superior for any project larger than a script.

What follows is my highly opinionated argument for why.

Developer Experience

The developer experience is immeasurably better with static types.

IDE features are much more powerful. Auto-refactors like “Rename” work reliably. The IDE will auto-complete variables and methods for you. It will provide inline documentation when you are using an external library. It will tell you when you’re calling a method with incorrect arguments. You can confidently find everywhere a method is used with “Find References.”

Compilation gives you free tests. Untyped languages (UTLs) like Ruby require more tests. This might seem good, but by necessity many of these tests check behavior that statically typed languages (STLs) give you for free at compile time. In Ruby you have to write tests to confirm basic facts like:

  1. The class that I’m instantiating and the method that I’m calling exist
  2. I’m calling that method with the correct types and arity of arguments
  3. The method I’m calling returns the type I’m expecting

With STLs if the code compiles you can be sure of such truths with complete coverage. All without having written a single test. This belies the common idea that UTLs let you move more quickly. That might be true if you’re not writing any tests, but if you are then I’d rather write type annotations than an entire acceptance test suite.
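As a small sketch of this in TypeScript (all names here are invented for illustration), the compiler verifies each of the three facts above at every call site, with no tests written:

```typescript
// A hypothetical typed service: the signature alone encodes what
// Ruby would need several tests to verify.
class InvoiceService {
  // (2) Arity and argument types are checked at every call site.
  totalInCents(amounts: number[], taxRate: number): number {
    // (3) The declared return type is checked against this expression.
    const subtotal = amounts.reduce((sum, a) => sum + a, 0);
    return Math.round(subtotal * (1 + taxRate));
  }
}

// (1) Instantiating a class or calling a method that doesn't exist
// is a compile error, not a runtime surprise:
const service = new InvoiceService();
const total = service.totalInCents([1000, 250], 0.1);

// None of these would compile:
// service.totalInCent([1000], 0.1);   // misspelled method
// service.totalInCents([1000]);       // wrong arity
// service.totalInCents("1000", 0.1);  // wrong argument type
console.log(total); // 1375
```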

Further, compiling is much faster than running the test suite; typically fast enough for your IDE to highlight your errors in real-time. This means that developers can deal with such bugs immediately, while still in the headspace of the code they’re working on, rather than later after running the test suite.

Code Quality

One can write shitty or beautiful code in any language. But STLs encourage better coding practices. By necessity they force developers to think more about their interfaces. When you write poor code you have to stare it in the face.

Developers using UTLs like Python, Ruby, and JavaScript often write methods that accept a single large dictionary/hash/object as input. Values are then pulled from this object by key. Sometimes there are options that affect the flow of the code. Sometimes this entire object is passed on to another method. And so on.

I can’t overstate how horrible this pattern is. It makes the code almost impossible to reason about.

With an STL you could still use this anti-pattern, but you would be forced to type that monstrosity as a struct. Once you’re staring the beast in the face you are almost certain to rewrite the code – which is why you rarely see this anti-pattern in STLs. And even if you kept it, you would at least have documentation in the form of the type.
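To illustrate (with entirely invented names), here is what that grab-bag argument looks like once TypeScript forces you to write it down:

```typescript
// Hypothetical example: the implicit "options hash" made explicit.
// In Ruby or plain JavaScript this shape would be invisible; here
// every field and every control-flow flag must be declared.
interface CreateShipmentOptions {
  originPort: string;
  destinationPort: string;
  expedited: boolean;
  notifyEmails: string[];
  // Optional flags that silently change behavior deep in the call stack:
  skipCustomsCheck?: boolean;
  legacyPricingMode?: boolean;
}

function createShipment(opts: CreateShipmentOptions): string {
  const speed = opts.expedited ? "express" : "standard";
  return `${opts.originPort} -> ${opts.destinationPort} (${speed})`;
}

// Having to name every field makes the bloated interface obvious --
// and makes you want to break it up.
console.log(createShipment({
  originPort: "Shanghai",
  destinationPort: "Oakland",
  expedited: true,
  notifyEmails: ["ops@example.com"],
}));
```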

STLs encourage modularity. I’m not sure whether this is orthogonal to dynamic typing, but UTLs like Ruby, Python, and JavaScript do not have true private methods and variables. TypeScript, Java, and Go do. This means UTLs have no way to enforce the interface of a given abstraction; callers can always reach into the internals of the code. STLs have this mechanism. Further, static analysis in IDEs will suggest that members that can be private be made so. This leads to less coupled code in STLs.
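A minimal TypeScript sketch of that enforcement (hypothetical class, not from any real codebase): the public surface is exactly two methods, and the compiler rejects any caller reaching into the internals.

```typescript
class RateLimiter {
  // Internal state: callers cannot reach in and mutate these.
  private tokens: number;
  private readonly capacity: number;

  constructor(capacity: number) {
    this.capacity = capacity;
    this.tokens = capacity;
  }

  // The enforced interface: consume one token if any remain.
  tryAcquire(): boolean {
    if (this.tokens > 0) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }

  // Restore the bucket to full capacity.
  refill(): void {
    this.tokens = this.capacity;
  }
}

const limiter = new RateLimiter(1);
console.log(limiter.tryAcquire()); // true
console.log(limiter.tryAcquire()); // false
// limiter.tokens = 100;  // compile error: 'tokens' is private
```

In Ruby a caller could always `instance_variable_get` its way past this boundary; here the boundary is part of the build.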

STLs also promote cohesion. A given module might operate over a set of types. Some of these should be exported to consumers of the module, but most should be internal to the module and invisible to the caller. The classes that need to know about those types should be grouped together in the module – cohesion. If too many types are being exported you will see this issue: lots of imports outside the module, lots of exports within it. This code should be refactored. In UTLs you are just dealing with an increasingly diverse set of hashes. This problem is far less visible.
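Sketched in TypeScript module terms (a made-up billing module): the export list is the visible symptom described above. One type leaves the module; the helper types stay invisible.

```typescript
// billing.ts -- a hypothetical module

// Exported: the one type consumers of the module need.
export interface Invoice {
  id: string;
  totalCents: number;
}

// Internal: helper types never leave the module. If helpers like
// this start needing `export`, cohesion is leaking out of the module.
interface LineItem {
  cents: number;
  taxable: boolean;
}

export function buildInvoice(id: string, rawCents: number[]): Invoice {
  const items: LineItem[] = rawCents.map((c) => ({ cents: c, taxable: true }));
  const totalCents = items.reduce((sum, i) => sum + i.cents, 0);
  return { id, totalCents };
}

console.log(buildInvoice("inv-1", [100, 250]).totalCents); // 350
```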

Again: one can write shitty or beautiful code in any language. UTLs can be written with succinct interfaces, low coupling, and high cohesion. However only STLs actively encourage this behavior.


Documentation

How and how much to document your code is another great, ongoing debate. Generally we agree we need more documentation, but documentation, once written, tends to quickly get out of date as the code changes, and is then forgotten.

STLs have solved this problem. Types are documentation that live in-code and never get out of sync. If you have types and well-named methods and arguments I believe you have all the documentation you need.
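A one-function sketch of types-as-documentation (names invented): the signature answers the questions a doc comment would, and the compiler keeps it from drifting out of sync.

```typescript
// What goes in, what comes out, and what values are legal are all
// stated in the types -- no prose comment required.
type Currency = "USD" | "EUR" | "GBP";

interface Money {
  amountCents: number;
  currency: Currency;
}

function convert(from: Money, to: Currency, rate: number): Money {
  return {
    amountCents: Math.round(from.amountCents * rate),
    currency: to,
  };
}

const euros = convert({ amountCents: 1000, currency: "USD" }, "EUR", 0.9);
console.log(euros); // { amountCents: 900, currency: 'EUR' }
```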

Putting it Together

All of this leads to developers working faster and more confidently. This is not a minor point. Developer productivity is the main driver of success for a technology company. Increasing developer productivity should be the primary goal of any director of engineering.

I’d also argue this all leads to developers being happier. No one enjoys writing tests. No one enjoys accidentally releasing, reverting, and retroing a bug. No one enjoys having to spend a day understanding someone else’s shitty code or interface. In my experience these are some of the primary drivers of burnout. The cost of losing a productive engineer is massive. But if you do lose one, having self-documented and well-organized code will make the ramp-up time for a new engineer much shorter.

Finally, when you’re scaling a large engineering team the name of the game becomes encouraging best practices. You might have great coding instincts, but imparting those to a team of 50 or 100 engineers is impossible without support from the language and your tools. STLs are, simply, the best way to encourage good coding at scale.

Ride of a Lifetime

The Ride of a Lifetime: Lessons Learned from 15 Years as CEO of the Walt Disney Company by Robert Iger

Fast, enjoyable read that mixes autobiography with a high-level history of ESPN, Disney, ABC, Capital Cities, and the chain of acquisitions that took Iger through each. The book also has a self-improvement aspect; there’s plenty of Dale Carnegie in here, as well as some Warren Buffett folksy wisdom and a lot of suggestions for how to be a great manager, thinker, and person. Iger seems like a very humble guy who willed himself – through dedication, hard work, and open-mindedness – to the top of successive multinational corporations in different areas (sports, entertainment, technology).

The pace gets pretty rapid in the modern era once Iger becomes CEO of Disney; acquisitions of Pixar, Lucasfilm, Marvel, BAMTech, and 20th Century Fox go off in a flurry as Iger tries to navigate Disney into the digital distribution era with Disney+ and ESPN+. Very interesting to hear the inside baseball from a guy who just closed over $70 billion worth of acquisitions, though naturally you get the feeling most of the details are left out. We’ve yet to see how this strategy plays out, but Iger clearly subscribes to the “innovate or die” maxim and I bought some shares when I was finished.

Overall a fine but perhaps unmemorable read; I’d give it a 3.5 if I could. I recommend starting with the Acquired episode that turned me on to this book, and opening Ride of a Lifetime if you’re left looking for more.

View all my reviews

Balkan Ghosts

Balkan Ghosts: A Journey Through History by Robert D. Kaplan
My rating: 5 of 5 stars

Balkan Ghosts is an epic history of the part of the world that’s synonymous with fractious, violent internecine conflict. The breadth of the topic is massive, covering centuries of history in the lands that now constitute Serbia, Croatia, Macedonia, Kosovo, Romania, Moldavia, Bulgaria, Hungary, and Greece. The peoples of the region were converted to Catholicism, Eastern Orthodoxy, Judaism, and Islam, and were subject to conquest by the Byzantine Empire, the Ottoman Empire, the Habsburg Empire, Nazi Germany, and Soviet communism. This book took me almost a year to finish; it is massive, dense, and brutal.

The narrative foil of the book is a travelogue of the author’s trips throughout the region over twenty years; Kaplan befriends and interviews many colorful priests, politicians, and other characters in each nation. His goal, and the book’s, is to understand the historical and religious scars that underlie the violence and instability of the Balkans. To summarize: in a region with a thousand years of history and a multiplicity of languages, religions, and ethnicities, virtually every group has some rightful grievance against the others – times when their co-religionists were forcibly converted, or their ethnic kin were ejected from their land, enslaved by a conquering empire, or murdered in a violent pogrom. Every group feels itself wronged, oppressed, the victim of history. Depressingly, no one interviewed even admits the possibility of ethnic healing and co-existence; the only solutions they’ll entertain amount to genocide.

Indeed, this book was published in 1993. Six years later NATO would intervene in Yugoslavia to prevent the genocide of Kosovars by Serbs. I suppose the penetrating pessimism about the intractability of these problems is warranted. But in my news-watching lifetime the Balkans have not been in the news. Maybe this is cause for some optimism.


Book Review: And Then All Hell Broke Loose

And Then All Hell Broke Loose: Two Decades in the Middle East by Richard Engel

My rating: 4 of 5 stars

Semi-autobiographical narrative history of the modern Middle East by the journalist Richard Engel, who was embedded in Baghdad for the entirety of operation Shock and Awe, in Israel and Lebanon during the Second Intifada, and in Egypt, Tunisia, Libya, and Syria during the Arab Spring. Some great accounts of life as a journalist embedded in a war zone (including being kidnapped in Syria by a pre-natal ISIS, and rescued), a high-level overview of how the Middle East historically became such a disaster, and, more recently, how Bush and then Obama fucked up a fucked-up situation even more.

Engel’s basic premise is that the Middle East is a powder keg of centuries-old blood feuds, with intermittent genocides and war crimes, between Shias, Sunnis, Kurds, Jews, and Christians. These groups have seemingly no chance of coming to peaceable terms with one another. After the fall of the Ottoman Empire in WWI the Middle East was “organized” into modern nation-states by the victorious Allied powers; these states often contained a volatile mixture of ethnicities, so to keep things stable and the oil flowing they’ve been run by a succession of “strong man” dictators in the mold of Saddam Hussein, Gadhafi, Assad, Mubarak, etc. To varying degrees these men run pseudo-Islamic ethno-police states with a strong cult of personality, disregard for human life and rights, and occasional internal purges and border wars. But most importantly (to the West) these leaders keep true Islamists suppressed, support a cold peace with Israel, and keep the oil flowing to international markets. In exchange we looked the other way on all but the most egregious abuses of power. After the Cold War the United States became the primary guarantor of Middle Eastern stability and the inheritor of this devil’s bargain.

George W. Bush violated these terms by toppling Saddam on changing and tenuous pretenses. The war was easy, but he massively underestimated the pandora’s box he had opened, naively expecting that by planting the “seeds of Democracy” he could grow a democracy in the desert and cure all evils. Instead, a recalcitrant Sunni minority launched a civil war against the new Shiite government which, for its part, seemed to have little commitment to its army or democratic institutions. The disaffected and delusional Sunnis, now out of power after years controlling the army and government, would become the foundation for ISIS and the destabilization of the region.

Obama muddled matters further by having no clear doctrine on the Middle East. He turned his back on longtime US ally Mubarak in Egypt, allowing him to be toppled by protestors in the Arab Spring. He went further in Libya, directing air power to defend rebels against another long-time US ally in Gadhafi. Citizens in Arab countries began believing they could count on US and NATO support if they led a popular revolt against their own tyrannies. Yet when rebels in Syria did exactly this, Obama blanched; later he would draw a red line, but when it was crossed he still did nothing. Engel contends this was a massive failure by the Obama administration, essentially encouraging a Syrian civil war that he would not then help to end. I found this convincing, though it’s sad to think that Obama’s idealism and values would lead to his biggest foreign policy failure.

While these are what I’d call the “theses” of the book, most of the content describes Engel’s life as a journalist or gives a high-level overview of Middle Eastern and Islamic history. He covers topics like the early schism between Shiites and Sunnis, the Umayyad and Ottoman caliphates, the history of Israel, the founding of the House of Saud and Wahhabism, etc. He also describes working in a war zone and the work that entails: preparing safe houses, hiding $20k on your body, bribing police officers, getting smuggled over borders. He even recounts being kidnapped for ransom in Syria by an ISIS precursor. The narrative is gripping; Engel lived through some incredible moments and mixes his history lessons in with his travels.

Lately I’ve been really enjoying this type of narrative history; I find them easy and fun to read while also learning plenty (I had about twelve pages of notes from this book). Highly recommend for anyone who likes the same.


4/5: The Storm Before the Storm

The Storm Before the Storm: The Beginning of the End of the Roman Republic by Mike Duncan

The first book from Mike Duncan, the creator of the terrific The History of Rome podcast. It describes the relatively unexplored period of the late Roman Republic between two far more famous events: the conquest of Carthage and the rise of the Caesars and the Roman imperium. Duncan explores how Rome declined from a republican power controlling the known world into one oscillating between extremes of popular demagoguery and aristocratic oligarchy, leading to civil war and ultimately paving the way for the previously unthinkable ascent of a tyrant.

If you enjoyed the podcast as I did you’ll enjoy this book; the tone is conversational and narrative with liberal sprinklings of dry humor and modern analogies. In terms of events this time period has everything: the rise of an oligarchy in the Senate, the popular reaction in the Assembly, the introduction of mob violence, the creation of cults of personality and armies with personal loyalties, civil war, and finally tyranny being welcomed as a preferred alternative to anarchic chaos.

Roman history is long and the Roman empire was large, so by necessity this book moves very quickly and sometimes haltingly, with some eras or careers covered in a page while others take chapters. There’s also an unfortunate paucity of sources for some key events, and Duncan does a great job of weaving a coherent, enjoyable narrative out of what is available. Even so, the number of names and events covered can be overwhelming, and I have a massive set of Kindle highlights to go through.

My main criticism is I wish the themes were fleshed out more. The first half of the book sets up and returns to important themes: the abandonment of political norms, the introduction of mob violence, fights over suffrage and the expanding definition of “Roman”, increasing disregard for laws by those in power. Duncan gives plenty of examples of these but never builds them into a coherent framework or thesis. The second half of the book is largely narrative describing Marius’ and then Sulla’s paths to power, mostly eschewing analysis altogether. Entirely absent is an analysis of why this happened or how it could have been avoided.

That said, this was a hugely pleasurable read giving the joy of a fictional narrative but with non-fictional learning. I knew little about this fascinating time period that has so much to tell us about our own empire and polity.
