The Hot Path

A few years ago I had a problem. An API, consuming a dozen microservices, needed to scale, and not just by an order of magnitude or two. It needed to handle a load far greater than it was originally intended for. How to do it?

When a new developer comes along and asks that perennial question of “where should I start?” I always recommend a framework. Libraries are brilliant for learning how to implement the functionality for a specific area, but when you’re putting together your first app/site/game/etc., nothing beats the order and structure of working within a framework.

Frameworks give you a lot, and I’m not just talking about the batteries-included philosophy. They give you a common bootstrap, ease of integration and, perhaps most importantly for a growing team, a consistent pattern that everyone can follow. Everyone knows where the migrations are stored, how the commands are executed and where the logic lives (for the most part).

But frameworks are not a silver bullet, and the price you pay for that structure and inclusiveness is performance. Laravel, the current framework du jour in PHPland, is 15 times slower than raw PHP when handling multiple queries. The numbers change based on the work being done; with fewer queries and more processing it can manage as little as 1% of the possible maximum. Normally this is not an issue, since optimising for developer time over server time just makes sense at the beginning, but eventually you need to look at how to make things faster.

There are many tricks. Pre-process things. Shunt tasks off to a queue. By the time you’re optimising opcaches and fine-tuning kernel stacks, about the only thing left to do is sacrifice the framework. It doesn’t have to be a nuclear option, though.

A system with a dozen entities and multiple endpoints doesn’t need to be rewritten outside of the framework. When we looked at the traffic for our system we realised something: 99% of the requests were for a single endpoint, and most of the heavy lifting for that request could be done beforehand.

We left the framework doing what it was good at: collecting the data, working with the ORM and storing the result in one of several cache buckets. But for that one hot route, we set up a special endpoint in nginx pointing to a single PHP file.
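
To make that concrete, the framework-side half might look something like the sketch below. This is illustrative only: the model name, the relation and the cache key are invented for the example, and it assumes Redis is one of those cache buckets, warmed from a queued job or scheduled command.

```php
<?php
// Illustrative sketch: the framework keeps doing the expensive work ahead of time.
// The model, relation and key scheme are made up for this example.

use Illuminate\Support\Facades\Redis;

function warmHotPayload(int $id): void
{
    // All the heavy lifting stays inside Laravel: ORM, relations, transforms.
    $thing = \App\Models\Thing::with('relatedData')->findOrFail($id);

    // Store the finished JSON so the hot path only has to read and echo it.
    Redis::set('hot:payload:' . $id, $thing->toJson());
}
```

The nginx side is then just a matter of routing that one hot route straight at the bare file while everything else still hits the framework’s front controller. The endpoint name, paths and PHP-FPM socket below are placeholders rather than our actual config:

```nginx
# The one hot endpoint bypasses the framework entirely.
location = /api/v1/things {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/hot/hot-path.php;
    fastcgi_pass unix:/run/php/php-fpm.sock;
}

# Everything else goes through the framework as normal.
location / {
    try_files $uri /index.php?$query_string;
}
```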

This file was bare bones. As basic as you can get. It parsed the request, exiting as soon as an error was found. It grabbed the necessary payload from the cache and returned it as JSON. That was it. It didn’t have to check for environments or instantiate a container with cache drivers it would never use. There was no debug handler to disable, no route muxer and no request/response pipeline to hop through.

It took a URL and spat out JSON.
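
For a sense of scale, the whole file was on the order of the sketch below. The parameter name, key scheme and use of phpredis are illustrative rather than the real code, but the shape matches what I described: validate, fetch, echo.

```php
<?php
// The entire hot path: no autoloader, no container, no middleware.
// Parameter name, key scheme and Redis details are illustrative.

header('Content-Type: application/json');

$id = $_GET['id'] ?? null;

// Exit as soon as the request is obviously bad.
if (!is_string($id) || !ctype_digit($id)) {
    http_response_code(400);
    echo '{"error":"invalid id"}';
    exit;
}

// phpredis extension; the payload was serialised to JSON ahead of time.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

$payload = $redis->get('hot:payload:' . $id);

if ($payload === false) {
    http_response_code(404);
    echo '{"error":"not found"}';
    exit;
}

echo $payload;
```

Nothing in there touches the ORM or boots a container; the worst case is a single Redis round trip and an echo.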

And we went from serving 120 req/s to 8,000. Same hardware. Same data. Only now we could serve traffic using a fraction of the servers we’d needed before, saving the client thousands.

This isn’t a lesson about why you should ditch frameworks and write everything in assembly. It’s not even about optimising an API so it can handle more requests. No. This is about finding that balance between the amount of time you want to spend writing something (and maintaining it!) and how the real world is going to treat it.

The world’s best API doesn’t matter if you can’t afford to run it on anything less than a supercomputer, and at the same time, the world’s fastest API doesn’t exist if you never finish writing it.

This isn’t an answer for everyone, but if you have a) the traffic to support it and b) the ability to effectively support two codebases (because that’s what you’re doing, ultimately), it can give you a bigger performance increase than anything else, short of rewriting in another language, which is an even bigger step.