We often talk about improving computing efficiency by optimizing hardware — faster GPUs, denser racks. But efficiency isn’t just about hardware. It’s about where and how computation happens, across three layers:
1. The compute layer: the software itself that performs the computation.
2. The orchestration layer: how computation is scheduled and distributed.
3. The physical layer: the hardware and data centers where it runs.
For decades, developers could rely on Moore’s Law to make software faster without changing a line of code. That era is over. Between 2010 and 2020, single-threaded CPU performance grew only about 2–3x. Compare that to ~30x from 1990 to 2000.
The orchestration layer, in contrast, has seen remarkable progress thanks to large open-source systems like Kubernetes. But the compute layer, the software itself, remains underexplored. As Wirth’s Law famously put it: “Software is getting slower more rapidly than hardware becomes faster.”
Performance engineers who optimize code at this level have a difficult job. They need to dive deep, find bottlenecks, and implement optimizations manually. Tools like profilers and compilers help, but they can’t make higher-level algorithmic decisions — they won’t replace a Bubble Sort with a Quick Sort or automatically vectorize a complex loop.
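To make the gap concrete, here is a minimal sketch of the kind of algorithmic decision a compiler won’t make for you: swapping an O(n²) Bubble Sort for an O(n log n) Quick Sort. Both functions are illustrative textbook versions written for this example, not code from any particular system.

```python
import random
import time

def bubble_sort(a):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    a = a[:]
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def quick_sort(a):
    """O(n log n) on average: partition around a pivot, recurse."""
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    return (quick_sort([x for x in a if x < pivot])
            + [x for x in a if x == pivot]
            + quick_sort([x for x in a if x > pivot]))

data = [random.randrange(10_000) for _ in range(2_000)]

t0 = time.perf_counter(); slow = bubble_sort(data); t1 = time.perf_counter()
t2 = time.perf_counter(); fast = quick_sort(data); t3 = time.perf_counter()

assert slow == fast == sorted(data)
print(f"bubble: {t1 - t0:.3f}s  quick: {t3 - t2:.3f}s")
```

No profiler or optimizing compiler will perform this substitution; it requires recognizing that the two routines compute the same result by fundamentally different strategies, which is exactly the judgment performance engineers supply today.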
Yet this is precisely where the largest performance gains lie. Rewriting or restructuring algorithms can yield orders-of-magnitude improvements, sometimes tens of thousands of times faster, as seen in matrix multiplication optimized through techniques like vectorization and parallel divide-and-conquer.
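The matrix-multiplication example can be reproduced in miniature. The sketch below (an illustrative benchmark, assuming NumPy is available) compares a textbook triple loop in interpreted Python against NumPy’s `@` operator, which dispatches to a vectorized, cache-blocked BLAS routine; even at this small size the gap is typically several orders of magnitude.

```python
import time
import numpy as np

n = 200
A = np.random.rand(n, n)
B = np.random.rand(n, n)

def matmul_naive(A, B):
    """Textbook triple loop: O(n^3) scalar operations, no vectorization."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i, k] * B[k, j]
            C[i, j] = s
    return C

t0 = time.perf_counter(); C_naive = matmul_naive(A, B); t1 = time.perf_counter()
t2 = time.perf_counter(); C_fast = A @ B; t3 = time.perf_counter()

# Same result, vastly different cost.
assert np.allclose(C_naive, C_fast)
print(f"naive: {t1 - t0:.2f}s  vectorized: {t3 - t2:.4f}s")
```

The full published speedups come from stacking such techniques (vectorization, parallelism, cache-aware divide-and-conquer) on top of one another, which is why the compound effect reaches into the tens of thousands.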
And it’s not only about speed. As data centers approach near-perfect Power Usage Effectiveness (PUE ≈ 1), the software layer increasingly dominates total energy consumption. To continue improving efficiency, we must learn to automate software optimization at scale.
This — the automation of compute-layer efficiency — may well define the next frontier of computation. It’s not just an enabler of faster systems; it’s a prerequisite for sustainable and intelligent computing in the decades ahead.