luminary.blog
by Oz Akan

We Need More Datacenters

One function. Four implementations. A 173-million-times performance difference.



Every piece of unoptimized code we ship silently taxes the world.

Billions of requests are processed every minute on the internet. For every billion requests, each millisecond of inefficiency turns into about 12 days of wasted compute.

In other words, at roughly a billion requests per second, every day we waste about 2,730 years of compute. That spans about 91 generations. In a single day, we burn through the lifetimes of all our descendants down to the 89-times great-grandchild. Looking backward, 2,700 years ago was still the Iron Age; in one day, we waste a span covering most of the recorded history of humankind.
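
The arithmetic is rough but easy to check. Here is a back-of-the-envelope sketch in TypeScript; the rate of one billion requests per second is an assumption for illustration, not a measured figure.

// Napkin math, not a benchmark. The request rate below is an assumed figure.
const WASTED_MS_PER_REQUEST = 1;
const ONE_BILLION = 1e9;

// A billion requests, each wasting one millisecond:
const wastedSeconds = (ONE_BILLION * WASTED_MS_PER_REQUEST) / 1000;
console.log(wastedSeconds / 86_400); // ≈ 11.6 days of compute

// Assume roughly a billion requests per second, sustained for a whole day:
const REQUESTS_PER_SECOND = 1e9; // assumption
const wastedPerDaySeconds = (86_400 * REQUESTS_PER_SECOND * WASTED_MS_PER_REQUEST) / 1000;
console.log(wastedPerDaySeconds / (365.25 * 86_400)); // ≈ 2,700+ years of compute per day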

Even an optimistic estimate says we are paying for twice the data-center capacity, twice the energy, and twice the carbon just to stand still.

Our “good enough” code is that bad. It costs us the planet.

Let’s run an experiment to test this claim with a function that finds the Nth Fibonacci number.

What Is the Fibonacci Series?

Let’s remind ourselves. The Fibonacci series is a sequence of numbers in which every term after the first two is the sum of the previous two. Formally, F(0) = 0, F(1) = 1, and F(n) = F(n-1) + F(n-2) for n >= 2. The opening terms look like:

  • F(0) = 0
  • F(1) = 1
  • F(2) = 1
  • F(3) = 2
  • F(4) = 3
  • F(5) = 5
  • F(6) = 8
  • F(7) = 13

Real-world appearances include branching plants, spiraling shells, and certain divide-and-conquer performance analyses.

The Baseline: TypeScript’s Naïve Recursion

I wouldn’t be surprised if you say “recursion” when I say Fibonacci. It’s a natural fit. Let’s write it.

export function fibNaive(n: number): number {
  if (n < 2) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

fibNaive implements the classical recursive definition: fib(n) = fib(n-1) + fib(n-2). It stops at the base cases (n < 2), returning n. Simple, but it recomputes overlapping subproblems, so the running time grows exponentially: O(2^n).

Benchmark:

fibNaive(50) -> 12586269025, time: 93773.58117 ms

One function call, 93 seconds of CPU time. That’s the exponential O(2^n) waste inflating your power bill. Surely we can do better.
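
To see where those 93 seconds go, count the calls. A quick instrumentation sketch (countedFib is a throwaway helper added here for illustration, not part of the demo):

let calls = 0;

// Same naive recursion, but it counts how many times it is invoked.
function countedFib(n: number): number {
  calls++;
  if (n < 2) return n;
  return countedFib(n - 1) + countedFib(n - 2);
}

countedFib(30);
console.log(calls); // 2,692,537 calls just for fib(30); fib(50) needs roughly 40 billion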

The key to optimization here is dynamic programming (DP)—a technique that solves complex problems by breaking them down into simpler overlapping subproblems and storing their results to avoid redundant computation. Instead of recalculating the same values repeatedly, we compute each value once and reuse it.
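
The caching idea is not specific to Fibonacci. Here is a minimal generic sketch, a memoize helper for single-argument numeric functions; it is my own illustration, not part of the demo code.

// Wraps a one-argument numeric function and caches its results in a Map.
function memoize(fn: (n: number) => number): (n: number) => number {
  const cache = new Map<number, number>();
  return (n: number) => {
    const cached = cache.get(n);
    if (cached !== undefined) return cached;
    const value = fn(n);
    cache.set(n, value);
    return value;
  };
}

// Because the recursive calls go through the wrapped `fib`, every subproblem is cached.
const fib: (n: number) => number = memoize((n) => (n < 2 ? n : fib(n - 1) + fib(n - 2)));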

1. Memoized Recursion

fibMemo uses recursion combined with a Map to cache values that have already been computed. On each call, it checks memo before recursing; only unseen states trigger deeper calls. When a new value is computed, it is stored in the map for future use. This cuts the time complexity down to O(n) at the cost of O(n) auxiliary memory.

export function fibMemo(n: number, memo = new Map<number, number>()): number {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n)!;
  const val = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, val);
  return val;
}

Benchmark: fibMemo(50) -> 0.03188 ms. Caching gives us a 2.9-million-times speedup.

2. Tabulation with a Dynamic Programming Table

fibTabArray uses bottom-up DP but materializes the whole table (table[i] = fib(i)).

export function fibTabArray(n: number): number {
  if (n < 2) return n;
  const table = new Array<number>(n + 1).fill(0);
  table[1] = 1;
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}

Benchmark: fibTabArray(50) -> 0.04342 ms. Slightly slower than memoization because of array allocation, but illustrates the textbook dynamic-programming build-up. Runs in O(n) time and O(n) extra space; handy when you want to inspect all intermediate Fibonacci numbers (e.g., for debugging or teaching), as sketched below.
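
If inspecting the intermediate values is the goal, a small variant can simply return the whole table. fibTable is a hypothetical helper for illustration; it is not part of the benchmarked code.

// Returns the full DP table so every fib(0..n) can be inspected.
export function fibTable(n: number): number[] {
  const table = new Array<number>(n + 1).fill(0);
  if (n >= 1) table[1] = 1;
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table;
}

// fibTable(7) -> [0, 1, 1, 2, 3, 5, 8, 13]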

3. Constant-Space Tabulation

This is also bottom-up DP, but it stores only the last two Fibonacci values (prev and curr) at each step. Keeps the runtime at O(n) while dropping auxiliary space to O(1). Preferred when the goal is just the final number and you want maximum speed with minimal memory.

export function fibTab(n: number): number {
  if (n < 2) return n;
  let prev = 0, curr = 1;
  for (let i = 2; i <= n; i++) {
    const next = prev + curr;
    prev = curr;
    curr = next;
  }
  return curr;
}

Benchmark: fibTab(50) -> 0.01692 ms. Same O(n) time but with O(1) space. That is the fastest we get with TypeScript.

Even in a garbage-collected language, algorithmic care turns a minute-and-a-half run into a few hundredths of a millisecond. That’s the difference between spinning up extra instances and letting a single server run cool.

Who said TypeScript is the best language for this kind of problem? This is where Rust comes into play.

Rust Goes Further

The Rust rewrite (demo.rs) mirrors the same four strategies. The code looks familiar—HashMap for memoization, vectors for tabulation—but the compiled result is leaner:

fib_naive(50) -> 69916.58687 ms
fib_memo(50) -> 0.04804 ms
fib_tab_array(50) -> 0.00250 ms
fib_tab(50) -> 0.00054 ms

Even with identical asymptotic complexity, Rust’s constant factors are smaller: the fastest version runs ~30× faster than the fastest TypeScript one, and the DP array version lands in microseconds without JIT warmup.

The Benchmark Table

Baseline = Rust fib_tab(50) at 0.00054 ms

Implementation                 Time (ms)        Relative Cost
TypeScript fibNaive(50)        93 773.58117     173 654 780×
TypeScript fibMemo(50)         0.03188          59.04×
TypeScript fibTabArray(50)     0.04342          80.41×
TypeScript fibTab(50)          0.01692          31.33×
Rust fib_naive(50)             69 916.58687     129 475 161×
Rust fib_memo(50)              0.04804          88.96×
Rust fib_tab_array(50)         0.00250          4.63×
Rust fib_tab(50)               0.00054          1.00×

Relative cost = (time ÷ 0.00054 ms).

The spread is staggering: the slowest function is 173 million times more expensive than the fastest.

The fastest Rust function is ~30 times faster than the fastest TypeScript function. A C version would likely be even faster. (In my quick test, the C version took microseconds—it was so fast I thought I’d made a mistake.)

Takeaways

I understand the pressure to deliver features quickly and the need to use a dev stack that most team members are familiar with. That was my stance as well, but things have changed dramatically in the past few years. Now, improving a function is as easy as asking your favorite LLM. The time investment required to optimize beyond the bare minimum has drastically decreased. Even for critical operations, your LLM can easily convert your prototype in Python to a more performant language. If the team doesn’t know Rust, it’s not a big deal anymore; an LLM can explain the code to you line by line.

Do it not to save money for your company, nor to provide a faster experience for your customers, but to do more with less.

Earth will thank you by letting your 89-times great-grandchild enjoy real experiences of blue skies and green forests, not just virtual ones powered by massive server farms.

Appendix

TypeScript Code

// 1) Naïve recursion
// T(n) ≈ T(n-1) + T(n-2) => O(2^n) time, O(n) stack space
export function fibNaive(n: number): number {
  if (n < 2) return n;
  return fibNaive(n - 1) + fibNaive(n - 2);
}

// 2) Memoization (Top-Down DP)
// Stores subproblem results in a Map: O(n) time, O(n) space
export function fibMemo(n: number, memo = new Map<number, number>()): number {
  if (n < 2) return n;
  if (memo.has(n)) return memo.get(n)!;
  const val = fibMemo(n - 1, memo) + fibMemo(n - 2, memo);
  memo.set(n, val);
  return val;
}

// 3) Tabulation with full table
// Builds the entire DP table: O(n) time, O(n) space, mirrors textbook DP
export function fibTabArray(n: number): number {
  if (n < 2) return n;
  const table = new Array<number>(n + 1).fill(0);
  table[1] = 1;
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}

// 4) Tabulation (Bottom-Up DP, O(1) space)
// Iteratively builds from base cases but only keeps last two values
export function fibTab(n: number): number {
  if (n < 2) return n;
  let prev = 0, curr = 1; // fib(0), fib(1)
  for (let i = 2; i <= n; i++) {
    const next = prev + curr;
    prev = curr;
    curr = next;
  }
  return curr;
}

// Benchmark helper
function bench<TArgs extends any[], TReturn>(
  label: string,
  fn: (...args: TArgs) => TReturn,
  ...args: TArgs
) {
  const start = process.hrtime.bigint();
  const result = fn(...args);
  const end = process.hrtime.bigint();
  const ms = Number(end - start) / 1_000_000;
  console.log(`${label.padEnd(22)} -> result: ${result}, time: ${ms.toFixed(5)} ms`);
}

// Quick comparison
if (require.main === module) {
  const N_SMALL = 50; // manageable for all four versions
  bench("fibNaive(50)", fibNaive, N_SMALL);
  bench("fibMemo(50)", fibMemo, N_SMALL);
  bench("fibTabArray(50)", fibTabArray, N_SMALL);
  bench("fibTab(50)", fibTab, N_SMALL);

  // Bigger n to show scalability (skip the naïve version!)
  const N_BIG = 100; // the tabulated version handles this easily
  bench("fibTab(100)", fibTab, N_BIG);
}

Rust Code

use std::collections::HashMap;
use std::fmt::Display;
use std::time::Instant;

// Naïve recursion: O(2^n) time, O(n) stack space
fn fib_naive(n: u64) -> f64 {
    if n < 2 {
        return n as f64;
    }
    fib_naive(n - 1) + fib_naive(n - 2)
}

// Memoization (top-down DP): O(n) time, O(n) space
fn fib_memo(n: u64, memo: &mut HashMap<u64, f64>) -> f64 {
    if n < 2 {
        return n as f64;
    }
    if let Some(&val) = memo.get(&n) {
        return val;
    }
    let val = fib_memo(n - 1, memo) + fib_memo(n - 2, memo);
    memo.insert(n, val);
    val
}

// Constant-space tabulation: O(n) time, O(1) space, keeps only the last two values
fn fib_tab(n: u64) -> f64 {
    if n < 2 {
        return n as f64;
    }
    let mut prev = 0.0;
    let mut curr = 1.0;
    for _ in 2..=n {
        let next = prev + curr;
        prev = curr;
        curr = next;
    }
    curr
}

// Tabulation with the full DP table: O(n) time, O(n) space
fn fib_tab_array(n: u64) -> f64 {
    if n < 2 {
        return n as f64;
    }
    let mut table = vec![0.0_f64; (n + 1) as usize];
    table[1] = 1.0;
    for i in 2..=n as usize {
        table[i] = table[i - 1] + table[i - 2];
    }
    table[n as usize]
}

// Benchmark helper: runs the closure once and prints the elapsed time in ms
fn bench<F, R>(label: &str, mut f: F) -> R
where
    F: FnMut() -> R,
    R: Copy + Display,
{
    let start = Instant::now();
    let result = f();
    let elapsed = start.elapsed().as_secs_f64() * 1_000.0;
    println!("{label:<22} -> result: {result}, time: {elapsed:.5} ms");
    result
}

fn main() {
    let n_small = 50;
    bench("fib_naive(50)", || fib_naive(n_small));
    bench("fib_memo(50)", || {
        let mut memo = HashMap::new();
        fib_memo(n_small, &mut memo)
    });
    bench("fib_tab_array(50)", || fib_tab_array(n_small));
    bench("fib_tab(50)", || fib_tab(n_small));

    let n_big = 100;
    bench("fib_tab(100)", || fib_tab(n_big));
}

Benchmark Results

# TypeScript
sh-3.2$ ts-node demo.ts
fibNaive(50) -> result: 12586269025, time: 93773.58117 ms
fibMemo(50) -> result: 12586269025, time: 0.03188 ms
fibTabArray(50) -> result: 12586269025, time: 0.04342 ms
fibTab(50) -> result: 12586269025, time: 0.01692 ms
fibTab(100) -> result: 354224848179262000000, time: 0.00363 ms
# Rust
sh-3.2$ rustc demo.rs; ./demo
fib_naive(50) -> result: 12586269025, time: 69916.58687 ms
fib_memo(50) -> result: 12586269025, time: 0.04804 ms
fib_tab_array(50) -> result: 12586269025, time: 0.00250 ms
fib_tab(50) -> result: 12586269025, time: 0.00054 ms
fib_tab(100) -> result: 354224848179262000000, time: 0.00067 ms