ULIDs Are Faster Than Auto-Increment IDs. Yes, Really. I Benchmarked 46 Million Rows to Prove It.

9 min read

By Tiago Basilio

Your users can see /api/invoices/347 in their browser. They change it to 348. Now they’re looking at someone else’s invoice. Sequential IDs leak information, expose your data volume, and invite enumeration attacks. You already know you should switch to ULIDs or UUIDs.

But every time you bring it up, someone says: “String primary keys are slower. Your indexes will bloat. Just add a separate public_id column.”

So I stopped debating and started benchmarking. Identical schemas, one with bigint auto-increment IDs, one with char(26) ULIDs. Six data tiers, from roughly 1,700 responses to 46 million. MySQL 8, resource-constrained Docker containers matching real DigitalOcean droplet specs. 14 production-realistic Eloquent queries. k6 load tests with real traffic patterns.

ULIDs were up to 63% faster. Not at toy scale, where the conventional wisdom holds. At the scale where performance actually matters, with millions of rows and real buffer pool pressure, ULIDs crushed auto-increment IDs on nearly every query. So not only do ULIDs fix the security problem, they’re actually the faster choice once your data grows.

The Setup

Both database variants ran identical Laravel applications with identical data distributions. The only difference was the primary key type on four user-facing tables (users, profiles, and two content tables with parent-child relationships). Internal tables like usage logs and audit trails kept auto-increment IDs in both variants, following the hybrid approach.

Six data tiers, from tiny to massive:

| Tier | Users | Responses | Total Rows |
|------|-------|-----------|------------|
| T1 | 50 | ~1,700 | ~4K |
| T2 | 500 | ~67,000 | ~145K |
| T3 | 2,000 | ~540,000 | ~1.1M |
| T4 | 5,000 | ~3,100,000 | ~6.5M |
| T5 | 10,000 | ~10,400,000 | ~21M |
| T6 | 25,000 | ~46,000,000 | ~94M |

MySQL 8.0 with a 128MB buffer pool, 1-2GB container memory, 1-2 vCPUs. Production-matched config, not a lab fantasy.

At Small Scale, IDs Win (Slightly)

At T1-T2, auto-increment IDs had a minor edge on most queries. Dashboard loads were 33% faster, admin queries 13% faster, profile lookups 38% faster. This matches conventional wisdom. Integer comparisons are cheaper than string comparisons. Smaller keys mean more rows per index page.

| Query | ID p50 | ULID p50 | Difference |
|-------|--------|----------|------------|
| Dashboard load | 2.25ms | 2.99ms | +33% |
| List with pagination | 1.02ms | 1.07ms | +5% |
| Admin user list | 2.07ms | 2.33ms | +13% |
| Profile with JSON | 0.18ms | 0.24ms | +38% |

At this scale, we’re talking about fractions of a millisecond. Nothing a user would ever notice. But if you stopped benchmarking here, you’d conclude “IDs are faster” and move on.

At Scale, Everything Reverses

Here’s where it gets interesting. Starting around T4 (5,000 users, 3.1M responses), ULIDs became faster on the heaviest queries. By T6 (25,000 users, 46M responses), ULIDs dominated on 11 of 14 queries.

One of the two most expensive queries in the application, the latest-revision lookup, tier by tier:

| Tier | Responses | ID p50 | ULID p50 | ULID vs ID |
|------|-----------|--------|----------|------------|
| T1 | ~1,700 | 1.5ms | 1.7ms | +13% (ID wins) |
| T2 | ~67K | 4.3ms | 4.4ms | +4% (tie) |
| T3 | ~540K | 6.1ms | 5.9ms | -3% (tie) |
| T4 | ~3.1M | 16.3ms | 12.8ms | -22% |
| T5 | ~10.4M | 21.9ms | 18.0ms | -18% |
| T6 | ~46M | 88.8ms | 36.4ms | -59% |

At T6, the full picture is striking:

| Query | ID p50 | ULID p50 | Difference |
|-------|--------|----------|------------|
| Dashboard load | 17.2ms | 24.7ms | +43% |
| List with pagination | 5.9ms | 3.4ms | -41% |
| Detail view | 2.8ms | 1.5ms | -45% |
| Latest revision | 88.8ms | 36.4ms | -59% |
| Parent-child chain | 6.6ms | 3.1ms | -53% |
| Usage dashboard | 4.9ms | 2.5ms | -49% |
| Filtered with joins | 97.2ms | 37.9ms | -61% |

The crossover point was around T3-T4, somewhere between 500K and 3M rows. Beyond that, ULID's advantage kept widening.

Why ULIDs Are Faster at Scale

The answer is temporal locality in B-tree indexes.

A ULID starts with a millisecond-precision timestamp. When you insert rows, they’re ordered chronologically in the B-tree. This means a single user’s recent data lives in a tight cluster of adjacent leaf pages.
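To make that structure concrete, here is a minimal Python sketch of ULID generation and timestamp decoding. This is a toy version; real implementations (including the one Laravel delegates to) also guarantee monotonic ordering for ULIDs created within the same millisecond, which this sketch does not.

```python
import os
import time

# Crockford base32 alphabet used by ULID (no I, L, O, or U).
# It is ASCII-ascending, so lexicographic order matches numeric order.
CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def ulid(timestamp_ms=None):
    """26-char ULID: 48-bit millisecond timestamp + 80 random bits."""
    if timestamp_ms is None:
        timestamp_ms = int(time.time() * 1000)
    value = (timestamp_ms << 80) | int.from_bytes(os.urandom(10), "big")
    chars = []
    for _ in range(26):  # 26 x 5 bits = 130 bits (top 2 bits are zero padding)
        chars.append(CROCKFORD[value & 0x1F])
        value >>= 5
    return "".join(reversed(chars))

def ulid_timestamp_ms(u):
    """Decode the first 10 characters back into the millisecond timestamp."""
    value = 0
    for c in u[:10]:
        value = (value << 5) | CROCKFORD.index(c)
    return value
```

Because the timestamp occupies the high-order bits and the alphabet sorts in ASCII order, two ULIDs created in different milliseconds always compare in creation order, which is exactly what keeps inserts appending to the right-hand edge of the B-tree.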

Auto-increment IDs don’t have this property. When User A creates a record, it gets ID 1,000,001. When User B creates one, it gets 1,000,002. User A’s next record gets 1,000,003. Everyone’s data is interleaved across the entire B-tree.

When you query “give me User A’s latest records,” with ULIDs the database reads a small cluster of adjacent pages. With auto-increment IDs, it has to chase pointers across the entire index.

At small scale, the B-tree fits in memory and random access is cheap. At large scale, when the working set exceeds the buffer pool, this locality difference becomes massive. The buffer pool was 128MB. The database at T6 was 65-74GB. Every cache miss means disk I/O, and clustered data means fewer misses.
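The cache pressure is easy to quantify from those figures; a back-of-envelope sketch:

```python
# Rough cache-coverage arithmetic using the benchmark's own figures.
buffer_pool_mb = 128
id_db_gb_t6 = 65.6      # ID variant size at T6
ulid_db_gb_t6 = 74.3    # ULID variant size at T6

for label, size_gb in [("ID", id_db_gb_t6), ("ULID", ulid_db_gb_t6)]:
    fraction = buffer_pool_mb / (size_gb * 1024)
    print(f"{label}: {fraction:.2%} of the T6 database fits in the buffer pool")
# prints roughly 0.19% for ID and 0.17% for ULID
```

With well under 1% of the data cacheable, nearly every page that isn't clustered near recent inserts is a disk read, which is why locality dominates at this tier.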

The one query where IDs consistently won? The dashboard aggregation, which touches data across many users. Temporal locality doesn't help when you need a cross-user view. That query stayed slower on ULIDs at every tier, by 43% at T6.

The Real Bottleneck Was a Query Pattern, Not the ID Type

While benchmarking, I discovered something more impactful than the ID type choice. The application had a scope to find “leaf” records in a parent-child chain, using a NOT EXISTS subquery:

SELECT * FROM documents
WHERE user_id = ?
AND NOT EXISTS (
    SELECT 1 FROM documents AS children
    WHERE children.parent_id = documents.id
)

The subquery used a raw query builder that bypassed the application’s user scope. It scanned every row in the entire table looking for children, regardless of which user was querying. This was the single worst-performing query at every tier.

Three fixes, benchmarked:

| Fix | Description | T6 ID p50 | T6 ULID p50 |
|-----|-------------|-----------|-------------|
| Original | Unscoped NOT EXISTS | 88.8ms | 36.4ms |
| Scoped subquery | Add user_id to subquery | 85.7ms | 33.7ms |
| is_latest boolean | Eliminate subquery entirely | 79.3ms | 29.5ms |

The is_latest boolean column eliminated the subquery entirely. Instead of asking “find records with no children,” you just filter on WHERE is_latest = true. At T6, this saved 9.5ms on IDs and 7ms on ULIDs.

The lesson: your query patterns matter far more than your primary key type. Both variants suffered from the same bad query. Fix the query first, then optimize the key type.
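Both fixes are straightforward to express. Here is a minimal sketch using SQLite and a made-up documents schema; table and column names are illustrative, not the article's actual code.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE documents (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL,
        parent_id INTEGER,               -- previous revision; NULL for the first
        is_latest INTEGER NOT NULL DEFAULT 1
    )
""")

def add_revision(user_id, parent_id=None):
    # Maintain is_latest at write time: the parent stops being the leaf.
    if parent_id is not None:
        conn.execute("UPDATE documents SET is_latest = 0 WHERE id = ?",
                     (parent_id,))
    cur = conn.execute(
        "INSERT INTO documents (user_id, parent_id) VALUES (?, ?)",
        (user_id, parent_id),
    )
    return cur.lastrowid

# User 1 has a three-revision chain; user 2 has a two-revision chain.
a = add_revision(1)
b = add_revision(1, a)
c = add_revision(1, b)
x = add_revision(2)
y = add_revision(2, x)

# Fix 1: scope the NOT EXISTS subquery to the same user.
scoped = conn.execute("""
    SELECT id FROM documents d
    WHERE d.user_id = ?
      AND NOT EXISTS (
          SELECT 1 FROM documents ch
          WHERE ch.parent_id = d.id AND ch.user_id = ?
      )
""", (1, 1)).fetchall()

# Fix 2: drop the subquery entirely and filter on the flag.
flagged = conn.execute(
    "SELECT id FROM documents WHERE user_id = ? AND is_latest = 1", (1,)
).fetchall()

assert scoped == flagged == [(c,)]  # both return user 1's single leaf revision
```

In production, the UPDATE and INSERT in add_revision would run inside one transaction; the flag trades a little write-time bookkeeping for removing the correlated subquery from every read.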

Storage Overhead Is Real but Manageable

ULID char(26) keys are wider than 8-byte bigints. This compounds across millions of rows and their foreign key indexes.

| Tier | Responses | ID Size | ULID Size | Overhead |
|------|-----------|---------|-----------|----------|
| T1-T4 | <3M | ~same | ~same | ~0% |
| T5 | ~10.4M | 15.5 GB | 16.8 GB | +8% |
| T6 | ~46M | 65.6 GB | 74.3 GB | +13% |

At T6, ULIDs cost an extra 8.7GB. That’s real. But the query performance gains at this scale more than compensate.

Seeding was also slower with ULIDs: 64 minutes for IDs vs 87 minutes for ULIDs at T6 (36% slower). Wider key inserts and index maintenance on char(26) columns add up during bulk operations.

Load Testing Confirmed the Benchmarks

Individual query benchmarks are useful, but I also ran k6 load tests simulating real user traffic (dashboard loads, browsing, detail views, creates, filtered queries) across four DigitalOcean droplet configurations.

At T5 (10.4M responses), with 20 concurrent users on a 1 vCPU / 1GB RAM server:

  • ID: median 7.8ms, p95 26.6ms
  • ULID: median 7.8ms, p95 26.8ms

Essentially identical. The smallest server configuration handled 10 million responses with 20 concurrent users at p95 under 27ms, on both variants, and all thresholds passed.

At T6 (46M responses), differences emerged in the tail:

  • 1 vCPU / 1GB tier peak p95: ID 71.2ms, ULID 83.7ms
  • 2 vCPU / 4GB tier peak p95: ID 55.5ms, ULID 55.3ms

The 2 vCPU / 4GB tier delivered the best peak performance, with ULID and ID converging at p95. Even at 46 million responses, the smallest server stayed under 85ms p95 with 20 concurrent users.

The Hybrid Approach

Not every table needs ULIDs. The rule is simple: if the ID appears in a URL, API response, or anywhere a user can see it, use a ULID. If it’s purely internal, keep auto-increment.

Use ULIDs on:

  • Users, profiles, and any content tables whose IDs appear in URLs or API responses

Keep auto-increment on:

  • Usage logs, audit trails, billing internals, and admin-only tables that are never exposed

In Laravel, this means applying the HasUlids trait selectively. Foreign keys pointing to ULID tables use foreignUlid(), but internal tables’ own primary keys stay as bigint.

What I’d Tell Someone Choosing Today

Use ULIDs for user-facing tables. The security benefits alone justify it, and the performance tax at small scale is negligible. At large scale, you actually win.

Don’t switch internal tables. Write-heavy logging tables with ULID foreign keys showed 5x overhead at scale during aggregation queries. Keep internal tables simple.

Fix your queries first. The biggest performance gain in my entire benchmark came from fixing a bad subquery pattern, not from choosing the right ID type. A well-indexed query with integer IDs will always beat a poorly-written query with ULIDs.

Don’t trust small-scale benchmarks. If I’d stopped at T2 (500 users), I’d have concluded ULIDs are 5-38% slower. The reversal doesn’t appear until you have enough data to exceed your buffer pool, which is exactly when performance matters most.

Storage is the real cost. At 46M rows, ULIDs added 13% storage overhead. Plan your disk accordingly. But if you’re at 46 million rows, you can probably afford the extra disk space.

The conventional wisdom that “string keys are always slower” is wrong. It’s more nuanced than that, and at the scale where performance actually matters, the conventional wisdom is backwards.