Dictionary vs FrozenDictionary: when does the extra overhead break even?

FrozenDictionary offers faster reads but slower creation. Here’s when the trade-off makes sense.

The tl;dr

Based on benchmark data, here are the break-even points where FrozenDictionary becomes worthwhile:

| Collection Size | Cache Hits | Cache Misses |
|---|---|---|
| 10 elements | 276 reads | 125 reads |
| 100 elements | 1,831 reads | 804 reads |
| 1,000 elements | 22,157 reads | 9,634 reads |
| 10,000 elements | 217,271 reads | 104,890 reads |

(Based on string keys and OrdinalIgnoreCase comparison.)
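These break-even counts fall out of a simple amortization: the extra time spent freezing the dictionary, divided by the time saved per read. A minimal sketch of that arithmetic, using the 10-element rows from the creation and lookup tables further down (the helper name is mine, not part of any API):

```csharp
using System;

// Break-even reads ≈ extra creation cost / per-read savings.
static double BreakEvenReads(
    double dictCreateNs, double frozenCreateNs,
    double dictReadNs, double frozenReadNs)
    => (frozenCreateNs - dictCreateNs) / (dictReadNs - frozenReadNs);

// All hits:   (867.24 - 90.30) / (5.48 - 2.66) ≈ 276 reads
// All misses: (867.24 - 90.30) / (6.54 - 0.33) ≈ 125 reads
Console.WriteLine(BreakEvenReads(90.30, 867.24, 5.48, 2.66)); // ~275.5
Console.WriteLine(BreakEvenReads(90.30, 867.24, 6.54, 0.33)); // ~125.1
```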

  • Cache misses break even faster – FrozenDictionary is much faster for failed lookups (0.33 ns vs 6-7 ns), so collections that see many misses justify the switch sooner.
  • Creation cost scales roughly linearly with size – larger collections therefore need more reads before the up-front cost pays off.
  • Read performance is consistent – once created, FrozenDictionary maintains fast lookups regardless of collection size.

Switch to FrozenDictionary when you expect:

  • Small collections (10-100 elements): Hundreds to low thousands of reads
  • Large collections (1K+ elements): 10K+ reads with cache misses, or 20K+ reads with all hits
  • A high percentage of misses: the break-even read count roughly halves when most lookups are for keys that aren't present.
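When the trade-off does pay off, the switch itself is small: build the entries once, then freeze them with the same comparer you would have given the Dictionary. A minimal, hypothetical sketch (the MimeTypes table and its contents are illustrative; the OrdinalIgnoreCase comparer matches the setup used in the benchmarks):

```csharp
using System;
using System.Collections.Frozen;
using System.Collections.Generic;

// Hypothetical read-heavy lookup table: built once, frozen once, read many times.
internal static class MimeTypes
{
    private static readonly FrozenDictionary<string, string> ByExtension =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
        {
            [".json"] = "application/json",
            [".html"] = "text/html",
            [".png"]  = "image/png",
        }
        .ToFrozenDictionary(StringComparer.OrdinalIgnoreCase);

    // Falls back to a default instead of throwing on a miss.
    public static string Resolve(string extension) =>
        ByExtension.TryGetValue(extension, out var mime)
            ? mime
            : "application/octet-stream";
}
```

Because freezing is the expensive part, the usual pattern is to pay it once, for example in a static readonly field or at startup, and serve every subsequent read from the frozen instance.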

Instantiation costs

FrozenDictionary's creation penalty is substantial, though the relative overhead shrinks as the collection grows:

| Elements | Dictionary | FrozenDictionary | Overhead |
|---|---|---|---|
| 10 | 90.30 ns | 867.24 ns | 9.6x |
| 100 | 900.73 ns | 6,285.94 ns | 7.0x |
| 1,000 | 10,597.66 ns | 65,989.60 ns | 6.2x |
| 10,000 | 138,642.89 ns | 781,551.17 ns | 5.6x |

Hit + miss costs

| Elements | Dictionary hit | Dictionary miss | FrozenDictionary hit | FrozenDictionary miss |
|---|---|---|---|---|
| 10 | 5.48 ns | 6.54 ns | 2.66 ns | 0.33 ns |
| 100 | 5.77 ns | 7.04 ns | 2.83 ns | 0.34 ns |
| 1,000 | 5.45 ns | 6.08 ns | 2.95 ns | 0.33 ns |
| 10,000 | 5.64 ns | 6.49 ns | 2.68 ns | 0.36 ns |

The code
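A minimal BenchmarkDotNet sketch of the kind of measurement behind the tables above: string keys compared with OrdinalIgnoreCase, one hit and one miss lookup per dictionary type, plus the two creation paths. The collection sizes match the tables, but the key format and the class and member names are illustrative assumptions, not the original benchmark source.

```csharp
using System;
using System.Collections.Frozen;
using System.Collections.Generic;
using System.Linq;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class DictionaryVsFrozenBenchmarks
{
    // Sizes mirror the tables above.
    [Params(10, 100, 1_000, 10_000)]
    public int Size;

    private Dictionary<string, int> _dictionary = null!;
    private FrozenDictionary<string, int> _frozen = null!;
    private string _existingKey = null!;
    private const string MissingKey = "missing-key";

    [GlobalSetup]
    public void Setup()
    {
        // Illustrative keys; the point is the comparer, not the key format.
        var pairs = Enumerable.Range(0, Size)
            .ToDictionary(i => $"key-{i}", i => i, StringComparer.OrdinalIgnoreCase);

        _dictionary = new Dictionary<string, int>(pairs, StringComparer.OrdinalIgnoreCase);
        _frozen = pairs.ToFrozenDictionary(StringComparer.OrdinalIgnoreCase);
        _existingKey = $"key-{Size / 2}";
    }

    // Creation costs.
    [Benchmark] public object CreateDictionary() =>
        new Dictionary<string, int>(_dictionary, StringComparer.OrdinalIgnoreCase);

    [Benchmark] public object CreateFrozen() =>
        _dictionary.ToFrozenDictionary(StringComparer.OrdinalIgnoreCase);

    // Lookup costs: one present key, one absent key.
    [Benchmark] public bool DictionaryHit() => _dictionary.TryGetValue(_existingKey, out _);
    [Benchmark] public bool DictionaryMiss() => _dictionary.TryGetValue(MissingKey, out _);
    [Benchmark] public bool FrozenHit() => _frozen.TryGetValue(_existingKey, out _);
    [Benchmark] public bool FrozenMiss() => _frozen.TryGetValue(MissingKey, out _);
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<DictionaryVsFrozenBenchmarks>();
}
```

BenchmarkDotNet needs an optimized build, so run it with dotnet run -c Release.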
