Keep doing the reading

A growing phenomenon I have observed over the last year is the use of frontier models to summarize differing points of view. This is fine, so long as the topic at hand is throwaway.

If you care about the topic, and you want it to become a durable part of your mental model, you need to wrestle with it. Struggle with the ideas. Sit with them. Reconcile them against each other in your own mind. “Update your Bayesian priors,” as the Silicon Valley dweebs would say.

You can’t understand something by skipping the work. Actually doing the work will set you apart in a world where competence becomes easier to fake.

(I also believe there’s an implied corollary: Know when mastery is NOT important.)

.NET struct performance degradation by size

I benchmarked equality, GetHashCode, and HashSet operations across different C# type implementations. The tests compared struct, readonly struct, record, and readonly record struct at various sizes (16-128 bytes).

Why?

The conventional wisdom has been “don’t put more than 16 bytes in a struct” on 32-bit CPU architectures, and no more than 32 bytes on 64-bit CPUs, but mostly the former. I wanted to see what the actual performance degradation is as struct size increases.

I like to use readonly record struct to model aspects of my domain, knowing that the compiler will optimize the type away to just the properties it contains while guaranteeing domain correctness.
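For example, a small domain value might look like this (an illustrative type, not one from the benchmark code):

// Value semantics, immutability, and compiler-generated equality,
// with the data stored as plain fields at runtime.
public readonly record struct Money(decimal Amount, string Currency);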

Setup

Tests were run on an Apple M3 using .NET 8 with BenchmarkDotNet’s ShortRun configuration.

  • 16 bytes: 4 int properties
  • 32 bytes: 8 int properties
  • 64 bytes: 16 int properties
  • 128 bytes: 32 int properties

record is a reference type, so increasing the number of properties doesn’t change its copying characteristics the way it does for a struct, but I scaled the properties anyway to keep the basis for comparison as even as possible.
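For illustration, the 16-byte variants were shaped roughly like this (type names are mine, not the exact benchmark code; the IEquatable&lt;T&gt; variant appears under Recommendations below):

// 16 bytes of data: four ints. The larger sizes scale the field count.
public struct PlainStruct { public int A, B, C, D; }

public readonly struct ReadonlyPlainStruct
{
    public readonly int A, B, C, D;
    public ReadonlyPlainStruct(int a, int b, int c, int d) => (A, B, C, D) = (a, b, c, d);
}

public readonly record struct RecordStruct(int A, int B, int C, int D);

public record RecordClass(int A, int B, int C, int D);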

Equality Performance

Type                         | 16-byte (ns) | 32-byte (ns) | 64-byte (ns) | 128-byte (ns)
Struct                       | 7.66         | 8.06         | 10.83        | 16.46
Struct + IEquatable          | 0.00         | 1.25         | 3.61         | 9.82
Readonly Struct              | 7.56         | 8.02         | 10.91        | 16.51
Readonly Struct + IEquatable | 0.00         | 1.08         | 3.21         | 9.64
Readonly Record Struct       | 0.00         | 1.00         | 2.82         | 7.57
Record                       | 0.94         | 1.27         | 2.55         | 6.32

GetHashCode Performance

Type                         | 16-byte (ns) | 32-byte (ns) | 64-byte (ns) | 128-byte (ns)
Struct                       | 12.41        | 12.64        | 10.88        | 13.13
Struct + IEquatable          | 1.90         | 3.14         | 9.45         | 17.71
Readonly Struct              | 11.14        | 12.95        | 10.98        | 13.22
Readonly Struct + IEquatable | 1.73         | 2.87         | 8.96         | 16.82
Readonly Record Struct       | 0.00         | 0.43         | 2.67         | 9.80
Record                       | 1.33         | 1.79         | 4.31         | 10.82

HashSet.Contains Performance

Type                         | 16-byte (ns) | 32-byte (ns) | 64-byte (ns) | 128-byte (ns)
Struct                       | 20.27        | 23.59        | 24.00        | 34.56
Struct + IEquatable          | 5.08         | 6.73         | 15.90        | 27.34
Readonly Struct              | 20.47        | 23.87        | 24.08        | 33.87
Readonly Struct + IEquatable | 5.11         | 6.61         | 15.50        | 27.41
Readonly Record Struct       | 2.42         | 4.61         | 9.57         | 20.65
Record                       | 6.14         | 6.89         | 10.51        | 19.20

Memory Allocations

Type                          | 16-byte | 32-byte | 64-byte | 128-byte
Struct (Equals)               | 64 B    | 96 B    | 160 B   | 288 B
Struct (GetHashCode)          | 32 B    | 48 B    | 80 B    | 144 B
Struct (HashSet)              | 96 B    | 144 B   | 240 B   | 432 B
Readonly Struct (Equals)      | 64 B    | 96 B    | 160 B   | 288 B
Readonly Struct (GetHashCode) | 32 B    | 48 B    | 80 B    | 144 B
Readonly Struct (HashSet)     | 96 B    | 144 B   | 240 B   | 432 B
All others                    | 0 B     | 0 B     | 0 B     | 0 B

Key Findings

  • Structs without IEquatable<T> fall back to ValueType.Equals, which boxes and allocates, causing 7-16 ns of overhead per operation.
  • Records consistently outperform structs for GetHashCode, likely due to compiler-generated optimizations.
  • Readonly record structs combine the benefits of value semantics with record optimizations, showing best overall performance for collection operations.
  • Struct size impacts performance linearly – 128-byte types take 3-5x longer than 16-byte types for most operations.

Recommendations

  • Always implement IEquatable<T> on structs. ValueType.Equals is 3-5x slower. (See the sketch after this list.)
  • Use readonly record structs for small value types that need equality operations and collection membership tests.
  • Consider regular structs with IEquatable<T> for types larger than 64 bytes where mutability is needed.
  • Records (reference types) remain competitive for GetHashCode but have allocation overhead not shown in these benchmarks.
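As a sketch of the first recommendation, here is what an IEquatable<T> struct looks like; it avoids the boxing path of ValueType.Equals (field names are illustrative):

using System;

public struct EquatableStruct : IEquatable<EquatableStruct>
{
    public int A, B, C, D;

    // Strongly typed equality: no boxing, no reflection.
    public bool Equals(EquatableStruct other) =>
        A == other.A && B == other.B && C == other.C && D == other.D;

    public override bool Equals(object? obj) =>
        obj is EquatableStruct other && Equals(other);

    // Keep GetHashCode consistent with Equals.
    public override int GetHashCode() => HashCode.Combine(A, B, C, D);
}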

The most performant way to count substrings in .NET

I benchmarked 14 common approaches to counting substrings in .NET. The approaches differed by up to 60-70x in execution time, with memory allocations ranging from zero to over 160 KB.

Setup

Tests searched for substring “the” in two strings on an Apple M3 using .NET 8 with BenchmarkDotNet’s ShortRun configuration. The big string was the first chapter of The Hobbit, and the small string was the first 100 chars of the big string.

Results

Approach                             | Small (ns) | Large (ns) | Allocated
Span                                 | 17.71      | 8,227      | 0 B
IndexOf (Ordinal)                    | 18.93      | 8,662      | 0 B
IndexOf (OrdinalIgnoreCase)          | 20.47      | 10,463     | 0 B
String.Replace                       | 37.33      | 24,645     | 216 B / 87,963 B
Cached, compiled Regex               | 127.17     | 40,968     | 560 B / 162,880 B
Instantiating a Regex inline         | 416.44     | 49,698     | 2,528 B / 164,848 B
Static Regex (Regex.Match)           | 154.42     | 50,996     | 560 B / 162,880 B
String.Split                         | 145.47     | 70,195     | 304 B / 111,058 B
IndexOf (InvariantCulture)           | 1,216.64   | 523,154    | 0 B / 1 B
IndexOf (InvariantCultureIgnoreCase) | 1,314.57   | 534,426    | 0 B / 1 B
IndexOf (CurrentCultureIgnoreCase)   | 1,329.19   | 536,436    | 0 B / 1 B
IndexOf (CurrentCulture – default)   | 1,224.49   | 553,913    | 0 B / 1 B

The Allocated column shows allocations for the small and large texts respectively.

Key Findings

  • Ordinal string operations are 60x faster than culture-aware operations.
  • Span and IndexOf with StringComparison.Ordinal both achieve zero allocations and optimal performance.
  • Regex approaches allocate 160KB+ for large texts despite reasonable performance.
  • Split creates an array of all segments, explaining its 111KB allocation.
    • With larger strings, this creates an object on the Large Object Heap, which has different garbage collection characteristics and should be avoided.

Recommendation

If you’re a backend or line-of-business developer modeling your domain, you probably want IndexOf with the Ordinal or OrdinalIgnoreCase comparison, depending on your domain semantics.
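As a sketch, the counting loop looks something like this (the method name is mine; the Span approach is the same loop over a ReadOnlySpan<char>):

using System;

// Counts non-overlapping occurrences of `value` in `text` using an
// ordinal comparison, which skips culture-table lookups entirely.
static int CountOccurrences(string text, string value)
{
    var count = 0;
    var index = 0;
    while ((index = text.IndexOf(value, index, StringComparison.Ordinal)) >= 0)
    {
        count++;
        index += value.Length;
    }
    return count;
}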


Dictionary vs FrozenDictionary: when does the extra overhead break even?

FrozenDictionary offers faster reads but slower creation. Here’s when the trade-off makes sense.

The tl;dr

Based on benchmark data, here are the break-even points where FrozenDictionary becomes worthwhile:

Collection Size | Cache Hits    | Cache Misses
10 elements     | 276 reads     | 125 reads
100 elements    | 1,831 reads   | 804 reads
1,000 elements  | 22,157 reads  | 9,634 reads
10,000 elements | 217,271 reads | 104,890 reads

(Based on string keys and OrdinalIgnoreCase comparison.)

  • Cache misses break even faster – FrozenDictionary is much faster for failed lookups (0.33 ns vs 6-7 ns), so collections that expect misses justify the switch sooner.
  • Creation cost scales ~linearly with size – so larger collections need more reads to justify the switch.
  • Read performance is consistent – Once created, FrozenDictionary maintains good performance regardless of collection size.

Switch to FrozenDictionary when you expect:

  • Small collections (10-100 elements): Hundreds to low thousands of reads
  • Large collections (1K+ elements): 10K+ reads with cache misses, or 20K+ reads with all hits
  • A high percentage of misses: expected misses cut the number of reads needed to break even roughly in half.
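A minimal sketch of the switch, assuming string keys with OrdinalIgnoreCase comparison as in the benchmarks (names are illustrative):

using System;
using System.Collections.Frozen;
using System.Collections.Generic;

// Build once, at startup: this is where the one-time creation cost is paid.
var statusCodes = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
{
    ["active"] = 1,
    ["suspended"] = 2,
    ["closed"] = 3,
}.ToFrozenDictionary(StringComparer.OrdinalIgnoreCase);

// Read many times: lookups are what FrozenDictionary optimizes for.
if (statusCodes.TryGetValue("ACTIVE", out var code))
    Console.WriteLine(code); // 1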

Instantiation costs

FrozenDictionary’s creation penalty is substantial but decreases as collection size increases:

Elements | Dictionary    | FrozenDictionary | Overhead
10       | 90.30 ns      | 867.24 ns        | 9.6x
100      | 900.73 ns     | 6,285.94 ns      | 7.0x
1,000    | 10,597.66 ns  | 65,989.60 ns     | 6.2x
10,000   | 138,642.89 ns | 781,551.17 ns    | 5.6x

Hit + miss costs

Elements | Dict hit | Dict miss | Frozen hit | Frozen miss
10       | 5.48 ns  | 6.54 ns   | 2.66 ns    | 0.33 ns
100      | 5.77 ns  | 7.04 ns   | 2.83 ns    | 0.34 ns
1,000    | 5.45 ns  | 6.08 ns   | 2.95 ns    | 0.33 ns
10,000   | 5.64 ns  | 6.49 ns   | 2.68 ns    | 0.36 ns


2 minute fix: Ventura + Logitech Brio + dock

My Logitech Brio stopped working after I upgraded from Monterey to Ventura. It’s always been connected to an OWC dock, along with a bunch of other peripherals. Maybe I can save you 15-20 minutes by sharing what I did:

  1. Download the Logitech firmware update for the Brio.
  2. Connect the webcam directly to the computer. In my case, I had to attach a USB-A to USB-C adapter because my MacBook Pro only has USB-C ports. (Contrary to some forum posts, I didn’t need a whole new cable, an adapter worked fine.)
  3. Run the firmware update tool. My firmware was at v1.1, and the latest was 2.9. The update took less than a minute to complete.
  4. Remove the USB-A to USB-C adapter, and reconnect the camera to the dock in the same place it was before.

And that was it. You can use Photo Booth to test at steps 3 and 4 to make sure it’s working along the way. I also rebooted, and everything stayed fixed after the restart.

Circuit breakers in stock market trading

Because I can never remember them.

Level 1 halt (7% drop)

  • Trading will halt for 15 minutes if drop occurs before 3:25 p.m.
  • At or after 3:25 p.m.—trading shall continue, unless there is a Level 3 halt.

Level 2 halt (13% drop)

  • Trading will halt for 15 minutes if drop occurs before 3:25 p.m.
  • At or after 3:25 p.m.—trading shall continue, unless there is a Level 3 halt.

Level 3 halt (20% drop)

  • At any time during the trading day—trading shall halt for the remainder of the trading day.

Backdoor, mega-backdoor, and regular Roth IRA conversions

This is an honest-to-goodness note to my future self.

Backdoor Roth IRA

Definition: Making non-deductible contributions to a traditional IRA, then converting to Roth IRA.

Mechanics: Contribute up to $7,000 ($8,000 if 50+) to a traditional IRA without taking the tax deduction. Then convert this money to a Roth IRA. No additional taxes owed since the contribution was already made with after-tax dollars.

When it can be done: Any time, but primarily useful when your income exceeds direct Roth IRA contribution limits ($153k-$161k for 2025).

Notes: Works best with an empty traditional IRA to avoid pro-rata rule complications: conversions are taxed in proportion to the pre-tax share across all your traditional IRA balances, so a large existing pre-tax balance makes most of any conversion taxable.

Mega-backdoor Roth IRA

Definition: Converting after-tax 401k contributions to Roth IRA money.

Mechanics: Make after-tax contributions to your 401k beyond the pre-tax/Roth limits. Then either convert directly to Roth IRA (if plan allows) or roll to a rollover IRA first, then convert to Roth IRA. No additional taxes since money was already taxed.

When it can be done: Whenever your 401k plan allows after-tax contributions and in-service distributions or conversions.

Notes: Total 401k contributions (pre-tax + Roth + after-tax) are limited to $71,000 for 2025 ($78,500 if 50+). Not all 401k plans offer this option.

Introducing Haystack – a grab bag of extension methods for .NET

Like many developers, I have collected a bunch of useful methods over time. Most of the time, these methods don’t have unit tests, nor do they have performance tests. Many of them have origins at StackOverflow — which uses the MIT license — and many of them don’t.

I started collecting them formally about two years ago. Recently I decided to actually turn them into something I could consume via NuGet, because I was getting fed up with copying and pasting code everywhere.

Compatibility

Haystack targets .NET Standard 1.3, which means it works with:

  • .NET 4.6+
  • .NET Core 1.0+
  • Mono 4.6+
  • UWP 10+

Tradeoffs

  • Performance vs maintainability: If I have to choose between maintainability and raw speed, I’ll choose maintainability. Where there was more than one maintainable approach, I chose the faster one, using BenchmarkDotNet to determine the winner. In some cases, like constant-time string comparisons, slower is actually better: a comparison that exits early can leak information, so the security-critical paths are purposely slow, while less sensitive areas use the faster implementation.
  • Correctness: For the most part, each method has unit tests associated with it.

Examples

string.ConstantTimeEquals

Constant-time string comparison matters in cryptography for various reasons: a fast comparison that exits early can leak information through timing, so we want to exhaustively check all the bytes in the string even when we know early on that the strings aren’t equal.

const string here = "Here";
const string there = "There";
 
var areSame = here.ConstantTimeEquals(there);   // false
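For reference, a minimal sketch of how such an extension can be implemented (not necessarily Haystack’s exact code):

public static class ConstantTimeExtensions
{
    // Accumulates differences with bitwise OR so the loop always runs
    // over every character; the timing depends only on the length.
    public static bool ConstantTimeEquals(this string a, string b)
    {
        if (a is null || b is null) return ReferenceEquals(a, b);
        if (a.Length != b.Length) return false;

        var diff = 0;
        for (var i = 0; i < a.Length; i++)
            diff |= a[i] ^ b[i];

        return diff == 0;
    }
}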

string.TrimStart and string.TrimEnd

It’s useful to be able to remove substrings from the beginning and/or end of a string, with or without a StringComparer overload.

const string trim = "Hello world";
const string hello = "Hello worldThis is a hello worldHello world";
 
var trimFront = hello.TrimStart(trim);   // This is a hello worldHello world
var trimEnd = hello.TrimEnd(trim);       // Hello worldThis is a hello world
var trimBoth = hello.Trim(trim);         // This is a hello world
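Consistent with the example above, here is a sketch of how the TrimStart half can be implemented (Haystack’s actual version may differ):

using System;

public static class TrimStringExtensions
{
    // Removes a single leading occurrence of `value`, if present.
    public static string TrimStart(this string source, string value,
        StringComparison comparison = StringComparison.Ordinal) =>
        source.StartsWith(value, comparison)
            ? source.Substring(value.Length)
            : source;
}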

The library is growing bit-by-bit, and contributions are welcome!

Productivity fetishism

I switched to Firefox recently, which has a “Recommended by Pocket” section on the New Tab page. As I expected, many of the recommendations are productivity fetish articles from Lifehacker and similar rubbish. Their job is not to make you more productive–whatever that means–it’s to keep you reading.

Instead:

  1. Discover what’s valuable. Talk to people with high visibility and the insight to match. That could be an executive, or it could be your spouse.
  2. Do only valuable things. Being busy doesn’t mean you’re doing anything worth doing.
  3. Learn your tools. If something feels like it’s harder than it should be, you’re using the wrong tool, or you don’t know your tools well enough.

If you follow these guidelines, you’ll be happier, less stressed, and deliver more value.