Don’t use PDF.
Don’t use Word.
For heaven’s sake, don’t “make a deck”.
Just use markdown.
Agents can consume markdown so much easier than they can consume these other formats. Just use markdown.
I benchmarked equality, GetHashCode, and HashSet operations across different C# type implementations. The tests compared struct, readonly struct, record, and readonly record struct at various sizes (16-128 bytes).
The conventional wisdom has been “don’t have more than 16 bytes in a struct” on 32-bit CPU architectures, and no more than 32 bytes on 64-bit CPUs, though mostly the former. I wanted to see what the actual performance degradation is as struct size increases.
I like to use readonly record struct to model aspects of my domain knowing that the compiler will optimize the type away to just the properties it contains, while guaranteeing domain correctness.
Tests were run on an Apple M3 using .NET 8 with BenchmarkDotNet’s ShortRun configuration.
record is a reference type, so increasing the number of properties doesn’t change its performance characteristics much, but I sized the records anyway to keep the basis for comparison as even as possible.
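The shapes under test look roughly like this — a sketch of the 16-byte variants; the larger sizes presumably just add fields, and the type names are mine, not the benchmark’s:

```csharp
using System;

// Plain struct: Equals falls back to reflection-based ValueType.Equals,
// which boxes both operands and allocates.
public struct PlainStruct
{
    public long A;
    public long B;
}

// Struct implementing IEquatable<T>: strongly typed Equals, no boxing.
public readonly struct EquatableStruct : IEquatable<EquatableStruct>
{
    public readonly long A;
    public readonly long B;

    public EquatableStruct(long a, long b) { A = a; B = b; }

    public bool Equals(EquatableStruct other) => A == other.A && B == other.B;
    public override bool Equals(object obj) => obj is EquatableStruct s && Equals(s);
    public override int GetHashCode() => HashCode.Combine(A, B);
}

// Readonly record struct: the compiler generates the equivalent
// IEquatable<T> implementation, ==/!=, and GetHashCode for you.
public readonly record struct RecordStruct(long A, long B);
```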
| Type (Equals) | 16-byte (ns) | 32-byte (ns) | 64-byte (ns) | 128-byte (ns) |
|---|---|---|---|---|
| Struct | 7.66 | 8.06 | 10.83 | 16.46 |
| Struct + IEquatable | 0.00 | 1.25 | 3.61 | 9.82 |
| Readonly Struct | 7.56 | 8.02 | 10.91 | 16.51 |
| Readonly Struct + IEquatable | 0.00 | 1.08 | 3.21 | 9.64 |
| Readonly Record Struct | 0.00 | 1.00 | 2.82 | 7.57 |
| Record | 0.94 | 1.27 | 2.55 | 6.32 |
| Type (GetHashCode) | 16-byte (ns) | 32-byte (ns) | 64-byte (ns) | 128-byte (ns) |
|---|---|---|---|---|
| Struct | 12.41 | 12.64 | 10.88 | 13.13 |
| Struct + IEquatable | 1.90 | 3.14 | 9.45 | 17.71 |
| Readonly Struct | 11.14 | 12.95 | 10.98 | 13.22 |
| Readonly Struct + IEquatable | 1.73 | 2.87 | 8.96 | 16.82 |
| Readonly Record Struct | 0.00 | 0.43 | 2.67 | 9.80 |
| Record | 1.33 | 1.79 | 4.31 | 10.82 |
| Type (HashSet) | 16-byte (ns) | 32-byte (ns) | 64-byte (ns) | 128-byte (ns) |
|---|---|---|---|---|
| Struct | 20.27 | 23.59 | 24.00 | 34.56 |
| Struct + IEquatable | 5.08 | 6.73 | 15.90 | 27.34 |
| Readonly Struct | 20.47 | 23.87 | 24.08 | 33.87 |
| Readonly Struct + IEquatable | 5.11 | 6.61 | 15.50 | 27.41 |
| Readonly Record Struct | 2.42 | 4.61 | 9.57 | 20.65 |
| Record | 6.14 | 6.89 | 10.51 | 19.20 |
| Type (Allocations) | 16-byte | 32-byte | 64-byte | 128-byte |
|---|---|---|---|---|
| Struct (Equals) | 64 B | 96 B | 160 B | 288 B |
| Struct (GetHashCode) | 32 B | 48 B | 80 B | 144 B |
| Struct (HashSet) | 96 B | 144 B | 240 B | 432 B |
| Readonly Struct (Equals) | 64 B | 96 B | 160 B | 288 B |
| Readonly Struct (GetHashCode) | 32 B | 48 B | 80 B | 144 B |
| Readonly Struct (HashSet) | 96 B | 144 B | 240 B | 432 B |
| All others | 0 B | 0 B | 0 B | 0 B |
Structs that don’t implement IEquatable<T> fall back to ValueType.Equals, which boxes and allocates, causing 7-16 ns of overhead per operation; in these benchmarks that made them 3-5x slower.

I benchmarked 14 common approaches to counting substrings in .NET. The approaches differed by up to 60-70x in execution time, with memory allocations ranging from zero to over 160 KB.
Tests searched for substring “the” in two strings on an Apple M3 using .NET 8 with BenchmarkDotNet’s ShortRun configuration. The big string was the first chapter of The Hobbit, and the small string was the first 100 chars of the big string.
| Approach | Small (ns) | Large (ns) | Allocated |
|---|---|---|---|
| Span | 17.71 | 8,227 | 0 B |
| IndexOf (Ordinal) | 18.93 | 8,662 | 0 B |
| IndexOf (OrdinalIgnoreCase) | 20.47 | 10,463 | 0 B |
| String.Replace | 37.33 | 24,645 | 216 B / 87,963 B |
| Cached, compiled Regex | 127.17 | 40,968 | 560 B / 162,880 B |
| Instantiating a Regex inline | 416.44 | 49,698 | 2,528 B / 164,848 B |
| Static Regex (Regex.Match) | 154.42 | 50,996 | 560 B / 162,880 B |
| String.Split | 145.47 | 70,195 | 304 B / 111,058 B |
| IndexOf (InvariantCulture) | 1,216.64 | 523,154 | 0 B / 1 B |
| IndexOf (InvariantCultureIgnoreCase) | 1,314.57 | 534,426 | 0 B / 1 B |
| IndexOf (CurrentCultureIgnoreCase) | 1,329.19 | 536,436 | 0 B / 1 B |
| IndexOf (CurrentCulture – default) | 1,224.49 | 553,913 | 0 B / 1 B |
The Allocated column shows allocations for the small and large texts, respectively.
If you’re a backend or line of business developer modeling your domain, you probably want IndexOf with the Ordinal or OrdinalIgnoreCase comparer, depending on domain semantics.
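A counting loop along those lines might look like this — a sketch, and `CountOccurrences` is my name, not an API from the benchmark code:

```csharp
using System;

// Counts non-overlapping occurrences of needle using an ordinal search,
// which skips the culture-aware comparison machinery measured above.
// (Assumes a non-empty needle.)
static int CountOccurrences(string haystack, string needle)
{
    var count = 0;
    var index = 0;
    while ((index = haystack.IndexOf(needle, index, StringComparison.Ordinal)) >= 0)
    {
        count++;
        index += needle.Length;
    }
    return count;
}

Console.WriteLine(CountOccurrences("the cat and the dog", "the")); // 2
```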
FrozenDictionary offers faster reads but slower creation. Here’s when the trade-off makes sense.
Based on benchmark data, here are the break-even points where FrozenDictionary becomes worthwhile:
| Collection Size | Cache Hits | Cache Misses |
|---|---|---|
| 10 elements | 276 reads | 125 reads |
| 100 elements | 1,831 reads | 804 reads |
| 1,000 elements | 22,157 reads | 9,634 reads |
| 10,000 elements | 217,271 reads | 104,890 reads |
(Based on string keys and OrdinalIgnoreCase comparison.)
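Those break-even points fall out of simple arithmetic: the one-time creation overhead divided by the per-read savings. Using the 10-element measurements from the tables below:

```csharp
using System;

// One-time cost of freezing a 10-element dictionary vs creating a
// regular one, from the creation benchmark.
var creationOverhead = 867.24 - 90.30;   // ns

// Per-read savings, from the lookup benchmark.
var savingsPerHit  = 5.48 - 2.66;        // ns per successful lookup
var savingsPerMiss = 6.54 - 0.33;        // ns per failed lookup

Console.WriteLine(Math.Round(creationOverhead / savingsPerHit));  // 276 reads
Console.WriteLine(Math.Round(creationOverhead / savingsPerMiss)); // 125 reads
```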
FrozenDictionary is much faster for failed lookups (0.33 ns vs. 6-7 ns), so collections that take many cache misses justify the switch sooner. FrozenDictionary also maintains good performance regardless of collection size. Switch to FrozenDictionary when you expect read volume to exceed the break-even points above.
FrozenDictionary’s creation penalty is substantial but decreases as collection size increases:
| Elements | Dictionary | FrozenDictionary | Overhead |
|---|---|---|---|
| 10 | 90.30 ns | 867.24 ns | 9.6x |
| 100 | 900.73 ns | 6,285.94 ns | 7.0x |
| 1,000 | 10,597.66 ns | 65,989.60 ns | 6.2x |
| 10,000 | 138,642.89 ns | 781,551.17 ns | 5.6x |
| Elements | Dictionary Hit | Dictionary Miss | FrozenDictionary Hit | FrozenDictionary Miss |
|---|---|---|---|---|
| 10 | 5.48 ns | 6.54 ns | 2.66 ns | 0.33 ns |
| 100 | 5.77 ns | 7.04 ns | 2.83 ns | 0.34 ns |
| 1,000 | 5.45 ns | 6.08 ns | 2.95 ns | 0.33 ns |
| 10,000 | 5.64 ns | 6.49 ns | 2.68 ns | 0.36 ns |
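The usage pattern is build once, read many times. A minimal sketch (the `alpha`/`beta` data is illustrative, not from the benchmark code):

```csharp
using System;
using System.Collections.Frozen;
using System.Collections.Generic;

var source = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
{
    ["alpha"] = 1,
    ["beta"] = 2,
};

// ToFrozenDictionary analyzes the keys up front to pick an optimized
// internal strategy — that analysis is why creation is 5-10x slower.
var frozen = source.ToFrozenDictionary(StringComparer.OrdinalIgnoreCase);

Console.WriteLine(frozen["ALPHA"]);              // 1 (case-insensitive hit)
Console.WriteLine(frozen.ContainsKey("gamma"));  // False (fast miss)
```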
My Logitech Brio stopped working after I upgraded from Monterey to Ventura. It’s always been connected to an OWC dock, along with a bunch of other peripherals. Maybe I can save you 15-20 minutes by sharing what I did:
And that was it. You can use Photo Booth to test at steps 3 and 4 to make sure it’s working along the way. I also rebooted, and everything stayed fixed after the restart.
Because I can never remember them.
This is an honest-to-goodness note to my future self.
Definition: Making non-deductible contributions to a traditional IRA, then converting to Roth IRA.
Mechanics: Contribute up to $7,000 ($8,000 if 50+) to a traditional IRA without taking the tax deduction. Then convert this money to a Roth IRA. No additional taxes owed since the contribution was already made with after-tax dollars.
When it can be done: Any time, but primarily useful when your income exceeds direct Roth IRA contribution limits (the 2025 phase-out range is $150k-$165k for single filers).
Notes: Works best with empty traditional IRA accounts to avoid pro-rata rule complications.
Definition: Converting after-tax 401k contributions to Roth IRA money.
Mechanics: Make after-tax contributions to your 401k beyond the pre-tax/Roth limits. Then either convert directly to Roth IRA (if plan allows) or roll to a rollover IRA first, then convert to Roth IRA. No additional taxes since money was already taxed.
When it can be done: Whenever your 401k plan allows after-tax contributions and in-service distributions or conversions.
Notes: Total 401k contributions (pre-tax + Roth + after-tax + employer) are limited to $70,000 for 2025 ($77,500 if 50+). Not all 401k plans offer this option.
Like many developers, I have collected a bunch of useful methods over time. Most of the time, these methods don’t have unit tests, nor do they have performance tests. Many of them have origins at StackOverflow — which uses the MIT license — and many of them don’t.
I started collecting them formally about two years ago. Recently I decided to actually turn them into something I could consume via nuget, because I was getting fed up with copying and pasting code everywhere.
Haystack targets .NET Standard 1.3, which means it works with:

- .NET Framework 4.6 and later
- .NET Core 1.0 and later
- Mono, Xamarin, and UWP targets that support .NET Standard 1.3
Constant-time string comparison matters in cryptography for various reasons. Fast string comparisons can leak timing information, so we want to exhaustively check all the bytes in the string even when we know early on that the strings aren’t equal.
```csharp
const string here = "Here";
const string there = "There";
var areSame = here.ConstantTimeEquals(there); // false
```
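The general technique looks something like this — a sketch of the usual XOR-accumulate approach, not necessarily Haystack’s exact implementation:

```csharp
using System;

// Compare every character regardless of where the first mismatch is,
// so execution time depends only on length, not content. (The early
// length check is standard practice, since length is usually not secret.)
static bool ConstantTimeEqualsSketch(string a, string b)
{
    if (a is null || b is null || a.Length != b.Length)
        return false;

    var diff = 0;
    for (var i = 0; i < a.Length; i++)
        diff |= a[i] ^ b[i];

    return diff == 0;
}
```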
It’s useful to be able to remove substrings from the beginning and/or end of a string, with or without a StringComparer overload.
```csharp
const string trim = "Hello world";
const string hello = "Hello worldThis is a hello worldHello world";

var trimFront = hello.TrimStart(trim); // "This is a hello worldHello world"
var trimEnd = hello.TrimEnd(trim);     // "Hello worldThis is a hello world"
var trimBoth = hello.Trim(trim);       // "This is a hello world"
```
The library is growing bit-by-bit, and contributions are welcome!
I switched to Firefox recently, which has a “Recommended by Pocket” section on the New Tab page. As I expected, many of the recommendations are productivity fetish articles from Lifehacker and similar rubbish. Their job is not to make you more productive–whatever that means–it’s to keep you reading.
Instead:
If you follow these guidelines, you’ll be happier, less stressed, and deliver more value.
I’ve run ical.net, an RFC-5545 (icalendar) library for .NET since ~May 2016. It’s basically the only game in town if you need to do anything with icalendar-formatted data. (Those .ics files you get as email attachments are icalendar data.)
A lot of these fall into the “pretty obvious” category of observations.
If nothing else, it serves as a historical reference for your own benefit. It also helps your users understand whether it’s worth upgrading. And when your coworkers ask if a version jump is important weeks after you’ve published it, you can point them to the release notes for that version, and they’ll never ask you again.
One of the best things I did when I first figured out how to make a nuget package was push as much into my nuspec file as I could. Everything I learned about various do’s and don’ts was pushed into the code in the moment I learned it.
Not everything in ical.net is automated, and I think I’m OK with that for now. For example, a merge doesn’t trigger a new nuget package version. I think that’s probably a feature rather than a bug.
I suspect I’ll reach a second tipping point where
Scott Hanselman has the right of this:
Keep your emails to 3-4 sentences, Hanselman says. Anything longer should be on a blog or wiki or on your product’s documentation, FAQ or knowledge base. “Anywhere in the world except email because email is where your keystrokes go to die,” he says.
That means I reply to a lot of emails with a variation of “Please ask this on StackOverflow so I can answer it in public.” And many of those answers are tailored to the question, and then I include a link to a wiki page that answers a more general form of the question. Public redundancy is okay.
Accrete your public documentation.
When I took over dday.ical, there were about 70 (out of maybe 250) unit tests that were failing. There was so much noise that it was impossible to know anything about the state of the code. My primary aim was to improve performance for some production issues that we were having, but I couldn’t safely do that without resolving the crazy number of broken unit tests.
The first thing I did was evaluate each and every broken test, and decide what to do. Having a real, safe baseline was imperative, because you never want to introduce a regression that could have been caught.
The corollary to this is that sometimes your unit tests assert the wrong things. So a bugfix in one place may expose a bad assertion in a unit test elsewhere. That happened quite a lot, especially early on.
(So long as your unit tests are passing.)
Pinning down what “smaller” means is difficult. Lines of code may be a rough proxy, but I think I mean smaller in the sense of “high semantic density” + “low cognitive load”.
I think a preference for semantic density is a taste that develops over time.
Given a version number MAJOR.MINOR.PATCH, increment the:

1. MAJOR version when you make incompatible API changes,
2. MINOR version when you add functionality in a backwards-compatible manner, and
3. PATCH version when you make backwards-compatible bug fixes.
This seems like common sense advice, but by imposing some modest constraints, it frees you from thinking about certain classes of problems:
And by holding my own feet to the fire, and following my own rules, I’m a better developer.
.NET Core is an exciting development. I would LOVE for ical.net to have a .NET Core version, and I’ve made some strides in that direction. But the .NET Core tooling is still beta, the progress in VS 2017 RC notwithstanding. I spent some time trying to get a version working–and I did–but I couldn’t see any easy way to automate the compilation of a .NET Core nuget package alongside the normal framework versions without hating my life.
So I abandoned it.
When the tooling is out of beta, I expect maintaining a Core version will be easier and Core adoption will be higher, both of which improve the ROI with respect to development effort.
Automation, comprehensive unit test coverage with a mandatory-100% pass rate, lower cognitive load, higher semantic density, etc. All these things help you go faster with a high degree of confidence later on.
And if you’re not okay with that, then being a maintainer might not be a good fit for you.
All of these things are common when you run an open source project that has traction. Ask anyone.