All posts by Rian

LexisMed has a new home

For years, LexisMed has lived here on this website. Well, as of mid-January, that’s no longer the case. I’ve split it off into two–soon to be three–distinct products:

  • LexisMed Speller: an inexpensive, installable set of dictionaries that work with Chrome, Firefox, Office, and ClaroReader. Also available for institutions with real deployment tooling.
  • LexisMed Lexicon: raw datasets that can be used by software developers building applications that require a rich medical terminology database.
  • LexisMed API: a client-side set of software development components that provide spell-checking capabilities in applications. These haven’t been released yet, but they’re in progress. The first version will focus on .NET applications.

Along the way, I bumped the word count up to about 810,000 words.

9 observations on 6 months of running a moderately-successful open source project

I’ve run Ical.Net, an RFC 5545 (iCalendar) library for .NET, since ~May 2016. It’s basically the only game in town if you need to do anything with iCalendar-formatted data. (Those .ics files you get as email attachments are iCalendar data.)

A lot of these fall into the “pretty obvious” category of observations.

1) Release notes matter

If nothing else, they serve as a historical reference for your own benefit. They also help your users understand whether it’s worth upgrading. And when your coworkers ask if a version jump is important weeks after you’ve published it, you can point them to the release notes for that version, and they’ll never ask you again.

2) Automation is important

One of the best things I did when I first figured out how to make a nuget package was push as much into my nuspec file as I could. Everything I learned about various do’s and don’ts was pushed into the code the moment I learned it.

Not everything in Ical.Net is automated, and I think I’m OK with that for now. For example, a merge doesn’t trigger a new nuget package version. I think that’s probably a feature rather than a bug.

I suspect I’ll eventually reach a second tipping point where more automation becomes worth the effort.

3) Document in public

Scott Hanselman has the right of this:

Keep your emails to 3-4 sentences, Hanselman says. Anything longer should be on a blog or wiki or on your product’s documentation, FAQ or knowledge base. “Anywhere in the world except email because email is where your keystrokes go to die,” he says.

That means I reply to a lot of emails with a variation of “Please ask this on StackOverflow so I can answer it in public.” And many of those answers are tailored to the question, and then I include a link to a wiki page that answers a more general form of the question. Public redundancy is okay.

Accrete your public documentation.

4) Broken unit tests should be fixed or (possibly) deleted

When I took over dday.ical, there were about 70 (out of maybe 250) unit tests that were failing. There was so much noise that it was impossible to know anything about the state of the code. My primary aim was to improve performance for some production issues that we were having, but I couldn’t safely do that without resolving the crazy number of broken unit tests.

The first thing I did was evaluate each and every broken test, and decide what to do. Having a real, safe baseline was imperative, because you never want to introduce a regression that could have been caught.

The corollary to this is that sometimes your unit tests assert the wrong things. So a bugfix in one place may expose a bad assertion in a unit test elsewhere. That happened quite a lot, especially early on.

5) Making code smaller is always the right thing to do

(So long as your unit tests are passing.)

Pinning down what “smaller” means is difficult. Lines of code may be a rough proxy, but I think I mean smaller in the sense of “high semantic density” + “low cognitive load”.

  • Reducing cognitive load can be achieved by simple things like reducing the number of superfluous types; eliminating unnecessary layers of indirection; having descriptive variable and method names; and having a preference for short, pure methods.
  • Semantic density can be increased by moving to a more declarative style of programming. Loops take up a lot of space and aren’t terribly powerful compared to their functional analogs: map, filter, fold, etc. (I personally find that I write more bugs when writing imperative code. YMMV.) You won’t find many loops in Ical.Net, but you will find a lot of LINQ.
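As an illustration of the trade-off (a hypothetical example, not code from Ical.Net itself):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Occurrences
{
    // Imperative version: accumulate and sort by hand.
    public static List<DateTime> UpcomingImperative(IEnumerable<DateTime> occurrences, DateTime after)
    {
        var results = new List<DateTime>();
        foreach (var occurrence in occurrences)
        {
            if (occurrence > after)
            {
                results.Add(occurrence);
            }
        }
        results.Sort();
        return results;
    }

    // Declarative version: the same intent as a single expression.
    public static List<DateTime> UpcomingDeclarative(IEnumerable<DateTime> occurrences, DateTime after)
        => occurrences.Where(o => o > after).OrderBy(o => o).ToList();
}
```

The declarative version states what is kept and in what order, and leaves the iteration mechanics to LINQ.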

I think a preference for semantic density is a taste that develops over time.

6) Semantic versioning is the bee’s knees

In a nutshell:

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

This seems like common sense advice, but by imposing some modest constraints, it frees you from thinking about certain classes of problems:

  • It’s concrete guidance to contributors as to why their pull requests are or are not acceptable, namely: breaking changes are a no-no
  • Maintaining a stable API is a good way to inspire confidence in consumers of your library

And by holding my own feet to the fire, and following my own rules, I’m a better developer.
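The bump rules above can be sketched as a tiny helper:

```csharp
using System;

static class SemVer
{
    public enum Change { Breaking, Feature, Fix }

    // Given "MAJOR.MINOR.PATCH" and the kind of change, compute the next version.
    public static string Bump(string version, Change change)
    {
        var parts = version.Split('.');
        int major = int.Parse(parts[0]);
        int minor = int.Parse(parts[1]);
        int patch = int.Parse(parts[2]);

        switch (change)
        {
            case Change.Breaking: return $"{major + 1}.0.0";        // incompatible API change
            case Change.Feature:  return $"{major}.{minor + 1}.0";  // backwards-compatible functionality
            default:              return $"{major}.{minor}.{patch + 1}"; // backwards-compatible bug fix
        }
    }
}
```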

7) People will want bleeding-edge features, but delivering them might not be the highest-impact thing you can do

.NET Core is an exciting development. I would LOVE for Ical.Net to have a .NET Core version, and I’ve made some strides in that direction. But the .NET Core tooling is still beta, the progress in VS 2017 RC notwithstanding. I spent some time trying to get a version working–and I did–but I couldn’t see any easy way to automate the compilation of a .NET Core nuget package alongside the normal framework versions without hating my life.

So I abandoned it.

When the tooling is out of beta, I expect maintaining a Core version will be easier and Core adoption will be higher, both of which improve the ROI with respect to development effort.

8) It’s all cumulative

Automation, comprehensive unit test coverage with a mandatory-100% pass rate, lower cognitive load, higher semantic density, etc. All these things help you go faster with a high degree of confidence later on.

9) People are bad at asking questions and opening tickets

And if you’re not okay with that, then being a maintainer might not be a good fit for you.

  • No, I really can’t make sense of your 17,000-line Google Calendar attachment, sorry.
  • No, I won’t debug your application for you just because it uses Ical.Net on 2 lines of your 100+ line method, sorry.
  • No, I’m not going to drop everything to help you, no matter how many emails you send me in a 10 minute time interval, sorry.

All of these things are common when you run an open source project that has traction. Ask anyone.

A self-contained, roll-forward schema updater

I use Dapper for most of my database interactions. I like it because it’s simple, and does exactly one thing: runs SQL queries, and returns the typed results.

I also like to deploy my schema changes as part of my application itself instead of doing it as a separate data deployment. On application startup, the scripts are loaded and executed in lexical order one by one, where each schema change is idempotent in isolation.

The problem you run into is making destructive changes to schema, which is a reasonable thing to want to do. If script 003 creates a UNIQUEIDENTIFIER column, and you want to convert that column to NVARCHAR in script 008, you have to go back and do some reconciliation between column types. Adding indexes into the mix makes it even hairier. Scripts that are idempotent in isolation are easy to write. Maintaining a series of scripts that can be safely applied in order from beginning to end every time an application starts up is not.

Unless you keep track of which schema alterations have already been applied, and only apply the changes that the application hasn’t seen before. Here’s a short, self-contained implementation:
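A minimal sketch of the idea, assuming SQL Server and plain ADO.NET (the `SchemaChanges` table and script names here are illustrative, not necessarily the original’s):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Linq;

static class SchemaUpdater
{
    // Pure core: given the scripts already applied and all known scripts,
    // return the ones still pending, in lexical order.
    public static List<string> PendingScripts(IEnumerable<string> applied, IEnumerable<string> allScripts)
    {
        var seen = new HashSet<string>(applied, StringComparer.OrdinalIgnoreCase);
        return allScripts
            .Where(name => !seen.Contains(name))
            .OrderBy(name => name, StringComparer.OrdinalIgnoreCase)
            .ToList();
    }

    // Glue: record each applied script so it is never run twice.
    public static void Apply(IDbConnection conn, IDictionary<string, string> scriptsByName)
    {
        Exec(conn, @"IF OBJECT_ID('SchemaChanges') IS NULL
                     CREATE TABLE SchemaChanges (
                         ScriptName NVARCHAR(255) NOT NULL PRIMARY KEY,
                         AppliedUtc DATETIME2 NOT NULL)");

        var applied = new List<string>();
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = "SELECT ScriptName FROM SchemaChanges";
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read()) applied.Add(reader.GetString(0));
            }
        }

        foreach (var name in PendingScripts(applied, scriptsByName.Keys))
        {
            Exec(conn, scriptsByName[name]);
            Exec(conn, "INSERT INTO SchemaChanges (ScriptName, AppliedUtc) " +
                       $"VALUES ('{name.Replace("'", "''")}', SYSUTCDATETIME())");
        }
    }

    static void Exec(IDbConnection conn, string sql)
    {
        using (var cmd = conn.CreateCommand())
        {
            cmd.CommandText = sql;
            cmd.ExecuteNonQuery();
        }
    }
}
```

Each script still needs to be idempotent in isolation, but only the scripts the database hasn’t seen before are ever run.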

Proposed functionality and API changes for v3

Downloading remote resources

When I ported Ical.Net to .NET Core, I removed the ability to download remote payloads from a URI. I did this for many reasons:

  • There are myriad ways of accessing an HTTP resource. There are myriad ways of doing authentication. Consumers of Ical.Net are in a position to know the details of their environment, including security concerns, so responsibility for these concerns should lie with the developers using the library.
  • Choosing to support HttpClient leaves .NET 4.0 users out in the cold. Choosing to support WebClient brings those people into the fold, but leaves .NET Core and WinRT users out. It also prevents developers working with newer versions of .NET from benefiting from HttpClient.
  • Non-blocking IO leaves developers working with WinForms and framework versions < 4.5 out in the cold. Bringing those developers back into the fold means we can’t make use of async Tasks. Given the popularity of microservices and Ical.Net’s origins on the server side, this is a non-starter.

We can’t satisfy all use cases if we try to do everything, so instead I’ve decided that we’ll leave over-the-wire tasks to the developers using Ical.Net.

The primacy of strings

To that end… strings will be the primary way to work with Ical.Net. A developer should be able to instantiate everything from a huge collection of calendars down to a single calendar component (a VEVENT, for example) by passing it a string that represents that thing. In modern C#, working directly with strings is more natural than passing Streams around, which is emblematic of old-school Java. Streams are also more error prone: I fixed several memory leaks during the .NET Core port due to undisposed Streams.

  • The constructor will be the deserializer. It is reasonable for the constructor to deserialize the textual representation into the typed representation.
  • ToString() will be the serializer. It is reasonable for ToString() to serialize the typed representation into the textual representation.
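A minimal sketch of that shape (the type here is hypothetical and only handles a single property, not real VEVENT parsing):

```csharp
using System;

// Hypothetical v3-style component: the constructor parses,
// ToString() serializes, and nothing in between is publicly mutable.
public sealed class VEventSketch
{
    public string Summary { get; }

    public VEventSketch(string serialized)
    {
        // Toy parser: a real implementation would handle full VEVENT syntax.
        foreach (var line in serialized.Split('\n'))
        {
            if (line.StartsWith("SUMMARY:"))
            {
                Summary = line.Substring("SUMMARY:".Length).Trim();
            }
        }
    }

    public override string ToString()
        => $"BEGIN:VEVENT\nSUMMARY:{Summary}\nEND:VEVENT\n";
}
```

The textual and typed representations round-trip through the constructor and ToString(), with no mutable state exposed in between.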

Constructors as deserializers buys us…

Immutable types and (maybe) a fluid API

One of the challenges I faced when refactoring for performance was reasoning about mutable properties during serialization and deserialization. Today, deserialization makes extensive use of public, mutable properties. In fact, the documentation reflects this mutability:

To be completely honest, this state of affairs makes it quite difficult to make internal changes without breaking stuff. Many properties would naturally be getter-only, because they can be derived from simple internals, like Duration above. Yet they’re explicitly set during deserialization. This is an incredible vector for bugs and breaking changes. (Ask me how I know…)

If we close these doors and windows, it will increase our internal maneuverability.

Fluid API

Look at the code above. Couldn’t it be more elegant? Shouldn’t it be? I don’t yet have a fully-formed idea of what a more fluid API might look like. Suggestions welcome.

Component names


The .NET framework guidelines recommend prefixing interface names with “I”. The calendar spec is called “iCalendar”, as in “internet calendar”, which is an unfortunate coincidence. Naming conventions like IICalendarCollection offend my sense of aesthetics, so I renamed some objects when I forked from dday. I’ve come around to valuing consistency over aesthetics, so I may go back to the double-I where it makes sense to do so.


The object that represents “a DateTime with a time zone” is called a CalDateTime. I’m not wild about this; we already have the .NET DateTime struct which has its own shortcomings that’ve been exhaustively documented elsewhere. A reasonable replacement for CalDateTime might be a DateTimeOffset with a string representation of an IANA, BCL, or Serialization time zone, with the time zone conversions delegated to NodaTime for computing recurrences. (In fact, NodaTime is already doing the heavy lifting behind the scenes for performance reasons, but the implementation isn’t pretty because of CalDateTime’s mutability. Were it immutable, it would have been a straightforward engine replacement.)

CalDateTime is the lynchpin for most of the library. Most of its public properties should be simple expression bodies. Saner serialization and deserialization will have to come first as outlined above.

Divergence from spec completeness and adherence


The iCalendar spec has ways of representing time change rules with VTIMEZONE. In the old days, dday.ical used this information to figure out Standard Time/Summer Time transitions. But as the spec itself notes:

Note: The specification of a global time zone registry is not addressed by this document and is left for future study. However, implementers may find the Olson time zone database [TZ] a useful reference. It is an informal, public-domain collection of time zone information, which is currently being maintained by volunteer Internet participants, and is used in several operating systems. This database contains current and historical time zone information for a wide variety of locations around the globe; it provides a time zone identifier for every unique time zone rule set in actual use since 1970, with historical data going back to the introduction of standard time.

At this point in time, the IANA (née Olson) tz database is the best source of truth. Relying on clients to specify reasonable time zone and time change behavior is unrealistic. I hope the spec authors revisit the VTIMEZONE element, and instead have it specify a standard time zone string, preferably IANA.

To that end… Ical.Net will continue to preserve VTIMEZONE fields, but it will not use them for recurrence computations or understanding Summer/Winter time changes. It will continue to rely on NodaTime for that.


As mentioned above, Ical.Net will no longer include functionality to download resources from URIs. It will continue to preserve these fields so clients can do what they wish with the information they contain. This isn’t a divergence from the spec, per se, which doesn’t state that clients should provide facilities to download resources.

dday.ical is now Ical.Net, available under the MIT license, with many performance enhancements

A few months ago, I needed to do some calendar programming for work, and I came across the dday.ical library, like many developers before me. And like many developers, I discovered that dday.ical doesn’t have the best performance, particularly under heavy server loads.

I dug in, and started making changes to the source code, and that’s when I discovered that the licensing was ambiguous, and that it had been abandoned. I was concerned that I might be exposing my company to risk due to unclear copyright, and a non-standard license.

With some effort, I was able to track down Doug Day (dday), and he gave me permission to fork, rename (Ical.Net), and relicense his library (MIT), which I have done. So I’m happy to report…

dday.ical is now Ical.Net

mdavid, who saw to it that the library wasn’t lost to the dustbin of Internet history, has graciously redirected dday users to Ical.Net. Khalid Abuhakmeh, who published the dday nuget package that you might be using (you should switch ASAP), has also agreed to archive it and redirect users to Ical.Net.

So… why should you use the new package?

Unambiguous licensing

Doug has revoked his copyright, and given unrestricted permission to give dday.ical new life as Ical.Net. That means Ical.Net is unencumbered by legal ambiguities.

Many performance enhancements

My changes to Ical.Net have been mostly performance-focused. I was lucky in that dday.ical has always included a robust test suite with about 170 unit tests that exercise all the features of the library. Some were broken, or referenced non-existent ics files, so I nuked those right away, and concentrated on the set of tests that were working as a baseline for making safe changes.

The numbers:

  • Old dday.ical test suite: ~17 seconds
  • Latest nuget package: 3.5 seconds

There are no games here. Ical.Net really is that much faster.

Profiling showed a few hotspots, which I attacked first, but those only bought me maybe 3-4 seconds of improvement. There was no single thing that resulted in huge performance gains. Rather, it was many, many small changes that contributed, quite often by reducing garbage collection pauses, many of which were 5ms+, which is an eternity in computing time.

Here are a few themes that stand out in my memory:

  • Routing all time zone conversions through NodaTime, which actually exposed some bugs in what the unit tests were asserting
  • Converting .NET 1.1 collections (Hashtable, ArrayList) to modern, generic equivalents
  • Converting List<T> to HashSet<T> for many collections, including creating stable, minimal GetHashCode() methods, though more attention is still needed in this area. A nice side effect of this was that a lot of lookups and collection operations then became set operations (ExceptWith(), UnionWith(), etc.)
  • Converting several O(n^2) methods to O(n) or better by restructuring methods based on information that was available in context
  • Converting a lot of loops to LINQ. (Yes, really!)
  • Specifying initial collection sizes when using array-backed collections like List<T> and Dictionary<TKey, TValue>
  • Moving variables closer to their usage, which sometimes meant that certain expensive calls don’t occur at all, because the method exits before reaching them. This also had the effect of pushing some variables into gen 0 garbage collection. (Anecdotally, I have noticed GC pauses are fewer and further between, though I don’t have any hard data that it’s actually significant.)
  • Moving expensive calls outside of tight loops. Unfortunately, the library makes extensive use of the service-provider antipattern. A common thing was to have an expensive call (get me a deserializer for Foo!) inside a tight loop that’s only ever deserializing Foos. You can make the call once and reuse the deserializer.
  • Implementing a lazy caching layer, as suggested in one of the TODOs in the comments.
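The “expensive calls inside tight loops” theme can be illustrated with a toy example (the counting factory is a hypothetical stand-in for the library’s service-provider lookups):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for an expensive service-provider lookup.
class CountingFactory
{
    public int Lookups { get; private set; }
    public Func<string, string> GetDeserializer() { Lookups++; return s => s.ToUpperInvariant(); }
}

static class Hoisting
{
    // Before: the (expensive) factory call happens once per iteration.
    public static List<string> PerItemLookup(CountingFactory factory, IEnumerable<string> lines)
    {
        var results = new List<string>();
        foreach (var line in lines)
        {
            var deserialize = factory.GetDeserializer(); // invariant, yet inside the loop
            results.Add(deserialize(line));
        }
        return results;
    }

    // After: the invariant call is hoisted above the loop and reused.
    public static List<string> HoistedLookup(CountingFactory factory, IEnumerable<string> lines)
    {
        var deserialize = factory.GetDeserializer(); // once, up front
        var results = new List<string>();
        foreach (var line in lines)
        {
            results.Add(deserialize(line));
        }
        return results;
    }
}
```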

Along the way, I converted a lot of code to modern, idiomatic C#, which actually helped performance as much as any of the discrete things I did above. As I work towards a .NET Core port, I have the runtime down to about 2.8 seconds just through clarifying and restructuring existing code, and idiomatic simplifications.

What’s next?

  • A .NET Core port is nearly complete.
  • Ical.Net has virtually no documentation. I hope to improve the readme with some simple examples this morning/afternoon.
  • I have been bug collecting on Stack Overflow, and have a few maybe-bugs to investigate and/or write test cases for.
  • Maybe some API changes for v3, still TBD. I’ll discuss these in a future blog post.

Line wrapping at word boundaries for console applications in C#

I didn’t like any of the solutions floating around the web for displaying blocks of text wrapped at word boundaries, so I wrote one.


This is a really long line of text that shoul
dn't be wrapped mid-word, but is

Becomes this:

This is a really long line of text that
shouldn't be wrapped mid-word, but is

Here it is:
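A sketch of one way to write it (greedy wrapping at whitespace; this reproduces the before/after behavior shown above, though it isn’t necessarily the original snippet):

```csharp
using System;
using System.Collections.Generic;
using System.Text;

static class WordWrapper
{
    // Greedily packs words onto lines no longer than maxWidth, breaking only at
    // whitespace. A word longer than maxWidth gets a line of its own rather
    // than being split mid-word.
    public static IEnumerable<string> Wrap(string text, int maxWidth)
    {
        var words = text.Split((char[])null, StringSplitOptions.RemoveEmptyEntries);
        var line = new StringBuilder();
        foreach (var word in words)
        {
            // Would adding this word (plus a space) overflow the line? Flush it.
            if (line.Length > 0 && line.Length + 1 + word.Length > maxWidth)
            {
                yield return line.ToString();
                line.Clear();
            }
            if (line.Length > 0)
            {
                line.Append(' ');
            }
            line.Append(word);
        }
        if (line.Length > 0)
        {
            yield return line.ToString();
        }
    }
}
```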

Will Amazon provide pharmacy services that take market share from Walgreens and CVS in the future?

I wrote an answer to this question on Quora back in 2013.

Will Amazon provide pharmacy services that take market share from Walgreens and CVS in the future?

In other words, will Amazon be able to take market share from those companies that fill your prescriptions?

It’s unlikely, because it’s outside Amazon’s core strengths (warehouse automation and computing infrastructure), and there’s little benefit (for consumers or Amazon) in having Amazon fill your Rx’s.

A couple of things about filling a prescription that might help you understand the problem a little better:

  • Every prescription must be checked by a pharmacist before it is dispensed to a patient to ensure correctness. Therefore Amazon would have to hire a lot of pharmacists.
  • Copayments are the same at every pharmacy, assuming the pharmacy takes your insurance and the cost of the medication is greater than your copayment in the traditional $10/$25/$50-type Rx copayment structure. The exception is when your pharmacy benefits manager (PBM) decides to offer you 3 months for the price of one (or two) if you do your Rx by mail. Sometimes retail chains will match this–but not often–and they’re effectively eating the loss when they do.
  • Not all prescriptions are recurring. You’re not going to get your antibiotic or painkiller filled at Amazon, because you need it now. These immediate prescriptions are 40-50% of pharmacy volume… this is enough volume to sustain neighborhood pharmacies well into the future.

The fact of the matter is, your PBM probably already offers the benefit of prescriptions by mail, and they do it cheaper than Amazon could.

Dispensing medications doesn’t scale well, and the people who have licenses to do it are expensive. When a pharmacist does QA on a prescription they’re checking a couple of things:

  • Does the drug match the prescription?
  • Are the instructions clear? Do they make sense?
  • Is the medication contraindicated with any of the other drugs the patient is taking?
  • Is the medication contraindicated with any of the medical conditions the person has?
  • Does it make sense from an age/weight/gender perspective?
  • Does the prescription itself make sense? (You’d be shocked at the percentage of prescriptions that have to be changed, which necessitates a call to the prescriber to correct whatever the problem is. IOW, it’s very labor-intensive.)

Electronic prescribing is a panacea for exactly two things:

  • It solves the bad handwriting problem
  • Drugs match the dosages they come in (i.e., you won’t see an Rx for Celebrex 15mg, because no such thing exists)

It does not solve:

  • The idiotic directions problem
  • The nonsensical quantity problem
  • The wrong drug selection problem
  • The wrong dosage problem
  • Any number of sanity-problem permutations (which are alarmingly common)

Essentially, you have to solve the GIGO problem in a very, very reliable way in order to automate the practice of retail pharmacy. Most medication errors are prescriber errors, not dispensing errors. Error checking in health care is very hard to automate, because there are always exceptions to the rule, and you always need to be able to override normal parameters to account for it.

How I negotiated with Sallie Mae/Navient to save $115K on my student loans

March 22, 2015

This article is for people who have defaulted on their Sallie Mae/Navient student loans. If you haven’t defaulted, or if you’re paying traditional subsidized or unsubsidized federal loans, this won’t work for you. For those of you that ARE in this position, this post is for you. You can get your life back.

I’m sharing all of my actual numbers, because it makes the conversation more useful.

Managing the chaos

Like many people, I was unemployed in 2009-2010. I had the bad fortune of graduating in the middle of the recession, and had quite a bit of difficulty finding a “big kid” job, i.e. one that would let me pay my bills–including my student loans. Also like many people who are struggling with debt they can’t pay, I was plagued by phone calls, and they were universally unproductive, because stones don’t have much blood to give. The first step to getting your feet under you is to create mental space, and the biggest thing is to stop the unwanted calls.

In addition to sending letters, I did this:

  1. Get a Google Voice number
  2. Log into your delinquent accounts, and use the Google Voice number as your only phone number
  3. Don’t answer numbers you don’t recognize
  4. Take down each collection agency’s contact information (phone number, debt they’re collecting on, etc.) when they leave you a voicemail
  5. Block each caller one by one

This builds a strategic rolodex for tackling your debts when you’ve got your feet under you. If getting back on your feet takes a while–it took me 2 years–you’ll notice that debt gets resold fairly often, and as it gets resold, the settlement offers get better and better. This is particularly true for unsecured, consumer debt, and less true with student debt.

Negotiating with Sallie Mae/Navient and FMS

Sallie Mae stops trying to collect debts themselves fairly quickly, and they tend to outsource this to other agencies. Unlike consumer debt, Sallie Mae does not sell the debt to the servicing organization. Instead they retain ownership of the debt, as well as the terms and conditions under which that debt may be settled. (In fact, if you try to call Sallie Mae directly, you will be redirected to the servicing agency without ever having talked to a human being.) The debt collector is just a proxy, but they’re the ones you’ll be dealing with.

My debt was serviced by an organization called FMS. You can Google them; there are many horror stories, but my experience was pretty good, barring a few incidents. I had settled a couple of smaller credit card debts to this point, so I made sure to unblock their phone number only when I had a small lump of money available to make a down payment. I knew I wasn’t going to be able to discuss a full settlement, but maybe I could do something to move the needle in the right direction. This ended up being a good move, though the benefits weren’t obvious until much later.

Default settlements

I’m going to use the term “default settlement” below. I don’t know for sure, but I believe that Sallie Mae’s proxies are authorized to offer some percentage (65-70% or so) as a settlement amount without phoning the Sallie Mae mothership. The reason I believe this is that they would periodically offer me settlements on the spot, which didn’t require them to phone home. This was in contrast to my counteroffers, which required a ~24-48 hour turnaround while they talked to someone with more authority.

The reduced-interest plan

June 2011 balance: $144,586.

I brought my account up to date on July 25, 2011 with a $1,493.38 payment, and set up a recurring payment every two weeks for $372.56. This was their “reduced interest plan”, where the interest rate dropped to 0.01%. There was no discussion of a settlement at this point that I can recall. If there had been, it would have been WAY more money than I had, so it didn’t matter.

I made bi-weekly payments from July 2011 to May 2012.

The first settlement offer: the first $80K

In May 2012, I got a phone call from FMS to re-up my recurring payments. (They can only schedule 12 at a time.) At this time, the rep I had been dealing with all along offered me a settlement that was still too large for me to take advantage of in one shot. I told her as much, and if I recall correctly, she conferred with her manager and the Sallie Mae mothership, and they made me a counter-offer: an $80,000 reduction if I:

  1. Made a $7000 down payment by the end of the month
  2. Paid $800/month for 45 months
  3. At the 0.01% interest rate

This dropped the loan term from 155 months to 45 months, a 9+ year reduction. BUT, if I broke the terms, the full balance came back at the original interest rate, minus whatever I’d paid. I went for it, because saving $80,000 and 9 years was too good to leave on the table.

  • Settlement starting balance: $45,375
  • Made the $7000 down-payment (with my dad’s help) in May 2012, which
  • Reduced the amount left to pay to $36,375 (or so I thought, more on that below)

I set up a $400 recurring payment every 2 weeks–including months with 3 payment dates–to ensure I’d make the deadline with some headroom.

A bump in the road

Unfortunately, FMS wouldn’t send me paperwork stating the terms of the settlement, which (as I suspected) came back to bite me. I also hadn’t recorded our phone conversations, because until this point, there was no reason to think that I would need to.

December 2013 rolled around, and I received a phone call telling me that I was almost out of time, and that I owed like $45,442 by ~February 2014, which didn’t sound right. Unfortunately, I was dealing with a new representative, and she couldn’t decipher the notes of the previous representative. It was my notes against theirs… and when you’re in this position, the other party holds all the cards; you’re just along for the ride, hoping they don’t fuck anything up too badly. (That said, I’m very confident that my notes were more accurate. Not that it mattered then, and I can’t imagine it would have mattered in a courtroom.)

There was about a week of back-and-forth; the takeaway was that I owed the $45.4K, but the terms were extended until September 20, 2018. That was a big relief–there was no way my pre-wife and I could have come up with the money in that time.

I made sure to record that conversation should things go awry again. Check the laws in your state… my state is a two-party state which means that I needed the rep’s permission to record the conversation.

The final $20K

Because FMS can’t schedule more than 12 payments at a time, I end up talking to them about once a year. While re-upping my payments for this year, the rep mentioned that for whatever reason, Sallie Mae was accepting settlements “for pennies on the dollar this month”. That’s just a figure of speech, so I didn’t know if that was literally pennies or what, but she asked if I was interested in seeing if they would re-negotiate the settlement, because I’d basically paid $35K already, and was a model citizen. Of course I said yes, and they offered me their default settlement of $24K on the $35K owed on the spot, which is 68 cents on the dollar. I told them I couldn’t do more than $10K–a true statement–fully expecting a counteroffer for somewhere between $10-20K, whereupon we’d have to borrow some money from my wife’s parents. They said they’d have to call SLMA to see if they’d approve it.

The next morning, I got a call back: Sallie Mae had approved the $10K for the remaining $35K. The rep was shocked. The manager was shocked. They told me no one in the office had thought it would go through, which I believe. I get the feeling I’m going to be an office legend for the foreseeable future.


  • $144,586 original balance
  • $45,898 paid over 3.25 years
  • $98,688 saved
  • 68% discount (or 32 cents on the dollar) when all was said and done

FMS payments

Here’s a Google spreadsheet that shows all the debits over that time. Alternatively, you can download the Excel version.

FMS payments 2011 to 2015

Total student loans paid during this time

I have more traditional subsidized and unsubsidized student loans that actually had interest rates, so I focused on overpaying those during this time.

All student loans between 2011 and 2015

Conclusions and tax implications

Once you wrap up your settlement, you’ll have taxes to pay. In my case, my income tax burden for 2015 is now my salary + $98,600, which is… a lot. Depending on where you are financially, you may be able to reduce the canceled-debt “income” by whatever your net worth is, if it’s negative, by filing a Form 982. To determine if this is available to you, you can fill out the worksheet on page 8 of this IRS form. If the sum you come up with is negative, you can subtract that amount from your paper “income”. (I suggest you talk to an accountant if this applies to you, though.)

Other options include maxing out your pre-tax retirement contributions (401k/403b), and/or using your FSA plan to do something expensive like getting the LASIK you always wanted. Unfortunately, the latter requires knowing ahead of time that you’ll be settling during that particular FSA year.

Otherwise you’ll want to adjust your tax withholding, because you’ll pay an underpayment penalty in addition to the tax on this “income” if you don’t pay enough tax throughout the year.

So I settled on a settlement, saving my wife and me about $100,000 and ten years. This will let us buy a house and start a family years earlier than we had thought we’d be able to. I think my situation may be unusual, but I don’t believe for a moment that I am a beautiful and unique snowflake. Three and a half years ago, my Sallie Mae situation seemed hopeless, and now… it’s over. It took a lot of hard work, and an unwavering focus to get here, but it can be done.

If I can do it, so can others.

Addendum – June 1, 2015

I wrote this article back on March 22 — just over two months ago. I had expected to publish it much earlier, when I got the statement that our business was concluded. During this period, a few things happened:

  • I never received the paperwork stating that I had fulfilled my side of the deal
  • Sallie Mae/Navient and FMS parted ways as business partners, which made it harder for me to get information from either one of them
  • I had to fight with Sallie Mae/Navient in an attempt to get them to send me paperwork. They never did. When I talked to them on the phone, they stopped allowing me to record our conversations for some reason

Until today, I had no idea whether this was really done or not. I pull my credit report every year, and was expecting to wait until the summer to see if the status of my Sallie Mae/Navient loans had changed. But I bought a new car last week, and part of the financing involved the dealership pulling my credit report, which I was able to take a picture of. It indicated that the loans were settled for less than the balance owed.

I feel reasonably confident that this is the end. Finally.

Leave a comment

  • If it’s your first comment on my blog, it will probably go into the moderation queue. Don’t worry, it’s not lost; I just need to approve it. It could be a few minutes, hours, or days. I will get to it, though.
  • Try to explain how your situation is different than people that have commented before you. Questions that amount to “I owe money, but can’t afford payments. What should I do?” aren’t constructive.
  • I will assume all questions are about private loans only. Federal loans are a completely different kettle of fish.

1099-C update – Feb 2, 2016

I received ten(!) 1099-C forms from Navient on Jan 28. When I reported them on my taxes, I collapsed them down into a single entry for the total amount. I also filled out insolvency Form 982. I was deeply insolvent at the time of the discharge, so instead of paying income tax on an extra $115,282, I only paid income tax on $32,313, because I was underwater by $82,969.

I used TaxAct, which made the process very straightforward. I collapsed the ten forms into a single line item because TaxAct cannot handle more than five 1099-C forms, and their Form 982 worksheet can only be applied against a single line item. We’ll see if the IRS complains. (I don’t know why they would: the numbers are identical whether they’re reported across ten line items or one.)

Settlement amounts from other readers

  • $30K for $8K or 27 cents on the dollar – Jan 2016.
    • Update Feb 2017: his tax bill was $6,500–$4,500 federal / $2,000 state