I’ve shipped a lot of software where internationalization starts as “we’ll add it later” and ends as “why is this hot path allocating and formatting strings on every request?” This time I wanted to do it differently.
So I ran an experiment: could I build a serious, production-ready internationalization framework for .NET in a weekend, using Claude Code as a turbocharged implementation partner—while keeping full creative control over design, constraints, and quality?
The result is MoonBuggy: a zero-allocation i18n library for .NET (Core+) that generates translation code at compile time via source generators, emits direct TextWriter.Write calls, and uses ICU MessageFormat with full CLDR plural rules for all Unicode locales—without shipping an ICU runtime dependency.
Death by a thousand allocations
Most i18n approaches in server-side .NET do some combination of:
- Look up a key in a dictionary/resource manager
- Fetch a localized template string
- Run runtime formatting (often string.Format-style or interpolation)
- Allocate intermediate strings (sometimes multiple)
- Repeat for every request, in every view, on every hot path
That’s fine for back-office apps. It’s less fine when you care about throughput, p99 latency, GC pressure, and predictable performance under load.
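To make the cost concrete, here is a minimal sketch of that typical runtime-lookup path (the catalog, key names, and Translate helper are all hypothetical, not any particular library's API):

```csharp
using System;
using System.Collections.Generic;

// Demo: every call below re-does the lookup, re-parses the template,
// and allocates a fresh result string.
Console.WriteLine(RuntimeI18n.Translate("welcome", "Contoso")); // prints "Welcome to Contoso!"

// Hypothetical sketch of the usual runtime-lookup i18n path.
static class RuntimeI18n
{
    static readonly Dictionary<string, string> Catalog = new()
    {
        ["welcome"] = "Welcome to {0}!",
    };

    public static string Translate(string key, params object[] args)
    {
        var template = Catalog[key];          // dictionary lookup on every call
        return string.Format(template, args); // runtime parse + intermediate allocations
    }
}
```

Multiply that by every message in every view on every request, and the GC pressure adds up.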
MoonBuggy’s stance is simple: if translations are known at build time, formatting and routing to the right plural case can also be compiled into the binary, and your request path shouldn’t pay extra just because your app supports Polish.
What is MoonBuggy?
MoonBuggy is a .NET internationalization library with three key pieces:
- A runtime package (what your app references)
- A source generator (runs at compile time, emits strongly-shaped code)
- A CLI tool (for working with message catalogs)
You write calls like this (including inside Razor views):
@_t("Welcome to $name$!", new { name = Model.SiteName })
And for translator-friendly rich text, you can write markdown:
@_m("Click **[here]($url$)** to continue", new { url = Model.ContinueUrl })
At compile time, the generator produces specialized code for each locale and pluralization branch, using direct TextWriter.Write(...) calls. No dictionary lookups. No runtime parsing. No runtime string formatting engine doing work on every request.
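As a sketch of the idea (the method and class names here are invented for illustration, not MoonBuggy's actual generated shape): a generator can split "Welcome to $name$!" into constant segments at compile time, so the runtime path is nothing but sequential writes.

```csharp
using System;
using System.IO;

// Demo: write straight into a TextWriter; no template string is ever
// parsed or materialized at run time.
var sw = new StringWriter();
GeneratedMessages.WriteWelcome(sw, "Contoso");
Console.WriteLine(sw.ToString()); // prints "Welcome to Contoso!"

// Hypothetical sketch of the kind of code a source generator could emit.
static class GeneratedMessages
{
    public static void WriteWelcome(TextWriter writer, string name)
    {
        writer.Write("Welcome to "); // constant prefix, baked in at compile time
        writer.Write(name);          // parameter, written directly
        writer.Write('!');           // constant suffix
    }
}
```

No lookup, no format engine, no intermediate string; locale selection just picks which generated method to call.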
ICU MessageFormat and CLDR plurals
MoonBuggy uses ICU MessageFormat semantics with full CLDR plural rules for all Unicode locales.
The important detail is this: MoonBuggy embeds plural rules into your binary, so you don’t need an ICU runtime library deployed alongside your app.
That gives you “serious i18n” behavior (plural categories like one, few, many, etc. depending on locale) while staying deployment-friendly and predictable in production.
A small illustration (ICU-style plural):
"You have {count, plural, one {# message} other {# messages}}"
MoonBuggy compiles the branching logic so the runtime path becomes “evaluate plural category for this locale + count, then write the right bits.”
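To see why this matters beyond English, here is a hand-written sketch (names invented, not MoonBuggy's output) of what compiled plural branching looks like for Polish, where integers fall into one/few/many categories under CLDR rules, and "plik" (file) takes a different form in each:

```csharp
using System;
using System.IO;

// Demo over a few counts; Polish needs one/few/many, not just singular/plural.
foreach (var n in new long[] { 1, 2, 5, 22 })
{
    var sw = new StringWriter();
    GeneratedPlurals.WriteFiles(sw, n);
    Console.WriteLine(sw.ToString()); // 1 plik, 2 pliki, 5 plików, 22 pliki
}

// Hypothetical sketch of compiled plural branching for Polish.
// The CLDR integer rules for "pl" are baked in as plain conditionals,
// so no ICU runtime library is consulted.
static class GeneratedPlurals
{
    static string PolishCategory(long n)
    {
        if (n == 1) return "one";
        long m10 = n % 10, m100 = n % 100;
        if (m10 >= 2 && m10 <= 4 && (m100 < 12 || m100 > 14)) return "few";
        return "many"; // remaining integers, including 0 and the teens
    }

    public static void WriteFiles(TextWriter writer, long count)
    {
        writer.Write(count);
        switch (PolishCategory(count))
        {
            case "one": writer.Write(" plik");   break;
            case "few": writer.Write(" pliki");  break;
            default:    writer.Write(" plików"); break;
        }
    }
}
```

The point is that the category logic compiles down to a handful of integer comparisons per locale, which is cheap enough to run on every request.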
Translation services
Translations live in standard PO files, which means you can plug into the existing gettext tooling ecosystem rather than inventing a new catalog format.
Also: the PO format is shared with Lingui.js, which matters more than it sounds. If you have a product with a .NET backend/admin and a JS frontend, you can reuse the same message catalogs across stacks instead of maintaining parallel translation universes.
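For a sense of the format, a minimal PO entry looks like this (the source reference and the Polish translation here are purely illustrative):

```po
#: Views/Home/Index.cshtml
msgid "Welcome to $name$!"
msgstr "Witamy w $name$!"
```

Plain text, diff-friendly, and readable by translators and version control alike.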
MoonBuggy learned a lot from Lingui.js’s pragmatism here, and from patterns I’ve seen work at scale in real systems.
Compile-time baking
Source generators aren't just performance hacks: they're a chance to catch errors at compile time instead of at run time.
MoonBuggy ships with compile-time diagnostics that catch things like:
- Missing variables ($name$ used but not provided)
- Malformed message syntax
- Type errors for parameters
- Other “this will break at runtime” issues
This is the part I personally care about most: you shouldn’t discover broken translations from a production exception, or from a user screenshot in a language you don’t speak.
Markdown strings
I’ve never loved the “translators edit raw HTML strings” workflow. It’s brittle, and it invites either broken markup or overly constrained messages. On the other hand, all developers know markdown.
MoonBuggy's _m() lets developers write markdown, which is presented to translators as Lingui.js-style numbered tags (<0>…</0>), so they can move markup around safely and predictably. The compiler then generates the final HTML. Developers get structured strings, translators get readable source, and users never see a problem at all.
Example:
The developer sees:
@_m("Click **[here]($url$)** to continue", new { url = "https://example.com" })
The translator sees:
Click <0>here</0> to continue
And the browser sees:
Click here to continue
The intent is straightforward: keep markup ergonomic for humans, while keeping output correct for the browser.
Installation
MoonBuggy is split so your runtime stays lean, and your build gets the generator.
Typical setup:
user@host:~/app $ dotnet add package MoonBuggy
user@host:~/app $ dotnet add package MoonBuggy.Generator
user@host:~/app $ dotnet tool install -g moonbuggy
Then you add PO files, run the CLI to manage catalogs, and the generator does the rest at build time.
(Exact steps live in the docs: https://intelligenthack.github.io/moonbuggy/)
The two-day Claude Code experiment
Let me make this clear: something like this is not for the faint of heart, and this wasn't a "prompt → accept whatever it writes" workflow. The only way building something like this is viable for infrastructure-y libraries is to treat the model like a fast junior engineer with infinite stamina but limited judgment:
- I set strong, opinionated constraints first (zero allocations, PO compatibility, ICU semantics, generator architecture, Lingui.js compatibility)
- I also wrote an implementation plan in 15 phases, an extensive test plan, a clear API surface, and so on, guided by a specific CLAUDE.md skill. I still have to work on phase 16.
- I asked Claude to implement each phase individually. Most initial runs took about ten minutes, plus testing and fixing; perhaps half an hour per phase on average.
- I reviewed commits like I would in a serious codebase. Plenty of harsh criticism was given.
- I drove tests and refactors with the test-driven-development skill (perfect in this case)
- I forced Claude to rewrite pieces when the generated approach felt clever, lazy, or dumb. Honestly, this happened in about two or three phases out of 15.
Claude was extremely effective at the mechanical parts (wiring, repetitive code emission patterns, edge-case enumeration), and it was also useful for exploring alternative designs quickly and digging through mind-bogglingly boring documentation.
All in all, I used two full days at the Max x5 tier.
The "shape" of the library—the opinionated bits, the performance posture, the file formats, the Razor ergonomics, the diagnostics strategy—had to be human-led, or you end up with a pile of features that don't cohere. I used Claude as a smart, accelerating typewriter, not as a magic box.
“Production-ready” and “new”?
MoonBuggy is new, but it’s not a toy. It’s MIT-licensed, designed to be used in real applications, and engineered around constraints that matter in production: performance, correctness, tooling compatibility, and maintainability.
That said, i18n is a deep pit of edge cases. The only way to harden something like this is real-world usage across:
- Multiple locales with non-trivial plural rules
- Real translation workflows (merge conflicts, fuzzy entries, translator tooling quirks)
- High-traffic paths where allocations matter
- Razor-heavy apps where ergonomics are everything
So I’m explicitly asking: try it in a real scenario and tell me what breaks, what’s awkward, and what you’d change.
Where to start
If you want to contribute, open issues with concrete reproduction cases (locale, message, expected output), or send PRs. MIT license means you can also adopt it, fork it, and ship it inside your stack if you need to.
A quick ask
If you’re the kind of person who has ever profiled a localized view and muttered “why are we formatting strings in a loop,” I’d love your eyes on this.
Try MoonBuggy, kick the tires, and share feedback—especially around generator output quality, edge cases in ICU syntax, and real translation workflow friction.