Go vs C#, Part 3: Compiler, Runtime, Type System, Modules, and Everything Else

Alex Yakunin
ServiceTitan — Titan Tech
Jan 17, 2020 · 28 min read
Want to know who is who here? Read till the end :)

This is the last, but hopefully the most interesting, post in the series. Part 1 and Part 2 were focused on two key features of Golang — goroutines and nearly pauseless GC. This post adds all the missing pieces.

Similarities

Both languages:

  • Compile to native code
  • Can target multiple platforms
  • Rely on garbage collection
  • Support modules (assemblies in .NET)
  • Support classes (structs in Go), interfaces, and function pointers (delegates in .NET)
  • Offer a set of options for error handling
  • Support asynchronous execution
  • Have a rich base class library
  • Have similar runtime performance

But there are more differences than similarities in how all these features are implemented. Let’s jump to these :)

Compilation

Go compiles to native binaries — i.e. its binaries are “tied” to the operating system it is compiled for.

.NET Core compiles to cross-platform binaries by default. You need the .NET Core Runtime to run these with the “dotnet <executable>” command; these binaries contain MSIL code — a machine-like code that’s transformed to native code by the .NET Just-In-Time compiler. The JIT compiler is highly efficient: it caches previously compiled modules (and most BCL modules get pre-compiled and cached right when you install .NET Core), and it’s fast — by default, it emits a method’s native code w/o any complex optimizations on its first invocation, and produces an optimized version as soon as (or if) it recognizes the method is frequently invoked — i.e. you get a “lightweight PGO” for free there.

You can still produce fully native binaries with .NET Native.

Garbage Collection

On the surface, it’s very similar — but there are dramatic differences in the implementation.

.NET GC is optimized for max. throughput (the max. rate of allocations it can sustain) and runtime performance:

  • It’s generational, which also means it’s built to be extremely CPU cache-friendly. When your code runs, it’s highly likely all the objects it recently allocated or used are either in L0 CPU cache (that’s where Gen0 lives) or in L1 cache (that’s where Gen1 lives).
  • Because it’s a generational GC with compactions, allocations are extremely cheap in C#: basically, a pointer increment + comparison, i.e. heap allocations are fairly similar to stack allocations here.
  • On the downside, it does compactions — i.e. every object it allocates may move a few times in the heap (~ once per GenN→GenN+1 transition + once per full GC). Worse, compactions imply .NET has to fix up the pointers to any object it moves — in CPU registers, on call stacks, and in other heap objects, so it has to make longer pauses to run such fixups on compactions.

Go Garbage Collector is, on the contrary, designed to be pauseless:

  • It’s way less cache-friendly — honestly, there is almost nothing that makes it cache-friendly at all, except the fact that it keeps objects of the same size class (and thus frequently the same type) close to each other.
  • There are no generations, so every GC is a full GC in Go; if your app rapidly allocates objects and drops the references to them, you’re more likely to see OOM or allocation throttling there, because the GC can’t scan the object graph quickly enough to free up what’s unreferenced.
  • But there are no compactions, so no pointer fixups, etc. — which means Go should have a completely pauseless GC, assuming you’re OK with spending a bit of extra time on every pointer write (~ while the “mark” GC phase runs, mark the target of every reference you write as “alive” — but of course, the devil is in the details). It wasn’t pauseless from the beginning, though — GC pauses were an “existential threat” in Go circa 2014, but the developers managed to reduce them to sub-millisecond levels closer to 2017.

Tiny GC pauses in Go are paid for with raw performance. If you want more details on this, check out Part 2. The comparison there was done against .NET Core 2.1, and I plan to share an update for the current state (.NET Core 3.1 vs Go 1.13.6) next week. But preliminarily, the gap has become even larger:

  • C# is ~ 4.5x faster on a single-threaded burst allocation test. The difference grows to 23x — 7.7B allocations per second for C# vs 0.34B for Go — when the thread count scales to 48.
  • So allocation speed scales ~ linearly with the thread count on .NET up to the core count (1 → 48 on the test machine); as for Go, it maxes out and stays ~ stable in the 12 … 36 thread range, but drops by almost 40% (to 0.33B/s) when it gets closer to 48 threads.

For the sustained allocation speed & STW pauses, we can compare the results for 32GB static set and 36 threads (on 48 cores):

  • 10.05 GB/s for .NET, max. STW pause was 2.6s, 99.99% = 72ms
  • 2.89 GB/s for Go, max. STW pause was 0.1s, 99.99% = 46ms
  • “We can compare …” means it was the most complex test Go was able to complete on the 128 GB machine — it crashed with OOM on every single test with (static set size ≥ 1GB & thread count = 48 / 48 cores). Moreover, it was reliably crashing the Desktop Window Manager with (static set size ≥ 64GB & thread count = 36 / 48) — it’s not fully clear how, but it feels like it was freezing instead of terminating on OOM, causing an OOM in DWM as a result.
  • All .NET Core tests completed without OOM.

If you’re interested in details, see the raw output for tests on Ryzen Threadripper 3960x w/ 128GB RAM, Windows only for now.
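For reference, below is a minimal C# sketch of the kind of burst-allocation micro-benchmark described above; it is not the actual benchmark behind these numbers, just an illustration of the idea (many threads allocating tiny, short-lived objects; the constants are arbitrary):

    // A minimal C# sketch of a burst-allocation micro-benchmark (illustrative only).
    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    public static class AllocationBurstBenchmark
    {
        private sealed class Node
        {
            public Node Next;
            public long Value;
        }

        public static void Main()
        {
            const int threadCount = 8;                 // scale this up to the core count
            const int allocationsPerThread = 50_000_000;

            var sw = Stopwatch.StartNew();
            Parallel.For(0, threadCount, _ =>
            {
                Node head = null;
                for (var i = 0; i < allocationsPerThread; i++)
                {
                    // Tiny short-lived objects: Gen0 allocations, ~ a pointer bump each.
                    head = new Node { Next = head, Value = i };
                    if ((i & 0xFFFF) == 0)
                        head = null;                   // let the chain die young
                }
            });
            sw.Stop();

            var total = (double) threadCount * allocationsPerThread;
            Console.WriteLine($"{total / sw.Elapsed.TotalSeconds / 1e9:F2}B allocations/s");
        }
    }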

Modules

Again, similar on the surface, but very different in nature.

Related concepts in Go:

  • Package: ~ a folder with source code. So adding a package means ~ adding more source code to your project. Each package is re-compiled only when it or its dependencies change. The compiled version of a package is something only Go cares about — you shouldn’t even know it exists. Ultimately, packages produce either libraries or executables, even though there is no explicit compilation artifact for libraries — they are consumed in source code form.
  • Module (new from v1.13): ~ a folder with source code + a go.mod file storing the semantic version of the module + all of its dependencies. It can be published to a Go module repository.

Similarly, there are 3 module-related concepts in C#:

  • Project: a folder with C# files + a .csproj file describing all the dependencies and common properties of the assembly to emit.
  • Assembly: a project’s compilation result, which contains MSIL code + metadata describing it (methods, types, etc.). Remember, .NET relies on its JIT compiler to run the code, so basically .NET assemblies are ~ like a mix of .obj (or .o) + .h/.hpp files in C. They don’t store source code, though all the symbols and their compiled implementations are there. Similarly, assemblies can be libraries, executables, or both (nothing prevents you from importing whatever you want from an assembly that contains an entry point).
  • NuGet package: a .nupkg file (actually, a Zip archive) containing a .NET assembly + anything else you want + a .nuspec file with the manifest. Such files are normally published to one of the public NuGet repositories, though you can use a private one too. Typically you reference NuGet packages rather than assemblies from your C# projects (.csproj files); they’re automatically downloaded & installed when your projects get built (e.g. with “dotnet build”). And since the NuGet format isn’t tied to .NET, other tools (e.g. Chocolatey) use it for their own packages.

So Go packages contain source code, while .NET packages don’t. Does the difference end there?

No. The biggest difference is that .NET can load and unload assemblies at runtime, “integrating” the types they bring with the current set of types. In particular, this enables the following scenarios:

  • Plugins: you can declare ~ an IMyAppPlugin interface in your app, implement logic that loads all the assemblies from a Plugins folder, creates instances of every type implementing IMyAppPlugin there, and invokes something like IMyAppPlugin.Embed(myApp) (see the sketch after this list). That’s why .NET apps are quite extendable.
  • Runtime code generation: .NET has the Reflection.Emit API and LambdaExpression.Compile methods (the latter use Reflection.Emit under the hood) — both emit dynamic assemblies, and they’re almost instant. Your .NET code can emit any .NET code — and this new code can use any types the .NET Runtime knows at that point, as well as emit its own types. This feature is heavily used to speed up either complex logic (all major .NET serializers use it; compiled Regex expressions leave most other regex implementations in the dust, including Go’s) or type-dependent logic (most IoC containers rely on it). It also enables some AOP scenarios.
  • Self-introspection at the code level: since your code can access the MSIL and metadata for any part of the app, it can inspect itself (tools like Cecil help a lot with this) — for example, to generate a heavily parallel version of itself running on a GPU (check out this sample relying on ILGPU).
  • All of this together enables completely weird (but apparently quite interesting) scenarios — e.g. even apps that were never meant to be extendable get extensions in hacky ways thanks to this. The most notable modern-day example I know is Beat Saber — the most popular VR game of the last 2 years, and one I am a big fan of. People have hacked together 50+ plugins and 20,000+ community-crafted maps for it, even though the game doesn’t have an official plugin API. How? Well, it’s mostly a .NET app — Beat Saber is built on Unity, which uses C# / .NET as its main “scripting” language. And there are a number of open-source tools for .NET (Fody, Harmony) capable of post-processing already compiled assemblies to embed, change, or remove whatever you like there. So someone crafted BSIPA for Beat Saber, which embeds plugin invocation endpoints right into the game assemblies and makes sure the game loads plugins when it starts. Voilà! The Oculus Quest version of Beat Saber has a similar mod (BMBF), even though Quest runs on Android (but Unity for Android still runs .NET).
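Here is a minimal sketch of that plugin-loading scenario (IMyAppPlugin, Embed, and the Plugins folder are illustrative names from the bullet above, not a real API):

    // A minimal C# sketch of runtime plugin loading via assembly loading + reflection.
    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    public interface IMyAppPlugin
    {
        void Embed(object app); // illustrative plugin entry point
    }

    public static class PluginLoader
    {
        public static void LoadPlugins(object app, string pluginDir = "Plugins")
        {
            foreach (var dll in Directory.EnumerateFiles(pluginDir, "*.dll"))
            {
                var assembly = Assembly.LoadFrom(dll); // load the assembly at runtime
                var pluginTypes = assembly.GetTypes()
                    .Where(t => typeof(IMyAppPlugin).IsAssignableFrom(t) && !t.IsAbstract);
                foreach (var type in pluginTypes)
                {
                    var plugin = (IMyAppPlugin) Activator.CreateInstance(type);
                    plugin.Embed(app); // let the plugin hook into the app
                }
            }
        }
    }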

Go provides the “plugin” package, which technically allows you to load .so files dynamically — but:

  • This works only on Linux and Mac OS
  • The compilation environment for the host and the plugin has to be exactly the same — in particular, all package references must match exactly.
  • There are many other cons, e.g.: “Many people misunderstand what the plugins can do today. They don’t currently easily enable 3rd parties to make plugins for your app; […] in practice only the original build system can reliably build the plugins. The issues are full of people finding all the little differences in their build environments.”
  • This explains why HashiCorp (the company behind Terraform, Consul, Vault, etc. — they absolutely have to provide a way for third-party vendors to write plugins) relies on its own plugin API, hosting plugins in sub-processes and invoking them via IPC.
  • So there are workarounds, but they are pricey & don’t cover all the use cases in-process plugins do: you can’t share data between plugins and the host, use the same in-process caches, etc., which is a show-stopper for many of the scenarios I described above.

Classes, Structs, Interfaces

C# has both classes and structs (value types):

  • Classes always “live” in the heap; structs live on call stacks and in the heap — either as fields of other classes or in their “boxed” form. Thus the “new” expression:
    — for class: makes a heap allocation + invokes its constructor
    — for struct: just calls the constructor; space for the struct is already reserved at that point (on the current stack frame or in the field of another class/struct).
  • Classes are always passed by reference; structs are passed by value by default, though you can pass them by reference too (via in/ref/out parameters, ref returns, ref structs, etc. — but the set of cases is limited here); see the sketch after this list
  • Classes may have virtual methods and may inherit from other classes; structs have neither
  • When packed into arrays, structs require exactly their size per item; classes require a pointer per item (i.e. 8 bytes on a 64-bit system) + obviously, the memory for the instance itself
  • Every instance in the heap has a 2-pointer header: a pointer to the virtual method table (~ a type descriptor) and a pointer-sized system-reserved field (it stores a pseudo-random value used for referential equality + reserves a few bits for GC and synchronization)
  • All interface-typed values require 1 pointer
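A minimal sketch illustrating the class vs struct difference described in this list (illustrative types):

    // Classes are reference types; structs are value types copied on assignment.
    using System;

    public class PointClass { public int X, Y; }      // reference type: lives in the heap
    public struct PointStruct { public int X, Y; }    // value type: lives inline

    public static class StructVsClassDemo
    {
        public static void Main()
        {
            var c = new PointClass { X = 1, Y = 2 };  // heap allocation + constructor
            var s = new PointStruct { X = 1, Y = 2 }; // no heap allocation: space is on the stack frame

            var c2 = c; c2.X = 100;                   // copies the reference: c.X changes too
            var s2 = s; s2.X = 100;                   // copies the value: s.X stays 1

            Console.WriteLine(c.X);                   // 100
            Console.WriteLine(s.X);                   // 1

            object boxed = s;                         // casting a struct to object/interface boxes it (heap allocation)
            Console.WriteLine(boxed.GetType().Name);  // PointStruct
        }
    }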

And in Go there are just structs, but:

  • They support inheritance via embedding.
  • Structs can exist both in the heap and on call stacks:
  • By default, you create a struct w/o explicitly specifying where it’s supposed to be created, and escape analysis helps the compiler decide where to place it: on the call stack or in the heap. As far as I understand, it can also place it on the call stack & move it to the heap later. You can also explicitly allocate the struct on the heap.
  • There is no object header for heap-stored objects in Go, so structs take the same space on goroutine stacks, in heap, inside fields of other structs, and inside arrays/slices. The absence of header means there is no good way to implement reference-based equality for such objects. Don’t worry if you don’t see the connection — I’ll explain this in the “Equality” section later.
  • There are no virtual methods for structs, but structs can implement interfaces — so you get virtual dispatch once you cast your struct to an interface.
  • Interestingly, interface-typed values are 2-pointer values (so they take 16 bytes on a 64-bit platform, or two CPU registers): one of them points to the underlying struct, and the other — to the interface method table. So:
    — Type information “travels” with the object on .NET (inside its header)
    — And on the contrary, it “travels” with the pointer in Go.

There are obvious pros and cons for both approaches:

  • Overall, structs in Go work very similarly to structs in .NET — with a few improvements (embedding + casting to an interface w/o boxing)
  • .NET requires more time to invoke interface members (it caches references to interface method tables, but still)
  • Go requires more space to pass interface references — in registers, on call stack, in arrays/slices, etc.;

It’s worth mentioning here that Go:

  • Returns “err” value (of error type, which is an interface) from nearly every method that may fail
  • Always passes return values via the call stack, not via registers — btw, pay attention to the unusual “prologue” of every call that checks for the potential need for stack expansion. That’s the price Go pays for goroutines — most other static languages don’t do this extra check on every call.
  • So this extra “err” requires an extra 16 bytes on the call stack. Moreover, the code that gets “err” from a call it makes has to do an extra check for “err == nil”… Isn’t this per-call “extra” (16 bytes on the call stack + two comparisons) a bit too expensive?

And a few more observations:

  • The interface field size exceeds the machine word size in Go, so it can’t be updated atomically. Not sure if this creates any big issues, but I know for sure it’s pretty common in .NET to have pointers that are updated atomically (e.g. to the root of some shared immutable model). Though a workaround — wrapping the interface value into a struct & using a pointer to that struct instead — may work in most cases; it’s just a tiny bit slower to access (you resolve one extra pointer) + requires an extra allocation on update.
  • On the bright side, this feature (seemingly — I didn’t check this) allows Go to cast any struct (e.g. stored in an array or in a field of another struct) to an interface it supports without boxing. The same is impossible in .NET (though you can achieve something similar inside a generic method, i.e. there are ~ workarounds allowing you to get rid of extra allocations in similar scenarios; see the sketch below).
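To make that .NET workaround concrete, here is a minimal sketch (illustrative types): assigning a struct to a plain interface-typed variable boxes it, while calling the interface member through a struct-constrained generic method avoids the allocation:

    // Constrained generic calls let the JIT dispatch interface members on structs w/o boxing.
    using System;

    public interface IArea { double Area(); }

    public struct Circle : IArea
    {
        public double Radius;
        public double Area() => Math.PI * Radius * Radius;
    }

    public static class BoxingDemo
    {
        // The JIT specializes this method per value type T, so the call is a
        // constrained call — the struct is never boxed.
        public static double AreaOf<T>(T shape) where T : struct, IArea
            => shape.Area();

        public static void Main()
        {
            var circle = new Circle { Radius = 2 };

            IArea viaInterface = circle;           // boxing: a heap copy of the struct is made
            Console.WriteLine(viaInterface.Area());

            Console.WriteLine(AreaOf(circle));     // no boxing
        }
    }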

Overall, the Go model seems simpler / more attractive:

  • No object headers (though if you want generational GC, you’ll anyway get these, I guess)
  • No value types / reference types
  • Embedding for structs + inheritance only for interfaces seems something that’s easier to understand + it’s closer to what happens under the hood

But none of this is strong enough to be a deal-breaker; in addition, Go has its own problems — e.g. I instantly found that escape analysis is a leaky abstraction there; earlier I wrote about a similar issue with slices, and the “Equality” section below describes another one. Thus it feels there might be more of such issues… though I don’t know the language well enough to claim this with any certainty.

Verdict for now: 50/50.

Error Handling

C# uses “classic” exception handling; if you’re interested in the nasty details, check out my Exception Handling 101 post on this.

Go chooses a fairly unique path with two options:

  • Explicit error passing: there is a convention that the last value returned from a function that may gracefully fail has to be “err” (of the error type) — nil (a null pointer) in case everything is fine and something else in case it’s not. The caller has to check for nil explicitly.
  • There is also defer, panic, and recover — they’re used for ~ non-graceful failures.

Honestly, it’s hard for me to tell what’s better from the human perspective w/o being a bit biased:

  • If you read my “Exception Handling 101”, you noticed I see no issues with classic exception handling. IMO it’s way more about knowing the rules of the game than about the underlying mechanism.

But I can’t help mentioning that the Go model is clearly more expensive:

  • Classic exception handling is designed to make sure you pay the bill only when something actually happens; otherwise, there is virtually no extra cost.
  • On the contrary, the Go error handling model makes your program pay the bill for every call it makes that returns “err”, and for every “defer” (a sketch contrasting the two models in C# terms follows).
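A minimal sketch of both models expressed in C# (illustrative code; a string stands in for Go’s error interface):

    // Classic exceptions vs an explicit Go-style (value, error) return.
    using System;

    public static class ErrorHandlingDemo
    {
        // Classic exception handling: no per-call cost until something actually throws.
        public static int ParseOrThrow(string s) => int.Parse(s);

        // Go-style: every caller pays for the extra return value and the explicit check.
        public static (int Value, string Error) ParseOrError(string s)
        {
            if (int.TryParse(s, out var value))
                return (value, null);
            return (0, $"'{s}' is not a number");
        }

        public static void Main()
        {
            try
            {
                Console.WriteLine(ParseOrThrow("42"));
            }
            catch (FormatException e)
            {
                Console.WriteLine($"failed: {e.Message}");
            }

            var (value, error) = ParseOrError("oops");
            if (error != null)                        // ~ the "if err != nil" pattern
                Console.WriteLine($"failed: {error}");
            else
                Console.WriteLine(value);
        }
    }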

And finally, if the panic → recover pattern doesn’t differ much from regular exception handling, do you still feel the original idea of returning “err” everywhere is conceptually good — because otherwise, why do you need both?

Equality (==, ≠)

It works completely differently on .NET and in Go.

First, a short introduction: equality normally requires two operations:

  • Compare two instances for equality
  • Compute instance’s hash code in a way consistent with equality comparison operation.

This implies the hash code must always be equal for equal instances, and highly likely unequal for unequal instances (it can still be equal, of course — this is called a “hash collision”). In other words, if you compare two hashes & see they are unequal, the instances are definitely unequal; and if the hashes are equal, this tells you nothing — the instances might be equal or unequal.

And finally, hashes should not change over time for immutable instances. Sets, maps, and other collections rely on hashes, so if you put a (key1, value1) pair into a (hash)map, and later key1’s hash changes, the map[key1] lookup won’t produce value1 anymore.

All of this means equality and hashes are mostly meaningless for mutable objects — unless you use only their immutable part in Equals and GetHashCode operations:

  • If there are no GC compactions, object address in memory fits the description of “immutable part” — it’s unique for every object and never changes.
  • There are also objects that look immutable from their public API side, but have mutable internal state — e.g. because they cache something. For example, it could be your own string wrapper that caches the string’s hash code to avoid recomputing it (let’s say the strings you’re dealing with can be very long). Its full state is mutable, but the publicly available part of it is immutable. That’s why you can still implement equality and hash code computation for it.

In .NET, equality is mostly user-defined — you have to code it manually for structs (pass-by-value types), and:

  • Normally you mark most of your structs as read-only (immutable) → the implementation of GetHashCode and Equals is straightforward
  • If you’re writing a non-read-only struct, you should apply the rules described above — i.e. ideally, compare just the immutable part.
  • Visual Studio and Rider can generate Equals and GetHashCode implementations automatically (~ like the sketch below)
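A minimal sketch of what such a hand-written (or IDE-generated) implementation typically looks like for a read-only struct (Money is just an illustrative type):

    // Structural equality + a consistent hash code for an immutable struct.
    using System;

    public readonly struct Money : IEquatable<Money>
    {
        public readonly decimal Amount;
        public readonly string Currency;

        public Money(decimal amount, string currency)
        {
            Amount = amount;
            Currency = currency;
        }

        // Equality over the (immutable) fields.
        public bool Equals(Money other)
            => Amount == other.Amount && Currency == other.Currency;

        public override bool Equals(object obj)
            => obj is Money other && Equals(other);

        // Consistent with Equals: equal instances produce equal hashes.
        public override int GetHashCode()
            => HashCode.Combine(Amount, Currency);

        public static bool operator ==(Money left, Money right) => left.Equals(right);
        public static bool operator !=(Money left, Money right) => !left.Equals(right);
    }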

On the contrary, classes (pass-by-reference types) automatically get reference-based equality: two references are equal if they point to the same instance. Normally you don’t change this — even though you can.

  • Reference-based equality requires some extra machinery in a language with a compacting GC. You can’t assume a pointer, if untouched, retains its value — pointers are modified by the GC on heap compactions. And this presents an additional problem for pointer-based equality: maybe you can implement the comparison (you need to read & compare two pointers atomically), but how do you compute the hash, which has to stay the same for the same object — even if its address changes?
  • In .NET, this “extra” is a pseudo-random number stored in the object header, which acts as the hash code used for reference equality. Unfortunately, I don’t know exactly how it’s computed, though most likely it’s derived from the object’s address & some seed (likely, an additive one) that changes over time (if you have compactions, lots of addresses may match).

But it’s very different in Go, where equality is always structural. I guess this is because of two factors:

  • All structs behave like they’re passed by value, even though the pointers are passed under the hood. And since the pointer is something you aren’t supposed to even think about here, it’s logical to ignore it from the equality standpoint as well.
  • I wrote that reference-based equality requires ~ a header or something similar in a language with a compacting GC. And even though Go doesn’t have a compacting GC yet, it reserves the possibility to add one in the future. That’s why it explicitly prohibits you from assuming the pointers are stable. And since objects in Go don’t have headers, reference-based equality is simply impossible here.
  • One of the consequences of this is how equality works for interfaces: two interface values are equal if the underlying instances have the same type and are structurally equal. For comparison, in .NET and Java interfaces are equal if and only if they point to the same instance (i.e. it’s reference-based equality).


As usual, pros and cons:

  • Go wins in simplicity here: yes, it’s easier to understand how equality works there
  • And loses on every other front: there are plenty of quite generic cases where you really need either custom equality logic or reference-based equality.

Base Class Library

The most striking difference here is that the .NET BCL has a fair number of methods and interfaces that are treated specially by the C# compiler (even though in most cases the compiler isn’t looking for a specific interface — it looks for the presence of methods with specific names). Some examples:

  • IDisposable/IAsyncDisposable: used in “using” statements; provides support for resource disposal (stream.Close-like scenarios). In reality, the compiler looks for the presence of Dispose/DisposeAsync methods; if you need disposal, you implement one of these interfaces (see the sketch after this list).
  • IEnumerable<T> & IAsyncEnumerable<T>: used in “foreach” loops and in methods with “yield return”, provides support for sequence enumeration. In reality, the compiler looks for GetEnumerator method.
  • Task/Task<T>/ValueTask/ValueTask<T>: used in “await” expressions, provides support for async completion notification. In reality, the compiler looks for GetAwaiter method.
  • Enumerable/Queryable.Select/Where/… (tens of other extension methods): used in LINQ expressions (see “from”, “where”, “select”, “group” & other keywords); the compiler transforms these expressions ~ to the chains of method calls.
  • IEquatable<T> and IComparable<T> interfaces — all generic collections in .NET rely on them to test for equality or relative order. In particular, Dictionary<TKey, TValue> uses IEquatable<T> to compare & hash the keys.
  • Even the very base type — Object — provides GetHashCode() and Equals(…) you can override in descendants, + GetType() and a few other methods you can invoke.
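A minimal sketch of two of these compiler integrations, “using” + IDisposable and “foreach” / “yield return” (illustrative types):

    // "using" calls Dispose() automatically; "yield return" builds the enumerator for you.
    using System;
    using System.Collections.Generic;

    public sealed class TempResource : IDisposable
    {
        public void Dispose() => Console.WriteLine("released"); // called by "using"
    }

    public static class BclIntegrationDemo
    {
        // A sequence generator: the compiler turns this into an IEnumerator<int> state machine.
        public static IEnumerable<int> Squares(int count)
        {
            for (var i = 1; i <= count; i++)
                yield return i * i;
        }

        public static void Main()
        {
            using (var resource = new TempResource()) // Dispose() runs even if the body throws
            {
                foreach (var square in Squares(3))    // "foreach" uses GetEnumerator()/MoveNext()
                    Console.WriteLine(square);        // 1, 4, 9
            }
        }
    }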

Go, on the contrary, provides language support (i.e. special syntax) only for built-in types (slices, maps, etc.) — there are no interfaces or types you can implement or extend that are somehow supported by the language.

The gist:

  • C# is well-integrated with its BCL. Equality/hashing, sequences/LINQ, disposal — all of this is partially supported by C#.
  • Go takes a different path by providing as little of such integrations as possible.

Other similar features present in both languages

  • Slices in Go ~= Span<T> in .NET
  • Extension methods: very similar — you are free to “attach” methods to any structs and interfaces in Go (the C# side of this is sketched after this list)
  • Both languages support unsafe pointers / unsafe code.
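For reference, a minimal sketch of a C# extension method (Shorten is just an illustrative example):

    // Extension methods "attach" new methods to an existing type without modifying it.
    using System;

    public static class StringExtensions
    {
        public static string Shorten(this string text, int maxLength)
            => text.Length <= maxLength ? text : text.Substring(0, maxLength) + "…";
    }

    public static class ExtensionMethodDemo
    {
        public static void Main()
            => Console.WriteLine("A fairly long sentence".Shorten(10)); // "A fairly l…"
    }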

Similar anti-patterns / design mistakes

Go features missing in C#

  • Go-like asynchronous execution model — there is a dedicated section below on goroutines and async-await.
  • A convention instead of extra keywords for public/private members — the number of modifiers in C# member declarations can sometimes scare even a veteran: “protected internal static readonly Really? really”
  • That’s mostly it.

C# features missing in Go

Grab a cup of coffee — the list is long:

1. Generics — and honestly, that’s important. If you look at any other modern statically compiled language, generics are there. And I fear it’s going to be fairly hard to add them to Go — mostly because of its static type system. I’ll expand on this further, but the main consequences are:

  • It’s harder to design truly efficient generic data structures and algorithms on Go (though some of its features — mostly, the way interfaces are implemented there — partially mitigate this)
  • Obviously, you are more limited in compiler-supported type checks because of this. Again, this isn’t something you can’t live without, but generics & type checks are (arguably) the main reason developers tend to use TypeScript instead of JavaScript more and more.
  • There is an opinion that generics weren’t added to Go to keep things simpler — that’s obviously not true. They’re simply not that easy to implement — esp. in a language that’s designed to have a static type system at runtime. It’s not an addition in this case, but a significant refactoring; moreover, it’s probably the most fundamental feature Go has on its roadmap. This explains why generics were announced ~ 2.5 years ago, and there are still no PRs / issues associated with them (I did my best trying to find any; maybe I’m wrong).
  • Generics influence everything you write, but mostly — your BCL. And honestly, you should add them earlier rather than later — the longer you wait, the larger the part of your BCL that becomes obsolete as soon as you add them. There were no generics in .NET for the first 4 years (2002 … 2006), and some legacy of this is still there (e.g. Hashtable and other untyped collections/interfaces from System.Collections are still in the BCL — even in .NET Core). And Go is 10 years old now.

2. Lambda expressions; more precisely, Go offers anonymous functions (closures), but there is no type inference for their parameters, so the code relying on them looks ugly.

3. Sequence generators (methods with “yield return”)

4. LINQ (~ language-integrated monads); there are a few modules attempting to implement LINQ-to-Enumerable for Go. But even a quick look at the examples reveals that it’s neither convenient nor performant there:

  • Missing lambdas make you write way more code
  • Missing generics make you cast each function argument from the interface{} type (it’s similar to Object in C#) to its actual type — and that’s for every invocation of a function that evaluates a criterion
  • The compiler can’t help you with any type checks there — all the sequences have the same type (~ like IEnumerable<object> in .NET)

5. Operator overloading — quite useful in some scenarios (e.g. types like BigInteger and Vector<T> clearly benefit from this; overloading == and ≠ is also very common)

6. All of this means DSLs (domain-specific languages) are much harder to build in Go. On the contrary, they are pretty easy to build in C#, and F# — with its quotations, computation expressions, and type providers — is simply a paradise for DSL builders.

But are DSLs important? Well, here are some examples of such DSLs on .NET:

  • WebSharper transforms any F# quotations (~ specially decorated code on F#) to JS on the fly effectively turning the F# itself into a DSL;
  • Projects like ILGPU (freeware), AleaGPU (commercial, though ~ free for consumer GPUs), and Hybridizer (commercial) enable you to write CUDA kernels (i.e. run your code on GPUs) on any .NET language or use GPU to process your data in heavily parallel fashion — and that’s w/o a need to use any other language.
  • LINQ data providers form another subset of, in fact, DSLs built on top of C# — that’s what you primarily use there to access and process data. LINQ-to-enumerable is used quite frequently — probably in every other method processing some kind of sequence. And once you know it well, it feels quite inconvenient to write a few lines of code instead of a one-liner like “.GroupBy(x => x.Name).OrderByDescending(g => g.Count()).ToList()” (a sketch of such a one-liner is included at the end of this section).

7. Tuples — definitely useful too. Just to note, multiple return values in Go are totally not the same thing; tuples + out parameters are what you use in C# in similar scenarios.

8. Nullable<T>/Option<T> types — I guess you need generics for that, so…

9. Expression trees — a crucial part of LINQ-to-Queryables / LINQ providers

10. Pattern matching

11. Enumerations

12. Attributes — again, quite frequently used feature

13. “using” keyword / IDisposable interface — clearly a big miss

14. SIMD intrinsics — and yeah, they can help you get nearly C++ speed on some problems.

And there are plenty of less important features:

  • Dynamic binding / DLR
  • String interpolation
  • Auto-properties
  • Anonymous types
  • Events — though not a big deal, I guess — Rx is replacing events everywhere, and delegates are enough to implement your own version of these.
  • Indices & ranges, range expressions, out parameters, default parameter values, default interface methods, read-only members, nameof expressions…
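And, as promised in item 6, a minimal sketch of the kind of LINQ-to-enumerable one-liner mentioned there (the sample data is made up); replicating this in Go without generics and lambdas takes noticeably more code:

    // Group, order, and format a sequence in a single LINQ chain.
    using System;
    using System.Linq;

    public static class LinqDemo
    {
        public static void Main()
        {
            var names = new[] { "go", "csharp", "go", "fsharp", "go", "csharp" };

            var summary = names
                .GroupBy(x => x)                      // group identical names
                .OrderByDescending(g => g.Count())    // most frequent group first
                .Select(g => $"{g.Key}: {g.Count()}")
                .ToList();

            Console.WriteLine(string.Join(", ", summary)); // go: 3, csharp: 2, fsharp: 1
        }
    }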

Asynchronous Execution — recap from Part 1

If you’re interested in a detailed comparison of goroutines and async-await, check out Part 1. The gist is:

  • Go destroys C# if we compare the convenience of asynchronous programming here and there — you get it basically for free in Go (in terms of coding), even though you pay for this convenience with a tiny bit of performance on every call you make.
  • If you’re OK with seeing hundreds of async-await statements, you’re going to be fine with C# too. But you won’t love async-await there after learning how it works in Go, even though the same model (async-await) is used in almost every other language.
  • It’s worth saying that the async-await machinery in C# allows you to implement your own Task-like objects — e.g. your own lightweight tasks. Feels like a plus, though so far I’ve never had to use this :)
  • The performance is hard to compare because the execution models are very different. There are no recent benchmarks; as for the past benchmarks (~ 1–2 years old), C# and Go were very close.

Sequences, Rx, IAsyncEnumerable<T>

This section is here mainly to demonstrate why goroutines are nearly as important as generics.

.NET BCL provides at least 3 types of sequences:

  • IEnumerable<T> is for interactive (“pull”) & synchronous sequences. The caller (usually — via a “foreach” loop) “pulls” items from a sequence, which makes the enumerator do some work to provide the next one. It is synchronous because all the handlers are.
  • IObservable<T> is for reactive (“push”) and synchronous sequences. The producer “pushes” items (events) into an event sequence, and its subscribers run some computation (e.g. produce items in their own sequences) as a result.
  • IAsyncEnumerable<T> blends the two: it’s a “pull” sequence whose handlers can be both synchronous and asynchronous. Its caller may asynchronously await the next item in such a stream (and btw, you don’t pay a big penalty for this — IAsyncEnumerator<T>.MoveNextAsync() returns ValueTask<bool>, i.e. there should be no allocations for synchronously completing calls).
  • In addition to that, C# provides special syntax sugar allowing you to write methods returning IEnumerable<T> and IAsyncEnumerable<T> (sequence generators) in a very convenient way (using “yield return” to produce the next item; see the sketch after this list).
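A minimal sketch of such an asynchronous sequence generator and its consumer (requires C# 8 / .NET Core 3.0+; the delay just simulates waiting for the next item):

    // "await foreach" + an async iterator built with "yield return".
    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;

    public static class AsyncSequenceDemo
    {
        // The compiler builds the IAsyncEnumerable<T> state machine for us.
        public static async IAsyncEnumerable<int> TicksAsync(int count)
        {
            for (var i = 1; i <= count; i++)
            {
                await Task.Delay(100); // pretend we're waiting for the next item to arrive
                yield return i;
            }
        }

        public static async Task Main()
        {
            await foreach (var tick in TicksAsync(3)) // awaits MoveNextAsync() under the hood
                Console.WriteLine(tick);              // 1, 2, 3
        }
    }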

So C# has a lot of fancy stuff here and Go doesn’t. Now a statement you probably don’t expect: any sequence implementation in Go automatically provides the features of all these 3 sequence types. Wait, what? Well, any function in Go is both synchronous and asynchronous. So to create a reactive-style sequence there, you need:

  • a reactive-style item producer + an IEnumerator-like type awaiting for the next item from the producer’s channel in its MoveNext()-like method
  • an IEnumerable.Consume() method enumerating the sequence until the end — in a newly created goroutine.

→ If you’re on Go, IEnumerable<T> is all you need…

But (a big “but”): Go has no generics and no lambdas, so no type checks, a way more verbose syntax, the need to cast every handler’s parameter to its actual type, worse performance, etc. — i.e. what I described is a dream; the reality is much darker.

A couple of questions to language developers:

  • How do you justify the fact that humans, not machines, have to deal with the fairly dumb work associated with asynchronous programming — assuming nearly all the code we write nowadays is at least potentially asynchronous?
  • Why is Go the only language that tackles this well? On the surface, it seems goroutines are definitely easier to implement than generics. So why are other language developers ignoring the opportunity to simply copy a good solution?

Runtime Performance

Overall, it’s similar. But it’s worth mentioning that currently C# virtually destroys Go on most of the tests @ the Computer Language Benchmarks Game:

The only tests where C# is losing are math problems, which is a bit surprising. A quick check reveals that:

  • “pidigits” relies on an external library for big integers, i.e. it’s more a performance test for this library + external function call test
  • “mandelbrot’s” first “for” loop assumes Vector<double>.Count (the size of a hardware SIMD register in doubles) is always 2, though in reality it is hardware-dependent, and it should be at least 4 on modern CPUs (see the sketch after this list).
    → Most likely this test is ~ 2x slower solely due to a bug.
  • “n-body” doesn’t use SIMD — neither for C#, nor for Go. This explains the similarly low performance on it for both (+/- JIT time, which actually is included in every C# timing), and also explains why C++ (7.30s) is so far ahead there: its code is heavily optimized w/ SIMD intrinsics. Same for Rust, same for Fortran — i.e. all the top performers rely on SIMD there.
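To illustrate the “mandelbrot” point: Vector<double>.Count is a runtime value that depends on the hardware the JIT targets, so hard-coding 2 leaves part of the SIMD register unused. A minimal sketch:

    // Vector<T> width is hardware-dependent: typically 2 doubles with SSE2, 4 with AVX/AVX2.
    using System;
    using System.Numerics;

    public static class SimdWidthDemo
    {
        public static void Main()
        {
            Console.WriteLine($"Vector<double>.Count = {Vector<double>.Count}");

            // A tiny SIMD loop: add two arrays Vector<double>.Count lanes at a time.
            var a = new double[] { 1, 2, 3, 4, 5, 6, 7, 8 };
            var b = new double[] { 10, 20, 30, 40, 50, 60, 70, 80 };
            var sum = new double[a.Length];
            var width = Vector<double>.Count;

            int i = 0;
            for (; i <= a.Length - width; i += width)
                (new Vector<double>(a, i) + new Vector<double>(b, i)).CopyTo(sum, i);
            for (; i < a.Length; i++)  // scalar tail for the leftovers
                sum[i] = a[i] + b[i];

            Console.WriteLine(string.Join(", ", sum));
        }
    }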

The geometric mean of the speed factor is 1.53x — i.e. the difference is significant.

Just so you know: a few years ago C# was somewhere behind Java on these tests — but that’s mostly because all the tests on CLBG are running on Ubuntu, and Mono (the open-source cross-platform .NET runtime, which used to be ~ 2x slower than .NET Framework) was the only way to run C# code on Ubuntu before .NET Core. Finally, .NET Core itself is noticeably faster than .NET Framework 4.X — even on Windows.

Epilogue

Why do developers switch from one programming language to another? There are tons of factors:

  • Is the language getting more popular?
  • How steep is the learning curve?
  • Will I be able to find a good job requiring this language?
  • Will it provide the desirable performance for my next project?
  • Do I like the syntax?

And once you get more experienced, you definitely add one more item to your own version of this list: the amount of ugly code you have to write every day to solve your typical problems.

That’s why I love C# a lot (F# too, but that’s another story):

  • LINQ and IEnumerable<T> method calls are much shorter than sets of nested “for” loops — moreover, they are similarly fast and easier to read & understand.
  • Generics allow you to have a single implementation of an abstraction that works equally well for any of its type parameters, so you don’t have to maintain a set of handcrafted versions of it that are mostly copies of each other.
  • I obviously can go on and on, but…

I started to look at Go hoping to see something similar. And even though Go has a nearly perfect asynchronous programming model (all of your code is automatically both synchronous and asynchronous), and most of the code we write nowadays is potentially asynchronous — is this enough?

Honestly — no, not at all. If you ignore goroutines, it’s going to be way harder to find other compelling reasons to use the language.

And unfortunately, it’s not just me complaining — there are many others. I highly recommend checking out Go: the Good, the Bad, and the Ugly — unfortunately, I found it only when I had already written this document; otherwise, it could have been significantly shorter. That’s exactly how I feel about Go: it’s a mix of the Good, the Bad, and the Ugly. Two quotes from that post:

… It looks like Go’s design happened in a parallel universe […] where most of what happened in compilers and programming language design in the 90’s and 2000’s never happened.

… On the one hand, I could talk for hours about how horrible Go is. On the other hand, Go is obviously a very good language.

So my current stance on C# vs Go is:

  • For now, .NET is ahead almost everywhere — the only big exception is asynchronous execution.
  • If .NET implements Go-style synchronous-asynchronous execution model, I won’t find a compelling technical reason to look at Go — as you might notice, almost everything else in Go is inferior to .NET, though it certainly has a few other (but much smaller) gems.
  • Similarly, once Go implements generics and lambdas, I’ll definitely start paying much more attention to it. But honestly, it needs so much more…

And if you’re curious, the right picture for this post is:

If you know me, you also know I’m a cat person

Note that it’s not as bad as it might look for Mr. Gopher — if you didn’t notice, he’s the one pointing a gun at Mr. Cat in this picture. So he’s definitely safe, and the cat is clearly a bit scared. Who knows — maybe a few more years, a few more pounds for Mr. Gopher, and he won’t even need a gun :)

The perceived simplicity of Go is certainly attractive for developers. The observation I’ve made while digging through various documentation and examples is that there are lots of true programming gurus in Go land, as well as pure adepts. Posts like this one (its author clearly doesn’t understand what’s unique about Go, but still praises it) made me feel like I’m back in the early days of .NET — I am sure I wrote something similar about .NET in the past :)

It’s a pity if the .NET Core team @ Microsoft — while being so busy with adding thousands of ReadWriteLockUnlockAcquireReleaseCopyPasteAsync overloads — wins every tiny battle, but loses the war by ignoring the opportunity to solve all these issues once and for all.

Similarly, it’s a pity if the Golang team spends a few more years continuing to tolerate the absence of generics, pretty bad allocation / GC performance (“How much did tiny STW pauses cost? Everything.”), and gaps in a fair number of other areas (e.g. it feels like Golang “denies” functional programming by simply avoiding anything related to FP :) ).

Thus if you like the series and/or want to attract Microsoft and Google’s attention to some of the problems highlighted here, please share / upvote / send it to the influencers you know :)

And certainly, thanks for getting to the end of this longread.

P.S. Check out our new project: Stl.Fusion, an open-source library for .NET Core and Blazor striving to be your #1 choice for real-time apps. Its unified state update pipeline is truly unique and mind-blowing.
