I feel that nowadays Rust is the go-to language when you are doing systems programming, but C# is not a bad choice either. With .NET 9 being released in a few weeks we will get NativeAOT (compilation to a single native binary) for x86 (x64 and ARM64 are already available). At work, I write patches for legacy apps and used to need C++ for most of my tasks. Nowadays, I'm doing more and more stuff in C# and enjoying it. For WinAPI there is a fantastic cswin32 [1] project that generates FFI signatures on the fly. And it's fairly simple to extend it for other Windows libraries (I did it for Detours [2], for example). And using pointers or working with native memory blocks is straightforward and intuitive for people with a C/C++ background.
Although NativeAOT sounds cool and it's better than nothing, I don't like that it comes with a bunch of limitations [1]. I would have loved it if you could just use it without any changes; as it stands, I worry that at some point I'll have used something that prevents me from getting it to work and I'll have to figure out which limitation I just walked into. Correct me if I'm wrong.
With source generation, I'd say its biggest limitation is rapidly diminishing. Even ASP.NET Core is embracing it, allowing for better support for JSON deserialization and, eventually, MVC.
Kwpolska 18 hours ago [-]
Those limitations are often obvious. With AOT, you don't have the VM around; you can't load new bytecode or introspect the objects. I would focus on writing working code and try to go AOT close to the end. If it fails and it's not fixable, tough luck, but it still works on the standard runtime.
taberiand 18 hours ago [-]
That's interesting - I would have thought targeting AOT at the outset and then switching away only when the design became incompatible would be more effective, if only because by going for AOT at the end I'd probably have introduced some code or dependency that isn't AOT-compatible and yet is too much work to replace.
metaltyphoon 17 hours ago [-]
> I would have thought targeting aot at the outset and then switching away only when the design became incompatible would be more effective
That’s exactly what I do too.
neonsunset 4 hours ago [-]
Technically speaking, there is a VM (you could also consider the GC to be a part of it, but in HN usage "VM" is an umbrella term that can mean many things), because the type-system facilities are there, which is what allows reflection to work.
The main source of confusion, and why some believe that NativeAOT prohibits this, is libraries that perform unbound reflection in a way that isn't statically analyzable (think accessing a method by a computed string the compiler cannot see, without annotating with attributes the exact members you would like to keep and compile the code for), or libraries that rely on reflection emit. But even reflection emit works for limited scenarios where runtime compilation is not actually required, like constructing a generic method whose argument is a class - there can only be a single generic instantiation over the __Canon argument in this case, which can be emitted at compile time. You can even expect reflection to work faster under NativeAOT - it uses a more modern pure-C# implementation and does not need to deal with the fact that types can be added or removed at runtime.
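For example, a single annotation is enough to keep this sort of thing statically analyzable (a minimal sketch; the class and method names are made up):

using System;
using System.Diagnostics.CodeAnalysis;

static class Invoker
{
    // The attribute tells the trimmer/AOT compiler to keep the public methods
    // of whatever Type flows into this parameter, so GetMethod still finds them.
    public static object? CallStatic(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicMethods)]
        Type type,
        string methodName)
        => type.GetMethod(methodName)?.Invoke(null, null);
}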
zigzag312 4 days ago [-]
I always felt like these features are adding a new programming paradigm to C# that allows you to bypass GC in safe code.
I wish more people would talk about it. Thank you for such an interesting article!
algorithmsRcool 2 days ago [-]
Span and ref-like types enable massive changes to the way memory is managed in C#. You can absolutely write almost GC-less code. I have been tinkering with a toy no-GC database engine in C# based on Direct I/O and some object pooling, and I have been amazed at how far I can get before resorting to GC heap allocations.
DeathArrow 1 days ago [-]
Anything that uses classes and interfaces will be memory managed by the GC. So instead of using lists, dictionaries, IEnumerable, you will have to roll your own.
It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
algorithmsRcool 1 days ago [-]
>Anything that uses classes and interfaces will be memory managed by the GC...
Yes and no.
Yes, almost all of the standard library collections are allocation-heavy and that is still the dominant pattern in C#, so if you want to avoid the GC you need to avoid them and resort to building your own primitives based on Memory/Span. Which sucks.
However, you can use interfaces in a no GC world since you can constrain those interfaces to be structs or ref-structs and the compiler will enforce rules that prevent them from being boxed onto the GC heap.
Also of recent note, the JIT can now automagically convert simple gc-heap allocations into stack allocations if it can trivially prove they don't escape the stack context.
> It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
It is a little-known fact that you can actually swap out the GC of the runtime. So you could plug in a null implementation that never collects (at your own peril...)
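If I remember right, the hook is the runtime's standalone GC loader, configured through environment variables; something like this (the custom DLL path is made up):

set DOTNET_GCName=clrgc.dll          (load a GC binary shipped next to the runtime, by file name)
set DOTNET_GCPath=C:\gc\nullgc.dll   (load a custom GC implementation from a path)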
As for a delete operator, you can just roll your own struct based allocation framework that uses IDisposable to reclaim memory. But then you need to deal with all the traditional bugs like use-after-free and double-free and the like.
For me, I think low-GC is the happy medium: avoid the heap in 99% of cases but let the GC keep things airtight.
CraigJPerry 9 hours ago [-]
>> It is a little-known fact that you can actually swap out the GC of the runtime. So you could plug in a null implementation
How do you do this? Just so I can have another tool in my toolshed. Googling got me to an archived repo on GitHub with a sample GC - which is enough, but I wonder if there's something off the shelf.
In Java land, the Epsilon GC (a do-nothing GC) enables a pattern that's occasionally handy in perf-test jobs in CI pipelines for some projects (i.e. run with Epsilon but constrain max memory for the process - CI builds will fail if memory usage increases).
algorithmsRcool 18 hours ago [-]
> As for a delete operator, you can just roll your own struct based allocation framework that uses IDisposable to reclaim memory. But then you need to deal with all the traditional bugs like use-after-free and double-free and the like.
I forgot that there is built-in support for this model using the MemoryManager<T> class [0]. A memory manager is an abstract class that represents a block of memory, possibly including unmanaged memory. It already implements IDisposable, so you can just plug into this.
The Memory<T> struct can optionally point to a MemoryManager<T> instance internally, allowing you to plug your preferred style of allocating and freeing memory into parts of the framework.
There is a little irony in that a MemoryManager<T> is itself a class and therefore managed on the GC heap, but you can defeat this by using ObjectPool<T> to recycle those instances, keeping the allocation count steady-state so it doesn't trigger the GC.
I have used this before (in the toy database i mentioned earlier) to allocate aligned blocks of unmanaged memory.
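Not the exact code from that toy database, but the shape of it looks roughly like this (a sketch over aligned native memory):

using System;
using System.Buffers;
using System.Runtime.InteropServices;

sealed unsafe class NativeMemoryManager<T> : MemoryManager<T> where T : unmanaged
{
    private void* _ptr;
    private readonly int _length;

    public NativeMemoryManager(int length, nuint alignment)
    {
        _length = length;
        _ptr = NativeMemory.AlignedAlloc((nuint)(length * sizeof(T)), alignment);
    }

    public override Span<T> GetSpan() => new(_ptr, _length);

    // Native memory never moves, so pinning is effectively a no-op.
    public override MemoryHandle Pin(int elementIndex = 0) => new((T*)_ptr + elementIndex);
    public override void Unpin() { }

    protected override void Dispose(bool disposing)
    {
        if (_ptr != null) { NativeMemory.AlignedFree(_ptr); _ptr = null; }
    }
}

The inherited Memory property then hands the block to any API that takes Memory<T> or Span<T>.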
> constrain those interfaces to be structs or ref-structs
How?
I know of constraints on generic type parameters, but not how to do this. A cursory search is unhelpful.
neonsunset 1 days ago [-]
I think the comment just meant using generic constraints with structs.
e.g.
interface Foo {
    int Calculate();
}

static void CalculateThing<T>(T impl)
    where T : Foo {
    var num = impl.Calculate() * 2;
    Console.WriteLine(num);
}
Here if you pass a struct that implements 'Foo', 'CalculateThing' will be monomorphized and the dispatch will be zero-cost, same as in Rust.
You can apply additional constraints like `where T: struct` or `allows ref struct`. The latter is a new addition which acts like a lifetime restriction saying that you are not allowed to box T, because it may be a ref struct. Ref structs are, for all intents and purposes, regular structs that can hold so-called "managed references" aka byrefs, which have the syntax 'ref T' and are discussed in detail by the article this submission links to (ref structs can also hold other ref structs - you are not limited in nesting, but you are limited in cyclicality).
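A small sketch of how those constraints compose (the type names are made up):

using System;

interface IParser
{
    int Parse();
}

// Ref structs can implement interfaces as of C# 13.
ref struct Utf8Parser : IParser
{
    private ReadOnlySpan<byte> _data;
    public Utf8Parser(ReadOnlySpan<byte> data) => _data = data;
    public int Parse() => _data.Length; // stand-in logic
}

static class Parsing
{
    // 'allows ref struct' lets T be a ref struct; in exchange the compiler
    // forbids boxing T or letting it escape to the heap inside this method.
    public static int ParseTwice<T>(T parser) where T : IParser, allows ref struct
        => parser.Parse() + parser.Parse();
}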
algorithmsRcool 23 hours ago [-]
Yes, this is what I meant
yarg 22 hours ago [-]
> It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
This breaks the fundamental assumptions built into pretty much every piece of software ever written in the language - it's a completely nonviable option.
Incorporating a borrow checker allows for uncollected code to be incorporated without breaking absolutely everything else at the same time.
neonsunset 1 days ago [-]
Given that ref structs can now be generic arguments and cannot be boxed, you have more ways to enforce at compile time that no boxing occurs. It is true that you have to roll your own collections, but even dispatching on interfaces by making them generic constraints (which is zero-cost) instead of boxing is a good start.
As for delete operator, 'dispose' works well enough. I have a toy native vector that I use for all sorts of one-off tasks:
// A is a shorthand for default allocator, a thin wrapper on top of malloc/realloc/free
// this allows for Zig-style allocator specialization
using var nums = (NVec<int, A>)[1, 2, 3, 4];
nums.Add(5);
...
// underlying pointer is freed at the end of the scope
It is very easy to implement and I assume C and C++ developers would feel right at home, except with better UX.
This retains full compatibility with the standard library through interfaces and being convertible to Span<T>, which almost everything accepts nowadays.
System-provided allocators are slower at small allocations than GC, but Jemalloc easily fixes that.
ComputerGuru 1 days ago [-]
> Given that ref structs can now be generic arguments
I missed this development! That was a big pain working with ref structs when they first came out.
algorithmsRcool 23 hours ago [-]
Ref-structs can also implement interfaces now too. The C# compiler team has been really delivering in this space the last few iterations
DeathArrow 1 days ago [-]
I really mean using existing stuff, without rolling your own:
List<int> nums = [1, 2, 3, 4];
//do stuff with nums
Delete(nums);
neonsunset 1 days ago [-]
Okay, I see where you are coming from. This is a common ask, but it works against the principles that make generational GCs performant. You can't "delete" an object from the heap, because dead objects are not individually deallocated. Instead, live objects are preserved and moved to an older generation, and the memory, now occupied only by dead objects, is immediately made available for subsequent allocations.
In addition, objects that hold references to other objects internally would need an implementation that allows traversing and recursively freeing those references in a statically understood way. This gets nasty quickly, since a List<T> can hold, let's say, strings, which may or may not have other locations referring to them. Memory safety goes out the window for dubious performance wins (not even necessarily wins, since this is where a GC has better throughput).
> Okay, I see where you are coming from. This is a common ask, but it works against the principles that make generational GCs performant.
In my comment I already suggested a context where GC can be turned off. I said: "It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory."
whizzter 1 days ago [-]
And that'd totally break down as soon as some underlying class does something you didn't expect. C++ RAII patterns and Rust's ownership system are required for a very good reason (one the GC sidesteps, at the cost of making all code depend on it); the NVec further up in the thread works because it's an explicit abstraction.
pjmlp 1 days ago [-]
Use the stuff from Marshal and OS interop then, there are even malloc/free variants.
Also there is C++ for that, if the goal is to use C# as C++.
zigzag312 1 days ago [-]
This looks intriguing. Is there anywhere I could see more details about this?
This really is a PoC. You might get better results by using the snippets as inspiration for rolling something tailored to your specific use case.
zigzag312 1 days ago [-]
Thank you! It'll be a fine learning resource.
pjmlp 1 days ago [-]
This kind of feature is C# catching up to what was already possible in languages like Modula-3.
Unfortunately, as usual in computing, we have to travel in huge zig-zag-shaped circles instead of adopting what was right in front of us.
uticus 1 days ago [-]
The biggest difference is the number of people involved and the target. C# is built for collaboration by a large number of people (sometimes of little experience), for everything from Windows GUIs to microservice AWS lambdas.
pjmlp 1 days ago [-]
Of course, the point is that this all traces back to Java being a language originally designed for settop boxes, leaving the features of Oberon/Cedar/Modula-3/Eiffel/... behind, C# being born out of Sun's lawsuit when J++ was the original language for Ext-VOS, WinDev resistance to anything not C and C++, Singularity, Midori, Phoenix, languages like D, Go, Rust gaining attention, and so on and on.
Lots of zig-zags.
I am a firm believer that if languages like Java and C# had been like those languages that predated them, most likely C and C++ would have been even less relevant in the 2010's, and revisions like C++11 wouldn't have been as important as they turned out to be.
uticus 22 hours ago [-]
Well said, I agree
algorithmsRcool 18 hours ago [-]
There is very little new under the sun. It reminds me of the Wheel of Time books: as the wheel turns, we forget the learnings of the previous age and reinvent them for ourselves. Often worse.
johnisgood 1 days ago [-]
Yeah, I feel like we are taking "reinventing the wheel" to a whole new level, and with enough time people forget. Same with Lisp and Forth (i.e. how people just re-implement stuff that was already a thing in those two languages, but perhaps under a different name).
jason_oster 24 hours ago [-]
As completely off topic as my response will be, I'll at least keep to the theme of this thread. I was reminded the other day of an article that called out React and Flux for reinventing the Windows 1.0 dispatch architecture, and it made me laugh: https://www.bitquabit.com/post/the-more-things-change/
Also, can't miss the opportunity to bring up Graydon's iconic 2010 talk "Technology from the past come to save the future from itself". http://venge.net/graydon/talks/
pjmlp 4 days ago [-]
Many of these features have existed since .NET 1.0, given its scope of languages to support, including C++.
So even those that weren't initially exposed in unsafe mode were available at the MSIL level and could be generated via helper methods making use of "System.Reflection.Emit".
Naturally having them as C# language features is more ergonomic and safer than a misuse of MSIL opcodes.
MarkSweep 1 days ago [-]
There is a runtime (not C#) feature that has been added that is relevant to the article: ref fields in structs. Before these, only certain runtime-blessed types like Span<T> could contain refs directly.
In case anyone is interested, here is the spec about refs in structs and other lifetime features mentioned in the article:
And the latest version is massively relaxing the ref restrictions on generics.
zigzag312 4 days ago [-]
"System.Reflection.Emit" is not compatible with NativeAOT.
Using C/C++/Rust to do the same task is probably more productive than emitting MSIL opcodes, so that solution wasn't really that practical.
But with these new features being more ergonomic and practical, it becomes cost effective to just do it in C# instead of introducing another language.
pjmlp 4 days ago [-]
Yeah, but none of that is the point being discussed, with Native AOT still not available for GUI workloads.
Also, P/Invoke and CCW/RCW do have costs crossing the runtime layer, even if minor compared with other languages.
Rohansi 1 days ago [-]
I believe you can avoid most if not all of the P/Invoke overhead these days by using unmanaged function pointers and not using the automatic marshalling.
neonsunset 1 days ago [-]
Whenever you use [DllImport], the analyzer will nudge you to auto-fix it to [LibraryImport], which source-generates a marshalling stub (if any is needed) that then calls an inner [DllImport] that does not require runtime marshalling. This is very cheap, since the function address gets cached in a readonly static, which then gets baked into the machine code once the JIT produces a Tier-1 compilation of your method.
On NativeAOT, you can instead use "DirectPInvoke", which links against the specified binary and relies on the system loader just like C/C++ code would. You can also statically link the dependency into your binary (if a .lib/.a is available), which turns P/Invokes into direct calls (marshalling, where applicable, and GC frame transitions remain; on that, read below).
Lastly, it is beneficial to annotate short-lived P/Invoke calls with [SuppressGCTransition], which avoids some deoptimizations and GC frame transition calls around interop and makes the calls as cheap as direct calls in C plus a GC poll (a single, usually not-taken branch). With this, the cost of interop effectively evaporates, which is one of the features that makes .NET, as a relatively high-level runtime, so good at systems programming.
Unmanaged function pointers have similar overhead, and identical overhead if you apply [SuppressGCTransition] to them in the same way.
* LibraryImport is not needed if the P/Invoke signature only has primitives, structs that satisfy the 'unmanaged' constraint, or raw pointers, since no marshalling is required for these.
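Put together, it looks roughly like this (the "libc"/getpid pair is just an illustration):

using System.Runtime.InteropServices;

internal static partial class Native
{
    // Primitive-only signature: the source-generated stub has no marshalling
    // work to do. SuppressGCTransition is only safe for short, non-blocking callees.
    [LibraryImport("libc", EntryPoint = "getpid")]
    [SuppressGCTransition]
    internal static partial int GetPid();
}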
zigzag312 1 days ago [-]
Saving this, as I don't remember seeing such a succinct explanation of these attributes before :)
zigzag312 4 days ago [-]
I'm not sure I follow. Where are GUI workloads being discussed in the article?
If anything, the article doesn't talk about MSIL or the CLR, but about C# language features. The CLR is not the only target C# supports.
NativeAOT is supported in Avalonia (cross-platform UI framework), Razor Slices (dynamically render HTML from Minimal APIs) and I think there is also some support for AOT in MonoGame & FNA (game dev frameworks).
However, it's still early and a lot of the ecosystem doesn't support NativeAOT.
pjmlp 4 days ago [-]
No, neither was Native AOT.
Native AOT depends on CLR infrastructure.
vlovich123 1 days ago [-]
> This restriction is central to Rust’s safety guarantees, but C# doesn’t need it. The reason is that Rust has to account for the possibility that a reference may be invalidated at any time.
Is this right? I thought Rust's reason for XOR is deeper & is how it also guarantees memory safety for multi-threaded code too (& not just for reference lifetimes).
lionkor 1 days ago [-]
That's how I understood it, too
yeputons 1 days ago [-]
>How is it possible that I can write efficient and provably-safe code in C# without a degree in type theory?
Because of two things mentioned in the article just below.
> Here we see C#’s first trade-off: lifetimes are less explicit, but also less powerful.
If C# is less powerful, it does not need powerful syntax. One can go a long time without explicit lifetimes in Rust either way; deduction works just fine.
> The escape hatch: garbage collection
If C# is ok with not tracking _all_ lifetimes _exactly_, it does not need powerful syntax. Not an option in Rust, by design.
Basically, not all code is possible to write, and not all code is as efficient.
daxfohl 1 days ago [-]
Both have "unsafe" escape hatches, so all code is possible to write efficiently. Just some cases are harder to prove correct in the type systems of each.
Decabytes 1 days ago [-]
My biggest issue with C# is that it doesn't have a good cross-platform GUI. MAUI dev is too slow and lacks a lot of the features people want, and Avalonia still uses the hybrid AXAML approach that just feels bad. I wish C# had a Flutter-like library that utilized C#'s hot-reload features.
zigzag312 24 hours ago [-]
I agree. Avalonia tries too much to be like WPF, but in my opinion, WPF was two steps forward and one step back (mainly due to XAML).
Using something like Photino (https://www.tryphotino.io) with Blazor can start to feel like an actual good C# cross plat GUI solution but even as a C# truther I agree with you.
I think you'll start seeing a lot more "cross platform C# frameworks" when PanGUI drops:
https://pangui.io
It's a native layout/GUI util by the devs of the mega-popular Odin extension for Unity, and the idea is to directly solve "good native C# GUI lib", with the implementation just being a single shader and an API that is more like Dear ImGui.
I already do iterative hot-reload GUI with Dear ImGui in that engine, so PanGUI will work the same way.
fireant 13 hours ago [-]
PanGUI seems really cool, especially since it's from people developing Unity plugins, so we could get a good GUI for games too.
noveltyaccount 19 hours ago [-]
Photino sounds almost identical to Maui Blazor Hybrid. Do you know the difference?
kkukshtel 17 hours ago [-]
Photino is way less "all-in-one" than Maui Blazor Hybrid. Photino is basically just a cross-platform way to open a native webview on desktop platforms, with hooks to call in and out of it, one of which is .NET.
From there, you can do your front end in absolutely whatever (Svelte, Next, etc.) and your back end is the .NET host doing whatever. So it's basically making a "native webapp", not what Maui Blazor Hybrid does, which is opening a native context and injecting a webview (if I understand it correctly).
neonsunset 23 hours ago [-]
You have Avalonia, Uno Platform and, yes, MAUI. Most cross-platform GUI frameworks are flawed regardless of the language. The ones .NET has are decent, with various ways to approach the UI wiring - you have both XAML and declarative SwiftUI-style (and with MVU pattern too) options. Avalonia even has F# support through FuncUI. There are also plenty of bindings for SDL2, there's GTK# successor Gir.Core.
I wish the comments focused more on the subject of the article which is interesting and under-discussed.
fire_lake 5 hours ago [-]
MAUI is the official one, but no Linux support AFAICT? Such a shame!
two_handfuls 2 hours ago [-]
Things could have gone so much differently if they had committed early on to being cross-platform.
Instead, its growth was stunted and many people avoid it even though it is an excellent language.
DonaldPShimoda 1 days ago [-]
> Instead of throwing an exception, we’ve decided that this function should always return something, even if it’s not in the haystack
The right move at this point would be to use an optional type, surely...
PoignardAzur 1 days ago [-]
Cool article!
Quick nitpick: the find example could return a reference to a static variable, and thus avoid both the heavy syntax and the leaked allocation:
It doesn't even need to be a `static`. It can be a `const`, or just an inlined `&0`, because borrows of consts get promoted to `'static`.
sirwhinesalot 1 days ago [-]
The sort of "borrow checking" C# does is also similar to what the lifetime profile for C++ tries to do to catch bugs (but that's sadly a bit of a shitshow, existing C++ code is too much of a mess).
A related idea is the concept of second-class references, as they exist in Hylo. There the "ref" is not part of the type, but the way they work is very similar.
Lifetimes give you a lot of power but, IMO, languages that do this should choose between either being fully explicit about them, or going "second class" like C# and Hylo and avoiding lifetime annotations entirely.
Eliding them like Rust does can be convenient for experts but is actually a nightmare for newbies. For an example of a language that does explicit lifetimes without becoming unbearable, check out Austral.
Instead of C#'s scope ref solution to having a function accept and return multiple references, another option (in an imaginary language) would be to explicitly refer to the relevant parameters:
ref(b) double whatever(ref Point a, ref Point b) {
    return b.x;
}
nickitolas 24 hours ago [-]
You're correct that, as far as I understand it, the analysis proposed by the C++ Lifetime Safety Profile is similar in many ways. However, I think there are a few important distinctions with these C# features that are not directly related to the analysis: the C++ safety profiles are trying to be backwards compatible with as much C++ code as possible, whereas my understanding is that most of what's talked about in this post is sort of a clean break from idiomatic C#, and does not change the semantics of or add new warnings to any pre-existing code. Another difference is that C++ obviously does not have a built-in runtime GC, so the situations mentioned in this post that get "fixed" by GC heap allocation would remain an issue.
sirwhinesalot 20 hours ago [-]
Yes, I only meant that the way they work (meaning how the "lifetimes" are tracked) is very similar.
C++ has to be "best effort" because it tries to bolt these semantics onto the pre-existing reference types, which were never required to adhere to them. It can catch some obvious bugs but most of the time you'll get a pile of false positives and negatives.
pjmlp 1 days ago [-]
In theory. Unfortunately, the VC++ and clang implementations since 2015 still leave too much off the table for us to fully rely on their lifetime static analysis.
not2b 24 hours ago [-]
For the first example, for C or C++ code gcc will catch the dangling reference/pointer and warn about it, and since my group normally turns on -Wall -Werror, it's an error. However, the analysis is local and not as powerful as Rust's borrow checker.
> Maybe I’m bad at searching for these things, but these changes to C# seem to have gone completely under the radar in places where you read about memory safety and performance.
The reason is that these changes are not aimed at the average Joe developer writing C# microservices. These changes, and the whole Span/ref dialect of C#, are aimed at Dr. Smartass developer writing high-performance C# libraries. It's an advanced-level feature.
Basically it gives you a release-by-release highlight reel of what's changed and why.
I glance at it every release cycle to get an idea of what's coming up. The even-numbered releases are LTS releases, while the odd-numbered releases (like the forthcoming 9) are short-term. But the language and runtime are fairly stable now after the .NET Framework -> .NET Core turbulence, and runtime upgrades are mostly just changing a value in a file to select your target language and runtime version.
snoman 1 days ago [-]
There are some YouTubers that regularly cover these, if you're the type that enjoys watching instead. Nick Chapsas is one I enjoy.
zero0529 1 days ago [-]
Is there an RSS feed for this? I can't find one, unfortunately.
I don't think Span<T> and ref are particularly sophisticated concepts.
Span makes working with large buffers easier for Joe developer, if he could be bothered to spend 20 seconds looking at the examples in the documentation.
DeathArrow 1 days ago [-]
I use low level C# constructs, mostly for fun. At current job we write backend microservices and our business domain doesn't need too much low level stuff.
But before Span and friends you could always use pointers. Spans just make things friendlier.
And C# also has built-in SIMD libraries if you need to do some high-performance arithmetic.
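For example, with System.Numerics (a quick sketch):

using System.Numerics;

static class VectorMath
{
    // Adds two arrays Vector<float>.Count lanes at a time,
    // with a scalar loop for the remainder.
    public static void Add(float[] a, float[] b, float[] dst)
    {
        int i = 0;
        for (; i <= a.Length - Vector<float>.Count; i += Vector<float>.Count)
            (new Vector<float>(a, i) + new Vector<float>(b, i)).CopyTo(dst, i);
        for (; i < a.Length; i++)
            dst[i] = a[i] + b[i];
    }
}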
daxfohl 1 days ago [-]
So why all the interest in rust, comparatively?
My assumption is that since there is a GC, and it is not native code, there are too many use cases where it can't apply but Rust can. Once there is a way for it to compete with Rust in every use case where Rust can be used, maybe there will be more talk.
stackghost 1 days ago [-]
Garbage collection doesn't imply interpreted. Common Lisp has had GC'ed compiled code for decades.
umanwizard 23 hours ago [-]
Or Go, for a more mainstream modern example.
daxfohl 19 hours ago [-]
Yeah, but C# is. Or rather, it's compiled to IL and JITted. Unless there is an AOT thing now that can truly compete with C and Rust - I've been out of the ecosystem for a while.
All this "advance" stuff does is work around the too clever memory model and allow to simply allocate data on stack, something invented ~60 years ago.
WorldMaker 1 days ago [-]
C# always supported stack allocations and things like stack-based pointer operations. It just tagged a lot of it as "unsafe" and/or required the `unsafe` language keyword (and a concomitant security escalation in the old Framework security model, where assemblies that used `unsafe` code needed additional code certificates to be installed into places like the GAC).
The "advanced" stuff is very much about bringing Rust-like lifetimes to the language and moving those powers and capabilities out of the `unsafe` keyword world, by making them much less unsafe in ways similar to how Rust does lifetime/borrow-checking, but adapted to C#/the CLR's classic type system. It's adding the "too clever" memory model of Rust on top of the much simpler memory model of a GC. (GCs are a very simple memory model invented ~70 years ago.)
EVa5I7bHFq9mnYK 9 hours ago [-]
70 years ago the memory model was: keep all data in global static memory locations (Fortran). Then Algol and Pascal came around, implementing the stack memory model.
WorldMaker 3 hours ago [-]
Lisp was released in 1958, same as Algol, and doing some form of garbage collection from that humble beginning.
lowbloodsugar 2 days ago [-]
>This restriction is central to Rust’s safety guarantees, but C# doesn’t need it. The reason is that Rust has to account for the possibility that a reference may be invalidated at any time.
That's not why, though. There are lots of reasons for Rust's safety model, such as allowing for vastly faster code because aliasing can't happen unless both references are read-only, in which case it doesn't matter. There is a lot to Rust's borrow rules that this article misses.
It’s like the article earlier today that was, essentially, “I don’t understand Rust and it would be better if it was Haskell”.
fulafel 1 days ago [-]
Is there some benchmarking work that has quantified this speedup from aliasing guarantees, or is this more of a sufficiently smart compiler [1] thing?
The Sufficiently Smart Compiler is a hypothetical argument used in advocating for a programming language whose naive implementations might tend towards inefficient designs, whereas the aliasing argument is not itself hypothetical but it may not be backed by data. That's a different thing altogether.
Whether the aliasing argument holds water does not affect whether it was used as justification for Rust's design.
You can always try running some benchmarks by building code with -Zmutable-noalias=no.
fulafel 1 days ago [-]
Thanks. If anyone has a link to results handy, I'd be interested. In particular, a lot of changes can be said to be 0-5% where the average is very close to 0...
physicsguy 1 days ago [-]
> aliasing can’t happen unless both references are read only
Other languages have long had aliasing guarantees, Fortran for one. C and C++ have the restrict keyword, though obviously it's a programmer guarantee there and is less safe, since if the caller passes the same memory at an offset, for example, the optimisation is not safe.
vlovich123 1 days ago [-]
> C and C++ have the restrict keyword
I'd say in name only, given that there were numerous aliasing bugs in LLVM that only became visible when Rust tried to leverage it. I suspect similar pitfalls exist in every single C/C++ compiler, because the rules for restrict are not only difficult for users to understand but also difficult to implement correctly.
physicsguy 1 days ago [-]
Left a comment on another reply, but as I said there, there's a big difference in approach because restrict usually only gets sprinkled in on very few functions, but it doesn't mean it's not used.
orangeboats 1 days ago [-]
The restrict keyword is very seldom used in C programs. You could probably remove it and still be able to compile the majority of C programs.
(Otherwise, the Rust project wouldn't have encountered all the bugs related to aliasing analysis in LLVM.)
physicsguy 1 days ago [-]
It's not used much in the sense that it's applied to very few functions, but it is well known and widely used in limited circumstances. The bugs in LLVM (and GCC) are basically because Rust uses aliasing guarantees much more widely than they have ever been used in most C programs.
Take, for example, this:
void add(double *A, double *B, double *C, int N) {
    for(int i = 0; i < N; i++) {
        C[i] = A[i] + B[i];
    }
}
You generally wouldn't find many C developers sprinkling restrict in on functions like this, since that function could be useful to someone using add on two overlapping arrays.
On the other hand, someone writing an ODE solver in a scientific code might write a function like this, where it would never make sense for the memory locations to overlap:
void RHS(double* restrict x, double* restrict xdot, int N, double dt) {
    for(int i = 0; i < N; i++) {
        xdot[i] = -x[i]/dt;
    }
}
In those sorts of circumstances, it's one of the first performance optimisations you might reach for in your C/C++ toolkit, before starting to look at, for example, parallelism. It's been in every simulation or mathematical code base I've worked on in 10+ years at various academic institutions and industry companies.
It's generally true that C/C++ code rarely if ever uses restrict, and that Rust was the first to put any real pressure on those code paths. Once the issue was found it took over a year to fix, and it's incorrect to state that the miscompilation only affected code patterns that would only exist in Rust.
physicsguy 21 hours ago [-]
In the areas I've worked, these sorts of cases would have been picked up by tests, especially by checking for correctness of output between different optimisation levels. But I can concede that that's perhaps not the standard workflow for many C/C++ developers.
int_19h 1 days ago [-]
You should be able to remove `restrict` from any valid C program and still compile and run it with the same result, no? Adding `restrict` can make otherwise valid code UB if there's aliasing, but the reverse shouldn't ever apply.
umanwizard 23 hours ago [-]
I think OP meant you could remove the “restrict” keyword from the language and most programs would still compile fine.
kazinator 1 days ago [-]
The ISO C standard uses the restrict qualifier on some standard library function declarations in such a way that it does nothing.
akira2501 23 hours ago [-]
> The defaults can also be unintuitive: say we wanted to write a method on a struct which returns a reference to one of the struct’s members.
Why would you do that?
> In fact, this is so common that Rust doesn't require you to write the lifetimes explicitly
This is an actual _pattern_? Yikes^2.
0x457 23 hours ago [-]
> Why would you do that?
a getter?
> This is an actual _pattern_? Yikes^2.
wat.
akira2501 21 hours ago [-]
> a getter?
Getters return values. This returns a pointer. So it's an accessor. With unchecked semantics. It's bizarre to me that anyone would use this technique. It's all downside with no upside.
> wat.
I'm expressing surprise that anyone would do this. I'm sure you were capable of understanding that.
two_handfuls 2 hours ago [-]
> Getters return values. This returns a pointer. So it's an accessor. With unchecked semantics.
This isn't exactly a pointer: Rust distinguishes between read-only and mutable ("exclusive") references.
This returns a read-only reference, so it's very much like a getter: you cannot use it to modify the thing it points to.
It's just that it does it without a copy, which matters for performance in some cases.
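Incidentally, C# (the subject of the article) grew the same idiom; a rough sketch:

class Point
{
    private int _x;
    public Point(int x) => _x = x;

    // Returns a read-only alias to the field instead of a copy;
    // callers can observe it but not mutate through it.
    public ref readonly int X => ref _x;
}

A caller binds it with `ref readonly int x = ref p.X;` and gets the view without the copy.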
0x457 24 minutes ago [-]
To be clear, you can add `get_x_mut()` and return mutable/exclusive reference that can be used to mutate the field behind it. It's not the same as a setter in this case.
0x457 20 hours ago [-]
> Getters return values. This returns a pointer. So it's an accessor. With unchecked semantics. It's bizzare to me that anyone would use this technique. It's all downside with no upside.
When I use a getter, I want to see the value of a field. I don't want an owned copy of said value, I just want to look at it, so returning a reference makes _a lot more_ sense than returning a copy. The example uses `i32`, but that's just for readability.
> I'm expressing surprise that anyone would do this. I'm sure you were capable of understanding that.
Yes, and I'm expressing surprise that you think it's bad. I'm not even sure what is bad? Lifetime elision that is well documented and works in a non-ambiguous manner? Using references instead of values? Do we need to memcpy everything now to please you?
akira2501 20 hours ago [-]
> I want to see the value of a field. I don't want an owned copy of said value, I just want to look at it, so returning reference makes _a lot more_ sense than returning a copy.
You can look at it with an owned copy. What is the issue? Is premature optimization the default mode in writing Rust? You don't see the issues with this?
> I'm expressing surprised that you think it's bad
You're surprised that someone simply has a different opinion? Your reaction failed to convey that.
0x457 20 hours ago [-]
> You can look at it with an owned copy. What is the issue? Is premature optimization the default mode in writing Rust? You don't see the issues with this?
uhm, common sense isn't a premature optimization. Avoiding a needless copy is the default mode in writing Rust and any other language.
neonsunset 5 days ago [-]
What a great article, thank you for sharing it!
Animats 1 days ago [-]
So what are the C# compiler lifetime error messages like? If it guesses about lifetimes, the messages have to be good.
lionkor 1 days ago [-]
The article has some examples
raverbashing 1 days ago [-]
> How is it possible that I can write efficient and provably-safe code in C# without a degree in type theory?
Excellent question
And I feel that Rust, by making it explicit, makes it harder and less ergonomic for the developer.
>How is it possible that I can write efficient and provably-safe code in C# without a degree in type theory?
Because Anders Hejlsberg is one of the greatest language architects and the C# team are continuing that tradition.
The only grudge I have against them is that they promised us discriminated unions forever ago and they are still discussing how to implement them. I think that is the greatest feature C# is missing.
For the rest, C# is mostly perfect. It has a good blend of functional and OOP; you can do both low-level and high-level code. You can target either the VM or the bare hardware. You can write all types of code besides systems programming (due to the garbage collector): web backend, web frontend, services, desktop, mobile apps, microcontroller stuff, games and all else. It has very good libraries and frameworks for whatever you need. The experience with Visual Studio is stellar.
And the community is great. For most domains there is generally one library or framework everybody uses, so you not only don't have to ask what to use for a new feature or project, but you also find very good examples and help if you need them.
It feels like a better, more straightforward version of Java, less verbose and less boilerplate-y. That's why .NET never needed its own Kotlin.
Sure, it can't match the speed of Rust or C++ for some tasks because of the garbage collector. But provided you AOT-compile, disable the garbage collector and do manual memory management, it should.
GTP 1 days ago [-]
How does C# fare in terms of portability these days? I checked years ago, and at the time, for non-Windows OSes you had to use Mono. But whether your application was going to work or not also depended on which graphics libraries you were using; e.g. WinForms wasn't going to work on Mono. At the time, C# was presented to me as a better Java, but to me it seemed that Java had true cross-platform compatibility while C# was going to work nicely only on Windows, unless you did some proper planning and checked beforehand which libraries worked with Mono.
WorldMaker 1 days ago [-]
Back in the day Mono had surprisingly good WinForms support on Gtk. It was never going to win awards for pretty and could never hit true compatibility with P/Invoke calls to esoteric Win32 APIs, but it was good enough to run any simple WinForms app you wanted to write for it and ran some of the "popular" ones just fine. (That old Mono WinForms support was recently donated to Wine, which seems like a good home for it.)
.NET has moved to being directly cross-platform today and is great at server/console app cross-platform now, but its support for cross-platform UI is still relatively nascent. The official effort is called MAUI, has mostly but not exclusively focused on mobile, and it is being developed in the open (as open source does) and leaves a lot to be desired, including by its relatively slow pace compared to how fast the server/console app cross-platform stuff moves. The Linux desktop support, specifically, seems constantly in need of open source contributors that it can't find.
You'll see a bunch of mentions of third-party options Avalonia and Uno Platform doing very well in that space, though, so there is interesting competition, at least.
lwansbrough 1 days ago [-]
.NET is totally cross platform these days. Our company develops locally on Windows and deploys to Linux. I’m the only team member on Mac and it works flawlessly.
bluGill 1 days ago [-]
If you only care about Linux on x86-64 or some ARM, it is cross-platform. Getting .NET on FreeBSD is possible, but it isn't supported at all. QNX, from what I can tell, seems like it should be possible, but a quick search didn't find anyone who succeeded (a few asked). My company has an in-house non-POSIX OS useful for some embedded things; forget about it. There are a lot of CPUs out there that it won't work on.
.NET has some small cross-platform abilities, but calling it totally cross-platform is wrong.
lifthrasiir 1 days ago [-]
That's still pretty much cross-platform for all practical purposes, as it supports far more platforms than most software anyway. After all, cross-platform only means that it runs on multiple platforms, not on all possible or even technically feasible platforms. Being cross-platform usually makes porting much easier, but that porting still has to be done somehow.
smaudet 1 days ago [-]
> for all practical purposes
In fairness this ignores a lot of embedded work.
Java gets to cheat here a bit because it has some custom embedded stuff, but it is also not actually running on all CPUs.
lifthrasiir 1 days ago [-]
Embedded stuff requires much more than mere cross-platform anyway ;-)
- Application development targets on iOS and Android use Mono. Android can be targeted as linux-bionic with regular CoreCLR, but it's pretty niche. iOS has experimental NativeAOT support, but nothing is set in stone yet; there are similar plans for Android too.
- ARMv6 requires building the runtime with the Mono target. Building the runtime is actually quite easy compared to other projects of similar size. There are community-published Docker images for .NET 7, but I haven't seen any for .NET 8.
- WASM also uses Mono for the time being. There is a NativeAOT-LLVM experiment which promises significant bundle-size and performance improvements.
- For all the FreeBSD slander, .NET does a decent job of supporting it - it is listed in all sorts of OS enums, dotnet/runtime actively accepts patches to improve its support, and there are contributions and considerations to ensure it does not break. It is present in https://www.freshports.org/lang/dotnet
At the end of the day, I can run .NET on my router with OpenWRT, on a Raspberry Pi 4, and on all my laptops and desktops. This is already quite a good level given it's a completely self-contained platform. It takes a lot of engineering effort to support everything.
giancarlostoro 1 days ago [-]
Weird. I opened a binary I built years ago on Windows in Mono, and it was WinForms and rendered correctly; I think you mean WPF and the later GUI techs. WinForms has rendered nicely on Mono for a while now, I think.
There are a lot of options, but also the latest .NET (not Framework) just runs natively on Linux, Mac and Windows, and there are a few open-source UI libraries, as mentioned by others, like Avalonia, that allow your UI to run on any OS.
GTP 1 days ago [-]
The issue at the time was getting a WinForms application to also run on macOS, and IIRC at the time WinForms wasn't supported outside of Windows. Maybe Mono on Windows is still different from Mono on macOS. Anyway, the situation seems to be much better now. I'm not going to invest time into C# at the moment, since I'm in the Java ecosystem and am currently taking some time to practice Kotlin. But it's good to know that C# is now an option as well.
giancarlostoro 1 days ago [-]
Forgot to include my OS: I ran a .NET (Framework, I think) .exe I built on Windows in 2020, on Linux with Mono in 2024, and it worked and looked (aside from thematic differences) like I remembered it.
GTP 30 minutes ago [-]
Now that I've thought a bit more about it, I think I unlocked a memory of WinForms working on some macOS versions and not others. Maybe it was even just supported on 32-bit versions and not on 64-bit versions. One way or another, the bottom line was that it wasn't going to work on the latest macOS version at the time. But I actually tried it on Linux and it worked there.
Rizu 1 days ago [-]
You should check out AvaloniaUI [0] or Uno Platform [1] if you want to target web/mobile/Windows/Linux/macOS.
If building for the web only, ASP.NET Core runs on Linux servers as well as Windows.
And there's MAUI [2] (not a fan of this); you are better off with the others.
In summary, C# and .NET are cross-platform; third-party developers build better frameworks and tools for other platforms while Microsoft prefers to develop for the Microsoft ecosystem, if you get
> and there's MAUI [2] ( not a fan of this), you are better-off with with the others.
I will say MS has been obsessed with trying to take a slice of the mobile pie.
However, their Xamarin/WPF stuff left so much to be desired and was such a Jenga tower that I totally get the community direction of going with a framework you ostensibly have more control over, versus learning that certain WPF elements are causes of e.g. memory leaks...
dachris 1 days ago [-]
If you're doing ASP.NET Core webdev, it's seamless. Runs in Linux docker containers. Developers in my team have either Windows (Visual Studio) or Linux or Mac (Rider) machines.
CharlieDigital 1 days ago [-]
Very good.
I work at one of the few startups that uses C# and .NET.
Dev machines are all M1/M3 MacBook Pros and we deploy to a mix of x64 and Arm64 instances on GCP and AWS.
I use VS Code on macOS while the rest of the team prefers Rider.
Zero friction for backend work and certainly more pleasant than Node. (We still use Node and JS for all front-end work).
SirMaster 1 days ago [-]
I mean, since Mono it has completely changed. They are about to release .NET 9, which is the 8th version since becoming cross-platform (there was no v4, to reduce confusion with the legacy .NET Framework).
Mono was a third-party glorified hack to get C# to work on other OSes. .NET has been natively cross-platform, with an entirely new compiler and framework, since mid-2016.
GTP 1 days ago [-]
> Mono was a third party glorified hack to get C# to work on other OS.
Indeed, this is what I didn't like back then. Java has official support for other OSes, which C# was lacking at the time. Good to hear that things changed now.
jabwd 1 days ago [-]
Except that the GC makes it exactly not viable for games, and it's one of the biggest problems Unity devs run into. I agree it's a great language, but it's not a do-it-all.
Rohansi 1 days ago [-]
Unity has literally the worst implementation of C# out there right now. Not only is it running Mono instead of .NET (Core), it's also not even using Mono's generational GC (SGen). They have been working on switching from Mono to .NET for years now, because Mono isn't being updated to support newer C# versions, but it will also be a significant performance boost, according to one of the Unity developers in this area [1].
IL2CPP, Unity's C#-to-C++ compiler, does not help with any of this. It just allows Unity to support platforms where JIT is not allowed or possible. The GC is the same whether using Mono or IL2CPP. The performance of the code is also roughly identical to Mono on average, which may be surprising, but if you inspect the generated code you'll see why [2].
They did not - it is still a work in progress with no announced target release date. They also have no current plans to upgrade the GC being used by IL2CPP (their C# AOT compiler).
I could argue the opposite - a GC makes games more viable. "GC is bad" misses too much nuance. It goes like this: the developer very quickly and productively gets a minimum viable game going using naive C# code. Management and investors are happy with the speed of progress. Developers see frame-rate stutters; they learn about hot-path profiling, gen0/1/2/3 GC and how to keep GC extremely fast, stackalloc, array pooling, Span<T>, native alloc - progressively enhancing quickly until there are no problems. These advanced concepts are quick and low-risk to use and are, in the case of many of them, what you would be doing in other languages anyway.
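A sketch of what that progression ends up looking like in a hot path (the function itself is made up):

using System;

static class HotPath
{
    // Per-frame scratch space that never touches the GC heap.
    public static int Sum(ReadOnlySpan<int> input)
    {
        Span<int> scratch = stackalloc int[64];
        int count = Math.Min(input.Length, scratch.Length);
        input[..count].CopyTo(scratch);

        int sum = 0;
        for (int i = 0; i < count; i++) sum += scratch[i];
        return sum;
    }
}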
DeathArrow 1 days ago [-]
The main reason we might see FPS drops in games is not C# and its GC; it's mostly poor usage of the graphics pipeline and a lack of optimization. As a former game developer, I had to do a lot of optimization so our games would run nicely on mobile phones with modest hardware.
C# is plenty fast for game programming.
mycocola 1 days ago [-]
That entirely depends on the game. A recent example is Risk of Rain 2, which had frequent hitches caused by the C# garbage collector. Someone made a mod to fix this by delaying garbage collection until the next load screen - in other words, controlled memory leakage.
The developers of Risk of Rain 2 were undoubtedly aware of the hitches, but it interfered with their vision of the game, and affected users were left with a degraded experience.
It's worth mentioning that when game developers scope out the features of their game, the available tech informs the feature set. Faster languages thus enable a wider feature set.
munificent 1 days ago [-]
> It's worth mentioning that when game developers scope of the features of their game, available tech informs the feature-set. Faster languages thus enable a wider feature-set.
This is true, but developer productivity also informs the feature set.
A game could support all possible features if written carefully in bare metal C. But it would take two decades to finish and the company would go out of business.
Game developers are always navigating the complex boundary around "How quickly can I ship the features I want with acceptable performance?"
Given that hardware is getting faster and human brains are not, I expect that over time higher level languages become a better fit for games. I think C# (and other statically typed GC languages) are a good balance right now between good enough runtime performance and better developer velocity than C++.
Const-me 1 days ago [-]
> frequent hitches caused by the C# garbage collector
They probably create too much garbage. It’s equally easy to slow down C++ code with too many malloc/free functions called by the standard library collections and smart pointers.
The solution is the same for both languages: allocate memory in large blocks, implement object pools and/or arena allocators on top of these blocks.
Neither C++ nor C# standard libraries have much support for that design pattern. In both languages, it’s something programmers have to implement themselves. I did things like that multiple time in both languages. I found that, when necessary, it’s not terribly hard to implement that in either C++ or C#.
smaudet 1 days ago [-]
> In both languages, it’s something programmers have to implement themselves.
I think this is where the difference between these languages and Rust shines - Rust makes these things explicit, while C++/C# hide them behind compiler warnings.
Some things you can't do as a result in Rust, but really, if the Rust community cared it could port those features (make an always-stack type, e.g.).
Codebase velocity is important to consider in addition to dev velocity: if the code needs to be significantly altered to support a concept that was swept under the rug, e.g. object pools/memory arenas, then that feature is less likely to be used and harder to implement later on.
As you say, it's not hard to do or a difficult concept to grasp, once a dev knows about them, but making things explicit is why we use strongly typed languages in the first place...
Rohansi 1 days ago [-]
The GC that Unity is using is extremely bad by today's standards. C# everywhere else has a significantly better GC.
In this game's case, though, they possibly didn't do much optimization to reduce GC pressure by pooling, etc. Unity has very good profiling tools built in for tracking down allocations, so they could have easily found significant sources of GC allocations and reduced them. I work on one of the larger Unity games, and we always profile and try to pool everything to reduce GC hitches.
n4r9 1 days ago [-]
Apparently that was released in 2019? Both C# and .NET have had multiple major releases since then, with significant performance improvements.
SeasonalEnnui 1 days ago [-]
A good datapoint, thanks.
Extending my original point - C# got really good in the last 5 years with regard to performance and low-level features. There might be an entrenched-opinion problem to overcome here.
bluGill 1 days ago [-]
Anybody writing a game should be using a game engine. There are too many things you want in a game that just come "free" with an engine and that you would otherwise spend years writing by hand.
GC can work or not when writing a game engine. However, everybody who writes a significant graphical game engine in a GC language learns how to fight the garbage collector - at the very least delaying GC until between frames. Often they treat the game like safety-critical software: preallocate all buffers so that there is no garbage in the first place (or perhaps minimal garbage). Going without garbage collection might technically use more CPU cycles, but in general they are spread out more over time and so performance is more consistent.
jayd16 1 days ago [-]
The two biggest engines, Unreal and Unity, use a GC. Unity itself uses C#. C# is viable for games but you do need to be aware of the garbage you make.
It's really not that hard to structure a game that pre-allocates and keeps per frame allocs at zero.
alkonaut 1 days ago [-]
It's hard to use C# without creating garbage, but it's not impossible. Usually you'd create some arenas for your important stuff and avoid allocating a lot of transient objects such as enumerators. As long as you generate 0 bytes of allocation each frame, you won't need a GC no matter how many frames you render.
The question is only this: does it become so convoluted that you could just as well have used C++?
jayd16 1 days ago [-]
Enumerators are usually value types as long as you use the concrete type; using the interface will box them. You can work around this by simply using List<T> as the type instead of IEnumerable.
You have to jump through some hoops but it's really not that convoluted and miles easier than good C++.
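Concretely (the boxing happens at the GetEnumerator() call):

using System;
using System.Collections.Generic;

List<int> nums = [1, 2, 3];

// foreach over the concrete List<int> uses the struct List<int>.Enumerator:
// no heap allocation.
foreach (int n in nums) Console.Write(n);

// The same loop through the interface boxes that struct enumerator,
// allocating one object each time the loop starts.
IEnumerable<int> iface = nums;
foreach (int n in iface) Console.Write(n);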
alkonaut 1 days ago [-]
The problem with it is that you don't know. The fundamental language construct "foreach" may or may not allocate, and it's hard for you as a developer to be sure. Many other low-level things do this, or at least used to (events/boxing/params arrays, ...).
I wish there were an attribute in C#, say "[MustNotAllocate]", that fails the compilation on known allocations such as these. It's otherwise very easy to accidentally introduce some tiny allocation into a hot loop, and it only manifests as a tiny pause after 20 minutes of runtime.
ygra 1 days ago [-]
While this would be nice for certain applications, I'm not sure it's really needed in general. Most people writing C# don't have to know about these things, simply because it doesn't matter in many applications. If you're writing performance-critical C#, you're already on a weird language subset and know your way around these issues. Plus, allocations in hot loops stand out very prominently in a profiler.
That being said, .NET includes lots of performance-focused analyzers, directing you to faster and less-allocatey equivalents. There surely also is one on NuGet that could flag foreach over a class-based enumerator (or LINQ usage on a collection that can be foreach-ed allocation-free). If not, it's very easy to write and you get compiler and IDE warnings about the things you care about.
At work we use C# a lot and adding custom analyzers ensuring code patterns we prefer or require has been one of the best things we did this year, as everyone on the team requires a bit less institutional knowledge and just gets warnings when they do something wrong, perhaps even with a code fix to automatically fix the issue.
jayd16 1 days ago [-]
If you know what types you're using, you do know. If you don't know what you're calling, that's a pretty high bar that I'm not sure C++ clears.
alkonaut 21 hours ago [-]
If you are calling SomeType.SomeMethod(a, b, c) then you don't know which combinations of a, b, c could allocate unless you can peek into it or try every combination of a, b and c. So it's hard to know in the general case, even with profiling and testing.
neonsunset 1 days ago [-]
Most often you do know whether an API allocates. It is always possible to microbenchmark it with [MemoryDiagnoser] or profile it with VS or Rider. I absolutely love Rider's dynamic program analysis that just runs alongside me running an application with F5, ideally in release, and then I can go through every single allocation site and decide what to do.
Even when allocations happen, .NET is much more tolerant of allocation traffic than, for example, Go. You can absolutely live with a few allocations here and there. If all you have are small transient allocations, it means that the live object count will be very low, and all such allocations will die in Gen 0. In scenarios like these, you typically see only infrequent sub-500us GC pauses.
Last but not least, .NET is continuously being improved - pretty much all standard library methods already allocate only what's necessary (which can mean nothing at all), and with each release everything that has room for optimization gets optimized further. .NET 9 comes with object stack allocation / escape analysis enabled by default, and .NET 10 will improve this further. Even without this, LINQ for example is well-behaved and can be used far more liberally than in the past.
It might sound surprising to many here but among all GC-based platforms, .NET gives you the most tools to manage the memory and control allocations. There is a learning curve to this, but you will find yourself fighting them much more rarely in performance-critical code than in alternatives.
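For reference, a minimal BenchmarkDotNet sketch (the benchmark bodies are made up); [MemoryDiagnoser] adds an Allocated column so you can see exactly how many bytes each method allocates:

  using BenchmarkDotNet.Attributes;
  using BenchmarkDotNet.Running;

  [MemoryDiagnoser] // reports allocated bytes and GC collections per op
  public class StringBenchmarks
  {
      [Benchmark]
      public string Interpolated() => $"value: {42}";

      [Benchmark]
      public int NoAllocation() => 42.GetHashCode();
  }

  public static class Program
  {
      public static void Main() => BenchmarkRunner.Run<StringBenchmarks>();
  }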
1 days ago [-]
lifthrasiir 1 days ago [-]
At least for Unity, the actual problem lies in IL2CPP and not C#. I have professionally used C# in real-time game servers and GC was never a big issue. (We did use C++ in the lower layer but only for the availability of Boost.Asio, database connectors and scripting engines.)
Rohansi 1 days ago [-]
Unity lets you use either IL2CPP (AOT) or Mono (JIT). Either way it will use the Boehm GC, which is a lot worse than the .NET GC. If your game servers weren't using Unity then they were using a better GC.
lifthrasiir 1 days ago [-]
Yeah, we rolled our own server framework in .NET mainly because we were doing MMOs and there were no off-the-shelf frameworks (including Unity's) explicitly designed for that. In fact, I believe this is still mostly true today.
DeathArrow 1 days ago [-]
> one of the biggest problems Unity devs run into
Unity used Mono. Which wasn't the best C# implementation, performance wise. After Mono changed its license, instead of paying for the license, Unity chose to implement their infamous IL2CPP, which wasn't better.
Now they want to use CoreCLR which is miles better than both Mono and IL2CPP.
pjmlp 1 days ago [-]
Except that it is a matter of developer skill, and of Unity using Mono with its lame GC implementation - as demonstrated by CAPCOM's custom .NET Core fork-based engine used for Devil May Cry on the PlayStation 5.
smaudet 1 days ago [-]
We can all agree Unity is terrible.
Would be nice to hear about a Rust Game engine, though.
pjmlp 23 hours ago [-]
Check Bevy.
bob1029 1 days ago [-]
GC in modern .NET runtime is quite fast. You can get very low latency collections in the normal workstation GC mode.
Also, if you invoke GC intentionally at convenient timing boundaries (I.e., after each frame), you may observe that the maximum delay is more controllable. Letting the runtime pick when to do GC is what usually burns people. Don't let the garbage pile up across 1000 frames. Take it out every chance you get.
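A hedged sketch of what that can look like (the latency mode and generation choice are judgment calls, not gospel):

  using System;
  using System.Runtime;

  // Ask the GC to avoid blocking collections where it can.
  GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;

  void OnFrameEnd()
  {
      // Collect the young generation at a moment we control,
      // non-blocking so the next frame can start immediately.
      GC.Collect(0, GCCollectionMode.Forced, blocking: false);
  }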
Traubenfuchs 1 days ago [-]
> if you invoke GC intentionally at convenient timing boundaries (I.e., after each frame),
Manually invoking GC many times per second is a viable approach?
munificent 1 days ago [-]
It can be, yes.
You're basically trading off worse throughput for better latency.
If you forcibly run the GC every frame, it's going to burn cycles repeatedly analyzing the same still-alive objects over and over again. So the overall performance will suffer.
But it means that you don't have a big pile of garbage accumulating across many frames that will eventually cause a large pause when the GC runs and has to visit all of it.
For interactive software like games, it is often the right idea to sacrifice maximum overall efficiency for more predictable stable latency.
neonsunset 1 days ago [-]
This might be more problematic under CoreCLR than under Unity. Prematurely invoking GC will cause objects that are more likely to die in Gen 0 to be promoted to Gen 1, accumulate there and then die there. This will cause unnecessary inter-generational traffic and will extend object lifetimes longer than strictly necessary. Because live object count is the main factor that affects pause duration, this may be undesirable.
OSU! represents an extreme case where the main game loop runs at 1000hz, so for much more realistic ~120hz you have plenty of options.
smaudet 1 days ago [-]
If you could even just pass an array of objects to be collected or something, this would be so much easier.
Magic, code or otherwise, sucks when the spell/library/runtime has different expectations than your own.
You expect levitation to apply to people, but the runtime only levitates carbon-based life forms. You end up levitating people without their effects (weapons/armor), to the embarrassment of everyone.
There should be no magic, everything should be parameterized, the GC is a dangerous call, but it should be exposed as well (and lots of dire warnings issued to those using it).
munificent 21 hours ago [-]
> If you could even just pass an array of objects to be collected or something
If you have a bunch of objects in an array that you have a reference to such that you can pass it, then, by definition, those objects are not garbage, since they're still accessible to the program.
smaudet 21 hours ago [-]
Yes. Use a WriteOnlyArray or whatever. Semantics aside though...
There should be some middle ground between RAII and invoking Dispose/delete and full blown automatic GC.
bob1029 1 days ago [-]
It has worked well in my prototypes. There is a reason a GC.Collect method is exposed for use.
smaudet 1 days ago [-]
At least for this instance you have a good idea which objects are "ripe" for collection. There should be some way to specify "collect these, my infra objects don't need to be".
zigzag312 1 days ago [-]
Games would need alternative GC optimized for low latency instead of maximum throughput.
AFAIK it has been possible to replace the GC with an alternative implementation for the past few years, but no one has made one yet.
EDIT: Some experimental alternative GC implementations:
Many of the top games in recent years have used it, so you've got a funny definition of "not viable".
johnisgood 1 days ago [-]
Or roll their own, so they used GC in one way or another.
greener_grass 1 days ago [-]
> not viable for games
> Unity devs run into
So it's viable but not perfect
Paradigma11 1 days ago [-]
Doesn't Unity use its own GC or transpile to C++?
Unity on .NET Core is more than a year away, no?
pjmlp 1 days ago [-]
It uses the prehistoric Mono GC. Additionally, it transpiles IL to C++ because many targets, like consoles and iDevices, don't allow a JIT.
They also have Burst, a compiler for a C# subset, which could have been avoided if they were using .NET Core.
rafaelmn 1 days ago [-]
C# has much better primitives for controlling memory layout than Java (structs, reified generics).
BUT it's definitely not a language designed for no-gc so there are footguns everywhere - that's why Rider ships special static analysis tools that will warn you about this. So you can keep GC out of your critical paths, but it won't be pretty at that point. But better than Java :D
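As a sketch of what "controlling memory layout" buys you (the Pixel type is illustrative): an array of such structs is one contiguous block with no per-element object headers, something Java cannot express today.

  using System.Runtime.InteropServices;

  // A value type with an explicit, packed, sequential layout.
  [StructLayout(LayoutKind.Sequential, Pack = 1)]
  public readonly struct Pixel
  {
      public readonly byte R, G, B, A;
      public Pixel(byte r, byte g, byte b, byte a) => (R, G, B, A) = (r, g, b, a);
  }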
neonsunset 1 days ago [-]
> but it won't be pretty at that point
Possibly prettier than C and C++ still. Every time I write something and think "this could use C", I use C, and then I remember why I was using C# for low-level implementation in the first place.
It's not as sophisticated and good of a choice as Rust, but it also offers a "simpler" experience, and in my highly biased opinion pointer-based code with struct abstractions in C# is easier to reason about and compose than the more rudimentary C way of doing it, and less error-prone and difficult to work with than C++. And building the final product takes way less time because the tooling is so much friendlier.
neonsunset 1 days ago [-]
Unity (and its GC) is not representative of the performance you get with CoreCLR.
The article discusses ref lifetime analysis that does have a relationship with the GC, but it does not force you into using one. Byrefs are very special - they can hold references to the stack, to GC-owned memory and to unmanaged memory. You can get a pointer to device-mapped memory and wrap it with a Span<T> and it will "just work".
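A minimal sketch of that (needs AllowUnsafeBlocks; the size is arbitrary):

  using System;
  using System.Runtime.InteropServices;

  unsafe
  {
      // An unmanaged allocation the GC never sees...
      byte* p = (byte*)NativeMemory.Alloc(1024);

      // ...wrapped in a Span<byte> and used like any other memory.
      var span = new Span<byte>(p, 1024);
      span.Clear();
      span[0] = 42;

      NativeMemory.Free(p);
  }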
mafuy 1 days ago [-]
Well, when I worked in Unity I used to compile C# code with the LLVM backend. It was as fast as C++ code would be. So Unity is perhaps an example in favor of C#.
zigzag312 1 days ago [-]
> The only grudge I have against them is they promised us discriminated unions since forever and they are still discussing how to implement it. I think that is the greatest feature C# is missing.
To ease the wait you could try Dunet (discriminated union source generator).
Isn't OneOf more like a type union, and not a tagged/discriminated union?
alkonaut 1 days ago [-]
The DU stuff is enormous once you consider all the corners it touches. Especially with refinements. E.g. in code like
if (s is string or int) {
// what's the type of s here? is it "string | int" ?
}
And not to mention that the BCL should probably get new overloads using DU's for some APIs. But there is at least a work in progress now, after years of nothing.
colejohnson66 1 days ago [-]
One of the claimed benefits of .NET Core was that they could improve the runtime at a much faster pace than .NET Framework did, especially if that meant adding new features or even IL opcodes. And they've done this before, with a big one (IMO) being ref fields in ref structs. Lately, when it comes to developing C#, the language design team has frustratingly been trying to shoehorn everything into the compiler instead of modifying the runtime. Then they say the runtime should be modified to pattern-match what they output. If DUs are to be implemented fully in C#, niches would probably be impossible. This means Optional<T>, when T is a class, would take two words.
factormeta 1 days ago [-]
>The experience with Visual Studio is stellar.
I assume you mean just the Windows Visual Studio? The Mac version is not exactly on par with the Windows one. Yeah, C# is great, but one would need the Windows version of VS (NOT VS Code) to take full advantage of C#. For me that is a deal breaker, when the DX of a language is tied to a proprietary IDE from MS.
uticus 1 days ago [-]
Incidentally JetBrains Rider (competitor IDE) announced as free today for non-commercial, if you’d like to try it out:
[edit: I’ll note I’ve used successfully both Win and Linux]
alkonaut 1 days ago [-]
Mac Visual Studio isn't Visual Studio, it's something else that they stuck the Visual Studio label on. They are about as related as Java and JavaScript (which are, famously, as related as car is to carpet).
metaltyphoon 1 days ago [-]
VS for Mac was sunset a while ago. You either use VSC or Rider (which is now free for non-commercial use).
contextfree 21 hours ago [-]
IIRC the last couple of releases had some new/overhauled features they said were built for both from the same code, so they seemed to be starting down the path of slowly converging them - before they changed their minds and discontinued the Mac version, I guess.
moomin 1 days ago [-]
These days, JetBrains have stepped into the gap with Rider. Rider isn't perfect, but there are definitely people who prefer it to Visual Studio.
dachris 1 days ago [-]
Rider definitely can hold a candle to Visual Studio. In my dev bubble there's about a 50/50 split for C# devs (mostly .NET Core) using VS vs Rider
sfn42 2 hours ago [-]
In mine the VS crew is the minority by far. VS is much worse, especially without ReSharper, and at that point why not just use Rider.
moomin 1 hours ago [-]
I'm sure you can point to many things Rider is better at, but I've found enough sharp edges (including, annoyingly, it not being able to infer types that Roslyn can) that it's not a sell for me. VS is also much faster and supports NCrunch.
sfn42 16 minutes ago [-]
Never heard of NCrunch, but I googled it and it's described as
> the ultimate live testing tool for Microsoft Visual Studio and JetBrains Rider
So it seems at least that part of your critique is outdated.
I'm not sure what you mean about the inference; I've never had any problem with that that I can remember. And it can be a bit slow to start up or analyze a project at first load, but in return it gives much better code completion and such.
MaxGripe 1 days ago [-]
What do you think about F#, then? It already covers everything you mentioned and it has discriminated unions.
DeathArrow 24 hours ago [-]
I absolutely love F#! Two things though: adoption is low, so you kind of can't use it professionally, and most libraries are written in C#, so you kind of use them in a non-idiomatic way.
rafaelmn 1 days ago [-]
Basically C# is like using a Mac and F# is "I use Arch btw."
MaxGripe 1 days ago [-]
However amusing, this comparison doesn’t seem accurate to me. F# may appear more challenging only to someone who is already accustomed to OO programming languages. For people just starting to code, without pre-existing habits, learning F# could be much easier than learning C#
roetlich 1 days ago [-]
But knowing how to program F# effectively requires you to understand both OOP and the functional abstractions. Most libraries in .NET still target C#, so understanding C# syntax and mentally translating it to F# is often required. If your application doesn't require many outside dependencies that might be different, but for most projects F# will require a lot more learning.
miloandmilk 1 days ago [-]
This 1000 times.
I have been learning F# for a while now, and while the functional side that is pushed heavily is a joy to use, anything that touches the 'outside world' is going to have way more resources for C# as far as libraries, official documentation, general information including tutorials etc. You will need to understand and work with those.
So you really do need to understand C# syntax and semantics. Additionally there are a few concepts that seem the same in each language but have different implementations and are not compatible (async vs tasks, records) so there is additional stuff to know about when mentally translating between C# and F#.
I really want to love F# but keep banging my head against the wall. Elixir, while not being typed yet and not as general-purpose, at least allows me to be productive with its outstanding documentation and its abundance of tutorials and books on both the core language and domain-specific applications. It is also very easy to mentally translate Erlang to Elixir and vice versa on the very few occasions needed.
roetlich 1 days ago [-]
> I really want to love F# but keep banging my head against the wall.
Yeah. What's your opinion on Gleam?
miloandmilk 1 days ago [-]
Gleam from a language perspective seems really nice - but it's in its ramp-up stage. I will go through the Gleam Exercism track and keep an eye on it. It would be great if it became the general-purpose, typed, pragmatic functional language with a large ecosystem I am after!
Decabytes 1 days ago [-]
F# provides escape hatches to OOP since it is closely tied to C#.
rafaelmn 1 days ago [-]
It's more of a hassle to get working vs. having everything nicely polished and first party support.
johnisgood 1 days ago [-]
Just to comment on the meme: it used to be Gentoo instead of Arch in my good old days. :P
ZeroClickOk 1 days ago [-]
No partial classes is painful. I know that we have some alternatives, but migrating from C# is doomed without them.
1 days ago [-]
whizzter 1 days ago [-]
I'm hardly missing discriminated unions (sure, exhaustive checking would be nice) anymore since the introduction of switch expressions, which in combination with records handle most practical cases.
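For example, this records-plus-switch-expression pattern (types made up) covers a lot of the DU ground, minus compiler-enforced exhaustiveness:

  using System;

  public abstract record Shape;
  public sealed record Circle(double Radius) : Shape;
  public sealed record Rect(double Width, double Height) : Shape;

  public static class Geometry
  {
      public static double Area(Shape shape) => shape switch
      {
          Circle c => Math.PI * c.Radius * c.Radius,
          Rect r => r.Width * r.Height,
          // The discard arm is the price of no exhaustiveness checking.
          _ => throw new ArgumentOutOfRangeException(nameof(shape))
      };
  }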
CharlieDigital 1 days ago [-]
I find DU's particularly useful with `System.Threading.Channels` because it lets the channel handle multiple types of results in one stream.
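Something along these lines, as a sketch (the WorkResult hierarchy is hypothetical): one channel carries several result shapes, and a switch expression fans them out on the reader side.

  using System;
  using System.Threading.Channels;
  using System.Threading.Tasks;

  // A closed set of message types emulated with records.
  public abstract record WorkResult;
  public sealed record Success(string Output) : WorkResult;
  public sealed record Failure(string Error) : WorkResult;

  public static class Consumer
  {
      public static async Task DrainAsync(ChannelReader<WorkResult> reader)
      {
          await foreach (var result in reader.ReadAllAsync())
          {
              Console.WriteLine(result switch
              {
                  Success s => $"ok: {s.Output}",
                  Failure f => $"failed: {f.Error}",
                  _ => "unknown result"
              });
          }
      }
  }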
algorithmsRcool 1 days ago [-]
> Because Anders Hejlsberg is one of the greatest language architects and the C# team are continuing that tradition.
I wish Anders was still in charge of C# :(
nick_ 1 days ago [-]
Fingers crossed that he'll one day come back and make T#
pif 1 days ago [-]
> For the rest C# is mostly perfect.
No, it isn't. The power of C++ templates is still astronomically far from C# generics.
lifthrasiir 1 days ago [-]
That power is usually considered too reckless to retain and simultaneously too cumbersome to actually use, partly because it was never planned in the first place.
galangalalgol 1 days ago [-]
C++ concepts restrain that recklessness, and people hate them for it. Rust will get most of that power when const generic expressions are finally stabilized. I like C#, but as the article says, it isn't really borrow checking, so you don't get fearless concurrency. If I want to do coarse-grained multithreading (not just a parallel for loop or iterator to speed up math), I only want to use Rust now. Once I stopped having to think around thread-safety issues and data consistency, I didn't want to go back. But for something single-threaded, C# or Go are great and performant.
radicalbyte 8 hours ago [-]
Is Rust really bulletproof though? I've spent a lot of time fixing concurrency bugs (race conditions); it's one of those things I'm very, very good at, but even then it feels like being Indiana Jones dodging the hidden traps.
Haskell promises to solve concurrency, and the Rust boys are always claiming that it's impossible to write buggy code in Rust... and the jump from C/C++/C#/Golang to Rust is much smaller than the jump to Haskell.
galangalalgol 5 hours ago [-]
You can still leak memory via a container. You can still create deadlocks. You can still throw a panic (with a backtrace). It does not solve the halting problem. But if it compiles you will not have any form of undefined behavior. No reading or writing out of bounds. No use after free, no dangling pointers, and no data getting modified across threads in an inconsistent way. If it needs a lock, the compiler will tell you with an error.
radicalbyte 4 hours ago [-]
> If it needs a lock, the compiler will tell you with an error.
Oh that's what I was getting at, that makes Rust pretty much a must-have tool to have in your tool-belt.
phito 10 hours ago [-]
I'm so glad it doesn't. There is absolutely no need for it and when it's used it usually makes a big mess. It goes in the same pile as multiple inheritance.
DeathArrow 24 hours ago [-]
I don't miss that power. I remember having to modify some template heavy code and it brought me to the verge of madness. :)
jayd16 1 days ago [-]
What about the built in Roslyn source generator stuff? Is that sufficiently abusable?
I'm not a templates/macro guy so I'm curious what's missing.
nick_ 1 days ago [-]
There are pros and cons. C++ templates can't be reified at run-time like C# generics can.
int_19h 24 hours ago [-]
FWIW for C# this also requires having a JIT (to handle generic virtual methods).
neonsunset 23 hours ago [-]
GVM dispatch is notoriously slow(-ish), yeah. But it does not require a JIT - otherwise it wouldn't work with NativeAOT :) (The latter can also auto-seal methods and unconditionally devirtualize few-implementation members, which does a good job; guarded devirtualization with a JIT does this even better, however.)
int_19h 23 hours ago [-]
I remember when this feature was specifically not available with NativeAOT.
It's good that it is now, but how can it be implemented in a way that has truly separate instantiations of generics at runtime, when calls cross assembly boundaries? There's no single good place to generate a specialization when the virtual method body is in one assembly while the type parameter passed to it is a type in another assembly.
neonsunset 22 hours ago [-]
> how can it be implemented in a way that has truly separate instantiations of generics at runtime, when calls cross assembly boundaries
There are no assembly boundaries under NativeAOT :)
Even with JIT compilation - the main concern, and what requires special handling, are collectible assemblies. In either case it just JITs the implementation. The cost comes from the lookup - you have to look up a virtual member implementation and then specific generic instantiation of it, which is what makes it more expensive. NativeAOT has the definitive knowledge of all generic instantiations that exist, since it must compile all code and the final binary does not have JIT.
roetlich 1 days ago [-]
Yes, another plus point of C#.
Sorry for the snark, but I do think C# compile times are just barely acceptable for me, so I'm happy they aren't adding more heavy compile-time features.
pif 1 days ago [-]
> For the rest C# is mostly perfect.
No! It misses "typedef", both at module API level and within generics.
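To be fair, using aliases cover part of the typedef use case (and since C# 12 they can name almost any type), but there is still no parameterized alias - a sketch of the gap:

  // A file-scoped alias for a closed generic type has long been legal:
  using IntLookup = System.Collections.Generic.Dictionary<string, int>;

  // But a generic alias is still not expressible:
  // using Lookup<T> = System.Collections.Generic.Dictionary<string, T>; // not valid C#

  IntLookup counts = new() { ["a"] = 1 };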
1 days ago [-]
bob1029 5 days ago [-]
> Maybe I’m bad at searching for these things, but these changes to C# seem to have gone completely under the radar in places where you read about memory safety and performance. Maybe it’s just because the language additions have happened super slowly, or maybe the C# and Rust communities have so little overlap that there aren’t enough people who program in both languages to notice the similarities.
If you are looking at this through the lens of HN, I think much of this can be attributed to a certain ideological cargo cult that actively seeks to banish any positive sentiment around effective tools. You see this exact same thing with SQL providers, web frameworks, etc. If the tool is useful but doesn't have some ultra-progressive ecosystem around it (i.e., costs money or was invented before the average HN user's DOB), you can make a winning bet that talking about it will result in negative karma outcomes.
Everyone working in enterprise software development has known about the power of this language for well over a decade. But, you won't find a single YC startup that would admit to using it.
zdragnar 1 days ago [-]
Well, Common Lisp, some Schemes, and Prolog tend to get a good deal of praise here, even if their commercial utilization is lower. OCaml, F# and Clojure tend to get a good deal of favorable comments as well.
I suspect it is less about cargo culting, and more about two separate things:
First, the tooling for C# and really anything dotnet has been awful on any OS other than Windows until fairly recently. Windows is (to be blunt) a very unpopular OS in every development community that isn't dotnet.
Second, anything enterprise is worth taking with a skeptical grain of salt; "enterprise" typically gets chosen for commercial support contracts, vendor lock-in, or astronaut architects over-engineering everything to fit best practices from 20 years ago. Treating "big businesses run on it" as a virtue is akin to saying that Oracle software is amazing, or that WordPress engineering is amazing because so many websites run on it. Popularity and quality are entirely orthogonal.
I suppose there is probably another reason, which is the cluster fuck that has been the naming and churn of dot net versions for several years. ASP.NET, then core, then the core suffix got dropped at version 5, even though not everything was cross platform... So much pointless confusion.
jeroenhd 1 days ago [-]
C# is an incredible language with a past haunted by Microsoft only making it useful on Windows and expensive licenses for really good editors. It's come a long way, but I don't blame people for thinking of it as "Microsoft Java".
My only issue with many of the improvements in C# is that all of them are optional for backwards compatibility reasons. People who don't know or don't care about new language features can still write C# like it's 2004 and all of the advantages of trying to modernize go out of the window. That means that developers often don't see the need to learn any of the new features, which makes it hard for projects to take advantage of the language improvements.
alkonaut 1 days ago [-]
Hard agree on the backwards compatibility. It appears to be some law of nature that "our next compiler must compile 20 year old code".
Instead of new platform libs and compilers simply defaulting to some reasonable cutoff date and saying "You need to install an ancient compiler to build this".
There is nothing that prevents me from building my old project with an older set of tools. If I want to make use of newer features then I'm happy to continuously update my source code.
Clubber 1 days ago [-]
>It appears to be some law of nature that "our next compiler must compile 20 year old code".
Some examples of companies/products not implementing backwards compatibility are Delphi and Angular. Both are effectively dead. .NET Core wasn't backwards compatible with .NET Framework, but MS created .NET Standard to bridge that gap. .NET Standard lets people write code that runs on both .NET Core and .NET Framework. It's not perfect, but apparently it was good enough.
Companies usually won't knowingly adopt a technology that will be obsoleted in the future and require a complete rewrite. That's a disaster.
alkonaut 1 days ago [-]
But that's .NET, not C#. Language and platforms are different. Libraries must be compatible (because you don't know if your library will be consumed in a newer app).
But the compiler only consumes syntax (C#11, C#12, C#13 and so on), so I don't see why the compiler that eats C#13 necessarily must swallow C#5 without modification.
Clubber 1 days ago [-]
They did a breaking change in a recent C# where nullable objects must be postfixed with a ?, so old code is:
public Patient Patient { get; set; }
The same thing with modern code would be
public Patient? Patient { get; set; }
Because with the new C#, reference types are by default not nullable. Fortunately there is a compiler flag to turn this off, but it's on by default in new projects.
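A minimal sketch of how this surfaces (note it is a warning, not an error; Patient is the type from the example above):

  #nullable enable
  public class Visit
  {
      // Warning CS8618: non-nullable property may be left null after construction.
      public Patient Patient { get; set; }

      // Explicitly nullable: no warning, but callers must check for null.
      public Patient? PreviousPatient { get; set; }
  }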
As a guy who has worked in C# since 2005, a breaking change would make me pretty irate. Backwards compatibility has its benefits.
What issues do you have with backwards compatibility?
alkonaut 24 hours ago [-]
NRT wasn't really breaking, as it's a warning which you control at the top level. But there have been some real breaking changes in edge cases; they are pretty few and far between, though.
I think the language could be better if it was always structured in the best way possible, rather than in the best compatible way.
A class library example (which is contrary to what I said earlier about .NET compatibility vs C# compatibility) is that it was a massive mistake to let double.ToString() use the current culture rather than the invariant culture.
It should change either to require passing a culture always (a breaking API change) or to use InvariantCulture (a behaviour change requiring code changes to keep the old behavior).
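For anyone who hasn't been bitten by this, a minimal demonstration:

  using System;
  using System.Globalization;

  CultureInfo.CurrentCulture = new CultureInfo("de-DE");

  Console.WriteLine(1.5.ToString());                             // "1,5" - depends on ambient culture
  Console.WriteLine(1.5.ToString(CultureInfo.InvariantCulture)); // "1.5" - stable everywhere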
Clubber 21 hours ago [-]
>a massive mistake to let double.ToString() use the current culture rather than the invariant culture.
I would imagine that's a carryover from the Win32/Client-Server days when that would have been a better choice.
Is that annoying? Yeah. Is it annoying enough to force companies to collectively spend billions looking through their decades-old codebases for double.ToString() and adding culture arguments? Also keep in mind this is a runtime issue, so the time to fix would be much greater than if it were a compile-time issue. I would say no.
That's a great idea (and after the fact, much better than changing the API). On day 1 it should have been easy though.
thijsvandien 1 days ago [-]
> Delphi
Just the move to Unicode (i.e. from 2007 to 2009) took some work, but otherwise I can't think of any intentional breaking changes...? In fact, it's one of the most stable programming environments I know of – granted, in part because of being a little stagnant (but not dead).
Clubber 1 days ago [-]
I seem to recall some in Delphi 4, but it's been forever.
magicalhippo 19 hours ago [-]
Ah yes, the version released in 1998. Let's ignore the 26 years since then...
I've been using Delphi since Delphi 3. The only really breaking change I can recall was the Unicode switch. And that was just a minor blip really. Our 300kloc project at work took a couple of days to clean up the compiler errors and it's been Unicode-handling ever since. It's file integration and database heavy, so lots of string manipulation.
Most of my hobby projects didn't need any code changes.
In fact, the reason Delphi was late to the Unicode party was precisely because they spent so much time designing it to minimize impact on legacy code.
Not saying there hasn't been some cases, but the developers of Delphi have had a lot of focus on keeping existing code running fine. We have a fair bit of code in production that is decades old, some before y2k, and it just keeps on ticking without modification as we upgrade Delphi to newer versions.
Clubber 16 hours ago [-]
>Let's ignore the 26 years since then...
The market has been ignoring Delphi for that long. It probably peaked with D5; once they changed their name from Borland to Inprise, it was over.
I hear it's still somewhat popular in Eastern European countries, but I heard that several years ago.
lomase 1 days ago [-]
You don't need to rewrite an old .NET project to compile it on a new machine.
But it's also not a trivial task.
devjab 1 days ago [-]
> Everyone working in enterprise software development has known about the power of this language for well over a decade.
I think it depends on location. In my part of the world .NET is something which lives in middle-sized, often stagnating companies. Enterprise around here is married to the JVM, and they even tend to use more TypeScript on the backend than C#. I'm not going to defend the merits of that in any way; that is just the way of things.
That being said, I do get the impression that HN does know that Rust isn't seeing much adoption as a general-purpose language. So I wouldn't count C# out here, considering how excellent it has become since the transition into Core as the main .NET. I say this as an absolute C# hater, by the way: I spent a decade with it and I never want to work with it again. (After decades of SWE I have fun with Python, C/Zig, JS/TS, and no other language.)
DeathArrow 1 days ago [-]
> Enterprise around here is married to the JVM and they even tend to use more Typescript on the backend than C#. I’m not going to defend the merits of that in any way, that is just the way of things.
Many developers already know Java, so it's easier to hire Java developers.
>That being said, I do get the impression that HN does know that Rust isn't seeing much adoption as a general-purpose language. So I wouldn't count C# out here, considering how excellent it has become since the transition into Core as the main .NET. I say this as an absolute C# hater, by the way: I spent a decade with it and I never want to work with it again. (After decades of SWE I have fun with Python, C/Zig, JS/TS, and no other language.)
I didn't like the old C# and .NET. However, the new one is wonderful and I quite enjoy using it. More than Java or Go. On par with Python, but I wouldn't use Python for now for large web backend applications.
I tried Rust, but for some reason I can't grow to like it. I'd prefer using C or Zig, or even a sane subset of C++ (if such a thing even exists).
devjab 1 days ago [-]
I don't like C# because I don't like the "magic", which is also what makes it special. In that regard I actually think highly of Go's more simplistic approach to everything, from explicit error handling to the flat "class hierarchy". Go isn't as good as C# for a lot of things, and there are no technical reasons for my C# hatred. Well, I guess you could argue that having to fight the "magic" when you run into things it can't handle is technical, but for 99% of things this isn't an issue.
Python is a horrible language, but it's also the language I actually get things built in. I do think it's a little underrated for large web apps, since Django is a true workhorse, but it takes discipline. C is for performance, embedded, and Python/TypeScript libraries, and Zig is basically just better C because of the interoperability. TypeScript is similar to Python for me: I probably wouldn't use it if it wasn't adopted everywhere, but I do like working with it.
We've done some Rust PoCs, but they never really got much traction and nobody really likes it. Plus, I don't think I've ever seen a single Rust job in my area of the world. C/C++ places aren't adopting it; they are choosing Zig. That is, if they're going away from C/C++ at all.
lenkite 1 days ago [-]
Zig looks to be pretty much a work in progress at the moment, with lots of stuff broken. Even if the language is saner to learn than Rust, it cannot be considered ready for production.
EraYaN 2 hours ago [-]
Quite a few C libraries are using it as their build system. Even in "production-ready" libraries.
devjab 1 days ago [-]
I think it works rather well as a drop-in for C, but it’s not like we’re rewriting, or have stopped working with C.
jen20 1 days ago [-]
Where are you? Somewhere with more Zig adoption than Rust outside of the OSS terminal ecosystem sounds pretty interesting!
devjab 1 days ago [-]
A non-Copenhagen part of Denmark, but it's really not that interesting. Almost no adoption isn't that much more impressive than no adoption.
I’m fairly confident that PHP, Python, JS/TS, Java and C/C++ will be what people still work on around here when I retire. Go is the only language which has managed to see some real adoption in my two decade career.
radicalbyte 21 hours ago [-]
> I have fun with Python, C/Zig, JS/TS, and, no other language.
Python is the least fun language currently in use at any scale, pretty much completely down to the lack of a coherent toolchain. When JS has better package management than you do, you know you have a massive problem.
failbuffer 1 days ago [-]
Alternatively (or at least additively), most C# developers don't really need all the new ref/Span features. They're writing line-of-business apps and garbage collection is a fact of life, not some burden to be avoided.
Microsoft probably added these features to push the language into new niches (like improving the story around Unity and going after Arduino/IoT). But it's of little practical appeal to their established base.
Dykam 7 hours ago [-]
As far as I'm aware, it was the development of Kestrel that pushed the introduction of ref/Span etc. Thanks to them, Kestrel has seen quite a large speedup, and it's one of the fastest HTTP servers nowadays. ref/Span allowed them to make the core almost allocation-free, together with using vectorized operations (SIMD) for parsing the request.
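Ordinary application code can reach for the same kind of trick nowadays - e.g. SearchValues (.NET 8+) lets span searches take vectorized paths under the hood (a sketch; the delimiter set is made up):

  using System;
  using System.Buffers;

  public static class RequestScanner
  {
      // Precomputed search set; IndexOfAny against it uses SIMD where possible.
      private static readonly SearchValues<byte> Delimiters =
          SearchValues.Create(" \r\n"u8);

      public static int FindDelimiter(ReadOnlySpan<byte> request) =>
          request.IndexOfAny(Delimiters);
  }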
DeathArrow 1 days ago [-]
> Everyone working in enterprise software development has known about the power of this language for well over a decade. But, you won't find a single YC startup that would admit to using it.
Not sure about that. Maybe there are? If you do web or mobile apps, C# would be an excellent choice. Go would be also an excellent choice for web.
For AI I wouldn't use C#. Even though it has excellent ML libraries, most research and popular stuff is done using Python and PyTorch, so that's what I would choose.
For very low level, I'd take C or Zig. But I don't know many startups who are into very low level stuff.
DeathArrow 1 days ago [-]
I don't get why you are downvoted. It's true that some languages, frameworks, and operating systems are more popular on HN than others. Reasons for this might be complex, and we might enter into very hard and complicated sociological arguments if we try to discuss them.
>Everyone working in enterprise software development has known about the power of this language for well over a decade.
What is an enterprise? Is Google not an enterprise? Is Apple not an enterprise? Is Facebook not an enterprise? What about Netflix, Uber and any other big tech company? Weren't all enterprises start-ups at the beginning?
Does enterprise mean boring old company established long before the invention of Internet, which does old boring stuff, employs old boring people and use old boring languages? I imagine a grandpa with a long white beard staring at some CRTs with Cobol code and SAP Hana.
wongarsu 1 days ago [-]
In this context I'd interpret enterprise to mean any company with over 1000 employees that is not what Silicon Valley calls a tech company. So no Apple, Google or Uber, but the notable players of basically every other industry, from A for accommodation or accounting to W for wood paneling.
But I wouldn't say their choice of C# is due to them being old and boring. If it was that, they'd use Java (as many do). In my eyes choosing C# signals to me that you do want good technology (again, you could have gone with Java), but want that technology to be predictable and boring. A decent rate of improvement with minimal disruption, and the ability to solve a lot of issues with money instead of hiring (lots of professionally maintained paid libraries in the ecosystem).
WD-42 1 days ago [-]
I think it has more to do with C# being a Windows-only programming language for the majority of its life. And guess what, a lot of people don't like Windows.
And don't bring up Mono, etc. It was a dumpster fire then and it has only recently gotten better. It's tough for any tech to shed a very long negative legacy.
Kipters 1 days ago [-]
.NET Core 1.0 was released cross-platform 8 years ago though; there's a ton of new devs for whom it has always been cross-platform.
WD-42 1 days ago [-]
And Rust 1.0 (which this article seems to be comparing against) came out 9 years ago. These things take time.
palmfacehn 1 days ago [-]
Reading this thread is the first I'm learning of this. Even with the enthusiasm expressed here, I'm still suspicious that there will be incompatibilities. Microsoft has a history of doing things a certain way.
sfn42 34 minutes ago [-]
We use .NET for pretty much everything that doesn't run in a browser; our apps run in Linux containers and devs use Windows/Linux/Mac with no issues.
GUI libraries might have some potential for improvement, but I would reach for C# for any task that didn't strictly require a different language.
johnisgood 1 days ago [-]
Yeah, or do you think Ada / SPARK ever comes up when people cry for memory safety? It has existed in Ada / SPARK for ages, but nah...
5 days ago [-]
Ygg2 1 days ago [-]
> ...actively seeks to banish any positive sentiment around effective tools
Effective at what?
Want GC lang with lots of libraries? Use Java.
Want GC free lang with safety? Use Rust.
Otherwise just use C. Or C++.
For me C# lies in this awkward spot. Because of past decisions it will never have quite the ecosystem of Java. And because GC-free and GC libraries mix as well as water and oil, you get somewhat of a library ecosystem bifurcation. Granted, GC-less libraries are almost non-existent.
DeathArrow 1 days ago [-]
I worked in both C# and Java. And other languages. Reducing C# and Java to "GC languages with lots of libraries" doesn't accurately depict either C# or Java.
Since we discuss C# here, it is a good jack of all trades language where you can do almost anything, with decent performance, low boilerplate. It's easy to read, easy to learn and you have libraries for everything you need, excellent documentation and plenty of tutorials and examples. A great thing is that for every task and domain there is a good library or framework that most developers use, so you don't have to ask yourself what to use and how and you find a lot of documentation, tutorials and help for everything.
Java is a bit more boilerplate-y, has a bit fewer features and less ease of use, and has many libraries and frameworks that do the same thing. Had Java been better, Kotlin wouldn't have needed to be invented.
>Want GC lang with lots of libraries? Use Java.
Want a fast to develop and easy to use language? Just use C#.
>Want GC free lang with safety? Use Rust.
Want a language which you can use for almost eveything? Web front-end, web backend, services, microcontrollers, games, desktop and mobile? Use C#.
>Otherwise just use C. Or C++.
Or whatever works for you. Whatever you like, find fun and makes you a productive and happy developer. There is nothing wrong in using C or C++. Or Python. Or Haskell.
lmm 1 days ago [-]
> Java it's a bit more boiler plate-y, had a bit less features and ease of use
Maybe slightly. But the difference is too marginal to change languages over.
> had many libraries and frameworks that did the same thing
Maybe, but it also has many more libraries doing the one obscure thing that you need for your domain.
In a vacuum, C# is a very good language, probably better than Java (as it should be given that it was able to learn some lessons from early Java). But in the wider world of programming languages they really are extremely close to each other, they're suitable for exactly the same problems, and Java has a substantially greater mass of libraries/tooling and probably always will do.
Ygg2 1 days ago [-]
> Since we discuss C# here, it is a good jack of all trades language where you can do almost anything, with decent performance, low boilerplate.
That's basically modern-day Java, with Lombok and other tidbits. Furthermore, if I recall correctly, Java has better performance on web benchmarks than C#.
> Had Java been better, Kotlin wouldn't need to be invented.
Kotlin was invented to make a sugary version of Java, and thus drive more JetBrains sales. It got popular because Oracle got litigious. As someone who's been on the Java train for almost two decades: what usually happens is that if any JVM language becomes too popular, Java tends to reintegrate its features into itself.
> Whatever you like, find fun and makes you a productive and happy developer. There is nothing wrong in using C or C++. Or Python. Or Haskell.
Sure, assuming it fits the domain. Like, don't use Python for kernel dev or Java for some obscure ML/AI when you could use Python.
Kipters 1 days ago [-]
> That's basically modern-day Java, with Lombok and other tidbits.
I wouldn't call Lombok "modern", more like "a terrifyingly hacky way to tackle limitations in the language despite the unwillingness to make the language friendlier" and a far cry from what source generators can do in C#
xxs 1 days ago [-]
Getters and setters are a mediocre design choice, not a limitation. Records have existed for years, too.
e3bc54b2 1 days ago [-]
> Records have existed for years
As a fan of Records, this is a punch to the gut.
The ecosystem is years and years away from using records. Almost every huge decade-plus monolith project is still on Java 8, and those who moved to something newer still can't be liberal with them, because oh look, none of the serialize/deserialize libs can work with them, because everything, to this day, abuses reflection for generating objects, like the giant fucking hack it is.
Apologies for the rant, but I migrated a big project to 21 early this year, am in the middle of migrating another 1M+ line codebase to 21, and the sorry state of records is such a sad thing to witness.
I give it a decade before records are anything but "a fancy feature".
xxs 1 days ago [-]
It's a fair point about being stuck with Java 8, yet the reference was about "modern Java".
With that said, Lombok is not needed in any form there either: use a c-tor with fields and make them public final. If you have too many fields in a class, it's likely a good idea to split it regardless.
In all cases, dumb getters/setters are just public fields that take up more metaspace (and larger bytecode; the latter matters somewhat when it comes to inlining).
Also, if I had 1M LOC and my serialization/communication libraries didn't support whatever I've picked - I'd patch the libraries to support it.
e3bc54b2 1 days ago [-]
> It's a fair point of stuck w/ java8, yet the reference was about "modern java".
And I'm saying that even after writing most of the first project (closing in on 100kLOC now) in 21, I still can't have records where they make the most sense (service boundaries), because libs and the larger ecosystem don't support them.
> Also, if I had 1M LOC and my serialization/communication libraries didn't support whatever I've picked - I'd patch the libraries to support it.
1M LOC in Java land is... not unusual. And if you're talking about patching libs like jackson/jaxb/whatever, my good person, you truly underestimate how much actual work people have (where a Java upgrade is a distant afterthought; I only did it because I wanted to scratch the itch and see how far I could push processes in my org), or how much impact that might have for a drive-by contribution. Updating such core ecosystem libs in Java is no small feat. They are used absolutely everywhere, and even tiny changes require big testing. There is a reason you find Apache libs in every single project: they have matured over the past couple of decades without such drastic rug-pull changes.
xxs 1 days ago [-]
I did say all that (incl the 1M+) stuff coming from personal experience. I have "fixed" all kind of libraries (incl. database drivers, JDK itself, PKI cert loading, netty/jetty, ORM providers). I'd consider jaxb/jackson on the easy side of things.
Also I'd actively remove all apache commons as well. Even in Java8 most of the functionality is redundant.
With all that said, I don't think I'm underestimating the work.
e3bc54b2 1 days ago [-]
You are really part of the cream, and I mean it as an honest compliment.
I am part of the dark matter, although self-initiated java upgrades already put me on the right side of bell-curve.
> Also I'd actively remove all apache commons as well. Even in Java8 most of the functionality is redundant.
I used to think that. Then I had to decompress zip files in memory and selectively process the children. Of course Java has the functionality covered in the stdlib, but it requires so much boilerplate, and commons-compress was such a pleasure that I was done in 10 minutes. The same goes for other Apache libs too.
OTOH, I wholeheartedly agree about Lombok being unjustified curse.
1 days ago [-]
Ygg2 1 days ago [-]
Hack or not, it's been working relatively well for the past decade.
But even if you account for that, the records in Java do most of what Lombok used to do: make a class externally immutable, add default `toString`, `equals` and `hashCode` implementations, and allow read-only access to fields.
> what source generators can do in C#
Having had the displeasure of developing source generators in C# (in Rider), what they do is make code impossible to debug while working on it. On top of relying on an ancient version of netstandard.
I cannot emphasize enough how eldritch working on them is.
While developing, whatever change you write isn't reflected when you inspect the codegen code, and caching can keep old code around even post-recompilation unless you restart the build server, or something.
So whenever you try to debug your codegen libs, you toss a coin:
- heads it shows correct code
- tails it's showing the previous iteration of the codegen code, but the new code is in, so the debugger will at some point get confused
- medusae it's showing the previous iteration of the codegen code, but the new code hasn't been propagated, and you need to do some arcane rituals to make it work.
Hell, even as a user of codegen libs, updating codegen libs caused miscompilation because it was still caching the previous codegen version.
Kwpolska 18 hours ago [-]
> relying on an ancient version of netstandard
They require 2.0, which is the only version that is actually useful, since it supports .NET Framework 4.x.
Ygg2 9 hours ago [-]
You do realize netstandard 2.0 is 7 years old, right? It misses a ton of functionality compared to current .NET - stuff like the [MaybeNull] annotation.
Kwpolska 8 hours ago [-]
It misses a ton of functionality compared to the current .NET (Core), but it does not miss much compared to .NET Framework 4.8. The reason why source generators require it is because they may be run by Visual Studio, which is built on top of the classic .NET Framework. .NET Standard 2.0 is a good trade-off IMO if you need to support both the classic Framework and the modern .NET.
Ygg2 7 hours ago [-]
It missed a ton of functionality compared to C# in 2022 (when I last used it). It's about as old as Java 8.
neonsunset 5 hours ago [-]
If setting <LangVersion> to 12 and maybe doing `dotnet add package PolySharp` was too challenging then the source generators API is probably not for you. It's not a language issue :)
Those are notoriously cheat-y for just about all languages on that list. Any actual project is never going to get close to the efficiency that those numbers would require. Both Java and .NET over-promise by A LOT with the numbers they get there.
Ygg2 1 days ago [-]
Cheat-y in what way? I don't consider microbenchmarks that interesting, especially since C# does have two aces that Java will get in the mid-term - namely SIMD and primitive types.
The TechEmpower benchmarks do seem to reflect the general state of the Java web framework ecosystem, with Vert.x being the hyper-fast web framework and Spring being way slower.
EraYaN 1 days ago [-]
Well, the sheer number of tricks those implementations pull to get the numbers they do - tricks that basically nobody can use in production - is ridiculous. Meaning the numbers they get are wildly optimistic at best and misleading at worst.
If you take the standard template for any of these frameworks (both Java and C# and any other language) and you add authentication etc., the real performance will be 5-10% of the numbers reported in those benchmarks. Have a look through some of the weirdness in the implementations; it's wild (and sometimes educational). The .NET team especially has done stuff specifically to get faster on those benchmarks.
dahauns 1 days ago [-]
> Have a look through some of the weirdness in the implementations it's wild (and sometimes educational). The .NET team especially has done stuff specifically to get faster on those benchmarks.
Could you give me a pointer or two?
I wondered about that myself, especially considering the massive improvement from "old" .NET to the core/kestrel based solutions - but a quick browsing a while ago mostly left me astonished how...well, for lack of a better word, banal most of the code was.
Agreed though, the lack of all kinds of layers like auth, ORM etc. is sadly a drawback of these kinds of benchmarks, if understandable - including them would make comparability even trickier and risks the comparison matrix of systems/frameworks/libraries exploding in size. But yeah, they would be nice datapoints to have. :)
EraYaN 8 hours ago [-]
They don't even use Razor Pages but a custom RazorSlices package to do the templating [1]. Yes, that is much faster because it removes MVC and a ton of infrastructure but it's also kind of gross. Also the use of stuff like UnsafePreferInlineScheduling has some downsides (running application code on the IO thread) and honestly I'd never use in production.
The custom BufferWriter stuff is pretty neat though, although also not really something most people will reach for. And there is more, like the caching of StringBuilders etc.
It also doesn't use the actual HTTP server to build headers; they just dump a string into the socket [2], which feels a bit unrealistic to me. In general, the BenchmarkApplication class [3] is full of non-standard stuff that you'd normally let the framework handle.
I second this sentiment. If anything, I genuinely think that the way .NET's TechEmpower submissions look does more damage than good. BenchmarksGame offers a much better close-up comparison by having much simpler submissions that mostly consist of code that you could totally see yourself write in a regular setting.
.NET is perfectly capable of standing on its own, and if there are specific areas that need improvement - this should serve as a push to further improve DB driver implementations and make ASP.NET Core more robust against various feature configurations. It is already much, much faster than Spring which is a good start, but it could be pushed further.
I'd like to note that neither Go nor Java are viable for high-performance programming in a way that C# is. Neither gives you the required low-level access, performance oriented APIs, ability to use zero-cost abstractions and platform control you get to have in .NET. You can get far with both, but not C++/Rust-far the way you can with C#.
Ygg2 1 days ago [-]
> BenchmarksGame offers a much better close-up comparison by having much simpler submissions that mostly consist of code that you could totally see yourself write in a regular setting
Yeah, except if you are working on web servers, the quality of the framework and its supporting libraries is much more important than what code could theoretically achieve. What is the point of being able to do 200 mph when you only ever drive up to 30 mph?
> Neither gives you the required low-level access, performance oriented APIs, ability to use zero-cost abstractions.
Java is working on high performance abstractions, see Vector API (Simd) and project Valhalla (custom primitive types).
Sure, C# has a theoretical leg up (for which it paid dearly with the backwards incompatibility caused by reified generics), but most libraries don't use low-level access or SIMD optimizations or whatnot.
vips7L 23 hours ago [-]
Most of the Debian benchmarks for C# are cheaty too. They frequently just call out to C libraries rather than use something implemented in the language.
igouy 19 hours ago [-]
No, they do not "frequently just call out to C libraries".
2 of 10 (pidigits and regex-redux) allow use of widely available third party libraries — GMP, PCRE, RE2 — because there were language implementations that simply wrapped those libraries.
vips7L 4 hours ago [-]
20% is frequently. And calling out to C is cheaty.
Look at all the programming language implementations that provide big integers by calling out to GMP. Why would it be "cheating" when available to all and done openly? Libraries matter.
>Most the Debian benchmarks for C# are cheaty too.<
Just name-calling.
lossolo 16 hours ago [-]
You should really create a way to filter solutions with SIMD intrinsics and without them.
igouy 15 hours ago [-]
Like the hand-written vector instructions | "unsafe" section down-page?
It’s not that easy. I assume other programs hide the use in macros and libraries, in ways far beyond my simple understanding.
lossolo 15 hours ago [-]
Cool! I didn't see it before, maybe because I mostly use the language-vs-language view. There is no such section there, but it would be very helpful IMO, instead of clicking through every solution to check which ones are intrinsics-free.
igouy 15 hours ago [-]
> language vs. language
Where there are few enough programs that readers should check that the programs they compare seem appropriate for their purpose.
lossolo 12 hours ago [-]
Well, it's your site, so you can do what you want with it, but I don't believe what you just wrote is logical at all. Sometimes you just want to see, in general, how one language compares to another when one uses intrinsics and the other doesn't, without having to click through every single benchmark across multiple versions to find one without intrinsics. This is just bad UX and a waste of time.
igouy 1 hours ago [-]
> one uses intrinsics and the other doesn't
Why? Did you mean both use intrinsics or both don't?
> Sometimes, you just want to see
As-it-says, look for more-secs less-gz-source-code -- probably less optimised.
neonsunset 10 hours ago [-]
Surely you wouldn’t say that if the language you wanted to win had SIMD API :)
neonsunset 21 hours ago [-]
Not sure which benchmarks you have in mind. Could you provide a link to any of those? .NET's standard library never calls into anything C aside from kernel APIs and certain runtime helpers which is a given.
If you meant BenchmarksGame, then it's the other way around - Java is most competitive where it relies heavily on GC[0], and loses in other areas which require capability to write a low-level implementation[1] that C# provides.
The only places where there are C calls are the pidigits[2] and regex-redux[3] benchmarks, in both of which Java submissions have to import pre-generated or pre-made bindings to GMP and PCRE2 respectively. As do all other languages, with varying degrees of "preparation".
I'm sorry, but calling out to C libraries — regardless of the language — is cheating. Just because everyone in the competition is on steroids doesn't mean you got there legitimately.
Look at all the programming language implementations that provide big integers by calling out to GMP. Why would it be "cheating" when available to all and done openly? Libraries matter.
neonsunset 2 hours ago [-]
This is a strange reply given that sibling comment points out it's only 2 out of 10 benchmarks where this is allowed because all languages end up calling out to the same libraries.
Even if you prohibit PCRE2, the .NET submissions using out-of-box Regex engine end up being about 4 times faster than Java.
Surprisingly, even though .NET's BigInteger is known for its inefficiency, it ends up being more memory efficient and marginally faster at pidigits than a Java submission that does not use GMP. The implementations are not line-by-line equivalent so may not be perfectly representative of performance of each BigInt implementation.
My point being - if you look at the submissions closer, the data gives a much clearer picture and only supports the argument that C# is a very usable language for tasks where one would usually reach for C, C++ or Rust instead.
vips7L 51 minutes ago [-]
It's not a strange reply at all. _All_ of those languages are cheating. Those benchmarks are junk because they don't test implementations written in the language.
>That's basically modern-day Java, with Lombok and other tidbits.
Lombok is exceptionally backwards. You don't need getters/setters, and you should know how to write hashCode (and equals).
...and records exist
Ygg2 1 day ago [-]
The last few Spring projects I worked on that used the latest Java, still used Lombok. Records do exist, but you can't or don't want to always use them.
high_na_euv 1 days ago [-]
The Java ecosystem is more fragmented and inconsistent than C#'s.
C# is a better designed language, and it has really strong tooling, a strong ecosystem and a well designed standard library.
It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
Yes and no. Yes, almost all of the standard library collections are allocation-heavy and that is still the dominant pattern in C#, so if you want to avoid the GC you need to avoid these and resort to building your own primitives based on Memory/Span. Which sucks.
However, you can use interfaces in a no-GC world, since you can constrain the generic parameters implementing those interfaces to be structs or ref structs, and the compiler will enforce rules that prevent them from being boxed onto the GC heap.
Also of recent note, the JIT can now automagically convert simple gc-heap allocations into stack allocations if it can trivially prove they don't escape the stack context.
> It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory.
It is a little known fact that you can actually swap out the GC of the runtime. So you could plug in a null implementation that never collects (at your own peril...)
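If memory serves, the hook for this is the standalone GC loader, driven by environment variables (a sketch; `clrgc.dll` is the standalone flavor that ships with recent runtimes, the custom path is hypothetical):

    DOTNET_GCName=clrgc.dll
    # or point at your own build:
    DOTNET_GCPath=/full/path/to/yourgc.so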
As for a delete operator, you can just roll your own struct based allocation framework that uses IDisposable to reclaim memory. But then you need to deal with all the traditional bugs like use-after-free and double-free and the like.
For me, I think low-gc is the happy medium. Avoid the heap in 99% of cases but let the GC keep things air tight
How do you do this? Just so I can have another tool in my tool shed. Googling got me to an archived repo on GitHub with a sample GC - which is enough, but I wonder if there's something off the shelf.
In Java land, the Epsilon GC (a do-nothing GC) enables a pattern that's occasionally handy in perf-test jobs in CI pipelines for some projects (i.e. run with Epsilon but constrain max memory for the process - CI builds will fail if memory usage increases).
I forgot that there is built in support for this model using the MemoryManager<T> class [0]. A memory manager is an abstract class that represents a block of other memory, including possibly unmanaged memory. It implements IDisposable already so you can just plug into this.
The Memory<T> struct can optionally point internally to a MemoryManager instance, allowing you to plug your preferred style of allocating and freeing memory into parts of the framework.
There is a little irony in that a MemoryManager<T> is itself a class and therefore managed on the GC heap, but you can defeat this by using ObjectPool<T> to recycle those instances, keeping the allocation count steady-state so it doesn't trigger the GC.
I have used this before (in the toy database I mentioned earlier) to allocate aligned blocks of unmanaged memory.
[0] https://learn.microsoft.com/en-us/dotnet/api/system.buffers....
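A minimal sketch of such a manager over unmanaged memory (class name and allocation scheme are illustrative, not from the comment above):

    using System.Buffers;
    using System.Runtime.InteropServices;

    sealed unsafe class UnmanagedMemoryManager<T> : MemoryManager<T> where T : unmanaged
    {
        readonly T* _ptr;
        readonly int _length;

        public UnmanagedMemoryManager(int length)
        {
            _ptr = (T*)NativeMemory.Alloc((nuint)length, (nuint)sizeof(T));
            _length = length;
        }

        public override Span<T> GetSpan() => new(_ptr, _length);

        // Unmanaged memory never moves, so pinning is a no-op.
        public override MemoryHandle Pin(int elementIndex = 0) => new(_ptr + elementIndex);
        public override void Unpin() { }

        protected override void Dispose(bool disposing) => NativeMemory.Free(_ptr);
    }

The inherited Memory property then flows anywhere a Memory<T> is accepted.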
How?
I know of constraints on generic type parameters, but not how to do this. A cursory search is unhelpful.
e.g.
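Something like this (a sketch; 'Foo' and 'CalculateThing' follow the names used below):

    interface Foo { int Value { get; } }

    struct FastFoo : Foo { public int Value => 42; }

    // T is constrained to the interface; for struct type arguments the
    // compiler generates a specialized body, so the interface call is
    // devirtualized:
    static double CalculateThing<T>(T foo) where T : Foo
        => foo.Value * 2.0;

    double r = CalculateThing(new FastFoo()); // T = FastFoo, zero-cost dispatch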
Here if you pass a struct that implements 'Foo', 'CalculateThing' will be monomorphized and the dispatch will be zero-cost, same as in Rust. You can apply additional constraints like `where T: struct` or `allows ref struct`. The last one is a new addition which acts like a lifetime restriction saying you are not allowed to box T because it may be a ref struct. Ref structs are, for all intents and purposes, regular structs that can hold so-called "managed references" aka byrefs, which have the syntax 'ref T' and are discussed in detail by the article this submission links to (ref structs can also hold other ref structs; you are not limited in nesting, but you are limited in cyclicality).
This breaks the fundamental assumptions built into pretty much every piece of software ever written in the language - it's a completely inviable option.
Incorporating a borrow checker allows for uncollected code to be incorporated without breaking absolutely everything else at the same time.
As for delete operator, 'dispose' works well enough. I have a toy native vector that I use for all sorts of one-off tasks:
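Roughly this shape (a sketch, assuming NativeMemory and unmanaged elements; the real thing would add bounds checks and the standard interfaces):

    using System.Runtime.InteropServices;

    unsafe struct NativeVector<T> : IDisposable where T : unmanaged
    {
        T* _items;
        int _count, _capacity;

        public NativeVector(int capacity)
        {
            _items = (T*)NativeMemory.Alloc((nuint)capacity, (nuint)sizeof(T));
            _capacity = capacity;
            _count = 0;
        }

        public void Add(T item)
        {
            if (_count == _capacity)
            {
                _capacity = _capacity == 0 ? 4 : _capacity * 2;
                _items = (T*)NativeMemory.Realloc(_items, (nuint)(_capacity * sizeof(T)));
            }
            _items[_count++] = item;
        }

        // Interops with everything that accepts spans.
        public Span<T> AsSpan() => new(_items, _count);

        // The "delete" operator: freeing is explicit, like free() in C.
        public void Dispose() { NativeMemory.Free(_items); _items = null; }
    }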
It is very easy to implement and I assume C and C++ developers would feel right at home, except with better UX. This retains full compatibility with the standard library through interfaces and by being convertible to Span<T>, which almost everything accepts nowadays.
System-provided allocators are slower at small allocations than the GC, but jemalloc easily fixes that.
I missed this development! That was a big pain working with ref structs when they first came out.
    List<int> nums = [1, 2, 3, 4];
    //do stuff with nums
    Delete(nums);
In addition, objects that hold references to other objects internally would need an implementation that allows traversing and recursively freeing references in a statically understood way. This gets nasty quickly, since a List<T> can hold, let's say, strings, which may or may not have other locations referring to them. Memory safety goes out of the window for dubious performance wins (not even necessarily wins, since this is where GC has better throughput).
I can recommend watching the lectures from Konrad Kokosa that go into detail on how .NET's GC works: https://www.youtube.com/watch?v=8i1Nv7wGsjk&list=PLpUkQYy-K8...
In my comment I already suggested a context where GC can be turned off. I said: "It would be better if the GC can be turned off with a switch and just add a delete operator to manually free memory."
Also there is C++ for that, if the goal is to use C# as C++.
This really is a PoC. You might get better results by using snippets as the inspiration for rolling something tailored to your specific use-case.
Unfortunately, as usual in computing, we have to do huge circles shaped in zig-zag, instead of adopting what was right in front of us.
Lots of zig-zags.
I am a firm believer that if languages like Java and C# had been like those languages that predated them, most likely C and C++ would have been even less relevant in the 2010's, and revisions like C++11 wouldn't have been as important as they turned out to be.
Also, can't miss the opportunity to bring up Graydon's iconic 2010 talk "Technology from the past come to save the future from itself". http://venge.net/graydon/talks/
So even those that weren't initially exposed in unsafe mode were available at the MSIL level and could be generated via helper methods making use of "System.Reflection.Emit".
Naturally having them as C# language features is more ergonomic and safer than a misuse of MSIL opcodes.
In case anyone is interested, here is the spec about refs in structs and other lifetime features mentioned in the article:
https://github.com/dotnet/csharplang/blob/main/proposals/csh...
And here is the big list of ways .NET differs from the publish ECMA spec. Some of these differences represent new runtime features.
https://github.com/dotnet/runtime/blob/main/docs/design/spec...
Using C/C++/Rust to do the same task is probably more productive than emitting MSIL opcodes, so that solution wasn't really that practical.
But with these new features being more ergonomic and practical, it becomes cost effective to just do it in C# instead of introducing another language.
Also, P/Invoke and CCW/RCW do have costs crossing the runtime layer, even if minor when compared with other languages.
On NativeAOT, you can instead use "DirectPInvoke", which links against the specified binary and relies on the system loader just like C/C++ code would. Then you can also statically link and embed the dependency into your binary (if a .lib/.a is available), which will turn pinvokes into direct calls (marshalling, if applicable, and the GC frame transition remain; on that, read below).
Lastly, it is beneficial to annotate short-lived PInvoke calls with [SuppressGCTransition], which avoids some deoptimizations and GC frame transition calls around interop and makes the calls as cheap as a direct call in C plus a GC poll (a single, usually not-taken branch). With this, the cost of interop effectively evaporates, which is one of the features that makes .NET, as a relatively high-level runtime, so good at systems programming.
Unmanaged function pointers have similar overhead, and identical if you apply [SuppressGCTransition] to them in the same way.
* LibraryImport is not needed if the pinvoke signature only has primitives, structs that satisfy the 'unmanaged' constraint, or raw pointers, since no marshalling is required for these.
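A sketch of what such a declaration looks like (the library name and export are hypothetical):

    using System.Runtime.InteropServices;

    static class Native
    {
        // Primitives only, so no marshalling stub is generated; with
        // [SuppressGCTransition] the call is roughly a direct C call
        // plus a GC poll.
        [DllImport("mylib")]
        [SuppressGCTransition]
        public static extern ulong mylib_timestamp();
    }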
If anything, the article doesn't talk about MSIL or the CLR, but about C# language features. The CLR is not the only target C# supports.
NativeAOT is supported in Avalonia (cross-platform UI framework), Razor Slices (dynamically render HTML from Minimal APIs) and I think there is also some support for AOT in MonoGame & FNA (game dev frameworks).
However, it's still early and a lot of the ecosystem doesn't support NativeAOT.
Native AOT depends on CLR infrastructure.
Is this right? I thought Rust's reason for XOR is deeper & is how it also guarantees memory safety for multi-threaded code too (& not just for reference lifetimes).
Because of two things mentioned in the article just below.
> Here we see C#’s first trade-off: lifetimes are less explicit, but also less powerful.
If C# is less powerful, it does not need powerful syntax. One has not needed explicit lifetimes in Rust for a long time either; deduction works just fine.
> The escape hatch: garbage collection
If C# is ok with not tracking _all_ lifetimes _exactly_, it does not need powerful syntax. Not an option in Rust, by design.
Basically, not all code is possible to write, and not all code is as efficient.
I think you'll start seeing a lot more "cross platform C# frameworks" when PanGUI drops: https://pangui.io
It's a native layout/gui util by the devs of the mega-popular Odin extension in Unity, and the idea is to directly solve "good native c# gui lib" with the implementation just being a single shader and an API that is more like DearIMGUI.
I'm also planning on using it in my own small 2D C# engine when it's available: https://github.com/zinc-framework
I already do iterative hot reload GUI with DearImGUI in that engine so PanGUI will work in the same way.
From there, you can do your front end in absolutely whatever (Svelte, Next, etc.) and your back end is the .NET host doing whatever. So it's basically making a "native webapp", not actually doing what Maui Blazor Hybrid does where it's opening a native context and injecting a webview (if I understand it correctly)
I wish the comments focused more on the subject of the article which is interesting and under-discussed.
Instead, its growth was stunted and many people avoid it even though it is an excellent language.
The right move at this point would be to use an optional type, surely...
Quick nitpick: the find example could return a reference to a static variable, and thus avoid both the heavy syntax and the leaked allocation:
https://play.rust-lang.org/?version=stable&mode=debug&editio...
A related idea are the concept of second class references, as exist in Hylo. There the "ref" is not part of the type, but the way they work is very similar.
Lifetimes give you a lot of power but, IMO, I think languages that do this should choose between either being fully explicit about them, or going "second class" like C# and Hylo and avoiding lifetime annotations entirely.
Eliding them like Rust does can be convenient for experts but is actually a nightmare for newbies. For an example of a language that does explicit lifetimes without becoming unbearable, check out Austral.
Instead of C#'s scope ref solution to having a function accept and return multiple references, another option (in an imaginary language) would be to explicitly refer to the relevant parameters:
    ref(b) double whatever(ref Point a, ref Point b) { return b.x; }
C++ has to be "best effort" because it tries to bolt these semantics onto the pre-existing reference types, which were never required to adhere to them. It can catch some obvious bugs but most of the time you'll get a pile of false positives and negatives.
Try

    int* bug() {
        int longlived = 12;
        int* plonglived = &longlived;
        {
            int shortlived = 13;
            plonglived = &shortlived;
        }
        return plonglived;
    }
With gcc -Wall -Werror
The reason is that these changes are not aimed at the average Joe developer writing C# microservices. These changes and the whole Span/ref dialect of C# are aimed at Dr. Smartass developer writing high-performance C# libraries. It's an advanced-level feature.
Basically gives you a release-by-release highlight reel of what's changed and why it's changed.
I glance at it every release cycle to get an idea of what's coming up. The even numbered releases are LTS releases while the odd numbered releases (like the forthcoming 9) are short term. But the language and runtime are fairly stable now after the .NET Framework -> .NET Core turbulence and now runtime upgrades are mostly just changing a value in a file to select your target language and runtime version.
https://learn.microsoft.com/en-us/archive/msdn-magazine/2018...
Span makes working with large buffers easier for Joe developer, if he could be bothered to spend 20 seconds looking at the examples in the documentation.
But before span and friends you could always use pointers. Spans just make things friendlier.
And C# also has built-in SIMD libraries if you need to do some high performance arithmetic stuff.
My assumption is that since there is a GC, and it is not native code, there are too many use cases where it can't apply but Rust can. Once there is a way to have it compete with Rust in every use case where Rust can be used, maybe there will be more talk.
https://learn.microsoft.com/en-us/dotnet/core/deploying/nati...
The "advanced" stuff is very much about bringing Rust-like lifetimes to the language and moving the powers and capabilities outside of the `unsafe` keyword world, by making it much less unsafe in similar ways to how Rust does lifetime/borrow-checking but converted to C#/CLR's classic type system. It's adding the "too clever" memory model of Rust to the much simpler memory model of a GC. (GCs are a very simple memory model invented ~70 years ago.)
That's not why, though. There are lots of reasons for Rust's safety model, such as allowing for vastly faster code because aliasing can't happen unless both references are read-only, in which case it doesn't matter. There is a lot to Rust's borrow rules that this article misses.
It’s like the article earlier today that was, essentially, “I don’t understand Rust and it would be better if it was Haskell”.
[1] https://kidneybone.com/c2/wiki/SufficientlySmartCompiler
Whether the aliasing argument holds water does not affect whether it was used as justification for Rust's design.
TLDR: 0-5% faster with noalias optimizations on.
You can always try running some benchmarks by building code with -Zmutable-noalias=no.
Other languages have long had aliasing rules; Fortran for one. C and C++ have the restrict keyword, though obviously it's a programmer guarantee there and is less safe, since if the user of the function does pass overlapping memory (e.g. the same buffer at an offset), the optimisation is not safe.
I'd say in name only, given that there were numerous aliasing bugs in LLVM that only became visible when Rust tried to leverage it. I suspect similar pitfalls exist in every single C/C++ compiler, because the rules for restrict are not only difficult for users to understand but also difficult to implement correctly.
(Otherwise, the Rust project wouldn't have encountered all the bugs related to aliasing analysis in LLVM.)
Take for e.g. this:
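Something like (a sketch with illustrative names):

    #include <stddef.h>

    /* General-purpose add: callers may legitimately pass overlapping arrays. */
    void add(double *dst, const double *src, size_t n) {
        for (size_t i = 0; i < n; i++)
            dst[i] += src[i];
    }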
You generally wouldn't find many C developers sprinkling restrict in on functions like this, since that function could be useful to someone using add on two overlapping arrays.On the other hand, someone writing a ODE solver in a scientific code might write a function like this, where it would never make sense for the memory locations to overlap:
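Whereas the solver-style function might look like this (again a sketch), where restrict promises the compiler the arrays never overlap:

    #include <stddef.h>

    /* Explicit Euler step: inputs and output never alias by construction. */
    void euler_step(double *restrict y_next, const double *restrict y,
                    const double *restrict dydt, double h, size_t n) {
        for (size_t i = 0; i < n; i++)
            y_next[i] = y[i] + h * dydt[i];
    }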
In those sorts of circumstances, it's one of the first performance optimisations you might reach for in your C/C++ toolkit, before starting to look at e.g. parallelism. It's been in every simulation or mathematical code base I've worked on in 10+ years at various academic institutions and industry companies.

I'm sure there were probably others.
It's generally true that C/C++ code rarely if ever uses restrict, and that Rust was the first to put any real pressure on those code paths. Once the issue was found it took over a year to fix, and it's incorrect to state that the miscompilation only occurred in code patterns that would only exist in Rust.
Why would you do that?
> In fact, this is so common that Rust doesn't require you to write the lifetimes explicitly
This is an actual _pattern_? Yikes^2.
a getter?
> This is an actual _pattern_? Yikes^2.
wat.
Getters return values. This returns a pointer. So it's an accessor. With unchecked semantics. It's bizarre to me that anyone would use this technique. It's all downside with no upside.
> wat.
I'm expressing surprise that anyone would do this. I'm sure you were capable of understanding that.
This isn't exactly a pointer: Rust distinguishes between read-only and mutable ("exclusive") references.
This returns a read-only reference, so it's very much like a getter: you cannot use it to modify the thing it points to.
It's just that it does it without a copy, which matters for performance in some cases.
When I use a getter, I want to see the value of a field. I don't want an owned copy of said value, I just want to look at it, so returning a reference makes _a lot more_ sense than returning a copy. The example uses `i32`, but that's just for readability.
> I'm expressing surprise that anyone would do this. I'm sure you were capable of understanding that.
Yes, and I'm expressing surprise that you think it's bad. I'm not even sure what is bad? Lifetime elision that is well documented and works in a non-ambiguous manner? Using references instead of values? Do we need to memcpy everything now to please you?
You can look at it with an owned copy. What is the issue? Is premature optimization the default mode in writing Rust? You don't see the issues with this?
> I'm expressing surprised that you think it's bad
You're surprised that someone simply has a different opinion? Your reaction failed to convey that.
uhm, common sense isn't a premature optimization. Avoiding a needless copy is the default mode in writing Rust and any other language.
Excellent question
And I feel that Rust, by making it explicit, makes it harder and unergonomic on the developer
[0] https://news.ycombinator.com/item?id=41761346
Because Anders Hejlsberg is one of the greatest language architects and the C# team are continuing that tradition.
The only grudge I have against them is they promised us discriminated unions since forever and they are still discussing how to implement it. I think that is the greatest feature C# is missing.
For the rest C# is mostly perfect. It has a good blend of functional and OOP, you can do both low level and high level code. You can target both the VM or the bare hardware. You can write all types of code beside system programming (due to the garbage collector). But you can do web backend, web front-end, services, desktop, mobile apps, microcontroller stuff, games and all else. It has very good libraries and frameworks for whatever you need. The experience with Visual Studio is stellar.
And the community is great. And for most domains there is generally only one library or framework everybody uses so you not only don't have to ask what to use for a new feature or project, but you also find very good examples and help if you need.
It feels like a better, more straightforward version of Java, less verbose and less boilerplate-y. So that's why .NET didn't need its own Kotlin.
Sure, it can't meet the speed of Rust or C++ for some tasks because of the garbage collector. But provided you AOT compile, disable the garbage collector and do manual memory management, it should.
.NET has moved to being directly cross-platform today and is great at server/console app cross-platform now, but its support for cross-platform UI is still relatively nascent. The official effort is called MAUI, has mostly but not exclusively focused on mobile, and it is being developed in the open (as open source does) and leaves a lot to be desired, including by its relatively slow pace compared to how fast the server/console app cross-platform stuff moves. The Linux desktop support, specifically, seems constantly in need of open source contributors that it can't find.
You'll see a bunch of mentions of third-party options Avalonia and Uno Platform doing very well in that space, though, so there is interesting competition, at least.
.NET has some small cross platform abilities, but calling it totally cross platform is wrong.
In fairness this ignores a lot of embedded work.
Java gets to cheat here a bit because they have some custom embedded stuff, but they are also not actually running on all CPUs.
(Jk I love C#)
Operating Systems: Linux, macOS, Windows, FreeBSD, iOS, Android, Browser
Architectures: x86, x86_64, ARMv6, ARMv7, ARMv8/ARM64, s390x, WASM
Notes:
- Mono as referred to here means https://github.com/dotnet/runtime/tree/main/src/mono which is an actively maintained runtime flavor, alongside CoreCLR.
- Application development targets on iOS and Android use Mono. Android can be targeted as linux-bionic with regular CoreCLR, but it's pretty niche. iOS has experimental NativeAOT support but nothing set in stone yet, there are similar plans for Android too.
- ARMv6 requires building the runtime with the Mono target. Building the runtime is actually quite easy compared to other projects of similar size. There are community-published Docker images for .NET 7 but I haven't seen any for .NET 8.
- WASM also uses Mono for the time being. There is a NativeAOT-LLVM experiment which promises significant bundle size and performance improvements
- For all the FreeBSD slander, .NET does a decent job at supporting it - it is listed in all sorts of OS enums, dotnet/runtime actively accepts patches to improve its support and there are contributions and considerations to ensure it does not break. It is present in https://www.freshports.org/lang/dotnet
At the end of the day, I can run .NET on my router with OpenWRT or a Raspberry Pi 4 and all the laptops and desktops. This is already quite a good level given it's a completely self-contained platform. It takes a lot of engineering effort to support everything.
There's a lot of options, but also the latest of .NET (not Framework) just runs natively on Linux, Mac and Windows, and there's a few open source UI libraries as mentioned by others like Avalonia that allow your UI to run on any OS.
If building for the web, ASP.NET Core runs on Linux servers as well as Windows.
And there's MAUI [2] (not a fan of this); you are better off with the others.
In summary, C# and .NET are cross-platform; third-party developers build better frameworks and tools for other platforms while Microsoft prefers to develop for the Microsoft ecosystem, if you get my drift.
[0] https://avaloniaui.net/ [1] https://platform.uno/ [2] https://learn.microsoft.com/en-us/dotnet/maui/what-is-maui?v...
I will say MS has been obsessed with trying to take a slice of the mobile pie.
However their Xamarin/WPF stuff left so much to be desired and was such a Jenga tower that I totally get the community direction to go with a framework you ostensibly have more control over, vs. learning that certain WPF elements are causes of e.g. memory leaks...
I work at one of the few startups that uses C# and .NET.
Dev machines are all M1/M3 MacBook Pros and we deploy to a mix of x64 and Arm64 instances on GCP and AWS.
I use VS Code on macOS while the rest of the team prefers Rider.
Zero friction for backend work and certainly more pleasant than Node. (We still use Node and JS for all front-end work).
Mono was a third party glorified hack to get C# to work on other OS. .NET has been natively cross platform with an entirely new compiler and framework since mid 2016.
Indeed, this is what I didn't like back then. Java has official support for other OSes, which C# was lacking at the time. Good to hear that things changed now.
IL2CPP, Unity's C# to C++ compiler, does not help for any of this. It just allows Unity to support platforms where JIT is not allowed or possible. The GC is the same if using Mono or IL2CPP. The performance of code is also roughly identical to Mono on average, which may be surprising, but if you inspect the generated code you'll see why [2].
[1] https://xoofx.github.io/blog/2018/04/06/porting-unity-to-cor... [2] https://www.jacksondunstan.com/articles/4702 (many good articles about IL2CPP on this site)
https://discussions.unity.com/t/coreclr-and-net-modernizatio...
C# it's plenty fast for game programming.
The developers of Risk of Rain 2 were undoubtedly aware of the hitches, but it interfered with their vision of the game, and affected users were left with a degraded experience.
It's worth mentioning that when game developers scope out the features of their game, available tech informs the feature set. Faster languages thus enable a wider feature set.
This is true, but developer productivity also informs the feature set.
A game could support all possible features if written carefully in bare metal C. But it would take two decades to finish and the company would go out of business.
Game developers are always navigating the complex boundary around "How quickly can I ship the features I want with acceptable performance?"
Given that hardware is getting faster and human brains are not, I expect that over time higher level languages become a better fit for games. I think C# (and other statically typed GC languages) are a good balance right now between good enough runtime performance and better developer velocity than C++.
They probably create too much garbage. It's equally easy to slow down C++ code with too many malloc/free calls made by the standard library collections and smart pointers.
The solution is the same for both languages: allocate memory in large blocks, implement object pools and/or arena allocators on top of these blocks.
Neither C++ nor C# standard libraries have much support for that design pattern. In both languages, it’s something programmers have to implement themselves. I did things like that multiple time in both languages. I found that, when necessary, it’s not terribly hard to implement that in either C++ or C#.
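A minimal C# sketch of the arena idea - one big native block, bump allocation, wholesale reset (all names are hypothetical):

    using System.Runtime.InteropServices;

    unsafe struct Arena : IDisposable
    {
        byte* _block;
        nuint _capacity, _offset;

        public Arena(nuint capacity)
        {
            _block = (byte*)NativeMemory.Alloc(capacity);
            _capacity = capacity;
            _offset = 0;
        }

        // Hand out slices of the block; individual frees don't exist.
        public Span<T> Allocate<T>(int count) where T : unmanaged
        {
            nuint bytes = (nuint)count * (nuint)sizeof(T);
            if (_offset + bytes > _capacity) throw new InvalidOperationException("arena exhausted");
            var span = new Span<T>(_block + _offset, count);
            _offset += bytes;
            return span;
        }

        public void Reset() => _offset = 0;          // reclaim everything at once
        public void Dispose() => NativeMemory.Free(_block);
    }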
I think this is where the difference between these languages and rust shines - Rust seems to make these things explicit, C++/C# hides behind compiler warnings.
Some things you can't do as a result in Rust, but really if the rust community cares it could port those features (make an always stack type type, e.g.).
Codebase velocity is important to consider in addition to dev velocity: if the code needs to be significantly altered to support a concept it swept under the rug, e.g. object pools/memory arenas, then that feature is less likely to be used and harder to implement later on.
As you say, it's not hard to do or a difficult concept to grasp, once a dev knows about them, but making things explicit is why we use strongly typed languages in the first place...
In this game's case though they possibly didn't do much optimization to reduce GC by pooling, etc. Unity has very good profiling tools to track down allocations built in so they could have easily found significant sources of GC allocations and reduced them. I work on one of the larger Unity games and we always profile and try to pool everything to reduce GC hitches.
GC can work or not work when writing a game engine. However, everybody who writes a significant graphical game engine in a GC language learns how to fight the garbage collector - at the very least delaying GC until between frames. Often they treat the game like safety-critical code: preallocate all buffers so that there is no garbage in the first place (or perhaps minimal garbage). Doing without garbage collection might technically use more CPU cycles, but in general they are spread out more over time and so are more consistent.
It's really not that hard to structure a game that pre-allocates and keeps per frame allocs at zero.
You have to jump through some hoops but it's really not that convoluted and miles easier than good C++.
I wish there was an attribute in C# like "[MustNotAllocate]" which fails the compilation on known allocations such as these. It's otherwise very easy to accidentally introduce some tiny allocation into a hot loop, and it only manifests as a tiny pause after 20 minutes of runtime.
That being said, .NET includes lots of performance-focused analyzers, directing you to faster and less-allocatey equivalents. There surely also is one on NuGet that could flag foreach over a class-based enumerator (or LINQ usage on a collection that can be foreach-ed allocation-free). If not, it's very easy to write and you get compiler and IDE warnings about the things you care about.
At work we use C# a lot and adding custom analyzers ensuring code patterns we prefer or require has been one of the best things we did this year, as everyone on the team requires a bit less institutional knowledge and just gets warnings when they do something wrong, perhaps even with a code fix to automatically fix the issue.
Even when allocations happen, .NET is much more tolerant of allocation traffic than, for example, Go. You can absolutely live with a few allocations here and there. If all you have are small transient allocations, it means the live object count will be very low, and all such allocations will die in Gen 0. In scenarios like these, it is not uncommon to see only infrequent sub-500µs GC pauses.
Last but not least, .NET is continuously being improved - pretty much all standard library methods already allocate only what's necessary (which can mean nothing at all), and with each release everything that has room for optimization gets optimized further. .NET 9 comes with object stack allocation / escape analysis enabled by default, and .NET 10 will improve this further. Even without this, LINQ for example is well-behaved and can be used far more liberally than in the past.
It might sound surprising to many here but among all GC-based platforms, .NET gives you the most tools to manage the memory and control allocations. There is a learning curve to this, but you will find yourself fighting them much more rarely in performance-critical code than in alternatives.
Unity used Mono. Which wasn't the best C# implementation, performance wise. After Mono changed its license, instead of paying for the license, Unity chose to implement their infamous IL2CPP, which wasn't better.
Now they want to use CoreCLR which is miles better than both Mono and IL2CPP.
Would be nice to hear about a Rust Game engine, though.
Also, if you invoke GC intentionally at convenient timing boundaries (I.e., after each frame), you may observe that the maximum delay is more controllable. Letting the runtime pick when to do GC is what usually burns people. Don't let the garbage pile up across 1000 frames. Take it out every chance you get.
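For example, a cheap end-of-frame collection of just the youngest generation (a sketch):

    // End of frame: collect only Gen 0 so transient per-frame garbage
    // never piles up into an expensive full collection at a bad time.
    GC.Collect(0, GCCollectionMode.Forced, blocking: true);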
Manually invoking GC many times per second is a viable approach?
You're basically trading off worse throughput for better latency.
If you forcibly run the GC every frame, it's going to burn cycles repeatedly analyzing the same still-alive objects over and over again. So the overall performance will suffer.
But it means that you don't have a big pile of garbage accumulating across many frames that will eventually cause a large pause when the GC runs and has to visit all of it.
For interactive software like games, it is often the right idea to sacrifice maximum overall efficiency for more predictable stable latency.
It might be more useful to use OSU! approach as a reference: https://github.com/dotnet/runtime/issues/96213#issuecomment-...
OSU! represents an extreme case where the main game loop runs at 1000hz, so for much more realistic ~120hz you have plenty of options.
Magic, code or otherwise, sucks when the spell/library/runtime has different expectations than your own.
You expect levitation to apply to people, but the runtime only levitates carbon-based life forms. You end up levitating people without their effects (weapons/armor), to the embarrassment of everyone.
There should be no magic, everything should be parameterized, the GC is a dangerous call, but it should be exposed as well (and lots of dire warnings issued to those using it).
If you have a bunch of objects in an array that you have a reference to such that you can pass it, then, by definition, those objects are not garbage, since they're still accessible to the program.
There should be some middle ground between RAII and invoking Dispose/delete and full blown automatic GC.
AFAIK it has been possible to replace the GC with alternative implementation for the past few years, but no one has made one yet.
EDIT: Some experimental alternative GC implementations:
https://github.com/kkokosa/UpsilonGC
https://www.codeproject.com/Articles/5372791/Implementing-a-...
> Unity devs run into
So it's viable but not perfect
They also have a C# subset called Burst, which could have been avoided if they were using .NET Core.
BUT it's definitely not a language designed for no-gc so there are footguns everywhere - that's why Rider ships special static analysis tools that will warn you about this. So you can keep GC out of your critical paths, but it won't be pretty at that point. But better than Java :D
Possibly prettier than C and C++ still. Every time I write something and think "this could use C" and then I use C and then I remember why I was using C# for low-level implementation in the first place.
It's not as sophisticated and good of a choice as Rust, but it also offers "simpler" experience, and in my highly biased opinion pointers-based code with struct abstractions in C# are easier to reason about and compose than more rudimentary C way of doing it, and less error-prone and difficult to work with than C++. And building final product takes way less time because the tooling is so much friendlier.
The article discusses ref lifetime analysis that does have relationship with GC, but it does not force you into using one. Byrefs are very special - they can hold references to stack, to GC-owned memory and to unmanaged memory. You can get a pointer to device mapped memory and wrap it with a Span<T> and it will "just work".
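A sketch of that wrapping (the pointer and length are assumed to come from whatever mapped the memory):

    // Any native pointer can be viewed as a Span<T>; you keep bounds
    // checking, but the lifetime of the mapping is your responsibility.
    static unsafe Span<byte> WrapMapped(byte* ptr, int length)
        => new Span<byte>(ptr, length);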
To ease the wait you could try Dunet (discriminated union source generator).
https://github.com/domn1995/dunet
Practical example in a short write up here: https://chrlschn.dev/blog/2024/07/csharp-discriminated-union...
    if (s is string or s is int) {
        // what's the type of s here? is it "string | int" ?
    }
And not to mention that the BCL should probably get new overloads using DU's for some APIs. But there is at least a work in progress now, after years of nothing.
I assume you mean just the Windows Visual Studio? The Mac version is not exactly on par with the Windows one. Yeah, C# is great, but one would need the Windows version of VS (NOT VS Code) to take full advantage of C#. For me that is a deal breaker, when the DX of a language is tied to a proprietary IDE by MS.
https://blog.jetbrains.com/blog/2024/10/24/webstorm-and-ride...
[edit: I’ll note I’ve used successfully both Win and Linux]
So it seems at least that part of your critique is outdated.
I'm not sure what you mean about the inference, I've never had any problem with that that I can remember. And it can be a bit slow to start up or analyze a project at first load but in return it gives much better code completion and such.
I have been learning F# for a while now, and while the functional side that is pushed heavily is a joy to use, anything that touches the 'outside world' is going to have way more resources for C# as far as libraries, official documentation, general information including tutorials etc. You will need to understand and work with those.
So you really do need to understand C# syntax and semantics. Additionally there are a few concepts that seem the same in each language but have different implementations and are not compatible (async vs tasks, records) so there is additional stuff to know about when mentally translating between C# and F#.
I really want to love F# but keep banging my head against the wall. Elixir, while not being typed yet and not being as general-purpose, at least allows me to be productive with its outstanding documentation and the abundance of tutorials and books on both the core language and domain-specific applications. It is also very easy to mentally translate Erlang to Elixir and vice versa on the very few occasions needed.
Yeah. What's your opinion on Gleam?
I wish Anders was still in charge of C# :(
No, it isn't. The power of C++ templates is still astronomically far from C# generics.
Haskell promises to solve concurrency and the Rust boys are always claiming that it's impossible to write buggy code in Rust.. and the jump from C/C++/C#/Golang to Rust is much smaller than to Haskell..
Oh that's what I was getting at, that makes Rust pretty much a must-have tool to have in your tool-belt.
I'm not a templates/macro guy so I'm curious what's missing.
It's good that it is now, but how can it be implemented in a way that has truly separate instantiations of generics at runtime, when calls cross assembly boundaries? There's no single good place to generate a specialization when virtual method body is in one assembly while the type parameter passed to it is a type in another assembly.
There are no assembly boundaries under NativeAOT :)
Even with JIT compilation - the main concern, and what requires special handling, are collectible assemblies. In either case it just JITs the implementation. The cost comes from the lookup - you have to look up a virtual member implementation and then specific generic instantiation of it, which is what makes it more expensive. NativeAOT has the definitive knowledge of all generic instantiations that exist, since it must compile all code and the final binary does not have JIT.
Sorry for the snark, but I do think C# compile times are just barely acceptable for me, so I'm happy they aren't adding more heavy compile-time features.
No! It misses "typedef", both at module API level and within generics.
If you are looking at this through the lens of HN, I think much of this can be attributed to a certain ideological cargo cult that actively seeks to banish any positive sentiment around effective tools. You see this exact same thing with SQL providers, web frameworks, etc. If the tool is useful but doesn't have some ultra-progressive ecosystem around it (i.e., costs money or was invented before the average HN user's DOB), you can make a winning bet that talking about it will result in negative karma outcomes.
Everyone working in enterprise software development has known about the power of this language for well over a decade. But, you won't find a single YC startup that would admit to using it.
I suspect it is less about cargo culting, and more about two separate things:
First, the tooling for C# and really anything dotnet has been awful on any OS other than Windows until fairly recently. Windows is (to be blunt) a very unpopular OS in every development community that isn't dotnet.
Second, anything enterprise is worth taking with a skeptical grain of salt; "enterprise" typically gets chosen for commercial support contracts, vendor lock-in, or astronaut architects over-engineering everything to fit best practices from 20 years ago. Saying that big businesses running on it is a virtue is akin to saying that Oracle software is amazing, or that WordPress engineering is amazing because so many websites run on it. Popularity and quality are entirely orthogonal.
I suppose there is probably another reason, which is the clusterfuck that has been the naming and churn of .NET versions for several years. ASP.NET, then Core, then the Core suffix got dropped at version 5 even though not everything was cross-platform... So much pointless confusion.
My only issue with many of the improvements in C# is that all of them are optional for backwards compatibility reasons. People who don't know or don't care about new language features can still write C# like it's 2004 and all of the advantages of trying to modernize go out of the window. That means that developers often don't see the need to learn any of the new features, which makes it hard for projects to take advantage of the language improvements.
Instead of new platform libs and compilers simply defaulting to some reasonable cutoff date and saying "You need to install an ancient compiler to build this".
There is nothing that prevents me from building my old project with an older set of tools. If I want to make use of newer features then I'm happy to continuously update my source code.
Some examples of companies/products not implementing backwards compatibility are Delphi and Angular. Both are effectively dead. .NET Core wasn't backwards compatible with .NET Framework, but MS created .NET Standard to bridge that gap. .NET Standard allows people to write code in .NET core and will run in .NET Framework. It's not perfect, but apparently it was good enough.
Companies usually won't knowingly adopt a technology that will be obsoleted in the future and require a complete rewrite. That's a disaster.
But the compiler only consumes syntax (C#11, C#12, C#13 and so on), so I don't see why the compiler that eats C#13 necessarily must swallow C#5 without modification.
As a guy who has worked in C# since 2005, a breaking change would make me pretty irate. Backwards compatibility has its benefits.
What issues do you have with backwards compatibility?
As a class library example (which is contrary to what I said earlier about .NET compatibility vs C# compatibility): it was a massive mistake to let double.ToString() use the current culture rather than the invariant culture. It should change either to require passing a culture always (a breaking API change) or to use InvariantCulture (a behaviour change requiring code changes to keep the old behavior).
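The sharp edge in two lines (a sketch):

    using System.Globalization;

    double d = 1.5;
    Console.WriteLine(d.ToString());                              // "1,5" under e.g. a German current culture
    Console.WriteLine(d.ToString(CultureInfo.InvariantCulture));  // always "1.5"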
I would imagine that's a carryover from the Win32/Client-Server days when that would have been a better choice.
Is that annoying? Yea. Is it annoying enough to force companies to collectively spend billions to look through their decades-old codebases for double.ToString() and add culture arguments? Also keep in mind this is a runtime issue, so the time to fix would be much greater than if it were a compile-time issue. I would say no.
Just the move to Unicode (i.e. from 2007 to 2009) took some work, but otherwise I can't think of any intentional breaking changes...? In fact, it's one of the most stable programming environments I know of – granted, in part because of being a little stagnant (but not dead).
I've been using Delphi since Delphi 3. The only really breaking change I can recall was the Unicode switch. And that was just a minor blip really. Our 300kloc project at work took a couple of days to clean up the compiler errors and it's been Unicode-handling ever since. It's file integration and database heavy, so lots of string manipulation.
Most of my hobby projects didn't need any code changes.
In fact, the reason Delphi was late to the Unicode party was precisely because they spent so much time designing it to minimize impact on legacy code.
Not saying there hasn't been some cases, but the developers of Delphi have had a lot of focus on keeping existing code running fine. We have a fair bit of code in production that is decades old, some before y2k, and it just keeps on ticking without modification as we upgrade Delphi to newer versions.
The market has been ignoring Delphi for that long. It probably peaked with D5, once they changed their name from Borland to Inprise, it was over.
I hear it's still somewhat popular in Eastern European countries, but I heard that several years ago.
But it is also not a trivial task.
I think it depends on location. In my part of the world .NET is something which lives in middle-sized, often stagnating companies. Enterprise around here is married to the JVM, and they even tend to use more TypeScript on the backend than C#. I'm not going to defend the merits of that in any way; that is just the way of things.
That being said, I do get the impression that HN does know that Rust isn't seeing much adoption as a general-purpose language. So I wouldn't count C# out here, considering how excellent it has become since the transition into Core as the main .NET. I say this as an absolute C# hater, by the way: I spent a decade with it and I never want to work with it again. (After decades of SWE I have fun with Python, C/Zig, JS/TS, and no other language.)
Many developers already know Java, so it's easier to hire Java developers.
>That being said, I do get the impression that HN does know that Rust isn't seeing much adoption as a general-purpose language. So I wouldn't count C# out here, considering how excellent it has become since the transition into Core as the main .NET. I say this as an absolute C# hater, by the way: I spent a decade with it and I never want to work with it again. (After decades of SWE I have fun with Python, C/Zig, JS/TS, and no other language.)
I didn't like the old C# and .NET. However, the new one is wonderful and I quite enjoy using it. More than Java or Go. On par with Python, but I wouldn't use Python for now for large web backend applications.
I tried Rust, bur for some reason I can't grow to like it. I'd prefer using C or Zig and even a sane subset of C++ (if such thing even exists).
Python is a horrible language, but it's also the language I actually get things built in. I do think it's a little underrated for large web apps since Django is a true workhorse, but it takes discipline. C is for performance, embedded, and Python/TypeScript libraries, and Zig is basically just better C because of the interoperability. TypeScript is similar to Python for me: I probably wouldn't use it if it wasn't adopted everywhere, but I do like working with it.
We’ve done some Rust pocs but it never really got much traction and nobody really likes it. + I don’t think I’ve ever seen a single Rust job in my area of the world. C/C++ places aren’t adopting it, they are choosing Zig. That is if they’re going away from C/C++ at all.
I’m fairly confident that PHP, Python, JS/TS, Java and C/C++ will be what people still work on around here when I retire. Go is the only language which has managed to see some real adoption in my two decade career.
Python is the least fun language currently in use at any scale. Pretty much completely down to the lack of a coherent tool chain. When JS has better package management than you then you know you have a massive problem.
Microsoft probably added these features to push the language into new niches (like improving the story around Unity and going after Arduino/IoT). But it's of little practical appeal to their established base.
Not sure about that. Maybe there are? If you do web or mobile apps, C# would be an excellent choice. Go would be also an excellent choice for web.
For AI I wouldn't use C#. Even though it has excellent ML libraries, most research and popular stuff is done using Python and PyTorch, so that's what I would choose.
For very low level, I'd take C or Zig. But I don't know many startups who are into very low level stuff.
>Everyone working in enterprise software development has known about the power of this language for well over a decade.
What is an enterprise? Is Google not an enterprise? Is Apple not an enterprise? Is Facebook not an enterprise? What about Netflix, Uber and any other big tech company? Weren't all enterprises start-ups at the beginning?
Does enterprise mean a boring old company established long before the invention of the Internet, which does old boring stuff, employs old boring people and uses old boring languages? I imagine a grandpa with a long white beard staring at some CRTs with Cobol code and SAP Hana.
But I wouldn't say their choice of C# is due to them being old and boring. If it was that, they'd use Java (as many do). In my eyes choosing C# signals to me that you do want good technology (again, you could have gone with Java), but want that technology to be predictable and boring. A decent rate of improvement with minimal disruption, and the ability to solve a lot of issues with money instead of hiring (lots of professionally maintained paid libraries in the ecosystem).
And don't bring up Mono, etc.; it was a dumpster fire then and it's only recently gotten better. It's tough for any tech to shed a very long negative legacy.
GUI libraries might have some potential for improvement, but I would reach for C# for any task that didn't strictly require a different language.
Effective at what?
Want GC lang with lots of libraries? Use Java.
Want GC free lang with safety? Use Rust.
Otherwise just use C. Or C++.
For me C# lies in this awkward spot. Because of past decisions it will never have quite the ecosystem of Java. And because GC-free and GC libraries mix as well as water and oil, you get somewhat of a library ecosystem bifurcation. Granted, GC-less libraries are almost non-existent.
Since we discuss C# here, it is a good jack of all trades language where you can do almost anything, with decent performance, low boilerplate. It's easy to read, easy to learn and you have libraries for everything you need, excellent documentation and plenty of tutorials and examples. A great thing is that for every task and domain there is a good library or framework that most developers use, so you don't have to ask yourself what to use and how and you find a lot of documentation, tutorials and help for everything.
Java is a bit more boilerplate-y, has a bit fewer features and less ease of use, and has many libraries and frameworks that do the same thing. Had Java been better, Kotlin wouldn't have needed to be invented.
> Want GC lang with lots of libraries? Use Java.

Want a fast-to-develop and easy-to-use language? Just use C#.

> Want GC free lang with safety? Use Rust.

Want a language which you can use for almost everything? Web front-end, web backend, services, microcontrollers, games, desktop and mobile? Use C#.

> Otherwise just use C. Or C++.

Or whatever works for you. Whatever you like, find fun and makes you a productive and happy developer. There is nothing wrong with using C or C++. Or Python. Or Haskell.
Maybe slightly. But the difference is too marginal to change languages over.
> had many libraries and frameworks that did the same thing
Maybe, but it also has many more libraries doing the one obscure thing that you need for your domain.
In a vacuum, C# is a very good language, probably better than Java (as it should be given that it was able to learn some lessons from early Java). But in the wider world of programming languages they really are extremely close to each other, they're suitable for exactly the same problems, and Java has a substantially greater mass of libraries/tooling and probably always will do.
That's basically modern-day Java, with Lombok and other tidbits. Furthermore, if I recall correctly, Java has better performance on web benchmarks than C#.
> Had Java been better, Kotlin wouldn't need to be invented.
Kotlin was invented to make a sugary version of Java, and thus drive more JetBrains sales. It got popular because Oracle got litigious. As someone who's been on the Java train for almost two decades: what usually happens is that if any JVM language becomes too popular, Java tends to reintegrate its features into itself.
> Whatever you like, find fun and makes you a productive and happy developer. There is nothing wrong in using C or C++. Or Python. Or Haskell.
Sure, assuming it fits the domain. Like, don't use Python for kernel dev or Java for some obscure ML/AI when you could use Python.
I wouldn't call Lombok "modern", more like "a terrifyingly hacky way to tackle limitations in the language despite the unwillingness to make the language friendlier" and a far cry from what source generators can do in C#
As a fan of Records, this is a punch to the gut.
The ecosystem is years and years away from using records. Almost every huge decade-plus monolith project is still on Java 8; those who moved to something newer still can't be liberal with them, because oh look, none of the serialize/deserialize libs can work with them, because everything, to this day, abuses reflection for generating objects, like the giant fucking hack it is.
Apologies for the rant, but I migrated a big project to 21 early this year, am in the middle of migrating another 1M+ line codebase to 21, and the sorry state of records is such a sad thing to witness.
I give a decade before records are anything but 'a fancy feature'.
With that said - Lombok is not needed in any form there either: use a constructor that sets the fields, and make them public final. If you have too many fields in a class, it's likely a good idea to split it regardless.
In all cases, dumb getters/setters are just public fields that take up more metaspace (and larger bytecode; the latter matters somewhat for inlining).
Also, if I had 1M LOC and my serialization/communication libraries didn't support whatever I've picked - I'd patch the libraries to support it.
And I'm saying that even after writing most of the first project (closing in on 100kLOC now) in 21, I still can't have records where they make the most sense (service boundaries), because the libs and larger ecosystem don't support them.
> Also, if I had 1M LOC and my serialization/communication libraries didn't support whatever I've picked - I'd patch the libraries to support it.
1M LOC in Java land is... not unusual. And if you're talking about patching libs like Jackson/JAXB/whatever, my good person, you truly underestimate how much actual work people have (where a Java upgrade is a distant afterthought; I only did it because I wanted to scratch the itch and see how far I could push processes in my org), or how much impact a drive-by contribution can realistically have. Updating such core ecosystem libs in Java is no small feat. They are used absolutely everywhere, and even tiny changes require big testing. There is a reason you find Apache libs in every single project: they have matured over the past couple of decades without such drastic rug-pull changes.
I'd also actively remove all Apache Commons. Even in Java 8, most of its functionality is redundant.
With all that, I meant it shouldn't really be an underestimation. I am part of the dark matter, although self-initiated Java upgrades already put me on the right side of the bell curve.
> I'd also actively remove all Apache Commons. Even in Java 8, most of its functionality is redundant.
I used to think that. Then I had to decompress zip files in memory and selectively process the children. Of course Java has the functionality covered in the stdlib, but it requires so much boilerplate, while commons-compress was such a pleasure that I was done in 10 minutes. The same goes for other Apache libs too.
OTOH, I wholeheartedly agree about Lombok being an unjustified curse.
But even if you account for that, records in Java do most of what Lombok used to do: make the class externally immutable, add default `toString`, `equals` and `hashCode` implementations, and allow read-only access to fields.
> what source generators can do in C#
Having had the displeasure of developing source generators in C# (in Rider), what they do is make code impossible to debug while working on it. On top of relying on an ancient version of netstandard.
I cannot emphasize enough how eldritch working on them is. While developing, whatever change you write isn't reflected when you inspect the generated code, and caching can keep old code around even after recompilation unless you restart the build server, or something.
So whenever you try to debug your codegen libs, you toss a coin:
- heads it shows correct code
- tails it's showing the previous iteration of codegen code, but the new code is in, so the debugger will at some point get confused
- medusae it's showing the previous iteration of codegen code, and the new code hasn't been propagated, so you need to do some arcane rituals to make it work.
Hell, even as a user of codegen libs, I've had updating them cause miscompilation because the build was still caching the previous codegen version.
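(For what it's worth, the closest I have to a reliable ritual is running `dotnet build-server shutdown` and restarting the IDE between iterations, but I make no promises that it always helps.)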
They require netstandard 2.0, which is the only version that is actually useful, since it supports .NET Framework 4.x.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
> web benchmarks
https://www.techempower.com/benchmarks/#hw=ph&test=composite...
The TechEmpower benchmarks do seem to reflect the general state of the Java web framework ecosystem, with Vert.x being the hyper-fast web framework and Spring being way slower.
If you take the standard template for any of these frameworks (Java, C# or any other language) and add authentication etc., the real performance will be 5-10% of the numbers reported in those benchmarks. Have a look through some of the weirdness in the implementations; it's wild (and sometimes educational). The .NET team especially has done stuff specifically to get faster on those benchmarks.
Could you give me a pointer or two? I wondered about that myself, especially considering the massive improvement from "old" .NET to the core/Kestrel-based solutions - but a quick browse a while ago mostly left me astonished at how... well, for lack of a better word, banal most of the code was.
Agreed though, the lack of all kinds of layers like auth, ORM etc. is sadly a drawback of these kinds of benchmarks, if understandable - it would make comparability even trickier and risks the comparison matrix of systems/frameworks/libraries exploding in size. But yeah, those would be nice datapoints to have. :)
The custom BufferWriter stuff is pretty neat, though not really something most people will reach for. And there is more, like the caching of StringBuilders etc.
But it also doesn't use the actual HTTP server to build headers; they just dump a string into the socket [2], which feels a bit unrealistic to me. In general the BenchmarkApplication class [3] is full of non-standard stuff that you'd normally let the framework handle (rough sketch of the buffer-writer idea after the links below).
[1] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... [2] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast... [3] https://github.com/TechEmpower/FrameworkBenchmarks/blob/mast...
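To illustrate what I mean by the buffer-writer pattern (my own sketch, not the actual benchmark code): response bytes get formatted straight into the output buffers instead of going through strings:

    using System;
    using System.Buffers;

    static class RawResponseWriter
    {
        // Write a fixed status line directly into an IBufferWriter<byte>,
        // bypassing string building entirely (illustrative only).
        public static void WriteStatusLine(IBufferWriter<byte> output)
        {
            ReadOnlySpan<byte> statusLine = "HTTP/1.1 200 OK\r\n"u8;
            Span<byte> span = output.GetSpan(statusLine.Length); // reserve space in the writer's buffer
            statusLine.CopyTo(span);
            output.Advance(statusLine.Length);                   // commit the written bytes
        }
    }

It's fast, but at that point you've hand-rolled a slice of the HTTP server.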
.NET is perfectly capable of standing on its own, and if there are specific areas that need improvement - this should serve as a push to further improve DB driver implementations and make ASP.NET Core more robust against various feature configurations. It is already much, much faster than Spring which is a good start, but it could be pushed further.
I'd like to note that neither Go nor Java is viable for high-performance programming in the way C# is. Neither gives you the low-level access, performance-oriented APIs, zero-cost abstractions and platform control you get in .NET. You can get far with both, but not C++/Rust-far the way you can with C#.
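A small taste of what I mean (my own example, nothing exotic): stack allocation and portable SIMD in plain, safe C#:

    using System;
    using System.Numerics;

    static class LowLevelDemo
    {
        // Sum an int span using Vector<T> SIMD lanes; no heap allocations.
        // (Assumes per-lane partial sums don't overflow int; illustrative only.)
        public static long Sum(ReadOnlySpan<int> values)
        {
            var acc = Vector<int>.Zero;
            int i = 0;
            for (; i <= values.Length - Vector<int>.Count; i += Vector<int>.Count)
                acc += new Vector<int>(values.Slice(i, Vector<int>.Count)); // one SIMD add per chunk
            long sum = 0;
            for (int lane = 0; lane < Vector<int>.Count; lane++)
                sum += acc[lane];             // horizontal reduction
            for (; i < values.Length; i++)
                sum += values[i];             // scalar tail
            return sum;
        }

        public static void Demo()
        {
            Span<int> data = stackalloc int[16]; // stack-allocated, never touches the GC
            for (int i = 0; i < data.Length; i++) data[i] = i;
            Console.WriteLine(Sum(data));        // prints 120
        }
    }

None of this needs `unsafe`, and none of it allocates.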
Yeah, except if you are working on web servers, the quality of the framework and its supporting libraries is much more important than what the code could theoretically achieve. What is the point of being able to do 200 mph when you only ever drive at 30 mph?
> Neither gives you the required low-level access, performance oriented APIs, ability to use zero-cost abstractions.
Java is working on high-performance abstractions; see the Vector API (SIMD) and Project Valhalla (custom primitive types).
Sure, C# has a theoretical leg up (for which it paid dearly in the backwards incompatibility caused by reified generics), but most libraries don't use low-level access or SIMD optimizations or whatnot.
2 of 10 (pidigits and regex-redux) allow use of widely available third party libraries — GMP, PCRE, RE2 — because there were language implementations that simply wrapped those libraries.
Look at all the programming language implementations that provide big integers by calling out to GMP. Why would it be "cheating" when available to all and done openly? Libraries matter.
>Most the Debian benchmarks for C# are cheaty too.<
Just name-calling.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
It’s not that easy. I assume other programs hide the use in macros and libraries, in ways far beyond my simple understanding.
Where there are few enough programs that readers should check that the programs they compare seem appropriate for their purpose.
Why? Did you mean both use intrinsics or both don't?
> Sometimes, you just want to see
As-it-says, look for more-secs less-gz-source-code -- probably less optimised.
If you meant BenchmarksGame, then it's the other way around - Java is most competitive where it relies heavily on GC[0], and loses in the areas that require the capability to write a low-level implementation[1], which C# provides.
The only places where there are C calls are the pidigits[2] and regex-redux[3] benchmarks, in both of which Java submissions have to import pre-generated or pre-made bindings to GMP and PCRE2 respectively. As do all other languages, with varying degrees of "preparation".
[0]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[1]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[2]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[3]: https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Even if you prohibit PCRE2, the .NET submissions using the out-of-box Regex engine end up being about 4 times faster than Java's.
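That engine also gained a source-generated mode in .NET 7, where the pattern is compiled to C# at build time instead of being interpreted or Reflection.Emit-compiled at startup. A sketch using one of the regex-redux patterns (class and method names are mine):

    using System.Text.RegularExpressions;

    public static partial class Patterns
    {
        // [GeneratedRegex] emits the matcher at compile time, so no
        // runtime Reflection.Emit is involved (also NativeAOT-friendly).
        [GeneratedRegex("agggtaaa|tttaccct", RegexOptions.IgnoreCase)]
        public static partial Regex AggTtt();
    }

    // usage, where sequence is the input string:
    //   int count = Patterns.AggTtt().Matches(sequence).Count;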
Surprisingly, even though .NET's BigInteger is known for its inefficiency, it ends up being more memory-efficient and marginally faster at pidigits than a Java submission that does not use GMP. The implementations are not line-by-line equivalent, so they may not be perfectly representative of the performance of each BigInt implementation.
My point being: if you look at the submissions closer, the data gives a much clearer picture, and it only supports the argument that C# is a very usable language for the tasks one would usually reach for C, C++ or Rust for.
Sure looks like it's written in Java!
Lombok is exceptionally backwards. You don't need getters/setters, and you should know how to write hashCode (and equals) yourself.
...and records exist
C# is the better-designed language, with really strong tooling, a healthy ecosystem and a well-designed standard library.