Heh, I agree with everything you said, but I’m afraid such a framework is impossible to create, let alone implement. It’s impossible to foresee the infinite ways people can screw themselves over through bad decisions, so all you’d create is a lot of bureaucracy only to end up in the same place anyway.
That’s still a very major achievement! Do I understand correctly this means all target architectures supported by GCC are now unlocked for Rust too?
Which one should I pick then, that is both as fast as the std solutions in the other languages and as reusable for arbitrary use cases?
Because it sounds like your initial pick made you lose the machine efficiency argument, and you can’t have it both ways.
Well, let’s be real: many C programs don’t want to rely on GLib, and licensing (as the other reply mentioned) is only one reason. GLib is not exactly known for high performance, and it is significantly slower than the alternatives that ship with the other languages I mentioned.
I would argue that because C is so hard to program in, even the claim to machine efficiency is arguable. Yes, if you have infinite time for implementation, then C is among the most efficient, but then the same applies to C++, Rust and Zig too, because with infinite time any artificial hurdle can be cleared by the programmer.
In practice, however, programmers have limited time. That means they need to lean on the tools their language gives them to save time. Languages with higher levels of abstraction make it easier, not harder, to reach high performance, assuming the abstractions don’t introduce too much overhead. C++, Rust and Zig all qualify here.
An example is the situation where you need a hash map or B-tree map to implement efficient lookups. The languages with higher abstraction give you reusable, high-performance options out of the box, as the sketch below shows. The C programmer needs to either roll their own, which may not be an option if time is limited, or settle for a lower-performance alternative.
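To make that concrete, here’s a minimal sketch of the kind of reusable, high-performance options I mean, using Rust’s standard library (the data is made up for illustration):

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // Hash map: O(1) average-case lookups, ready to use.
    let mut users: HashMap<u32, &str> = HashMap::new();
    users.insert(1, "alice");
    users.insert(2, "bob");
    assert_eq!(users.get(&2), Some(&"bob"));

    // B-tree map: keys stay ordered, so it also supports efficient range queries.
    let mut medals: BTreeMap<u32, &str> = BTreeMap::new();
    medals.insert(10, "bronze");
    medals.insert(50, "silver");
    medals.insert(90, "gold");

    // Iterate over all entries with a key of 40 or higher.
    for (score, medal) in medals.range(40..) {
        println!("{score}: {medal}"); // prints the silver and gold entries
    }
}
```

Both come with the standard library, so there’s nothing to vendor and nothing to hand-roll.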
I’m not arguing against that. Merely providing some counterweight to the idea that the author was “flinging shit in the trenches” 😅
I found the title of that section slightly triggering too, but the argument they lay out actually makes sense. Consistency helps you achieve correctness in large codebases, because it means you don’t have to reinvent what is correct over and over in separate pockets of the codebase. Such pockets also make incremental improvements to the codebase harder and harder, so they do come back to bite you.
Your example of vendors doesn’t relate to that, because you don’t control your vendor’s code. But you do control your organisation’s.
Another data structure you could consider is the red-green tree: https://willspeak.me/2021/11/24/red-green-syntax-trees-an-overview.html
We use it in Biome too, and it’s great for building trees that are immutable yet still need frequent updates, as well as traversal in all directions. Its implementation contains quite a bit of `unsafe` to make it fast, though as a consumer you’re not really exposed to that.
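To give a rough idea of how the technique works, here’s a simplified sketch (not Biome’s actual implementation, and without the `unsafe` tricks): the immutable “green” tree is position-independent so subtrees can be shared and cheaply rebuilt, while “red” nodes are thin cursors created on demand that add parent links and absolute offsets.

```rust
use std::rc::Rc;

// Green tree: immutable and position-independent, so subtrees can be
// shared freely and an update only rebuilds the spine up to the root.
enum Green {
    Token { kind: &'static str, text: String },
    Node { kind: &'static str, children: Vec<Rc<Green>> },
}

impl Green {
    // Width of this subtree in the source text.
    fn text_len(&self) -> usize {
        match self {
            Green::Token { text, .. } => text.len(),
            Green::Node { children, .. } => children.iter().map(|c| c.text_len()).sum(),
        }
    }
}

// Red tree: cursors materialized on demand over green nodes, adding the
// parent links and absolute offsets that enable traversal in all directions.
struct Red {
    green: Rc<Green>,
    parent: Option<Rc<Red>>,
    offset: usize, // absolute position in the source text
}

impl Red {
    fn root(green: Rc<Green>) -> Rc<Red> {
        Rc::new(Red { green, parent: None, offset: 0 })
    }

    fn children(node: &Rc<Red>) -> Vec<Rc<Red>> {
        let mut result = Vec::new();
        if let Green::Node { children, .. } = &*node.green {
            let mut offset = node.offset;
            for child in children {
                result.push(Rc::new(Red {
                    green: Rc::clone(child),
                    parent: Some(Rc::clone(node)),
                    offset,
                }));
                offset += child.text_len();
            }
        }
        result
    }
}

fn main() {
    // Build `foo+bar` as a green tree.
    let expr = Rc::new(Green::Node {
        kind: "binary_expr",
        children: vec![
            Rc::new(Green::Token { kind: "ident", text: "foo".into() }),
            Rc::new(Green::Token { kind: "plus", text: "+".into() }),
            Rc::new(Green::Token { kind: "ident", text: "bar".into() }),
        ],
    });

    for child in Red::children(&Red::root(expr)) {
        println!("child at offset {}", child.offset); // 0, 3, 4
    }
}
```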
But he did step in, albeit privately. I actually agree an earlier public statement would have helped, but we don’t know the specifics of what went on behind the scenes.
In any case, I don’t think it’s fair to assign blame for Marcan’s burnout to Linus, as the post above did. Marcan himself mentioned personal reasons too when he announced his departure. I think we should show understanding and patience with both sides, and assigning blame isn’t helping with that.
That now involves fixing Rust drivers, so you’re going to need to know Rust.
I also don’t think the latter follows from the former. You can continue to not know Rust as long as you’re willing to work with those who can. Problems only start if you’re unwilling to collaborate.
You’re implying that Linus is somehow responsible for burning out Marcan? I don’t think that’s a fair assessment.
So far, the only good argument I have really seen from the ones opposing the Rust4Linux effort comes down to: adding Rust to a C codebase introduces a lot of complexity that is hard to deal with.
But the argument offers no solution except to give up, without even attempting to address the real issues the kernel struggles with. It’s effectively a form of defeatism: giving up yourself while refusing to let others attempt what you don’t consider feasible.
Feel free to just use React on the frontend if you’re more familiar with it, but make sure you couple it with Redux. Then, when the time comes to bring some Rust into the frontend, you can do so by writing your Redux reducers in Rust.
PS: The blog post mentions using fp-bindgen for WASM bindings, but nowadays you’re probably better off using wasm-bindgen; see the sketch below.
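For a rough idea of what that looks like, here’s a minimal sketch of a counter reducer exposed through wasm-bindgen. It assumes serde and serde_wasm_bindgen for the conversions; the state shape and action names are made up for illustration:

```rust
use serde::{Deserialize, Serialize};
use wasm_bindgen::prelude::*;

// Hypothetical state shape; substitute your store's actual state.
#[derive(Serialize, Deserialize)]
struct State {
    count: i32,
}

// Hypothetical actions, matching Redux's `{ type: "..." }` convention.
#[derive(Deserialize)]
#[serde(tag = "type")]
enum Action {
    #[serde(rename = "increment")]
    Increment,
    #[serde(rename = "decrement")]
    Decrement,
}

// A reducer is a pure function from (state, action) to a new state,
// which maps naturally onto a Rust function exported to JS.
#[wasm_bindgen]
pub fn reduce(state: JsValue, action: JsValue) -> Result<JsValue, JsValue> {
    let mut state: State =
        serde_wasm_bindgen::from_value(state).map_err(|e| JsValue::from(e.to_string()))?;
    let action: Action =
        serde_wasm_bindgen::from_value(action).map_err(|e| JsValue::from(e.to_string()))?;
    match action {
        Action::Increment => state.count += 1,
        Action::Decrement => state.count -= 1,
    }
    serde_wasm_bindgen::to_value(&state).map_err(|e| JsValue::from(e.to_string()))
}
```

On the JS side you’d wrap this exported function in your store’s reducer, so your React components stay unchanged while the state transitions live in Rust.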
Sorry, but this mindset is hurting both Linux and security in general.
> The reason we are seeing a lot of security vulnerabilities is because prior to about 10 years ago security wasn’t considered that important.
This is frankly quite obviously false. Microsoft started taking security more seriously around the release of Windows 2000. Are you saying the Linux kernel developers took another 15 years to realize security is important?
Security research shows that new code is more prone to common vulnerabilities than old code is. While old code may have been designed with weak (or no) security considerations, those are well-mitigated by now. On the contrary, new code still regularly contains exploitable memory safety issues that slip by review.
> What we need is skilled programmers who understand security.
We have skilled programmers who understand security. Those also understand that we need more than that.
Continuing to use C doesn’t merely require skilled programmers, it requires programmers who never make a single mistake, ever. That’s an infeasible standard for any human to uphold, which is why C is considered a risk.
I agree the Linux kernel is just fine. But that’s only because, despite the security risks of C, there’s no viable alternative kernel.
But development doesn’t stand still, so either Linux catches up, or it gets replaced when a viable alternative arrives. Thankfully, Linus sees the problem, so they’re working to keep the kernel viable for a while longer, but I also agree with the person you replied to that this work could definitely use a bit more help.
You’re ignoring the fact that for many projects it does work.
It only needs to be perfect if you want to run 100% Node.js software unaltered. While that may be a lofty goal, it’s also an infeasible one.
That doesn’t mean imperfect support is futile, though. By your logic, Bun would have no right to exist, because it only supports Node.js APIs, has no noteworthy APIs of its own, and its compatibility isn’t perfect either. Yet it seems to be at least as successful as Deno is.
Or for an example in a different domain: your argument would imply that a project like WINE shouldn’t exist, because it doesn’t have perfect compatibility with Windows and it disincentivizes the development of native Linux games. Yet it is largely thanks to WINE that Valve was able to build the Steam Deck and that Linux gaming is finally taking off.
I think what your argument fails to take into account is that you need a significant number of users to make any impact on the market. Many users have legacy requirements they can’t throw out overnight, so you have to support those legacy environments. Even imperfect legacy support can serve those users, especially if they’re willing to make a few changes here and there. But with no legacy support at all, the only users you attract are those with niche greenfield requirements.
> So instead of trying to replace NodeJS or offering an upgrade path for existing Node projects, incentivize formation of ecosystem around Deno
They are incentivizing their own ecosystem. That’s what Jsr.io is all about. But the world isn’t black and white. They can do more than one thing.
I dunno, I still see a blog post. Which is hosted in their own issue tracker, which is of course odd, but also the point.
Maybe it went down for a bit?
Would you have a link to that? I know there are many third-party garbage collectors for Rust, but if there’s something semi-official being proposed or prototyped I’d be most curious :)
`tsc` is (very) slow and there are also no convenient ways to interact with it from Rust. So it saves a lot of development and CI time to roll our own. The downside is that our inference still isn’t as good as `tsc`’s, of course, but we’re hopeful the community can help us get very close at least.