
  • Should have told the auditors that stripping symbols is stupid and counterproductive instead of playing along. That segfault a user managed to hit once and only once with their self-built binary, and that useless core file that was left behind, shall haunt you in your dreams forever.

    And I love how that commit was merged with the comment "A further reduced binary size! 🎉". Exhibit number #5464565465767 why caring that much about "dependency bloat" and binary sizes always was, and always will be, a result of collective mania in action.

  • Initial implementation is written in pure-Rust using only 6 tokens. Impressive.

  • Yes. That is in part why I mentioned that it's not a real alternative, and mentioned cargo-vendor as a possible basis for a less manual, serviceable solution.

    Serviceable, but not necessarily good still. Anti-crates.io extremists would still be better off using an alternative crates registry*.

    * That's something that already exists btw. True extremists don't have to wait for the HN leak-promised Good Stuff.

    • GitHub wasn't always owned by Microsoft. At least get your dates right.
    • Yes, GH shouldn't be the sole auth provider.
  • crates.io: Dropping support for non-canonical downloads

  • version can actually be passed alongside git. And it will need to match the version set in Cargo.toml in the git source.

    I wouldn't call that an alternative to crate registries though (of which, crates.io is only one impl).

    Also tangentially related, cargo-vendor is a thing.
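    A minimal sketch of what that looks like in Cargo.toml (the crate name and URL are hypothetical); the version here must match the version field in the crate's own Cargo.toml at that git revision:

    ```toml
    [dependencies]
    # `version` acts as a sanity check against the manifest in the git source;
    # a mismatch makes cargo refuse the dependency
    some-crate = { git = "https://github.com/example/some-crate", version = "0.4" }
    ```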

  • See! Unlike the post from last week, this post actually provides useful and actionable info.

    Good luck to the Ferrocene team.

  • Is everyone genuinely liking this?!

    This is, IMHO, not a good style.

    Isn't something like this much clearer?

     
        
    ```rust
    // Add `as_cstr()` to `NixPath` trait first
    let some_or_null_cstr = |v| v.map(NixPath::as_cstr)
        .unwrap_or(Ok(std::ptr::null()));

    // `Option::or_null_cstr()` for `Option<T>`
    // where `T: NixPath` would make this even better
    let source_cstr = some_or_null_cstr(&source)?;
    let target_cstr = target.as_cstr()?;
    let fs_type_cstr = some_or_null_cstr(&fs_type)?;
    let data_cstr = some_or_null_cstr(&data)?;
    let res = unsafe { .. };
    ```

  • If the matches are causing too much nesting/rightward drift, then that could be an indicator that you're doing something wrong.

    If it's the opposite, then you're probably doing something right, except maybe the code needs some refactoring if there is too much clutter.

    If there isn't much difference, then it's a matter of style. I for example sometimes prefer to match on bools in some contexts because it makes things look clearer to me, despite it not being the recommended style. I'm also a proud occasional user of bool::then() and bool::then_some() 😉

    Also, if you find yourself often wishing some API was available for types like bool, Option, and Result, then you don't have to wish for long. Just write some utility extension traits yourself! I for example have methods like bool::err_if(), bool::err_if_not(), Option::none_or_else(), and some more methods tailored to my needs, all available via extension traits.
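    The exact signatures in my code may differ, but a sketch of how such an extension trait can be written:

    ```rust
    // Illustrative extension trait; the method name mirrors my own
    // `bool::err_if()`, but the signature here is a guess.
    trait BoolExt {
        /// `Err(err)` if `self` is true, `Ok(())` otherwise.
        fn err_if<E>(self, err: E) -> Result<(), E>;
    }

    impl BoolExt for bool {
        fn err_if<E>(self, err: E) -> Result<(), E> {
            if self { Err(err) } else { Ok(()) }
        }
    }

    fn main() {
        // Reads nicely at call sites doing input validation:
        let too_big = |n: u32| (n > 100).err_if("value too big");
        assert_eq!(too_big(7), Ok(()));
        assert_eq!(too_big(101), Err("value too big"));
    }
    ```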

    Macros can also be very useful, although some people go for them too early. So if everything else fails to declutter your code, try writing a macro or two.

    And it's worth remembering, there is no general rule, other than: if the code is understandable to you and works, then you're probably okay regardless of style. It's all sugar after all, unless you're really doing some glaringly wrong stuff.

  • It’s not like RHEL

    The parallels between this and RHEL (including RHEL derivatives like Oracle Linux) run deeper than you might think.

    it has nothing to do with licensing or even packaging/distribution.

    Not sure what you mean. I don't think I implied that that's the point of certification.

    But:

    • Isn't Ferrocene going to be a downstream* certified compiler?
    • Won't that compiler need a software license?
    • Won't that compiler be packaged and distributed (a cloud-only offering would presumably be off the table, at least for "serious clients")?

    I think all of this is very much relevant info to know.

    * "downstream" is the fifth word in the article/ad.

    It’s also not something that Ferrocene needs to “sell”

    Something is being sold by Ferrous Systems. I don't think that's a point of dispute by anyone!

    Now, what that something exactly looks like will depend, in part, on the answers to the questions above, no?

    in the sense of convincing users to migrate to it from rustc

    I didn't argue that. I don't think anyone argued that.

    In case you didn't realize, the quintessential Arch user wouldn't be the target of a RHEL sales pitch either 😉 .


    And now any remnants of a joke are ruined by all the explaining.

  • First of all, sometimes I write in a stream of consciousness half-jerking style. It's not meant to be taken 100% seriously. I thought the writing style itself, when I do that, makes that clear!

    Secondly, whatever real technical value there is in the Ferrocene project, it wasn't sold well by that ad. This could be by design, and maybe no one here would fall under its target audience. But then, I would question the point of posting this ad here in the first place.

    Thirdly, the ad mentions nothing regarding Ferrocene's general availability (binaries, source, source+binaries, neither), nor is there any mention of software licenses. I think you would agree that mentioning this directly in the ad would have made it infinitely more informative for readers around here.

  • bureaucratic procedure

    Not all verification/certification efforts fall under that banner of course. And some of them do provide value.

    But the answer to your question is simply: bureaucratic procedure

  • Funny you should mention that. Recently, glibc 2.38 was released with broken performance in its allocator API. Hell, one of the developers tried to argue the regression was a good thing, as it would force people to stop using the regressed API unnecessarily (the argument didn't go far; the regressions got fixed).

    Reports in the wild:

    • https://github.com/mpv-player/mpv/issues/12076
    • https://bugs.archlinux.org/task/79300

    Links to the libc-alpha relevant threads can be found there.

    Speaking of libc allocators, musl's allocator is also shit. That's why some Rust projects use another global allocator in their musl builds. Some of them probably haven't noticed yet that those builds are not static anymore because of that 😉 .
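    For reference, swapping the global allocator in Rust is a one-liner. Real musl builds typically plug in something like jemalloc or mimalloc from an external crate; std's System allocator is used below only to keep the sketch dependency-free:

    ```rust
    use std::alloc::System;

    // In a real musl build this static would be e.g. a mimalloc or
    // jemalloc wrapper type; `System` just demonstrates the mechanism.
    #[global_allocator]
    static GLOBAL: System = System;

    fn main() {
        // All heap allocations in the program now go through
        // the chosen global allocator.
        let v = vec![1, 2, 3];
        assert_eq!(v.len(), 3);
    }
    ```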

  • Ah, the good old RHEL promise of quality, stability, and security.

    Can't wait for the CentOS Alma version... oh wait, no copyleft!

    I will stick with arch rustc, thank you very much.

  • FYI, I had some time to check out hyper v1 today.

    The new API of hyper proper is very low-level. The directly usable API has apparently moved to the hyper-util crate, which, as expected, does have a hard dependency on the tokio runtime.

    So, while it's good that hyper proper won't hard-depend on tokio rt, it appears that non-tokio-rt usage will either depend on a higher-level 3rd-party crate, or require a lot of effort from direct hyper dependents to write the glue code themselves.

  • As someone who has tried doing multithreaded design with Arc/Mutex, this is a complete nightmare.

    I myself often use channels. And pass owned data around.

    I don't think anyone argues that Arc+interior-mutation is ideal. But it's the over-the-top language like "complete nightmare" that some may take issue with.

    Note that I again make the distinction between Mutex and all interior mutation primitives. Because Mutex is indeed a bad choice in many cases, and over-usage of it may indeed be a signal that we have a developer who's not comfortable with Rust's ownership/borrowing semantics.

    You constantly get into deadlocks and performance is abysmal, because Mutex is serializing access (so it’s not really async or multithreaded any more).

    • Same note about Mutex specifically as above.
    • Avoiding deadlocks can definitely be a challenge, true. But I wouldn't say it's often an insurmountable one.
    • What crate did you use for interior mutability, async-lock, tokio, parking_lot, or just std?
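    For contrast, the channels-plus-owned-data approach I mentioned above looks roughly like this (plain std::sync::mpsc and OS threads; no locks in sight):

    ```rust
    use std::sync::mpsc;
    use std::thread;

    fn main() {
        let (tx, rx) = mpsc::channel();

        // Each worker owns its data and sends owned results back;
        // no Arc, no Mutex, no shared mutable state.
        for id in 0..4u32 {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(id * 10).unwrap();
            });
        }
        drop(tx); // close the channel so the `rx` iteration below ends

        let mut results: Vec<u32> = rx.iter().collect();
        results.sort();
        assert_eq!(results, vec![0, 10, 20, 30]);
    }
    ```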
  • A good topic to chat about.

    But before that, I should mention that I had to use the browser's inspector to disable that background color. Light and dark backgrounds are both fine (I have color inversion bindkeyed after all). It's the ones in the middle like this one that are a no-no.

    And while you could exclusively use the runtime and ignore the rest, it is easier and more common to buy into the entire ecosystem.

    Stats? User surveys?

    I would expect that many users (including myself) actually just use the runtime, directly or indirectly (e.g. via async-global-executor with the tokio feature), while API usage is covered by crates like async-io, async-fs, async-channel, blocking, ...etc.
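    A hedged sketch of what that kind of dependency set might look like (versions illustrative; async-global-executor does expose a tokio feature for running on top of the tokio runtime):

    ```toml
    [dependencies]
    # runtime: tokio, but only as the executor/reactor underneath
    async-global-executor = { version = "2", features = ["tokio"] }
    # API surface: runtime-agnostic crates
    async-io = "2"
    async-fs = "2"
    async-channel = "2"
    ```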

    In fact, I would hypothesize that, from an ecosystem adoption POV, tokio's only killer feature is hyper's dependence on it. If it wasn't for that dependence, tokio's adoption, I would expect, would be much lower. But this is admittedly anecdotal.

    Any time we reach for an Arc or a Mutex it's good idea to stop for a moment and think about the future implications of that decision.

    This and the quote above it seem to be Rust intellectuals' flavor of the month position.

    It kind of reminds me of the lowering-dependencies'-compile-times craze from a few years ago. Stretching the anti-MT-RT narrative to the point of giving bad advice, like recommending sync channels for an IO task, adds to the parallels.

    The choice to use Arc or Mutex might be indicative of a design that hasn't fully embraced the ownership and borrowing principles that Rust emphasizes.

    Or maybe you know, using APIs that already existed in std for a reason, as intended!

    It's worth reconsidering if the shared state is genuinely necessary or if there's an alternative design that could minimize or eliminate the need for shared mutable state.

    Generally true. But probably false here.

    People reaching for Arc+Mutex/RwLock/... (doesn't have to be a Mutex) in async code are probably aware of alternative designs, but still decide to use Arc+Mutex/RwLock/... when it's either necessary, or when it actually makes things simpler and the downsides (performance) are of no measurable relevance, or when the alternative design is actually slower (i.e. the multi-threaded runtime is actually helping).
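    To be concrete, the pattern under discussion is just this (sketched here with OS threads; the async version is the same shape with an async-aware lock):

    ```rust
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Shared mutable state: sometimes genuinely necessary, and
        // perfectly fine when contention is low.
        let counter = Arc::new(Mutex::new(0u32));

        let handles: Vec<_> = (0..8)
            .map(|_| {
                let counter = Arc::clone(&counter);
                thread::spawn(move || {
                    // lock held only for the duration of the increment
                    *counter.lock().unwrap() += 1;
                })
            })
            .collect();

        for h in handles {
            h.join().unwrap();
        }
        assert_eq!(*counter.lock().unwrap(), 8);
    }
    ```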

    I would consider arguments that talk about "killing all the joy of actually writing Rust" in this context as either intellectual posturing, or simply disingenuous.


    Otherwise, there is not much to disagree with.

    • Enabling features via Cargo.toml => bad
    • Blanket enablement of preview features => bad
    • Preview features in stable => bad (anyone taking bets on what the first perma-preview feature will be?)

    There is prior art that wasn't mentioned that I think is relevant. It's rather obscure though, so it's understandable that it was missed. It's called "The Alpha Version".

    Alright, let me explain it a bit, since I expect no one has heard of this before. Alpha versions are like pre-beta versions that are not nightly. They are used at the start of a release cycle, but without any stability guarantees. You don't need a very good reason to pull a feature, like with betas. Just a reason would do.

    If everything goes well, the release cycle continues with the new to-be-stabilized features. If it doesn't, stabilization candidates can be pulled and their inclusion postponed to the next release cycle or beyond.

    So instead of immediate Stability Proposals, we can have Alpha Inclusion Proposals for the V+1 release cycle, at the start of the V release cycle. Accepting/Rejecting proposals happens at the start of the V+1 cycle. A Stabilize/Postpone decision is made before the first beta. The release cycle can be extended from 6 to 8 weeks if needed so people have enough time to play with alphas.

    So instead of nightly => stable (beta) => stable? (stable).

    We can have nightly => stable candidate (alpha) => stable? (beta) => stable? (stable)

    Pretty novel stuff, I know. This way, people can test stabilization candidates alone in a non-nightly setting, and report feedback. Lack of enough feedback may be used as a "good enough" reason to postpone stabilization!

    Here you go, preview features in a release, but not a stable release. And no possibility (ok, less possibility) of preview-and-forget and perma-preview stuckness for some low-visibility features.

  • Especially when dealing with ffis things get very muddy very fast.

    True. But an FFI binding API leaking resources (memory or otherwise) is a bug in that binding. This holds true for any RAII-using language, including C++. I don't think faulty Drop implementations have anything to do with the subjects covered by the article!
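    A leak-free binding is mostly a matter of a correct Drop impl. A minimal shape (the ffi module below is a stand-in; in a real binding those would be extern "C" functions from the foreign library):

    ```rust
    // Stand-in for a C library so the sketch is self-contained.
    mod ffi {
        pub fn create() -> *mut u64 {
            Box::into_raw(Box::new(42))
        }
        pub unsafe fn destroy(p: *mut u64) {
            drop(Box::from_raw(p));
        }
    }

    /// RAII wrapper: the binding, not the user, is responsible
    /// for releasing the foreign resource exactly once.
    struct Handle(*mut u64);

    impl Handle {
        fn new() -> Self {
            Handle(ffi::create())
        }
        fn value(&self) -> u64 {
            unsafe { *self.0 }
        }
    }

    impl Drop for Handle {
        fn drop(&mut self) {
            // Forgetting this call is the binding bug that leaks,
            // regardless of language.
            unsafe { ffi::destroy(self.0) }
        }
    }

    fn main() {
        let h = Handle::new();
        assert_eq!(h.value(), 42);
    } // `h` dropped here; foreign resource freed
    ```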