
Posts 310 · Comments 220 · Joined 3 yr. ago

I'm a robotics researcher. My interests include cybersecurity, repeatable and reproducible research, open source robotics, and Rust programming.

  • Oh, neat! Is this the project you're referring to?

    Looks like Bazzite is listed as an example derivative image. I've heard good things about that OS from newer Linux users' perspectives. But is ublue something an individual user could personally customize, or more like something a development team or community project would build up from?

    The landing page references layers and the Open Container Initiative, so is this more like a bootable container using overlay filesystem drivers?

    One attraction I appreciate with Nix is the ability to overlay or override default software options from base packages, without having to repeat/redefine everything else upstream, e.g. enabling Nvidia support for btop to visualize GPU utilization via a simple cuda flag. Replicating lazy evaluation with something like BuildKit ARGs would be hectic, so do they have their own Dockerfile/Containerfile DSL?
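    As a sketch of what that looks like on the Nix side (the `cudaSupport` flag matches the btop derivation in current nixpkgs; treat the exact flag name as an assumption for your channel):

    ```nix
    # Overlay sketch: swap in a CUDA-enabled btop while leaving every
    # other package definition upstream untouched.
    final: prev: {
      btop = prev.btop.override { cudaSupport = true; };
    }
    ```

    Dropped into `nixpkgs.overlays`, every consumer of btop then picks up the overridden build without redefining the rest of the package set.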

  • It's a steep learning curve, but because much of the community publishes their personal configs, I find it a lot simpler to browse public code repos with complete declarative examples to achieve a desired setup than it is to follow meandering tutorials that subtly gloss over steps or omit prerequisite assumptions and initial conditions.

    There are also plenty of outcroppings and plateaus buttressing the learning cliff that one can comfortably camp at. Once you've got a working MVP to boot and play from, you can experiment and explore at your own pace, and just reach for escape hatches like dev containers, Flatpaks, or AppImages when you don't feel like the juice is worth the squeeze just yet.

  • However, you then don't have to remember every change you made when you eventually migrate to a new machine, or when you replicate your setup across your laptop and desktop while keeping them synchronized. It takes me a few hours to set up and verify that everything is how I need it on a normal distro, though that may be a byproduct of my system requirements. Re-patching and packaging kernel modules on Debian for odd hardware is not fun, nor is manually fixing udev and firewall rules for the same projects again and again.

  • Not really confident, but is the LSP running from a separate child process? Is that child process inheriting all of the anticipated environment variables from the shell that launched Neovim?

  • I've been using TTS systems for decades with accessibility use cases, so other than quality audio books that necessitate a skilled performing narrator, I no longer mind.

    In fact, I prefer legacy Bayesian phonetic models over the newer convolutional and recurrent neural networks, as their hard consonants and robotic consistency in pronunciation and intonation are much easier to listen to and discern at higher words-per-minute rates, like 3x or 4x natural speech for everyday blind reading, compared to modern models' mumbling/slurring of syllables or artificial stridor and other breathy sounds.

  • I recall the author saying they're not a native English speaker, and preferring international intelligibility over regional voice-over, plus the production convenience of script writing while traveling without a quiet audio recording environment. See around the 7 min mark:

  • Do you know of a complete example or live config I could read through as a reference for that first method recommended?


    I'd also be interested in complete examples for a working pair of remote builder and local client (both NixOS multi-user), as all the documentation I've come across thus far is either:

  • Mainly the official git CLI for controlling branches and submodules, and sometimes the GitHub CLI for quickly checking out a pull request from a forked repo.

    I also use the source control tab in VSCode rather often, as it's really convenient to review and stage individual line changes from its diff view, and to write commit messages with a spell check extension.

    If it's a big diff or merge conflict, I'll break out the big guns like Meld, which has better visualizations for comparing file trees and directories.

    About a decade ago I used SmartGit, then tried GitKraken when that came around, but never really used much of the bells and whistles and wasn't keen on subscription pricing, especially as the UX of GitHub and other online code hosting platforms has matured.

  • If you just want to quickly create a local Python development environment on the side using Python module metadata, installing uv from nixpkgs and enabling nix-ld to run pre-compiled binaries from PyPI keeps things simple, replicating the same virtual environment workflow you'd have on any other distribution:

    https://discourse.nixos.org/t/i-want-understanding-nix-packages-and-flake-basics/67365/3

    If you wanted to package a Python module for nix, or for proper distribution via nixpkgs, you'd want to add a nix derivation file that encapsulates all the inputs, i.e. a software bill of materials (SBOM). There are existing nix library functions that can automate most of the packaging, not unlike Debian macros:

    https://wiki.nixos.org/wiki/Python

    The second approach is more rigorous, and combined with something like flakes for pinning the exact hash of every input via a lock file, it ensures reproducibility, like when sharing with other nix users. Whereas the first approach is more subject to your current system, i.e. linking against whatever system-wide libraries are presently installed, but it's less upfront effort to reuse existing Python package managers than to nixify everything.
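    A minimal NixOS module sketch of the first approach (the library list is an assumption; extend it with whatever your wheels actually dlopen):

    ```nix
    { pkgs, ... }:
    {
      # nix-ld provides the /lib64 link loader that manylinux wheels expect.
      programs.nix-ld.enable = true;
      # Shared libraries commonly needed by pre-compiled PyPI binaries.
      programs.nix-ld.libraries = with pkgs; [ stdenv.cc.cc zlib ];

      # uv manages the virtual environments and Python module metadata.
      environment.systemPackages = [ pkgs.uv ];
    }
    ```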

  • Thank you for your work! I would not have liked encountering this bug in the wild.

  • What user software are folks using to monitor and log PSU sensor data?

    Would be nice to correlate power usage and efficiency with system load percentage over time and type (CPU vs GPU).

  • I'll preface that NixOS may not be for everyone, as deviating from the standard filesystem hierarchy is a radical departure from conventional distributions; but for those who want precise control over their system environment, it has a good deal of appeal.

    For example, I appreciate being able to use the latest bleeding-edge release of some tools while sticking with older trusted versions of other utilities; but if both relied upon different versions of similar dependencies, such package conflicts can be troublesome to resolve, as few Linux package managers gracefully handle multi-version installs.

    On NixOS, with the nix store, installing leaf packages that traditionally conflict is trivial, and as a user I can spend less time managing every transitive dependency in order to use the software I want. Not having to wait for a disjointed ecosystem of packages to synchronize around dependencies, or resorting to compromises in package version selection, is very liberating.

    The functional language and documentation for nix itself are a bit quirky, and I wish it were more strongly typed, but being able to declaratively express and version control my setup across workstations has been a time saver: install and configure something once, then be done with it.
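    For instance, a hypothetical sketch of that mixed-version setup (the `unstable` argument is an assumption, e.g. a second nixpkgs flake input passed through `specialArgs`):

    ```nix
    { pkgs, unstable, ... }:
    {
      environment.systemPackages = [
        pkgs.git          # pinned stable version
        unstable.neovim   # bleeding-edge release with its own dependency closure
      ];
    }
    ```

    Each package carries its own dependency closure in the nix store, so the two channels coexist without conflicting.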

  • I think it stands for Managed Service Providers.

  • Nix-ld is only used by unpatched executables that use the link loader at /lib or /lib64. If you use, for example, Python from nixpkgs, then it will not pick up NIX_LD_LIBRARY_PATH and NIX_LD, since those binaries are configured to use a glibc from the nix store.

    Ah, I guess that's why I've seen folks recommend just having uv install the Python interpreter as well, so everything Python uses the same link loader from nix-ld.

  • I'm still using an old UC Gateway; I doubt my homelab will outgrow it.