
Posts: 439 · Comments: 601 · Joined: 3 yr. ago

  • I think nobody talks about it because it doesn’t show history

    What do you mean it doesn't show history? It's perhaps the only thing it handles better than most third-party git GUIs.
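    For what it's worth, the stock CLI's history view is a couple of flags away. A minimal sketch using a throwaway repo (committer identity is passed inline just so the commands run anywhere):

```shell
# Throwaway repo so the commands are self-contained
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "first commit"
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "second commit"

# Compact graph of the full history across all branches
git log --graph --oneline --all
```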

  • It’s not and no one cares about numbers anymore.

    The only people who ever cared about svn's numbering scheme were those who abused it as a proxy for release/build versions, which amounted to blaming the tool for a problem they created for themselves.

  • I initially found git a bit confusing because I was familiar with mercurial first, where a “branch” is basically an attribute of a commit and every commit exists on exactly one branch.

    To be fair, Mercurial has some poor design choices which lead to a very different mental model of how things are expected to operate in Git. For starters, basic features such as stashing local changes were an afterthought, requiring a plugin as a stopgap solution.

  • think the lack of UI

    Even Git ships with git gui. It's not great, but it just goes to show how well informed and valid your criticism is.

    https://git-scm.com/docs/git-gui/

  • Sure if you never branch, which is a severely limited way of using git.

    It's quite possible to use Git without creating branches. Services like GitHub can automatically create feature branches for you as part of their ticket-management workflow. With those tools, you start to work on a ticket, you fetch the repo, and a fancy branch is already there for you to use.

    You also don't merge branches because that happens when you click on a button on a PR.

  • I think a common misconception is that there’s a “right way to do git” - for example: “we must use Gitflow, that’s the way to do it”.

    I don't think this is a valid take. Conventions or standardizations are adopted voluntarily by each team, and they are certainly not a trait of a tool. Complaining about gitflow as if it's a trait of Git is like complaining that Java is hard because you need to use camelCase.

    Also, there is nothing particularly complex or hard about gitflow. You branch out, and you merge.
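    The whole cycle fits in a handful of commands. A sketch with an illustrative feature/login branch name, in a throwaway repo (git switch assumes Git ≥ 2.23):

```shell
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "baseline"

# Branch out
git switch -q -c feature/login
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "add login form"

# Merge back into the branch we started from ("-" = previous branch)
git switch -q -
git -c user.name=demo -c user.email=demo@example.com \
    merge -q --no-ff -m "merge feature/login" feature/login

git log --oneline --graph   # shows the merged feature branch
```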

  • Nonetheless

    You didn't provide a single concrete example of something you actually feel could be improved.

    The most concrete complaint you could come up with was struggling to remember commands, again without providing any concrete example or specifics.

    Why is it so hard for critics to actually point out a specific example of something they feel could be improved? It's always "I've heard someone say that x".

  • It baffles me that you can advertise something as “unlimited” and then impose arbitrary limits after the fact.

    I didn't see anything in the post that suggests that was the case. It starts with a reference to an urgent call for a meeting from Cloudflare to discuss specifics of how they were using the hosting provider's service, which sounds a lot like they were caught hiding behind the host while doing abusive things, and afterwards they were explicitly called out for abusive behavior that violated the terms of service and jeopardized the hosting service's reputation as a good actor.

  • First communication, because they clearly were confused about what was happening and felt like they didn’t have anyone technical explain it to them and it felt like a sales pitch.

    I don't think that was the case.

    The substack post is a one-sided and very partial account, and one that doesn't pass the smell test. It uses an awful lot of weasel words and leaves out whole accounts of what was discussed with Cloudflare in meetings summoned as a matter of urgency.

    Occam's razor suggests they were intentionally involved in multiple layers of abuse, were told to stop it, ignored all warnings, and once the consequences hit they decided to launch a public attack on their hosting providers.

  • Git is ugly and functional.

    I don't even think it's ugly. It just works and is intuitive if you bother to understand what you're doing.

    I think some vocal critics are just expressing frustration they don't "get" a tool they never bothered to learn, particularly when it implements concepts they are completely unfamiliar with. At the first "why" they come across, they start to blame the tool.

  • Git is no different. But it sure feels like it never took the idea of a polished user experience seriously.

    I've seen this sort of opinion surface often, but it never comes with specific examples. This takes away from the credibility of any of these claims.

    Can you provide a single example that you feel illustrates the roughest aspect of Git's user experience?

  • Programming @programming.dev

    Google - Site Reliability Engineering (2017)

    sre.google/sre-book/table-of-contents/
  • Programming @programming.dev

    The Wrong Abstraction (2016)

    sandimetz.com/blog/2016/1/20/the-wrong-abstraction
  • Cloud @programming.dev

    LZW and GIF explained

  • Cloud @programming.dev

    Amazon SES introduces new email routing and archiving features

    aws.amazon.com/blogs/messaging-and-targeting/mail-manager-amazon-ses-introduces-new-email-routing-and-archiving-features/
  • Data Structures and Algorithms @programming.dev

    Counted B-Trees (2017)

    www.chiark.greenend.org.uk/~sgtatham/algorithms/cbtree.html
  • Data Structures and Algorithms @programming.dev

    What is RCU, Fundamentally? (2007)

    lwn.net/Articles/262464/
  • Git @programming.dev

    Clone arbitrary single Git commit

    blog.hartwork.org/posts/clone-arbitrary-single-git-commit/
  • Data Structures and Algorithms @programming.dev

    CRDT: Text Buffer - Made by Evan

    madebyevan.com/algos/crdt-text-buffer/
  • it’s about deploying multiple versions of software to development and production environments.

    What do you think a package is used for? I mean, what do you think "delivery" in "continuous delivery" means, and what's its relationship with the deployment stage?

    Again, a cursory search for the topic would stop you from wasting time trying to reinvent the wheel.

    https://wiki.debian.org/DebianAlternatives

    Debian packages support pre- and post-install scripts. You can also bundle a systemd service with your .deb packages. You can install multiple alternatives of the same package and have Debian switch between them seamlessly. All of this has been available by default for over a decade.
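    The alternatives mechanism mentioned above can even be exercised without root by pointing it at scratch directories. A sketch — the tool names, paths, and priorities below are all illustrative:

```shell
root=$(mktemp -d)
mkdir -p "$root/alt" "$root/admin" "$root/bin"

# Two fake "versions" of the same tool
printf '#!/bin/sh\necho 1.0\n' > "$root/myapp-1.0" && chmod +x "$root/myapp-1.0"
printf '#!/bin/sh\necho 2.0\n' > "$root/myapp-2.0" && chmod +x "$root/myapp-2.0"

# Wrapper that redirects the alternatives state into our scratch dirs
ua() {
    update-alternatives --altdir "$root/alt" --admindir "$root/admin" \
        --log "$root/alternatives.log" "$@"
}

ua --install "$root/bin/myapp" myapp "$root/myapp-1.0" 10
ua --install "$root/bin/myapp" myapp "$root/myapp-2.0" 20  # higher priority

"$root/bin/myapp"                  # auto mode picks the highest priority
ua --set myapp "$root/myapp-1.0"   # pin a specific version
"$root/bin/myapp"
```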

  • Cloud @programming.dev

    Uber Migrates 1 Trillion Records from DynamoDB to LedgerStore to Save $6 Million Annually

    www.infoq.com/news/2024/05/uber-dynamodb-ledgerstore/
  • Programming @programming.dev

    How to Create a Simple Debian Package

    www.baeldung.com/linux/create-debian-package
  • I feel this sort of endeavour is just a poorly researched attempt at reinventing the wheel. Packaging formats such as Debian's .deb consist basically of the directory tree to be deployed, archived (an ar archive wrapping tarballs) along with a couple of metadata files. It's not rocket science. In contrast, these tricks sound like overcomplicated hacks.
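    A minimal sketch of that structure, buildable as a regular user with dpkg-deb (the package name and contents are made up; --root-owner-group assumes dpkg ≥ 1.19):

```shell
pkg=$(mktemp -d)/hello_1.0-1
mkdir -p "$pkg/DEBIAN" "$pkg/usr/bin"

# Payload: the directory tree that will land on the target system
printf '#!/bin/sh\necho hello\n' > "$pkg/usr/bin/hello"
chmod 755 "$pkg/usr/bin/hello"

# Metadata: a single control file is enough for a minimal package
cat > "$pkg/DEBIAN/control" <<'EOF'
Package: hello
Version: 1.0-1
Architecture: all
Maintainer: Demo <demo@example.com>
Description: Minimal demo package
EOF

dpkg-deb --build --root-owner-group "$pkg"   # produces hello_1.0-1.deb
dpkg-deb --info "$pkg.deb"
```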

  • Logging in local time is fine as long as the offset is marked.

    I get your point, but that's just UTC with extra steps. I feel that there's no valid justification for using two entries instead of just one.
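    Both renderings below pin down the same instant; the only difference is whether the offset is folded in or spelled out as a second field:

```shell
# UTC timestamp, ISO 8601 with the Z suffix
date -u '+%Y-%m-%dT%H:%M:%SZ'

# Local time with an explicit numeric offset -- same information, two fields
date '+%Y-%m-%dT%H:%M:%S%z'

# With TZ forced to UTC the two renderings coincide (offset +0000)
TZ=UTC date '+%Y-%m-%dT%H:%M:%S%z'
```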

  • I’ve had mixed results with ccache myself, ending up not using it.

    Which problems did you experience?

    Compilation times are much less of a problem for me than they were before, because of the increases in processor power and number of threads.

    To each their own, but with C++ projects the only way not to stumble upon lengthy build times is to work only on trivial projects. Incremental builds help blunt the pain, but that only goes so far.

    This together with pchs (...)

    This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working, or require additional tweaks to get around them.

    https://ccache.dev/manual/4.9.1.html#_precompiled_headers
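    Per that manual, one of those tweaks is relaxing ccache's sloppiness checks in its config file (a sketch assuming a ccache 4.x release; see the linked manual for the exact keys your version supports):

```
# ~/.config/ccache/ccache.conf
# Without these, ccache rejects cache hits for PCH-using compilations
sloppiness = pch_defines,time_macros
# GCC users additionally need -fpch-preprocess on the compile line
```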

    Also noteworthy: msvc doesn't play well with ccache. The details are fuzzy, but I think msvc supports building multiple source files with a single invocation, which prevents ccache from mapping an input to an output object file.

  • Programming @programming.dev

    Logging Best Practices: The 13 You Should Know (2019)

    www.dataset.com/blog/the-10-commandments-of-logging/
  • Data Structures and Algorithms @programming.dev

    Visualizing algorithms for rate limiting

    smudge.ai/blog/ratelimit-algorithms
  • Here’s the sauce

    I don't buy it. Unauthorized access attempts are a constant on the internet in general, and against AWS endpoints in particular. When anyone exposes an endpoint, it's a matter of minutes until it starts getting prodded by security scanners. I worked on a project whose endpoints were routinely targeted by random people running FLOSS security scanners, resulting in thousands of requests that were blocked either by rate limiting or by bad/missing credentials. I don't believe a single $1k invoice would trigger such a sudden and massive change of heart, when accidental costs in AWS easily reach orders of magnitude above that price tag.

  • This seems like something they just never considered until a really big client that was getting hammered told them where they could stick the bill.

    Yes, this indeed screams "Cloudflare does not pull this sort of shit", and now they are spinning this as something they do out of kindness.