
Posts 439 · Comments 601 · Joined 3 yr. ago

  • It violates the principle of least surprise.

    It really doesn't. I recommend you get acquainted with what undefined behavior is, and how it's handled by developers.

    You don’t expect the compiler to delete your bounds checking etc.

By design, undefined behavior serves a very specific purpose. Newbies are instructed to treat code that leads to undefined behavior as a bug they introduced. For decades, compilers and static code analysis tools have been able to detect and flag undefined behavior as errors in your code.

    As I said before, sometimes it seems clueless developers parrot on about "undefined behavior" as some kind of gotcha although they clearly have no idea what they are talking about. Sometimes it sounds like they heard it somewhere and just mindlessly repeat it as if it meant something.

    The way c and c++ define and use UB is like finding an error at compile time and instead of reporting it, the compiler decides to exploit it.

    What are you talking about? Compilers can and do flag undefined behavior as errors. I recommend you read up on the documentation of any compiler.

Also, I don't think you fully understand the subject. For example, some compiler implementations leverage UB to add failsafes to production code, such as preventing programs from crashing when, say, null pointers are dereferenced. We can waste everyone's time debating whether null pointers should be dereferenced, but what's not up for discussion is that, given the choice, professional teams prefer that their code doesn't crash on users' machines if it stumbles upon one of these errors.

  • The most asinine thing i encountered is that the bracket operator on std::map writes 0 value if the key is not found.

That's a "you're using it wrong" problem. The operator[] is designed to "return a reference to the value that is mapped to a key equivalent to key, performing an insertion if such key does not already exist."

    The "0 value" just so happens to be what value-initialization produces, which for arithmetic types corresponds to zero-initialization.

    If you want to use a std::map to look up the element associated with a key without inserting, you need to either use at() and handle the exception thrown when no such key exists, or use find() and check the returned iterator.

  • Should focus on getting rid of undefined behavior.

    What problem do you believe is presented by undefined behavior?

  • Not sure exactly what you mean, could you elaborate or rephrase?

There is nothing to rephrase. I asked what problem you think undefined behavior poses. That's pretty cut and dried. Either you think undefined behavior poses a problem and can substantiate your concerns, or you don't, and talking about undefined behavior being a concern is a moot point.

  • It sounds like you’ve never had to do real work in a language like C++, where the compiler is always trying to play gotcha with undefined behavior.

    I have over a decade of professional experience working with C++, and it's likely you already used software I worked on.

    Throughout those years, the total number of times where undefined behavior posed a problem in any of the projects I worked on was zero.

    Please enlighten me about the insurmountable challenges posed by undefined behavior.

  • How do you succinctly call a language that has all behavior defined or equivalently no undefined behavior (aside from designated regions)?

I don't understand this fixation on undefined behavior. Its origins lie in the design decision to leave the door open for implementations to employ whatever optimization techniques they see fit without the specification getting in the way. This is hardly a problem.

    In practical terms, developers are mindful not to rely on those traits because, as far as the specifications go, they have unpredictable implications, but even so they are never a problem. I mean, even in C and C++ it's trivial to configure the compiler to flag undefined behavior as warnings/errors.

    Sometimes it sounds like detractors just parrot undefined behavior as some kind of gotcha in ways I'm not even sure they fully understand.

    What problem do you think that undefined behavior poses?

  • my point was that I was able to find answers for everything else before I had to resort to posting a question.

    That's not a SO problem per se. Some projects do use SO as their semi-official customer support channel, but in general posting a question on SO is not supposed to be better than posting a question on Reddit: you get what you paid for.

  • As a rule of thumb, I would say that recursion should never be used in place of a for loop.

If you don't know what you're doing with a recursive function, you risk pushing frames onto your call stack in proportion to the number of items you want to iterate over.

    If your collection and/or the size of the stuff you're pushing to the stack is large enough, your app will crash.

If you know enough to avoid growing the call stack, then you know enough not to rely on third parties to figure out whether you need iteration or recursion.

  • C++ @programming.dev

    C++ Should Be C++

    www.open-std.org /jtc1/sc22/wg21/docs/papers/2023/p3023r1.html
  • Rust @programming.dev

    Memory Safety is a Red Herring

    steveklabnik.com /writing/memory-safety-is-a-red-herring
  • I think you're succumbing to the belief that if a solution isn't perfect then it doesn't work. That's not how things work. You can have incremental improvements that replace a big problem with a smaller problem, and a smaller problem is easier to solve.

Also, StackOverflow has suffered from old and stale replies for years, and that's not hurting the site. Oddly enough, that's a problem that's mitigated by the way the data is queried, and that's handled quite well by large language models.

  • They’ve not been closed as duplicates or anything, just no answers.

    This suggests your questions are a factor. Perhaps the topic is too niche? Perhaps the questions are too specialized?

Recently I gave SO a try with a tricky but low-hanging-fruit question, and the problem I faced was the exact opposite of yours: I received too many comments and answers. The problem was that 99% of those replying were clearly clueless newbies who seemed to be piling on to farm reputation points. Some of them weren't even reading the question at all, using a strawman of sorts to dump whatever answer they had, and even presenting code snippets that were broken.

  • I don’t have an account there because of that reputation, and why would I now that I have access to chatgpt?

    I think StackOverflow is rolling out a GPT-based service that generates answers to your questions based on SO's data.

    You need to train ChatGPT to get it to output decent results. SO seems to be working to do that for you.

  • FTA:

    A few months earlier, the engineering team at Amazon Prime Video posted a blog post explaining that, at least in the case of video monitoring, a monolithic architecture has produced superior performance than a microservices and serverless-led approach.

I recall reading a write-up of Amazon Prime's much-talked-about migration away from serverless and onto a monolith.

    The key takeaway is that Amazon Prime's problem is being wrongly pinned on microservices by the anti-microservices crowd. It was mainly an utter failure in analysis and architecture, where system designers failed to account for the performance penalty of sending data over a network and followed a cargo-cult mentality of expecting a cloud provider to magically scale out to buy back the throughput their system design killed.

    Of course, when they sat down and actually thought things through, eliminating the need to shuffle data around over random networks ended up avoiding that penalty.

    The important thing is that they can pin the blame for a design failure on an architecture, and the anti-microservices crowd eats it up. Except it says nothing about either microservices or serverless architectures.

  • Web Development @programming.dev

    Urchin Tracking Module (UTM)

    support.google.com /urchin/answer/28307
  • Cloud @programming.dev

    Year-in-Review: 2023 Was a Turning Point for Microservices

    thenewstack.io /year-in-review-was-2023-a-turning-point-for-microservices/
  • PostgreSQL @programming.dev

    Transaction Isolation in Postgres, explained

    www.thenile.dev /blog/transaction-isolation-postgres
  • C++ @programming.dev

    MISRA C++:2023 published

    forum.misra.org.uk /thread-1668.html
  • Perhaps I'm being dense and the coffee hasn't kicked in yet, but I fail to see where this new computing paradigm mentioned in the title is.

From their inception, computers have been used to plug in sensors, collect their values, and use them to compute stuff and things. For decades, each and every consumer-grade laptop has had adaptive active cooling, which means spinning up fans and throttling down CPUs when sensors report values over a threshold. One of the most basic aspects of programming is checking whether a memory allocation succeeded and otherwise handling the out-of-memory scenario. Updating app state when network connections go up or down is also a very basic feature. Concepts like retries, jitter, and exponential backoff have become basic features provided by dedicated modules. From the start, Docker provided support for health checks, which is basically an endpoint designed to be probed periodically. There are also canary tests to check whether services are reachable and usable.

    These things have existed for decades. This stuff has been running in production software since the '90s.

    Where's the novelty?

  • Having said this, I'd say that OFFSET+LIMIT should never be used, not because of performance concerns, but because it is fundamentally broken.

If rows are being inserted frequently into a table and you try to go through them with OFFSET+LIMIT pagination, the output of a pagination request will not correspond to the table's contents. For each row appended to the table, your next page will include a repeated element from the tail of the previous page.

    Things get even messier once you try to page back your history, as now both the tip and the tail of each page will be messed up.

Cursor-based navigation ensures these conflicts do not happen, and also has the nice trait of being easily cacheable.

  • For the article-impaired,

Using OFFSET+LIMIT for pagination forces the database to scan past all the skipped rows, which in large tables is expensive.

The alternative proposed is cursor-based navigation, which is ID+LIMIT and requires ID to be an orderable type with monotonically increasing values.

  • Programming @programming.dev

    5000x faster CRDTs: An Adventure in Optimization (2021)

    josephg.com /blog/crdts-go-brrr/
  • You don’t need any of it, relevant experience is worth in the region of 5x-10x for every hiring manager I’ve known, and for myself.

The only time I've had to brush up on data structures and algorithms is when applying to job ads, where recruiters put up bullshit ladder-pulling trivia questions as a gate to the next stage of the recruiting process. It's astonishing that the main use of a whole body of knowledge is feeding gatekeepers' trivia questions.

  • The decimal representation of real numbers isn’t unique, so this could tell me that “2 = 1.9999…” is odd.

I don't think your belief holds water. By definition, an even number divided by 2 yields an integer. In binary representation, that division is a right shift; you don't get rounding errors or fractional parts.

    But this is nitpicking a tongue-in-cheek comment.

  • "Appendable” seems like a positive spin on the (...)

I don't think your take makes sense. It's an append-only data structure that supports incremental changes. By design it tracks state and versioning. You can squash it if you'd like, but others might see value in it.

  • It’s usually easier imo to separate them into different processes (...)

I don't think your comment applies to the discussion. One of the thread pools mentioned is for IO-bound tasks, which means things like sending HTTP requests.

    Even if you somehow think it's a good idea to move this class of tasks to a separate process, you will still have a very specific thread pool that can easily overcommit, because most of its tasks end up idling while waiting for data to arrive.

    The main takeaway is that there are at least two classes of background tasks with very distinct requirements and usage patterns. It's important to handle them in separate thread pools that behave differently. Some frameworks already do that for you out of the box; nevertheless, it's important to be mindful of how distinct their usage is.

  • C++ @programming.dev

    Trip report: Autumn ISO C++ standards meeting (Kona, HI, USA)

    herbsutter.com /2023/11/11/trip-report-autumn-iso-c-standards-meeting-kona-hi-usa/
  • C++ @programming.dev

    In C++, how can I make a member function default parameter depend on this? - The Old New Thing

    devblogs.microsoft.com /oldnewthing/20231206-00/
  • I'd love to see benchmarks testing the two, and out of curiosity also including compressed JSON docs to take into account the impact of payload volume.

Nevertheless, I think there are two major features that differentiate protobuf and fleece:

    • fleece is implemented as an appendable data structure, which might open the door to some usages,
    • protobuf supports more data types than the ones supported by JSON, which may be a good or bad thing depending on the perspective.

    In the end, if the world survived with XML for so long, I'd guess we can live with minor gains just as easily.

  • C++ @programming.dev

    CMake 3.28.0 available for download

    www.kitware.com /cmake-3-28-0-available-for-download/
  • Programming @programming.dev

    GitHub - couchbase/fleece: A super-fast, compact, JSON-equivalent binary data format

    github.com /couchbase/fleece
  • C++ @programming.dev

    Supercharging VS Code with C++ Extensions

    www.kdab.com /supercharging-vs-code-with-c-extensions/
  • Programming @programming.dev

    Open source tools and guidelines for sending webhooks easily, securely and reliably

    github.com /standard-webhooks/standard-webhooks/blob/main/spec/standard-webhooks.md
  • Programming @programming.dev

    Introducing Wikifunctions: first Wikimedia project to launch in a decade creates new forms of knowledge – Wikimedia Foundation

    wikimediafoundation.org /news/2023/12/05/introducing-wikifunctions-first-wikimedia-project-to-launch-in-a-decade-creates-new-forms-of-knowledge/
  • C++ @programming.dev

    Integer overflow and arithmetic safety in C++

    orodu.net /2023/11/29/overflow.html
  • C++ @programming.dev

    Hands-On Design Patterns with C++ by Fedor Pikus (2022)

    www.sandordargo.com /blog/2022/07/23/hands-on-design-patterns-by-fedor-pikus
  • Git @programming.dev

    TIL git rebase allows merging commits in the middle of a branch commit history

    stackoverflow.com /questions/39023360/git-squash-commits-in-the-middle-of-a-branch
  • C++ @programming.dev

    The Myth of Smart Pointers

    www.logikalsolutions.com /wordpress/information-technology/smart-pointers/
  • Web Development @programming.dev

    REST Guidelines

    www.belgif.be /specification/rest/api-guide/