
Posts: 11 · Comments: 19 · Joined: 3 yr. ago

  • Agreed. My copy of the original lost this documentation link, which gives more detail about the horizontal scaling: https://join-lemmy.org/docs/administration/horizontal_scaling.html.

    It seems really straightforward (which is a good thing): each backend lemmy_server process handles incoming requests and also pulls from a shared queue of other federation work.

  • Time zones are an endless source of frustration, this one doesn’t sound too bad though:

    Going forward, all timestamps in the API are switching from timestamps without time zone (2023-09-27T12:29:59.113132) to ISO8601 timestamps (e.g. 2023-10-29T15:10:51.557399+01:00 or Z suffix). In order to be compatible with both 0.18 and 0.19, parse the timestamp as ISO8601 and add a Z suffix if it fails (for older versions).

    https://github.com/LemmyNet/lemmy/pull/3496
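For a client that must work against both versions, the fallback described in the release notes might be sketched in Python like this (the function name is my own; `fromisoformat` happens to accept both the naive 0.18 format and the 0.19 offsets):

```python
from datetime import datetime, timezone

def parse_lemmy_timestamp(ts: str) -> datetime:
    """Hypothetical helper: normalize 0.18 (naive) and 0.19 (ISO8601
    with offset or Z suffix) API timestamps to an aware datetime."""
    # Python < 3.11 fromisoformat doesn't accept a literal Z suffix.
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    if dt.tzinfo is None:
        # Older Lemmy (0.18): no zone info in the string, so assume UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt
```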

  • This data structure uses a 2-dimensional array to store data, documented in this scala implementation: https://github.com/twitter/algebird/blob/develop/algebird-core/src/main/scala/com/twitter/algebird/CountMinSketch.scala. I’m still trying to understand it as well.

    Similar to your idea, I had thought that by using k bloom filters, each with their own hash function and bit array, one could store an approximate count up to k for each key, which also might be wasteful or a naïve solution.

    PDF link: http://www.eecs.harvard.edu/~michaelm/CS222/countmin.pdf
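My rough understanding of that 2-dimensional table, as a minimal Python sketch (this is not the Algebird code; the salted-hash scheme here is just for illustration):

```python
import hashlib

class CountMinSketch:
    """A depth x width table of counters, with one hash row per depth."""

    def __init__(self, depth: int = 4, width: int = 256):
        self.depth, self.width = depth, width
        self.table = [[0] * width for _ in range(depth)]

    def _indexes(self, key: str):
        # One independent-ish hash per row, via a per-row salt.
        for row in range(self.depth):
            h = hashlib.blake2b(key.encode(), salt=row.to_bytes(16, "big"))
            yield row, int.from_bytes(h.digest()[:8], "big") % self.width

    def add(self, key: str, count: int = 1):
        for row, col in self._indexes(key):
            self.table[row][col] += count

    def estimate(self, key: str) -> int:
        # Collisions only inflate counters, so the minimum over rows is
        # an upper bound on the true count: it never underestimates.
        return min(self.table[row][col] for row, col in self._indexes(key))
```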

  • I haven’t used them in Spark directly but here’s how they are used for computing sparse joins in a similar data processing framework:

    Let’s say you want to join some data “tables” A and B. When B has many more unique keys than are present in A, computing “A inner join B” would require lots of shuffling of B, including those extra keys.

    Knowing this, you can add a step before the join to compute a bloom filter of the keys in A, then apply the filter to B. Now the join from A to B-filtered only considers relevant keys from B, hopefully now with much less total computation than the original join.
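A toy version of that pre-join pruning step, with invented table shapes (lists of (key, value) pairs) and a deliberately simple Bloom filter:

```python
class BloomFilter:
    """Tiny illustrative Bloom filter over a single int as the bit array."""

    def __init__(self, size: int = 1024, hashes: int = 3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key):
        for seed in range(self.hashes):
            yield hash((seed, key)) % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key) -> bool:
        # False positives possible; false negatives are not.
        return all(self.bits >> p & 1 for p in self._positions(key))

def filtered_join(a_rows, b_rows):
    """Inner-join A and B on key, pruning B first with a filter over A's keys."""
    bf = BloomFilter()
    for key, _ in a_rows:
        bf.add(key)
    # Most B keys absent from A are dropped before any "shuffle" work.
    b_pruned = [(k, v) for k, v in b_rows if bf.might_contain(k)]
    a_index = {}
    for k, v in a_rows:
        a_index.setdefault(k, []).append(v)
    # Any false positives are harmless: they find no match in a_index.
    return [(k, va, vb) for k, vb in b_pruned for va in a_index.get(k, [])]
```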

  • Collage sounds really interesting, will check it out. Another variation on the bloom filter I recently learned about is the count-min sketch. It stores and increments a count along with each key, and can answer “probably in the set with count greater than _” and “definitely not in the set”.

    Thanks for adding more detail on the DB use-cases!

  • Data Engineering @programming.dev

    Bloom filters: real-world applications

    llimllib.github.io/bloomfilter-tutorial/
  • Programming @programming.dev

    Bloom filters: real-world applications

    llimllib.github.io/bloomfilter-tutorial/
  • Cool project and post! There’s also !flashlight@lemmy.world if you’d like to cross post.

    gets hot enough to burn you if you leave it on for too long

    That Fenix looks like a reliable light and is designed with temperature regulation, but the limit might be pretty high, and of course it is being used inside an enclosure.

  • I am learning Python, R, and SQL

    SQL is an excellent skill to invest in. Even though your current role doesn’t allow you to use it, there’s no substitute when pulling data from a relational db.

    It sounds like you’re currently focused on data quality. Automatic data quality checks are common features of (good) data workflows so this could be spun as relevant experience if you end up seeking a role writing data transformation code.

  • Data Engineering @programming.dev

    Hollow (toolset for disseminating in-memory datasets)

    hollow.how
  • Pliers+Philips is a great combination. I was using a Trailblazer (Tinker Deluxe + Metal Saw/Chisel) and recently switched to a Mechanic (Tinker Deluxe - scissors/hook)

  • PrintSF @literature.cafe

    Remote Control – Nnedi Okorafor

    nnedi.com/books/remote-control/
  • flashlight @lemmy.world

    Sofirn IF30 with 32650 battery

  • the science is spot-on

    Good to know! This is a favorite of mine as well.

    It gets recommended a lot, but Red Mars has a lot of science content, especially geology-related, in case you haven’t read it yet.

  • Thank you very much! Luckily, I have not yet read either of these — looking forward to them.

  • Although your current role wouldn’t seem very senior at a large organization, “senior” is a relative term, and at this company it seems like you are the engineer with ownership responsibilities over the end-to-end software development of a production system. So it might still be reasonable to use a senior title if there are other benefits.

  • PrintSF @literature.cafe

    Throwing Rocks: Without spoiling the plot, what are some good books that involve the threat of weaponizing a planet’s gravity well against its population?

  • PrintSF @literature.cafe

    (Cross-post) The writing in The Three-Body Problem (Cixin Liu) feels tacky

  • This kind of debunking is much appreciated! And it doesn’t detract from my enjoyment of the book (and its sequels).

    You must have a high bar for science fiction as a scientist; do you have any recommendations tangential to this book? Thanks.

  • the author is more interested in how humanity as a whole would react to his fictional scenario than he is with writing characters with depth

    This was my impression as well, and I think it works only because the fictional scenarios are extremely creative, along with the sometimes gratuitous science-fiction details from the author’s imagination. And even though most characters seemed unrealistic as people, I still liked them as characters and found them memorable.

    I also read (listened to) Voyagers by Ben Bova recently and while the fictional scenario was interesting, the character development leaned heavily on the relationship between the hero scientist and the promiscuous young scientist, a writing style which I found more boring.

  • Focusing on code coverage (which doesn't distinguish between more and less important parts of the code) seems like the opposite of your very good (IMO) recommendation in another comment to focus on specific high-value use-cases.

    From my experience it’s far easier to sell the need for specific tests if they are framed as “we need assurances that this component does not fail under conceivable use-cases” and especially as “we were screwed by this bug and we need to be absolutely sure we don’t experience it ever again.”

    Code coverage is an OK metric and I agree with tracking it, but I wouldn’t recommend making it a target. It might force developers to write tests, but it probably won’t convince them. And as a developer I hate feeling “forced” and prefer if at all possible to use consensus to decide on team practices.

  • We can’t test yet, we’re going to make changes soon

    This could be a good opportunity to introduce the concept of test-driven development (TDD) without insisting on “write tests first”. Even so, the idea helps illustrate why having tests is better when you are expecting to make changes, because of the safety they provide.

    “When we make those changes, wouldn’t it be great to have more confidence that the business logic didn’t break when adding a new technical capability?”

    You shouldn’t have to refactor to test something

    This seems like a reasonable statement and I sort of agree, in the sense that for existing production code, making a code change which only adds new tests yet also requires refactoring of existing functionality might feel a bit risky. As other commenters mentioned, starting with writing tests for new features or fixes might help prevent folks feeling like they are refactoring to test. Instead they’re refactoring and developing for the feature and the tests feel like they contribute to that feature as well.
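As a tiny illustration of that safety net (the function and its test are invented for the example), the tests pin down current behavior before any refactor or new feature work touches it:

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (example business logic)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

def test_apply_discount():
    # These assertions document the behavior we must preserve when we
    # later refactor or add a new technical capability around it.
    assert apply_discount(80.0, 25) == 60.0
    assert apply_discount(19.99, 0) == 19.99
    try:
        apply_discount(10.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for an invalid percent")
```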

  • It’s probably not going to work as a defense against training LLMs (unless everyone does it?), but it also doesn’t have to: it’s an interesting thought experiment that can aid understanding of this technology from an outside perspective.

  • Data Engineering @programming.dev

    (2017) Rise of the Data Engineer

    medium.com/free-code-camp/the-rise-of-the-data-engineer-91be18f1e603
  • I agree with how you characterized it and the term “ai engineer” didn’t resonate with me as defined by the author. If such an engineer doesn’t need to know about the data involved (“nor do they know the difference between a Data Lake or Data Warehouse”) then I don’t think they will be able to ship an AI/ML product based on data.

    New titles can be helpful for sorting out different roles with shared skillsets, such as the distinction that emerged between Data Scientist and ML Engineer at some companies to focus the latter on shipping production software using ML.

  • flashlight @lemmy.world

    Recommendation request: What is a good minimalist soldering iron setup for flashlight mods? (ANSWERED: Pinecil v2)

  • flashlight @lemmy.world

    A few small lights