Posts 264 · Comments 34 · Joined 3 yr. ago

  • Another resource is PLDB which is more of an index: it has many more languages, but less detail and curation.

  • You probably need to annotate recursive functions’ return values. In some languages, like Swift and Kotlin, this is required at least in some cases; and other languages have you annotate every function’s return value (or at least every function that isn’t a lambda expression). So IMO it’s not even as much of a drawback as the other two.

  • Code is data.

    I would just transmit the program as source (literally send y = mx + b) instead of trying to serialize it into JSON or anything. You'll have to write a parser anyway, printing is very easy, and sending as source minimizes the points of failure.

    Your idea isn't uncommon, but there's no standard set of operations because it varies depending on what your code needs to achieve. For instance, your code needs to make plots, while other programs send code for plugins, deployment steps, or video-game enemies.

    There is a type of language for what you're trying to achieve: the embedded scripting language ("embedded" in that it's easy to interpret and to add constants/hooks from a larger program). The most well-known and well-supported embedded scripting language is Lua. Alternatively, assuming you're only using basic math operators, I recommend you just make your own tiny language; it may actually be easier to do that than to integrate Lua.
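    To make the "just send source" idea concrete, here's a minimal sketch of a tiny language for basic math: a recursive-descent parser that evaluates formulas like y = mx + b directly from their source text, with no JSON step. All names here (tokenize, evaluate, the env dict) are made up for illustration, not from any existing library.

    ```python
    # Evaluate arithmetic expressions like "m * x + b" sent as plain source.
    import re

    TOKEN = re.compile(r"\s*(?:(\d+\.?\d*)|([A-Za-z_]\w*)|(.))")

    def tokenize(src):
        tokens = []
        for num, name, op in TOKEN.findall(src):
            if num:
                tokens.append(("num", float(num)))
            elif name:
                tokens.append(("var", name))
            else:
                tokens.append(("op", op))
        return tokens

    def evaluate(src, env):
        """Parse and evaluate src, looking up variables in env."""
        tokens = tokenize(src)
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else ("end", None)

        def expr():          # expr := term (('+'|'-') term)*
            nonlocal pos
            value = term()
            while peek() in (("op", "+"), ("op", "-")):
                op = tokens[pos][1]; pos += 1
                rhs = term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term():          # term := atom (('*'|'/') atom)*
            nonlocal pos
            value = atom()
            while peek() in (("op", "*"), ("op", "/")):
                op = tokens[pos][1]; pos += 1
                rhs = atom()
                value = value * rhs if op == "*" else value / rhs
            return value

        def atom():          # atom := number | variable | '(' expr ')'
            nonlocal pos
            kind, val = peek()
            if kind == "num":
                pos += 1
                return val
            if kind == "var":
                pos += 1
                return env[val]
            if (kind, val) == ("op", "("):
                pos += 1
                value = expr()
                assert peek() == ("op", ")"), "expected ')'"
                pos += 1
                return value
            raise SyntaxError(f"unexpected token {val!r}")

        return expr()
    ```

    Operator precedence falls out of the grammar for free (term binds tighter than expr), which is the kind of thing you'd have to hand-encode in a JSON tree format anyway.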

  • IMO the one and only important reason is that PHP today is much different from the PHP of the past. PHP's notoriety comes from its early days, but now I hear it's another general-purpose language with modern design, good IDE support, and tons of online resources. Plus it's explicitly designed for server-side scripting, so if that's your goal it will be the best (most straightforward and best-supported) choice.

    https://www.reddit.com/r/webdev/comments/wt6wam/newbie_here_is_using_php_still_fine/

    https://phptherightway.com/

  • Yes. Unfortunately I’m not very familiar with GHC and HLS internals.

    What I do know is that when GHC is invoked with the -fwrite-ide-info flag, as it compiles each Haskell file it generates a .hie file with intermediate data that can be used by IDEs. So HLS (and maybe JetBrains Haskell) works by invoking GHC to parse and type-check the active Haskell file, then querying the .hie file to get diagnostics, symbol lookups, and completions. There’s also a program called hiedb which can query .hie files on the command line.

    Rust’s language server is built completely differently (on salsa.rs). The common detail is that both compilers can target LLVM IR but also have higher-level IRs, and it’s the information in those higher-level IRs that gets passed to IDEs for completions/lookups/diagnostics/etc. (in Haskell, via the generated .hie files).

  • The main drawback is being a new, incomplete language: no solid IDE support, no huge pool of online resources, no training data for LLMs; and you have a non-tiny chance of the compiler crashing or producing the wrong output.

    As for IDE-friendliness and most of the other features, I don’t think there are any major flaws. The main drawbacks there are implementation complexity and possibly slower compile times (and maybe slower runtime, if this IR prevents optimizations). There are also the unknown drawbacks of doing new and experimental things: maybe they have the right idea but take the wrong approach, e.g. create an IR which is IDE-friendly but has design issues that a different IDE-friendly IR would not.

    Some of these features and ideas have been at least partly tried before. JVM bytecode is very readable. LLVM IR is not, but most languages (e.g. Rust, Haskell) have some higher-level intermediate representation which is, and the IDE does static analysis off of that. And some of the other features (ADTs, functor-libs, blocks) are already common in other languages, especially newer ones.

    But there are some features, like typed-strings and invariants, which seem immediately useful to me but which I don’t really see in other languages. Ultimately I don’t think I can really speak to the language’s usefulness or flaws before it’s further along in development, but I’m optimistic if they keep making progress.

  • Whoever makes the Advent of Code problems should test them all on GPT-4 and other LLMs, and try to make it so the AI can’t solve them.

  • It's funny because, though I'm probably in the minority, I strongly prefer JetBrains IDEs.

    Which, ironically, are much more of a "walled garden": closed-source and subscription-based, with only a limited subset of parts and plugins open-source. But JetBrains has a good track record of not enshittifying, and because you actually pay for their product, they can run a profitable business without doing so.

  • I’m not involved in piracy/DRM/gamedev, but I really doubt they’ll track cracked installs, and if they do, that they'll actually get indie devs to pay.

    Because what’s stopping one person from “cracking” a game, then “installing” it 1,000,000 times? Whatever metric they use to track installs has to prevent abuse like this, or they’re giving random devs (of games that aren’t even popular) stupidly high bills.

    When devs see more installs than purchases, they’ll dispute and claim Unity’s numbers are artificially inflated. Which is a big challenge for Unity’s massive legal team, because in the above scenario they really are. Even if Unity successfully defends the extra installs in court, it would be terrible publicity to say “well, if someone manages to install your game 1,000 times without buying it 1,000 times you’re still responsible”. Whatever negative publicity Unity already has for merely charging for installs pales in comparison, and this would actually get most devs to stop using Unity, because nobody will risk going into debt or unexpectedly losing a huge chunk of revenue for a game engine.

    So, the only reasonable metric Unity has to track installs is whatever metric is used to track purchases, because if someone purchases the game 1,000,000 times and installs it, no issue, good for the dev. I just don’t see any other way which prevents easy abuse; even if it’s tied to the DRM, if there’s a way to crack the DRM but not remove the install counter, some troll is going to do it and fake absurd amounts of extra installs.

  • Usually when I buy bigger packs of salmon, the amount varies and the only thing roughly consistent is the weight. So if they decreased the weight, you might get either 2 bigger fillets or 3 smaller fillets, depending on the package.

  • But aren’t the GPUs used for AI different from the GPUs used by gamers? 8 GB of VRAM isn’t enough to run even the smaller LLMs; you need specialized GPUs with 80+ GB, like A100s and H100s.

    The top-tier consumer models like the 3090 and 4090 have 24GB; with them you can train and run smaller LLMs locally. But there still isn’t much demand to do that, because you can rent GPUs in the cloud cheaply; enough that the point where renting exceeds the cost of buying is very far off. For consumers it’s still too expensive to fine-tune your own model, and startups and small businesses have enough money to rent the more expensive, specialized GPUs.

    Right now GPU prices aren’t extremely low, but you can actually buy them from retailers at market price. That wasn’t the case when crypto-mining was popular.
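    The "8 GB isn't enough" point follows from simple arithmetic, assuming the weights dominate memory use: parameter count times bytes per parameter. This is a rough sketch that ignores activations and KV cache, which only push the requirement higher; the function name is made up for illustration.

    ```python
    # Back-of-envelope VRAM needed just to hold model weights:
    # parameter count * bytes per parameter (2 for fp16/bf16).
    def weight_gib(params_billion, bytes_per_param=2):
        return params_billion * 1e9 * bytes_per_param / 2**30

    # A 7B-parameter model in fp16 needs ~13 GiB for weights alone,
    # so it can't fit in an 8 GB card but fits in a 24 GB one.
    ```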