Lol, there are smaller versions of DeepSeek-R1. These aren't the "real" DeepSeek model; they're distillations of R1 into other foundation models (Qwen2.5 and Llama 3 in this case).
For the 671b parameter model, the medium-quality quant weighs in at 404 GB. That means you need 404 GB of RAM/VRAM just to load the thing, and preferably ALL of it in VRAM (i.e. GPU memory) if you want it to generate anything fast.
For comparison, I have 16 GB of VRAM and 64 GB of RAM on my desktop. If I run the 70b parameter version of Llama 3 at Q4 quant (medium quality-ish), it's a 40 GB file. It'll run, but mostly on the CPU, generating ~0.85 tokens per second. So a good response takes 10-30 minutes. Which is fine if you have time to wait, but not if you want an immediate response. If I had two beefy GPUs with 24 GB of VRAM each, that'd be 48 GB total, I could run the whole model in VRAM, and it'd be very fast.
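If you want to estimate this for other models, the back-of-the-envelope math is just parameter count times bits per parameter. A minimal sketch, assuming a Q4-ish quant costs roughly 4.5-5 bits per parameter once you include quantization block overhead (the exact number varies by quant format):

```python
def approx_size_gb(params_billion: float, bits_per_param: float = 4.5) -> float:
    """Rough size of a quantized model in GB: params * bits / 8 bytes."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

print(approx_size_gb(70))        # ~39 GB, close to the 40 GB Llama 3 70b Q4 file
print(approx_size_gb(671, 4.8))  # ~403 GB, in line with the 404 GB DeepSeek quant
print(1000 / 0.85 / 60)          # ~20 min for a 1000-token response at 0.85 tok/s
```

Same math for the VRAM question: the model either fits in your total GPU memory or it spills to system RAM and the CPU, and that's where the 20x slowdown comes from.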
They're probably referring to the 671b parameter version of DeepSeek. You can indeed self-host it, but unless you've got a server rack full of data-center-class GPUs, you'll probably set your house on fire before it generates a single token.
If you want a fully open source model, I recommend Qwen2.5 or maybe DeepSeek V2. There's also OLMo 2, but I haven't really tested it.
Mistral Small 24B also just came out and is Apache-licensed. That's what I'm testing now.
Most open/local models require a fraction of the resources of ChatGPT, but they're usually not AS good in a general sense. They're often good enough, though, and can sometimes surpass ChatGPT in specific domains.
Don't know about "always." In recent years, say the past 10, definitely. But I remember a time when Nvidia was the only reasonable recommendation for a graphics card on Linux, because the Radeon drivers were so bad. This was before Wayland, probably even before AMD bought ATI, and certainly long before the amdgpu drivers existed.
Yeah, it would be a good idea. Not auto-updating functions, because that would be very, very bad, but at least indicating there's an update available.
Had a team lead who kept requesting nitpicky changes, going FULL CIRCLE on what we should or shouldn't change, to the point that changes took weeks to get merged. Then he had the gall to say that changes were taking too long to merge and that we couldn't just leave code lying around in PRs.
Yeah, it was something along those lines; I don't remember the specifics. I don't really understand why that is. I guess they're copying and pasting nutritional information from the tubs, where it's more properly measured by volume. But you'd think regulations would require the same units for serving size and nutritional information, or at least the same type of unit (mass vs. volume).
My personal favorite experience with this was buying ice cream with nutritional information per milliliter, but serving size in grams...
It's a systemic issue going back decades. To me, it seems the Dutch government always wants to fix it with a hammer. Repeatedly. Discrimination increases, no REAL effort at integration is made (forcing people to take totally-not-racist "civic integration exams" doesn't count), and over the years the divide widens. Tell people they're monsters long enough, and that's what they'll become. But no one wants to hear that fixing it would take years, even decades, of sustained effort and change. They just want it fixed. And fixed now.
There's no magic-bullet solution, unfortunately. And then it all comes to a head with the events in Amsterdam. The instigators need to be arrested and tried, but society needs to take a hard look at what caused this in the first place. And I doubt that will happen. Just more hammers.
One scenario tested is better than zero.