Posts 51 · Comments 581 · Joined 3 yr. ago

  • I think if we lived in a sane world this would be a constant discussion in every corner of daily life

  • I find llama.cpp with Vulkan EXTREMELY reliable. I can have it running for days at a time without a problem. As far as tokens/sec, that's a complicated question because it depends on model, quant, speculative decoding, KV quant, context length, and card distribution. Generally:

    Models' typical speeds at deep context for agentic use (simple chats will be faster):

    | Model | Quant | Prompt Processing (tok/s) | Token Generation (tok/s) | Hardware | Quality |
    |---|---|---|---|---|---|
    | Qwen 3.5 397B | Q2_K_M | 100-120 | 18-22 | 2 x 7900 + 4 x Mi50 | ★★★★★ |
    | Gemma4 31B or Qwen3.5 27B | Q8_0 | 400-800 | 20-25 | 2 x 7900xtx | ★★★★ |
    | Qwen 3.6 35B | Q5_K_M | 1000-2500 | 60-100 | 2 x 7900xtx | ★★★★ |
    | Qwen 3.5 122B | Q4_0 | 200-300 | 30-35 | 4 x MI50 | ★★★★ |
    | gpt-oss 120b | mxfp4 (native) | 500-800 | 50-60 | 3 x Mi50 | ★★ |
    | Nemotron 3 Nano 30B | IQ3_K_XXS | 2500-3000 | 150-180 | 1 x 7900xtx | |
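    For anyone wanting to try a setup like this, a minimal sketch of building and serving with llama.cpp's Vulkan backend (the model path, context size, and port are placeholders, not my exact setup above):

    ```shell
    # Build llama.cpp with the Vulkan backend
    # (assumes the Vulkan SDK and GPU drivers are already installed)
    cmake -B build -DGGML_VULKAN=ON
    cmake --build build --config Release -j

    # Serve a quantized model: -ngl 99 offloads all layers to the GPU(s),
    # -c sets a long context for agentic use. The .gguf path is a placeholder.
    ./build/bin/llama-server -m ./models/model-Q5_K_M.gguf -ngl 99 -c 32768 --port 8080
    ```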
  • I wish I'd bought an epyc board last year instead of my rig. Would have been far fewer headaches and, with the price of RAM, it would have quintupled in value by now!

  • This is something I learned the hard way.

    Consumer hardware is limited by multiple factors when it comes to PCIe connectivity.

    • Physical layout: how many slots you have to plug into, their size, and configuration
    • Supported lanes from the CPU
    • Chipset (motherboard) limitations

    Your graphics card might be a 16-lane card (referred to as "x16"), but sometimes not all of them are used - the aforementioned 5060 Ti, I believe, only uses x8. Some devices like graphics cards can use a physically smaller slot with an adapter for a loss in performance (a few frames of gameplay performance).

    Similarly, your motherboard might have an x16 slot and another x16 at the bottom. That second slot might only function as x8 or even x4. Does this matter? Sort of. Inter-card communication, aka peer-to-peer communication, can affect performance, and that can compound with multiple cards.

    Even worse, some motherboards may have all sorts of connectivity but come with limitations like only 2 out of the bottom 4 slots (PCIe and M.2 combined) working at a time. ASK ME HOW I KNOW.

    Your CPU controls PCIe. It has a hard cap on how many PCIe devices it can handle and at what speed. AMD tends to be better here.

    Enterprise gear suffers from none of this bs. Enterprise CPUs have a ton of PCIe lanes, and enterprise motherboards usually match the physical size of their PCIe slots to their capacity and support full bifurcation*

    PCIe lanes are also consumed by M.2, MCIO, and OCuLink, to name a few. That means you can connect a graphics card to any one of those if you can figure out the wires and power**

    ** Bonus: bifurcation, and how my $200 consumer motherboard runs 6 graphics cards.

    Bifurcation is a motherboard feature that lets you split PCIe capacity, so an x16 slot can support two x8 devices. My motherboard lets me do this only on the main slot, and in a strange x8/x4/x4 configuration. I have an MCIO adapter (google it) which plugs into that slot and gives me 3 PCIe slots at those corresponding speeds.

    It also has 2 M.2 slots which connect to the CPU. One of them I use for an NVMe SSD like a normal person. The other holds an M.2-to-PCIe adapter, which gives me an x4 PCIe slot. For those keeping track, that's 24 PCIe lanes so far - the maximum my processor, an Intel 265K, can handle.

    But wait! The motherboard also has a kind of PCIe router (the chipset), and that thing can handle 8 more lanes! So I use the bottom 2 PCIe slots on my motherboard for 2 cards at x4 each. The thing that kills me is that there are more M.2 ports, but the mobo won't be able to use more than 2 of those devices at once. AND even though that bottom PCIe slot is sized at x16, electrically it's x4.

    Do your research (level1techs is great) and read the manuals to really understand this stuff before you buy.

    My mobo for reference: ASUS TUF GAMING Z890-PRO WIFI
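    The lane arithmetic above can be sketched out; this uses only the numbers from my setup described here (the labels are just descriptive):

    ```python
    # Rough PCIe lane budget for the setup described above.
    # CPU (Intel 265K): 24 usable lanes; the chipset adds 8 more downstream.

    cpu_lanes = {
        "main x16 slot, bifurcated x8/x4/x4 via MCIO adapter": [8, 4, 4],
        "M.2 #1 (NVMe SSD)": [4],
        "M.2 #2 -> PCIe x4 adapter (GPU)": [4],
    }
    chipset_lanes = {
        "bottom slot #1 (x16 physical, x4 electrical)": [4],
        "bottom slot #2 (x4)": [4],
    }

    cpu_total = sum(w for widths in cpu_lanes.values() for w in widths)
    chipset_total = sum(w for widths in chipset_lanes.values() for w in widths)

    print(f"CPU lanes used: {cpu_total}")          # 24 = the CPU's hard cap
    print(f"Chipset lanes used: {chipset_total}")  # 8 more via the chipset
    # GPU count: 3 on the bifurcated main slot + 1 on the M.2 adapter
    # + 2 on the chipset slots = 6 cards total
    ```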

  • Vulkan helps with speed. Most benchmarks prove that out. Concurrency is a mixed bag. You can get some with llama.cpp, but vLLM is the concurrency king.

    Just a couple of weeks ago, llama.cpp released tensor parallelism, which helps, but it's still an experimental feature.

    Unfortunately, I don't know of any diffusion runners that work with Vulkan. If someone has expertise, let me know!
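    For multi-GPU llama.cpp, row splitting has been available for a while via --split-mode; a sketch for comparing the two modes (the flag for the newer experimental tensor-parallel path may differ, and the model path is a placeholder):

    ```shell
    # Split each tensor's rows across GPUs instead of assigning whole layers
    # (can help generation speed on some multi-GPU setups; benchmark both)
    ./build/bin/llama-server -m ./models/model-Q8_0.gguf -ngl 99 --split-mode row

    # Default layer split, for comparison
    ./build/bin/llama-server -m ./models/model-Q8_0.gguf -ngl 99 --split-mode layer
    ```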

  • I'm going to be brutal with you. I spent a few thousand dollars on 176GB of AMD VRAM because I was happy getting VRAM for cheap and I hate Nvidia. It works, and it's nice to be able to run bigger models at usable performance, but if you need serious concurrency or good support for diffusion, you NEED Nvidia. AMD (and likewise Intel) just doesn't have the ecosystem support for non-server GPUs. Again, coming from someone who's using this shit daily.

    If you understand this limitation, then yes, those B70s are cool, as are AMD Pro 9700s, which might have slightly better support rn. You may consider Nvidia V100s, which are old and cheap. I always recommend people start with 3090s (as a general powerhouse) or a pair of 5060 Tis (for really good LLM support) though. It will make your life easier if you can afford the VRAM limitation.

  • Highly recommend it

  • Thank you, JD. I'm so glad you finally found time to end the war. I know you've been extremely busy for the past week serving the American people by... campaigning for the reelection of the extremely unpopular and corrupt right-wing leader of Hungary. Good stuff.

  • That's a cool write up! I suspect a lot of this will have to do with quantization, how much quality is getting lost, and how each model behaves under different quants.

  • You're comparing apples and oranges. Qwen3.5 27B is a dense model. Gemma4 26B is a mixture of experts model with 4B parameters activated at once. The equivalent would be Gemma4 31B, which is the Gemma4 dense model.

    Both dense models are EXTREMELY good. From my testing, they can code and work agentically with performance similar to a cutting-edge model (Gemini, ChatGPT, Claude Sonnet) from 4-5 months before their release. Usually, Gemma models are better at prose, and the Qwen model scores a little better on coding and logic tests. Being dense, these models require more computation and memory bandwidth than mixture of experts (MoE) models, which means they're slower or more expensive to run.

    If you're purely comparing the models you originally listed, the Qwen model will crush Gemma4 26B, but it will run at a quarter of the speed. :)
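    That speed gap follows from active parameter counts; a back-of-envelope sketch using the numbers above (decode is roughly memory-bandwidth bound, so the real-world gap ends up smaller than the raw ratio suggests):

    ```python
    # Rule of thumb: tokens/sec at decode scales inversely with the
    # parameters touched per token (dense = all of them, MoE = active experts).
    def per_token_work_ratio(dense_params_b: float, moe_active_b: float) -> float:
        """How much more work per token the dense model does vs. the MoE."""
        return dense_params_b / moe_active_b

    # Qwen3.5 27B (dense) vs Gemma4 26B (MoE, ~4B active per token)
    ratio = per_token_work_ratio(27.0, 4.0)
    print(f"dense does ~{ratio:.2f}x the work per generated token")
    ```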

  • Yeah. People will notice. People will speculate. Wild differences between people's INTERESTS tend to lead to relationship problems... usually. I think wild age differences are only weird when combined with differences in power and interest. Imo

  • So RAM, GPUs, and SSDs will become cheap and available again...

    ...

    Right...

    ...

    Right!?!?!

  • Three for one dad joke? Gee golly willikers! That's a steal!

  • Popular imagery?

    Edit: Ugh, sorry for posting AI without warning. Can't believe I was fooled by a signature.

  • "unlimited PTO"

    *looks inside*

    "4 weeks of PTO unless you have VP approval, except you'll never get it"

  • I'd never heard of it, but it makes sense once you read the Wikipedia article on it

  • Holy potatoes, Batman! That's an ancient meme

  • Bojack Horseman predicted (?) this almost a full decade ago, in Season 3 (July 2016)

  • If my guy was out there to subvert my expectations, he got me. Still bad. But bad with a purpose?

  • linuxmemes @lemmy.world

    2025, My Year of The Linux Desktop

  • Photography @lemmy.world

    Big Steel Ball

  • Photography @lemmy.world

    New York Wildlife

  • Photography @lemmy.world

    Shiny Buildings

  • Photography @lemmy.world

    Rockefeller Center

  • Photography @lemmy.world

    Central Park

  • Photography @lemmy.world

    Building

  • Photography @lemmy.world

    Times Square

  • Photography @lemmy.world

    Zero Fucks

  • Photography @lemmy.world

    New York Skyline 2025

  • Linux @lemmy.ml

    Is a daily-driver computer built on top of a hypervisor a bad idea?

  • linuxmemes @lemmy.world

    Inspired by a Lemmy post

  • LocalLLaMA @sh.itjust.works

    AI and You Against the Machine: Guide so you can own Big AI and Run Local - YouTube

  • Photography @lemmy.world

    Griddy

  • birding @lemmy.world

    Rufous Hummingbird

  • Photography @lemmy.world

    Hummingbird

  • Selfhosted @lemmy.world

    Ideas for Hosting on a 2009 Netbook?

  • homeassistant @lemmy.world

    Help Me Satisfy My Wife Using HA

  • homeassistant @lemmy.world

    Long time lurker to newb arc

  • pics @lemmy.world

    Waterhole Slot Canyon