The mental model I have about performance is that improvements at a higher level of abstraction usually beat improvements at a lower level.
So in that sense, well-architected software with proper caching, multithreading where it matters, etc. will beat badly architected software (e.g. one that brute-forces everything). With architecture being equal, good algorithms and solutions beat bad ones. Only then do faster runtimes make much of a difference, and at the very bottom, things like more efficient processor architectures and more efficient compilers beat slower ones.
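Purely as an illustration of that idea (nothing Lemmy-specific; the Fibonacci example and function names are made up for this comment), here's a tiny Rust sketch where the caching decision matters far more than how fast each individual call runs:

```rust
use std::collections::HashMap;
use std::time::Instant;

// Brute force: exponential time, no matter how fast the language or CPU is.
fn fib_naive(n: u64) -> u64 {
    if n < 2 { n } else { fib_naive(n - 1) + fib_naive(n - 2) }
}

// Same problem with a cache: linear time, so low-level speed stops being the bottleneck.
fn fib_cached(n: u64, cache: &mut HashMap<u64, u64>) -> u64 {
    if n < 2 {
        return n;
    }
    if let Some(&v) = cache.get(&n) {
        return v;
    }
    let v = fib_cached(n - 1, cache) + fib_cached(n - 2, cache);
    cache.insert(n, v);
    v
}

fn main() {
    let start = Instant::now();
    println!("naive:  {} ({:?})", fib_naive(40), start.elapsed());

    let start = Instant::now();
    let mut cache = HashMap::new();
    println!("cached: {} ({:?})", fib_cached(40, &mut cache), start.elapsed());
}
```

The cached version wins by orders of magnitude even if you compile the naive one with every optimization flag available, which is the whole point: fix the design first, then worry about the runtime.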
A good example is Lemmy itself, which as far as I know was written in Rust to be super fast, but in the beginning it was being DDoSed quite easily because of the way the database was designed and many queries were very slow. Once they fixed that, Lemmy actually became usable.
It helps to look up concepts in the wiki (the Arch Wiki is probably the most complete and best explained) as you come across them. The idea is to build knowledge little by little, and over time it compounds.
That’s the same thing I’d do when I used Arch. I always kept up with announcements of anything major like a DE upgrade and would usually reset all the settings just in case. That spared me any problems during the years I ran it.
I’ve been really into learning about BSD lately and even set up a VM with OpenBSD here to try it. I also like the concept of an “immutable” base system where everything else is a user-installed package that takes precedence.
This kind of thing can be easily automated nowadays. It’s not really a problem.