Posts: 3 · Comments: 590 · Joined: 3 yr. ago

  • I'm from Austria and I got the "house" idea, but it's clear as day what they did, and in Austria I'm pretty sure a jury court (Geschworenengericht) would have sentenced them under the Verbotsgesetz. I'm not sure about the laws in Germany, but given the history I guess you have similar laws.

  • Holy shit. And that's legal in Germany? What about "never again"?

  • That Deadpool is trapped inside that gamer chair?

  • In principle no, but a little bit yes. Especially because you can make up infinitely many things that are irrefutable.

    Just because someone claims that we all have colorful, invisible guardian unicorns accompanying us everywhere, and there is no evidence either for or against them, the answer still isn't "we can't know, because absence of evidence is not evidence of absence". The answer is that the idea is complete nonsense, and false.

  • I

  • Humans are also "pattern recognition engines". That's why optical illusions and similar completely mess with our brains. There are patterns that we perceive as moving/rotating even though the pattern is completely stationary.

    But nobody would claim that you can't trust your eyes in general just because optical illusions exist.

  • Your boss expects you to weld with good quality, but they don't expect you to answer every question there is without any mistakes. The problem with LLMs is that they are trained purely on text found on the internet; they have no "life experience", and thus their world model is very different from ours. There are overlaps (that's why they can produce any coherent output at all), but there are situations that make perfect sense in their world model while being complete bogus in the real world.

    It's a bit like the shadows in Plato's cave allegory. LLMs are practically trained only on the shadows, so their output is completely based on that shadow world. An LLM can describe pain (because descriptions of pain were in the training data), but it has never been smacked in the face.

  • I think you are right. IMHO the room actually does speak/understand Chinese, even if the robot/human inside the room does not.

    There are no neurons in your brain that "understand" English, yet you do. Intelligence is an emergent property. If you "zoom in" far enough, everything is just the laws of physics, and those laws don't understand English or Chinese.

  • To be fair, all of what you've said applies to humans too. Look how many flat earthers there are, and even more people who believe in homeopathy, think that vaccines cause autism, or think that aliens built the pyramids.

    But nobody calls that "hallucinations" in humans. Are LLMs perfect? Definitely not. Are they useful? Somewhat, but definitely extremely far from the PhD-level intelligence some claim.

    But there are things where LLMs are already way better than any single human (not humans collectively): giving you a hint (it doesn't have to be 100% accurate) about what topics to look up when you can only describe them vaguely and don't know what you would even search for in a traditional search engine.

    Of course you cannot trust it blindly, but you shouldn't trust humans blindly either; that's why we have the scientific method, because humans are unreliable too.

  • The training process evolves models to make predictions. The actual underlying mechanisms are not too relevant, because the prediction function is an emergent property.

    Your brain is just biochemistry, and biochemistry isn't intelligent, yet you are. Think of the number three and all you know about it. There is not a single neuron in your brain that has any idea what the concept of three even means. It's emergent behavior.

  • Black holes are GR, but it wouldn't make the calculations much different. Take the Moon, for example: its orbit would be exactly the same whether Earth is a rocky planet, a black hole, or a point mass of the same mass.
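    A quick sketch of why only the central mass matters here (constants are standard textbook values; the function name is just for illustration): Kepler's third law gives the period from the central mass and orbital distance alone, with no reference to the body's size or composition.

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M)). The period depends
# only on the central mass M and the orbital distance a, not on
# whether the central body is a rocky planet, a point mass, or a
# black hole of the same mass.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # Earth's mass, kg
A_MOON = 3.844e8     # Moon's mean orbital distance, m

def orbital_period(a_m: float, m_kg: float) -> float:
    """Orbital period in days for a circular orbit around a point mass."""
    t_seconds = 2 * math.pi * math.sqrt(a_m ** 3 / (G * m_kg))
    return t_seconds / 86400  # seconds -> days

print(f"{orbital_period(A_MOON, M_EARTH):.1f} days")  # ≈ 27.4 days
```

    This matches the sidereal month (~27.3 days), and nothing about Earth's internal structure ever enters the calculation.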

  • Sphere with radius zero. Problem solved 🤣

  • I'm not a native speaker, but that sounds like semantics to me. How would you, when chatting, differentiate if the other end is "knowledgeable" or if it "merely contains knowledge"?

  • I don't get your analogy. Put your brain through a shredder. Is it still intelligent? All the atoms are still there.

  • Probably because of the plasma hazard 🤣

  • The suspense is almost unbearable.

  • The last line says 2 pets max. So two small Snow Leopards maybe?

  • Most sciences don't care about "truth"; they care about models that predict the outcome of experiments. Even a model that works perfectly doesn't mean that this model is how the universe works. The universe could work completely differently, and the model could happen to be very accurate anyway. Think about Newton's laws of motion. They do not describe how the universe really works, but the model is still pretty accurate and useful in many situations.

    Even if we some day find a theory of everything, that still doesn't mean we know anything about the true nature of the universe; just that everything we can observe is described by the model we developed.

  • every god