This is obvious to people who understand the basics of LLMs. However, many are fooled by how intelligent these LLMs sound, and mistake sounding intelligent for actually being intelligent. So even if this is an open door, I still think it's good that someone is kicking it in to make clear that LLMs are not generally intelligent.
- Posts
- 407
- Comments
- 271
- Joined
- 3 yr. ago
Rust @programming.dev Rust Analyzer Changelog #224
Rust @programming.dev Rust Analyzer Changelog #223
Rust @programming.dev A nice review of time, chrono and hifitime
Programmer Humor @programming.dev Should I file a bug report? 😀
Rust @programming.dev Should I file a bug report? 😀
Rust @programming.dev Rustls Now Using AWS Libcrypto for Rust, Gains FIPS Support
Rust @programming.dev This Week in Rust 536 · This Week in Rust
Rust @programming.dev Lessons learnt from building a distributed system in Rust
Rust @programming.dev Nice collection of AreWe*Yet , to track various aspects of Rust
Rust @programming.dev Asynchronous clean-up
Rust @programming.dev Graphite internships: announcing participation in GSoC 2024.
Rust @programming.dev rustc_codegen_gcc: Progress Report #30
Rust @programming.dev : “Precious” files and
core.precomposeUnicodesupportRust @programming.dev Rust participates in Google Summer of Code 2024 | Rust Blog
Rust @programming.dev This Week in Rust 535 · This Week in Rust
Rust @programming.dev arighi's blog: Writing a scheduler for Linux in Rust that runs in user-space
Rust @programming.dev Rust Analyzer Changelog #221
Rust @programming.dev The Linux Kernel Prepares For Rust 1.77 Upgrade
Rust @programming.dev Building an Async Runtime with mio
Rust @programming.dev This Week in Rust 534 · This Week in Rust

I actually asked ChatGPT about a specific issue I had hit and solved a while back. It was one of those issues where it looks like a simple, naive solution would be sufficient, but because that fails under certain conditions, you have to go with a more complex solution. So I asked about it to see what it would answer.

It went with the simpler solution, with some adjustments, and the code didn't even compile. But it looked interesting enough to make me question myself: maybe it was just me who had failed with the simpler approach. So I actually tried to fix the compile errors to see if I could get it working. The more I tried to fix its code, the more obvious it got that it didn't have a clue what it was doing. But thanks to its confidence and its ability to make things look plausible, it sent me on a wild goose chase.

And this is why I don't use LLMs for programming. They are basically overconfident junior devs who like mansplaining.
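The commenter doesn't say what their actual issue was, so as a purely hypothetical illustration of the pattern ("the naive solution works on simple input, but fails under other conditions"), here is a classic Rust example: reversing a string by iterating over `chars()` works for ASCII, but breaks on text with combining characters.

```rust
// Hypothetical example (NOT the commenter's actual issue): a naive solution
// that looks sufficient but fails under different conditions.
//
// Reversing by chars() reverses Unicode scalar values. That is fine for
// ASCII, but a grapheme like "e" + U+0301 (combining acute accent) gets
// torn apart: the accent ends up attached to the wrong character.
fn naive_reverse(s: &str) -> String {
    s.chars().rev().collect()
}

fn main() {
    // Works on simple input:
    assert_eq!(naive_reverse("abc"), "cba");

    // Fails on "abce\u{301}" ("abcé" with a combining accent): the correct
    // reversal would keep the accent on the 'e' ("e\u{301}cba"), but the
    // naive version moves the accent to the front instead.
    assert_ne!(naive_reverse("abce\u{301}"), "e\u{301}cba");

    println!("naive_reverse holds for ASCII but breaks grapheme clusters");
}
```

A correct solution here needs grapheme-cluster segmentation (e.g. the `unicode-segmentation` crate), which is exactly the kind of "more complex solution" jump the comment describes.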