It’s still very impressive. The EEG she uses only reads general thoughts: e.g. thinking about pushing a boulder. She can only really trigger specific actions with that: there’s no analog control (how much should this move), it’s just a single discrete action (fire a fireball). The brain chip likely has much higher fidelity and can therefore read much finer signals. All the credit goes to the researchers, of course, who’ve spent the last decade developing and fine-tuning this technology.
Samsung Galaxy Tab S8/9 Ultra. Publicly made fun of Apple’s notch, then released their tablets with a notch a few months later. (Although tbf it’s nowhere near as pronounced as Apple’s, and it’s mostly justified by the extremely thin bezels.)
I did this with many languages. I spoke Hindi, but convinced people I could speak the other related languages (Telugu, Marathi, etc.) by just saying random things in my little fake accent. Usually ended it with some small “sharp” words (like “tittu”, it just sounds “sharp”) to really sell it.
Maybe I’m just lazy, I’ve only invested 10-15 hours total into my config.
Once I got it working, I’ve never really bothered to touch it. (I probably should, it’s most likely months out of date…just like my NixOS config…)
Next time I make changes will probably be when I update to 0.10 for inlay hints and set that up along with attempting to fix that error message that randomly pops up every time I start Neovim.
Also probably not the typical Neovim config experience, but I’ve configured it enough to get out of my way; now I just want to write code.
I mean, it will be. The AI friend is always available, always knows what to say, never fights with you, and never messes up (ideally).
However, all those things are part of the human element, and in the end, you’re still talking to a computer. The AIs are just trying to please you. A person can actually love you, and that’s something else. I’d take that over the perfect chatbot any day.
AI’s not bad, it just doesn’t save me time. For quick, simple things, I can do it myself faster than the AI. For bigger, more complex tasks, I find myself rigorously checking the AI’s code to make sure no new bugs or vulnerabilities are introduced. Instead of reviewing that code, I’d rather just write it myself and have the confidence that there are no glaring issues. Beyond more intelligent autocomplete, I don’t really have much of a need for AI when I program.