
Posts: 21 · Comments: 709 · Joined: 3 yr. ago

It's not always easy to distinguish between existentialism and a bad mood.

  • Good find, this is never-take-me-seriously stupid, and it also does the beigeness thing of gradually working around an accepted definition in order to almost make a point at the last minute: here, that since (we have apparently concluded, because of uh hypothetical brain surgery and stuff) accountability = improvability + punishability and nothing else, of course software can be held "accountable" in all the ways that matter.

    His big mistake is not doing it at novel length, so it's really obvious that he's being willfully stupid about it.

  • Is that the guy who's always trying to use LessWrong as preemptive conversion therapy to cure him of having trans thoughts, and they're actually having none of it?

  • Neither 'untraceable' nor 'money' describes crypto very well, though; that's just how it's marketed.

  • I mean it's so cut and dried you had to invent a disadvantage for pushing the red button.

    Maybe the catch is that picking red means you're basically okay with offing, en masse, people who don't think like you do, even though it's posed as a dilemma between securing the lives of your family and giving a chance to hypothetical people who are heavily OCD in favor of blue buttons.

  • If this isn't pure engagement bait, what's the real-world situation this is supposed to map to? Pressing red means you always live, and if everyone pushes red, everyone lives, so...

    I mean, if blue is supposed to be a proxy for altruism, that usually doesn't come with a certain-death conditional.

  • Apparently, you buy some currency-type thing called AI Units, and this is the rate at which the different LLMs consume them. The multipliers used to represent requests, I think, i.e. the number of times you triggered inference, but AI Units are a proxy for token burn in a somewhat vague way, which makes me think there will be rate-limit controversies similar to what's now happening with Anthropic.

    Existing enterprise users will get double the AIUs for three months to ease them into the new pricing model, so autumn (when the enterprise AIU pools effectively get halved) is gonna be fun.

  • ZSNES makes a comeback, has a No Vibe Coding stipulation front and center.

  • most of them were not able to share their favourite brainworm without being infected by the others which were being passed around

    The good old cultic milieu.

  • This makes so much sense, and also explains why Siskind's readers are fine with him being openly disingenuous, sorry, I meant amenable to Straussian readings.

  • I use AI sparingly to make sure the company-paid subscription is a net loss for the AI vendor.

    Hey, it could happen.

    Overall, I think it was a bit cookie-cutter for an article of this type, but maybe it's just the preaching-to-the-choir effect. Even the fact that he ostensibly quit his job over this stuff doesn't hit as hard as it should; it comes off as if he could have done so at any time, but this way he gets to grandstand about it.

    Also stuff like this:

    It wasn’t a bad job, not by most metrics. It ticked the boxes a job is supposed to tick: good pay. Health insurance. Remote work. Time off. Nice coworkers.

    sounds like it should be in a "how do you do, fellow workers" copypasta.

  • The joke is bitcoin, not that sometimes quasi-financial instruments diminish in price.

  • The poster goes on to whine about how it sucks that it's not the same as solving puzzles and that they're just QAing all day now, and calls the LLM a slot machine, so at least they're not boosting.

    Still, they don't go so far as to say they were forced to work this way, so their not even looking at the code means either that they're a lazy bum or that it's too far past incoherent to be worth it.

  • Saw a remarkable take on the pro-AI parts of bsky: since DeepSeek420.69 can offer the model at like 15% of Claude's pricing, that must mean Anthropic is operating at an at least 80% positive margin on inference, so things will work out.

    In the same thread they complained about Zitron's math being dodgy.

  • Nadella told colleagues he has started testing Clawdbot

    Nadella? Satya "I run ten agents to read the news and my mails and explain them to me before sending answers in my place" Nadella? Weird he's still around; thought he'd be a bot by now.

    Clawpilot sounds completely cursed, can't wait.

  • Their heart seems to be in the right place (police interrogation will be exploitative and brainwashy with no real consequences for the interrogators), but they sure chose the dumbest possible way to make their point:

    Despite the claims of AI evangelists, chatbots aren’t people and haven’t achieved sentience. The differences between a chatbot and a real person, however, make Heaton’s ability to elicit a false confession more disturbing, not less.

    “ChatGPT lacks many of the vulnerabilities that make people more likely to falsely confess — like stress, fatigue, and sleep deprivation,” said Saul Kassin, a professor emeritus at John Jay College who wrote the book on false confessions. “If ChatGPT can be induced into a false confession, then who isn’t vulnerable?”

  • In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal.

    Wow, it's almost like alignment and AI ethics is less a serious academic field and more a prank capital likes to play on consumers.

    But I also think Zhao Tingyang's take, that alignment will make AI evil because people are evil, falls too much on the the-people-deserve-to-be-disempowered totalitarian-state funny-business side of things to be especially influential around these parts.

  • I assume that's more because they used AI to lift Zootopia's art style wholesale, and so that's just how rabbits are now.

  • It's not so much that he fails the purity test as that he thinks all gen-AI works like Whisper on a laptop and open-source LLMs grow on trees.

    edit: and also that big-company engineers are encouraged to be discreet about using them, like, what?

  • CEV is what he would want if he were wiser and less confused

    Isn't that just steelmanning?

    I gathered the "idealized version of myself" bit was there because it's supposed to be applied to a superintelligence, because of course it's an alignment thing.

  • SneerClub @awful.systems

    Storytime with Rationalist Rabbi Scott Alexander

    web.archive.org/web/20260310100727/https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/
  • TechTakes @awful.systems

    Apparently Anthropic may be about to be on the receiving end of some major banana republic shit from the Trump admin -- Update: Anthropic labeled supply chain risk by DoD.

    archive.is/20260226063523/https://www.axios.com/2026/02/25/anthropic-pentagon-blacklist-claude
  • TechTakes @awful.systems

    Peter Thiel Antichrist lecture: We asked guests what the hell it is

    sfstandard.com/2025/09/16/peter-thiel-antichrist-san-francisco/
  • TechTakes @awful.systems

    Albania appoints AI bot as minister to tackle corruption

    www.reuters.com/technology/albania-appoints-ai-bot-minister-tackle-corruption-2025-09-11/
  • SneerClub @awful.systems

    Where Scoot makes the case about how an AGI could build an army of terminators in a year if it wanted.

    www.reddit.com/r/slatestarcodex/comments/1kp3qdh/how_openai_could_build_a_robot_army_in_a_year/
  • TechTakes @awful.systems

    OpenAI scuttles for-profit transformation

    www.axios.com/2025/05/05/opena-nonprofit-altman-chatgpt
  • TechTakes @awful.systems

    "If a man really wants to make a million dollars, the best way would be to start his own social network." -- L. Ron Altman

    www.theverge.com/openai/648130/openai-social-network-x-competitor
  • TechTakes @awful.systems

    UK creating ‘murder prediction’ tool to identify people most likely to kill

    www.theguardian.com/uk-news/2025/apr/08/uk-creating-prediction-tool-to-identify-people-most-likely-to-kill
  • NotAwfulTech @awful.systems

    Advent of Code 2024 - Historian goes looking for history in all the wrong places

    adventofcode.com
  • SneerClub @awful.systems

    New article from reflective altruism guy starring Scott Alexander and the Biodiversity Brigade

    reflectivealtruism.com/2024/10/31/human-biodiversity-part-4-astral-codex-ten/
  • TechTakes @awful.systems

    It can't be that the bullshit machine doesn't know 2023 from 2024, you must be organizing your data wrong (wsj)

    archive.ph/dlyen
  • TechTakes @awful.systems

    Generating (often non-con) porn is the new crypto mining

    www.pcgamer.com/hardware/an-ai-company-has-been-generating-porn-with-gamers-idle-gpu-time-in-exchange-for-fortnite-skins-and-roblox-gift-cards/
  • SneerClub @awful.systems

    SBF's effective altruism and rationalism considered an aggravating circumstance in sentencing

    www.citationneeded.news/sam-bankman-frieds-sentencing/
  • SneerClub @awful.systems

    Rationalist org bets random substack poster $100K that he can't disprove their covid lab leak hypothesis, you'll never guess what happens next

  • SneerClub @awful.systems

    Hi, I'm Scott Alexander and I will now explain why every disease is in fact just poor genetics by using play-doh statistics to sorta refute a super specific point about schizophrenia heritability.

    www.astralcodexten.com/p/some-unintuitive-properties-of-polygenic
  • SneerClub @awful.systems

    Reply guy EY attempts incredibly convoluted offer to meet him half-way by implying AI body pillows are a vanguard threat that will lead to human extinction...

    nitter.net/AndrewYNg/status/1736577228828496179
  • SneerClub @awful.systems

    Existential Comics on rationalism and parmesan

    existentialcomics.com/comic/526
  • TechTakes @awful.systems

    Turns out Altman is a lab-leak covid truther, calls virus 'synthetic' according to Spectator piece on AI risk.

    archive.is/20231123095653/https://www.spectator.co.uk/article/virology-poses-a-far-greater-threat-to-the-world-than-ai/
  • SneerClub @awful.systems

    Rationalist literary criticism by SBF, found on the birdsite