I'm convinced that if all the left-leaning non-voters who opposed Biden because of Gaza had gone and voted "Uncommitted" in the primary, the DNC strategists would have run a different campaign.
I misunderstood what you were saying. Yeah I believe you could be right about this. I'm on board with your analysis now.
He's not selling anything specific, and not to end users. You're talking about something completely different. The way Sam and investors and corporate customers talk about AI is pretty misleading, but it's not misleading users. No one looks at AI replacing CSRs and inventing new sciences, whatever the fuck that means, and jumps to "it can unerringly diagnose a rash." And even if they did, the bot explicitly says not to trust it.
If some dirt farmer asks it how to avoid losing his family farm in a drought and takes ChatGPT's advice to plant chocolate chips and loses the farm anyway, I suggest that's a user error.
We already know that candidates ignore non-voters, and we saw in 2024 that people voting explicitly for "not these people" has an effect.
Does it? If every one of those people had voted for Harris, she still would've lost by a landslide. If I'm a Democratic strategist, I'm looking elsewhere to gain voters.
Which isn't to say opposing genocide isn't a worthy goal in itself, of course, but when it comes to winning the election, Gaza doesn't move the needle enough.
So my question is what else? Don't get hung up on Gaza like it's a solution. It's a small piece of any winning strategy.
They don't. They say, "we made this thing, see what you can do with it." They also put disclaimers on ChatGPT to say not to rely on it to be correct.
One can infer from that that any use for which you are relying on accuracy is incorrect use. Which is why it's critical to have any output filtered through a domain-capable human.
A pencil is a tool with a pretty wide open purpose within the writing ecosystem. It can be used to document history or remember a phone number or draw a picture.
You can also stab yourself in the eye with it or plan a murder.
That's obviously bullshit, but he's not telling users they can develop time travel or something. That's the distinction I would draw. He's selling investment. That's not where the end users who are misusing ChatGPT are at.
My observation is that largely it's the downstream AI consumers who repackage it irresponsibly. That said, I don't hang on the words of Sam Altman and it's certain they are pushing the idea that AI is more capable than it is, but mostly what I see is them saying they built this thing and it does neat stuff and it can probably do neat stuff for you, use your imagination.
I believe a lot of the folks developing these tools would be horrified at the irresponsible ways vendors and end users are using it.
I have not seen OpenAI advertise ChatGPT as capable of medical diagnosis or therapy or anything like that. If you want therapy and can't afford better (and I think we can agree that AI is terrible at it), then there should be a therapy app with explicit safety controls.
The problem is someone created a screwdriver, which is handy for lots of screwdriver-shaped purposes, and someone is trying to carve a ham with it.
They put lives at risk the same way every single product at your local home improvement store does. When you misuse a tool for a purpose it wasn't intended and isn't good at, you're going to get bad results.
This is an issue for the educational system, not the legal system.
If Lemmy were my entire window to the world, I'd be worried about an echo chamber. It's not though, so I keep it as a safe place. I don't owe anyone my attention just because they have feels about something I say or we comment on the same boards.
Names come and go. Givessomefucks is the only one that comes to mind. Mostly I just respond to the words. My client shows an upvote total. I guess I start to recognize names when someone starts running that upvote total up.
Words are more important than identity. That's why I'm on anonymous social media in the first place. I might be a cantankerous asshole in one thread because mood, and the next I'm offering advice with authenticity and lived experience. Those two posts might as well be written by different people.
If you are asking because you are afraid someone would put together a profile of you engaging in or admitting to criminal activity, then I would suggest you don't do that.
If you are asking because you are trying to figure out how to build such a profile, I would suggest fucking directly off. I use exclusively anonymous social media for a reason. I don't try to promote myself, nor do I leverage my identity to lend any sort of authority to my posts here. By interacting with me, you are taking the chance I'm a bot or sociopathic liar or whatever. Either my words speak for themselves and serve as my bona fides or they do not.
Trying to strip away any of that anonymity, even for legitimate cause (such as to identify a bad actor), could also be used maliciously. I can't answer your question, but if I could, I would not.
You can use Cline with a local AI. It doesn't work great for enterprise-level stuff because the number of tokens quickly swamps my MacBook, but qwen code can easily handle bash/Python scripts and the like. Then you can use .clinerules to shape the agents, but it's all mostly vibes from there on out.
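For anyone who hasn't tried it: .clinerules is just a plain-text/markdown file (or folder of files) in the repo root that Cline feeds to the agent on every task. A minimal sketch of the kind of thing you'd put in it (these specific rules are my own invention, not an official template):

```markdown
# .clinerules (illustrative example only)

- Prefer bash for one-off file tasks and Python for anything with real logic.
- Never edit files under infra/ without asking first.
- Run the relevant tests after every change and show the output.
- Keep diffs small: one concern per change.
```

Even a handful of rules like this noticeably cuts down on the agent wandering off and doing things you didn't ask for.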
At work, we use a highly structured folder of agent prompts and infrastructure information with Claude. Because it's so highly structured, it feels more like intentional code/config, but I couldn't quantify any improvement metric at this time. There might be one, but it would be premature to suggest you'd see any improvement over vibe prompts.
I'm a technical lead for an AI-based startup and an AI enthusiast. I've been in software development for about 30 years. I'm responsible for making sure my teams use AI in their development process, enabling them, and measuring the results. So from the perspective of your average lemming, I am biased towards AI and all of the terrible things it heralds, and probably literally Satan. I want you to keep that perspective in mind as you read my thoughts.
AI can create simple applications well. If there is a tedious part of your job that takes time and focus away from your key job duties, AI can probably write a Python script to automate that for you.
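To make that concrete, here's the sort of throwaway chore script an LLM will happily generate on the first try. This is a hypothetical example of my own (merging a folder of CSV reports into one file, skipping the repeated header rows), not output from any particular model:

```python
# Merge every CSV matching a glob pattern into one file,
# keeping only the first header row. The kind of five-minute
# chore that's perfect to hand off to an LLM.
import csv
import glob


def merge_csvs(pattern, out_path):
    rows, header = [], None
    for path in sorted(glob.glob(pattern)):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader, None)
            if header is None:
                header = file_header  # keep the first header we see
            rows.extend(reader)       # skip the rest
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        if header:
            writer.writerow(header)
        writer.writerows(rows)
    return len(rows)  # number of data rows written
```

The point isn't that this is hard; it's that it's tedious, and tedium is exactly what's worth delegating.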
The capabilities of AI are continuing to expand by breaking your ask up into multiple smaller tasks, executing them, and verifying the output. However, the ability of AI is growing at a smaller exponent than its cost. AI is not sustainable currently. At some point, the true cost of all the data center construction, hardware, electricity, etc. will have to be passed on to customers, and AI development projects will become vastly more expensive.
AI doesn't think and doesn't learn (though RAG pipelines can make it more effective) which means it can't learn through failure. The number of times it has led me in a circle because it doesn't know how to fix something and keeps trying different things until it has spent $10-20 in tokens just to reinvent the original problem is high.
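Since I mentioned RAG: the retrieval half of it is conceptually simple, which is why it helps without requiring the model to "learn" anything. A toy sketch of the shape, using word overlap in place of real vector embeddings (all names here are my own, not any library's API):

```python
# Toy illustration of the "R" in RAG: pick the most relevant
# snippet from your own docs and prepend it to the prompt.
# Real pipelines score with vector embeddings; plain word
# overlap is used here purely to show the structure.
def retrieve(query, docs):
    q = set(query.lower().split())

    def score(doc):
        return len(q & set(doc.lower().split()))

    return max(docs, key=score)


def build_prompt(query, docs):
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}"
```

The model still doesn't learn from failure; it just gets handed better context each time, which is a meaningful but very different thing.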
The hardest parts of development aren't writing the code. The hardest parts are translating requirements into code. Identifying and reasoning about edge cases. Planning and architecting. Identifying design tradeoffs and recommending / picking the right one. Coordinating with stakeholders.
AI can help with those tasks, but it can't do those tasks. AI might slightly reduce the number of CSEs in the world, but it will never, ever replace a significant number of us. It can't. The code it produces sucks without knowledgeable human guidance.
My teams are seeing a 10-12% self-reported productivity gain (it will take a few months before we have verifiable velocity measurement, so take that with a grain of salt). We are aspiring to maybe 25% productivity gains on greenfield development. But to be honest, that's the company line. I'm hopeful but skeptical we will see even that. I use AI every day and it is helpful in lots of ways, but you have to recognize when it's going off the rails or doing the wrong thing.
I'm actually in the middle of reviewing draft acceptance criteria for a project I'm leading. The AI read all of the technical requirements and diagrams. It missed a bunch of stuff, got a bunch of stuff wrong, and most of what's left is not written for the right audience: this should be a product owner document that doesn't require examining code or databases to determine success, but because much of what we have is technical documentation, that's the register it wrote in everywhere.
I know this is getting long, but I want you to understand CSE jobs aren't going anywhere for a bunch of reasons. It remains a great field. There is likely to be some pain in the industry over the next few years as CEOs learn we cannot be replaced so easily, but if you are just getting started, I have a feeling you might enter the market on the other side of that just as there is a big hiring boom as they realize they've fucked up.