
  • I said I was focusing on copyleft, cool that you ignored the entire post though. 😑

  • I think the reimplementation stuff is a separate question, because the argument for it working looks a lot stronger, and because it doesn't depend on the source material containing LLM output. Also, if this method holds up as legally valid, it's going to be easier to just do that than to justify copying code directly (which would probably have to be limited to copies of the explicitly generated parts of the code, requiring you to figure out how to replace the rest), which means it won't matter whether some portion of it was generated.

    Is it a separate question, though?

    Both works are copyrighted; one is just copyrighted as "all rights reserved" (our leaked commercial code) and the other is licensed as LGPL. We're feeding both pieces of code into the LLM and then asking the LLM to make a new version.

    What makes the act of leaking different from the act of putting it on the web? Rights are reserved in either case.

    If they aren't entirely generated, you can't make a full fork, and why would a partial fork be useful?

    Well, people contribute to copyleft codebases expecting that when others build on their work, the derivative works will be licensed the same way. You don't need a fork for that value to be lost. People expected virality to be part of their contribution, and the new derivative works are clearly partially non-copyleft.

    Beyond that, as more of the codebase is LLM-produced, less of it is protected by the copyleft license, until we have a ship-of-Theseus situation where the codebase is available but no longer copyleft. That is clearly not what was intended by, e.g., the GPL. Just look at the Stallman quote in the post.

  • IANAL, but does it even work like that? Is there any specific reason to think it does? I don't believe you really get credit for purity and fairness vibes in the legal system. The same goes for the idea that code whose AI provenance is ambiguous could be considered public domain; that seems implausible. Is there actually any reason to think the law works that way? If it did, then any copyrighted work not accompanied by proof of human authorship would be at risk, which would be uncharacteristic for a system focused on giving big copyright holders what they want without trouble.

    I'm mostly just playing along with your thought experiment. As I said, we know that projects are already accepting LLM code into projects that are nominally copyleft.

    There is no way. Leaks happen, big tech companies have massive influence, and a situation where their code falls into the public domain the moment the public gets its hands on it just isn't realistic.

    If that is the case, is chardet 7.0.0 a derivative work of chardet, or is it a public domain LLM work? The whole LLM project is fraught with questions like these, but it seems that the vendors at least are counting on not copying leaked software and instead copying open source code that is publicly hosted.

    Why is it okay to strip copyright from open source works but not from leaked closed source works?

    We know that Disney is suing to protect its works. If it's true that LLM outputs are transformative, Disney should lose, as should any vendor whose leaked code was "transformed" by an LLM.

  • But in that world, any source code leak is also open-sourcing.

    I don't see how that helps free software, though. Those programmers got paid. Volunteers didn't.

    It ends up in a weird reverse Robin Hood situation. LLM vendors steal from the poor and sell that to the rich. Do the rich give back? Only if it's stolen from them.

  • Making use of the non-copyrightability of AI output to copy code in otherwise unauthorized ways does not seem like a straightforward or legally safe thing to do. That's especially the case because high-profile proprietary software projects also make heavy use of AI; it doesn't seem likely the courts will support a legal precedent that strips those projects of copyright and allows anyone to use them for whatever.

    I think what may happen in practice could be worse: if we can't tell whether some code is the work of a human, but the project accepts AI code, and we forgo the analysis of what was produced by a human, the entire project may be deemed public domain -- perhaps everything after a certain date (when LLM contributions were first welcomed).

    Beyond that, by integrating LLM code into those projects, the projects are signifying assent to having their works consumed by LLMs, and that covers the whole work, not just the LLM-produced portions. It is hard to be doctrinaire about adherence to the open source license when the maintainers themselves are violating it.

    We may see a future where copyrights for works become more like trademarks - if you don't make any attempt to protect your work from piracy, you may simply lose the right to contest its theft.

    Obviously, it is as you say: today the courts may smile upon a GPL project whose code a commercial vendor copied and released as their own without sharing alike. But if the vendor instead says they copied the work into their LLM and produced a copy without protections (as chardet has done), the courts might be less willing to afford the project copyright protections if the project itself was using the same copyright-stripping technology to claim protections over copied work.

    Besides which, "authored by Claude" seems like a pretty easy way to find public domain code, and as Malus presents, the only code that may ultimately be protected is closed source code: you can't copy it if you don't have the source.

    "People may try to pass off LLM code as their own" is a nice diversion, but it's ancillary to the existing situation where projects are incorporating public domain code as if it were licensed. We can start there before we start worrying about fraud.

  • Can I legally reverse engineer AI generated software?

    If you have the source, why would you need to?

    Can you even put terms and conditions on this supposedly public-domain, copyright-free compiled software product?

    You can put terms on anything, but you can't protect the underlying asset if someone breaks your terms. Think of the code produced by Grsecurity that they put behind a paywall: people were free to release the code (since it was licensed as open source as a derivative work), but obviously Grsecurity was able to discontinue their agreement with any clients who did so.

    Is the compiled version even different than the raw AI generated source code in its ability to be licensed?

    People aren't generally licensing compiled binaries as open source, since you can't produce derivative works from them. But I think that if there is no copyright protection for the work, compiling it doesn't change its copyrightability. Curious what you think.

    What rights does one have to AI generated code? Be it compiled or source. It’s surely not just communal.

    Why is that surely the case? It is public domain - that is the most "communal" you can get for copyright.

  • I have seen this sentiment, but I don't know what the world looks like without copyright protections for creative works.

    Does open source exist in your vision? How?

    My imagination for this topic may not be as expansive as yours, but my interpretation is that if people contribute code to the commons, it will be immediately available for any use, including by massive corporations.

    So it ends up looking like people working for big companies for free.

  • as soon as it's modified by a human in nontrivial ways

    is doing a lot of heavy lifting here.

    We know that people are using coding LLMs as slot machines - pull the handle and see if it solves your problem. Where is the human modifying anything? That is a "straight dump" of AI output without modifications.

  • Honestly, if AI destroys copyright, it's the best thing it can do.

    I have seen this being said, but I really don't understand it. Just because copyright can be abused doesn't mean (to me) that we ought to throw the baby out with the bathwater.

    If copyright no longer exists, what incentive do people have to share copyleft code at all? Copyleft clearly could not exist without it, so can you help me understand how copyright can be dead while open source lives on? Or are you simply accepting that instead of copyright, we protect works as trade secrets (like the KFC chicken recipe)?

  • I don't really think we need to go down the copyfraud path to see that AI code damages copyleft projects no matter what - we know that some projects are already accepting AI generated code, and they don't ask you to hide it - it is all in the open.

  • How does this apply to software made by, say, Anthropic? They proudly say Claude Code is written by AI. If it can't be copyrighted, or licensed, then it's just a matter of figuring out how to acquire a copy of the source code, and you could do whatever you want with it. Right?

    If you were on Mastodon last week when the Claude source code was released (by Claude, accidentally), people were joking about how Anthropic was trying to use the DMCA to get the source removed from websites -- even though copyright clearly doesn't apply, since the code is in the public domain.

    If the LLM wrote the code, it is uncopyrightable.

  • All works created by a person are copyrighted by default, so people need to license their works to allow others to build on or use them (beyond the limited uses allowed by fair use). Like-minded people have come up with various licenses that let authors release their works on the terms they prefer.

  • Except for the fact that it is public domain and not protected by the open source license that the code is ostensibly submitted under.

  • Fuck AI @lemmy.world

    AI Code is Hollowing Out Open Source, and Maintainers are Looking the Other Way

    www.quippd.com /writing/2026/04/08/ai-code-is-hollowing-out-open-source-and-maintainers-are-looking-the-other-way.html
  • Technology @programming.dev

    Link Preview Manifest: A Proposal for the Fediverse

    www.quippd.com /writing/2026/03/09/link-preview-manifest-a-proposal-for-the-fediverse.html
  • What is the purpose of this message?

    But also, yes, I love the cliché, because people just moan on Lemmy about things instead of actively raising the issue with the appropriate people through the appropriate channels.

    Airing your grievances out here is fun though. I get it. But asking the people upset about it to do some work is also fun.

    Awfully ironic to say this, now that "code is free". What work are we talking about?

  • Well, you could import the same policies into Firefox that LibreWolf uses, or it might be some workaround in your graphics driver that acts on the filename of the Firefox executable.

    Obviously you don't need to test, but just throwing out ideas if you would want to.
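
    To sketch what I mean: Firefox reads enterprise policies from a `distribution/policies.json` file in its installation directory, and LibreWolf ships its hardening that way. The exact policy set LibreWolf uses differs; the keys below (`DisableTelemetry`, `DisableFirefoxStudies`, `DisablePocket`) are just real examples from Mozilla's policy templates, picked to show the shape of the file:

    ```json
    {
      "policies": {
        "DisableTelemetry": true,
        "DisableFirefoxStudies": true,
        "DisablePocket": true
      }
    }
    ```

    A file like this next to the Firefox binary applies on the next launch, and `about:policies` shows which policies were picked up.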

  • Might just be a Firefox bug, since it's a very light fork. Try Firefox and see if it does the same thing.

  • Why do you want people to stop discussing this? Are you running community management for Mozilla? If not, why is discussing it not "addressing the issue"? People are engaging in discourse.

  • I'm seeing closed bugs from 2021 here, are we supposed to take these seriously?

  • Firefox @fedia.io

    Firefox’s AI Kill Switch is a Trap: How Mozilla Made AI Your Problem

  • Firefox @fedia.io

    Launching Interop 2026

    hacks.mozilla.org /2026/02/launching-interop-2026/
  • @linux on Linux.Community @linux.community

    Fifteen Years of Waterfox: Alex Kontos on Independence, AI, and the Future of Browsers

    www.quippd.com /writing/2026/02/02/fifteen-years-of-waterfox-alex-kontos-on-independence-ai-and-the-future-of-browsers.html