Posts
102
Comments
141
Joined
2 yr. ago

  • I started off with a version I created manually, without AI. I know how to do this old-school (I tried). That was a different kind of slop.

    https://github.com/positive-intentions/chat

    I use AI in a way I think is appropriate. I check as much as I can myself too, and I post online about details and questions so I can iterate with AI. I may be naive to think I know how to inspect what is created, which is why I share it online. I'm not sharing slop; this is the best I can do. Of course there are countless points of improvement, but there are only so many hours in the day.

    You're sharing a valid opinion, but it's difficult for me to quantify my efforts. I'm sure you don't think I just asked AI something basic (e.g. "verify this code is correct").

  • Most I want is transparency.

    I agree with everything you're saying, especially this, which is why I entertain the idea of open source at all. What does transparency look like to you? Code? Documentation? Open discussion? Transparency is undermined when I'm trying to talk about something genuinely complicated in order to seek feedback.

    cryptography code… Isn’t that a bit dangerous?

    In software dev we have things like unit tests (you already know that)... but when diving into cryptography we also have formal proofs and verification we can use. AI isn't needed to extract an abstraction from the code implementation to run verification on. The tooling there is common practice, and if we question whether AI is doing it properly, we bring into question whether the tooling itself is good enough.

    • security audit
    • unit tests
    • formal proof
    • formal verification
    • documentation

    Individually, they could all easily be AI slop, but combined I hope they can serve as a starting point for a proper review. I don't mean a proper review from you, either... I was seeking a review from orgs that specialise in such reviews.

    https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open

    You make a lot of assumptions about how I code and what I understand about my project. Enumerating what I've done and plan to do wouldn't do it justice... but I will say this project is the result of a long-term effort. I created the project without AI originally. The idea is unique around client-managed cryptography (https://github.com/positive-intentions/chat)... Ultimately it became clear to me that open source is dead, so I've started introducing less transparency into the project with a closed-source UI. I still keep the cryptography-related modules open for transparency (whatever that's worth when people see that AI was involved).

    I wouldn't put my project out there if I didn't have faith in the implementation. I have actively sought feedback and received good advice, from which I iterated and improved. It's particularly concerning if I'm being banned from communities for posting slop.

  • I vibe-code a lot of things, but my project is not inherently dangerous. People can use any software irresponsibly. In my project and all my communications about it, I make it clear to users to use it cautiously and that it's presented for testing and demo purposes. It's mentioned in all of my posts, and I also have terms and conditions within my projects that explain as much.

    Nobody is being tricked into sharing sensitive information... In fact, I made a proactive attempt to create something that doesn't need any personal information.

    Don't tell me what I should and shouldn't be coding. I put time and effort into testing and verifying. This is the issue with mentioning AI: it undermines all other efforts. It's the low-hanging fruit of criticism.

  • Perfect. You get it. You understand that generating an AI audit is wild!

    https://www.reddit.com/r/CyberSecurityAdvice/comments/1su8lir/security_audit_feedback_from_radically_open

    The AI audit comes after a long period of to-and-fro with the various communities that asked for an audit... Of course they asked for a professional one... but those that ask must know that they are all prohibitively expensive, especially for a solo vibe-coding dev like myself.

    I also understand that people would prefer a project with a team of experts... Sorry to break it to you, but a team of experts is not going to hire themselves onto an unfunded project like this.

    While the security audit, unit tests, formal proofs, and verification are not good enough when done with AI, my hope was that they could serve as a starting point for anyone like ROS to perform an actual review. I can't offer more transparency than open source, documentation, and discussions.

  • To call it slop just undermines the time and effort I put into the project. It's not just code; I put effort towards testing and documentation. But sure... if you want to believe you're poking holes in big tech's practices here.

  • The recent post that got me banned was a copy of this post here:

    https://www.reddit.com/r/cybersecurityai/comments/1sxvrmu/browserbased_file_encryption_no_install_or/

    I make a point in all my posts to be clear about the caveats. I'm not promoting this to replace anything. Details for finding out more are there, along with advice not to use it for sensitive data.

    For my messaging app, the caveats are similarly mentioned: https://positive-intentions.com/docs/technical/p2p-messaging-technical-breakdown

    My projects are research and development projects, which I make sure to make clear when I post about them. I'm fairly consistent with advice around cautious use... knowing full well that it will deter people. I'm proactively seeking criticism in order to improve them.

    It produces such vast quantities of code (and often unnecessarily) that it becomes infeasible for a human to review it, immediately requiring us to place trust in the machine to both generate it and review it, and to continue maintaining it while the human operator probably does not even have full understanding of what’s changing.

    Bingo!... Your framing it as a negative is understandable, but unless I'm mistaken, that's the way it's going to have to go. Software development, broadly speaking (for better or worse), is going to be AI-generated. The tooling and methodologies have to keep up.

    horrible impacts it has on our world

    That's pretty vague; I'm sure it does some good too. AI is a tool. It's easy to talk about how AI is impacting people badly. Personally, I've been unemployed for the past few months. It's a horrible experience to go through countless interviews thinking I aced them, but still come up with a rejection because the field has become so competitive. But I don't blame AI for that. It's a tool that I need to learn how to use. Perhaps others use it better than me.

  • I used opencode (various models) and Cursor (Claude, Composer).

    How these models are trained is arguably not ethical. The disregard of code licences is not something I can influence.

  • Completely understandable, hence the proactive attempt to get a professional security audit, so I can avoid asking people to "trust me".

    It's completely understandable that you want to use something established. I can't offer more than open source and transparency in the implementation. If "trust" is behind the "paywall" of a security audit, it's simply not an option without support.

    I used AI to generate an audit. It took several days of my time and effort to get it to where it is. I made a genuine attempt to be objective.

    In SWE we already have things in place for this, like unit tests. If we dive further into cryptography, we have things like formal proofs and verification.

    Formal verification has tooling to help make sure things work and behave how they should. (Without AI) it can look at the code and create abstractions that can be used for verification. If we question whether AI can be used with such tooling, we start discussing whether the tooling itself is good enough (it's pretty widely used!).
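    As a toy illustration of what "an abstraction used for verification" means: real tools (ProVerif, Tamarin, etc.) model a protocol as something like a state machine and check properties over every reachable state, rather than testing individual runs. The handshake states and invariant below are entirely made up for the sketch; they are not the project's actual protocol.

```javascript
// Hand-rolled miniature of verification-by-abstraction (assumption: a
// made-up handshake model, not output of a real verification tool).
// We exhaustively walk every reachable state and check an invariant,
// instead of testing one particular execution.
const transitions = {
  start:       ["sent_offer"],
  sent_offer:  ["keys_agreed", "failed"],
  keys_agreed: ["messaging"],
  messaging:   ["messaging", "closed"],
  failed:      [],
  closed:      [],
};

// Collect every state reachable from `from` by following transitions.
function reachableStates(from, seen = new Set()) {
  if (seen.has(from)) return seen;
  seen.add(from);
  for (const next of transitions[from]) reachableStates(next, seen);
  return seen;
}

// Invariant: the only ways into "messaging" are from "keys_agreed"
// (the handshake completed) or from "messaging" itself.
function checkInvariant() {
  const predecessors = Object.entries(transitions)
    .filter(([, nexts]) => nexts.includes("messaging"))
    .map(([state]) => state);
  return predecessors.every((s) => s === "keys_agreed" || s === "messaging");
}

console.log(reachableStates("start").size); // all 6 states are reachable
console.log(checkInvariant() ? "invariant holds" : "invariant violated");
```

    The point is that the check is over the abstraction, not over any single test input; whether AI helped write the concrete code doesn't change what the tooling verifies about the model.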

    If the conversation can't move past the fact that I used AI, then we're not really having a discussion.

  • AI involvement = slop

    That's the part that seems disconnected from reality. I'm sure there are still people cranking out code manually, but let's be real: it isn't normal anymore.

    In cybersec, there is more scrutiny than in most fields against the use of AI... I simply can't believe that the folks at WhatsApp, Signal, or SimpleX are not using AI in their daily workflows.

  • AI slop is easy to generate, but there needs to be a recognition that at some point AI-generated code is no longer slop. The failure to recognise that seems to be the issue that got me banned.

  • Programming @programming.dev

    We can't just call AI-generated code slop anymore

  • DevOps @programming.dev

    Multi Vendor Deployment with Infrastructure as Code

  • Secure Coms @programming.dev

    P2P WhatsApp Clone – No Setup or Signup

  • Cybersecurity @sh.itjust.works

    P2P WhatsApp Clone – No Setup or Signup

  • Thanks for the tip. It seems NLnet uses Radically Open Security, so I pinged them an email.

  • Opensource @programming.dev

    What are my options for a security audit for my FOSS project?

  • Web Development @programming.dev

    JSX for Web Components

  • JavaScript @programming.dev

    JSX for Web Components

  • Programming @programming.dev

    JSX for Web Components

  • JavaScript @programming.dev

    WhatsApp Clone – No Setup or Signup

    positive-intentions.com
  • Secure Coms @programming.dev

    ML-KEM (Kyber) and X3DH for a P2P WebApp in JavaScript

  • I'm upfront about it. I'm sure you can imagine how AI can help in software development. I can't be more transparent than making it open source.

  • Privacy @programming.dev

    Signal Protocol for a Web-Based Messenger

  • That's why it's kinda the first thing I mention in the post. How do you think I could make this clearer? It's also in the readme and the terms and conditions in the app.

    In my open-source version, it's at the top of every page. It isn't a good look, and I don't want to slap people in the face with words of caution.

  • Thanks for taking an interest.

    I think the most stable version on my app is here: https://p2p.positive-intentions.com/iframe.html?globals=&id=demo-p2p-messaging--p-2-p-messaging&viewMode=story

    I would suggest clearing all site data before creating a new connection. I hope the UI makes it intuitive which link needs to be copied and where it should be pasted on the peer's side.

    (If that doesn't work, try it locally with different browsers or incognito.)

    Can you tell me which features you are interested in? They are all "coming soon" and a matter of more time and effort. I could spend all my time on a nice UI, but that takes away from working on the cryptography and documentation. It's important to be clear that it's testable, but far from finished.

  • I hope the latter. It's provided as a testable demo. It isn't finished, but I see it working as I expect. I post about it to encourage feedback.

    If you're interested, there's technical documentation here: https://positive-intentions.com/docs/technical . Feel free to reach out for clarity on any details.

    It's provided as a demo, and I try to be clear that it is NOT ready for your trust (there could be breaking changes and bugs)... but I hope it's clear that gaining user trust is the general aim when I share open-source code and documentation.

    Having professionals review it would be great... I think I'm being realistic that it isn't going to be an option anytime soon.

  • I have applied to some grants (some specifically for security audits of open source projects). So far, all rejections.

    If you're asking for one, you must know a professional security audit is pretty expensive. The best I can offer is open-source transparency.

    It's important to maintain the wording around "work in progress" because there may be breaking changes, which ultimately means it's far from ready for an audit.

  • Secure Coms @programming.dev

    Signal Protocol for a P2P PWA

  • The Signal messenger and protocol. @lemmy.ml

    Signal Protocol for a P2P PWA

  • I think I use it an appropriate amount; I'm not sure how to quantify that. I use different AI models on different tasks, in the code as well as the documentation.

    It's worth repeating that it's far from finished, and I hope with feedback I can make it better. I have put effort towards unit tests, an audit, and formal proofs. None of that is good enough, but I hope it can act as a starting point for verifying that the implementation is correct.

    I get the whole semantic versioning rhetoric, branching strategies, etc. This project is a while away from being promoted as "perfect". It is still a work in progress.

    I'm sure people have better things to do with their time than review unstable and unfinished code. As a solo dev on this, there isn't anyone reviewing my code. If I don't share it like this, no one will come across it. I hope you can understand that I get pushback when I promote my messaging app as "secure", so this transparency is necessary.

  • Rust @programming.dev

    Signal Protocol in Rust for Frontend Javascript

  • Cybersecurity @sh.itjust.works

    SimpleX Clone - No Setup or Signup

  • Programming @programming.dev

    Decentralized Microfrontend Module Federation Architecture

  • JavaScript @programming.dev

    WhatsApp Clone... But Decentralized and P2P Encrypted Without Install or Signup

  • Web Development @programming.dev

    WhatsApp Clone... But Decentralized and P2P Encrypted Without Install or Signup

  • Programming @programming.dev

    Quantum-Resistant Encryption in JavaScript

  • Cybersecurity @sh.itjust.works

    WhatsApp Clone... But Decentralized and P2P Encrypted Without Install or Signup