The backdoor of the xz utils program(s) was in the tarball release, but not the main source code:
https://en.wikipedia.org/wiki/XZ_Utils_backdoor
If Debian had skipped the upstream tarball and built from git, they wouldn't have been affected by this.
Is this because of the xz utils thing? The backdoor was included in the tarball, but it wasn't in the git repo.
By switching away from tarballs they probably hope to prevent that, although this article doesn't mention it. It's possible this shift has been happening since before the xz utils incident.
This is exactly why Syncthing is problematic as a backup solution.
If I delete a file on one host and Syncthing is doing the default two-way sync, the deletion is replicated to the other machine too.
They acknowledge this in their faq: https://docs.syncthing.net/users/faq.html#is-syncthing-my-ideal-backup-application
You can mitigate some of these issues with file versioning or one-way syncs, but ultimately it's just not the right tool for the job.
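For reference, both mitigations are just folder options in Syncthing's config.xml (a sketch with example ids and paths; the same settings are exposed in the web GUI):

```xml
<!-- Sketch of a Syncthing config.xml folder entry (id/label/path are examples).
     type="sendonly" makes this side push changes but never accept them;
     the other side would use type="receiveonly". -->
<folder id="backup-docs" label="Documents" path="/home/user/Documents" type="sendonly">
    <!-- staggered versioning keeps deleted/changed files around, here for up to a year -->
    <versioning type="staggered">
        <param key="maxAge" val="31536000"/>
    </versioning>
</folder>
```

Even with this, you get replication rather than point-in-time backups, which is the FAQ's point.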
Did you post this right as I edited the title? Lol.
Late reply but I also recommend going through flathub for screenwriting apps if you want more. I saw some options that looked pretty good, although many were proprietary.
Not really? From this page, it looks like all you need is a salsa.debian.org account. They call this being a "Debian developer", but registration on Debian's Salsa is open to anybody, and you can just sign up.
Once you have an account, you can use Debian's Debusine normally. I don't see how this is any different from being required to create an Ubuntu/Launchpad account for a PPA. It's really just pedantic terminology: Debian considers anybody who contributes to their distro in any way to be a "Debian Developer", whereas Ubuntu doesn't.
If you don't want to create an account, you can self-host Debusine, whereas it looks like you can't self-host the server that powers PPAs. I consider this a win for Debusine.
Make sure you stream with the "linux" tag so that people like me who follow that tag can find you!
Lmao I love the opening choice used in the demo.
Okay, it's good to know that your work with Redis is on a different setup.
Can you confirm what versions of Nextcloud and Collabora are being deployed? Another user in the original thread mentioned that they stopped encountering this issue after cp-25.04.8-1.
It could be a bug that was only patched after a specific version.
Proxmox is based on Debian, with its own virtualization packages and system services that do something very similar to what libvirt does.
Libvirt + virt-manager also uses QEMU/KVM as its underlying virtual machine software, meaning performance will be identical.
There may be a tiny difference because libvirt uses the more performant SPICE for graphics while Proxmox uses noVNC, but it doesn't really matter.
The truly minimal setup is to use QEMU/KVM directly; virtual machine performance will be the same as with libvirt, in exchange for a very small reduction in overhead.
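For illustration, a direct QEMU/KVM invocation looks something like this (disk path and sizing are examples); it's roughly the kind of command libvirt and Proxmox generate for you under the hood:

```shell
# Minimal QEMU/KVM launch sketch: hardware acceleration, virtio disk and NIC.
# disk.qcow2, memory, and CPU counts are placeholders for your own setup.
qemu-system-x86_64 \
  -enable-kvm \
  -m 4G -smp 2 \
  -cpu host \
  -drive file=disk.qcow2,if=virtio \
  -nic user,model=virtio-net-pci
```

What libvirt adds on top is mostly management: XML-defined domains, snapshots, autostart, and a GUI, not VM performance.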
If this is the thread you are referring to, it is far from "vitriol" or being "combative". You said it yourself: there are two other users testing who were able to reproduce your issue.
The person who was unable to reproduce your issue is still being helpful, because we can confirm that their specific setup (powerful server + Ubuntu snap) doesn't encounter it. Of course they are not going to offer further troubleshooting advice; what can they do? They aren't encountering the issue, so they can't help you in the hands-on way the other commenters are. Instead, they pointed you to some other places you could ask for further troubleshooting.
"I can't help you" is very, very different from "fuck off!".
Look, I get it. You're tired, and probably frustrated. Just take a break or something. It's clear that making this post didn't advance your goal of troubleshooting this issue.
Now, let me take a crack at it. Nextcloud is one of maybe three pieces of software, off the top of my head, that can run into performance issues when it is deployed without an in-memory cache of some sort. It looks like you were trying to install Redis here, although I don't know how far you got, or whether this was even the same Nextcloud setup.
Many people encounter performance issues with the manual install that they don't encounter with "distributions" of Nextcloud that include Redis or other performance optimizations, like the Docker AIO install... or the snap version that the person who wasn't encountering the issue used. So yes: knowing that someone doesn't encounter an issue is useful information to me.
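For context, on a manual install the in-memory cache is just a few lines in config/config.php, which is exactly the kind of thing a managed host may or may not have configured (a sketch with example values; it assumes the PHP redis and apcu extensions are installed):

```php
<?php
// Sketch of the caching section of Nextcloud's config/config.php.
// Host/port are examples; a managed provider may use a socket or remote Redis.
$CONFIG = array (
  'memcache.local' => '\OC\Memcache\APCu',        // per-process cache
  'memcache.locking' => '\OC\Memcache\Redis',     // transactional file locking
  'memcache.distributed' => '\OC\Memcache\Redis', // shared cache across workers
  'redis' => array (
    'host' => 'localhost',
    'port' => 6379,
  ),
);
```

If you can't read or edit this file on your host, that alone tells you how much tuning is within your reach.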
Can you confirm, both here and in the original thread, what deployment method your hosting provider is using for Nextcloud? That would eliminate a lot of variables and let people give you more precise debugging advice, since debugging a Docker or snap version is different from debugging a raw LAMP-stack install. Right now we are essentially flying blind, so it's no wonder no progress has been made.
have you considered contacting hosting support?
Of course not. I came to the available discussion forum to investigate a situation which may or may not be a flaw, and is clearly not a hosting company’s responsibility. Besides the fact that they would likely tell me exactly that if I get a response at all, I always explore all other avenues before opening tickets and GitHub issues.
Lmao. You pay them for a seamless Nextcloud service, and that includes support. But to be blunt, we can't really help you if we don't know what the hosting provider is doing.
If this is a performance-optimization problem, you may not have the privileges on the server needed to fine-tune Nextcloud and fix it.
If this is a bug, you can't really see granular logs from the Nextcloud host; same thing.
Idk what to tell you. You are trying to manage managed Nextcloud like it's self-hosted Nextcloud, and you're getting frustrated when people tell you that you might not have the under-the-hood access needed to fix what you want to fix.
Probably the binary blobs.
Ventoy uses binary blobs which can't be trusted to be free of malware or compliant with their licenses: https://github.com/NixOS/nixpkgs/issues/404663
See the following issues for context:
https://github.com/ventoy/Ventoy/issues/2795
https://github.com/ventoy/Ventoy/issues/3224
Source: https://github.com/NixOS/nixpkgs/blob/c6f52ebd45e5925c188d1a20119978aa4ffd5ef6/pkgs/by-name/ve/ventoy/package.nix#L213 (nixpkgs git repo)
I will admit that I still use ventoy though.
Idk what to tell you. I linked to sources showing that flathub signs everything, and that flatpak refuses to install unsigned packages by default.
If you have anything contrary feel free to link it.
Also, you multi-replied to this comment. I sometimes had this issue with Eternity.
To copy what I said when this was posted in another community:
The PNG didn't do shit. Users were compromised by a malicious extension.
Steganography (hiding data in a PNG) is a non-issue and cannot do anything on its own. It is also practically impossible to stop.
Which is probably why the cybersecurity news cycle likes to pretend that steganography is a risk on its own, so that they can sell you products to stop this "threat".
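For what it's worth, hiding data in an image is trivially simple, which is part of why it can't be stopped. A toy least-significant-bit sketch (illustrative only; a plain byte array stands in for decoded PNG pixel data):

```python
# Toy LSB steganography: stash message bits in the low bit of each "pixel"
# byte, then read them back. The carrier is visually unchanged because only
# the least-significant bit of each byte is touched.

def embed(pixels: bytes, message: bytes) -> bytes:
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]  # LSB-first
    if len(bits) > len(pixels):
        raise ValueError("carrier too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the low bit
    return bytes(out)

def extract(pixels: bytes, length: int) -> bytes:
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

carrier = bytes(range(256)) * 4          # stand-in for image pixel data
stego = embed(carrier, b"hello")
assert extract(stego, 5) == b"hello"     # payload round-trips
```

The point: the hidden bytes are inert data. Something else (here, the malicious extension) has to extract and run them.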
I hate the clickbait title is what I'm trying to say. But the writeup is pretty interesting.
Although the real solution to this problem is probably only letting users install known safe extensions from an allowlist, instead of "pay us for consulting!".
I have a similar setup, and even though I am hosting git (Forgejo), I use a plain SSH git server as the source of truth that k8s reads.
This prevents an ouroboros dependency where Flux uses the git repo from Forgejo, which is itself deployed by Flux...
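A sketch of what that looks like on the Flux side, with a GitRepository source pointing at a plain SSH remote instead of the Forgejo instance (host, path, and secret names are examples):

```yaml
# Flux source pointing at a bare git repo served over plain SSH,
# outside the cluster, so the cluster's bootstrap doesn't depend on
# anything the cluster itself runs.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config
  namespace: flux-system
spec:
  interval: 1m
  url: ssh://git@bare-git-host/srv/git/cluster-config.git
  ref:
    branch: main
  secretRef:
    name: ssh-credentials   # holds the SSH private key + known_hosts
```

You can still mirror the same repo into Forgejo for browsing and code review; the cluster just never depends on it.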
From Flathub's docs: https://docs.flathub.org/blog/app-safety-layered-approach-source-to-user#reproducibility--auditability
The build itself is signed by Flathub’s key, and Flatpak/OSTree verify these signatures when installing and updating apps.
This does not seem to be optional, or under the control of each developer or publisher using the Flathub repos.
Unless, of course, you mean Flatpak packages in general?
Hmmm, this is where my research leads me.
https://docs.flatpak.org/en/latest/flatpak-builder.html#signing
Though it generally isn’t recommended, it is possible not to use GPG verification. In this case, the --no-gpg-verify option should be used when adding the repository. Note that it is necessary to become root in order to update a repository that does not have GPG verification enabled.
Going further, I found a relevant GitHub issue where a user runs into Flatpak refusing to install an unsigned package, and asks for a CLI flag to bypass this block.
I don't really see how this is any different from apt refusing to install unsigned packages by default but allowing a command line flag (--allow-unauthenticated) as an escape hatch.
To be really pedantic, apt key signing is also optional; apt is just configured to refuse unsigned packages by default, and therefore all major repos sign their packages with GPG keys. Flatpak appears to follow this exact same model.
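To illustrate the parallel, both escape hatches look like this (repo URL and package name are placeholders; not something you'd normally want to run):

```shell
# Flatpak: skip GPG verification for a whole remote at add time.
flatpak remote-add --no-gpg-verify myrepo https://example.com/repo

# apt: skip signature checks for a single install.
apt-get install --allow-unauthenticated somepackage
```

In both cases the default is to refuse unsigned content, and you have to go out of your way to disable it.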
This is not true. Flatpaks from flathub are signed with a gpg key.
Now admittedly, they use a single release key for all their signing, which is much weaker than the traditional distro model of having multiple package maintainers sign off on a release.
But the packages are signed.
Edit: snaps are signed in a similar way.
Flashpoint Nano is a tiny Flashpoint implementation: just a simple shell script that downloads and launches games. It mainly targets Linux and is simple to use.
When combined with the Flashpoint database search ( https://flashpointproject.github.io/flashpoint-database/search/ ), it can be used to play games (Flashpoint Nano can't search for or launch games by name, only by their game ID).
Another thing to note is that Flashpoint also archives modern web games... but only select ones, like winners of competitions and other high-quality choices. Because of this, I've found that Flashpoint is not just an archive of Flash games, but also a curated selection of newer browser-based games from itch.io.