Posts: 2 · Comments: 33 · Joined: 2 yr. ago

  • I can't see your full setup / config from here, but a) you are not overengineering that. Using VLANs to segment networks is a very good practice. And although neither Docker nor Podman allows macvlan when running rootless, my gut feeling tells me that segmenting my network takes priority over running rootless, because I think attack vectors that traverse networks are much more common than breaking out of a container into the host. But this is just my gut feeling. b) I think I run here what you want to achieve, so I will try to explain what I did.

    My setup is similar to yours: OPNsense (OpenWRT before that), a VLAN-capable switch, and an Ubuntu server with a single NIC that hosts all the Compose stacks.

    1. You already configured your VLANs in OPNsense, so I will just mention that I created mine via Interface -> Devices -> VLAN on the LAN interface of my OPNsense and then used the Assignments to finally make them available. On the OPNsense, each one gets a static IP from the respective network I defined for the VLAN.
    2. On the Docker host, in Netplan, I configured my single NIC as a bridge. I cannot remember if that was necessary or if I was just planning ahead for a possible 2nd NIC later on, so I would not have to reconfigure the whole networking again. Of course that bridge sits in my LAN, and the Netplan config looks like this:
    ```yaml
    network:
      version: 2
      ethernets:
        eno1:
          dhcp4: no
      bridges:
        br0:
          addresses:
            - 192.x.x.3/24
          nameservers:
            addresses:
              - 192.x.x.x
            search:
              - my.lan
              - local
          routes:
            - to: default
              via: 192.x.x.1
          interfaces:
            - eno1
    ```
    3. Now that the Docker containers can use the VLANs, I had to create Docker networks as macvlan like this:
    ```shell
    docker network create -d macvlan --subnet=192.x.10.0/24 --gateway=192.x.10.1 -o parent=br0.10 vlan10
    docker network create -d macvlan --subnet=192.x.20.0/24 --gateway=192.x.20.1 -o parent=br0.20 vlan20
    ```
    4. Now for a container to make use of those networks, you have to define them as external in the Compose stack like this:
    ```yaml
    services:
      my-service:
        image: blah
        ...
        networks:
          vlan10:

    networks:
      vlan10:
        name: vlan10
        external: true
    ```
    
      
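    After the networks exist, you can sanity-check them from the host (a sketch; `vlan10` and `br0.10` as in the commands above):

    ```shell
    # Show the driver and subnet of the created Docker network
    docker network inspect -f '{{.Driver}} {{(index .IPAM.Config 0).Subnet}}' vlan10
    # Docker creates the 802.1Q subinterface on the parent on demand
    ip -d link show br0.10
    ```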

    In 4. you have the option to not define an ipv4_address in the networks section; then Docker will just pick its own addresses when the containers start. Letting OPNsense assign IP addresses dynamically in such a VLAN did not work for me. So either you let Docker pick the IPs when a stack starts, or you define your IP addresses in the stack. If you do the latter, you have to do it for every stack that ever joins that VLAN, otherwise Docker might pick an IP that you already assigned manually and that stack will not start.
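    If you go the static route, pinning an address looks roughly like this (a sketch; the service name and the address are placeholders):

    ```yaml
    services:
      my-service:
        image: blah
        networks:
          vlan10:
            # must not collide with addresses pinned in any other stack on vlan10
            ipv4_address: 192.x.10.50

    networks:
      vlan10:
        name: vlan10
        external: true
    ```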

    I also wanted to have some services running directly in the LAN via Docker. This setup is a bit more involved and requires you to create a shim network, otherwise the Docker host itself will not be capable of accessing containers running in the LAN network. This was the case for my Pi-hole, for example, which I wanted to have an IP in my LAN network and which had to be reachable by the Docker host itself too. There is a very good post about macvlan and shim networks on this blog: https://blog.oddbit.com/post/2018-03-12-using-docker-macvlan-networks/
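    The shim from that blog post boils down to a few iproute2 commands on the host (a sketch with placeholder names and addresses; `br0` is assumed to be the macvlan parent and `192.x.x.53` the container's LAN IP):

    ```shell
    # Create a macvlan shim interface on the host, on the same parent as the Docker network
    ip link add mac0 link br0 type macvlan mode bridge
    # Give the host its own address on the shim and bring it up
    ip addr add 192.x.x.250/32 dev mac0
    ip link set mac0 up
    # Route traffic for the container's LAN IP (e.g. the Pi-hole) through the shim
    ip route add 192.x.x.53/32 dev mac0
    ```

    Without a shim, the kernel drops traffic between the host's own interface and its macvlan children, which is exactly why the Docker host cannot reach those containers directly.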

    I hope this helps. Do not give up. Segmenting your Networks is important, especially if you plan to publish some services over the Internet.

  • Just a thought about the Switch. Maybe you could plug it into a power socket that is monitored in HA, like a NOUS A1-T? If power draw goes up, the Switch is on. This could easily be "hacked" by the kids by plugging the Switch into another power socket. But if the Switch has a reliable standby power draw, that could be monitored too. If that is zero, then someone is cheating :-)

    Monitoring the Switch when not docked could maybe be done via WiFi? Check if the MAC (or the IP, if fixed via DHCP) is online or not. Of course this only works if the Switch is always online when used. I do not have one, so I do not know.
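    A minimal reachability check could look like this (a sketch; the address is a placeholder for the Switch's fixed DHCP lease):

    ```shell
    # Exit status 0 if the Switch answers one ping within 2 seconds
    if ping -c 1 -W 2 192.168.1.50 > /dev/null 2>&1; then
      echo "Switch is online"
    else
      echo "Switch is offline"
    fi
    ```

    HA could run something like this periodically (e.g. via a command_line binary sensor, or its built-in ping integration) and expose the result as on/off.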

  • Since you are German, don't forget the Impressum. I think it is mandatory, and some people are dicks.

  • Not exactly what you asked, but for old-school TV I use Tvheadend and it is not a Jellyfin Plugin. Instead, everything (including Jellyfin) plugs into Kodi in our Household and relatives.

    • Oldschool TV (Scheduling, Recordings, Live TV) is run via Tvheadend and a USB TV Cable adapter.
    • Youtube runs via a self-hosted Invidious instance.
    • My own Music lies on a NAS, indexed by Kodi and Jellyfin.
    • Some TV Shows and Movies lie also on a NAS indexed by Kodi and Jellyfin.

    Media is scattered over various places and services and Kodi pulls them all together.

    • TV via Tvheadend: added to Kodi via the Tvheadend Plugin.
    • Youtube/Invidious: added to Kodi via the Invidious Plugin.
    • Jellyfin: added to Kodi via the Jellyfin Plugin.
    • Other media that's on the NAS: added to Kodi via library scanning.
    • Mediathekview: added to Kodi via the Mediathekview Plugin.

    Kodi serves all Media with a unified interface, no matter what it is or where it comes from. When using the TV we see Kodi and we pick what we want to consume.

    Additionally, other family members can connect via Wireguard to my network, have one of those old Ex-Android TV Boxes on the TV on which Kodi (CoreElec or Libreelec) with the necessary Plugins is installed and they have the same unified experience. No need to "learn" how to use different UIs / Software for different Media. For the consumer it is all in one place --> Kodi :-)

  • I am back playing since a few weeks ago, after being absent for 5 years. I finally got myself Odyssey and do exobiology, and you can build your own Colonies now!

  • There was a brief, very sad time when Jon left Opera and it started to go down the drain. His Opera Browser was the first and only browser I was happy to pay money for. I was super excited when he started Vivaldi and have used it since day one. It is fantastic, and I hope they manage to keep the toxic Google changes to Chromium out, although I do not have high hopes, because browser engines nowadays seem to be super complex and difficult to maintain. We'll see... I will stick to Vivaldi as long as possible.

  • Thanks for pointing out Simplex Chat, I did not know that it exists. It looks very interesting, but reading more about it, they will have to implement some kind of business model in the future. My fear is that even when self-hosting, some features will end up behind a paywall, so it is not a solution I would switch to... switching to a new messenger is a long-term endeavour. It is hard to convince friends to move over too, let alone switch to a new one every few years. That's near impossible. But the technology of Simplex looks really interesting, and reading through the docs it makes the impression of being very polished.

  • Thank you very much for the technical insight. It makes clear why it is how it is, and it is good to see that you can host ActivityPub services on subdomains... so the issue I thought existed is not that big of an issue after all. Also, I love the discussion under your post, very interesting!

    Thanks also to everyone else who replied!

  • Selfhosted @lemmy.world

    Is my domain "burnt" when hosting my first Fediverse technology?

  • Yarr matey! Us fossers and selfhosters gotta stay togetharrr :-) (Thanks for your reply, made my day, haha!) :-)

  • Back on my PC and a few more words about https://davideshay.github.io/groceries/ (Specifically Clementines).

    Why do we use it and why do we think it is the best?

    • You have list groups, under which you have the stores that fit that group (like food, gardening...). Items you create belong to a list group and can be shown in various lists (shops) under this group. Did not get your favourite cheese in Shop A? The item will stay on the list for Shop B. Found it in Shop A already? The item is gone from the list for Shop B.
    • For each list, you can create aisles and sort them like they are in that particular shop. This makes it possible to run through the shop in one pass and get everything I need as quickly as possible. No distractions or backtracking.
    • It has real-time sync. We both go shopping in 2 shops for the same lists? Items get ticked off in real-time. Partner puts something on the list? I see it immediately.
    • It has offline functionality. No cell reception in the shop? You shop offline. Cell reception back? It syncs automatically.
    • It has a native Android app and a responsive Web UI, whichever fits you. And both support offline usage.
    • You can add pictures to items. Partner wants THAT particular cheese and then you stand in front of the 1km long cheese shelf and have no idea how that thing looks? Just add a picture to an item, problem fixed.
    • You ticked off an item by accident from the list and you have no idea what it was? Ticked off items stay on the list ticked off and you can bring them easily back. You are done with shopping? You can then fully clear the list of all ticked off items if you want to clean it up.

    The only downside is: it is a bit difficult to set up, but this is true for every service using CouchDB as a database that I have ever set up. But it is worth it. This solution is super stable, and the live sync has been super useful so many times.

    We use it for more than just shopping now. It also works great as a packing list when you go on vacation, for example, or for basically anything else you need to "tick off a list".

  • I think I tried all of them. My partner and I are using Specifically Clementines and never looked back. It is like someone found out what we want and made a solution. Can write more when I am on my PC later.

  • Thank you very much. I spent another two hours yesterday reading up on that and creating other VMs and Templates, but I was not yet able to attach the boot disk to a SCSI controller and make it boot. I would really have liked to see if this change would bring it on par with Proxmox (I wonder now what the defaults for Proxmox are), but even then it would still be much slower than with Hyper-V or XCP-ng. If I find time, I will look into this again.

  • I am not working professionally in that field either. To answer your question: of course I would use whatever gives me the best performance. Why it is set like this is beyond my knowledge. What you basically do in Apache Cloudstack when you do not have a Template yet is: you upload an ISO, and in this process you have to tell ACS what it is (Windows Server 2022, Ubuntu 24, etc.). From my understanding, the pre-defined OS you select and "attach" to an ISO seems to include the specifics for when you create a new instance (VM) in ACS. And it seems to set the controller to SATA. Why? I do not know. I tried to pick another OS (I think it was called Windows SCSI), but in the end it still ended up being a VM with the disks bound to the SATA controller, despite the VM having an additional SCSI controller that was not attached to anything.

    This can probably be fixed on the command line, but I was not able to figure it out yesterday when I had a bit of spare time to tinker with it again. I would like to see if this makes a big difference in that specific workload.

  • I just can't figure out how to create a VM in ACS with SCSI controllers. I am able to add a SCSI controller to the VM, but the boot disk is always connected to the SATA controller. I tried to follow this thread (https://lists.apache.org/thread/op2fvgpcfcbd5r434g16f5rw8y83ng8k) and create a Template; I am sure I am doing something wrong, but I just cannot figure it out :-(

  • I had a rough start with XCP-ng too. One issue I had was the NIC in my OptiPlex, which worked... but was super slow. So the initial installation of the XO VM (to manage XCP-ng) took over an hour. After switching to a USB NIC with another Realtek chip, networking was no issue anymore.

    For management, Xen-Orchestra can be self-built; it is quite easy and works mostly without any additional knowledge / work if you know the right tools. Tom Lawrence posted a video I followed, and building my own XO is now quite easy and quick (sorry for it being a YT link): https://www.youtube.com/watch?v=fuS7tSOxcSo

  • Sure, ESXi would have been interesting. I thought about that, but I did not test it because it is not interesting to me anymore from a business perspective. And I am not keen on using it in my Homelab, so I left it out and used that time to do something relaxing. It's my holiday right now :-)

    That's a very good question. The test system is running Apache Cloudstack with KVM at the moment, and I have yet to figure out how to see which disk / controller mode the VM is using. I will dig a bit to see if I can find out. It would be interesting to re-run the tests if it turns out not to be SCSI.

    Edit: I did a `virsh dumpxml <vmname>` and the disk part looks like this:

    ```xml
      <devices>
        <emulator>/usr/bin/qemu-system-x86_64</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='none'/>
          <source file='/mnt/0b89f7ac-67a7-3790-9f49-ad66af4319c5/8d68ee83-940d-4b68-8b28-3cc952b45cb6' index='2'/>
          <backingStore/>
          <target dev='sda' bus='sata'/>
          <serial>8d68ee83940d4b688b28</serial>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
    ```

    It is SATA... now I need to figure out how to change that configuration ;-)
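    Assuming direct libvirt access on the KVM host, the bus can in principle be changed via `virsh edit <vmname>`: add a virtio-scsi controller and point the disk's target at the SCSI bus (a sketch; ACS may well overwrite manual edits when it redeploys the instance, and a Windows guest needs the virtio-scsi driver installed before it can boot from that bus):

    ```xml
    <!-- added controller -->
    <controller type='scsi' model='virtio-scsi'/>
    <!-- disk target changed from bus='sata' to bus='scsi' -->
    <disk type='file' device='disk'>
      ...
      <target dev='sda' bus='scsi'/>
    </disk>
    ```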

  • It would be cool to see how linux centric workloads behave on those Hypervisors. Juuust in case you plan to invest some time into that ;-)

  • Yes, it is Windows-centric because that is what the workload I need to run is based on. It would be cool to see a similar comparison with a workload under Linux that puts strain on CPU, memory and disk.

  • Selfhosted @lemmy.world

    Performance comparison between various Hypervisors

    dasite.quattrofan.de/posts/hypervisor-comparison/