Hi.
I'd like to avoid exposing services running in Docker containers and libvirt guests to my LAN and the Internet by blocking all their incoming connections.
I used to run Docker with ufw but Docker ignores its rules by default. So, while doing a clean Arch Linux installation on my laptop, I decided to use firewalld and set it up with the public (default) zone assigned to my wireless interface to block all incoming traffic. According to its wiki page, firewalld rules should take precedence over Docker rules and I shouldn't need to do anything in my case (I never change the network driver, so it defaults to using a bridge). But, looking at a recent firewalld article and the Docker documentation about firewalls, blocking incoming traffic seems to require more tweaking.
I have a similar question about libvirt after reading its documentation about its firewalld support. And, with the recent change to the iptables backend, I'm even more confused as Docker and libvirt use it (Docker has experimental support for nftables though).
Both the "docker" and "libvirt" zones have their target set to "ACCEPT", which shouldn't be an issue if the firewalld rules for my wireless interface take precedence (a "libvirt-routed" zone exists too but its target is set to "default"), if I get this right. But should I do anything to harden that configuration? I feel like I missed something or misunderstood how firewalld zones work.
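For reference, this is how I inspect the active zones and their targets (zone names are the ones firewalld and the Docker/libvirt packages create by default):

```shell
# List which zones are active and which interfaces/sources they cover
sudo firewall-cmd --get-active-zones

# Show the full configuration of each zone, including its target
sudo firewall-cmd --info-zone=public
sudo firewall-cmd --info-zone=docker
sudo firewall-cmd --info-zone=libvirt
```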
Last edited by Ambos (Yesterday 04:03:24)
Offline
when you use a bridge each container/vm is connected/"exposed" to the lan as if they were real devices - and your physical nic becomes merely a dumb layer 2 switch
i see a few ways:
- change the layer 2 bridge setup to layer 3 routing by setting the connection to nat instead of bridge - this way the host becomes a proper router and can act as a firewall
- accept the layer 2 "exposure" and install firewalls in each container/vm
- remove networking altogether
i may be missing something - but what's the issue with "exposing" containers/vms to the lan when using bridge mode? if you don't trust your lan this sounds like a different issue
Offline
I updated my post as my main concern was exposing services to the Internet. I usually trust the LANs I connect to but I'd like to avoid exposing them needlessly.
Offline
I updated my post
don't do that - to quote a moderator of another forum:
Don't alter posts after others have replied to - it make thier replies look dumb.
anyway - as for your actual concern, to not expose services provided by the container/vm:
that's very valid - it can potentially be an attack vector even within trusted LANs, but even more so in untrusted ones (like public open wifi)
it's also valid to "jail" services in containers or VMs - sometimes it's even necessary (like windows-only software on a linux host) or simply the way their devs provide them
the question seems to shift: do you use the provided services only locally on the host - or are they sometimes also used remotely by others within a specific LAN?
for local only i would just remove the bridge - if the container/vm needs outbound access to the internet go for nat
as for the other case when the services are also used by others, like game servers or an LLM, hmm ... i guess i would still go either nat with specific forwarding on demand, or if you want to keep the bridge setup then i would likely install firewalls in the container/vm, set them to full block, and enable or disable rules depending on whether the service should be exposed
i might be missing something - and maybe it actually is more straightforward - in the end it's all just a few software stacks layered on top of each other - but the above is what i would come up with
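the "nat with specific forwarding on demand" idea could look roughly like this with firewalld (the guest address 192.168.122.10 and the port numbers are made up for illustration):

```shell
# Forward TCP port 8080 on the host to port 80 on a NATed guest
# (forwarding to another address may also require masquerading on the zone)
sudo firewall-cmd --zone=public \
    --add-forward-port=port=8080:proto=tcp:toport=80:toaddr=192.168.122.10

# Remove the forward again when the service should no longer be exposed
sudo firewall-cmd --zone=public \
    --remove-forward-port=port=8080:proto=tcp:toport=80:toaddr=192.168.122.10
```

without --permanent these rules only live in the runtime configuration, so they vanish on reload - which fits the "on demand" part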
Offline
Not sure but maybe this will help. First check the current zones: "sudo firewall-cmd --get-active-zones". Then make sure your external interface uses the public zone (or drop), which blocks incoming by default:
sudo firewall-cmd --zone=public --change-interface=wlan0 --permanent # or enp*, eth0 etc.
Reload: sudo firewall-cmd --reload
Disable the default ACCEPT behavior of docker and libvirt zones:
sudo firewall-cmd --zone=docker --set-target=DROP --permanent
For libvirt (the important one for VMs):
sudo firewall-cmd --zone=libvirt --set-target=DROP --permanent
Also for libvirt-routed if it exists:
sudo firewall-cmd --zone=libvirt-routed --set-target=DROP --permanent 2>/dev/null || true
I think there's no need to reboot, you can just do a reload again:
sudo firewall-cmd --reload
Check things:
sudo firewall-cmd --list-all-zones | grep -E 'docker|libvirt'
Really nice feeling with a new fresh Arch installation, isn't it? They say ignorance is bliss, I would change that to "Arch is bliss" (because I'm coming from Windows 11 *yuck*) Haha
Last edited by noesoespanol (Yesterday 09:44:42)
Offline
cryptearth, I usually use temporary web servers for development and they're not accessed by anybody else. As for NAT, it seems to be what libvirt does by default:
Every standard libvirt installation provides NAT based connectivity to virtual machines out of the box. This is the so called 'default virtual network'. [...] By default, guests that are connected via a virtual network with <forward mode='nat'/> can make any outgoing network connection they like. Incoming connections are allowed from the host, and from other guests connected to the same libvirt network, but all other incoming connections are blocked by iptables rules.
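That NAT behaviour can be checked on the host; assuming the standard 'default' network, something like:

```shell
# Show the XML definition of libvirt's default network;
# a <forward mode='nat'/> element confirms the NAT-based setup quoted above
sudo virsh net-dumpxml default

# List all defined libvirt networks and whether they are active
sudo virsh net-list --all
```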
To clarify, I know you can avoid exposing containers to the external network. But I treat the host firewall as a safety net in case I leave something exposed by mistake. I also prefer to manage ports and connections directly from a firewall if the required configuration isn't too hard to maintain and doesn't differ too much from the defaults.
noesoespanol, I already configured firewalld. I didn't change anything related to the "libvirt" and "docker" zones though and my wireless interface is already assigned to the public zone as it is the default zone. I'm just confused because I don't know if Docker and libvirt actually bypass the firewall rules set by the public zone (especially with the recent iptables packaging change).
Last edited by Ambos (Yesterday 10:40:16)
Offline
for this reply I had to consult AI - so take it with about a metric ton of salt - you be warned
according to the employed LLM, controlling inbound traffic in a libvirt/kvm setup with a bridge actually IS possible using the host's firewall ...
... BUT: it's not as straightforward as just setting a few firewall rules
reason (that's the part I used the LLM for): traffic in a bridge setup is handled at layer 2 and therefore MIGHT/CAN bypass the host's ip stack (layer 3 and up) and its firewall (the LLM was somewhat vague here) - therefore additional configuration is needed, like enabling layer 2 filtering (this has to be done explicitly)
as for how to do this and then how to configure the firewall rules I didn't dig deeper
other options:
- use a dummy bridge not connected to the physical nic and assign an ip to it, then add the vms to it and communication flows over the virtual dummy bridge without exposing the vms to the physical lan
- use nat mode
- use a firewall inside the guest instead of the host
although I'm not into containers I did ask the LLM how to do it with Docker: the LLM replied that Docker traffic is always handled at layer 3 and thereby is subject to the host's firewall
to limit access to the host only, start a container by specifying 127.0.0.1:PORT instead of only the port
reason: without specifying localhost a container binds to 0.0.0.0 by default and is reachable by everyone
summarized: as far as i understand the LLM's explanation: for libvirt with a bridge additional work is needed - containers should work out of the box or by specifically binding to localhost
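the localhost binding looks like this (nginx is just a stand-in image, 8080/80 are example ports):

```shell
# Reachable from everywhere: -p PORT:PORT publishes on 0.0.0.0 by default
docker run -d -p 8080:80 nginx

# Reachable only from the host itself: prefix the published port with 127.0.0.1
docker run -d -p 127.0.0.1:8080:80 nginx
```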
Offline
"iptables-legacy" (formerly known as "iptables") manipulates the (older) xtables kernel API and "nftables" the (newer) nftables kernel API (and uses nftables rule matching). "iptables" (formerly known as "iptables-nft") manipulates the nftables kernel API but keeps the "iptables" rule matching. So - depending on which userland/middleware tool you use - your rules may land in either kernel API (or in the opposite one you thought). I personally witnessed xtables kernel API rules take precedence over nftables kernel API rules concerning the same matching packets (which is AFAIK by design). Don't mix APIs.
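To see which variant a given iptables binary is, and what actually landed in the nftables kernel API:

```shell
# iptables 1.8+ prints its backend in the version string,
# e.g. "iptables v1.8.10 (nf_tables)" or "... (legacy)"
iptables --version

# Dump everything currently loaded via the nftables kernel API
sudo nft list ruleset
```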
Packages that manipulate firewall rules expect them to stay intact. A second or third package that manipulates firewall rules may disrupt the ruleset. If you need to run multiple packages which each implement a firewall ruleset, you have to carefully assess the possible scenarios for that to work.
Possible problems include:
- Colliding rules in different APIs (real precedence)
- Colliding rules in one API due to multiple "firewall ruleset" packages (fake precedence)
- Rules that seem to work randomly (race condition in the startup order of "firewall ruleset" packages)
While cryptearth is correct in assessing a "bridge" as a "dumb" OSI layer 2 switch, neither firewall framework is incapable of filtering it. For the older xtables framework you have to use "ebtables" to write rules for bridges, and "nftables" has "bridge" chains for that (only for filtering though).
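A minimal sketch of such a "bridge" chain in nftables (the table/chain names and the port are made up for illustration):

```shell
# Create a bridge-family table and a chain hooked into bridged forwarding
sudo nft add table bridge filter
sudo nft add chain bridge filter forward \
    '{ type filter hook forward priority 0; policy accept; }'

# Example: drop IPv4 TCP traffic to port 8080 crossing the bridge
sudo nft add rule bridge filter forward \
    ether type ip ip protocol tcp tcp dport 8080 drop
```

Rules in the bridge family see frames that a plain layer 2 bridge would otherwise forward without ever reaching the host's IP-layer chains.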
Offline