Unraid setup bridge
Think of a Linux bridge as a virtual L2 managed switch with more or less infinite ports. By default you get one wire out of it going to the host's networking stack, and you get to configure the host OS and that one bridge port however you want. The host OS doesn't even have to have a network configuration for its bridge interface, and these bridges can, but don't have to, have physical interfaces connected to them at all. You can also have more than one network interface on a container or a VM - that's how folks get to run routers in VMs.
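To make that concrete, here is a minimal sketch of the plumbing that unraid/libvirt/docker set up for you; all the interface names and the address are made up for illustration.

```shell
# Create the "virtual switch" and bring it up.
ip link add name br-lab type bridge
ip link set br-lab up

# Optional: give the host itself an address on the bridge.
# As noted above, the host doesn't have to have one at all.
ip addr add 192.168.50.1/24 dev br-lab

# A veth pair is the "virtual wire": one end plugs into the bridge,
# the other end would normally be moved into a container/VM namespace.
ip link add veth-host type veth peer name veth-guest
ip link set veth-host master br-lab
ip link set veth-host up

# A physical NIC is optional too: something like
#   ip link set eth0 master br-lab
# would uplink the bridge to the real network.
```

Running this by hand (as root) and then `ip link show master br-lab` is a quick way to see which "ports" are plugged into a given bridge.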
Your rutx09 is basically just openwrt: in network > VLAN you can configure which VLANs you want on which router port, and whether you want them tagged or untagged.

Afaik, the R8500 doesn't support user-configurable VLANs out of the box unless you put openwrt on it. Its failover (based on mwan3) is unlikely to work without it doing routing. And then you could plug in an LTE stick and wouldn't even need the rutx09.

VPNs such as OpenVPN and WireGuard generally work by tunneling network connections through UDP (OpenVPN can do TCP as well), and if a VM is where you want to run the VPN endpoint, that means you'll need to forward those UDP or TCP ports to that VM (if the VM is in a VLAN, your router will need to have that VLAN configured). That VM will then need network access to the stuff you want to provide through the VPN.

If you have a USB/ethernet adapter to add as a second network interface to unraid while you mess with VLANs on your primary interface, that would probably help you "not cut the branch you're sitting on" as you try to configure VLANs on both sides of the link (your router and your unraid server).

Openwrt does not support the R8500 - that's what their site claims. Damn, I had it mistaken for the X4S, which is well supported (X4S aka R7800; just checked, the R8500 is the X8). Then again, if you don't use its wifi, it's pretty much useless/unnecessary to you.

The rutx09 spec says "Quad Core ARM Cortex A7 717 MHz CPU", which "smells" like a Qualcomm IPQ4019 to me, while the R8500 is a BCM47094 ("Dual 1.4GHz Cortex A9 CPU"). Both are ~2010-era 32-bit ARM designs with a similar overall feature set that made their way into Qualcomm/Broadcom SoCs alongside various network-specific hardware. Either would do basic routing well; the rutx09 just happens to have better software on it that lets you control more routing aspects. If you don't use wifi on the X8, you don't really lose anything by removing it from your network… if anything, you get to simplify things dramatically.
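For reference, the tagged/untagged choice in the openwrt VLAN page ends up as a stanza like this in `/etc/config/network` on swconfig-based builds. The switch name, VLAN id and port numbers below are assumptions for illustration - check your device's port map first.

```
config switch_vlan
        option device 'switch0'
        option vlan '10'
        option ports '0t 2 3'   # 0t = CPU/trunk port tagged, ports 2-3 untagged
```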
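The port-forwarding part for a VPN endpoint in a VM would look something like this in openwrt's `/etc/config/firewall`; the port, zone names and VM address here are made-up examples (51820 is just WireGuard's customary UDP port).

```
config redirect
        option name 'wireguard-to-vm'
        option src 'wan'
        option proto 'udp'
        option src_dport '51820'
        option dest 'lan'
        option dest_ip '192.168.1.50'
        option dest_port '51820'
        option target 'DNAT'
```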
I have an UnRAID server (which is built on top of Slackware), and on this server I have some Docker containers running as well as a few VMs running via KVM. I have pfSense in one of those VMs, and I would like to route traffic from Docker and the other VMs through pfSense.

When creating a VM, UnRAID gives three options by default for choosing a network bridge:

br0 - allows a VM to exist as its own entity on the network, with direct access to the LAN and an IP assigned from the router.
virbr0 - a virtual bridge managed by the host which keeps the VM isolated from the LAN.
docker0 - the bridge used by the Docker containers.

I figured out that I could add all three of these interfaces to pfSense: assign br0 as the WAN interface, and virbr0 and docker0 as LAN interfaces. What I'm stuck on now is how to get the traffic from the two LAN interfaces through the firewall to the WAN. I have tried a few completely ineffective things, such as setting the IP of the docker0 interface in pfSense to 192.168.2.1 and setting the default gateway in the docker0 bridge configuration to 192.168.2.1, but that doesn't seem to have changed anything. What fundamental aspect am I missing here?

To summarize, I would like to route traffic from the Docker containers and from the other VMs to what pfSense considers to be the LAN ports; from there it should be routed to my actual LAN through what pfSense considers to be the WAN port. It seems like there should be a simple solution and I'm just totally missing it. I think this question is a result of me not being able to wrap my head around Docker networking and not being super great at Slackware.

Reply: This would be much easier to set up if you had a second NIC in the machine, so that while you're configuring stuff you don't lose access.
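One likely missing piece: containers on docker0 use the *host's* docker0 address as their default gateway, so the unraid host makes the first routing decision, not pfSense. One way to steer that traffic toward pfSense is policy routing on the host. This is only a sketch under assumptions: `veth-pf` stands in for whatever host-side interface reaches pfSense's LAN (192.168.2.1, per the addresses tried above), and 172.17.0.0/16 is Docker's default bridge subnet.

```shell
# Stand-in for the host-side interface toward pfSense's LAN.
ip link add veth-pf type veth peer name veth-pf-peer
ip link set veth-pf up
ip link set veth-pf-peer up
ip addr add 192.168.2.2/24 dev veth-pf    # the host's address toward pfSense

# Containers hop through the host first (their gateway is docker0's host
# address), so send everything sourced from the docker subnet to pfSense
# instead of out the host's normal default route.
ip rule add from 172.17.0.0/16 table 100
ip route add default via 192.168.2.1 dev veth-pf table 100
```

For replies to actually come back you would also need pfSense to have a route to 172.17.0.0/16 (or NAT on the way in), and you may need to relax Docker's own MASQUERADE rules so the container source addresses survive to pfSense.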