ZeroTier is an excellent VPN system. I've been using it for years in specific situations, and I find it efficient and convenient. While I usually rely on #Wireguard and manage everything manually, manual setup isn't always the best solution.
Just now, I needed to quickly bridge two distant networks without involving #Wireguard or #VXLan, so I set up an active bridge using ZeroTier in just a minute.
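For the curious, a minimal sketch of that kind of bridge, assuming two Linux gateways and an existing ZeroTier network (the network ID, the zt* interface name, and the LAN NIC are placeholders; you also need to enable Ethernet bridging for each member in the controller):

    # On each gateway: join the ZeroTier network (ID below is hypothetical)
    zerotier-cli join 8056c2e21c000001
    zerotier-cli listnetworks          # note the zt* interface it created
    # Bridge the ZeroTier interface with the local LAN NIC (names assumed)
    ip link add br0 type bridge
    ip link set ztabcdef01 master br0
    ip link set eth1 master br0
    ip link set br0 up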
#wireguard #vxlan #zerotiervpn #networkbridge #efficiency #ittools #vpn
Good morning, friends of the #BSDcafe and #fediverse
I'd like to share some details on the infrastructure of BSD.cafe with you all.
Currently, it's quite simple (we're not many and the load isn't high), but I've structured it to be scalable. It's based on #FreeBSD, connected over both IPv4 and IPv6, and split into jails:
* A dedicated jail with nginx acting as a reverse proxy - managing certificates and directing traffic
* A jail with a small #opensmtpd server - handling email dispatch - I didn't want to rely on external services
* A jail with #redis - the heart of the communication between #Mastodon services - the nervous system of BSDcafe
* A jail with #postgresql - the database, the memory of BSDcafe
* A jail for media storage - the 'multimedia memory' of BSDcafe. This jail is on an external server with rotating disks, behind #cloudflare. The aim is geo-replicated caching of media to reduce bandwidth usage.
* A jail with Mastodon itself - #sidekiq, #puma, #streaming. Here is where all processing and connection management takes place.
All the jails communicate over a private bridged LAN, and everything is set up for VPN connections to external machines - in case I want to move some services, replicate them, or add new ones. The VPN connection can occur via #zerotier or #wireguard, and I've also set up a bridge between machines through a #vxlan interface over #wireguard.
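A minimal sketch of what such a vxlan-over-wg bridge can look like on FreeBSD (the addresses, vnetid, and interface names are assumptions, and wg0 is presumed already configured):

    # VXLAN endpoint riding the WireGuard tunnel addresses
    ifconfig vxlan0 create vxlanid 42 vxlanlocal 10.8.0.1 vxlanremote 10.8.0.2
    # Add it to the jails' bridge so the remote machine joins the same L2 segment
    ifconfig bridge0 addm vxlan0
    ifconfig vxlan0 up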
Backups are taken continuously via #zfs snapshots and replicated externally to two different machines in two different datacenters (both distinct from the datacenter of the production VPS).
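That kind of snapshot-and-replicate loop boils down to zfs send/receive; a rough sketch (dataset names, snapshot names, and the backup host are placeholders, not the actual BSD.cafe layout):

    # Snapshot the jail datasets recursively, then ship the increment off-site
    zfs snapshot -r zroot/jails@2023-03-01
    zfs send -R -i zroot/jails@2023-02-28 zroot/jails@2023-03-01 | \
        ssh backup1.example.net zfs receive -F backup/bsdcafe/jails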
#bsdcafe #fediverse #freebsd #opensmtpd #redis #mastodon #postgresql #cloudflare #sidekiq #puma #streaming #zerotier #wireguard #vxlan #zfs #sysadmin #tech #servers #itinfrastructure #bsd
Has there been any standardization yet of terms to replace master/slave in various networking contexts (eg, #VXLAN, #VLAN, link aggregation and bonding, etc), especially in their #Linux implementations? There seems to be a wide mix (eg, link, port, member instead of slave) from some searching, but not much consistency. Plenty of things still don't seem to have been addressed or renamed. It looks like #IEEE P3400 may be working on this, but nothing is public yet? #InclusiveLanguage
#vxlan #vlan #linux #ieee #inclusivelanguage
Understanding EVPN Data Plane: The Basics
#Cisco #EVPN #Dataplane #VXLAN #MPLS #BGP #networking #networksbaseline #networkengineers #ccna #ccnp #ccie #basics
https://www.thenetworkdna.com/2023/03/understanding-evpn-data-plane-basics.html
Holy shit, I managed to set up #BGP peerings over a #vxlan tunnel over a #wireguard mesh from 3 #openwrt routers! Failover works great too!
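For flavor, a minimal sketch of what one such peering could look like in FRR (available as an OpenWrt package); the ASNs, neighbor addresses on the vxlan interface, and announced network are all made up:

    ! /etc/frr/frr.conf on router A (values are hypothetical)
    router bgp 65001
     neighbor 10.100.0.2 remote-as 65002
     neighbor 10.100.0.3 remote-as 65003
     address-family ipv4 unicast
      network 192.168.1.0/24
     exit-address-family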
#bgp #vxlan #wireguard #openwrt
Tunneling vxlan(4) over WireGuard wg(4) https://undeadly.org/cgi?action=article;sid=20230214061330 #openbsd #vxlan #wireguard #wg #vpn #tunneling
Cisco ACI: Control Plane components
#Cisco #ACI #control #LLDP #DHCP #ISIS #COOP #VXLAN #ccna #ccnp #ccie #network #networks #networkengineer #networksbaseline #datacenter
https://www.thenetworkdna.com/2023/02/cisco-aci-control-plane-components.html
I wrote up how I got #vxlan working over #wireguard #WireGuardVPN on #openbsd; as with most things on OpenBSD, it's not that complicated 🙌🏼
https://rob-turner.net/post/vx-lan/
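The gist, as a minimal sketch (keys, addresses, and the vnetid are placeholders; the full walk-through is in the post):

    # /etc/hostname.wg0 on one side; mirror addresses and keys on the peer
    inet 10.9.9.1 255.255.255.0
    wgkey <base64-private-key> wgport 51820
    wgpeer <base64-peer-public-key> wgendpoint 203.0.113.2 51820 wgaip 10.9.9.2/32
    up

    # /etc/hostname.vxlan0 - tunnel endpoints are the WireGuard addresses
    tunnel 10.9.9.1 10.9.9.2
    vnetid 100
    inet 192.168.100.1 255.255.255.0
    up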
#vxlan #wireguard #WireGuardVPN #openbsd
The SDN / VXLAN Proxmox saga continues...
After posting this I noticed some strange behavior. Pings were getting through fine, and nmap showed the https service on my new firewall. The problem was that when I navigated to the new firewall's management site, it wouldn't load: I'd get ssl_error_rx_record_too_long in Firefox and timeouts in Chrome.
I opened up Wireshark and noticed the return traffic for SSL was severely delayed and appeared malformed.
What I missed in my instructions is that VXLAN encapsulation adds 50 bytes of overhead, so for the endpoints within the internal network I had to set a custom MTU of 1450 so that the encapsulated frame still fits within the interface's 1500-byte limit.
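The arithmetic and the fix, sketched for a Linux endpoint (the interface name is an assumption; the change below doesn't persist across reboots):

    # VXLAN over IPv4 adds: outer Ethernet 14 + IP 20 + UDP 8 + VXLAN 8 = 50 bytes
    # 1500 - 50 = 1450 bytes left for the inner frame
    ip link set dev eth0 mtu 1450
    ip -d link show dev eth0   # verify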
After configuring this on one of the internal machines, reaching the site worked, so I figured I should also set the 1450 MTU on the firewall's internal interface. I did that and immediately got rapid connects and drops on my home network, so I reverted the change. I really don't know why changing the MTU on the firewall's internal interface would cause that on my main network, but it did. It seems every device on that internal LAN needs the MTU change except the firewall for all the traffic to work properly.
Now the next step is to start putting various machines behind the new routers to segment my lab network and get it off the flat network, for better security, traffic isolation, and control.
The Proxmox guide I linked earlier gives more details on the 50-byte VXLAN encapsulation overhead.
#homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhostingmastodon #MTU
Ok quick update: I got Proxmox SDN working with VXLAN and Vnets across the cluster!
To reproduce (a CLI sketch follows the steps):
1. Install SDN per instructions (about three easy steps per node). See docs: https://pve.proxmox.com/wiki/Software_Defined_Network
2. Add a Zone at the SDN datacenter level. Specify Zone name and Prox nodes to apply to.
3. Add a Vnet at the SDN datacenter level. Specify zone, Vnet name, and VXLAN ID.
4. Apply the SDN configuration; this pushes the Vnet config to each Prox node.
5. Add/replace an interface on the target VM. In my case, for testing, I added an interface targeting the new Vnet on two VMs on separate Prox nodes, gave them static IPv4 addresses, and had them ping each other.
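Roughly the same steps via pvesh against the SDN API (zone/vnet names, peer addresses, and the VXLAN ID are examples; the paths match recent PVE releases, so check your version's docs):

    # Step 2: VXLAN zone spanning the cluster (peers = the nodes' underlay IPs)
    pvesh create /cluster/sdn/zones --zone zvx1 --type vxlan --peers 10.0.0.1,10.0.0.2
    # Step 3: Vnet inside the zone; tag is the VXLAN ID
    pvesh create /cluster/sdn/vnets --vnet vnet1 --zone zvx1 --tag 4201
    # Step 4: apply and reload the SDN configuration on all nodes
    pvesh set /cluster/sdn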
#homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting
Currently patching my Proxmox cluster to prep the experimental SDN functionality, which should let me run VXLAN across my cluster of nodes.
I want to test this so I can have virtual routers with devices on the same internal networks but spread across multiple physical nodes.
I'm familiar with doing this on VMware with dVSes and VLANs, but I'm still working out how to replicate it on Proxmox. If this doesn't work as expected I may end up trying some other options. I hope to solve this in software so I don't have to buy gear.
#homelab #proxmox #networking #sdn #vxlan #ovs #selfhosted #selfhosting
Does anyone here have a working example of running #vxlan over #wireguard #WireGuardVPN on #openbsd? It doesn't look too difficult reading the man pages, but I'm struggling to find examples.
Actually, I'm fine with the wireguard config, so just a working #vxlan setup on OpenBSD would be helpful
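For anyone searching later, the bare vxlan(4) side comes down to something like this (addresses and the vnetid are placeholders; the peer mirrors it with the addresses swapped):

    # VXLAN endpoint over any reachable transport, here the wg tunnel addresses
    ifconfig vxlan0 create
    ifconfig vxlan0 tunnel 10.9.9.1 10.9.9.2 vnetid 100
    ifconfig vxlan0 inet 192.168.100.1/24 up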
#vxlan #wireguard #WireGuardVPN #openbsd