I guess I'm moving off the old #homelab faster than I thought. So much for weekend plans. 😿
If anyone wants to offer advice, the setup is #democratic-csi #iscsi to #truenas (mostly xfs, a mistake I've since corrected).
The old cluster has decided to just ignore PVCs. The replacement cluster uses the same credentials (shh) but a different path and prefix, and it's still working fine. I've tried restarting the associated controllers/DaemonSets, and even bounced the bad cluster and updated the NAS, which entailed bouncing both clusters.
Nothing in the logs, and there are no recent related updates. Both clusters are driven off largely shared/identical code, with minor furniture rearranging.
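If anyone wants to poke at it with me, the usual starting points look something like this; every name below is a placeholder, not my actual namespace or release:
```
# does the stuck claim ever get an event from the provisioner?
kubectl describe pvc some-claim -n some-namespace

# is the democratic-csi controller even seeing the request?
# (namespace/deployment names depend on how the chart was installed)
kubectl -n democratic-csi get pods
kubectl -n democratic-csi logs deploy/truenas-iscsi-controller --all-containers --tail=200
```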
#homelab #democratic #iscsi #truenas #k8s #kubernetes #raspberrypi #k3s
@lamp maybe they have multiple Windows VMs on their storage, so deduplication & caching of disks sped it up quite well...
After all, any datacenter will decouple compute, network and storage layers as much as they can, and said storage will be transparently connected [via #FibreChannel or #iSCSI (over #Ethernet)] to some giant-ass SAN to allow for seamless host migrations...
I literally built that shit not so long ago.
#ethernet #iscsi #fibrechannel
Got a Pi to be an iSCSI target, it just needed a full kernel rebuild...
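For anyone curious, the piece usually missing from a stock kernel is the LIO target support (CONFIG_TARGET_CORE / CONFIG_ISCSI_TARGET); once that's built in, the userspace side is roughly this sketch, with made-up names and sizes:
```
sudo apt install targetcli-fb
# file-backed LUN, then an iSCSI target to hang it off
sudo targetcli /backstores/fileio create pidisk /srv/iscsi/pidisk.img 16G
sudo targetcli /iscsi create iqn.2023-08.local.pi:pidisk
# ...then map the backstore as a LUN and add initiator ACLs under tpg1
```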
Notes on booting Raspberry Pi from the network using NFS or iSCSI
https://www.hardill.me.uk/wordpress/2023/08/05/network-booting-rapberry-pi/
#raspberrypi #iscsi #nfs #homelab
Thinking of scaling my #k8s microcluster and the storage issue came up again. I don't want to handle #iscsi or #nvmf by hand. Given I own the hypervisor infra I could just plug the block devices into machines, but then I'd have to actually write the code on both sides.
What's the current best option for networked block storage performance-wise?
@charlotte @erk @encthenet EXACTLY!
#Amazon has a vested interest in acting a bit more long-term.
Unlike #Microsoft's #EEE [ https://en.wikipedia.org/wiki/Embrace%2C_extend%2C_and_extinguish#Examples_by_Microsoft ], they want #S3 to become the de-facto standard; they already dominate #CloudComputing, and making shit easier on their platform will only work if it isn't exclusive.
Even if that means Microsoft ( #Azure ), #OVH, #Hetzner and even #Proxmox can do the same...
It also fixes a lot of issues #iSCSI has...
#iscsi #proxmox #Hetzner #OVH #azure #CloudComputing #S3 #eee #Microsoft #Amazon
@alina You could use #vmware #ESXi & provide #Storage via #iSCSI using #ixSystems' #TrueNAS Core, which then uses #ZFS under the hood.
Or you could try out @ubuntu if you don't need a simple dashboard and are fine with virsh and kvm/qemu run directly...
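For a feel of the "no dashboard" route, spinning up a guest looks roughly like this (VM name, sizes and ISO path are made up):
```
sudo apt install qemu-kvm libvirt-daemon-system virtinst
virt-install --name testvm --memory 4096 --vcpus 2 \
  --disk size=20 --cdrom /srv/iso/ubuntu-22.04-live-server-amd64.iso \
  --os-variant ubuntu22.04 --network network=default
```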
#ZFS #truenas #ixsystems #iscsi #storage #ESXi #vmware
#ServerBubble I have a question:
Does anyone have a comprehensive list of cheap #CPU's that can address huge amounts of (#ECC-) #RAM?
Ideally "cheapest CPU that can address X GB/TB RAM"?
Because I'd kinda like to build a sort-of #Ramdisk drive, but for use over the network (#iSCSI, #SMB, #SFTP, #FTPS) as a fast #scratchdisk.
https://en.m.wikipedia.org/wiki/RAM_drive#Dedicated_hardware_RAM_drives
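The iSCSI flavour of that idea can be sketched with LIO's ramdisk backstore; the size and IQN below are invented, and the contents are of course gone on every reboot:
```
sudo targetcli /backstores/ramdisk create scratch 64G
sudo targetcli /iscsi create iqn.2023-01.local.bigbox:scratch
# ...then map the ramdisk as a LUN under the new target's tpg1 and
# add ACLs for the initiators allowed to use the scratch space
```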
#scratchdisk #ftps #sftp #smb #iscsi #ramdisk #ram #ecc #cpu #serverbubble
#Today I managed to network boot a #raspberrypi 4 running #ubuntu with root dir via #iSCSI. No more pain-in-the-neck dead SD cards.
And it was all done with "simple" #foss software (tftp-hpa, open-iscsi and nfs-kernel-server). No bloat or unused features.
Today was a good day.
:blobcatcoffee:
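The interesting bit is the kernel command line that points the initramfs at the iSCSI root; roughly like this, with invented addresses and IQNs, and with the parameter names taken from open-iscsi's initramfs hook (double-check against the version you have installed):
```
# cmdline.txt (all on one physical line in the real file)
console=serial0,115200 console=tty1 ip=dhcp rootwait rw
root=/dev/sda2 rootfstype=ext4
ISCSI_INITIATOR=iqn.2023-01.local.pi4:initiator
ISCSI_TARGET_NAME=iqn.2023-01.local.nas:pi4root
ISCSI_TARGET_IP=192.168.1.10 ISCSI_TARGET_PORT=3260
```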
#today #raspberrypi #ubuntu #iscsi #foss
today:
* looked into a #fedora #rawhide update failing #openqa tests, found some dependency issues, rebuilt kf5-prison, fixed cryfs build and rebuilt that too: https://github.com/cryfs/cryfs/pull/448
* tested the proposed upstream fix for the #dracut bug from earlier, it works: https://github.com/dracutdevs/dracut/pull/2233
* reviewed blocker / fe bug votes and updated status
* investigated and filed a bug on #iscsi install failure in rawhide: https://bugzilla.redhat.com/show_bug.cgi?id=2173219
* now seeing if I can reproduce @Lobau 's #firefox homepage issue
#fedora #rawhide #openqa #dracut #iscsi #firefox
Mounting iSCSI LUN to Ubuntu using open-iscsi #networking #server #storage #nas #iscsi
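A minimal sketch of what that looks like, with the portal IP, IQN and mount point as placeholders:
```
sudo apt install open-iscsi
sudo iscsiadm -m discovery -t sendtargets -p 192.168.1.10
sudo iscsiadm -m node -T iqn.2023-01.local.nas:lun0 -p 192.168.1.10 --login
lsblk                      # spot the new /dev/sdX the LUN shows up as
sudo mkfs.ext4 /dev/sdb    # only if the LUN is still blank!
sudo mount /dev/sdb /mnt/iscsi
```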
@dch thanks a lot for the pointer, wasn't aware of #seaweedfs.
How do you deal with single nodes failing when exporting via #iSCSI or #NFS? Do you restore from backup and live with the downtime?
iSCSI automount #mount #fstab #automount #iscsi
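The gist, with a placeholder IQN and UUID: let open-iscsi restore the session at boot, and tell fstab the mount depends on the network:
```
# have the session come back automatically at boot
sudo iscsiadm -m node -T iqn.2023-01.local.nas:lun0 -p 192.168.1.10 \
  --op update -n node.startup -v automatic

# /etc/fstab -- _netdev delays the mount until networking is up,
# nofail keeps boot from hanging if the target is unreachable
UUID=1234abcd-0000-0000-0000-000000000000  /mnt/iscsi  ext4  _netdev,nofail  0  2
```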
I mentioned I missed a concept, so where am I failing?
Well, after migrating several VMs from the NFS storage over to the iSCSI+LVM storage, I noticed (by watching port activity on the NAS) that I was still only using a single path. There are two network paths to utilize, and I figured Multipath would handle that.
I then started testing by manually unplugging cables to force the second path, but that didn't work. I also tried toggling the second iSCSI connection within Proxmox, with no luck. The way I am adding the iSCSI connections is likely incorrect. What I was doing was using two different iSCSI portal IPs -- the two different IPs on the NAS -- but they both point to the same iSCSI IQN on the NAS.
I'm pretty sure I need to now generate a new iqn that targets the same LUN on the Synology side, then re-add the second path IP to point to the second iqn. I'm a bit hesitant to do this since last time I broke the multipathing when I removed an iSCSI connection from Proxmox and had to do a lot of work to fix it. To be continued...
It has been a good exercise in figuring out how this all works. If you have experience doing this it would be great to hear what your experience was like and how you've configured things.
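For anyone poking at the same thing, the state is easiest to read from two places (assuming open-iscsi and multipath-tools, which this setup already uses):
```
# one multipath map with two active paths is what "working" looks like
sudo multipath -ll

# and there should be one iSCSI session per portal IP
sudo iscsiadm -m session -P 1
```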
#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage
I've been meaning to revisit running iSCSI multipathing in my Proxmox cluster. I previously had it set up with a TrueNAS machine providing storage but I'm now utilizing a Synology. During the initial cluster configuration I attempted the iSCSI config but failed to get it working across all cluster nodes.
Instead, for several months I went with NFS, since it supports the most Proxmox content types (Disk image, Container template, Container, Snippets, VZDump backup file, ISO image), whereas the iSCSI + LVM option is more limited (Disk image, Container).
I finally revisited this and was able to get iSCSI, Multipath, and the LVM overlay working. I think I have missed one concept though and it's an important one that I need to validate. Before I get there I wanted to share the config items:
1. Synology: Set up Storage Volume
2. Synology: Set up LUN
3. Synology: Generate iSCSI iqn
4. Synology: Add the initiator IQNs of the cluster hosts
5. Proxmox: Add iSCSI target and iqn at the cluster level
6. Proxmox: Add iSCSI target 2 at the cluster level
7. Proxmox shell: Install open-iscsi and multipath-tools if you haven't already
8. Proxmox shell: Verify the WWIDs of the newly appeared /dev/sdb and /dev/sdc (example disk names), ensuring the WWIDs match and belong to the correct iSCSI targets.
9. Proxmox shell: Configure /etc/multipath.conf to match your storage device, including blacklisting multipath management of all devices except the explicit WWID of your iSCSI devices (a rough sketch of this and the LVM steps follows the list).
10. Proxmox shell: Restart multipathd. Once the multipath alias device appears you will be able to see it as an LVM Physical Volume (PV) with pvdisplay.
11. Proxmox shell: You may now generate an LVM Volume Group (VG) which will appear across the whole cluster.
12. Proxmox: You can now add an LVM overlay at the cluster level by selecting your new Volume Group.
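As promised, a rough sketch of steps 9-12; the WWID, alias and volume group name are invented, so match them to your own multipath -ll output:
```
# /etc/multipath.conf (step 9): ignore everything except the one WWID
blacklist {
    wwid ".*"
}
blacklist_exceptions {
    wwid "36001405aabbccddeeff00112233445566"
}
multipaths {
    multipath {
        wwid  "36001405aabbccddeeff00112233445566"
        alias syno-lun0
    }
}
```
And the shell/GUI side of steps 10-12:
```
sudo systemctl restart multipathd
sudo multipath -ll                        # the syno-lun0 map should appear
sudo pvcreate /dev/mapper/syno-lun0
sudo vgcreate vg_syno_iscsi /dev/mapper/syno-lun0
# then Datacenter -> Storage -> Add -> LVM, pick vg_syno_iscsi and tick "Shared"
```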
Now I'm able to use my iSCSI-backed LVM volume across all clustered nodes for HA of VMs and Containers.
#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage
@fyw321 @geerlingguy My 8 node cluster costs around 55W via POE. It is 3 pi4 8G control+worker nodes, and 5 pi4 4G worker-only nodes (1 is actually 8G.)
Storage is #iscsi on spinning rust on #TrueNAS, but you can do all this on local disks.
It handles a LOT:
- #Promstack, #FluxCD, #Calico, various scrapers
- #Adguard
- #Blocky (another adblocking #dns)
- #Jellyfin
- #Ombi, Prowlarr, *arr, Deluge
- Home automation helpers (#ser2sock instances, #zigbee/#zwave 2mqtt, but not HA itself)
- #Argo for builds (deprecated in favor of the x64 cloud lab. Building x64 docker containers on arm is BAD)
- Democratic CSI for iscsi/nfs
- #GoHarbor container registry & #dockerhub cache
- #Mealie recipe manager
- #Monica contact manager
- #SMTP relay to gmail
- #Ubiquiti console (bootstrapping becomes a chicken and egg problem though, if it goes down wrong)
- #Wireguard #VPN server
- #Whoogle
- #VisualStudioCode and a #dind sidecar
- SSL termination for most of the rest of the network
#iscsi #truenas #promstack #fluxcd #calico #adguard #blocky #dns #jellyfin #ombi #ser2sock #zigbee #argo #goharbor #dockerhub #mealie #monica #smtp #ubiquiti #wireguard #vpn #whoogle #VisualStudioCode #dind
@mikalai @cedi *nods in agreement*
We ain't talking about some high-performance #iSCSI box that is used to provide block storage for dozens of VMs, but just a data landfill that should be sufficiently stable and working with existing backup & restore protocols in effect.
Considering the budget target of $1k, it's worth looking into #UsedServers and #refurbished IT, since #performance is secondary and usability & maintainability are more important.
#performance #refurbished #usedservers #iscsi