Dis · @dis
145 followers · 809 posts · Server techhub.social

I guess I'm moving off the old cluster faster than I thought. So much for weekend plans. 😿

If anyone wants to offer advice, the setup is democratic-csi to TrueNAS iSCSI (mostly XFS, a mistake I've corrected moving forward).
The old cluster has decided to just ignore PVCs. The replacement cluster uses the same credentials (shh) but a different path and prefix, and it is still working fine. I've tried restarting the associated controllers/daemonsets, and even bounced the bad cluster and updated the NAS, which entailed bouncing both clusters.

Nothing in the logs, and there are no recent related updates. Both clusters are driven off largely shared/identical code, with minor furniture rearranging.
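
For anyone debugging something similar, a first-pass triage sketch (the namespace, claim name, and label selector below are placeholders, not the poster's actual setup) would be to compare what the CSI side sees on both clusters:

```shell
#!/bin/sh
# PVC triage sketch -- NS, PVC, CSI_NS, and the label selector are
# hypothetical; substitute the values from your own install.
NS=default
PVC=my-claim
CSI_NS=democratic-csi

if command -v kubectl >/dev/null 2>&1; then
  # Why is the claim stuck? The events usually name the failing CSI call.
  kubectl describe pvc "$PVC" -n "$NS"

  # Does the provisioner even see the claim? Check the driver pods' logs.
  kubectl -n "$CSI_NS" logs -l app.kubernetes.io/name=democratic-csi \
    --all-containers --tail=100

  # Is the CSIDriver object still registered on the bad cluster?
  kubectl get csidrivers
else
  echo "kubectl not found; run this from a machine with cluster access"
fi
STATUS=done
```

Comparing the `describe pvc` events between the working and broken cluster usually narrows it to provisioning vs. attach/mount.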

#homelab #democratic #iscsi #truenas #k8s #kubernetes #raspberrypi #k3s

Last updated 1 year ago

Kevin Karhan :verified: · @kkarhan
1446 followers · 103553 posts · Server mstdn.social

@lamp maybe they have multiple Windows VMs on their storage, so deduplication & caching of disks sped it up quite well...

After all, any datacenter will decouple the compute, network and storage layers as much as they can, and said storage will be transparently connected (via iSCSI or FibreChannel, possibly over Ethernet) to some giant-ass SAN to allow for seamless host migrations...

I literally built that shit not so long ago.

#ethernet #iscsi #fibrechannel

Last updated 1 year ago

AskUbuntu · @askubuntu
222 followers · 1792 posts · Server ubuntu.social

Pacemaker with scsi fence device

askubuntu.com/q/1483102/612

#cluster #iscsi

Last updated 1 year ago

Ben Hardill · @ben
173 followers · 1029 posts · Server bluetoot.hardill.me.uk

Got a Pi to be an iSCSI Target, just needed a full kernel rebuild...

#raspberrypi #iscsi

Last updated 1 year ago

Ben Hardill · @ben
173 followers · 1019 posts · Server bluetoot.hardill.me.uk

Notes on booting Raspberry Pi from the network using NFS or iSCSI

hardill.me.uk/wordpress/2023/0

#raspberrypi #iscsi #nfs #homelab

Last updated 1 year ago

farcaller · @farcaller
109 followers · 1287 posts · Server hdev.im

Thinking of scaling my microcluster, and the storage issue came up again. I don't want to handle iSCSI or NVMe-oF by hand. Given I own the hypervisor infra I could just plug the block devices into the machines, but then I'd have to actually write the code on both sides.

What's the current best option for networked block storage performance-wise?

#k8s #iscsi #nvmf

Last updated 1 year ago

Kevin Karhan :verified: · @kkarhan
1010 followers · 60921 posts · Server mstdn.social

@charlotte @erk @encthenet EXACTLY!

Amazon has a vested interest to act a bit more long-term.
Unlike Microsoft's EEE [en.wikipedia.org/wiki/Embrace%], they want S3 to become the de-facto standard, as they already dominate cloud computing, and making shit easier on their platform will only work if it isn't exclusive.

Even if that means Microsoft (Azure), Hetzner, and even OVH can do the same...

It also fixes a lot of issues iSCSI has...

#iscsi #proxmox #Hetzner #OVH #azure #CloudComputing #S3 #eee #Microsoft #Amazon

Last updated 1 year ago

Kevin Karhan :verified: · @kkarhan
945 followers · 53964 posts · Server mstdn.social

@alina You could use VMware ESXi & provide storage via iSCSI using iXsystems' TrueNAS Core, which then uses ZFS under the hood.

Or you could try out @ubuntu if you don't need a simple dashboard and are fine with virsh and KVM/QEMU being run directly...

#ZFS #truenas #ixsystems #iscsi #storage #ESXi #vmware

Last updated 1 year ago

Kevin Karhan :verified: · @kkarhan
875 followers · 47159 posts · Server mstdn.social

I have a question:
Does anyone have a comprehensive list of cheap CPUs that can address huge amounts of (ECC) RAM?

Ideally "cheapest CPU that can address X GB/TB RAM"?

Because I'd kinda like to make a sort-of RAM drive, but for use over the network (FTPS, SFTP, SMB, iSCSI) as a fast scratch disk.
en.m.wikipedia.org/wiki/RAM_dr
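
For the iSCSI side of that idea, a minimal sketch with targetcli (LIO) could look like this — the IQN and sizes are invented, it needs root and the targetcli-fb package, and of course the contents vanish on reboot:

```shell
#!/bin/sh
# Sketch: serve RAM as an iSCSI LUN via LIO. LIO has a native ramdisk
# backstore, so no tmpfs-backed file is needed. ACLs and portal tuning
# are omitted here; the IQN is a made-up example.
if command -v targetcli >/dev/null 2>&1; then
  targetcli /backstores/ramdisk create name=rd0 size=16G
  targetcli /iscsi create iqn.2024-01.lab.example:ramscratch
  targetcli /iscsi/iqn.2024-01.lab.example:ramscratch/tpg1/luns \
    create /backstores/ramdisk/rd0
else
  echo "targetcli not installed; this is only a sketch"
fi
STATUS=done
```

SMB/FTP on top of a plain tmpfs mount would cover the other protocols without any special hardware.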

#scratchdisk #ftps #sftp #smb #iscsi #ramdisk #ram #ecc #cpu #serverbubble

Last updated 1 year ago

Kevin Karhan :verified: · @kkarhan
805 followers · 40077 posts · Server mstdn.social

@kwf o.o

Damn, do you use iSCSI for storage on those, or why?

#storage #iscsi

Last updated 1 year ago

Cornelius K. · @kln
37 followers · 433 posts · Server mstdn.io

I managed to network boot a Raspberry Pi 4 running Ubuntu with its root dir via iSCSI. No more pain-in-the-neck dead SD cards.

And it was all done with "simple" software (tftp-hpa, open-iscsi and nfs-kernel-server). No bloat or unused features.

Today was a good day.

:blobcatcoffee:
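
For reference, the open-iscsi initramfs on Debian/Ubuntu can take its target from kernel parameters roughly like the following (every address and IQN here is an invented placeholder, and on a Pi the whole cmdline must stay on a single line):

```
# cmdline.txt sketch -- IPs and IQNs are placeholders
console=serial0,115200 ip=dhcp
ISCSI_INITIATOR=iqn.2024-01.lab.example:pi4
ISCSI_TARGET_NAME=iqn.2024-01.lab.example:pi4-root
ISCSI_TARGET_IP=192.168.1.10 ISCSI_TARGET_PORT=3260
root=/dev/sda2 rootfstype=ext4 rootwait
```

The same values can alternatively live in /etc/iscsi/iscsi.initramfs before regenerating the initramfs.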

#today #raspberrypi #ubuntu #iscsi #foss

Last updated 1 year ago

Adam Williamson :fedora: · @adamw
209 followers · 263 posts · Server fosstodon.org

today:
* looked into a Fedora update failing openQA tests, found some dependency issues, rebuilt kf5-prison, fixed the cryfs build and rebuilt that too: github.com/cryfs/cryfs/pull/44
* tested the proposed upstream fix for the bug from earlier, it works: github.com/dracutdevs/dracut/p
* reviewed blocker / fe bug votes and updated status
* investigated and filed a bug on install failure in rawhide: bugzilla.redhat.com/show_bug.c
* now seeing if I can reproduce @Lobau 's homepage issue

#fedora #rawhide #openqa #dracut #iscsi #firefox

Last updated 1 year ago

AskUbuntu · @askubuntu
38 followers · 1136 posts · Server ubuntu.social

Mounting iSCSI LUN to Ubuntu using open-iscsi

askubuntu.com/q/1455763/612

#networking #server #storage #nas #iscsi

Last updated 1 year ago

@dch Very useful, thank you very much!

And the remaining (iSCSI/ZFS) block storage you need, you sync to a second box as a backup solution?

@meka

#iscsi

Last updated 1 year ago

@dch thanks a lot for the pointer, I wasn't aware of SeaweedFS.

How do you deal with single nodes failing when exporting via iSCSI or NFS? Do you restore from backup and live with the downtime?

@meka

#seaweedfs #iscsi #nfs

Last updated 1 year ago

AskUbuntu · @askubuntu
25 followers · 535 posts · Server ubuntu.social

I mentioned I missed a concept, so where am I failing?

Well I noticed after migrating several VMs from the NFS storage over to the iSCSI+LVM storage that I was still only using a single path to the NAS by watching the port activity on the NAS. There are two network paths to utilize and I figured Multipath would handle that.

I then started testing by manually unplugging cables to force traffic onto the second path, but that didn't work. I also tried toggling the second iSCSI connection within Proxmox, with no luck. The way I am adding the iSCSI connections is likely incorrect: what I was doing was using two different iSCSI portal IPs (the two different IPs on the NAS), but they both point to the same iSCSI IQN on the NAS.

I'm pretty sure I need to now generate a new iqn that targets the same LUN on the Synology side, then re-add the second path IP to point to the second iqn. I'm a bit hesitant to do this since last time I broke the multipathing when I removed an iSCSI connection from Proxmox and had to do a lot of work to fix it. To be continued...

It has been a good exercise in figuring out how this all works. If you have experience doing this it would be great to hear what your experience was like and how you've configured things.
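
For anyone in the same spot, a quick way to see whether both portals actually carry sessions is something along these lines (a sketch, run as root on a node; it only reports, it changes nothing):

```shell
#!/bin/sh
# Sketch: confirm one iSCSI session per portal IP and that multipath
# actually aggregates them into one device with two paths.
if command -v iscsiadm >/dev/null 2>&1; then
  iscsiadm -m session -P 1   # expect one session per portal IP
else
  echo "iscsiadm not installed"
fi
if command -v multipath >/dev/null 2>&1; then
  multipath -ll              # the mpath device should list two paths
else
  echo "multipath not installed"
fi
STATUS=done
```

If `multipath -ll` shows only one path while two sessions exist, the problem is in multipath.conf rather than in the iSCSI layer.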

#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage

Last updated 2 years ago

I've been meaning to revisit running iSCSI multipathing in my Proxmox cluster. I previously had it set up with a TrueNAS machine providing storage but I'm now utilizing a Synology. During the initial cluster configuration I attempted the iSCSI config but failed to get it working across all cluster nodes.

Instead for several months I decided to go with NFS since it has the most options to store Proxmox data (Disk image, Container template, Container, Snippets, VZDump backup file, ISO image) whereas the iSCSI + LVM option has more limits (Disk image, Container).

I finally revisited this and was able to get iSCSI, Multipath, and the LVM overlay working. I think I have missed one concept though and it's an important one that I need to validate. Before I get there I wanted to share the config items:

1. Synology: Set up Storage Volume
2. Synology: Set up LUN
3. Synology: Generate iSCSI iqn
4. Synology: Add the initiator IQNs of the cluster machines
5. Proxmox: Add iSCSI target and iqn at the cluster level
6. Proxmox: Add iSCSI target 2 at the cluster level
7. Proxmox shell: Install open-iscsi and multipath-tools if you haven't already
8. Proxmox shell: Verify the WWIDs of the newly generated /dev/sdb, /dev/sdc (example disk names), ensuring that the WWIDs match and are the correct iSCSI targets.
9. Proxmox shell: Configure /etc/multipath.conf to match your storage device, including denying multipath management of all devices except for the explicit WWID of your iSCSI devices.
10. Proxmox shell: Restart multipathd. Once the multipath alias device appears you will be able to see it as an LVM Physical Volume (PV) with pvdisplay.
11. Proxmox shell: You may now generate an LVM Volume Group (VG) which will appear across the whole cluster.
12. Proxmox: You can now add an LVM overlay at the cluster level by selecting your new Volume Group.

Now I'm able to use my iSCSI-backed LVM volume across all clustered nodes for HA of VMs and Containers.
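
For step 9, the blacklist-everything-except-my-WWID pattern looks roughly like this (the WWID value and alias below are placeholders, not the poster's actual config):

```
# /etc/multipath.conf sketch -- substitute your LUN's real WWID
defaults {
    user_friendly_names yes
}
blacklist {
    wwid ".*"
}
blacklist_exceptions {
    wwid "36001405aaaabbbbccccdddd"
}
multipaths {
    multipath {
        wwid  "36001405aaaabbbbccccdddd"
        alias synology-lun0
    }
}
```

Blacklisting everything by default keeps multipathd from grabbing local boot disks, which is why step 9 singles out the explicit WWID.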

#homelab #selfhosted #proxmox #synology #iscsi #cluster #storage

Last updated 2 years ago

Dis · @dis
39 followers · 117 posts · Server techhub.social

@fyw321 @geerlingguy My 8-node cluster draws around 55W via PoE. It is 3 Pi 4 8G control+worker nodes and 5 Pi 4 4G worker-only nodes (1 is actually 8G).

Storage is on spinning rust on TrueNAS, but you can do all this on local disks.

It handles a LOT:
- Promstack, FluxCD, Calico, various scrapers
- AdGuard
- Blocky (another adblocking DNS)
- Jellyfin
- Ombi, Prowlarr, *arr, Deluge
- Home automation helpers (ser2sock instances, zigbee/zwave 2mqtt, but not HA itself)
- Argo for builds (deprecated in favor of the x64 cloud lab. Building x64 docker containers on arm is BAD)
- Democratic CSI for iscsi/nfs
- Harbor container registry & Docker Hub cache
- Mealie recipe manager
- Monica contact manager
- SMTP relay to Gmail
- Ubiquiti console (bootstrapping becomes a chicken and egg problem though, if it goes down wrong)
- WireGuard VPN server
- Whoogle
- Visual Studio Code and a dind sidecar
- SSL termination for most of the rest of the network

#iscsi #truenas #promstack #fluxcd #calico #adguard #blocky #dns #jellyfin #ombi #ser2sock #zigbee #argo #goharbor #dockerhub #mealie #monica #smtp #ubiquiti #wireguard #vpn #whoogle #VisualStudioCode #dind

Last updated 2 years ago

Kevin Karhan :verified: · @kkarhan
470 followers · 13560 posts · Server mstdn.social

@mikalai @cedi *nods in agreement*

We ain't talking about some high-performance box that is used to provide block storage for dozens of VMs, but just a data landfill that should be sufficiently stable and working with existing backup & restore protocols in effect.

Considering the budget target of $1k, it's worth looking into refurbished and used servers, since performance is secondary and usability & maintainability are more important.

#performance #refurbished #usedservers #iscsi

Last updated 2 years ago