hmm, WebFinger, CardDAV, and CalDAV are being a PITA to get all green on my #nextcloud. Once I do, I should be able to have my #nextcloud user posting on the #fediverse via the Social app.
It's probably something to do with #caprover's nginx; it's always caprover's nginx (a sketch of the usual fix is below).
The instance now has #OpenStreetMaps, #Peertube search, and #mastodon search (link accounts).
I really need to master making my own caprover installs; then I can make sure everything is set up tighter than these app defaults are.
#nextcloud #fediverse #caprover #openstreetmaps #peertube #mastodon
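In case it helps anyone else staring at the same red checks: the usual culprit behind a proxy is missing /.well-known redirects. These lines are from Nextcloud's example nginx config in its admin docs; where exactly they belong inside CapRover's per-app nginx override is an assumption on my part.

```nginx
# /.well-known redirects from Nextcloud's example nginx config;
# their placement inside CapRover's nginx override is an assumption
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav  { return 301 /remote.php/dav/; }
# catch-all that also covers webfinger and nodeinfo:
location ^~ /.well-known        { return 301 /index.php$uri; }
```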
Goddammit, I think I need to migrate off #Caprover. Their support, and the general organization of that support between the apps and caprover itself, is utter dogshit. When you cannot post app-related issues to the GitHub, you're told to go to the app-related forum. Well, WHERE THE FUCK ARE THEY?!?!
One #nextcloud install's cron behaves perfectly, the other's doesn't. They were installed at the same time, the same way, and there's nowhere I can report it or get help figuring out why.
Shit I need to do this week:
- Finish my FoodSafe Level 1 course
- Update all software on my server
- Find the bloat happening on my server (I suspect it's MySQL binary logs) and write a purge script. Then file a #caprover ticket so they can ignore that their installations cause log bloat.
- File a ticket with #peertube and the live-stream plugin to work out this missing-webhook issue on my instance.
- Clean up my #nextcloud. I need a better solution for streaming music that takes up less space.
#caprover #peertube #nextcloud
Wow, what an annoyance: the #caprover install for #wordpress leaves binary logging on in MySQL with nothing set to clean the logs up, so they just grow... wtf. Good thing I'm only hosting 4 WordPress sites. Going to need to get into their MySQL, turn that off, and purge the logs; it's racked up 8 GB between them all.
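Here's the shape of the purge script I have in mind: a minimal Python sketch, assuming one MySQL container per WordPress app. The container names and the root password handling are placeholders, not what CapRover actually generates; check `docker ps` on your server for the real names.

```python
#!/usr/bin/env python3
"""Sketch of a binlog purge for MySQL containers managed by CapRover.

Container names and the root password below are placeholders; adjust them
to match `docker ps` and your apps' configured credentials.
"""
import subprocess

CONTAINERS = ["srv-captain--site1-db", "srv-captain--site2-db"]  # placeholders
ROOT_PASSWORD = "change-me"  # placeholder; read from the app's env in practice

# drop binary logs older than 7 days
SQL = "PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;"

for name in CONTAINERS:
    # the mysql client ships inside the official MySQL images
    subprocess.run(
        ["docker", "exec", name, "mysql", "-uroot", f"-p{ROOT_PASSWORD}", "-e", SQL],
        check=True,
    )
```

To stop the logs growing back, the server settings to look at are `expire_logs_days` (MySQL 5.7) or `binlog_expire_logs_seconds` (MySQL 8), or disabling binary logging outright if you don't need point-in-time recovery.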
Praying to the GitHub issue gods that my request on this stupid #websockets issue with #PeerTube and #Caprover reveals a solution. It's the last major error in the DOM when I inspect, and I suspect it is also causing the issue with my stream hanging when being watched on other federated instances.
I know it has to be something that goes into #NGINX, as mentioned by PeerTube, but nothing I do changes a thing...
If any PeerTube or CapRover admins have an idea of how to resolve this, I'm all ears.
#websockets #peertube #caprover #nginx
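For reference, the piece of nginx config PeerTube's example setup uses for websockets is the standard HTTP Upgrade handshake. This is a sketch of that stanza; the proxy_pass target is a placeholder for whatever address CapRover routes to internally.

```nginx
# standard websocket upgrade handshake, as in PeerTube's example nginx config;
# the proxy_pass target below is a placeholder for the app's internal address
location /socket.io {
    proxy_pass http://127.0.0.1:9000;  # placeholder upstream
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```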
Sweet, my hosted #Nextcloud instances updated smoothly. #PeerTube is at the version right before 5; I need to work out how to get it over to that version without boot-looping. The server setup is:
#Ubuntu
#Docker
#Caprover
#Peertube
If anyone out there has encountered this and has a fix, some intel would be greatly appreciated.
#nextcloud #peertube #ubuntu #docker #caprover
Holy shit, is #PeerTube a SON OF A BITCH to upgrade to v5 within #CapRover. I need to add a secret to the config file, but because the secret isn't in the config file, the #Docker container keeps dying, stopping me from updating the config file.
Anyone else out there encounter this in an upgrade and know a way around this catch-22?
On the brighter side, I should be able to update Nextcloud no problem.
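Update for anyone hitting the same wall: one possible escape hatch, assuming the image honors it, is the PEERTUBE_SECRET environment variable documented in PeerTube's Docker setup. Supplying the secret through CapRover's app-level environment variables would let the container boot without having to edit the config file inside it first.

```
# generate a secret anywhere (this matches how PeerTube's docs generate it):
openssl rand -hex 32

# then, in CapRover, add it under App Configs → Environmental Variables:
PEERTUBE_SECRET=<paste the generated value>
```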
Whew, I just double-checked: I set up persistent data in #caprover on my server for #PeerTube.
I need to research some pretty large updates to the instance, and having the data persistent is key. I found out that setting up persistence after there is already data wipes it.
I'm hopeful that after I've taken a full server backup and gotten the switch/update process down, davbot.work will have some nice things like cross-server live chat, rather than needing to be directly on the domain to live chat.
After a lot of tinkering I finally have a #GitHub Action that builds and deploys a #Docker image to #CapRover: https://gist.github.com/adamghill/e63556cb9dbd0ee85dc0334549a7a00f.
There are a lot of half-working Stack Overflow posts and actions floating around, but I'm pretty happy with the solution I cobbled together (rough skeleton after the pro/con list below).
Pro:
- my CapRover server doesn't need to build Docker images anymore, which used to spike CPU and memory on every deploy
Con:
- one more place to look when troubleshooting a deploy problem
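The gist above has the real thing; this is just a rough skeleton of the approach. The registry, app name, and secret names are placeholders, and the caprover CLI flags are as I understand its deploy command.

```yaml
# rough skeleton, not the gist verbatim; registry, app name,
# and secret names are placeholders
name: Deploy to CapRover
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      # build on GitHub's runner so the CapRover server never builds anything
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
      # point CapRover at the pre-built image
      - run: |
          npm install -g caprover
          caprover deploy \
            --caproverUrl ${{ secrets.CAPROVER_URL }} \
            --appToken ${{ secrets.CAPROVER_APP_TOKEN }} \
            --appName my-app \
            --imageName ghcr.io/${{ github.repository }}:${{ github.sha }}
```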
@pixelfed unfortunately I cannot upgrade #Pixelfed on my #yunohost instance, and there is no one-click app available for #caprover. It takes a lot of time to fix those problems.
#selfhosting
#pixelfed #yunohost #caprover #selfhosting
Through @valentin I discovered FOSDEM.
And today he helped me understand how to create my own one-click apps in #CapRover for #selfhosting (a minimal template sketch is below).
Thank you Valentin and good luck with your project https://joingardens.com
#fosdem #Fosdem2023 #caprover #selfhosting
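For anyone curious what a one-click app even is: it's a YAML template like those in the caprover/one-click-apps repo. This is a minimal sketch of the format as I understand it; the image, variable, and text fields are all placeholders.

```yaml
# minimal one-click-app template sketch, following the format used in the
# caprover/one-click-apps repo; all values here are placeholders
captainVersion: 4
services:
    $$cap_appname:
        image: nginx:$$cap_nginx_version
        caproverExtra:
            containerHttpPort: '80'
caproverOneClickApp:
    variables:
        - id: $$cap_nginx_version
          label: nginx version
          defaultValue: '1.25'
    instructions:
        start: Deploys a bare nginx container as a demo.
        end: Done! Your demo app is deployed.
    displayName: Demo nginx
    isOfficial: false
    description: Smallest possible template, for learning the format.
    documentation: https://caprover.com/docs/one-click-apps.html
```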
#Mastodon
Building a small instance for myself via #CapRover.
#SMTP isn't working, and the configuration is giving me a massive #Kopfzerbrechen (headache).
Even after hours, no solution found.
A shame when you're not a #Profi (pro) but still want to self-host.
#mastodon #caprover #smtp #Kopfzerbrechen #profi
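In case someone else gets stuck here: Mastodon reads its mail settings from environment variables (documented in its .env.production.sample). These are the core ones, with placeholder values for a typical provider:

```
# core SMTP variables from Mastodon's .env.production.sample;
# all values below are placeholders
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_LOGIN=mastodon@example.com
SMTP_PASSWORD=app-password-here
SMTP_FROM_ADDRESS=notifications@example.com
```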
The Essential Django Deployment Guide by Cory Zue is ridiculously complete: https://www.saaspegasus.com/guides/django-deployment/.
If you read that and want to try out a self-hosted PaaS, I wrote a step-by-step guide to deploying #django to #DigitalOcean using #CapRover: https://alldjango.com/articles/serve-multiple-django-sites-from-one-cloud-server.
#caprover #digitalocean #django
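If you just want the gist of what CapRover needs from a repo: a captain-definition file at the root. A minimal one for a Dockerized Django project (the Dockerfile path is an assumption about your layout) looks like:

```json
{
    "schemaVersion": 2,
    "dockerfilePath": "./Dockerfile"
}
```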
I’ve been spending a bunch of time figuring out how to deploy #django sites to #DigitalOcean using #caprover.
Pro: host multiple low-traffic sites together for $6/month.
Con: way more complicated than something like Heroku/Render. Still easier than dealing with AWS imo, though.
#caprover #digitalocean #django
@tobide Ok, I've deployed my devmarks.io background worker using #caprover on #DigitalOcean and am pretty happy with the process. I can definitely see the appeal. One annoyance is that installing the requirements is super slow -- still trying to figure out why.
Currently spending $25/month just for the worker process on Render. On DO it is $14/month (it consumes a ton of memory). Plus a $5/month donation to caprover. But I can use the instance in the future, FTW.
Thanks again for the rec!