#Writing a #php #website #downloader seems to be a daunting task.
Neither #curl, nor #wget, nor even the built-in fetch gets the job done.
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent https://vgmdb.net/album/131275
doesn't work.
And no matter what I do, it just won't #download the associated images.
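One thing that often explains this: the images sit on a different host than the page itself, and wget won't cross hosts unless told to. A sketch of what I'd try (the second domain is a placeholder, check which host the <img> tags actually point at):
wget --mirror --convert-links --adjust-extension --page-requisites --no-parent --span-hosts --domains=vgmdb.net,MEDIA-HOST-GOES-HERE https://vgmdb.net/album/131275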
#writing #php #website #downloader #curl #wget #download #wtf #brokencodealcea
@bagder @BrodieOnLinux also because #wget as implemented in #Toybox doesn't support #SSL unless one fiddles with it or can use the pre-made binaries...
Sadly, #toybox doesn't like to build its #wget with #SSL / #TLS due to a missing header file...
https://github.com/OS-1337/OS1337/issues/1
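What I'd try, as a rough and untested sketch (it assumes the missing header is tls.h and that toybox's wget gets its TLS from libtls via the WGET_LIBTLS option; the package name is Debian's):
apt install libretls-dev   # or install LibreSSL's libtls headers by hand
make menuconfig            # enable WGET and WGET_LIBTLS in the toybox config
make toybox                # add LDFLAGS=-ltls if the link step still can't find libtls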
@whitekiba I mean, I'm not looking for computational efficiency...
OFC modern crypto needs a lot of processing power...
But I'd even consider doing a #ramdisk and making a script that #wget's #dropbear as a working hack...
Even if that makes it a sort-of "#netlive" [ #netinstaller meets #live #linux] workaround...
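Roughly what I have in mind, as a sketch (the download URL is a placeholder and it assumes a static dropbear binary):
mount -t tmpfs -o size=32M tmpfs /mnt/ram                        # the #ramdisk
wget -O /mnt/ram/dropbear https://example.org/dropbear-static    # placeholder URL
chmod +x /mnt/ram/dropbear
/mnt/ram/dropbear -R -p 22                                       # -R generates host keys on the fly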
#Linux #Live #netinstaller #netlive #dropbear #wget #ramdisk
Hey #mastodon #fedivers @sebsauvage Does anyone know how to #sauvegarder (back up) a #flipboard page that uses infinite scroll (continuous loading)?
Using #curl, #wget, or any other means.
I've already tried #Firefox with an auto-scroll #script, but the memory consumption is astronomical and the result isn't usable. Same with #python scrapers.
I know, I should have used #shaarli for my #bookmark needs from the start instead of a proprietary, commercial solution...
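One angle that skips the browser entirely: infinite-scroll pages usually pull each new chunk from an XHR/JSON endpoint that you can spot in the network panel and replay with curl. A very rough sketch, with a completely made-up endpoint and paging parameter:
for page in $(seq 1 50); do
  curl -s "https://flipboard.com/SOME-ENDPOINT?page=$page" >> dump.json   # hypothetical URL and parameter
done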
#mastodon #fedivers #sauvegarder #flipboard #curl #wget #firefox #script #python #shaarli #bookmark
@bagder also #curl is way more versatile and useful than #wget and is available as a #standalone #binary:
No need to fiddle with shit: #ItJustWorks!
#itjustworks #binary #standalone #wget #curl
@Llammissar @darthyoshiboy @benjedwards let's just say I love how #curl is a portable, single executable.
OFC one needs to tell curl explicitly where to read/save the input/output from/to, whereas #wget does that automagically.
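E.g., for the same file (curl's -O just reuses the remote filename, which is what wget does by default):
curl -O https://example.com/file.tar.gz
wget https://example.com/file.tar.gz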
I wish #GnuPG were equally elegant and just allowed something like:
gpg --encrypt -s unencrypted.txt -k pubkey.asc -o encrypted.txt.gpg
&
gpg --decrypt -s encrypted.txt.gpg -k privkey.asc -o unencrypted.txt
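For comparison, the closest real invocation I know of is wordier, and it wants the recipient's key imported into the keyring first rather than passed as a loose pubkey.asc (the address after --recipient is a placeholder for whatever key ID you imported):
gpg --import pubkey.asc
gpg --encrypt --sign --recipient alice@example.com --output encrypted.txt.gpg unencrypted.txt
gpg --decrypt --output unencrypted.txt encrypted.txt.gpg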
@DarthYoshiBoy @benjedwards also #curl just works, and not just for pulling stuff like #wget, but also for #POST & #PUSH!
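E.g., with a made-up endpoint (-d sends a POST body; -T uploads a file with PUT, which is what I take #PUSH to mean):
curl -d 'name=value' https://example.com/api
curl -T backup.tar.gz https://example.com/uploads/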
@YourAnonRiots I just keep a portable #Windows binary of #wget accessible, or use #certutil for that...
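The certutil one-liner, for reference (URL and filename are just examples):
certutil -urlcache -split -f https://example.com/wget.exe wget.exe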
Does anyone have any suggestions for open source software that can crawl a website, and then display it graphically as a hierarchy of pages including any links between the pages?
I'm thinking along the lines of #wget in spider mode to crawl, and maybe #graphviz to create the diagram, but I'm not set on these.
I might be able to write some basic script to do this with plenty of time, but don't want to reinvent the wheel if someone has already done it. Suggestions welcome.
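In case it helps, the basic-script version I'd start from, as an untested sketch: mirror the site, scrape href attributes with grep (so it only sees plain static links), and feed the edges to #graphviz; example.com is a placeholder.
wget --mirror --no-parent --adjust-extension https://example.com/
{
  echo 'digraph site {'
  find example.com -name '*.html' | while read -r page; do
    grep -o 'href="[^"]*"' "$page" | sed 's/href="//;s/"$//' | while read -r target; do
      printf '  "%s" -> "%s";\n' "$page" "$target"
    done
  done
  echo '}'
} > site.dot
dot -Tsvg site.dot -o site.svg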
Just heard my wife sing to herself (while working at her laptop), "🎶 #wget to the rescuuuuue🎶" 😀
I also think that with
wget -O - https://russia.com/gutesw.sh | bash
this is not a good idea; I would download the file gutesw.sh first and only run it after that. If you pipe it straight into bash and a disconnect happens partway through, the program just stops running, and in the worst case the system is wrecked and no longer bootable. Or am I wrong about that?
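The download-first version spelled out (keeping the gutesw.sh example):
wget https://russia.com/gutesw.sh
less gutesw.sh     # read it before running anything
bash gutesw.sh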
@leobm @Perl At minimum you need exactly one thing from #CPAN in dev to snapshot and pin all your upstream #Perl dependencies: https://metacpan.org/pod/Carton
At minimum you need exactly one thing in #CI and production: https://metacpan.org/pod/cpanm, which you can download standalone with #curl or #wget: https://metacpan.org/pod/App::cpanminus#Downloading-the-standalone-executable
Everything else is bells and whistles
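E.g., roughly (cpanmin.us is the commonly used shortcut URL; the linked App::cpanminus page has the canonical download instructions):
curl -fsSL https://cpanmin.us -o cpanm && chmod +x cpanm   # or: wget -O cpanm https://cpanmin.us
carton install             # dev: resolve deps and write cpanfile.snapshot
./cpanm --installdeps .    # CI/production: install what the cpanfile lists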
Wget always downloading index.html? #firefox #downloads #filesharing #wget #webapps
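Usually this means the URL points at a directory listing, which wget saves as index.html; a sketch of grabbing the linked files instead (placeholder URL):
wget -r -np -nH --cut-dirs=1 -R "index.html*" https://example.com/files/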