Needed to compress a directory, and noticed that my script was lacking the ability to perform that action… so I’ve added a directory check to the xz9 compress script 💫
It should work; if you encounter problems, let me know ^^
It creates an .xz for a file, and a .tar.xz for a directory 📦
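A minimal sketch of what such a directory check can look like (the function name and flags here are illustrative; the real xz9 script in furaUtils may differ):

```shell
#!/bin/sh
# Hypothetical sketch of the behaviour described above, not the real xz9 script.
xz9() {
    target=${1%/}                     # strip a trailing slash, if any
    if [ -d "$target" ]; then
        # Directory: tar it up, then compress -> target.tar.xz
        tar -cf - "$target" | xz -9 > "$target.tar.xz"
    elif [ -f "$target" ]; then
        # Regular file: compress with -k to keep the original -> target.xz
        xz -9 -k "$target"
    else
        echo "xz9: no such file or directory: $1" >&2
        return 1
    fi
}
```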
#linux #bash #FOSS #FLOSS #shell #terminal #furaUtils #xz #tar #archiving #archive #compression #productivity
#dar 3/ Dar's homepage is http://dar.linux.free.fr/. As a simple file archiver, dar is great. Random access is preserved, you can compress with #gzip #bzip2 #lzo #xz #zstd or #lz4. You can encrypt with #GnuPG or symmetric #AES. You can stream if you want. You can split ("slice") across multiple media, and dar will prompt you for the slice(s) you need and seek right to them.
That's cool, but we're just getting started.
#dar #gzip #bzip2 #lzo #xz #zstd #lz4 #GnuPG #aes
Today I learned: Linux has multiple cats. 😺
For every single file compressor: #gzip, #bzip2, #lzma, #xz, and #zstd, there is a matching cat command: zcat, bzcat, lzcat, xzcat, and zstdcat. #TIL #TodayILearned #Cats #Cat
#gzip #bzip2 #lzma #xz #zstd #til #todayilearned #cats #cat
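For example, with gzip and xz (the other cats follow the same pattern):

```shell
# Each *cat command prints the decompressed stream to stdout,
# leaving the compressed file intact.
cd "$(mktemp -d)"
echo "meow" > note.txt
gzip -k note.txt        # -> note.txt.gz
xz   -k note.txt        # -> note.txt.xz
zcat  note.txt.gz       # prints: meow
xzcat note.txt.xz       # prints: meow
```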
🔊 #NowPlaying on KEXP's #PacificNotions
Xz:
🎵 Infinite
#nowplaying #PacificNotions #xz
Why do it in 2 steps? ;)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
$ curl https://download.freebsd.org/releases/amd64/amd64/ISO-IMAGES/13.1/FreeBSD-13.1-RELEASE-amd64-dvd1.iso.xz | tee FreeBSD-13.1-RELEASE-amd64-dvd1.iso.xz | unxz | doas dd bs=1M of=/dev/sd2c
  % Total    % Received % Xferd  Average Speed   Time     Time     Time   Current
                                 Dload  Upload   Total    Spent    Left   Speed
 21 3255M   21  714M    0     0   386k      0  2:23:47  0:31:34  1:52:13   428k
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
#bsd #curl #tee #xz #dd #commandlinekungfu
I've tested it: both WinRAR and 7-Zip can extract xz-compressed files. In other words, when releasing files in the future, the xz format is worth considering, without worrying about which operating system the recipient uses.
GNU/Linux: compressing and decompressing files in the xz format
#gnu #linux #tar #xz #compression
My conclusion after all this is that I'll probably use #zstd level 1 (not the default level 3!) for #compression of my #CSV measurement data from now on:
- ultra fast compression and decompression, on par with #lz4
- nearly as good a compression ratio as #gzip level 9
- negligible RAM usage
When I need ultra small files though, e.g. for transfer over a slow connection, I'll keep using #xz level 9.
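In CLI terms (a sketch assuming the zstd and xz command-line tools are installed; measurements.csv is a stand-in name for the real data):

```shell
cd "$(mktemp -d)"
printf 'time,value\n1,2.5\n' > measurements.csv   # stand-in for real CSV data

# Fast everyday setting: zstd level 1 (not the default 3); -k keeps the input
zstd -q -1 -k measurements.csv     # -> measurements.csv.zst

# Smallest file for a slow link: xz level 9
xz -9 -k measurements.csv          # -> measurements.csv.xz
```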
#zstd #compression #csv #lz4 #gzip #xz
Repeated the #compression #benchmark with the same file on a beefier machine (AMD Ryzen 9 5950X); the results are nearly identical, just faster overall.
This plot is also interesting:
- #gzip and #lz4 have fixed (!) and very low RAM usage across levels and compression/decompression
- #xz RAM usage scales with the level, from a couple of MB (level 0) to nearly a GB (level 9)
- #zstd RAM usage scales weirdly with level but not as extreme as #xz
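One way to reproduce such a RAM measurement (this assumes the standalone GNU time binary at /usr/bin/time, whose %M format reports peak resident set size in KB; the sketch skips gracefully if it's missing):

```shell
cd "$(mktemp -d)"
head -c 1000000 /dev/urandom > sample.bin   # stand-in input file

# Peak RSS of xz at the lowest and highest preset, if GNU time is available
if [ -x /usr/bin/time ]; then
    /usr/bin/time -f 'xz -0: %M KB max RSS' xz -0 -c sample.bin > /dev/null
    /usr/bin/time -f 'xz -9: %M KB max RSS' xz -9 -c sample.bin > /dev/null
fi
```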
#compression #benchmark #gzip #lz4 #xz #zstd #python #matplotlib
First let's look only at the non-fancy options (no --fast or multithreading) and make log-log plots to better see what's happening in the 'clumps' of points. Points of interest for me:
- #gzip has a *really* low memory footprint across all compression levels
- #zstd clearly wins the decompression speed/compression ratio compromise!
- #xz at higher levels is unrivalled in compression ratio
- #lz4 higher levels aren't worth it. #lz4 is also just fast.
⏱️ Compression vs decompression speed. Interesting that #zstd has some settings that actually decompress slower than they compress. Might just be my machine, though... #xz is still a turtle 🐢, #gzip is not much better, #lz4 decompresses much faster than it compresses, and #zstd again has something for everybody.
First results of my #compression algorithm benchmark run on a 72MB CSV file. It seems #zstd really has something for everybody, though it can't reach #xz's insane (but slow) compression ratios at maximum settings.
This chart includes multithreaded runs for #zstd.
Very interesting! 🧐
https://gitlab.com/nobodyinperson/compression-algorithm-benchmark
#compression #zstd #xz #python #matplotlib #jupyter #jupyterlab
I need to #compress a lot of #logfiles, so, once again, I tested #pbzip2 #lbzip2 #lzip #xz #pixz #pigz and #zstd.
The current winner of the time/size race was "lzip -7" (i.e. plzip -7, which uses all available threads). Its #Compression ratio is slightly below pixz's, but it's hugely faster (and has a better file format 😉).
zstd might have won, if only it were simple to figure out the right combination of its 22 levels and options.
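A minimal way to rerun the size side of such a race (the tool names are the CLIs from the post; any that aren't installed are simply skipped, and sample.log is a stand-in for a real log file):

```shell
cd "$(mktemp -d)"
# Stand-in for a real log file: repetitive text compresses realistically
seq 1 5000 | awk '{print "GET /index.html 200", $1}' > sample.log

for cmd in "gzip -9" "xz -9" "zstd -19" "plzip -7"; do
    tool=${cmd%% *}
    command -v "$tool" >/dev/null 2>&1 || continue   # skip missing tools
    $cmd -c sample.log > "sample.log.$tool"
    printf '%-6s %8s bytes\n' "$tool" "$(wc -c < "sample.log.$tool")"
done
```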
#compress #logfile #pbzip2 #lbzip2 #lzip #xz #pixz #pigz #zstd #Compression
Zstd: what is this compression format? (in French)
https://blog.seboss666.info/2020/03/zstd-cest-quoi-ce-format-de-compression/
#compression #zstd #xz #gzip #gnu #linux #logiciellibre