I wrote a simple C implementation of the LZW compression scheme.
https://github.com/Keivan-sf/lzw/
LZW is a fast compression scheme which is behind .Z files; for example: filename.tar.Z
These files were usually compressed and decompressed via the Linux compress program
#compression #lzw
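For reference, here is a minimal sketch of the LZW encoding idea in C. It's a toy of my own, not the code from the repo above: there's no bit packing and no clear/reset code, both of which real compress uses. The dictionary maps (prefix code, next byte) pairs to new codes, and the encoder always emits the longest match it already knows.

```c
/* Toy LZW encoder sketch (not the repo's code: no bit packing, no
 * clear/reset code). The dictionary maps (prefix code, next byte) -> new
 * code; codes 0-255 stand for the raw bytes themselves. */
#include <stdio.h>
#include <string.h>

#define MAX_CODES 4096                 /* 12-bit codes, like classic compress */

static int dict_prefix[MAX_CODES];
static unsigned char dict_byte[MAX_CODES];
static int dict_size;

/* Linear-search lookup: return the code for (prefix, byte), or -1. */
static int dict_find(int prefix, unsigned char byte) {
    for (int i = 256; i < dict_size; i++)
        if (dict_prefix[i] == prefix && dict_byte[i] == byte)
            return i;
    return -1;
}

/* Encode len bytes of `in`, writing one code per emitted symbol into `out`.
 * Returns the number of codes written. */
static int lzw_encode(const unsigned char *in, int len, int *out) {
    int n = 0;
    dict_size = 256;                   /* start with only the single-byte codes */
    int prefix = in[0];
    for (int i = 1; i < len; i++) {
        int code = dict_find(prefix, in[i]);
        if (code >= 0) {
            prefix = code;             /* extend the current match */
        } else {
            out[n++] = prefix;         /* emit the longest known match */
            if (dict_size < MAX_CODES) {
                dict_prefix[dict_size] = prefix;
                dict_byte[dict_size] = in[i];
                dict_size++;           /* learn: match + one more byte */
            }
            prefix = in[i];
        }
    }
    out[n++] = prefix;
    return n;
}

int main(void) {
    const char *msg = "TOBEORNOTTOBEORTOBEORNOT";
    int codes[64];
    int n = lzw_encode((const unsigned char *)msg, (int)strlen(msg), codes);
    for (int i = 0; i < n; i++)
        printf("%d ", codes[i]);
    printf("\n");
    return 0;
}
```

Running it on "TOBEORNOTTOBEORTOBEORNOT" prints the nine literal byte codes followed by dictionary codes like 256 (TO) and 258 (BE); those multi-byte codes are where the compression comes from.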
Purplesyringa (GitHub) decided to compress her blog posts with WebP and decode them in the browser using native APIs, and it actually works!
https://purplesyringa.moe/blog/webp-the-webpage-compression-format/
This is very interesting since the image decompression utilities are already present in the browser, so there is no overhead or extra code needed for decoding the compressed HTML page.
#compression
The sudo program is still receiving updates; the latest version just dropped last week.
https://github.com/sudo-project/sudo
Todd Miller has been maintaining sudo for the past 30+ years.
SQLite is not a toy database
https://antonz.org/sqlite-is-not-a-toy-database
Personally, I find SQLite very convenient. Being simple and serverless, supported by your favorite ORM, and easy to set up only adds to that.
#sql #sqlite
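As a tiny illustration of the "serverless and easy to set up" point (my own sketch, unrelated to the linked article): from C, a complete working database is just a header, a -lsqlite3 link flag, and a file name.

```c
/* Minimal SQLite-from-C sketch: open a file, run SQL, print rows.
 * Build with: cc demo.c -lsqlite3 */
#include <stdio.h>
#include <sqlite3.h>

/* Callback invoked once per result row by sqlite3_exec(). */
static int print_row(void *unused, int ncols, char **vals, char **names) {
    (void)unused;
    for (int i = 0; i < ncols; i++)
        printf("%s=%s%s", names[i], vals[i] ? vals[i] : "NULL",
               i + 1 < ncols ? ", " : "\n");
    return 0;                          /* 0 = keep iterating */
}

int main(void) {
    sqlite3 *db;
    char *err = NULL;
    if (sqlite3_open("demo.db", &db) != SQLITE_OK) {   /* creates the file */
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    const char *sql =
        "CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT);"
        "INSERT INTO notes(body) VALUES ('hello sqlite');"
        "SELECT * FROM notes;";
    if (sqlite3_exec(db, sql, print_row, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}
```

No server process, no configuration: the whole database lives in demo.db.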
I've just implemented decompression for this program as well. It is still compatible with the Linux compress program, except for reset characters, which are to be implemented later on.
Repo:
https://github.com/Keivan-sf/lzw/
#compression #lzw
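For completeness, here is a matching toy decoder sketch (again my own simplification, not the repo's code, and still without the clear/reset code handling mentioned above). Dictionary entries are (prefix code, final byte), a code's string is recovered by walking the prefix chain backwards, and the only tricky part is the code that isn't in the dictionary yet.

```c
/* Toy LZW decoder sketch that pairs with the encoder sketch above
 * (simplified; no clear/reset code). */
#include <stdio.h>

#define MAX_CODES 4096

static int dict_prefix[MAX_CODES];
static unsigned char dict_byte[MAX_CODES];
static int dict_size;

/* Expand `code` into buf (first byte first), return its length. */
static int expand(int code, unsigned char *buf) {
    int len = 0;
    while (code >= 256) {              /* walk back through the prefixes */
        buf[len++] = dict_byte[code];
        code = dict_prefix[code];
    }
    buf[len++] = (unsigned char)code;
    for (int i = 0, j = len - 1; i < j; i++, j--) {   /* reverse in place */
        unsigned char t = buf[i]; buf[i] = buf[j]; buf[j] = t;
    }
    return len;
}

static void lzw_decode(const int *codes, int ncodes) {
    unsigned char buf[MAX_CODES];
    dict_size = 256;
    int prev = codes[0];
    int len = expand(prev, buf);
    fwrite(buf, 1, len, stdout);
    for (int i = 1; i < ncodes; i++) {
        int code = codes[i];
        if (code < dict_size) {
            len = expand(code, buf);
        } else {                       /* code not defined yet: prev + its own first byte */
            len = expand(prev, buf);
            buf[len++] = buf[0];
        }
        unsigned char first = buf[0];
        if (dict_size < MAX_CODES) {   /* learn: previous string + first byte */
            dict_prefix[dict_size] = prev;
            dict_byte[dict_size] = first;
            dict_size++;
        }
        fwrite(buf, 1, len, stdout);
        prev = code;
    }
}

int main(void) {
    /* Codes produced by running the encoder sketch above on
     * "TOBEORNOTTOBEORTOBEORNOT". */
    int codes[] = {84, 79, 66, 69, 79, 82, 78, 79, 84,
                   256, 258, 260, 265, 259, 261, 263};
    lzw_decode(codes, (int)(sizeof codes / sizeof codes[0]));
    putchar('\n');
    return 0;
}
```

Feeding it the codes from the encoder sketch reconstructs the original string exactly.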
I have been trying to understand signal compression for the past few weeks. It's amazing how simple image/audio compression is at its core. Both images and audio can be treated as signals. Take audio, for example: an analog audio signal can be represented as below
Notice that at any given point in time we have a certain amplitude, and there are infinitely many points in time. This signal can be captured via a microphone and stored in something like a gramophone record. But we want to store it digitally, right? We no longer have gramophones. In other words, we need this signal to be discrete rather than continuous in order to store it in a computer, so we have to sample it at some frequency. This raises a few questions:
- At what frequency should we sample our continuous signal? (sample rate)
- How many bits should we use to store each sample, or rather, what amplitude range are we willing to cover? (bit depth)
P.S.: Bit rate is basically sample rate multiplied by bit depth
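To make those two knobs concrete, here is a toy sketch of my own (the numbers are arbitrary, not from the post): it samples a 440 Hz sine at an 8 kHz sample rate and quantizes each sample to an 8-bit depth.

```c
/* Toy sampling + quantization sketch: 440 Hz sine, 8 kHz sample rate,
 * 8-bit depth. Build with -lm. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    const double freq = 440.0;          /* signal frequency in Hz */
    const double sample_rate = 8000.0;  /* samples per second */
    const int bit_depth = 8;            /* bits stored per sample */
    const int levels = 1 << bit_depth;  /* 256 quantization levels */

    /* Print the first millisecond of samples. */
    for (int n = 0; n < (int)(sample_rate / 1000.0); n++) {
        double t = n / sample_rate;                /* time of this sample */
        double x = sin(2.0 * M_PI * freq * t);     /* amplitude in [-1, 1] */
        /* Map [-1, 1] onto the available integer levels (bit depth). */
        int q = (int)lround((x + 1.0) / 2.0 * (levels - 1));
        printf("n=%2d  t=%.6f s  amplitude=%+.4f  quantized=%3d\n", n, t, x, q);
    }
    return 0;
}
```

A higher sample rate means more samples per second, a higher bit depth means finer amplitude steps, and the bit rate grows with both.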
For the first question, the answer lies in the Nyquist–Shannon sampling theorem, which states that to reconstruct a signal from its samples, we have to sample at at least twice the highest frequency present in the original signal. This makes a lot of sense if we just look at a visualization:
You can see that if we sample at double the frequency of our audio signal, we have a chance of a good reconstruction later on. Note that this is the most optimistic view. For example, in the pictures above we started sampling with a little bit of delay in order to capture the peaks of our signal. In reality this is not feasible. If we had started sampling at 0 s, we would have ended up with a straight line, even though we sampled at double the frequency.
So it's fair to say that most signals will require even more than double the rate for a good reconstruction.
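A quick numeric check of that point (my own sketch, not from the post): sampling a 1 kHz sine at exactly 2 kHz starting at t = 0 lands on every zero crossing, so every sample comes out as (numerically) zero and the "reconstruction" is a flat line.

```c
/* Sampling a 1 kHz sine at exactly 2x its frequency, starting at t = 0:
 * every sample hits a zero crossing. Build with -lm. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    const double f = 1000.0;       /* signal frequency (Hz) */
    const double fs = 2.0 * f;     /* sample rate of exactly 2x */
    for (int n = 0; n < 8; n++) {
        double t = n / fs;
        /* sin(2*pi*f * n/(2f)) = sin(pi*n) = 0 for every integer n */
        printf("t=%.4f s  sample=%+.1e\n", t, sin(2.0 * M_PI * f * t));
    }
    return 0;
}
```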
I will continue writing on signals in the future, mostly on signal compression.
Continuing the signal processing thread, we can look at Fourier transforms. In signal compression we like to transform the signal from the time domain to the frequency domain. For example, using the DCT (a Fourier-related transform) we can express a block of the signal as a sum of cosine functions, thereby representing it in the frequency domain. In other words, the DCT simply yields n cosine coefficients for n signal samples. This usually results in a sparse representation of the signal. For example:
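Here is a deliberately naive O(n²) sketch of the DCT-II in C (my own toy, not any particular library's): 32 samples of a signal built from two cosines go in, 32 coefficients come out, and only the two bins matching those cosines carry significant energy.

```c
/* Naive DCT-II: N time-domain samples in, N cosine coefficients out.
 * Build with -lm. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 32

static void dct(const double *x, double *X) {
    for (int k = 0; k < N; k++) {
        double s = 0.0;
        for (int n = 0; n < N; n++)
            s += x[n] * cos(M_PI * k * (2 * n + 1) / (2.0 * N));
        X[k] = s;
    }
}

int main(void) {
    double x[N], X[N];
    /* A test signal made of two cosines, chosen to line up with DCT
     * bins 3 and 7. */
    for (int n = 0; n < N; n++)
        x[n] = cos(M_PI * 3 * (2 * n + 1) / (2.0 * N))
             + 0.5 * cos(M_PI * 7 * (2 * n + 1) / (2.0 * N));
    dct(x, X);
    for (int k = 0; k < N; k++)
        printf("coefficient %2d: %8.4f\n", k, X[k]);  /* only k=3 and k=7 are large */
    return 0;
}
```

Every other coefficient comes out as (numerically) zero, which is exactly the sparsity discussed next.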
Notice that there aren't many points in the frequency domain with a notable value; many of them are near 0. We could simply treat them as 0s and get a sparse representation of the original signal. Knowing the data is sparse, we can compress it using techniques like run-length encoding, or, in the case of the DCT, we could simply truncate the data, since the 0s are usually towards the end (e.g., throwing away the last 45% of the coefficients).
Then, while decompressing, we simply pass the sparse data to the inverse of the transform we used, in this case the inverse DCT. It's worth highlighting that this is lossy compression, since we turned near-0 coefficients into exact 0s.
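Here is a sketch of that whole round trip (the same naive DCT as above plus its inverse; a toy illustration of mine, not how a production codec does it): transform, zero the near-0 coefficients, inverse-transform, and measure the error.

```c
/* Toy lossy round trip: DCT-II -> zero small coefficients -> inverse DCT.
 * Build with -lm. */
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 32

static void dct(const double *x, double *X) {            /* forward DCT-II */
    for (int k = 0; k < N; k++) {
        double s = 0.0;
        for (int n = 0; n < N; n++)
            s += x[n] * cos(M_PI * k * (2 * n + 1) / (2.0 * N));
        X[k] = s;
    }
}

static void idct(const double *X, double *x) {           /* inverse (DCT-III) */
    for (int n = 0; n < N; n++) {
        double s = X[0] / 2.0;
        for (int k = 1; k < N; k++)
            s += X[k] * cos(M_PI * k * (2 * n + 1) / (2.0 * N));
        x[n] = s * 2.0 / N;
    }
}

int main(void) {
    double x[N], X[N], y[N];
    /* A signal that is sparse in the DCT domain: two strong cosine
     * components plus one weak, high-frequency one. */
    for (int n = 0; n < N; n++)
        x[n] = cos(M_PI * 2 * (2 * n + 1) / (2.0 * N))
             + 0.5  * cos(M_PI * 5  * (2 * n + 1) / (2.0 * N))
             + 0.05 * cos(M_PI * 20 * (2 * n + 1) / (2.0 * N));

    dct(x, X);
    int kept = 0;
    for (int k = 0; k < N; k++) {                         /* the lossy step: */
        if (fabs(X[k]) < 1.0) X[k] = 0.0;                 /* near-0 -> exactly 0 */
        else kept++;
    }
    idct(X, y);

    double max_err = 0.0;
    for (int n = 0; n < N; n++) {
        double e = fabs(x[n] - y[n]);
        if (e > max_err) max_err = e;
    }
    printf("kept %d of %d coefficients, max reconstruction error %.4f\n",
           kept, N, max_err);
    return 0;
}
```

The reconstruction error is bounded by the coefficients we zeroed out, which is why dropping only the near-0 ones keeps the loss small.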