Accio
Resources I find useful
github.com/Keivan-sf

Share your interesting stuff with me:
@ke1vans
I wrote a simple C implementation of the LZW compression scheme.
https://github.com/Keivan-sf/lzw/

LZW is a fast compression scheme; it's the one behind .Z files, for example:
filename.tar.Z

These files were usually compressed and decompressed with the Linux compress program.
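
Just for intuition, here is a toy LZW encoder sketched in Python (the repo itself is in C, and this is not its API; bit-packing the codes and handling a full dictionary are left out):

```python
# Toy LZW encoder: emit a code for the longest already-seen sequence,
# then add that sequence plus one more byte to the dictionary.
def lzw_encode(data: bytes) -> list[int]:
    table = {bytes([i]): i for i in range(256)}  # pre-seed all single bytes
    current = b""
    codes = []
    for b in data:
        candidate = current + bytes([b])
        if candidate in table:
            current = candidate            # keep extending the match
        else:
            codes.append(table[current])   # emit longest known match
            table[candidate] = len(table)  # learn the new sequence
            current = bytes([b])
    if current:
        codes.append(table[current])
    return codes

print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))
```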

#compression #lzw
purplesyringa (on GitHub) decided to compress her blog posts as WebP and decode them in the browser using native APIs, and it actually works!

https://purplesyringa.moe/blog/webp-the-webpage-compression-format/

This is very interesting: the image decompression routines are already present in the browser, so there is no overhead or extra code needed to decode the compressed HTML page.
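
As a rough sketch of the general idea (not her actual pipeline; assumes Pillow built with WebP support): pack the page's bytes into pixels and let the lossless WebP codec do the compressing.

```python
from PIL import Image  # pip install Pillow
import math

html = ("<html><body>" + "hello compression " * 500 + "</body></html>").encode()

# Lay the bytes out as a square grayscale image, one byte per pixel.
side = math.ceil(math.sqrt(len(html)))
padded = html + b"\x00" * (side * side - len(html))
Image.frombytes("L", (side, side), padded).save("page.webp", lossless=True)

# "Decoding" is just reading the pixels back out.
restored = Image.open("page.webp").tobytes()[:len(html)]
assert restored == html
```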

#compression
My average YouTube feed
SQLite is not a toy database

https://antonz.org/sqlite-is-not-a-toy-database

Personally, I find SQLite very convenient. It's simple and serverless, supported by your favorite ORM, and easy to set up.
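
The serverless part in one minimal example: Python's standard library ships a driver, and the whole database is just a file (or a block of memory), no daemon to run.

```python
import sqlite3

con = sqlite3.connect(":memory:")  # or a path like "app.db"
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO notes (body) VALUES (?)", ("sqlite is not a toy",))
for row in con.execute("SELECT id, body FROM notes"):
    print(row)  # (1, 'sqlite is not a toy')
con.close()
```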

#sql #sqlite
An in-terminal shop for ordering coffee!
ssh terminal.shop


The idea of serving a TUI over an SSH connection is just pretty cool.
I have been trying to understand signal compression for the past few weeks. It's amazing how simple image/audio compression turns out to be. Both images and audio can be treated as signals. Take audio, for example; an analog audio signal can be represented as below:
[image: a continuous analog audio waveform]
Notice that at any given point in time we have a certain amplitude, and there are infinitely many points in time. This signal can be captured with a microphone and stored in something like a gramophone record. But we want to store it digitally, right? We no longer have gramophones. In other words, we need the signal to be discrete rather than continuous in order to store it in a computer. So we have to sample the signal at some frequency. This raises a few questions:
- At what frequency should we sample our continuous signal? (sample rate)
- How many bits should we use to store each sample, or rather, what amplitude range are we willing to cover? (bit depth)
P.S.: Bit rate is simply sample rate multiplied by bit depth (per channel).
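
As a small sketch of both parameters (numpy assumed; the 440 Hz tone and the CD-style numbers are just examples):

```python
import numpy as np

def sample_and_quantize(freq_hz, sample_rate, bit_depth, duration_s=1.0):
    t = np.arange(0, duration_s, 1 / sample_rate)  # discrete points in time
    signal = np.sin(2 * np.pi * freq_hz * t)       # amplitude in [-1, 1]
    levels = 2 ** bit_depth                        # amplitude steps we can represent
    return np.round((signal + 1) / 2 * (levels - 1)).astype(np.int64)

samples = sample_and_quantize(440, 44100, 16)  # CD-style: 44100 Hz, 16-bit
print(len(samples))  # 44100 samples for one second of audio
print(44100 * 16)    # bit rate per channel: 705600 bits/s
```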

For the first question, the answer lies in the Nyquist–Shannon sampling theorem, which states that to reconstruct a signal from its samples, we have to sample at more than double the highest frequency in the original signal. This makes a lot of sense if we just look at a visualization:
[images: sampling the waveform at twice its frequency, with the samples landing on the peaks]
You can see that if we sample at double the frequency of our audio signal, we have a chance of a good reconstruction later on. Note that this is the most optimistic view: in the pictures above we started sampling with a slight delay so that the samples capture the peaks of the signal. In reality this is not feasible. If we had started sampling at 0 s, we would have ended up with a straight line, even though we sampled at double the frequency.
So it's fair to say that most signals will require even more than double the frequency for a good reconstruction.
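
The straight-line case is easy to reproduce (numpy assumed): sampling a sine at exactly twice its frequency, starting at 0 s, lands every sample on a zero crossing.

```python
import numpy as np

f = 100                              # signal frequency in Hz
t = np.arange(0, 0.05, 1 / (2 * f))  # sample at exactly 2f, starting at 0 s
samples = np.sin(2 * np.pi * f * t)  # sin(pi * k) == 0 for every integer k
print(np.allclose(samples, 0))       # True: the "reconstruction" is a flat line
```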
I will continue writing about signals in the future, focusing more on signal compression.
❀2πŸ‘1πŸ”₯1πŸ‘1
Continuing the signal processing thread, we can look at Fourier transforms. In signal compression we like to transform the signal from the time domain to the frequency domain. For example, using the DCT (a Fourier-related transform) we can turn a periodic signal into a sum of cosine functions, thereby representing it in the frequency domain. In other words, the DCT simply yields n cosine coefficients for n signal samples. This usually results in a sparse representation of the signal. For example:
[image: signal in time domain]
[image: same signal in frequency domain]
Notice that there aren't many points in the frequency domain with a notable value; most of them are near 0. We can simply treat those as 0s and get a sparse representation of the original signal. Knowing the data is sparse, we can compress it with techniques like run-length encoding, or, in the case of the DCT, simply slice the data, since the 0s usually cluster towards the end (e.g. throwing away the last 45% of the coefficients).
Then, while decompressing, we simply pass the sparse data to the inverse of the transform we used, in this case the inverse DCT. It's worth highlighting that this is lossy compression, since we treated near-0 coefficients as exact 0s.
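
A minimal lossy round trip along those lines (scipy and numpy assumed; the two-tone signal and the 45% cut are arbitrary choices):

```python
import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)

coeffs = dct(signal, norm="ortho")        # time domain -> frequency domain
coeffs[int(len(coeffs) * 0.55):] = 0      # drop the last 45% of coefficients

restored = idct(coeffs, norm="ortho")     # frequency domain -> time domain
print(np.max(np.abs(restored - signal)))  # small but nonzero: lossy
```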
❀2πŸ‘1
❀2