Accio
I wrote a simple C implementation of the LZW compression scheme: https://github.com/Keivan-sf/lzw/ LZW is a fast compression scheme behind .Z files (for example, filename.tar.Z). These files were usually compressed and decompressed with the Linux compress program.
I've just implemented decompression for this program as well. It is still compatible with the Linux compress program, except for reset codes, which are to be implemented later on.
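For anyone curious about the core idea, here is a minimal sketch of an LZW encoder in C. It is not the code from the repo: the dictionary is a naive linear-search table and the codes are just printed as integers, whereas real compress packs them into variable-width bit codes.

```c
/* Minimal LZW encoder sketch: emit the code of the longest dictionary
 * match, then add (match + next byte) as a new dictionary entry.
 * Dictionary lookup is a naive linear search, for clarity only. */
#include <stdio.h>
#include <string.h>

#define MAX_CODES 4096                   /* cap the table for this sketch */

static int prefix[MAX_CODES];            /* code of the prefix string      */
static unsigned char suffix[MAX_CODES];  /* byte appended to that prefix   */
static int next_code = 256;              /* 0..255 are single-byte entries */

/* return the code for (prefix_code, c), or -1 if it is not in the table */
static int find(int prefix_code, unsigned char c) {
    for (int i = 256; i < next_code; i++)
        if (prefix[i] == prefix_code && suffix[i] == c)
            return i;
    return -1;
}

static void lzw_encode(const unsigned char *in, size_t len) {
    if (len == 0) return;
    int cur = in[0];                     /* current match = first byte */
    for (size_t i = 1; i < len; i++) {
        int code = find(cur, in[i]);
        if (code != -1) {
            cur = code;                  /* extend the current match */
        } else {
            printf("%d ", cur);          /* emit the match so far */
            if (next_code < MAX_CODES) { /* learn match + new byte */
                prefix[next_code] = cur;
                suffix[next_code] = in[i];
                next_code++;
            }
            cur = in[i];                 /* restart matching here */
        }
    }
    printf("%d\n", cur);                 /* flush the final match */
}

int main(void) {
    const char *s = "TOBEORNOTTOBEORTOBEORNOT";
    lzw_encode((const unsigned char *)s, strlen(s));
    return 0;
}
```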
Repo:
https://github.com/Keivan-sf/lzw/
#compression #lzw
ncompress: a fast, simple LZW file compressor
I have been trying to understand signal compression for the past few weeks. It's amazing how simple image/audio compression is at its core. Both images and audio can be treated as signals. Take audio, for example: an analog audio signal can be represented as below
Notice that at any given point in time we have a certain amplitude, and there are infinitely many points in time. This signal can be captured with a microphone and stored in something like a gramophone record. But we want to store it digitally, right? We no longer have gramophones. In other words, we need this signal to be discrete rather than continuous in order to store it in a computer. So we have to sample the signal at some frequency. This raises a few questions:
- At what frequency should we sample our continuous signal? (sample rate)
- How many bits should we use to store each sample, or rather, what amplitude range are we willing to cover? (bit depth)
P.S.: Bit rate is basically sample rate multiplied by bit depth (times the number of channels).
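For instance, CD-quality audio is sampled at 44.1 kHz with a 16-bit depth, so a single channel comes out to 44,100 × 16 = 705,600 bits per second, and a stereo stream to roughly 1.4 Mbit/s.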
For the first question, the answer lies in the Nyquist–Shannon sampling theorem, which states that to reconstruct a signal from its samples, we have to sample at more than twice the highest frequency present in the original signal. This makes a lot of sense if we just look at a visualization:
You can see that if we sample at double the frequency of our audio signal, we have a chance of a good reconstruction later on. Note that this is the most optimistic view: in the pictures above we started sampling with a little bit of delay so that we capture the peaks of the signal. In reality this is not feasible. If we had started sampling at 0s, we would have ended up with a straight line, even though we sampled at double the frequency.
So it's fair to say that most signals will require sampling at even more than double the frequency for a good reconstruction.
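You can also see this numerically. In the toy C sketch below (just an illustration, unrelated to the repo above), a 7 Hz cosine sampled at 10 Hz produces exactly the same samples as a 3 Hz cosine: once a component is above half the sample rate, it becomes indistinguishable from a lower-frequency alias.

```c
/* Aliasing demo: at fs = 10 Hz, a 7 Hz cosine and a 3 Hz cosine yield
 * identical samples, so they cannot be told apart after sampling.
 * Build with: cc alias.c -lm */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void) {
    const double fs = 10.0;                       /* sample rate in Hz */
    for (int n = 0; n < 10; n++) {
        double t = n / fs;                        /* sampling instant  */
        double high = cos(2.0 * M_PI * 7.0 * t);  /* above fs/2        */
        double low  = cos(2.0 * M_PI * 3.0 * t);  /* its alias         */
        printf("n=%d  7 Hz: %+.6f  3 Hz: %+.6f\n", n, high, low);
    }
    return 0;
}
```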
I will continue writing on signals in the future, especially on signal compression.
Continuing the signal processing thread, we can look at Fourier transforms. In signal compression we like to transform the signal from the time domain to the frequency domain. For example, using the DCT (a Fourier-related transform) we can transform a periodic signal into a sum of different cosine functions, thereby representing it in the frequency domain. In other words, the DCT simply yields n cosine coefficients for n signal samples. This usually results in a sparse representation of the signal. For example:
Notice that there aren't many points in the frequency domain with a notable value; many of them are near 0. We could simply treat them as 0s and get a sparse representation of the original signal. Knowing the data is sparse, we can compress it with techniques like run-length encoding, or, in the case of the DCT, we could simply slice the data, since the 0s are usually towards the end (e.g. throwing away the last 45% of the coefficients).
Then, while decompressing, we simply pass the sparse data to the inverse of the transform we used, in this case the inverse DCT. It's worth highlighting that this is lossy compression, since we treated near-0 coefficients as absolute 0s.
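As a rough illustration of that pipeline, here is a naive C sketch (O(n²) transforms and an arbitrary 0.5 threshold, both just picked for the example): take the DCT-II of a small signal, zero out the small coefficients, reconstruct with the scaled DCT-III (its inverse), and compare against the original.

```c
/* Toy DCT compression sketch: forward DCT-II, drop small coefficients,
 * inverse transform, then print original vs. reconstructed samples.
 * Build with: cc dct.c -lm */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 16

/* forward DCT-II: X[k] = sum_n x[n] * cos(pi/N * (n + 0.5) * k) */
static void dct(const double *x, double *X) {
    for (int k = 0; k < N; k++) {
        X[k] = 0.0;
        for (int n = 0; n < N; n++)
            X[k] += x[n] * cos(M_PI / N * (n + 0.5) * k);
    }
}

/* inverse (scaled DCT-III): x[n] = X[0]/N + (2/N) * sum_{k>=1} X[k] * cos(...) */
static void idct(const double *X, double *x) {
    for (int n = 0; n < N; n++) {
        x[n] = X[0] / N;
        for (int k = 1; k < N; k++)
            x[n] += 2.0 / N * X[k] * cos(M_PI / N * (n + 0.5) * k);
    }
}

int main(void) {
    double x[N], X[N], y[N];

    /* a smooth test signal: one slow cosine plus a small offset */
    for (int n = 0; n < N; n++)
        x[n] = 1.0 + cos(2.0 * M_PI * n / N);

    dct(x, X);

    /* "compress": zero out coefficients with small magnitude */
    for (int k = 0; k < N; k++)
        if (fabs(X[k]) < 0.5)
            X[k] = 0.0;

    idct(X, y);

    for (int n = 0; n < N; n++)
        printf("x[%2d] = %+.4f   reconstructed = %+.4f\n", n, x[n], y[n]);
    return 0;
}
```

This is, very roughly, the skeleton that image and audio codecs build on, with better transforms, quantization, and entropy coding layered on top.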
Moodist | Compose your own ambient sounds to focus
You can share your presets with others or even self-host the project since it's open-source.
https://moodist.mvze.net
GitHub repository
https://posting.sh/ | In-terminal API client
It's keyboard-driven and supports Vim keys as well. Not the smoothest experience, but still better than constantly switching between coding and fiddling with Postman.
"A little copying is better than a little dependency."
- Rob Pike
This is such a nice take. We tend to be very strict about the DRY principle, but duplication is not always evil, nor is it always the root of evil. In fact, a little redundancy in exchange for avoiding a dependency is totally worth it.
Let's say you have multiple components that have their own build processes and operate separately, with loose coupling between them. Now you can't stand that you have implemented the same logic in both component A and component B, so you decide that there should be a dependency between these components.
You are slowly moving towards tight coupling, and while there are ways to avoid it (introducing a third component, ...), they tend to consume more of your time and effort, so you'd probably just let them be.
This will not only spread the effects of every change throughout your entire system, but will also impact the build process and runtime of your components.
#DRY #components