Compression strategy

For some reason, every existing compression utility compresses in a way that wastes disk space. Instead of keeping one global table that all of the data can reference, they compress locally and throw away any redundancy that falls outside the scan window or word size.
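As a minimal sketch of that complaint (not the behaviour of any particular archiver), the snippet below builds ten hypothetical "files" that share one large block of identical data, then compares compressing them one at a time against compressing them as a single solid stream whose dictionary is large enough to see the repeats. The file contents, sizes, and batch size are invented for illustration.

```python
import lzma
import random

# ~1 MB of incompressible data shared by every file (e.g. the same embedded font).
random.seed(0)
shared = random.randbytes(1 << 20)

# Ten files: the shared block plus a small unique payload each.
files = [shared + f"unique payload {i}".encode() * 64 for i in range(10)]

# Compress each file on its own: the compressor only ever sees one file,
# so the shared block is re-encoded in every archive.
separate = sum(len(lzma.compress(f)) for f in files)

# Compress the batch as one stream: the shared block is encoded once and
# later occurrences become cheap back-references within the dictionary.
solid = len(lzma.compress(b"".join(files)))

print(f"separate archives:    {separate / 1e6:.1f} MB")
print(f"single solid archive: {solid / 1e6:.1f} MB")
```

With these numbers the separate archives total roughly ten times the size of the solid one, because each archive has to carry its own copy of the shared megabyte.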

What’s needed is something like ReFS, although ReFS is obsolete and doesn’t actually work on modern systems; it only pretends to compress. ZFS is the accepted solution, but it is even more dated, it only runs on Linux, and if the Linux project dies then nobody will be able to read the files.

The problems are obvious. If you compress the wrong way, you waste most of the space. Archive a batch of PDFs, for example, and the bulk of the data is the embedded typefaces, which are not unique to any one file. An alphabet of 100 characters is only about 1 KB of actual glyph identities, but once every size and style variant is stored, the embedded fonts balloon to roughly 1 MB per file, while the text of the PDF itself is maybe 10,000 characters, about 10 KB. That means roughly 99% of the space goes to storing the fonts rather than the actual document, and a compressor with a local window re-stores that same font data in every single file.
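To make the arithmetic concrete, here is a back-of-the-envelope calculation using the figures above; the batch size of 100 files is an assumption added purely for illustration.

```python
# Hypothetical figures from the argument above.
FONT_BYTES = 1_000_000   # assumed size of the embedded typefaces per PDF
TEXT_BYTES = 10_000      # assumed size of the actual text per PDF
N_FILES = 100            # assumed number of PDFs in the batch

naive = N_FILES * (FONT_BYTES + TEXT_BYTES)       # every file re-stores the fonts
dedup = FONT_BYTES + N_FILES * TEXT_BYTES         # fonts stored once, text per file

print(f"naive archive:     {naive / 1e6:.1f} MB")   # ~101.0 MB
print(f"fonts stored once: {dedup / 1e6:.1f} MB")   # ~2.0 MB
print(f"wasted fraction:   {1 - dedup / naive:.1%}")  # ~98%
```

Under these assumptions, storing the fonts once instead of per file shrinks the archive by roughly fifty to one.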

The best archival compression tool exists in principle but has yet to be developed. Why has nobody written compression software that works in a sensible way?