"About reaching a maximum compression ratio using LZW-algorythm"

I have just tried to compress the output stream of the LZW algorithm with arithmetic coding. Unfortunately, my attempts were not very successful, but along the way I found a new type of redundancy in the LZW output stream.

Suppose we write the node "tha" to the output stream, and the tree (known to both encoder and decoder) contains the nodes "that", "than", and "thay". Since "tha" was the longest match, the next input character cannot be 't', 'n', or 'y'; therefore the next node in our stream cannot belong to the subtrees beginning with 't', 'n', or 'y'.

So we can exclude all nodes of those subtrees from the next-node probability estimation.
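Here is a minimal sketch of the exclusion rule in Python (the note does not give code; the dictionary contents, function name, and toy data below are my own illustration, not the original implementation). After the longest match "tha" is emitted, every character that would have extended the match is ruled out as the first character of the next phrase, so every dictionary entry starting with such a character can be dropped from the model:

    # Toy LZW dictionary (a real coder would use a trie of codes).
    lzw_dict = {"t", "th", "tha", "that", "than", "thay",
                "n", "no", "y", "ye", "a", "e"}

    def excluded_entries(emitted, dictionary):
        """Entries that cannot be the next code, given that
        `emitted` was the longest match."""
        # Characters that would have produced a longer match:
        extending = {e[len(emitted)] for e in dictionary
                     if e.startswith(emitted) and len(e) > len(emitted)}
        # Any entry beginning with one of those characters is impossible.
        return {e for e in dictionary if e and e[0] in extending}

    print(sorted(excluded_entries("tha", lzw_dict)))
    # -> ['n', 'no', 't', 'th', 'tha', 'than', 'that', 'thay', 'y', 'ye']

In an arithmetic-coded LZW stream, the probability mass of these impossible codes can be redistributed over the remaining dictionary entries; that redistribution is the source of the gain reported below.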

This improvement increases the total compression ratio by about 1%. That is much less than I expected, so the idea is of interest only to pure science, not for practical purposes.

Full text in Russian
