I use a custom one. I essentially built a library of common English letter combinations and mapped them to the bottom 31 character codes (1-31). Characters 32-127 are stored as normal, and 128-255 are used to denote a leading space ahead of the corresponding lower character. This gets me around 18-20% compression.
I tested some LZW-based tools, but they all get way too complex too fast, attempting to encode the ENTIRE text block and using a full word (16 to 32 bits) per code. Plus almost all of them strip out punctuation and are case-insensitive, and I NEED both preserved.
My only thought is that if I could use the 128-255 range for more keyword replacement, I might get slightly better results, but I'd need a way to analyze a large text block and cleanly identify the most valuable matches. For now I've decided to forge ahead with my current system since it's better than nothing; I can always investigate better compression later.
Here's an assembly snippet of my encoding/decoding algorithm: