
About laoo

  • Rank
    Chopper Commander
  • Birthday 01/10/1980

Profile Information

  • Location
    Wrocław, Poland

  1. Yeah, that's the main reason for multipass. The fundamental theorem of software engineering (FTSE) states that any problem can be solved by introducing an extra level of indirection.
  2. @42bs My idea can be implemented in one pass, because it just checks by brute force every possible encoding in a moving window of 16 pixels; it was simply easier to implement in two passes. But I already know it's not optimal, as SuperSkweek has places that can't be produced by this approach. So I'm hoping for some groundbreaking results.
  3. @Nop90 I'm stuck on communicating with the filesystem. Could you please share the simplest possible program that can read from and/or write to a file, together with the assembly output of CC65, so I can analyze what I'm doing wrong? It just does not work for me and I have no clue why.
  4. @karri @42bs Any progress in the competition?
  5. I was working on pixels instead of run lengths and spotted that some possible optimizations are not taken into account at 16 pixels (I've found one such spot in SuperSkweek), but when I widened the work window (to 20 or 32 pixels) the algorithm had a tendency to prefer literal chunks where using RLE gave a better result. This is indeed crazy, and I'm not sure why it happens or how to overcome it.
  6. @karri Is it this simulated annealing stuff? How much time does it take to compress the Super Skweek image, and what is the actual score?
  7. @karri Actually my idea is trivial, and it's a reification of your idea; only the implementation is cumbersome. Phase 1: I generate a sequence of blocks of unlimited length that are either literal or RLE. A run of repeated (at least two) bytes is an RLE block; a run of bytes where no consecutive bytes repeat is a literal block. Phase 2: repeat until the source sequence is empty: take a number of consecutive (left-to-right) blocks that span at least 16 pixels. If there are n RLE blocks among them, try the 2^n combinations of treating each such block as RLE or literal (in 16 pixels there are at most 8 such blocks, which gives 256 combinations). This can merge a few consecutive literal blocks (created by turning a neighboring RLE block literal in a given combination) into one longer literal block. Compute the encoded size of each combination, take the leftmost block of the best encoding, and append it to the result sequence. I tried different window sizes, but for Super Skweek the best result was at 16 pixels. It's a greedy algorithm, so it can't produce an encoding where a locally worse pick leads to better options later on; the places where the Super Skweek image was compressed better were exactly such scenarios: a worse pick on the left led to better ones further to the right. I thought that increasing the scan size (to 32 pixels, for example) could overcome this, but then the scoring preferred different encodings that were even worse. Maybe running the algorithm forward and backward and then stitching the results could be fruitful? I don't know.
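The two-phase scheme described above could be sketched roughly like this. This is a minimal, hypothetical Python sketch, not the author's actual tool; it assumes a commonly described Lynx packet layout (a 5-bit header per packet: 1 type bit plus a 4-bit "count minus one" field, at most 16 pixels per packet; an RLE packet stores one pixel value, a literal packet one value per pixel):

```python
from itertools import product

BPP = 4       # assumed bits per pixel (4bpp, as for the BG sprite)
HEADER = 5    # assumed packet header: 1 type bit + 4-bit count-1 field
MAXRUN = 16   # assumed max pixels per packet

def phase1(pixels):
    """Split pixels into maximal blocks: RLE for runs of >= 2 equal
    pixels, literal for stretches with no consecutive repeats."""
    blocks, i = [], 0
    while i < len(pixels):
        j = i
        while j + 1 < len(pixels) and pixels[j + 1] == pixels[i]:
            j += 1
        run = pixels[i:j + 1]
        if len(run) >= 2:
            blocks.append(['rle', run])
        elif blocks and blocks[-1][0] == 'lit':
            blocks[-1][1] = blocks[-1][1] + run
        else:
            blocks.append(['lit', run])
        i = j + 1
    return blocks

def cost(blocks):
    """Encoded size in bits, splitting long blocks into <=16-pixel packets."""
    bits = 0
    for kind, px in blocks:
        full, rem = divmod(len(px), MAXRUN)
        packets = full + (1 if rem else 0)
        if kind == 'rle':
            bits += packets * (HEADER + BPP)
        else:
            bits += packets * HEADER + len(px) * BPP
    return bits

def apply_combo(window, keep_rle):
    """Keep or demote each RLE block per the flags; merge adjacent literals."""
    out, flags = [], list(keep_rle)
    for kind, px in window:
        k = 'rle' if kind == 'rle' and flags.pop(0) else 'lit'
        if k == 'lit' and out and out[-1][0] == 'lit':
            out[-1][1] = out[-1][1] + px
        else:
            out.append([k, list(px)])
    return out

def phase2(blocks):
    """Greedy pass: look at a window spanning >= 16 pixels, try all 2^n
    RLE/literal combinations, commit only the leftmost block of the best."""
    result = []
    while blocks:
        window, span = [], 0
        for b in blocks:
            window.append(b)
            span += len(b[1])
            if span >= MAXRUN:
                break
        n_rle = sum(1 for kind, _ in window if kind == 'rle')
        best = None
        for combo in product([True, False], repeat=n_rle):
            cand = apply_combo(window, combo)
            if best is None or cost(cand) < cost(best):
                best = cand
        first = best[0]
        consumed = len(first[1])      # block boundaries align, so this
        while consumed > 0:           # pops whole source blocks
            consumed -= len(blocks.pop(0)[1])
        if result and result[-1][0] == 'lit' and first[0] == 'lit':
            result[-1][1] = result[-1][1] + first[1]
        else:
            result.append(first)
    return result
```

`phase2(phase1(pixels))` is still greedy, as the post says: it can commit a leftmost block that blocks a better choice further right, which is exactly the failure mode observed on Super Skweek.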
  8. I've analyzed the differences using the diagnostic information from SprTool, and implementing the "shape" feature won't save many bytes; 8 bytes, maybe. The other thing: I hypothesize that they did not use any particular algorithm to achieve such compression. Maybe they had "some" algorithm plus a tool to manually switch sections between RLE and literal compression, and it was hand optimized. That might be simpler than writing an algorithm with the kind of "prediction" needed to make the choices embedded in this compression. I think my algorithm is bit-agnostic and will compress 1bpp images the same way, because it makes its choices after counting how many bits are needed to store an arbitrary sequence of pixels.
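As a tiny illustration of that bpp-agnostic bit counting (a hypothetical helper, not the author's actual code, assuming the same 5-bit packet header as elsewhere in this thread: 1 type bit plus a 4-bit count field):

```python
HEADER = 5  # assumed packet header size: 1 type bit + 4-bit count-1 field

def block_bits(kind, n_pixels, bpp):
    """Bits needed to store one packet (<= 16 pixels) as RLE or literal."""
    if kind == 'rle':
        return HEADER + bpp            # one pixel value, repeated
    return HEADER + n_pixels * bpp     # one value per pixel

# The header cost is fixed, so only the payload scales with bit depth.
# RLE still wins for a 2-pixel run at either depth, just by a smaller
# margin at 1bpp:
print(block_bits('rle', 2, 4), block_bits('lit', 2, 4))  # 9 13
print(block_bits('rle', 2, 1), block_bits('lit', 2, 1))  # 6 7
```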
  9. I'm not as good as those cheaters... but I've got 5454 bytes. 40 bytes to go. Some bytes are added because I don't generate the jagged edge on the right in one place, but the original image is generally compressed better (by about 1 byte every few rows). In the attachment the palette has a bad background color (brownish instead of blue, I don't know why), but the other colors are more or less correct. SS.pal SS.spr
  10. OK guys, here's my next iteration: 5113 bytes. BG.spr BG.pal Here is also a tool that will generate a PNG image from a raw compressed sprite. Only 160x102 4bpp at this moment. Call it like "SprTool BG.spr BG.pal" and it will generate BG.png and BG.txt with some diagnostic information (I hope the exe won't be blocked by some antivirus scanners or something). SprTool.7z
  11. I was wondering how it really is with the speed of RLE sprites. My intuition is that they should in fact be faster, as there is less memory to read, but it definitely should be measured. Oh, come on! In the first place we should be making money doing real stuff needed by real business instead of wasting our time on legacy hardware... Of course everything here is for pure amusement, so why not optimize sprites a bit?
  12. Sure! Thinking about making it a challenge was maybe even the main reason to wait. I'm not sure whether it's the best solution or even whether it's 100% correct; MK seems to look fine with it. I think we need more test data (with various bpp) and a tool to decompress a compressed sprite, to check whether the compression is correct.
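A decompression checker along those lines could be quite small. Here is a hypothetical sketch, assuming the packet layout used in the sketches above (per packet: a 1-bit type flag where 1 = literal, a 4-bit count-minus-one field, then bpp-bit pixel values, packed MSB-first); real sprite data adds per-line offset bytes and padding that a full tool would have to handle:

```python
class BitReader:
    """MSB-first bit reader over a byte string."""
    def __init__(self, data):
        self.data, self.pos = data, 0
    def bits(self, n):
        v = 0
        for _ in range(n):
            byte, off = divmod(self.pos, 8)
            v = (v << 1) | ((self.data[byte] >> (7 - off)) & 1)
            self.pos += 1
        return v

def decode(data, n_pixels, bpp=4):
    """Decode packets until n_pixels pixels have been produced."""
    r, out = BitReader(data), []
    while len(out) < n_pixels:
        literal = r.bits(1)          # assumed: 1 = literal, 0 = RLE
        count = r.bits(4) + 1        # 1..16 pixels
        if literal:
            out.extend(r.bits(bpp) for _ in range(count))
        else:
            out.extend([r.bits(bpp)] * count)
    return out[:n_pixels]

# literal(1,2,3) followed by RLE(7 x 4), hand-packed MSB-first:
print(decode(bytes([0x90, 0x91, 0x8D, 0xC0]), 7))  # [1, 2, 3, 7, 7, 7, 7]
```

Running a compressor's output back through a decoder like this and diffing against the source pixels is the cheapest way to catch an encoding bug before trying the data on hardware.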
  13. I've managed to forge quite a simple algorithm (actually an add-on to the trivial greedy one, which I presume is exactly the same as in sprpck, as the packing efficiency was the same) that crunches the BG sprite to 5174 bytes. It's useless to share my code, as it's highly specific to my infrastructure, but I could write some pseudo-code if someone is interested. It reduced the size of the Mortal Kombat data from 468992 to 456704 bytes (there are many samples in there, though). Not too much, but always something for free.