# Optimize data overlapping

During my work on Space Castle I have to manage quite a lot of graphics data. The castle explosion in particular is very big (currently 16 frames, each 5 blocks of 40 bytes = 3200 bytes), but there is also a lot of other graphics data needed for the various messages and the scores.

All data is compressed by overlapping the data blocks. For the explosion, this was done by Chris before I took over from him. When Nathan asked me if there was enough space for additional frames to improve the explosion, I was facing the problem that I would have to optimize the overlapping of the 80 (or more!) blocks by hand again, and that this would always result in less-than-optimal solutions.

So I decided to write my own program for doing that.

The basic idea was to calculate the overlap of each block with every other block and then let the code do a recursive depth-first search (DFS) to try all possible combinations. While this worked in reasonable time for 10 blocks, I soon realized that for 80 blocks it would take forever (trying every ordering grows like O(n!)). So I needed to find a faster way, one that would deliver solutions close to the optimum in a short time.
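The brute-force idea can be sketched in Python like this (a minimal illustration, not my actual program; the byte-string block representation and the names `max_overlap` and `best_order_bruteforce` are just for demonstration):

```python
from itertools import permutations

def max_overlap(a: bytes, b: bytes) -> int:
    """Longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def best_order_bruteforce(blocks):
    """Try every ordering -- O(n!), so only feasible for small n."""
    best, best_saved = None, -1
    for perm in permutations(range(len(blocks))):
        # bytes saved = sum of overlaps between consecutive blocks
        saved = sum(max_overlap(blocks[perm[i]], blocks[perm[i + 1]])
                    for i in range(len(perm) - 1))
        if saved > best_saved:
            best, best_saved = list(perm), saved
    return best, best_saved
```

With three toy blocks `[b"ABCD", b"CDEF", b"EFGH"]` the best order is `[0, 1, 2]`, saving 4 bytes (two 2-byte overlaps).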

I searched the web and found... nothing (well, nothing I could understand).

Then I experimented with a greedy algorithm. My first version just started with the first block and reordered the other blocks by always selecting the next block with the maximum overlap. This delivered results very fast, but far from the optimum I had found with my DFS for lower block counts. So I decided to iterate over all blocks as starting blocks for the greedy pass and then chose the best solution. This returned somewhat better results, but was still not satisfying.
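A single greedy pass can be sketched like this (again an illustrative Python version with byte-string blocks; `greedy_order` is a made-up name):

```python
def max_overlap(a: bytes, b: bytes) -> int:
    """Longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def greedy_order(blocks, start):
    """From `start`, always append the unused block with the largest
    overlap from the current last block."""
    n = len(blocks)
    ov = [[max_overlap(a, b) for b in blocks] for a in blocks]
    order, used = [start], {start}
    while len(order) < n:
        last = order[-1]
        nxt = max((j for j in range(n) if j not in used),
                  key=lambda j: ov[last][j])
        order.append(nxt)
        used.add(nxt)
    saved = sum(ov[order[i]][order[i + 1]] for i in range(n - 1))
    return order, saved
```

Trying all starting blocks then just means calling `greedy_order` once per `start` and keeping the ordering that saves the most bytes.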

While making that last improvement, I made a coding error which resulted in mixed-up blocks. The calculated result was still OK, but the data generated for DASM was wrong, because I had forgotten to reset the block ordering before each iteration. When I got the result from the last optimization, I remembered that my buggy code had delivered even **better** results. So I reimplemented the bug and just fixed the output. Then I let the code run for some more iterations and the results were **GREAT!**

So this is how my algorithm works now:

- calculate the maximum overlap between each block and every other block (in both directions)

- call Greedy with random starting blocks many times (e.g. n^3 times) and feed the resulting block order into the next Greedy call
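The two steps above can be sketched as follows (an illustrative Python version, assuming the same byte-string blocks as before; in my program the iteration count is on the order of n^3):

```python
import random

def max_overlap(a: bytes, b: bytes) -> int:
    """Longest suffix of a that is also a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def iterated_greedy(blocks, iters, seed=0):
    """Greedy with random starting blocks; each round scans the blocks
    in the order produced by the previous round, so ties between equal
    overlaps break differently every time (the "bug" that helped)."""
    rng = random.Random(seed)
    n = len(blocks)
    ov = [[max_overlap(a, b) for b in blocks] for a in blocks]

    def saved(order):
        return sum(ov[order[i]][order[i + 1]] for i in range(n - 1))

    def greedy(scan, start):
        order, used = [start], {start}
        while len(order) < n:
            last = order[-1]
            # scanning in `scan` order makes tie-breaking order-dependent
            nxt = max((j for j in scan if j not in used),
                      key=lambda j: ov[last][j])
            order.append(nxt)
            used.add(nxt)
        return order

    scan = list(range(n))
    best, best_saved = scan, saved(scan)
    for _ in range(iters):
        cand = greedy(scan, scan[rng.randrange(n)])
        if saved(cand) >= best_saved:
            best, best_saved = cand, saved(cand)
        scan = cand  # feed the result back into the next Greedy call
    return best, best_saved
```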

Pretty simple and reasonably fast for the number of blocks I need. The results are optimal for low block counts and very consistent for higher ones (where I can no longer verify against DFS).

Attached you find the program and some test files. The program also offers a lot of options, like page sizes to consider, offsets into pages, various output formats etc. This should be pretty useful.
