
Concerns of ROM Space...


Cybearg


What is J in this example?

 

Are you addressing me?

 

I and J are just meaningless variables in that code snippet. I wrote a short snippet of code just to see how efficient the test/branch was and how efficient I = I * 2 is. Nothing more.

 

 

What about how I'm going to handle multiple tables of this? I have a level that's at least 10 screens long, which means I'll probably need a good 3-5 tables to hold all that data. How do I naturally transition from level to level, table to table, column to column? Just a lot of ON...GOTO branches for levels and tables and then handle columns with a counter that keeps track of how far into the data you are?

 

Good question. I don't know what the best way to do it in IntyBASIC is.

 

If I were doing this in assembler, I would have one table that gives a pointer to the start of each level, and initialize my "current column" pointer to that.

 

To bring up the initial screen, I'd call "scroll column" 20 times to bring it on screen. The scroll column routine would slide the display over and decode the next column, updating the "current column" pointer. Further scrolling to the right (pushing the screen to the left) would just call "scroll column" once as each new column gets exposed.
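In pseudo-form (a rough Python sketch, not the actual assembler), that idea might look like the following, with a 20-column window standing in for the 20-card-wide display:

```python
from collections import deque

# Python sketch (not the actual assembler) of the "scroll column" idea:
# a 20-column window into the level, advanced one column at a time.
SCREEN_COLS = 20

def make_view(level_columns):
    """Bring up the initial screen by 'scrolling' 20 columns on."""
    view = deque(maxlen=SCREEN_COLS)   # old columns fall off the left edge
    pos = 0                            # the "current column" pointer
    for _ in range(SCREEN_COLS):
        view.append(level_columns[pos])
        pos += 1
    return view, pos

def scroll_column(view, level_columns, pos):
    """Slide the display left and decode the next column on the right."""
    view.append(level_columns[pos])    # leftmost column drops out automatically
    return pos + 1
```

Each call to scroll_column advances the "current column" pointer exactly as the description above suggests; further scrolling to the right is one call per newly exposed column.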

 

I don't know the best way to do that in IntyBASIC. Maybe you could store all of your map data for the levels in a single, big array, and have a second array give the starting offset for each level. Then have one variable keep track of where you are in the big array.

 

The only problem with the one-big-array approach is that all the levels would need to be stored contiguously in memory.

 

Another approach is data tables in ASM, and use PEEK to read the data. (There is a PEEK, right?) Then the data tables for different levels need not be contiguous in memory.


Actually... does IntyBASIC's RESTORE command take a label or a line number? That would pretty much be perfect here. An ON..GOTO to pick which RESTORE to run, and then just use READ statements to read the level out.

 

RESTORE takes a label :); there are no line numbers in IntyBASIC.


RESTORE sets or resets the data table pointer. So if you have a table like:

 

MyTable:
    data $0000, $0000, $0000, $0000
Then you can do:

 

    RESTORE MyTable
And the next READ statement would start reading from that point.

 

In old classic BASIC, RESTORE just reset the pointer to the first DATA statement, since there were no labels.


Ohh, that's great, then! I thought I would have to fiddle around with branching logic to go from one table to the next, but apparently I can just use this to set to the right table and then just READ, huh?

 

That's exciting! Does READ auto-increment the read pointer? Is there any way to decrement it again? If not then how will I know that I'm at the end of a set of data? I guess I could possibly use the remaining 4 bits in the counter word to say how many remaining reads there will be for that column?

Edited by Cybearg


READ auto-increments, but it doesn't move back. You'll either have to keep a sentinel (old BASIC style), or store the length at the beginning.

 

If you use a sentinel, you'll have to test for it on every READ. If you store the length, you read it once, set your loop counter, and Bob's your uncle.
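As a rough illustration (in Python rather than IntyBASIC), here are the two conventions side by side; the sentinel value $FFFF is just an example for this sketch, not anything from the thread:

```python
# The sentinel value is an arbitrary choice for this sketch.
SENTINEL = 0xFFFF

def read_with_sentinel(stream):
    """Stop at the sentinel; every READ pays for an extra compare."""
    out = []
    for value in stream:
        if value == SENTINEL:
            break
        out.append(value)
    return out

def read_with_length(stream):
    """First value is the count: read it once, then loop that many times."""
    it = iter(stream)
    count = next(it)
    return [next(it) for _ in range(count)]

# Both decode the same payload:
# read_with_sentinel([7, 8, 9, 0xFFFF]) == read_with_length([3, 7, 8, 9]) == [7, 8, 9]
```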


Presumably, there is some aspect of the world that prevents you from walking "beyond the end", in which case you don't need anything in the encoding to tell you that.

 

Otherwise, if you have a spare bit in the encoding, you could set an "end of level" bit to stop your "show next column" routine from letting you scroll further.

 

But yeah, basically you could have an ON..GOTO that GOTOs the right RESTORE statement for the next level, and all "show next column" needs to do is interpret the next column's worth of data (assuming you set up the data format so you don't need to go back).

 

The data format I proposed earlier, with the "copy word" and the column data after it, lends itself nicely to this structure since you only need to read forward in the "show next column" routine, and you never need to go backward. That is, assuming you never need to move the screen rightward (ie. scroll left).

 

If, like Super Mario (or Space Patrol or Sonic The Hedgehog or any number of other games), you have certain key "restart/respawn points" in your level that the player returns to if they "die", then you might need to cut things up finer than just "whole level". Space Patrol actually has a table of checkpoints, as opposed to a table of level-starts, and it keeps a flag in the encoding to tell the engine when the player's passed a checkpoint.

 

Putting this concretely: You have 18 levels, each 10 screens wide. Suppose you have 2 checkpoints per level—one at the beginning, and one midway through (say 4 or 5 screens in). That's 36 level segments. Keep a single variable that holds the "current level segment". Have a single procedure that does an ON..GOTO, something like this (syntax may be slightly incorrect):

SetLevel    Procedure
            ON CurrLevelSegment GOTO SetS1L1C1, SetS1L1C2, SetS1L2C1, SetS1L2C2, SetS1L3C1, SetS1L3C2, SetS2L1C1, SetS2L1C2, SetS2L2C1, SetS2L2C2, SetS2L3C1, SetS2L3C2, SetS3L1C1, SetS3L1C2, SetS3L2C1, SetS3L2C2, SetS3L3C1, SetS3L3C2, SetS4L1C1, SetS4L1C2, SetS4L2C1, SetS4L2C2, SetS4L3C1, SetS4L3C2, SetS5L1C1, SetS5L1C2, SetS5L2C1, SetS5L2C2, SetS5L3C1, SetS5L3C2, SetS6L1C1, SetS6L1C2, SetS6L2C1, SetS6L2C2, SetS6L3C1, SetS6L3C2
SetS1L1C1: RESTORE DataS1L1C1 : RETURN
SetS1L1C2: RESTORE DataS1L1C2 : RETURN
SetS1L2C1: RESTORE DataS1L2C1 : RETURN
SetS1L2C2: RESTORE DataS1L2C2 : RETURN
SetS1L3C1: RESTORE DataS1L3C1 : RETURN
SetS1L3C2: RESTORE DataS1L3C2 : RETURN

SetS2L1C1: RESTORE DataS2L1C1 : RETURN
SetS2L1C2: RESTORE DataS2L1C2 : RETURN
SetS2L2C1: RESTORE DataS2L2C1 : RETURN
SetS2L2C2: RESTORE DataS2L2C2 : RETURN
SetS2L3C1: RESTORE DataS2L3C1 : RETURN
SetS2L3C2: RESTORE DataS2L3C2 : RETURN

SetS3L1C1: RESTORE DataS3L1C1 : RETURN
SetS3L1C2: RESTORE DataS3L1C2 : RETURN
SetS3L2C1: RESTORE DataS3L2C1 : RETURN
SetS3L2C2: RESTORE DataS3L2C2 : RETURN
SetS3L3C1: RESTORE DataS3L3C1 : RETURN
SetS3L3C2: RESTORE DataS3L3C2 : RETURN

SetS4L1C1: RESTORE DataS4L1C1 : RETURN
SetS4L1C2: RESTORE DataS4L1C2 : RETURN
SetS4L2C1: RESTORE DataS4L2C1 : RETURN
SetS4L2C2: RESTORE DataS4L2C2 : RETURN
SetS4L3C1: RESTORE DataS4L3C1 : RETURN
SetS4L3C2: RESTORE DataS4L3C2 : RETURN

SetS5L1C1: RESTORE DataS5L1C1 : RETURN
SetS5L1C2: RESTORE DataS5L1C2 : RETURN
SetS5L2C1: RESTORE DataS5L2C1 : RETURN
SetS5L2C2: RESTORE DataS5L2C2 : RETURN
SetS5L3C1: RESTORE DataS5L3C1 : RETURN
SetS5L3C2: RESTORE DataS5L3C2 : RETURN

SetS6L1C1: RESTORE DataS6L1C1 : RETURN
SetS6L1C2: RESTORE DataS6L1C2 : RETURN
SetS6L2C1: RESTORE DataS6L2C1 : RETURN
SetS6L2C2: RESTORE DataS6L2C2 : RETURN
SetS6L3C1: RESTORE DataS6L3C1 : RETURN
SetS6L3C2: RESTORE DataS6L3C2 : RETURN

           EndP

DataS1L1    Procedure
DataS1L1C1: DATA ....
            DATA ....
            DATA ....
DataS1L1C2: DATA ....
            DATA ....
            DATA ....
            EndP


DataS1L2    Procedure
DataS1L2C1: DATA ....
            DATA ....
            DATA ....
DataS1L2C2: DATA ....
            DATA ....
            DATA ....
            EndP

(Side note: That ON..GOTO is really long. Does IntyBASIC really require that all on one line?)

 

Anyway, the motivation for putting all the level data into discrete procedures is to allow a future version of IntyBASIC *cough* *cough* to allocate the level fragments wherever there's room in the memory map. The Intellivision has gifted us with a fragmented memory map, and so cutting the level data into pieces like this makes a certain amount of sense.

 

I did put all the checkpoints for a single level within the same procedure, the thought being that you want to be able to play through an entire level uninterrupted (displayed continuously), and the only purpose of the checkpoint is to give the player the means to restart midway through the level. You could, of course, have a different number of checkpoints.

 

 

In any case, to start a new level, you'd set CurrLevelSegment to Stage*6 + Level*2 (0-based, since each stage spans 3 levels x 2 checkpoints = 6 segments). If the player passes whatever checkpoint you establish, you'd increase this value by 1, so when the player "dies" and "respawns", you'd display the level starting at the checkpoint rather than at the beginning.
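As a worked example of that arithmetic (assuming 0-based indices, stage-major order, 3 levels per stage, and 2 checkpoints per level, matching the 36 SetS?L?C? labels above):

```python
# Assumed layout (matching the 36 SetS?L?C? labels): stage-major order,
# 3 levels per stage, 2 checkpoints per level, all indices 0-based.
LEVELS_PER_STAGE = 3
CHECKPOINTS_PER_LEVEL = 2

def segment_index(stage, level, checkpoint=0):
    return (stage * LEVELS_PER_STAGE * CHECKPOINTS_PER_LEVEL
            + level * CHECKPOINTS_PER_LEVEL
            + checkpoint)

# segment_index(0, 0, 0) == 0   (first segment, S1L1C1)
# segment_index(5, 2, 1) == 35  (last segment, S6L3C2)
```

Passing a checkpoint is then just adding 1 to the stored segment number.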

 

Make sense?

Edited by intvnut


Shockingly, yes! Give me a while to try implementing this and I'll get back.

 

I haven't decided yet whether to add checkpoints. It may be wise, since the current design of the game allows for only 2 hits before dying, and dying puts you back at the start of the level. Losing all your lives puts you at the beginning of a stage. I'm not sure if checkpoints would make that more fair or "too easy." I'll probably set things up for checkpoints anyway; I can always not use them if I decide it doesn't work so well.

 

My question about the end of data sets was referring to the end of a column, not the end of a level. Since the number of words of new data varies, I thought it might be problematic, but I suppose that it's implicit: there will always be 12 tiles read in. It's just a matter of whether it's getting the value from a PEEK or from a READ.

Edited by Cybearg

Yep, that's baked into the counter word, if you use the approach I proposed. If you use a different encoding (I'm not married to mine), then yes, you'll need to tag "last byte" somehow.

 

Maybe while you're experimenting, just start out with a fixed "12 bytes per column". Later you can get fancy and try to compress down from there once your data starts getting larger, and you have a data set to experiment on so you can measure the effectiveness of your approach.

 

I think we've established you have options. :-) Now make something work with the simple approach and add complexity when the time comes.


 

Maybe while you're experimenting, just start out with a fixed "12 bytes per column".

But that essentially means there would be no encoding beyond the basic compression that I've already implemented with the unique word table. The only additional compression that comes here will be from saving space by copying words.

 

If only there was a way to combine this "copy from left" encoding with the "repeat x times" type... But that's probably not possible with only one nybble.

Edited by Cybearg

I'm just suggesting getting things working layer by layer.

 

First get it working with full words stored by column. Then add the byte => word indirection and take the cheap 2:1 immediately.
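A rough Python sketch of that indirection (illustrative only; the function names are made up): since a BACKTAB word is 16 bits, storing each cell as a 1-byte index into a unique-word table halves the map data, as long as there are at most 256 unique words:

```python
def build_tables(raw_words):
    """One way to build the unique-word table plus 1-byte indices."""
    unique = sorted(set(raw_words))
    assert len(unique) <= 256, "byte indirection needs <= 256 unique words"
    index_of = {w: i for i, w in enumerate(unique)}
    return unique, bytes(index_of[w] for w in raw_words)

def expand(word_table, packed):
    """Decode: each stored byte indexes the 16-bit word it stands for."""
    return [word_table[b] for b in packed]

raw = [0x05A1, 0x0000, 0x05A1, 0x0679]
table, packed = build_tables(raw)
# expand(table, packed) == raw, with the map stored at one byte per cell
```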

 

Once you have some real maps, then experiment either with RLE or copy-from-left once you have enough level data that you can figure out which one works better for the actual worlds you're building.

 

The main point is to get the top level structure right first (the ON..GOTO + RESTORE + DATA, combined with a "show next column" methodology seems very promising), and then upgrade that. Most of the "brains" would be in "show next column", and in whatever generates your DATA statements.

 

While you're laying things out, other ideas might occur to you. The top level structure isn't a lot of code, so you could replace it easily.

 

I know I always learn a lot from a prototype. If I delay writing a prototype, I'm excellent at spending forever trying to find a perfect encoding and a perfect solution, when a "good enough" one is right at my fingertips. :-)

 

I think dZ knows what I'm talking about. :-)


I need the encoded data first. I'll add the encoding functionality to IntyMapper either tonight or tomorrow. Figuring out how to encode the data should clear up any questions on the decoding end as well, so I'll be able to (hopefully) build out those IntyBasic decoding functions pretty easily then and that hurdle will be crossed. :)

 

It should be interesting to see how much space is actually saved with this encoding technique versus one more like DZ's.

 

Realistically, I highly doubt that this compression will work well enough that I can split it up into checkpoints and have that be sufficient. More likely I'll have to have 2 or 3 tables per checkpoint per level per stage. How will I be able to tell I'm at the end of a table? Just a special register to indicate to move to the next table, I suppose?

Edited by Cybearg

Why do you think you'll need to split up each level? Even completely uncompressed, a level is only 2400 words. (10 screens x 240)

 

Compressing words to bytes cuts that in half (1200 words). You can fit an entire stage in 3600 words.

 

If you just put an "ASM ROMSEGSZ 1200" before each level's data (assuming you use the 'hacked for cart.mac' prolog/epilog and select a larger memory map like 42K), I think you'll be just fine.


The problem was that, before, 1 table held an entire screen. I just meant that I would need to move from table to table.

 

That's obviously changed now. There could be something wrong with my encoding script still, but this seems correct at a glance. From a 7-screen set of map data, this is what got spit out:

  REM 69 unique cards
card_table:
  data $0000, $01E1, $0079, $05A1, $05A9, $0679, $0601, $063E, $0609, $43A4
  data $42FC, $43AC, $05B9, $02F8, $0626, $00D6, $0076, $43B4, $43BC, $068E
  data $0089, $4682, $43A5, $08E1, $0EFB, $42FA, $02FC, $43BD, $060C, $44E5
  data $44EC, $03B2, $05D9, $0611, $0682, $43C5, $43E5, $43EC, $43CC, $08F1
  data $02FE, $0665, $4445, $444C, $0686, $02F4, $05E4, $4467, $446B, $05EC
  data $0539, $0604, $067C, $0301, $03A4, $03BA, $03C1, $0341, $0359, $0309
  data $0321, $02F9, $0091, $0461, $0419, $0351, $0399, $0319, $43B2

data_table_0:
  data $FFF0, $0000, $0000, $0017, $0000, $1700, $000D
  data $0EF0, $1700, $2000, $170D, $0000
  data $0EF0, $0020, $000C, $0038, $3F00
  data $0EB0, $1821, $2100, $3940
  data $9EF0, $0102, $0000, $0032, $353A, $0D00
  data $FBF0, $000C, $0207, $0616, $1B33, $3333, $3300
  data $7FF0, $0200, $0000, $161B, $0000, $0000, $0000
  data $6FC0, $010D, $161B, $1313, $292D
  data $6FC0, $0201, $0B11, $0000, $0000
  data $EF70, $020D, $0000, $0B11, $2D28, $292D
  data $DFF0, $010E, $1419, $1F0B, $111C, $1C1C, $1C00
  data $DFF0, $0301, $0000, $0000, $0005, $003B, $4100
  data $E0B0, $040F, $0D00, $053D
  data $EEB0, $0500, $0018, $2121, $0535, $4200
  data $EEB0, $0210, $0200, $0000, $003C, $4300
  data $FAB0, $0102, $0006, $0627, $053D, $0000
  data $FBF0, $0200, $1300, $0000, $0A33, $3333, $3300
  data $E0F0, $010D, $0000, $0000, $2D00
  data $C370, $0001, $161B, $2829, $0000
  data $4760, $0516, $1B00, $0000
  data $EE90, $0606, $0616, $1B00, $2D16
  data $ECF0, $0700, $000A, $0000, $1415, $0A00
  data $8060, $0800, $0000
  data $8340, $0528, $292D
  data $B340, $0014, $1500, $0000
  data $3110, $0000, $2D0B
  data $0FF0, $1A1C, $1C1C, $1C1C, $1C1C
  data $0FF0, $0000, $0000, $0000, $0000
  data $0000
  data $0070, $161C, $1C00
  data $00F0, $161B, $0000
  data $01C0, $161B, $0000
  data $0380, $161B, $0000
  data $03C0, $0A00, $2829
  data $1CC0, $1419, $2200, $0000
  data $1C10, $0000, $0016
  data $0630, $161B, $161B
  data $0E70, $161B, $0016, $1B00
  data $1CE0, $161B, $0016, $1B00
  data $18F0, $0A00, $0A00, $1329
  data $0E30, $1313, $2900, $0000
  data $0E00, $0000, $0000
  data $0F00, $1C1C, $1C1C
  data $1F30, $0000, $0000, $0013, $2900
  data $0370, $1415, $1C1C, $1C00
  data $03F0, $0000, $0000, $0000
  data $0000
  data $0010, $1600
  data $0030, $161B
  data $0070, $161B, $0000
  data $07F0, $232A, $2E2E, $1B00, $2800
  data $0FD0, $1D24, $1B2F, $3400, $0000
  data $0F00, $1E25, $1130
  data $0FC0, $0026, $2B31, $3111
  data $07E0, $0000, $0000, $0B11
  data $0070, $000B, $1100
  data $0030, $000B
  data $0010, $0000
  data $0000
  data $0000
  data $1FF0, $091C, $1C1C, $1C1C, $361C, $1C00
  REM 506 bytes in table
  REM 61 columns in table

data_table_1:
  data $3FF0, $0911, $0000, $0000, $000A, $0000
  data $7000, $0911, $1300
  data $70C0, $0B12, $0009, $1100
  data $61E0, $000A, $0911, $0028
  data $09A0, $130B, $1200
  data $09C0, $0000, $0B12
  data $01E0, $141F, $0B12
  data $69E0, $0911, $1300, $0000, $0A00
  data $6810, $0B12, $001C
  data $7030, $000B, $1200, $0000
  data $3FF0, $000A, $1C1C, $1C1C, $1C1C, $1C1C
  data $1FF0, $0000, $0000, $0000, $0000, $0900
  data $0030, $0911
  data $0070, $0911, $0000
  data $00E0, $0911, $1300
  data $01E0, $0911, $0000
  data $01C0, $0A00, $1300
  data $0070, $0014, $1900
  data $0030, $0000
  data $0000
  data $4040, $0913
  data $C1C0, $0911, $0B12, $0000
  data $C180, $0A00, $000A
  data $0000
  data $2000, $1300
  data $2080, $0000
  data $1000, $1300
  data $D000, $0B12, $0000
  data $E000, $000B, $1200
  data $7000, $000B, $1200
  data $38C0, $000B, $122C, $2800
  data $1CC0, $000B, $1200, $0000
  data $1E00, $141F, $0B12
  data $1E00, $0000, $0911
  data $0E00, $0911, $0000
  data $1F00, $0911, $002C, $2800
  data $1B00, $0B12, $0000
  data $1C00, $000B, $1200
  data $0E00, $000B, $1200
  data $0600, $000A
  data $0300, $0B12
  data $0300, $000A
  data $0100, $0000
  data $0000
  data $0000
  data $0000
  data $0880, $0A28
  data $0080, $0000
  data $0000
  data $0210, $2809
  data $02F0, $0014, $3709, $1100
  data $0CF0, $0B12, $0009, $1100
  data $0E60, $000B, $120A, $0000
  data $0710, $000B, $122C
  data $03F0, $000B, $120B, $1200
  data $01F0, $000A, $000B, $1200
  data $00F0, $0B12, $000B
  data $00F0, $000B, $1200
  data $0060, $000B
  data $0020, $0000
  data $0000
  data $0090, $0909
  data $01B0, $0911, $0911
  data $03F0, $0911, $0009, $1113
  data $0370, $0A00, $0B12, $0000
  data $00F0, $2800, $0B12
  data $00B0, $0000, $0B00
  data $0030, $3E44
  data $00B0, $2800, $0000
  data $0080, $0000
  data $0200, $0000
  data $0000
  data $0000
  data $0000
  data $0000
  data $0000
  data $0000
  data $0000
  data $0000
  REM 426 bytes in table
  REM 79 columns in table

Now, that is not a full-size level and the level hasn't been given any detailing yet, so there will eventually be a lot more words in there. Still, it reduced the raw 3360-byte data down to the compressed data of 1818 bytes and now from there down to 1070 bytes. Not bad.

 

In the end, most of the level data will be more like the density of those first 20 columns, to give you an idea of encoding quality. There are:

 

6 columns with 6 data words

5 columns with 5 data words

6 columns with 4 data words

2 columns with 3 data words

0 columns with <3 data words

 

Taking that as a barometer, this particular screen has 220 bytes in it. That is only a 20-byte savings from encoding the compressed data one byte per index. Not particularly great--about a 9% savings. It would reduce the 2400-byte cost of data per level from plain compression down to 2184 bytes with compression + encoding. Is that worth it? Can something better be done?

 

EDIT: I made a quick calculation for what it would be like if I encoded by vertical redundancy, i.e., 1 byte for a card and 1 byte for the number of times that card repeats. Going by this encoding method (and using the first screen as a barometer), it would actually ADD 48 bytes per screen over plain compression, so obviously that is out of the question. There is a lot of tile redundancy, but not often in long, continuous sequences.
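That cost is easy to check with a quick sketch: with 1 byte for the card and 1 byte for the run length, a column only gets smaller when the average vertical run is longer than 2:

```python
def rle_size(column):
    """Bytes needed to store one column as (card, run-length) pairs."""
    size, i = 0, 0
    while i < len(column):
        run = 1
        while i + run < len(column) and column[i + run] == column[i]:
            run += 1
        size += 2          # one byte for the card, one for the count
        i += run
    return size

# A 12-row column with no repeats costs 24 bytes against 12 stored plainly,
# so short runs make this scheme a net loss.
```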

Edited by Cybearg

 


 

Is each DATA statement a new column, or is there no relationship between DATA statements and columns?


 


Each statement is one column, yes. It starts with the 12-bit redundancy code (which is why the lowest nybble is always 0) and is followed by the new data in the form of words.
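For illustration, here is a hypothetical Python decoder for that layout. It assumes, beyond what's stated above, that row 0 corresponds to the mask's most significant bit, that a set bit means a new card byte follows (a clear bit means copy from the column to the left), and that the byte indices are packed high byte first in the following words:

```python
ROWS = 12

def decode_column(words, prev_column):
    mask = words[0] >> 4          # top 12 bits of the counter word
    payload = []
    for w in words[1:]:           # byte indices packed two per data word
        payload += [(w >> 8) & 0xFF, w & 0xFF]
    column, p = [], 0
    for row in range(ROWS):
        if mask & (1 << (ROWS - 1 - row)):   # 1 = new card byte follows
            column.append(payload[p])
            p += 1
        else:                                # 0 = copy from column to the left
            column.append(prev_column[row])
    return column
```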



 

I think I know why I was confused earlier. You flipped the meaning of the bit for "copy from left". 1 means "data is present", 0 means "copy from left". :-)

 

It looks like that scheme kicks in reasonably often, if not all the time. Not too bad for something so dirt-cheap.

 

It also appears right now you're only using 69 unique 16-bit values. If you can keep it below 128, then you could use the MSB of the byte to indicate "repeat this byte once vertically", and get vertical compression that way. ie. $00..$7F mean "look up the word for this byte and copy it to the next row on the display", while $80 - $FF mean "look up the word for this byte and copy it to the next two rows on the display."
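Ignoring the copy-from-left layer for clarity, a Python sketch of that MSB trick might look like this (it assumes all card indices stay below $80):

```python
def vertical_encode(cards):
    out, i = [], 0
    while i < len(cards):
        if i + 1 < len(cards) and cards[i] == cards[i + 1]:
            out.append(cards[i] | 0x80)   # MSB set: card covers two rows
            i += 2
        else:
            out.append(cards[i])          # MSB clear: card covers one row
            i += 1
    return out

def vertical_decode(data):
    out = []
    for b in data:
        out.append(b & 0x7F)
        if b & 0x80:
            out.append(b & 0x7F)          # repeat once vertically
    return out

# vertical_encode([0x1C, 0x1C, 0x09]) -> [0x9C, 0x09]
```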

 

You'd have to put a priority between vertical and horizontal copy so it wasn't ambiguous. ie. Figure out all the copy-from-left words first, and then any place where you have 2 or more consecutive bytes that aren't "copy from left" but are otherwise the same value, you could condense them to 1 with that trick.

 

It wouldn't kick in often, but I saw a few places it would. Most of your vertical runs are very short, so you get a lot of bang out of spending 1 bit that way.

 

Suppose you had more than 128 unique tiles. There's still a way out.

 

In your encoding script, if you can figure out which tiles are repeated vertically most often, and move them to one end of the set, then you could get even more clever. For example, if you can figure out the 16 tiles that are copied vertically most often, you could shift them down to positions $E0 .. $EF. Then define the encodings $F0 .. $FF to mean "use (tile - $10) in two rows." ie. If you saw $F0, it would be the same as seeing $E0,$E0. If you saw $F3, it would mean the same as seeing $E3,$E3.

 

Then you could have 240 unique tiles, 16 of which were eligible for upgrade to "use twice".

 

I set the cut line for that example arbitrarily. Basically, however many unique tiles you have, if it's less than 254 unique tiles, you could pick the most-often-repeated-vertically tiles, move them to the end in the byte-lookup table, and use the remaining encodings to mean "Repeat my corresponding tile twice vertically."

 

In the data you showed above, $1C would be a good candidate if you could pick only 1. :-)

 

 

BTW, I noticed in your data that $09 and $11 often appear next to each other. There appear to be other pairs that show up often. You could use a trick like this to instead mean "This byte represents two vertical cards, and so look it up in a separate table of common pairs."

 

That is, if you have X unique tiles, where X < 256, you could scan the data and find the Y most common pairs (where Y = 256 - X), and encode those pairs in the remaining encodings. So, 0 .. X-1 would give a single tile, and X .. 255 would give two tiles vertically.
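Here is a rough Python sketch of that pair-table idea. The tile count 69 and the pair ($09, $11) come from the data above; everything else is made up for illustration:

```python
from collections import Counter

def build_pair_table(columns, num_tiles, max_codes=256):
    """Find the most common vertical pairs; codes num_tiles..255 name them."""
    pairs = Counter()
    for col in columns:
        for a, b in zip(col, col[1:]):
            pairs[(a, b)] += 1
    budget = max_codes - num_tiles        # Y = 256 - X spare encodings
    return [pair for pair, _ in pairs.most_common(budget)]

def pair_decode(data, num_tiles, pair_table):
    out = []
    for code in data:
        if code < num_tiles:
            out.append(code)                          # a single tile
        else:
            out.extend(pair_table[code - num_tiles])  # two tiles vertically
    return out
```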

 

Am I making sense?


It should be interesting to see how much space is actually saved with this encoding technique versus one more like DZ's.

 

Keep in mind that "DZ's technique" was just off the top of my head and meant to illustrate that there are some simple ways to compress, if you only take a look at your data structure and give it some thought. It was also intended to invite others to provide additional means of compression.

 

My point was, you shouldn't have to rely on bank-switching as a magic bullet. At the time, you seemed dead set on just waiting for nanochess to implement it in IntyBASIC hoping that it would solve all your problems. I thought that was completely unnecessary.

 

As you can see now with intvnut's algorithm, that is true. :)

 

-dZ.


I've implemented using the MSB of each byte of level data to signify if the following card will be the same. I'm at 69 unique tiles at the moment and I don't think it likely that I will have 128 unique tiles in the entire game, and certainly not within a single map. If I do, I can always have separate card tables for different stages/maps/whatever. It should be good, though.

 

Before, in the first 20 columns of my map data, I had:

 

6 columns with 6 data words

5 columns with 5 data words

6 columns with 4 data words

2 columns with 3 data words

0 columns with <3 data words

 

Now I have:

0 columns with 6 data words

4 columns with 5 data words

11 columns with 4 data words

5 columns with 3 data words

0 columns with <3 data words

That's some big compression where it counts! By my earlier estimation, the compression brought 240 bytes per screen and 2400 bytes per level (from the 480 bytes per screen it would take to write raw BACKTAB words) down to 220 bytes per screen and 2200 bytes per level (with basic encoding). Using these first 20 columns as a sample, it's now down to 198 bytes per screen and 1980 bytes per level. Assuming one level (or all levels) use no more than 128 unique words, that's a total of 21728, 19928, 17948 words for all map data, respectively.

Now, the only other big space saver I could see would be that lower nybble of the counting word that is always wasted. If I packed 4 bits of the 4th counter word into each of the preceding three counter words, I could cut 1 word every 4 columns. That would be a savings of 5 words per screen, or 50 per level -- about 900 for the entire set of map data. Might be worth pursuing. And there's no catch to it, either, aside from the processing cost of decoding the fourth counter word.
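For concreteness, a Python sketch of that packing (assuming each counter word keeps its 12-bit mask in the top bits, and the fourth mask is split across the three spare low nybbles):

```python
def pack4(masks):
    """Four 12-bit column masks -> three 16-bit counter words."""
    m0, m1, m2, m3 = masks
    return [
        (m0 << 4) | ((m3 >> 8) & 0xF),   # nybble 1 of the 4th mask
        (m1 << 4) | ((m3 >> 4) & 0xF),   # nybble 2
        (m2 << 4) | (m3 & 0xF),          # nybble 3
    ]

def unpack4(words):
    m3 = ((words[0] & 0xF) << 8) | ((words[1] & 0xF) << 4) | (words[2] & 0xF)
    return [w >> 4 for w in words] + [m3]

# pack4([0xFFF, 0x0EF, 0x800, 0xABC]) -> [0xFFFA, 0x0EFB, 0x800C]
```

The decoder's extra work is just reassembling the fourth mask from the three saved nybbles, which is the processing cost mentioned above.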

Edited by Cybearg

Make sure you do not over-complicate decoding. You basically have to "chase" the STIC down as it draws the screen, or else it'll result in an artifact known as "tearing."

 

That said, I don't know how scrolling interacts with the ISR in IntyBASIC, so it could be that there is no way around it.



 

Indeed. You get a new row of cards on screen every 960 cycles or so if memory serves me, so you need to take fewer than that per card to stay ahead of the refresh. As I recall, IntyBASIC has a 'WAIT' statement that synchronizes the game with the vertical retrace.

 

 


 

 

So the copy-from-left and copy-twice took you from 20 columns of 6 words (120 words total) to 79 words (4*5 + 11*4 + 5*3 = 20 + 44 + 15 = 79). That's over 1/3rd savings on your sample 20 columns. I'm not sure where you get 198 bytes. It looks like 158 bytes to me.

 

Saving an additional word every 4 columns would bring you from 79 down to 75. That's another 5%, but it does add more code to the critical path.

