Everything posted by Cybearg

  1. Ahh... That's probably why nothing seemed to happen when I modified the ORG and RORG statements and why it would only accept 10 or 16 values (probably an if statement in the kernel that only handles 10 or 16). Don't worry. I'll be putting aside 6 bytes: three for the actual score and three for the other stuff. I'll just switch between them so different data is shown. Handling the score should be easy, as should the value on the left, since I only need to worry about one digit. Is there a way to manipulate digits in BCD mode? For instance, let's say the right-hand value is $b1 and $90 in its two lower bytes. I want to subtract 1 from the 190 each cycle until it's just $b0 and $00. I'm sure I could do a little logic to check whether $b1 & $0F is > 0 when $90 reaches $00 and then add $99 to the second score, but how do I do -1 in BCD in a way that will actually count down in BCD and not in hex?
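
     For reference, a minimal sketch of the decimal-mode way to do that -1, written as plain 6502 for an asm ... end block (scoreHi and scoreLo are placeholder names for the two bytes holding $b1 and $90, not anything from the kernel):

       sed             ; put the 6502 in BCD (decimal) mode
       sec             ; set carry so sbc subtracts exactly 1
       lda scoreLo     ; $90
       sbc #$01        ; BCD subtract: $90 -> $89, $00 -> $99 with borrow
       sta scoreLo
       lda scoreHi     ; $b1
       sbc #$00        ; propagate the borrow ($b1 -> $b0 only on a borrow)
       sta scoreHi
       cld             ; always leave decimal mode again afterwards

     The $b glyph digit in the high nybble should pass through untouched, since SBC's decimal correction only adjusts a nybble when that nybble borrows.
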
  2. Just tried it out in Stella and on my Harmony. Seems to work great! I don't see any issues in either, though granted I only have the playfield and sprite 0 on screen; that shouldn't matter, should it? Now, assuming it actually works as well as it appears to, what would I need to do to get only two extra score sprites in there (I only need the extra two, not a whole six more)? Splitv7.zip
  3. I don't actually see any difference... The bug with that one number is still there. Splitv6.zip
  4. Almost there! There's a problem with the second digit on the left. Otherwise, things look solid! Splitv5.zip
  5. It seems you changed things up on me a little, Omega. Now it won't accept my kernel_option of splitscore_2_4 and keeps saying it's unresolved, so I took it out of the kernel options and added const splitscore_2_4 = 1 instead. Now I'm getting a different problem: looking at your kernel, you do mention LEFT74_11 and LEFT74_12 in one place (below) but don't seem to assign them values. I have no idea what they do.

       IF splitscore_2_4
         lda splitKernelVar  ; Omega - new code
         and #SPLIT_KERN_BIT
         cmp #SPLIT_KERN_BIT ; carry now set/cleared for branching later
         ldx #LEFT74_12
         stx HMP1
         ldx #LEFT74_11
         stx HMP0

     If I give LEFT74_11 and LEFT74_12 const values like so:

       const LEFT74_11 = 0
       const LEFT74_12 = 0

     it compiles and seems to mostly work! There is a bit of weirdness on the left side of the screen when the split score bit is set (a little red line), and the numbers look a little wonky in relation to one another when it's not set (as if their spacing is staggered). Other than that, it seems to work. I've attached what you want in a .zip. (don't worry about the weird colors--that's my doing) Split.zip
  6. The problem was that, before, one table held an entire screen; I just meant that I would need to move from table to table. That's obviously changed now. There could still be something wrong with my encoding script, but this seems correct at a glance. From a 7-screen set of map data, this is what got spit out:

       REM 69 unique cards
       card_table:
       data $0000, $01E1, $0079, $05A1, $05A9, $0679, $0601, $063E, $0609, $43A4
       data $42FC, $43AC, $05B9, $02F8, $0626, $00D6, $0076, $43B4, $43BC, $068E
       data $0089, $4682, $43A5, $08E1, $0EFB, $42FA, $02FC, $43BD, $060C, $44E5
       data $44EC, $03B2, $05D9, $0611, $0682, $43C5, $43E5, $43EC, $43CC, $08F1
       data $02FE, $0665, $4445, $444C, $0686, $02F4, $05E4, $4467, $446B, $05EC
       data $0539, $0604, $067C, $0301, $03A4, $03BA, $03C1, $0341, $0359, $0309
       data $0321, $02F9, $0091, $0461, $0419, $0351, $0399, $0319, $43B2
       data_table_0:
       data $FFF0, $0000, $0000, $0017, $0000, $1700, $000D
       data $0EF0, $1700, $2000, $170D, $0000
       data $0EF0, $0020, $000C, $0038, $3F00
       data $0EB0, $1821, $2100, $3940
       data $9EF0, $0102, $0000, $0032, $353A, $0D00
       data $FBF0, $000C, $0207, $0616, $1B33, $3333, $3300
       data $7FF0, $0200, $0000, $161B, $0000, $0000, $0000
       data $6FC0, $010D, $161B, $1313, $292D
       data $6FC0, $0201, $0B11, $0000, $0000
       data $EF70, $020D, $0000, $0B11, $2D28, $292D
       data $DFF0, $010E, $1419, $1F0B, $111C, $1C1C, $1C00
       data $DFF0, $0301, $0000, $0000, $0005, $003B, $4100
       data $E0B0, $040F, $0D00, $053D
       data $EEB0, $0500, $0018, $2121, $0535, $4200
       data $EEB0, $0210, $0200, $0000, $003C, $4300
       data $FAB0, $0102, $0006, $0627, $053D, $0000
       data $FBF0, $0200, $1300, $0000, $0A33, $3333, $3300
       data $E0F0, $010D, $0000, $0000, $2D00
       data $C370, $0001, $161B, $2829, $0000
       data $4760, $0516, $1B00, $0000
       data $EE90, $0606, $0616, $1B00, $2D16
       data $ECF0, $0700, $000A, $0000, $1415, $0A00
       data $8060, $0800, $0000
       data $8340, $0528, $292D
       data $B340, $0014, $1500, $0000
       data $3110, $0000, $2D0B
       data $0FF0, $1A1C, $1C1C, $1C1C, $1C1C
       data $0FF0, $0000, $0000, $0000, $0000
       data $0000
       data $0070, $161C, $1C00
       data $00F0, $161B, $0000
       data $01C0, $161B, $0000
       data $0380, $161B, $0000
       data $03C0, $0A00, $2829
       data $1CC0, $1419, $2200, $0000
       data $1C10, $0000, $0016
       data $0630, $161B, $161B
       data $0E70, $161B, $0016, $1B00
       data $1CE0, $161B, $0016, $1B00
       data $18F0, $0A00, $0A00, $1329
       data $0E30, $1313, $2900, $0000
       data $0E00, $0000, $0000
       data $0F00, $1C1C, $1C1C
       data $1F30, $0000, $0000, $0013, $2900
       data $0370, $1415, $1C1C, $1C00
       data $03F0, $0000, $0000, $0000
       data $0000
       data $0010, $1600
       data $0030, $161B
       data $0070, $161B, $0000
       data $07F0, $232A, $2E2E, $1B00, $2800
       data $0FD0, $1D24, $1B2F, $3400, $0000
       data $0F00, $1E25, $1130
       data $0FC0, $0026, $2B31, $3111
       data $07E0, $0000, $0000, $0B11
       data $0070, $000B, $1100
       data $0030, $000B
       data $0010, $0000
       data $0000
       data $0000
       data $1FF0, $091C, $1C1C, $1C1C, $361C, $1C00
       REM 506 bytes in table
       REM 61 columns in table
       data_table_1:
       data $3FF0, $0911, $0000, $0000, $000A, $0000
       data $7000, $0911, $1300
       data $70C0, $0B12, $0009, $1100
       data $61E0, $000A, $0911, $0028
       data $09A0, $130B, $1200
       data $09C0, $0000, $0B12
       data $01E0, $141F, $0B12
       data $69E0, $0911, $1300, $0000, $0A00
       data $6810, $0B12, $001C
       data $7030, $000B, $1200, $0000
       data $3FF0, $000A, $1C1C, $1C1C, $1C1C, $1C1C
       data $1FF0, $0000, $0000, $0000, $0000, $0900
       data $0030, $0911
       data $0070, $0911, $0000
       data $00E0, $0911, $1300
       data $01E0, $0911, $0000
       data $01C0, $0A00, $1300
       data $0070, $0014, $1900
       data $0030, $0000
       data $0000
       data $4040, $0913
       data $C1C0, $0911, $0B12, $0000
       data $C180, $0A00, $000A
       data $0000
       data $2000, $1300
       data $2080, $0000
       data $1000, $1300
       data $D000, $0B12, $0000
       data $E000, $000B, $1200
       data $7000, $000B, $1200
       data $38C0, $000B, $122C, $2800
       data $1CC0, $000B, $1200, $0000
       data $1E00, $141F, $0B12
       data $1E00, $0000, $0911
       data $0E00, $0911, $0000
       data $1F00, $0911, $002C, $2800
       data $1B00, $0B12, $0000
       data $1C00, $000B, $1200
       data $0E00, $000B, $1200
       data $0600, $000A
       data $0300, $0B12
       data $0300, $000A
       data $0100, $0000
       data $0000
       data $0000
       data $0000
       data $0880, $0A28
       data $0080, $0000
       data $0000
       data $0210, $2809
       data $02F0, $0014, $3709, $1100
       data $0CF0, $0B12, $0009, $1100
       data $0E60, $000B, $120A, $0000
       data $0710, $000B, $122C
       data $03F0, $000B, $120B, $1200
       data $01F0, $000A, $000B, $1200
       data $00F0, $0B12, $000B
       data $00F0, $000B, $1200
       data $0060, $000B
       data $0020, $0000
       data $0000
       data $0090, $0909
       data $01B0, $0911, $0911
       data $03F0, $0911, $0009, $1113
       data $0370, $0A00, $0B12, $0000
       data $00F0, $2800, $0B12
       data $00B0, $0000, $0B00
       data $0030, $3E44
       data $00B0, $2800, $0000
       data $0080, $0000
       data $0200, $0000
       data $0000
       data $0000
       data $0000
       data $0000
       data $0000
       data $0000
       data $0000
       data $0000
       REM 426 bytes in table
       REM 79 columns in table

     Now, that is not a full-size level and the level hasn't been given any detailing yet, so there will eventually be a lot more words in there. Still, it reduced the raw 3360-byte data down to 1818 bytes of compressed data, and now from there down to 1070 bytes. Not bad. In the end, most of the level data will be more like the density of those first 20 columns, to give you an idea of encoding quality. There are:

       6 columns with 6 data words
       5 columns with 5 data words
       6 columns with 4 data words
       2 columns with 3 data words
       0 columns with <3 data words

     Taking that as a barometer, this particular screen has 220 bytes in it. That is only a 20-byte savings over encoding the compressed data one byte per index. Not particularly great--about a 9% savings. It would reduce the 2400-byte cost of data per level from plain compression down to 2184 bytes with compression + encoding. Is that worth it? Can something better be done?

     EDIT: I made a quick calculation of what it would be like if I encoded by vertical redundancy, i.e., 1 byte for a card and 1 byte for the number of times that card repeats. Going by this encoding method (and using the first screen as a barometer), it would actually ADD 48 bytes per screen over plain compression, so obviously that is out of the question. There is a lot of tile redundancy, but not often in long, continuous sequences.
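
     To make that EDIT concrete, the vertical-redundancy idea would turn each run into a (card index, repeat count) byte pair, two pairs per word. A hypothetical column that is card $05 eleven times and then card $12 once would encode as:

       data $050B, $1201

     With mostly short runs, each run still costs two bytes, which is how it ends up larger than one index byte per tile.
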
  7. I need the encoded data first. I'll add the encoding functionality to IntyMapper either tonight or tomorrow. Figuring out how to encode the data should clear up any questions on the decoding end as well, so I should (hopefully) be able to build out those IntyBasic decoding functions pretty easily, and that hurdle will be crossed. It should be interesting to see how much space is actually saved with this encoding technique versus one more like DZ's. Realistically, I highly doubt that this compression will work well enough that I can split it up into checkpoints and have that be sufficient. More likely I'll have to have 2 or 3 tables per checkpoint per level per stage. How will I be able to tell I'm at the end of a table? Just a special value to indicate a move to the next table, I suppose?
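
     One low-overhead option (a sketch of the idea, not something from the thread): end every table with a sentinel word the counter format can never produce, and test for it after each READ. All names here are hypothetical IntyBasic:

       read_column:
       READ #w                              ' next counter word
       IF #w = $FFFF THEN GOTO table_done   ' sentinel: this table is spent
       GOSUB decode_column                  ' unpack one column using #w
       GOTO read_column
       table_done:
       ' RESTORE the next table's label here, then carry on reading

     $FFFF is only a placeholder; any bit pattern a real counter word can't take would do.
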
  8. But that essentially means there would be no encoding beyond the basic compression that I've already implemented with the unique word table. The only additional compression that comes here will be from saving space by copying words. If only there was a way to combine this "copy from left" encoding with the "repeat x times" type... But that's probably not possible with only one nybble.
  9. Shockingly, yes! Give me a while to try implementing this and I'll get back. I haven't decided yet whether to add checkpoints. It may be wise, since the current design of the game allows for only 2 hits before dying, and dying puts you back at the start of the level. Losing all your lives puts you at the beginning of a stage. I'm not sure if checkpoints would make that more fair or "too easy." I'll probably just set things up for checkpoints, anyway. I can always not use them if I decide it doesn't work so well. My question about the end of data sets was referring to the end of a column, not the end of a level. Since the number of words of new data varies, I thought it might be problematic, but I suppose that it's implicit: there will always be 12 tiles read in. It's just a matter of whether it's getting the value from a PEEK or from a READ.
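
     As a rough IntyBasic sketch of that per-column step (an illustration of the idea, not DZ's or intvnut's exact scheme; col, #flags, #off and #new are hypothetical names, and $200 is the BACKTAB base address):

       READ #flags                    ' one "copy from the left" flag per row
       FOR row = 0 TO 11
         #off = $200 + row * 20 + col ' BACKTAB address of this cell
         IF #flags AND 1 THEN POKE #off, PEEK(#off - 1)
         IF #flags AND 1 THEN GOTO same_card
         READ #new                    ' fresh value; in the real data this
         POKE #off, #new              ' would be an index into card_table
       same_card:
         #flags = #flags / 2          ' move on to the next row's flag
       NEXT

     Either way, exactly 12 cells get written, so the column length really is implicit.
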
  10. That would be awesome! Could you maybe add a way to break up a line in the code editor without it counting as a line break in the compiled code? For instance...

        ON level GOTO level1, level2, level3,_
                      level4, level5, level6, level7, level8,_
                      level9, level10, level11, level13

      The _ would combine the following line with the current line before compilation. I recall seeing this in another language, but I forget which. Anyway, it would be very handy and allow for long lines to be broken up into more readable chunks.
  11. Ohh, that's great, then! I thought I would have to fiddle around with branching logic to go from one table to the next, but apparently I can just use this to point at the right table and then just READ, huh? That's exciting! Does READ auto-increment the read pointer? Is there any way to decrement it again? If not, how will I know that I'm at the end of a set of data? I guess I could possibly use the remaining 4 bits in the counter word to say how many remaining reads there will be for that column?
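
      If the top nybble did hold the number of remaining reads, unpacking it is cheap. A sketch under that assumption (#w, count and #flags are hypothetical names; the word is laid out as one count nybble plus 12 flag bits):

        READ #w                  ' counter word: CCCC FFFF FFFF FFFF
        count = #w / 4096        ' top 4 bits: how many fresh words follow
        #flags = #w AND $0FFF    ' low 12 bits: one flag per row
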
  12. There isn't much documentation in the manual on RESTORE or what it does. What did you have in mind, intvnut?
  13. Also, what do I need to do to allow for more than 10 digits in the score? Apparently just adding them to the end of the 0-9 definitions isn't enough. I've tried chopping out kernel11 of DPC+ (which I assume draws the 10th sprite) and it still seems to work, right up until I put in the extra two score values. Although it doesn't put me over the byte limit, I just get a black screen when checking out the ROM. EDIT: And yes, I have followed these instructions, but it doesn't seem to work. EDIT AGAIN: Well, it DOES work if I copy exactly what that thread specifies and use the full 16 definitions. It doesn't seem to like only 12, though. Only 10 or 16 seem to work, even if I adjust the ORGs appropriately. Any idea what's up there?
  14. I'm not using the standard kernel. Also, can you just start chopping stuff out of a kernel and have it still work?
  15. Yes, there is PEEK. There is also the ability to add inline ASM, which I'm not opposed to. I just don't know how to write it. Could you give me an example of the table of pointers to the table of data or the PEEK method or whichever you feel would be best?
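
      For reference, a minimal sketch of table selection that needs no pointer arithmetic at all, using only ON...GOTO and RESTORE (labels and the tbl variable are made up; the pointer-table-plus-PEEK approach intvnut mentions would swap the GOTO targets for an address lookup):

        ' pick the data table for the current checkpoint, then READ from it
        ON tbl GOTO use_table_0, use_table_1, use_table_2
        use_table_0:
          RESTORE data_table_0
          GOTO table_ready
        use_table_1:
          RESTORE data_table_1
          GOTO table_ready
        use_table_2:
          RESTORE data_table_2
        table_ready:
        READ #w                  ' now reads from the chosen table

      (Worth checking whether ON is zero- or one-based in the IntyBasic version in use.)
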
  16. Eegh... I don't suppose another 12 bytes could be shaved off somewhere, Omega? There are 2 additional score definitions that I need to make the HUD work, but adding them in puts me just a few bytes short on bank 1. Cutting goto __Bank_2 bank2 (the only line I have in bank 1) saves 18 bytes. Could that be simplified using direct ASM? __Bank_2 is the first definition of bank2, so I just need the game to make a bank switch to bank 2 after it has loaded the kernel. The default code generated from that line is:

        sta temp7
        lda #>(.__Bank_2-1)
        pha
        lda #<(.__Bank_2-1)
        pha
        lda temp7
        pha
        txa
        pha
        ldx #2
        jmp BS_jsr
  17. What is J in this example? What about how I'm going to handle multiple tables of this? I have a level that's at least 10 screens long, which means I'll probably need a good 3-5 tables to hold all that data. How do I naturally transition from level to level, table to table, column to column? Just a lot of ON...GOTO branches for levels and tables and then handle columns with a counter that keeps track of how far into the data you are?
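
      For illustration, a minimal sketch of the column-counter half of that, assuming each table's column count is known (e.g. from the REM lines IntyMapper prints); cols_left, col and the routine names are hypothetical:

        ' one step of the outer loop
        IF cols_left = 0 THEN GOSUB select_next_table   ' table is exhausted
        GOSUB decode_column                             ' READs one column
        cols_left = cols_left - 1
        col = col + 1

      select_next_table would bump a table index, RESTORE the matching label (via ON...GOTO or similar), and reload cols_left.
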
  18. Which would be the most efficient way? From what I've heard, IntyBasic has optimized division by 256, 2, 4, 8, and 16.
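
      Those are all powers of two, so the optimized cases come down to shifts. For example, splitting a packed byte into nybbles (a generic sketch, nothing project-specific; b, hi and lo are hypothetical names):

        hi = b / 16         ' top nybble, cheap per the optimization above
        lo = b AND 15       ' bottom nybble, a single AND
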
  19. Isn't that the same as what I illustrated? Except instead of having full words listed, it would list single-byte indexes to a table of BACKTAB words, which would allow the data to be compressed more efficiently. Then, instead of wasting a full word for every zero counter value, you'd only waste a single byte. The least efficient use of this algorithm would, of course, be if every tile in a column was unique, in which case you would be adding 3 words (6 bytes) per column more than if you had just listed the raw data. The best compression scenario would be if you'd have a single tile for the entire column, in which case you would save 7 bytes for that column (11 saved bytes of data - 3 wasted bytes of data - 1 byte of wasted counter data). The challenge would be calculating the bell curve of savings, so to speak. If this algorithm turned from costing to saving at exactly 6 unique tiles, the entire algorithm would only on average save 1 byte per column.
  20. Ah, I see. So then after the counter word there would be all the unique words, I take it? Such as: &011001010100####, $faef, $3183, $3482, $32## with the # being wasted data? And from there, based on an existing counter variable to know where to read from in the data table, the following column would start at the next word, which would again be another counting word?
  21. I don't follow. Could you illustrate what you mean?
  22. A problem I foresee with DZ's method is what happens when the number of unique cards doesn't line up evenly with the encoding. For instance, if there were 7 cards of one type and 5 of another, those remaining 2 card index positions (the second word) would be wasted, or at least it would (in my mind) be complicated to keep track of remaining values between one read and the next. Or, on the flip side, what if there were 7 unique tiles in a 12-row column? That means it would need almost 4 full sets in 2 words + the counter word, but not quite. What would happen then?

      To illustrate the point, let's say we have a perfectly-formed column that has exactly 4 unique tiles. With the counter-word-word method, it might look like:

        $3541, $0033, $235f

      with each nybble of the first word being a counter and each byte of the following two words being an index. For only 2 unique tiles, you'd have:

        $75##, $0033, $####

      with the # nybbles all being wasted space. For 7 unique tiles, you'd have:

        $2131, $0033, $235f, $212#, $0509, $e1##

      again with the # marking wasted space.

      intvnut, if I understand correctly, a set bit would mean "look for the value in the next column to the left," and if that bit was also set, it would look left again, and again, and again until it came to an unset bit, where it would read the actual value? That sounds extremely complicated to decode, but maybe I just don't understand it well enough. Plus, as you point out, it leaves 12 bits left over in that word, which would be essentially wasted space unless it encoded the full row (12 bits total), though that would also leave an excess of 4 unused bits. And then, in the case of a bit being set to 1, what would the byte in that indexed location be used for?

      I'm assuming this is all going to be using the basic idea of compression by reducing unique tile values to a table of words and then reading in a table of single-byte indexes to refer to the BACKTAB word table (it seems a reasonable basic compression method and shouldn't pose a problem). If that's the case, then I need to encode a string of single-byte indexes rather than full BACKTAB words, including, say, making a word of one index in the high byte and one index in the low byte. But then what happens when a bit in your encoding algorithm is 0? That low (or high) byte would still have to exist to maintain the structure of the data table--one can't be randomly dropping a byte or word here and there or you'll lose track of whether a word represents a counter or data. It would need to follow a structure like counter-word-word-counter-word-word, wasting space when the counter is effectively 0.
  23. This is true. Is there any way to efficiently combine encoding by row with reading by column, or is that kind of a paradox?
  24. Does this one also have the issue of not working as a flip between split and normal scoring? For the time being, having the ability to flip between the normal score and the split score is more important to me than having the score lined up how I might prefer, but thanks for looking into it!
  25. Works perfectly, Omega! No tweaks required! Thanks so much! What's great is that it offers a number of options. One could flicker the score and essentially display a 12-digit HUD if one wanted (probably not how I'll use it, but still!) If I wanted to tweak the locations of the split variables such that, rather than being flush with the left and right of the screen, the digits were instead flush with the edge of the playfield space (i.e. left moves right 16 pixels and the right moves left 16 pixels), how would I make that adjustment? If it's really difficult to do then don't worry about it.