Everything posted by newcoleco
-
Since it's very technical and new to me, I trust your judgment on the matter. My question is: is it possible to let the data decompress into memory all at once before doing the bit-shifting correction to get back valid data? If it's possible, then we can focus on analyzing your bit-shifted data through my tools and see its effect on the results. What I can say is that splitting bytes into distinct groups helps the compression ratio. Many modern data compression techniques use a "reset" code to change the encoding at any point (usually split into blocks) when the data seems to have a different need, to switch from one kind to another. As you know, text data doesn't look like graphics, graphics don't look like code, etc., and sometimes a file can contain multiple kinds of data in it; our graphics, for example, are usually split into tables of colors, patterns and tiles.
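To illustrate the point about splitting kinds of data, here is a quick sketch (not the author's tools; just Python's zlib on made-up pattern/color-style data) comparing one mixed stream against the same bytes compressed as two separate streams:

```python
import zlib

# Toy "screen" data: interleaved pairs of (pattern, color) bytes.
# Hypothetical example data, not taken from any real game.
pattern = bytes((i * 7) & 0xFF for i in range(1024))
color = bytes([0x1F, 0x1F, 0x1F, 0xF4] * 256)
interleaved = bytes(b for pair in zip(pattern, color) for b in pair)

# Compress the mixed stream as one block...
mixed_size = len(zlib.compress(interleaved, 9))

# ...versus compressing each kind of data as its own block.
split_size = len(zlib.compress(pattern, 9)) + len(zlib.compress(color, 9))

print(mixed_size, split_size)

# Round-trip check: splitting loses nothing.
assert zlib.decompress(zlib.compress(pattern, 9)) == pattern
assert zlib.decompress(zlib.compress(color, 9)) == color
```

Which variant wins depends entirely on the data; the point is only that grouping similar bytes together gives the compressor more regular input to work with.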
-
Your concerns are surely shared by homebrewers like me dealing with limited ROM and RAM space. It's the main reason why I've spent so much time on this, getting results without sacrificing memory space. As with ColecoVision homebrew games, data compression time is not a concern because it's not the part gamers experience while playing our games. Likewise, in ColecoVision homebrew games the majority of the graphics are usually not rendered "on the fly", which would risk the fluidity of the game or even glitches. Graphics are initialized first, then shown on screen with minor changes, usually tile and sprite movement manipulations, based on what is supposed to be going on on screen. I'm glad I've burned my brain for the greater good. Ask your questions. Since I'm actually working on data compression, it's a hot topic for me these days. And if you want a collaboration, just provide me lots of raw data samples corresponding to Intellivision homebrew projects (real or fictional) to be compressed. I can take a look, analyze them, and come up with an LZ77 variant algorithm based on my work that should suit your needs, taking your expertise on Intellivision into consideration. If it needs to be a special format instead of the ones I've already come up with, I can easily make the adapted compression tool in C and the decompression routine in Z80 asm. Make it your own solution afterward: write the 6502 ASM routine, tweak it, adapt it, integrate it into your toolbox. And if you're having trouble coding, I'm sure enthusiasts will be happy to help. Have a nice weekend!
-
Thanks for your comment. I'm glad you find this useful. This represents months of collecting data and working on my own tools over a large number of fullscreen graphics, which I believe to be the most memory-consuming part of our homebrew projects. I give my work freely, my tools, my knowledge, even my games. There is a lot of technical information in various PDFs and forum posts that I've written, and this is just one of them. During that time, I've learned a lot about what works and what doesn't, making my own formats, experimenting with ideas and testing them. You can read the following text as more rambling about what I've learned. LZ77 variants dominate in popularity; GZIP is used in our daily internet communications. It's a very efficient lossless data compression that doesn't need extra memory to maintain a dictionary or any kind of dynamic data structure to help optimize the compression ratio. The main reasons why some tools give better compression than others are the algorithm used and the way the values are encoded in the compressed data format. Exomizer uses tables in order to minimize the number of bits to be encoded for each offset value; if the relative offset values are more likely to be in the same range, regardless of whether they're small or large values, the number of bits to encode them is simply the index value in a table, cutting bytes and bytes of data in a smart way. Because Exomizer uses extra memory to compute its tables, I felt uneasy using it in my ColecoVision projects, simply because there is only 1K of RAM in the game console. So I used a method with offsets encoded at various bit sizes, and steps of 4 bits or so gave me the best compression ratio results, avoiding encoding lots and lots of 0 bits for the offset values of nearby matching sequences.
Of course, my DAN1, DAN2 and DAN3 formats can't reach the same offset optimization as Exomizer, but since Exomizer needs space to compute the table required for that optimization, and since Exomizer does not care about matches of a single byte, I've seen my formats get better results than Exomizer a few times; it's rare, though, and it depends on the data. I see the compression of a single-byte match somewhat like a local fixed Huffman encoding, and Huffman encoding is used alongside LZ compression in modern data archives, like ZIP, to improve the compression ratio. During my search online for lossless data compression used in 8-bit homebrew projects, I was surprised to see new tools using arithmetic coding, since it works on fractional ranges rather than whole-bit codes. So I've added the info in the APPENDIX section, but I didn't have much time to experiment, and it couldn't be used for our projects anyway, requiring just too much RAM to compute the necessary tables. I studied computer science at university, with computer imaging in mind. I saw the Lenna picture over and over in various papers; her picture became a reference, a standard test image used to compare the results of algorithms such as edge detection, textures, and also data compression, including lossy compression using DCT and wavelets. I firmly believe that her picture, her face, became a meme among computer scientists before memes were invented. PS: I've added a tiny section about SPACE VS SPEED just before the APPENDIX. It doesn't show all formats, but it gives an idea of what to expect.
-
DAN3 Lossless Data Compression (tool and sample)
newcoleco replied to newcoleco's topic in ColecoVision Programming
A fan of the Silicon Valley TV series? I still have to watch it; my library has it available for free, but I never find the time to do it. You can compare GZIP and DAN3 as being basically the same, since they are both variants of LZ77/LZSS. If the TV series gives a score to GZIP, which is the format we use in our internet communications, then you basically have the Weissman score of DAN3... if only I were not the only one working on this and many tools supported the DAN3 format for various needs. Since the compression time is meaningless compared to the resulting size used in our homebrew projects (decompression routine + data), I will not even try to give a Weissman score. My compression tool here is written to optimize the compression ratio regardless of the time it takes to do so. Once compressed, we put the compressed data and the decompression routine together in our homebrew projects; that is the part that affects the users' experience, saving loading time on tapes and disks, and offering more content (graphics, levels, text) inside the limited space of the memory chips in our cartridges. As for the decompression time, it's about the same as the other LZ77/LZSS variants, sometimes faster, sometimes slower, which makes the compression ratio really the most important part, and that's what I've focused on. DAN3's compression ratio tends to be closer to PuCrunch and Exomizer than to Pletter, ZX7, ApLib, MegaLZ and the others, but it uses a format close to the latter group, and it doesn't need extra memory space like Exomizer does to set up a table of values in memory. I believe the difference is mostly because DAN3 tries to compress even a single byte rather than just sequences of 2 or more bytes. At the extreme, DAN3 will give worse compression results, as MegaLZ does, on degenerate data like a text file with only the letter Q repeated thousands of times.
But since I'm not concerned about degenerate data and expect DAN3 to be used to compress detailed content (hi-res graphics and elaborate levels, for example), I'm quite confident that DAN3 fits our needs on average, even if it's not always the best solution. -
DAN3 Lossless Data Compression
Programmed by Daniel Bienvenu, 2017.

What is DAN3? DAN3 is an LZ77/LZSS variant using unary encoding, k=1 Exp-Golomb and relative offsets of various sizes, without the need for extra memory prior to decompressing data. Details below. Ask questions, post comments, share experiences.

Downloads
DAN3 build 20180126 (de)compression tool final? (BIN + SRC) : dan3_20180126.zip
DAN3 build 20180123 (de)compression tool *BUG* (BIN + SRC) : dan3_20180123.zip
DAN3 build 20180118 (de)compression tool *BUG* (BIN + SRC) : dan3_20180118.zip (previous version)
DAN3 compression tool *BUG* only experimental beta (BIN) : dan3beta.zip
ColecoVision Slide Show Sample (SRC, BIN) : SLIDESHOWDAN3.zip

Technical Information
DAN3 is an LZ77/LZSS variant developed from DAN1 and DAN2, which explains their similarities. Like DAN2, the format starts by defining the size in bits of the large offset values used to identify a match, using the following unary code (a sequence of bits):
0 : 9 bits long <- 512 bytes (characters on a screen)
10 : 10 bits long <- 1K (character set, some bitmap screens)
110 : 11 bits long <- 2K (most bitmap screens)
1110 : 12 bits long <- 4K (dithered bitmap screens)
11110 : 13 bits long <- 8K (my decompression routine limit as-is)
111110 : 14 bits long <- 16K
1111110 : 15 bits long <- 32K
11111110 : 16 bits long <- 64K (only good for 32-bit/64-bit PC files at this point)
... (no limit in theory)
Then, like in DAN2, the first byte of the uncompressed data is stored as-is. Afterward, things are a little different, which makes DAN3 different from the others, offering better compression on average but not always. For each match, the size and the relative offset values are encoded. While DAN1 and DAN2 use Elias Gamma to encode the size value, DAN3 uses a k=1 Exp-Golomb encoding instead, which helps to optimize a little, both in terms of space and time to decode.
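The unary header table above can be sketched in a few lines of Python (the function name is mine, not part of the format):

```python
def offset_size_header(bits):
    """Unary header picking the bit length of large offsets, per the table above:
    9 bits -> '0', 10 bits -> '10', 11 bits -> '110', and so on (no upper limit)."""
    assert bits >= 9
    return "1" * (bits - 9) + "0"

print(offset_size_header(12))   # prints 1110, the header for 4K-range offsets
```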
As for the relative offset values, DAN3 uses a completely different set of possible offset bit-sizes: {5, 8, max} instead of {1, 4, 8, max} bits. As for the special case of a nearby single byte being the same as the current byte to encode, DAN3 limits itself to the 3 nearest bytes instead of the 18 nearest, which limits its potential to find a match and save space; but since the big impact comes from sequences of more than 2 bytes, this change has little cost while still offering better compression than Pletter and the others that do not support single-byte sequences, acting, if you like, as a local fixed Huffman encoding. Here's a side-by-side comparison of Elias Gamma and k=1 Exp-Golomb to show what I mean by the possible size and speed gain in DAN3, since reading fewer bits means taking less time to decode.

Elias Gamma (DAN1 and DAN2) versus k=1 Exp-Golomb (DAN3)
1 : 10 = size 1
010 : 11 = size 2
011 : 0100 = size 3
00100 : 0101 = size 4
00101 : 0110 = size 5
00110 : 0111 = size 6
00111 : 001000 = size 7
0001000 : 001001 = size 8
0001001 : 001010 = size 9
0001010 : 001011 = size 10
0001011 : 001100 = size 11
0001100 : 001101 = size 12
0001101 : 001110 = size 13
0001110 : 001111 = size 14
0001111 : 00010000 = size 15
000010000 : 00010001 = size 16
...
000000011111110 : 00000011111111 = size 254

In DAN3, the support of Exp-Golomb stops at 00000011111111 = size 254. There are 3 reasons for that:
It allows an optimization of decoding into a single byte instead of having to carry the bits into a 16-bit register pair.
In Z80 opcodes, that simplifies the decoding routine, making it faster and smaller.
It makes the special markers for END OF DATA and RLE about a byte smaller and faster to read.
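The two size codes compared above can be reproduced with a short Python sketch (my own helper names; the codes match the table):

```python
def elias_gamma(n):
    """Elias Gamma code for a size n >= 1 (used by DAN1 and DAN2):
    binary form of n preceded by one leading zero per extra bit."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def exp_golomb_k1(n):
    """k=1 Exp-Golomb code for a size n >= 1 (used by DAN3):
    encode x = n - 1 as an Elias-Gamma-style prefix for x >> 1,
    followed by the low bit of x."""
    x = n - 1
    b = bin((x >> 1) + 1)[2:]
    return "0" * (len(b) - 1) + b + str(x & 1)

for n in (1, 2, 3, 7, 15, 254):
    print(n, elias_gamma(n), exp_golomb_k1(n))
```

For sizes 3 through 14 the Exp-Golomb code is one bit shorter or equal, which is where the space and decode-time gain mentioned above comes from.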
Very large sequences of nothingness will sadly need more than one match of size 254 each, but since we're talking about compressing our elaborate graphics, which are mostly not that empty, the limitation should barely be an issue and should satisfy our needs plenty.

Special Markers
00000001 + byte : RLE - copy raw the next (byte value + 1) bytes
00000000 : END OF DATA

Relative Offset
For a size of 1 byte, the relative offset is either 0 (the byte just before), 1, or 2, encoded respectively as 0, 10, and 11. For sizes of 2 or more, the offset is encoded as follows:
10 + 5 bits = 5-bit offset
0 + byte = 8-bit offset = byte + 32
11 + N bits + byte = large offset = (N bits and byte together as a 9-or-more-bit value) + 288

Listing: Decompression to VRAM Routine for Z80, using BE and BF ports (ColecoVision)
Written in SDCC style, to be added to your compression toolbox. The DAN3 decompression routine is only 14 bytes bigger than the DAN1 one.

; dan3.s
; DAN3 Lossless Data Compression Format by Daniel Bienvenu
; DAN3 Decompression to VRAM
; 6 December, 2017
; Size: 201 bytes
; HL = SOURCE
; DE = DESTINATION
; global from this code
;================
.globl _dan3 ; void dan3 (void *data, unsigned vram_offset);
.globl dan3  ; HL = ADDR. TO COMPRESSED DATA , DE = DESTINATION IN VRAM
; Wrapper to get values from parameters (register pairs)
_dan3:
    pop bc
    pop de
    pop hl
    push hl
    push de
    push bc
    ; HL = SOURCE
    ; DE = DESTINATION
dan3:
    ; Set Write in VRAM at DE
    ld c,#0xbf
    out (c),e
    set 6,d
    out (c),d
    res 6,d
    ; Set A for reading a bit routine
    ld a,#0x80
    ; Important to save the IX register
    push ix
    ld ix, #get_bit_e+3
dan3_offsetsize_loop:
    dec ix
    dec ix
    dec ix
    call get_bit ; check next bit
    jr c, dan3_offsetsize_loop
; Copy literal byte
dan3_copy_byte:
    ld b,#0x01
dan3_literal2main:
    ld c,#0xbe
dan3_literals_loop:
    outi
    inc de
    jr nz, dan3_literals_loop
; Main loop
dan3_main_loop:
    call get_bit ; check next bit
    jr c,dan3_copy_byte
    ; Decode Exp-Golomb + Special Marker
    push de
    ld de, #0x0001
    ld b,e
dan3_expgolomb_0:
    inc b
    call get_bit ; check next bit
    jr c, dan3_expgolomb_value
    bit 3,b
    jr z, dan3_expgolomb_0
    ; Special Marker
    pop de
    call get_bit ; check next bit
    jr c, dan3_literals
    pop ix
    ret ; EXIT
dan3_literals:
    ld b, (hl) ; load counter value (8 bits)
    inc hl
    inc b
    jr dan3_literal2main
dan3_expgolomb_value:
    dec b
dan3_expgolomb_value_loop:
    call get_bit_e ; check next bit -> DE
    djnz dan3_expgolomb_value_loop
    dec e
    push de
    pop bc
    jr z, dan3_offset1
    ; D = 0, E = ??, BC = LENGTH
    ; Decode Offset value
    ld e,d ; e = 0
    call get_bit ; check next bit
    jr nc, dan3_offset3
    call get_bit
    jr nc, dan3_offset2
    call get_highbits_e ; read some bits -> E
    inc e
    ld d,e ; D = E + 1
dan3_offset3:
    ld e, (hl) ; load offset value (8 bits)
    inc hl
    ex af, af'
    ld a,e
    add a,#0x20 ; Skip the short offsets covered by 5 bits ones
    ld e,a
    jr nc, dan3_offset_nocarry
    inc d
dan3_offset_nocarry:
    ex af, af'
    jr dan3_copy_from_offset
dan3_offset2:
    call get_5bits_e ; read 5 bits -> E
    jr dan3_copy_from_offset
dan3_offset1:
    call get_bit ; check next bit
    jr nc, dan3_copy_from_offset
    call get_bit_e
    inc e
; Copy previously seen bytes
dan3_copy_from_offset:
    ex (sp), hl ; store source, restore destination
    push hl ; store destination
    scf
    sbc hl, de ; HL = source = destination - offset - 1
    pop de ; DE = destination
    ; BC = count
    ; COPY BYTES
    ex af,af'
    set 6,d
dan3_copybytes_loop:
    push bc
    ld c,#0xbf
    out (c),l
    nop
    out (c),h
    inc hl
    nop
    nop
    in a,(#0xbe)
    nop
    nop
    nop
    out (c),e
    nop
    out (c),d
    inc de
    nop
    nop
    out (#0xbe),a
    pop bc
    dec bc
    ld a,b
    or c
    jr nz, dan3_copybytes_loop
    res 6,d
    ex af,af'
    pop hl ; restore source address (compressed data)
    jp dan3_main_loop
get_highbits_e:
    jp (ix)
; COVER 16K
;    call get_bit_e ; get next bit -> E
; COVER 8K
get_5bits_e:
    call get_bit_e ; get next bit -> E
    call get_bit_e ; get next bit -> E
    call get_bit_e ; get next bit -> E
    call get_bit_e ; get next bit -> E
get_bit_e:
    call get_bit ; get next bit
    rl e ; push bit into E
    ret
; get a bit
get_bit:
    add a,a
    ret nz
    ld a,(hl)
    inc hl
    rla
    ret

Comparison with DAN1
The DAN3 to-VRAM decompression routine is only 14 bytes bigger than the one for DAN1 to VRAM. As for the compression ratio, it really depends on the data. For example, here's a table showing the sizes obtained with DAN1 and DAN3 for each picture in the SlideShow sample.
awb1p: DAN1 3677+2298, DAN3 3689+2328
f1spp: DAN1 2366+1679, DAN3 2349+1664
h6exp: DAN1 3412+2313, DAN3 3398+2297
mgfap: DAN1 3554+1935, DAN3 3551+1928
sotbp: DAN1 3394+1956, DAN3 3381+1921

Updates
* Dec 5, 2017 - Added offset encoding details.
* Dec 6, 2017 - Bug-fixed and optimized ASM decompression routine. Added comparison with DAN1 for the SlideShow sample.
* Jan 18, 2018 - Updated (de)compression tool.
* Jan 23, 2018 - Fixed fast compression method "-f" to be closer to perfect compression optimization.
* Jan 26, 2018 - Fixed RLE compression; now provides the expected results for hard-to-compress data.
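The Relative Offset rules listed earlier can be sketched in Python. This is my reading of the format description, with helper names of my own invention; treat it as an illustration, not a reference encoder:

```python
def encode_offset_size1(off):
    """Offset for a single-byte match: only the 3 nearest bytes are allowed,
    encoded as 0, 10, 11 per the spec above."""
    return {0: "0", 1: "10", 2: "11"}[off]

def encode_offset(off, large_bits=9):
    """Offset for a match of 2+ bytes, per the rules above.
    large_bits is the large-offset size picked by the unary header (9..16)."""
    if off < 32:                  # 10 + 5 bits
        return "10" + format(off, "05b")
    if off < 32 + 256:            # 0 + byte, value stored minus 32
        return "0" + format(off - 32, "08b")
    v = off - 288                 # 11 + high bits + low byte, minus 288
    high = format(v >> 8, "0%db" % (large_bits - 8))
    return "11" + high + format(v & 0xFF, "08b")

print(encode_offset(5))     # prints 1000101 (5-bit form)
print(encode_offset(40))    # prints 000001000 (8-bit form, 40 - 32 = 8)
```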
-
Updated this post with DAN4 data compression test results. DAN4 is an effort to make a slightly faster DAN3 with some optimizations here and there. Backstory: At the ADAMCon banquet in 2017, Mister Luc M., aka pixelboy, asked if DAN3 is fast enough to decompress data "on the fly". Knowing that RLE is the only fast data compression algorithm that I know of, with maybe DAN0 as a close second in my opinion, I couldn't reply, since I had no idea what amount and kind of data we were talking about. Mister Dale W. replied essentially this, with a mention that DAN data compression should be fast enough for the project idea Luc was talking about. The banquet ended, and this talk left me thinking about the eternal dilemma of SIZE versus SPEED. I searched for solutions and juggled ideas of my own, and it's in October and November of 2017 that I started to code a variant of DAN3 with the idea of reading bytes instead of bits for large numbers, at the cost of a bigger decompression routine, to allow faster decompression. Also, to fight the loss of compression ratio, the idea of using fewer bits for hard-to-compress data allowed this newly made DAN4 to get results good enough to even beat DAN3 from time to time. I will post the DAN3 tools first, before the end of 2017, since it's on average the most useful version for our projects in general. DAN4 is more specific, for cases where time matters.
-
Coleco strong-arming homebrew publishers and fan sites
newcoleco replied to TPR's topic in ColecoVision / Adam
I don't know how to say it (keep in mind English is not my cup of tea) or where to say it, but... I didn't want to react about the Coleco Expo 2017 because I wasn't there, and I knew too well that I would have over-reacted and felt bad about it. So I've looked at what others who actually went there said about the Coleco Expo. I took notes, and here's my opinion: Coleco Expo? The idea itself is exciting; a gaming expo based on Coleco, I would love to see that and still want to. Let's be honest, the presence of a ColecoVision game and a flashback console does not make it a Coleco Expo. The number of vendors and visitors was not as expected for a gaming expo, especially with the amount of adverts and months of hype. A lesson from the past: hype and exaggeration can be very damaging (e.g. E.T. for the Atari 2600). Also, alienating the homebrew community that kept the scene alive for the last 20 years, causing trouble and confusion (stop pretending to be related to Coleco Industries), and lying to do "damage control" and to attract attendees (exaggerating the number of visitors and the success), that's not what we call a job well done. I see some time and effort put into the project to make money... but what about the passion, the fans? When visitors and their friends mock the name and say it was a waste of time, it's not a good sign. Perhaps someone else would do it right, if another expo is made possible. I'm not Coleco Industries, I'm just a guy who happens to care enough about what that rainbow logo represents to say: YOU ARE FIRED! -
SPACE VERSUS SPEED
Especially for 8-bit and 16-bit homebrew projects, optimization is a constant dilemma between space and speed. The following table shows the results of DAN1, DAN3, PLETTER and ZX7, all running the same SlideShow demo, all decompressing directly to VRAM (which is very slow), with the size in bytes of each decompression routine, the size in bytes of the ROM file containing the decompression routine with the pictures, and the time in seconds to decompress the 1st picture.
DAN1: decompression routine 211 bytes, ROM file 27094 bytes, shows 1st picture in 1.114 second
DAN3: decompression routine 201 bytes, ROM file 27010 bytes, shows 1st picture in 1.072 second
PLETTER: decompression routine 209 bytes, ROM file 27740 bytes, shows 1st picture in 0.877 second
ZX7: decompression routine 132 bytes, ROM file 27665 bytes, shows 1st picture in 0.876 second

APPENDIX - Questions and Answers

QUESTION : WHY DO PLETTER AND ZX7 OFFER SIMILAR COMPRESSION RESULTS IN THIS BENCHMARK?
ANSWER : ZX7 and PLETTER both use two possible sizes to encode offset (relative index) values in their LZ77 encoding. The small size to encode offset values is 7 bits ( 128 ) long for both compressors, while the big size is fixed at 11 bits ( 2048 ) long for ZX7 and variable for PLETTER, from 9 bits ( 512 ) up to 13 bits ( 8192 ) long. After a first pass scanning the data, PLETTER decides the ideal size to encode the big offset values. To get similar results, both PLETTER and ZX7 would have to use similar sizes to encode offsets, which is 11 bits long here. And it happens that bitmap pictures made of 6K-byte data tables ( PATTERN table for ZX Spectrum screens, PATTERN and COLOR tables for Graphic Mode 2 ) tend to get better results with big offsets encoded as 11-bit values.
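The two-tier offset schemes discussed in this Q&A can be compared with a rough cost function. This is a simplification of mine (it assumes a single flag bit selects the small or big encoding, which is not exactly how either tool lays out its bitstream), but it shows why the choice of small/big widths drives the results:

```python
def offset_bits(off, small=7, big=11):
    """Rough bit cost of one offset under a two-tier scheme:
    one flag bit, then either the small or the big fixed-width field."""
    return 1 + (small if off < (1 << small) else big)

# ZX7 fixes big=11; Pletter picks big from 9..13 after a first scan of the data.
print(offset_bits(100))    # prints 8  (fits the 7-bit small encoding)
print(offset_bits(2000))   # prints 12 (needs the big encoding)
```

When most offsets in a file land in the same tier for both tools, their outputs end up nearly the same size, which matches the benchmark observation above.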
In summary, if the data to compress needs 7-bit and/or 11-bit offsets to get a good compression result, PLETTER and ZX7 should give similar results; otherwise PLETTER provides better compression with its flexibility of various sizes ( from 9 to 13 bits long ) to encode offsets.
CONCLUSION : PLETTER is similar to ZX7 but provides better compression overall.

QUESTION : Why is DAN3 better than DAN1 and DAN2 compression in this benchmark?
ANSWER : In this benchmark, the bitmap pictures may get better compression by using 8 bits ( 256 ) instead of only 7 bits ( 128 ) for the small offset values. Why? Because of the way the graphics are encoded. If you look at ZX Spectrum .SCR bitmap pictures, the bytes of the PATTERN table are interlaced in such a way that bytes 256 apart in the data table are one above the other on screen, making them more likely to be the same. And Graphic Mode II bitmap pictures are in blocks of 256x8 pixels on screen, which makes offsets of 256 bytes in the data tables more likely to find similar bytes than offsets of 128 bytes. Because of this, DAN1, DAN2 and DAN3 all have 8-bit offsets. However, DAN1 and DAN2 do worse than DAN3 at compressing ZX Spectrum pictures, simply because their smaller offset bit-size is only 4 bits ( 16 ) while it's 5 bits ( 32 ) for DAN3, and an offset of 32 is very important: each line of 256 pixels on screen is 32 bytes long, and the ZX Spectrum COLOR table is made of rows of 32 bytes. These differences explain the difference in the results. Also, it's 32 x 24 characters on screen for our beloved 8-bit systems (mostly), which again makes 32 a relevant offset value.
CONCLUSION : Overall, DAN3 uses a better set of possible offset bit-sizes than DAN1 and DAN2 for graphics like the title screens we need in our projects. The 4 bits instead of 5 bits to encode smaller offsets also explain why DAN1 and DAN2 do worse than DAN3 with ZX Spectrum bitmap screens, because extra bits are used to identify which small size each offset is encoded in within the compressed data.

QUESTION : Shouldn't a flexible set of offset bit-sizes be a better solution than a format with fixed sizes?
ANSWER : Yes, and that's exactly the reason why Exomizer outperforms all the other compressors. But to achieve this, a table of variable bit-size lengths to encode and decode offsets is needed. This table is encoded very efficiently into the first 52 bytes of the compressed data. The table needs to be decoded into RAM prior to decompression. The space needed in RAM for this offsets table can be a deal-breaker for some vintage systems, for some projects without enough extra memory space available.
CONCLUSION : A flexible way to encode offsets, like the one used in Exomizer, does perform better than fixed sizes. To achieve this, extra memory space is used for a table of the possible offset sizes, which can be significant enough on vintage 8-bit computers and consoles like the ColecoVision to be a deal-breaker.

QUESTION : Is Exomizer 2.0 the best data compression for vintage 8-bit systems?
ANSWER : It depends what you mean by "best" data compression. For example, Subsizer 0.6 is a data compression tool for the Commodore 64 that gives slightly better results (a few bytes' difference) than Exomizer 2.0. In my tests, Subsizer 0.6 crashes while trying to compress some files, including 2 of the Graphic Mode II pictures from my benchmark. Another tool from the Commodore 64 scene is named ALZ64 and uses arithmetic coding instead of Huffman coding to get even better compression at the cost of time and memory space, but it's a packer for Commodore 64 programs, not a data compression tool for raw files like picture, text, and audio files.
CONCLUSION : There are many data compression tools, and some do compress more than Exomizer 2.0.

*UPDATE : December 8, 2017 - Added section SPACE VERSUS SPEED
-
From my presentation about my recent work on Coleco graphics tools, given at the Coleco ADAM users' annual convention, ADAMCon 29, July 20-23, 2017, in Guelph, Ontario. The following text is about the data compression obtained by using popular LZ variants for 8-bit systems, including my own named Dan, applied to several bitmap pictures for 8-bit systems. These pictures include the formats Coleco .PC, MSX .SC2, and ZX Spectrum .SCR. The pictures are limited to PATTERN and COLOR tables only (no sprites allowed). The tables below show the AVERAGE for each category. Please note: Exomizer needs extra RAM prior to decompressing data, and RLE (Run Length Encoding) is not an LZ variant.

3822.74 <- Exomizer
3930.13 <- Dan3
3931.65 <- Dan4
3939.92 <- Dan1
3940.85 <- Dan2
3944.20 <- PuCrunch
4044.70 <- MegaLZ
4059.60 <- Pletter , BitBuster
4063.13 <- ZX7
4077.10 <- ApLib aka APack
5899.67 <- RLE
Table 1 - Average compression in bytes obtained on 95 Graphic Mode 2 bitmap pictures of 12K bytes each.

4797.03 <- Exomizer
4928.53 <- Dan4
4930.37 <- PuCrunch
4940.84 <- Dan3
4958.74 <- MegaLZ
4961.80 <- Pletter , BitBuster
4963.39 <- ZX7
4986.14 <- ApLib aka APack
4992.42 <- Dan2
4996.06 <- Dan1
6059.89 <- RLE
Table 2 - Average compression in bytes on 1115 ZX Spectrum .SCR complex pixel art pictures of 6912 bytes each.

2714.56 <- Exomizer
2828.22 <- Dan3
2832.77 <- Dan4
2863.85 <- Pletter , BitBuster
2865.63 <- MegaLZ
2866.14 <- PuCrunch
2867.64 <- ApLib aka APack
2867.65 <- ZX7
2875.06 <- Dan2
2890.77 <- Dan1
4264.42 <- RLE
Table 3 - Average compression in bytes on 615 ZX Spectrum .SCR simple pixel art pictures of 6912 bytes each.
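RLE appears as the baseline in the tables above. As a point of reference, here is a textbook byte-level RLE codec sketch in Python; this is generic (count, byte) RLE of my own, not the exact encoding used by any of the listed tools:

```python
def rle_encode(data):
    """Textbook RLE: emit (count, byte) pairs, with count capped at 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes((run, data[i]))
        i += run
    return bytes(out)

def rle_decode(data):
    """Inverse of rle_encode: expand each (count, byte) pair."""
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

sample = b"\x00" * 500 + b"ABC"
packed = rle_encode(sample)
assert rle_decode(packed) == sample
print(len(sample), len(packed))   # prints 503 10
```

Long empty runs collapse well, but every non-repeating byte costs two bytes, which is exactly why RLE trails every LZ variant in the tables above on detailed pixel art.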
DAN3 and DAN4 data compression
DAN3 is a lossless data compression format based on the idea of compressing relevant data with some patterns rather than optimizing patches of emptiness, using the best ideas of the LZ77-LZSS variants DAN1 and DAN2, but changing how doublets ( size and relative index values ) are encoded: using k=1 Exp-Golomb instead of Elias Gamma, limiting the size of sequences, and simplifying the binary tree used to decode offset values. DAN4 is an attempt to improve DAN3. First, the modified k=1 Exp-Golomb coder reads bytes instead of bits for large values, which improves the decompression speed. Second, there are two (2) supported modes, one optimized for simple data and one for complex data such as detailed pixel art and heavily dithered graphics. Of course, DAN3 and DAN4 are not miracle solutions. Because of its nature, DAN3 struggles to reach a better compression ratio for pictures with lots of empty space, like the following one. And sometimes DAN1 is better than DAN3 for bitmaps with dithering, like the following one, by 66 bytes. So how do we decide which data compression to use for our projects? Trial and error? Perhaps, or simply use the data compression tools you are most comfortable with.
Samples with their data compression results:

NewColeco Presents ROM File Edition
837 < Exomizer
845 < Dan2
845 < Dan1
858 < Dan4
863 < PuCrunch
895 < ZX7
895 < ApLib
898 < Pletter
908 < Dan3
969 < MegaLZ
1478 < RLE

Smurf Challenge
1140 < Exomizer
1162 < Dan2
1164 < Dan1
1170 < PuCrunch
1185 < Dan4
1188 < Dan3
1229 < ApLib
1233 < Pletter
1237 < ZX7
1245 < MegaLZ
1705 < RLE

Bejeweled Title Screen
1306 < Exomizer
1358 < Dan2
1359 < Dan1
1372 < Dan4
1376 < Dan3
1380 < PuCrunch
1424 < ApLib
1427 < Pletter
1427 < ZX7
1463 < MegaLZ
2711 < RLE

Robee Blaster
1937 < Exomizer
2005 < PuCrunch
2016 < Dan3
2023 < Dan4
2024 < Dan2
2047 < Dan1
2050 < ZX7
2054 < Pletter
2098 < ApLib
2101 < MegaLZ
8752 < RLE

Maze Maniac
2433 < Exomizer
2504 < PuCrunch
2522 < Dan3
2551 < Dan4
2554 < Dan2
2570 < Dan1
2620 < Pletter
2621 < ZX7
2650 < MegaLZ
2671 < ApLib
4609 < RLE

Anniversary Cake Demo
2947 < Exomizer
2993 < PuCrunch
3020 < Dan2
3025 < Dan3
3028 < Dan1
3058 < Dan4
3108 < ApLib
3123 < MegaLZ
3126 < ZX7
3130 < Pletter
4048 < RLE

F1SPP
3913 < Exomizer
4013 < Dan3
4028 < Dan2
4033 < Dan4
4045 < Dan1
4096 < PuCrunch
4107 < MegaLZ
4170 < Pletter
4170 < ZX7
4208 < ApLib
6770 < RLE

Lena aka Lenna (used a lot in papers about imaging)
8595 < Exomizer
8812 < Dan4
8840 < Dan3
8873 < Dan1
8897 < Dan2
8925 < PuCrunch
8962 < MegaLZ
9084 < ZX7
9085 < Pletter
9141 < ApLib
11958 < RLE

UPDATES : November 17, 2017. Added information about the DAN4 results developed during October-November 2017.
-
Dear ColecoVision fans, Daniel Bienvenu here (long letter)
newcoleco replied to newcoleco's topic in ColecoVision / Adam
You're not bothering me, and thank you for making me aware of the issue. I really should clean up my mailbox to catch messages directed to me personally. I've sent an email to Dale Wick about the situation, and you should hear from him soon. Sorry for the inconvenience. -
Dear ColecoVision fans, Daniel Bienvenu here (long letter)
newcoleco replied to newcoleco's topic in ColecoVision / Adam
You went to the ADAMCon website to buy the Flora game and got no answer? Try contacting Dale Wick; he is the admin of the website. -
$$$ Send VectorGamer To Coleco Expo $$$
newcoleco replied to VectorGamer's topic in ColecoVision / Adam
I might be naive, but I refuse to be angry toward anybody going to an event like this one; my anger is focused on the ones abusing us, trying to ruin our hobby and friendly community. I wish there was a way to refund your ticket or to give it to someone else... any information about the terms and conditions of such a ticket? We are better than a certain person strong-arming ColecoVision fans. We are here to enjoy and remember ColecoVision. Let's not turn against each other. -
I agree. Back then, I didn't react when you suggested a sub-forum, either because I missed the post or because I thought there was no real benefit in doing so. Today, I can see the reasons, and it feels like you were right all along, making it now a long overdue decision. It is logical, and the keyword is "organized": it's simply a way to be organized in this melting pot of messages. We all check only the forums and sub-forums that match our interests, and even more so the ones we can contribute to, and that's normal. And missing some messages, either in the main forum or the sub-forums, happens all the time. Maybe it's simply the way we see the forums: the main part gets the most attention, and therefore it's where important messages should be... but the content that should be together ends up diluted in the list of messages. I've not used email notification for (sub)forum threads yet; I was intrigued how to set it up, so thanks for showing how to do it. We learn new things every day, which is great when they're good things like this.
-
Sorry, this may be off topic, but I just wanted to react... I do remember that time with shareware and such. I think I still have one shareware program that can't be found on any site today. I used QuickBASIC 4.5 for years. I've coded DOS games and tools, including VOIROM (my hex editor) and ICVGM (my graphics editor), both written in QB4.5.
-
If I can pick the ones I would love to play right now, non-stop for an hour or more, because I do enjoy these games... here are my 25 favorite arcade ports from the original ColecoVision library (alphabetical order):
Bump 'n' Jump
BurgerTime
Centipede
Donkey Kong
Donkey Kong Jr.
Frenzy
Frogger
Galaxian
Gorf
Gyruss
Lady Bug
Looping
Mr. Do!
Omega Race
Pepper II
Popeye
Q*bert
Roc 'N Rope
Slither
Space Fury
Tapper
Time Pilot
Turbo
Venture
Zaxxon
-
We sure are adults... or I'm in hot water with some of us for saying that I'm curious to see images of it. We sure didn't need the Coleco drama in our lives; it almost ruined the passion and the vacations for some. I wish a good time to everyone!
-
2010 game is impossible at level 2 and above
newcoleco replied to bradhig1's topic in ColecoVision / Adam
Lucky you! I am able to play Level 3 after many tries, but Level 4 is just too much. I have a video on YouTube, recorded on an emulator. -
colecovision letters in the wrong colors
newcoleco replied to bradhig1's topic in ColecoVision / Adam
When that happened to me, I noticed the issue was related to the power cable; if it was put in a certain way, the issue occurred often. I threw away multiple power supplies years ago because I didn't know how to fix it besides just trying another power supply that didn't seem to have that defect. -
True, but I think you are too modest. It helps to have everything together, easier to find and read about... you should consider it. Besides, your name is well known now and represents many products and years of service.
-
I would love to see it, but I can't be there. Surely, someone will post pictures and videos of this event, right? And beware of evil adult content games! Have a great time! Regards, Daniel
-
Coleco strong-arming homebrew publishers and fan sites
newcoleco replied to TPR's topic in ColecoVision / Adam
It really looks like the Brock Bold font, except for the spacing and the missing curve at the bottom of the E. I remember modifying this font more than a decade ago based on the ColecoVision logo, and it worked just fine. I've tried to share the font file online, but somehow it's nowhere to be found. I remember using it in a PDF file, so maybe the font is embedded in a PDF file near you. -
Music Rainbow Brite for the ColecoVision
newcoleco replied to newcoleco's topic in ColecoVision / Adam
Thanks! ^_^ I've listened to it, and I think you missed the last minute of the music; you cut the ending, as it's not a looping song. -
Music Rainbow Brite for the ColecoVision
newcoleco replied to newcoleco's topic in ColecoVision / Adam
Ah! I see! Well, these sound effects are from my game Flora and the Ghost Mirror. They are what we call "placeholders", there to test things and also to have fun trying to play sound effects while the music plays, which is a good way to check if it sounds good and works as background music for a game. Remember that the sound chip has only 3 TONE channels. -
Music Rainbow Brite for the ColecoVision
newcoleco replied to newcoleco's topic in ColecoVision / Adam
I assure you that the sequences shared by both musics are played at the same volume. The music has nuances, softer parts within louder parts, which is more exposed in the extended version. Playing a softer part right after a normal or louder part tends to make it sound quieter than it is, which is an illusion; it only seems "difficult to hear". I can see 2 reasons to put nuances like this in a piece of music: it gives a break to our ears in long pieces, and it tricks our mind into thinking that the next part is louder than it is, which is often used to put emphasis on the upcoming ending, the grand finale. There is also the fact that with time we lose the capacity to hear high-pitched notes, which prevents some sounds from being heard fully, so we miss some of the richer sounds in a piece. The original material I listened to in order to make the extended version is the "French Rainbow Brite Theme Song" from the Multimedia section of this Rainbow Brite web site. If you listen to it, you will notice the nuances of softer parts within normal/loud parts. -
Good morning everyone! No, this is not clickbait... it's real.
To celebrate the 35th anniversary of the ColecoVision game system, to show my music composing talent, and to show that I don't care about drama or about having restrictions imposed on my hobby:
LADIES AND GENTLEMEN
RAINBOW BRITE(tm) JUKEBOX MUSIC FOR THE ADAM COMPUTER, COLECOVISION AND COMPATIBLE SYSTEMS
BY NEWCOLECO AKA DANIEL BIENVENU
Please leave a like and comment! Enjoy!
ROM FILE: RAINBOW BRITE JUKEBOX 2017.zip
SOURCE FILE: RAINBOW BRITE JUKEBOX 2017 (SRC FILES).zip
FAQ
Q: Software to make the music?
A: My own software, CVSoundGen.zip, plus 2 days of patience and listening to the theme song in chunks with Audacity.
Q: Development kit?
A: My own kit, based on Marcel deKogel's devkit released in 1998 and adapted to work with the SDCC compiler.
Q: Can I use this music and these sounds for my own projects?
A: No, unless I tell you otherwise.
Q: Will there be a playable Rainbow Brite game for the ColecoVision?
A: Maybe, maybe not.
Q: Why did you decide to make a jukebox with the Rainbow Brite theme song?
A: For fun. The first time I heard the music, I quite liked how catchy and groovy it sounds as a chiptune.