
Data Compression Test on Simple Bitmap 256x192


newcoleco


It's not the first time we've talked about data compression, particularly for graphics.

 

For graphics, we want a way to decompress data directly into VRAM using next to zero extra RAM.
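For readers who haven't done it before, here is a minimal sketch of what writing into VRAM looks like on the ColecoVision VDP, in SDCC-style C; the 0xBE/0xBF port numbers and the 0x40 write flag match the assembly routines shown later in this thread:

__sfr __at 0xBF VdpCtrl;   /* VDP control port : address setup       */
__sfr __at 0xBE VdpData;   /* VDP data port : auto-incrementing R/W  */

/* Set the VRAM write address once, then stream bytes to the data port;
   the VDP increments the address after every byte. A decompressor that
   emits its output this way needs no RAM buffer at all. */
static void vram_write(unsigned addr, const unsigned char *src, unsigned len)
{
    VdpCtrl = (unsigned char)addr;                  /* low byte of VRAM address */
    VdpCtrl = (unsigned char)((addr >> 8) | 0x40);  /* high byte + write flag   */
    while (len--)
        VdpData = *src++;
}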

 

The common ways to compress data are Run-Length Encoding, Huffman coding, and dictionary-based methods.

 

Run-Length Encoding (RLE) is the fastest one to encode and decode, and it can give a decent compression ratio.
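As a concrete reference point, here is a minimal sketch of classic byte-oriented RLE storing (count, value) pairs; it is illustrative only, not the exact format of any of the tools compared below:

#include <stdio.h>
#include <stddef.h>

/* Encode as (count, value) pairs; runs longer than 255 are split. */
size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t o = 0, i = 0;
    while (i < n) {
        unsigned char v = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == v && run < 255)
            run++;
        out[o++] = (unsigned char)run;  /* run length, 1..255  */
        out[o++] = v;                   /* repeated byte value */
        i += run;
    }
    return o;                           /* compressed size     */
}

size_t rle_decode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t o = 0, i = 0;
    while (i + 1 < n) {
        unsigned char run = in[i++];
        unsigned char v = in[i++];
        while (run--)
            out[o++] = v;
    }
    return o;                           /* decompressed size   */
}

int main(void)
{
    static const unsigned char raw[] = "AAAAAAAAAABBBBCCCC";
    unsigned char packed[64], unpacked[64];
    size_t c = rle_encode(raw, sizeof raw - 1, packed);
    size_t d = rle_decode(packed, c, unpacked);
    printf("%u bytes -> %u bytes -> %u bytes\n",
           (unsigned)(sizeof raw - 1), (unsigned)c, (unsigned)d);
    return 0;   /* prints: 18 bytes -> 6 bytes -> 18 bytes */
}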

 

Huffman coding is basically an optimization: what occurs the most is encoded with the smallest number of bits. Its decompression time is usually a bit longer than for other compression methods. Not a big deal normally, since it happens during the initialization of the graphics for a title screen or a game screen.
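To illustrate the idea of short codes for frequent symbols, here is a toy decoder for a fixed 3-symbol code (A = 0, B = 10, C = 11, the same code used in an example further down this thread); this is only a sketch, not DAN0's or anyone's actual code table:

#include <stdio.h>

static const unsigned char *src;
static unsigned char bitbuf;
static int bitcnt;

static int get_bit(void)
{
    if (bitcnt == 0) { bitbuf = *src++; bitcnt = 8; }
    bitcnt--;
    return (bitbuf >> bitcnt) & 1;
}

/* Decode one symbol of the fixed code 0 -> 'A', 10 -> 'B', 11 -> 'C'. */
static char decode_symbol(void)
{
    if (get_bit() == 0) return 'A';
    return get_bit() == 0 ? 'B' : 'C';
}

int main(void)
{
    /* "ABC" bit-coded as 0 10 11, padded with zeros: 01011000 = 0x58 */
    static const unsigned char data[] = { 0x58 };
    src = data; bitbuf = 0; bitcnt = 0;
    putchar(decode_symbol());
    putchar(decode_symbol());
    putchar(decode_symbol());
    putchar('\n');          /* prints "ABC" */
    return 0;
}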

 

Dictionary compression is normally the best one, but its main problem is usually the need for extra memory to build the dictionary while decompressing, which isn't the way to go for ColecoVision games because of the limited RAM space.

 

I created my own data compression format long ago, which is basically an improved RLE that uses a fixed Huffman code to encode the data. I called it DAN0, and I'm quite proud of myself for saving even more ROM space than with the plain RLE method I had been using for years.

 

Then I was curious to see the other data compressors, and I saw that there are many, many of them; some can decompress data directly into VRAM, and some are simply better than the others.

 

To help compare the different data compressors out there, I decided to use a 256x192 bitmap screen with just enough complexity in it to show a difference in size between the various compression results.

 

 

LET'S COMPARE SOME OF THE DATA COMPRESSORS WE MIGHT USE FOR TITLE SCREENS

 

From my ColecoVision Strip Poker game project : strippokertitlescreen.zip

Please note that the size of the program to decompress the data isn't included.

 

Because the end result depends mostly on the original data, we can't say that Exomizer is always the best; sometimes aPLib is better than Exomizer, it varies. But a few things are constant : DAN0 is always better than RLE, and Pletter is always better than BitBuster.

 

Side note : the SMS homebrew scene is very interesting for ColecoVision homebrewers because it also means Z80 assembly, and the port number for VRAM is the same. That makes links like this one from Maxim quite interesting.

 

Feel free to comment, ask questions, and do your own tests with other data compression programs.


In my project, I also need a good data compression ratio. So far, I have achieved a ratio of nearly 38:1, the original map for Doriath going from 163,840 bytes down to 4,335. We cannot expect the same ratio for all kinds of data.

 

For a bitmap, some modifications already need to be made to display it on the ColecoVision. I guess there might be a graphic style which would allow a better compression ratio, but with a different look for the graphic.

 

A few years ago, I was interested in the filled-shape animation that was done on the Amiga in 1992 on a single floppy disk (State of the Art by Spaceballs).

 

I created a tool which used Bézier curves... with a system of extrapolated keyframes, hoping to be able to draw an overlay over videos...

http://www.youtube.com/watch?v=P4POrYrKo0M

 

Although I have not looked at what the 2 .bin files look like... I would say that adapting the original data is probably the key to controlling the compromise between the pictures and the size. Also, with a game like strip poker, a file format which could benefit from re-using blocks of graphics to hide/show parts could help.

 

I know, having Bézier curves on ColecoVision sounds crazy! ;-) Especially with 1 KB of RAM.

 

But.. sadly.. the toughest part is that once you get a system that works.. an editor, software, optimization... you then have to draw the data from the pictures. And this drawing work consumes so much time that I understood why that kind of vector graphics never took off. A 3D editor like 3DS Max is the work of thousands of people (editor + plug-ins + motion capture + artists..) and it is still time-consuming.

 

However, my suggestion remains.. working with the source of the data is probably a good way to get different sizes, and in some cases the graphic result might be interesting & artistic... without falling into a Strip Poker à la Picasso! lol!


Daniel, thanks for bringing more compressors to light.

 

I have just tested Exomizer and MegaLZ on some of my graphics files and compared them with Pletter.

 

Exomizer consistently generates the smallest data files (that's interesting!), but it requires 156 bytes of RAM, which on the 1K ColecoVision is almost impossible to spare when you are working on a complicated game.

 

On the other side, MegaLZ gave me results within a few bytes of Pletter, and one file was even smaller. I like the small decompressor. Maybe later I'll give it a chance and check its speed.

 

I didn't check aPLib because it is filled with license legalese. Last year I checked Bitbuster (Pletter descends from it).


After a few more hours of net surfing, I found this gem named ZX7 from the ZX Spectrum scene. The decompression routine is only 69 bytes long, but it's a routine that decompresses into RAM, not VRAM. Besides that, I think it's an incredible achievement.

 

"ZX7" is an optimal LZ77/LZSS data compressor for all platforms, including the

ZX-Spectrum.

 

Available implementations of standard LZ77/LZSS compression algorithm use either

a "greedy" or "flexible parsing" strategy, that cannot always guarantee the best

possible encoding. In comparison, "ZX7" provides a highly efficient compression

algorithm that always generate perfectly optimal LZ77/LZSS encoding. Technically

it means compressing within space and time O(n) only.

 

With the test case "ColecoVision Strip Poker title screen", I got this result :

  • BitBuster : 3595
  • Pletter : 3557
  • ZX7 : 3551

 

The compressed file format is directly based (although slightly improved) on Team Bomba's Bitbuster - http://www.teambomba.net/bombaman/downloadd26a.html and Gasman's Bitbuster Extreme - www.west.co.tt/matt/speccy/apology/

Some of the size improvements used in the "Standard" version were suggested by Antonio Villena and Metalbrain.

The main speed improvement used in the "Turbo" version was originally suggested by Urusergi for Magnus Lind's Exomizer - http://hem.bredband.net/magli143/exo/

The optimal LZ77/LZSS compression algorithm was invented by myself (Einar Saukas). To the best of my knowledge, there was no similar high-performance solution available. I thereby present this implementation as evidence of "prior art", thus preventing anyone from ever patenting it. Software patents are evil!!!

 

; -----------------------------------------------------------------------------
; ZX7 decoder by Einar Saukas, Antonio Villena & Metalbrain
; "Standard" version (69 bytes only)
; -----------------------------------------------------------------------------
; Parameters:
;   HL: source address (compressed data)
;   DE: destination address (decompressing)
; -----------------------------------------------------------------------------
dzx7_standard:
        ld      a, $80
dzx7s_copy_byte_loop:
        ldi                             ; copy literal byte
dzx7s_main_loop:
        call    dzx7s_next_bit
        jr      nc, dzx7s_copy_byte_loop ; next bit indicates either literal or sequence
; determine number of bits used for length (Elias gamma coding)
        push    de
        ld      bc, 0
        ld      d, b
dzx7s_len_size_loop:
        inc     d
        call    dzx7s_next_bit
        jr      nc, dzx7s_len_size_loop
; determine length
dzx7s_len_value_loop:
        call    nc, dzx7s_next_bit
        rl      c
        rl      b
        jr      c, dzx7s_exit           ; check end marker
        dec     d
        jr      nz, dzx7s_len_value_loop
        inc     bc                      ; adjust length
; determine offset
        ld      e, (hl)                 ; load offset flag (1 bit) + offset value (7 bits)
        inc     hl
        defb    $cb, $33                ; opcode for undocumented instruction "SLL E" aka "SLS E"
        jr      nc, dzx7s_offset_end    ; if offset flag is set, load 4 extra bits
        ld      d, $10                  ; bit marker to load 4 bits
dzx7s_rld_next_bit:
        call    dzx7s_next_bit
        rl      d                       ; insert next bit into D
        jr      nc, dzx7s_rld_next_bit  ; repeat 4 times, until bit marker is out
        inc     d                       ; add 128 to DE
        srl     d                       ; retrieve fourth bit from D
dzx7s_offset_end:
        rr      e                       ; insert fourth bit into E
; copy previous sequence
        ex      (sp), hl                ; store source, restore destination
        push    hl                      ; store destination
        sbc     hl, de                  ; HL = destination - offset - 1
        pop     de                      ; DE = destination
        ldir
dzx7s_exit:
        pop     hl                      ; restore source address (compressed data)
        jr      nc, dzx7s_main_loop
dzx7s_next_bit:
        add     a, a                    ; check next bit
        ret     nz                      ; no more bits left?
        ld      a, (hl)                 ; load another group of 8 bits
        inc     hl
        rla
        ret


My current compression tools (RLE+Huffman, http://colecovision....mpression.shtml) result in a total size of 3867 bytes for the data (675 bytes for the color data, 3192 bytes for the pattern data), plus another 512 bytes for the Huffman tree (which is typically shared between all compressed data in a game and could be considered part of the decompression program). This compression has the nice property of needing nearly no RAM during decompression.

 

Philipp

 

P.S.: Decompression into VRAM with LZ77 could be a nice idea for a good compression ratio, but with quite slow decompression.


P.S.: Decompression into VRAM with LZ77 could be a nice idea for a good compression ratio, but with quite slow decompression.

It's slow, but not excessively so (Pletter is LZ77). If you have Princess Quest, you can check the timing between the story screen fading out and the appearance of the following story screen.


It's slow, but not excessively so (Pletter is LZ77). If you have Princess Quest, you can check the timing between the story screen fading out and the appearance of the following story screen.

I agree, it's slower than RLE (with or without Huffman), but not excessively. And we will rarely need to decode compressed data "on the fly" during the game.

 

I've coded a routine to decompress the ZX7 format into VRAM for the ColecoVision (it should work for the SMS too, since the port number for VRAM is the same). I haven't tested it on real hardware yet, but it works fine in the emulator. The routine is only 125 bytes long. It works so well that I'm going to adopt this format from now on for my CV projects and add this ZX7 decompression to my devkit library. My ColecoVision Strip Poker game went from a size of 32K down to 26K, which is quite impressive, and the decode time never exceeds a second.


I've always wondered how these encoders would work for my tiles. One project alone has over 9 tilesets and the graphics hit around 20KB, with only 12KB for code. I've hit that wall where the code is now about 12KB, LOL. I need to comment out unused tilesets to finish coding this project.


I have used MSX-O-MIZER for compressing my ported games for the Memotech MTX, and it's quite good. But to use it on ColecoVision, you have to think about memory, because it uses around 340 bytes of RAM to decompress. So if a game is using too much memory for data which cannot be cleared out, it could be a problem. Using MSX-O-MIZER on the included files, they compress to 3337 bytes. Of course, MSX-O-MIZER is based on Exomizer v2, so no wonder. But when you write a game from scratch it should not be a problem: you can move important data from RAM to VRAM (to a safe location where the compressed data won't overwrite it), and then transfer it back afterwards.
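That stash-to-VRAM trick can be sketched like this in SDCC-style C; the ports match the rest of this thread, and the 0x3800 "safe" VRAM offset is only an example value that must not collide with your tables or the compressed data:

__sfr __at 0xBF VdpCtrl;
__sfr __at 0xBE VdpData;

static void vram_set_addr(unsigned addr, unsigned char write_flag)
{
    VdpCtrl = (unsigned char)addr;
    VdpCtrl = (unsigned char)((addr >> 8) | write_flag); /* 0x40 = write, 0x00 = read */
}

static void stash(const unsigned char *ram, unsigned len)   /* RAM -> VRAM */
{
    vram_set_addr(0x3800, 0x40);
    while (len--) VdpData = *ram++;
}

static void restore(unsigned char *ram, unsigned len)       /* VRAM -> RAM */
{
    vram_set_addr(0x3800, 0x00);
    while (len--) *ram++ = VdpData;
}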


My current compression tools (RLE+Huffman, http://colecovision....mpression.shtml) result in a total size of 3867 bytes for the data (675 bytes for the color data, 3192 bytes for the pattern data), plus another 512 bytes for the Huffman tree (which is typically shared between all compressed data in a game and could be considered part of the decompression program).

 

When I look at those files and numbers... I feel I want to see how much I could reduce those sizes. Just looking at the color data, I wonder how small it can get... it seems very repetitive and simple. Anyone have better numbers than 675 bytes? Just out of curiosity...


When I look at those files and numbers... I feel I want to see how much I could reduce those sizes. Just looking at the color data, I wonder how small it can get... it seems very repetitive and simple. Anyone have better numbers than 675 bytes? Just out of curiosity...

In the case of the bitmap sample screen :

  • Exomizer 2 : 489
  • ZX7 : 568
  • Pletter : 570
  • MegaLZ : 577
  • aPLib : 577
  • BitBuster : 595
  • DAN0 (my latest version) : 613
  • PkK : 675
  • pucrunch : 778

The DAN0 decompression routine is 97 bytes long (91 bytes if you ignore the push/pop that set the parameters when called from a C program). Direct decompression into VRAM, no RAM involved (except usage of registers, of course). It's RLE + a fixed Huffman code hardcoded in the routine itself; no extra table such as a Huffman tree is needed.

 

The ZX7 decompression routine is 131 bytes long (125 bytes if you ignore the push/pop that set the parameters when called from a C program). Direct decompression into VRAM, no RAM involved (except usage of registers, of course). Slower than DAN0, but it provides an LZSS type of compression.


In the case of the bitmap sample screen :

  • Exomizer 2 : 489

 

Thanks NewColeco... well, I gave it a try last night: I stored the 17 color values (starting with the most common), then used a bit code to specify the color and another bit code to specify the length. The size looked promising, but the decompression routine showed that my compression or decompression was buggy.

 

I then thought: why not create two data vectors of the same length, one for the color codes and one for the length of each entry, and then build a dictionary to reduce the redundancy. Without proper bit coding (assuming some worst cases), the number I came up with was 550 bytes. It seems the lengths of the runs (how many bytes to write before switching to another byte value) take up 2/3 of the compressed size...
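A minimal sketch of that two-vector split (values in one vector, run lengths in a parallel one), just to make the idea concrete:

#include <stdio.h>

int main(void)
{
    static const unsigned char data[] = { 1,1,1,7,7,4,4,4,4 };
    unsigned char vals[16], lens[16];
    int n = 0;
    for (int i = 0; i < (int)sizeof data; ) {
        int run = 1;
        while (i + run < (int)sizeof data && data[i + run] == data[i])
            run++;
        vals[n] = data[i];                /* which byte to repeat */
        lens[n] = (unsigned char)run;     /* how many times       */
        n++;
        i += run;
    }
    for (int k = 0; k < n; k++)
        printf("value %u x %u\n", vals[k], lens[k]);  /* 1x3, 7x2, 4x4 */
    return 0;
}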


Thanks NewColeco... well, I gave it a try last night: I stored the 17 color values (starting with the most common), then used a bit code to specify the color and another bit code to specify the length. The size looked promising, but the decompression routine showed that my compression or decompression was buggy.

 

I then thought: why not create two data vectors of the same length, one for the color codes and one for the length of each entry, and then build a dictionary to reduce the redundancy. Without proper bit coding (assuming some worst cases), the number I came up with was 550 bytes. It seems the lengths of the runs (how many bytes to write before switching to another byte value) take up 2/3 of the compressed size...

 

The way ZX7 and similar formats avoid building a dictionary is simply by copying bytes from an offset into what has already been decoded. For example, ZX7 encodes "AAAAAAAAAABCABCABC" as : output A, copy 9 bytes from position 1 (encoded as relative position -1), output B and C, copy 6 bytes from position 10 (encoded as relative position -3), end. This is simply brilliant.
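Here is a small sketch of that decoding principle in C; the op list mirrors the walkthrough above, and it is NOT the real ZX7 bitstream, just the copy-from-decoded-output mechanism:

#include <stdio.h>

struct op { int literal; unsigned char byte; int len; int dist; };

int main(void)
{
    static const struct op ops[] = {
        { 1, 'A', 0, 0 },   /* output literal A            */
        { 0,  0,  9, 1 },   /* copy 9 bytes from offset -1 */
        { 1, 'B', 0, 0 },   /* output literal B            */
        { 1, 'C', 0, 0 },   /* output literal C            */
        { 0,  0,  6, 3 },   /* copy 6 bytes from offset -3 */
    };
    char out[32];
    int o = 0;
    for (unsigned i = 0; i < sizeof ops / sizeof ops[0]; i++) {
        if (ops[i].literal) {
            out[o++] = (char)ops[i].byte;
        } else {
            /* byte by byte, so overlapping copies (dist < len) self-extend */
            for (int k = 0; k < ops[i].len; k++, o++)
                out[o] = out[o - ops[i].dist];
        }
    }
    out[o] = '\0';
    puts(out);              /* prints AAAAAAAAAABCABCABC */
    return 0;
}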


The way ZX7 and similar formats avoid building a dictionary is simply by copying bytes from an offset into what has already been decoded. For example, ZX7 encodes "AAAAAAAAAABCABCABC" as : output A, copy 9 bytes from position 1 (encoded as relative position -1), output B and C, copy 6 bytes from position 10 (encoded as relative position -3), end. This is simply brilliant.

 

Oh, the dictionary I was referring to is: let's take back your example... "AAAAAAAAAABCABCABC"

I was creating a dictionary with Word 0: "ABC" and Word 1: "AAAAA", and then writing [ABC,AAAAA] + 11000, where 1 = word 1 = AAAAA and 0 = word 0 = ABC.

 

So the dictionary holds words, and each word can be bit-coded, like A = 0, B = 10 and C = 11.

[0 10 11, 00000] 11000... so a total of 15 bits for this string (which excludes the word lengths, the character definitions, and the fact that the text (the part which refers to the dictionary) should be at least byte-aligned! ;)

 

However, you can see that the complexity of the algorithm increases: the text refers to a dictionary which needs to be parsed entirely until a word is found, and then you need to refer to another bit table to get the 8-bit symbol.

 

Just to get the real numbers out, it will look like this, where memory addresses are written byte_address.bit_address (example: 2.3 = address $2, bit 3):

 

$00.0 : 'A' (8-bit), 'B' (8-bit), 'C' (8-bit) // 3 ASCII codes ordered by popularity

$03.0 : 011 (length of word 0, which is 3; max length: 7 chars) 0 10 11 // word 0

$04.0 : 101 (length of word 1, which is 5) 0 0 0 0 0 // word 1

$05.0 : 11000

$05.5 : END OF FILE, 45 bits = size 6 bytes (rounded up)

 

By comparison, the system of referencing a previous character could be:

0 + 8 bits = new character to write

1 + bit code = reference to a previous character,

where 10 = -1 (the previous character), 110 = -2 and 111 = -3

 

To code AAAAAAAAAABCABCABC, we will have:

A : 0 + 'A' // 9 bits

followed by 9 more A's : 9 x 2 bits = 18 bits

B : 0 + 'B' // 9 bits

C : 0 + 'C' // 9 bits

ABCABC : 6 x 3 bits = 18 bits

 

In total: 63 bits = 8 bytes (rounded up)

 

Of course.. the 2 bytes saved by the first system are by far lost to the algorithm and its added complexity (slower).

One of the biggest problems I have with my algorithm is how to build the words. You saw that I picked the word 'AAAAA' and not 'AA' or 'AAA', as I could easily see it would be the most optimal one. But when words can be part of other words (like if I have the word ALMACT and the word MAC, is it better to pick the big word, the common part, or both?), it gets hard. So I tried a recursive function to just try all the possibilities, but after 5 minutes it had done about 0.02% of the work. So, for my test, I ended up using a heuristic based on the length and occurrence of each word. And I stopped after seeing how many bytes it took when byte-coded (not bit-coded). I could then estimate that from 550 bytes it might go down to 400-something... but not below 400. :(

 

Well, to be honest, I was hoping to get the data down to less than 256 bytes.. ;) maybe it could have reached 430 bytes with optimal bit coding and everything. But you can see why I stopped there: the complexity is too high. It might also reduce the size of the tile data, but without a significant gain (like -40%), I think it's not worth it. ZX7 will be a more suitable solution. :)

 

ok.. back to the kitchen, I have a cake to bake!


ok.. back to the kitchen, I have a cake to bake!

 

ok.. it's not related to the topic.. but just for those who might be curious to know what I baked:

I used fresh blueberries brought back yesterday from a blueberry farm, reduced them, and used that in between the layers... it's the dark part of the cake.

post-36389-0-48923700-1376626802_thumb.jpg


ok.. it's not related to the topic.. but just for those who might be curious to know what I baked:

I used fresh blueberries brought back yesterday from a blueberry farm, reduced them, and used that in between the layers... it's the dark part of the cake.

post-36389-0-48923700-1376626802_thumb.jpg

 

It looks like data compression is a piece of cake for you. :lust: :D nom nom! :party: :music: ;-)

 

When I worked on DAN0, my own data compression format, part of it became the ability to steal a few bytes from another table in order to further reduce the number of bytes stored in the ROM file. To do this, the bits that represent instructions like "copy these bytes from an offset" are kept separate from the bytes stored in the clear, making it possible to merge all the bytes needed to decompress a set of tables into one big table that my routine can simply look into during decompression, avoiding encoding the same bytes too many times.
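Purely as a toy illustration of that byte-pool sharing idea (and definitely NOT the actual DAN0 format): once the control information lives apart from the literal bytes, several tables can point into one merged pool and overlap, so a shared run is stored only once.

#include <stdio.h>

/* One merged pool of literal bytes shared by several compressed tables.
   Table B's bytes overlap table A's, so the shared run costs ROM once. */
static const unsigned char pool[] = { 0x01, 0x02, 0x03, 0x04, 0x05 };

/* Per-table "control stream", reduced here to: emit n bytes from offset. */
struct table { unsigned char pool_offset, length; };

static void emit(struct table t)
{
    for (int i = 0; i < t.length; i++)
        printf("%02X ", pool[t.pool_offset + i]);
    putchar('\n');
}

int main(void)
{
    struct table a = { 0, 4 };   /* 01 02 03 04                     */
    struct table b = { 2, 3 };   /* 03 04 05 : overlaps table A     */
    emit(a);
    emit(b);
    return 0;
}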

 

When I think about it, there might be a way to merge the DAN0 and ZX7 formats to maybe get slightly better compression, depending on the number of pictures you want to compress together in ROM, but the cost would be an even slower and bigger decompression routine.



Has anyone tried to adapt the ZX7 depacker?

In 2014, I added the ZX7 format to my Coleco devkit, in the compression library; it decompresses directly into video RAM to avoid messing with the 1KB RAM space of the ColecoVision. I don't know if that answers your question about a ZX7 depacker.


In 2014, I added the ZX7 format to my Coleco devkit, in the compression library; it decompresses directly into video RAM to avoid messing with the 1KB RAM space of the ColecoVision. I don't know if that answers your question about a ZX7 depacker.

Is this version available somewhere, Daniel?


Is this version available somewhere, Daniel?

Yes, it has been on Dale Wick's SVN server since 2014, where pretty much all my dev work has been saved since I lost most of my source code to PC crashes and HDD failures. Not all my projects are there.

 

My routine is adapted from Metalbrain's small version, modified to decompress directly into VRAM and to be compiled with SDCC as part of "comp.lib".

It was already used successfully in 2014 in one of my ColecoVision projects.

 

Here's the source code:

; zx7.s
; Decompression in VRAM version by Daniel Bienvenu

        .module zx7

        ; global from this code
        ;================
        .globl  _zx7
        ; void zx7 (unsigned vram_offset, void *data);
        .globl  zx7                     ; HL = ADDR. TO COMPRESSED DATA, DE = DESTINATION IN VRAM

        .area   _CODE

_zx7:
        pop     bc                      ; return address
        pop     de                      ; first argument : destination offset in VRAM
        pop     hl                      ; second argument : address of compressed data
        push    hl
        push    de
        push    bc

zx7:
        ; set VRAM write address to DE
        ld      a, e
        out     (0xbf), a
        ld      a, d
        or      #0x40                   ; write flag
        out     (0xbf), a

        ld      a, #0x80

; copy literal byte
zx7_copy_byte_loop:
        ld      c, #0xbe
        outi                            ; write (HL) to the VRAM data port, INC HL
        inc     de
zx7_main_loop:
        call    getbit                  ; check next bit
        jr      nc, zx7_copy_byte_loop

; determine number of bits used for length (Elias gamma coding)
        push    de
        ld      bc, #1
        ld      d, b
zx7_len_size_loop:
        inc     d
        call    getbit                  ; check next bit
        jr      nc, zx7_len_size_loop
        jp      zx7_len_value_start

zx7_len_value_loop:
        call    getbit                  ; check next bit
        rl      c
        rl      b
        jr      c, zx7_exit             ; check end marker
zx7_len_value_start:
        dec     d
        jr      nz, zx7_len_value_loop
        inc     bc                      ; adjust length

; determine offset
        ld      e, (hl)                 ; load offset flag (1 bit) + offset value (7 bits)
        inc     hl
        .db     #0xcb, #0x33            ; opcode for undocumented instruction "SLL E" aka "SLS E"
        jr      nc, zx7_offset_end      ; if offset flag is set, load 4 extra bits
        call    getbit                  ; check next bit
        rl      d                       ; insert first bit into D
        call    getbit                  ; check next bit
        rl      d                       ; insert second bit into D
        call    getbit                  ; check next bit
        rl      d                       ; insert third bit into D
        call    getbit                  ; check next bit
        ccf
        jr      c, zx7_offset_end
        inc     d                       ; fourth bit was 1 : propagate its carry into D
zx7_offset_end:
        rr      e                       ; insert inverted fourth bit into E

; copy previous sequence
        ex      (sp), hl                ; store source, restore destination
        push    hl                      ; store destination
        sbc     hl, de                  ; HL = source = destination - offset - 1
        pop     de                      ; DE = destination
        ; BC = count
        ; copy BC bytes within VRAM, one byte at a time (read back, then write)
        ex      af, af'
        set     6, d                    ; set write flag on the destination address
zx7_copybytes_loop:
        push    bc
        ld      c, #0xbf
        out     (c), l                  ; set VRAM read address to HL
        nop
        out     (c), h
        inc     hl
        in      a, (0xbe)               ; read one byte from VRAM
        nop
        out     (c), e                  ; set VRAM write address to DE
        nop
        out     (c), d
        inc     de
        out     (0xbe), a               ; write that byte at the destination
        pop     bc
        dec     bc
        ld      a, b
        or      c
        jr      nz, zx7_copybytes_loop
        res     6, d
        ex      af, af'

zx7_exit:
        pop     hl                      ; restore source address (compressed data)
        jp      nc, zx7_main_loop
        ret

getbit:
        add     a, a                    ; check next bit
        ret     nz                      ; no more bits left?
        ld      a, (hl)                 ; load another group of 8 bits
        inc     hl
        rla
        ret

The entry point _zx7, with all its pop/push instructions, is there to allow zx7 to be called from code written in C. Here is the simple zx7 header file:

 

// zx7.h

void zx7 (unsigned vram_offset, void *data);
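
For completeness, calling it from C then looks like this; the data symbol name and the VRAM offset below are made-up example values:

#include "zx7.h"

/* zx7-compressed data placed in ROM by your build (hypothetical name) */
extern unsigned char title_screen_zx7[];

void show_title(void)
{
    /* decompress straight into VRAM at an example offset */
    zx7(0x1800, title_screen_zx7);
}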

 

Side note : the ZX7 compression tool written by the original author does a quick pass and compresses data according to the ZX7 specification, which I can explain if someone asks for it. However, this tool was written as a proof of concept, NOT as the best possible compression tool for this format; better compression results are achievable than what people are getting now. The situation is easy to understand if we compare with modern ZIP tools, which are not optimized for the best compression possible at the cost of taking hours, but are good enough for a nice user experience achievable in seconds or a very few minutes.

In fact, after studying compression a lot during my quest to make my own new compression format, reading various scientific papers on the subject, and also watching the amusing video series by Google titled Compressor Head, I'm happy to say that after all the headaches I was able to make my own compression tool based on the ZX7 specification and get the best results out of it. Starting from there, I was able to develop my own format and compare it fairly with the ZX7 format; the results show better compression overall, at the cost of a slower decompression routine, but still with no need for extra RAM space like Exomizer.


Thanks a lot Daniel! I will try that and compare it to my current ple compression with my tools.

Is the ZX7 compressor found at http://www.worldofspectrum.org/infoseekid.cgi?id=0027996 (inside ZX7_SourceCode.zip) compatible with your routine?

 

By the way, is Dale Wick's SVN server open to the public? I can't find it on adamcon.org.


Thanks a lot Daniel! I will try that and compare it to my current ple compression with my tools.

Is the ZX7 compressor found at http://www.worldofspectrum.org/infoseekid.cgi?id=0027996 (inside ZX7_SourceCode.zip) compatible with your routine?

 

By the way, is Dale Wick's SVN server open to the public? I can't find it on adamcon.org.

 

Yes, my ZX7 routine here in assembly decompresses any ZX7 data directly to VRAM for a ColecoVision or Coleco ADAM project, as long as the data fits inside the video memory space, without needing any CPU memory buffer.

 

Try contacting Dale Wick directly about the SVN server (his email is on the adamcon.org website).

