
Intellivision development, back in the day


decle


I wonder which tools could be used to get a somewhat "fair" benchmark, using simple tests like the previously mentioned Sieve, Pi decimals, sorting and similar algorithms. On the Intellivision, IntyBASIC for sure. Perhaps pitch it against CC65, even though that is a C compiler? I don't know which modern BASIC compilers exist for 6502 systems, and pitching IntyBASIC against some old fossil like Austrocomp, Petspeed and the like would probably make the CP-1610 look superior to a 6502 when comparing execution speed.


I really doubt the 6502 is a great C compiler target. A quick Google search suggests a 1MHz 6502 only pulls about 32 Dhrystones/second, which is about 0.018 DMIPS/MHz. That's pretty awful. A plain-jane RISC CPU running 1 instruction per cycle gets close to 1 DMIPS/MHz, so whatever instruction mix that is costs around 55 cycles per DMIPS "instruction." That's a lot of compiler overhead. I don't know what C compiler was used, but still...
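
For anyone who wants to check that arithmetic: DMIPS is conventionally defined against the VAX 11/780's score of 1757 Dhrystones/second, so the conversion is just a couple of divisions. A minimal sketch, using the 32 Dhrystones/second and 1MHz figures quoted above (everything else is the standard definition):

#include <stdio.h>

#define VAX_DHRY_PER_SEC 1757.0   /* VAX 11/780 reference = 1.0 DMIPS */

int main(void)
{
    double dhry_per_sec = 32.0;   /* quoted figure for a 1MHz 6502 */
    double clock_mhz    = 1.0;

    double dmips          = dhry_per_sec / VAX_DHRY_PER_SEC;    /* ~0.018 */
    double dmips_per_mhz  = dmips / clock_mhz;                  /* ~0.018 */
    double cycles_per_ins = (clock_mhz * 1e6) / (dmips * 1e6);  /* ~55    */

    printf("%.3f DMIPS, %.3f DMIPS/MHz, ~%.0f cycles per DMIPS \"instruction\"\n",
           dmips, dmips_per_mhz, cycles_per_ins);
    return 0;
}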

 

When Carl and I were comparing the two processors, we both assumed balls-to-the-wall hand assembly code. Wring every last drop of performance out of each chip and see where they land. When you only have a few thousand instructions per display frame, it's worth your time to get to know all of them on a first-name basis. ;)


Hmm. OK, looking more closely at the Wikipedia page, it says the MIPS value was actually DMIPS (Dhrystone MIPS), not instructions per second. I don't know that I believe the 0.43 number as a DMIPS number. Following the link to the source of the 0.43 MIPS value, it doesn't say DMIPS at all, just MIPS. The cycle count of just over 2 cycles/instruction makes more sense as a raw MIPS number, not an achieved DMIPS/MHz number.

 

I believe the 0.018 DMIPS/MHz number more for 6502. 16-bit 'int' arithmetic (including multiply and divide) and 16-bit pointer manipulation figure prominently in that benchmark, and the 6502 doesn't do well at any of those. At least not in the generic form you'd encounter in a C program. If you write creatively in assembly, though, you can make 6502 code go fast. I don't see that happening for Dhrystone, which has very specific rules about how you compile and measure.

 

EDIT: Aha, this lists the Apple ][ (a 1MHz 6502 system) as 0.02 MIPS, and it seems to be more explicitly DMIPS: http://www.frc.ri.cmu.edu/users/hpm/book97/ch3/processor.list.txt

 

That concurs with the other reference I found giving the 1MHz 6502 a 32 Dhrystones/sec score.

 

I'd expect the CP1610 to be a much friendlier compiler target. :)

 

EDIT2: Although even that list has oddities, such as listing the VIC-20 at 0.04, Commodore PET at 0.06, and Commodore 64 at 0.20. That... doesn't make sense at all, since they were all nominally 1MHz 6502s, weren't they? (OK, 6510 for the C64, but that just added two I/O ports.) And the Apple ][ had no cycle stealing for video, either.

Edited by intvnut

Yes, all Commodores are virtually identical on that point. The difference is about 0.05 MHz between the slowest and fastest.

 

Is there any Forth implementation for the CP-1610? That is a typical mid-level language that implements well on 8-bits and supposedly has a good tradeoff between speed and code density.


Is there any Forth implementation for the CP-1610? That is a typical mid-level language that implements well on 8-bits and supposedly has a good tradeoff between speed and code density.

 

None that I'm aware of. I think you can draw a sufficient conclusion, though, just looking at some simple instruction sequences.

 

Consider a simple memory copy:

 

6502:

loop:
     LDA ($10),Y   ; 5 cycles
     STA ($12),Y   ; 6 cycles  (dst pointer, distinct from src at $10)
     DEY           ; 2 cycles
     BNE loop      ; 3 cycles


CP-1610:

loop:
     MVI@ R4, R0 ; 8 cycles
     MVO@ R0, R5 ; 9 cycles
     DECR R1     ; 6 cycles
     BNEQ loop   ; 9 cycles


That's 16 cycles/iter vs. 32 cycles/iter. The CP-1610 moves twice as much data, though, and so you could call it a wash. Not bad, but it's just about the best possible case for the CP-1610 for memcpy up to 256 elements. It goes downhill from there. The 6502 gets slightly faster if you can hard-code the starting addresses and use abs, Y addressing instead of (zp),Y.
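
For reference, both loops implement the same count-down copy at the source level; a purely illustrative C sketch (the names are mine):

/* The count-down copy both snippets above implement.  On the 6502 each
 * element is a byte; on the CP-1610 it is a 16-bit word, which is why
 * 32 cycles per iteration against 16 can still be called a wash. */
void copy_down(unsigned short *dst, const unsigned short *src, unsigned char count)
{
    while (count--)
        *dst++ = *src++;
}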

 

If the data's inherently 8-bit (such as you'd have in 8-bit scratch RAM on the Intellivision), the 6502 effectively gets about a 2x boost, though. You could try to be clever on the CP-1610: using SDBD you can get some of it back, but it quickly becomes a pain:


loop:
     SDBD         ; 4 cycles
     MVI@ R4, R0  ; 10 cycles
     MVO@ R0, R5  ; 9 cycles
     SWAP R0      ; 6 cycles
     MVO@ R0, R5  ; 9 cycles
     DECR R1      ; 6 cycles
     BNEQ loop    ; 9 cycles


That gets down to 26.5 cycles/byte, but with considerably more complex code. (Not to mention code to handle odd byte counts.) Advantage 6502 here for byte data.

 

Any sort of compare/branch is going to have a 2x to 3x difference, since you're looking at 2 + 3 = 5 cycles best case vs. 6 + 9 = 15 cycles best case. (I'm ignoring the ADCR trick on CP1610 for the moment.) A more complex compare costs more (CMP (zp), Y is 5 cycles, for example), which brings the 6502 advantage down a little, but only a little.

 

Random table lookups are another place 6502 has an advantage: LDA table, X is a mere 4 cycles, while ADDI #table, R1; MVI@ R1, R0 is 14 cycles.

 

Now, a 6502 expert will need to weigh in on whether I'm writing the following properly, but let's compare adding two 16-bit values in memory via indirect pointers, storing the value in a third location:

 

6502:

    CLC          ; 2
    LDA ($10),Y  ; 5
    ADC ($12),Y  ; 5
    STA ($14),Y  ; 6
    INY          ; 2
    LDA ($10),Y  ; 5
    ADC ($12),Y  ; 5
    STA ($14),Y  ; 6
                 ;--
                 ;36 total cycles


CP-1610:

     MVI@ R1, R0   ; 8
     ADD@ R2, R0   ; 8
     MVO@ R0, R3   ; 9
                   ;--
                   ;25 total cycles

The CP-1610 has a clear advantage here, provided R1, R2, and R3 were already set up. Again, that's the best case for the CP-1610, and it's downhill from there. The 6502 has a large zero page and can keep important pointers around a very long time. If the CP-1610 needed to read all three addresses from memory, that'd add 24 cycles, nearly doubling its cycle count to 49 total cycles. Reading those pointers from RAM is "free" on the 6502.

 

And if these were 8 bit values, the 6502 would only take 18 cycles, while the CP-1610 doesn't get any faster. So, if you had to compare an 8-bit addition with "all pointers starting in memory" between the two, the 6502 is about 2.7x the speed of the CP-1610. (49 cycles vs. 18 cycles)
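
In C terms, both sequences are computing the same indirect add; a sketch of the source-level operation, with the cycle counts from above attached as comments (the names are illustrative):

/* 16-bit case: 36 cycles on the 6502 (zero-page pointers preloaded),
 * 25 cycles on the CP-1610 (R1-R3 preloaded). */
void add16(const int *a, const int *b, int *c)
{
    *c = *a + *b;
}

/* 8-bit case: the 6502 drops to 18 cycles; the CP-1610 stays at 25. */
void add8(const unsigned char *a, const unsigned char *b, unsigned char *c)
{
    *c = *a + *b;
}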

 

Interpreter loops (such as you'd have for FORTH) have a lot of byte copies, compare-and-branch, and reading via indirect pointers. You can see a pattern developing: In the best case for the CP-1610, it's a tie or small advantage for the CP-1610. In the worst case, it's a 2x or 3x win for the 6502.
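
To see why those operations dominate, here's a minimal sketch of the kind of inner loop a token-threaded Forth-style interpreter runs. It's purely illustrative (not an actual CP-1610 or 6502 Forth), but every step is one of the primitives compared above: a byte fetch through a pointer, a table lookup, and a compare-and-branch.

#include <stdio.h>

typedef void (*prim_fn)(void);

static int running = 1;

static void prim_hello(void) { printf("hello\n"); }
static void prim_done(void)  { running = 0; }

/* token -> primitive dispatch table */
static const prim_fn dispatch[] = { prim_hello, prim_done };

/* a tiny "program": hello, hello, done */
static const unsigned char program[] = { 0, 0, 1 };

void inner_interpreter(void)
{
    const unsigned char *ip = program;  /* threaded-code instruction pointer */
    while (running) {                   /* compare-and-branch                */
        unsigned char token = *ip++;    /* indirect byte fetch               */
        dispatch[token]();              /* table lookup + indirect call      */
    }
}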

 

Now, if we were to look at the original CP-1600 (of which the CP-1610 is a detuned variant), it ran at 2x the clock rate. (2MHz max instruction cycle rate vs. 1MHz for the CP-1610.) Many of the disadvantages relative to the 6502 evaporate when you can run the CPU at twice the speed, and the modest advantages and ties seen above become clear advantages. The CP-1610, however, is hobbled by its low clock rate; the CP-1600 architecture is meant for higher clock rates. The 6502 shines despite its own modest clock rate and pulls ahead of the CP-1610 easily.


As far as I can see, that snippet of 6502 code seems correct. The addition of the two LSBs may leave a carry bit, which is taken care of during the addition of the two MSBs. Your note on setting up registers beforehand of course applies in both cases, as the zero page on the 6502 usually isn't preset with useful pointers into your part of memory. I don't know how much of the zero page the KCS uses for its own needs, but Microsoft BASIC implementations generally seem to use a good chunk of it for a couple of floating-point FACs, the CHRIN routine and other stuff, so the amount of free space may be significantly less than the whole 256 bytes. Of course, if you don't require BASIC but just use the CPU for your own machine code, you are freer in how you use that memory.


I really doubt the 6502 is a great C compiler target.

I disagree. Having written a bunch of games in "C" (CC65 variant) on the Atari 7800 (using a hand-optimised assembly language library to access machine functionality), it's certainly possible to make compelling games. Like all embedded compilers for 8-bit systems, you have to tune your programming style to what the compiler is best at dealing with and to the underlying CPU architecture.


I disagree. Having written a bunch of games in "C" (CC65 variant) on the Atari 7800 (using a hand-optimised assembly language library to access machine functionality), it's certainly possible to make compelling games. Like all embedded compilers for 8-bit systems, you have to tune your programming style to what the compiler is best at dealing with and to the underlying CPU architecture.

 

OK, let me amend that statement: A great C compiler target for code like Dhrystone or other C code that you're likely to find laying around, not written specifically for the 6502.

 

I had a similar experience with the DSP family I worked with: If you wrote C code specifically for that DSP, you could make it fly. But, if you tried to compile standard benchmarks "off the shelf," you could see a pretty big performance delta relative to more general purpose machines like ARM.


 

OK, let me amend that statement: A great C compiler target for code like Dhrystone or other C code that you're likely to find laying around, not written specifically for the 6502.

 

I had a similar experience with the DSP family I worked with: If you wrote C code specifically for that DSP, you could make it fly. But, if you tried to compile standard benchmarks "off the shelf," you could see a pretty big performance delta relative to more general purpose machines like ARM.

So, let me see if I follow: a C compiler for the CP-1610 will not be good because, if you try to run generalized, "off-the-shelf" benchmark code on it, it will run poorly, thereby proving that the C compiler is not great on that platform.

 

Did I get my circular logic right?

 

I remember some time, not too long ago, when some people thought that even BASIC would not be that useful on the Intellivision.

 

dZ.


My opinion is that the CP-1600 is a better match for the machine model assumed by most 16-bit C code. That isn't too surprising: the CP-1600 is a cut-down of the PDP-11, on which the first C compilers were developed.

 

If you were to take Dhrystone and compile it for 6502 and CP-1600, I would expect better Dhrystone performance relative to the clock rate on CP-1600.

 

Let me put some technical reasons behind my post for why I feel that way. I'm not trying to make an appeal to authority here.

 

Off the shelf C code assumes int is the most efficient type to use for most things, and so benchmarks like Dhrystone use int for loop counters and arithmetic. Likewise for pointer arithmetic. Furthermore, the arithmetic rules in C guide most integer expressions to int.

 

So, most of the computations in Dhrystone—when compiled for 16-bit int and 16-bit pointers—will be 16 bits. I say "when compiled for 16-bit int and 16-bit pointers," as that would be true for both CP-1600 and 6502 due to the rules of C. That happens to match the machine word size of the CP-1600. C doesn't allow for an 8-bit int or short; you must use char for that. This puts the 6502 at a disadvantage on a benchmark that just assumes int and pointer arithmetic is the most efficient for the architecture.
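
A concrete illustration of those rules, which is plain standard C rather than anything Intellivision-specific: even arithmetic on char operands gets promoted to int, so a 6502 compiler can only keep it 8-bit when it can prove the narrower result is identical.

unsigned char a, b;

unsigned char sum(void)
{
    /* a and b are promoted to int before the add, so the addition is
     * (at least) a 16-bit operation; the result is only narrowed back
     * to unsigned char by the return.  A clever compiler may shrink
     * this, but the language starts it at int width. */
    return (unsigned char)(a + b);
}

int lookup(const int *table, unsigned char i)
{
    /* The subscript is converted to an integer type and the address
     * arithmetic happens at pointer width: 16 bits on both the
     * CP-1600 and the 6502. */
    return table[i + 1];
}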

 

To carry multiple 16-bit variables around, the 6502 must keep them in memory. A good C compiler, however, could keep hot values in registers. (Recent improvements to IntyBASIC aim to do exactly that.) That plays to a CP-1600 strength: the larger register file. Of course, the 6502 does have the zero page, and if you use that as a large register file in the compiler, perhaps that helps. I'm not sure, though, to what degree the zero page is a compiler-managed resource vs. the programmer specifically allocating variables there.

 

Generic benchmarks like Dhrystone have never heard of the zero page.

 

I imagine (and I'm sure GroovyBee or someone with 6502 C compiler optimization experience could expand on this), C code optimized for 6502 would focus more on 8-bit variables where possible, with dynamic pointers statically allocated in the zero page (to leverage (zp),Y mode as much as possible), and arrays of structures broken apart into parallel arrays (to leverage abs,X and abs,Y addressing modes as much as possible.) In fact, I remember John Carmack mentioning that latter optimization for 6502 specifically.
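
To make those last two tweaks concrete, here's a hedged sketch (the types and names are illustrative, not from an actual 6502 codebase) of the parallel-array rewrite that plays to abs,X / abs,Y addressing:

/* Array of structures: the "natural" C layout.  Every field access
 * needs index * sizeof(struct) scaling, which the 6502's addressing
 * modes give no help with. */
struct sprite { unsigned char x, y, frame; };
struct sprite sprites[16];

unsigned char get_x_aos(unsigned char i) { return sprites[i].x; }

/* Parallel arrays: each field becomes its own table, so an access is
 * a single absolute-indexed load (LDA sprite_x,X on the 6502). */
unsigned char sprite_x[16], sprite_y[16], sprite_frame[16];

unsigned char get_x_soa(unsigned char i) { return sprite_x[i]; }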

 

None of those tweaks are needed to improve CP-1600 performance on C code. (OK, the parallel-array vs. array-of-structures tweak may help, to avoid scaling array indices.) The default model that code like Dhrystone is written for fits the CP-1600 more naturally than the 6502.

 

And why Dhrystone? I actually kinda hate Dhrystone. But it's well understood, and it turns out you can actually predict the out-of-the-box performance of a surprising amount of architecture-neutral customer code by knowing how well a given architecture does on Dhrystone. It does overemphasize string operations, and it poorly represents heavy-math workloads. But it does capture many aspects of control code. At the risk of sounding like an appeal to authority, I have spent waaaay too many hours staring at and analyzing Dhrystone as part of my previous $DAYJOB.

Edited by intvnut

My opinion is that the CP-1600 is a better match for the machine model assumed by most 16-bit C code. That isn't too surprising: the CP-1600 is a cut-down of the PDP-11, on which the first C compilers were developed.

Personally, I think that the MSP430's GCC "C" compiler would make a good starting point for an adaptation to the CP1610.

 

If you were to take Dhrystone and compile it for 6502 and CP-1600, I would expect better Dhrystone performance relative to the clock rate on CP-1600.

Benchmark results are also dependent on the maturity of the compiler and the compiler vendor.


My opinion is that the CP-1600 is a better match for the machine model assumed by most 16-bit C code. That isn't too surprising: the CP-1600 is a cut-down of the PDP-11, on which the first C compilers were developed.

 

OK, I'll buy that. However, that's different from suggesting that a C compiler for the CP-1610 would not be good at all, which is what I thought you had suggested above and in the past.

 

Ultimately, the applications for which it will be employed (e.g., games) are rather narrow and could be optimized with specialized, hand-tuned libraries.


Actually, I personally think a C compiler would be great for the CP1600, and it's something I've toyed with. Raw C, though, doesn't provide any support libraries for game development, and for Intellivision development specifically, it doesn't provide a great way to talk about things like 8-bit RAM vs. 16-bit RAM and leverage that distinction. But, if you know what you're doing, I think you could do some interesting stuff from C on the CP1600.

 

In contrast, IntyBASIC has filled that niche nicely for the Intellivision. It's much better targeted at the Intellivision's specific capabilities and strengths. It provides libraries and direct support for Intellivision hardware, and is much more familiar and forgiving to a broad audience of potential game-writers.

 

My objection to C vs. insert-language-here is that if our goal was to increase the audience of game writers, C wouldn't have as much impact as a game-specific language or framework.


I think a C compiler for the CP1610 would be nearly as efficient as IntyBASIC.

 

Local variables would be extremely inefficient, at least 3 instructions per access, so I suspect most people would end up using static variables.

 

 

MVII #12,R1   ; offset of the local within the stack frame
ADDR SP,R1    ; R1 = SP + 12, the address of the local
MVI@ R1,R0    ; load the local into R0
 
ADDI #5,R0    ; operate on it (e.g. local += 5)
 
MVII #12,R1   ; recompute the address...
ADDR SP,R1
MVO@ R0,R1    ; ...and store the result back

 

Another thing is the small stack size available, around 24 words, unless you are using LTO-Flash or Hive 16-bit memory.

 

Also, I foresee using char/unsigned char for accessing 8-bit memory, short/int for 16-bit memory, and long for 32-bit support.

 

Some libraries would get supported but not stdio.h and string.h.


Actually, I personally think a C compiler would be great for the CP1600, and it's something I've toyed with. Raw C, though, doesn't provide any support libraries for game development, and for Intellivision development specifically, it doesn't provide a great way to talk about things like 8-bit RAM vs. 16-bit RAM and leverage that distinction. But, if you know what you're doing, I think you could do some interesting stuff from C on the CP1600.

 

In contrast, IntyBASIC has filled that niche nicely for the Intellivision. It's much better targeted at the Intellivision's specific capabilities and strengths. It provides libraries and direct support for Intellivision hardware, and is much more familiar and forgiving to a broad audience of potential game-writers.

 

My objection to C vs. insert-language-here is that if our goal was to increase the audience of game writers, C wouldn't have as much impact as a game-specific language or framework.

 

We're connected :lol:

 

I think I can redesign the IntyBASIC expression parser and code generator for a C-like language very close to what we expect as C.


I think a C compiler for the CP1610 would be nearly as efficient as IntyBASIC.

 

Local variables would be extremely inefficient, at least 3 instructions per access, so I suspect most people would end up using static variables.

 

 

Yes, automatic variables (local variables on the stack) are a general problem on the CP1610. You really need a decent register allocator to keep the hot variables in registers.

 

In terms of a 6502 vs. CP1600 comparison—the original purpose of this digression—I don't think the 6502 has much of an advantage here. If memory serves, you can do TSX followed by LDA $100,X or the like and get something similar to your sequence above. And since you get indexing for a mere 1-cycle penalty (e.g. LDA $101,X, LDA $102,X, etc. to access different elements of a stack frame), the 6502 does get the benefit of low-cost random access to a stack frame.

 

The CP1600 really needs an indexed addressing mode.


 

Also, I foresee using char/unsigned char for accessing 8-bit memory, short/int for 16-bit memory, and long for 32-bit support.

 

 

That gets tricky, actually. You kinda need sizeof(int) == 1 (due to word-oriented memory). You now have to decide whether CHAR_BITS is 8 or 16, and you quickly start to run into competing assumptions about what sizeof tells you vs. the relative bit-lengths of things. Stuff like struct layout also gets weird, potentially, unless CHAR_BITS is 16. And what happens to char variables on the stack?

 

C assumes there's one minimal addressable unit and everything is built from multiples of that, but the Intellivision has two.

 

I'm not saying you couldn't come up with reasonable answers for the questions, but I think it's fair to say that an efficient C implementation for Intellivision specifically (vs. the CP1600 in an idealized, large 16-bit RAM system) would have many caveats.

Edited by intvnut

 

Yes, automatic variables (local variables on the stack) are a general problem on the CP1610. You really need a decent register allocator to keep the hot variables in registers.

 

In terms of a 6502 vs. CP1600 comparison—the original purpose of this digression—I don't think the 6502 has much of an advantage here. If memory serves, you can do TSX followed by LDA $100,X or the like and get something similar to your sequence above. And since you get indexing for a mere 1-cycle penalty (e.g. LDA $101,X, LDA $102,X, etc. to access different elements of a stack frame), the 6502 does get the benefit of low-cost random access to a stack frame.

 

The CP1600 really needs an indexed addressing mode.

 

Another thing is function calling; it should be simplified, if possible, by keeping the first arguments in registers. As with IntyBASIC, programmers eager for performance would probably stick to functions with no arguments, or use sequences of PUSH R0 before calling and a simple ADDI #2,SP to clean up the arguments on return.

 

 

; function(a, b, c, d);
MVI a,R0
MVI b,R1
MVI c,R2
MVI d,R3
CALL _function
 
_function: PROC
BEGIN
; Save arguments in stack for further processing or try to play around with registers for simple functions
; More processing needed to try to avoid BEGIN/RETURN
RETURN
ENDP

 

That gets tricky, actually. You kinda need sizeof(int) == 1 (due to word-oriented memory). You now have to decide whether CHAR_BITS is 8 or 16, and you quickly start to run into competing assumptions about what sizeof tells you vs. the relative bit-lengths of things. Stuff like struct layout also gets weird, potentially. And what happens to char variables on the stack?

 

C assumes there's one minimal addressable unit and everything is built from multiples of that, but the Intellivision has two.

 

I'm not saying you couldn't come up with reasonable answers for the questions, but I think it's fair to say that an efficient C implementation for Intellivision specifically (vs. the CP1600 in an idealized, large 16-bit RAM system) would have many caveats.

 

Ok, you made my head explode before even starting :lol:

 

I would avoid the matter entirely and start with int variables only ;)


 

Ok, you made my head explode before even starting :lol:

 

I would avoid the matter entirely and start with int variables only ;)

 

The times I've explored this, I decided to make CHAR_BITS == 16 and set sizeof(char) == sizeof(short) == sizeof(int) == 1. I planned to leave 8 bit memory to assembly libraries and purpose-built C code that knew the rules were different for that address range. The compiler would be oblivious, and most code would stick to 16-bit RAM.

 

And yes, to really be effective, that model assumes a fair bit of 16-bit RAM available.
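
For what it's worth, some word-addressed DSP compilers (TI's C54x family, for instance) take exactly this route, so there's precedent. A hedched-aside note: the standard limits.h macro is spelled CHAR_BIT. Here's a sketch of what that model looks like from the C side (no such CP1600 compiler exists yet, of course):

#include <limits.h>

/* Under the model described above:
 *   CHAR_BIT      == 16
 *   sizeof(char)  == sizeof(short) == sizeof(int) == 1
 * sizeof still counts addressable units; the units just happen to be
 * 16 bits wide.  Code that assumes sizeof(x) * 8 == "bits in x" is
 * exactly the code that breaks. */

#if CHAR_BIT == 16
typedef unsigned char  word_t;   /* a "char" is already a 16-bit word */
#else
typedef unsigned short word_t;   /* conventional 8-bit-char target    */
#endif

word_t backtab_shadow[240];      /* 240 units of 16-bit RAM either way */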

 

 

 

 

 

Another thing is function calling; it should be simplified, if possible, by keeping the first arguments in registers. As with IntyBASIC, programmers eager for performance would probably stick to functions with no arguments, or use sequences of PUSH R0 before calling and a simple ADDI #2,SP to clean up the arguments on return.

; function(a, b, c, d);
MVI a,R0
MVI b,R1
MVI c,R2
MVI d,R3
CALL _function
 
_function: PROC
BEGIN
; Save arguments in stack for further processing or try to play around with registers for simple functions
; More processing needed to try to avoid BEGIN/RETURN
RETURN
ENDP

 

I agree on passing arguments in registers as a default. I also played with an idea of using a common prolog to functions to allow some number of fixed-at-compile-time arguments, so you could do tricks like this:


; All 3 args in regs:
    MVI  a, R1
    MVI  b, R2
    MVI  c, R3
    CALL foo.0


; All 3 args fixed at compile time
    CALL foo.3
    DECLE a, b, c

; Last argument fixed at compile time
    MVI a, R1
    MVI b, R2
    CALL foo.1
    DECLE c


foo PROC
@@3 MVI@ R5, R1
@@2 MVI@ R5, R2
@@1 MVI@ R5, R3
@@0 ; function body
    ...
    ENDP


That ends up mostly being a code-size optimization, though, rather than a real cycle savings, and it's a layer of complexity we don't need if we assume "ROM is cheap."


For what it is worth, there is an unofficial port of cc65 for the VTech Creativision. I'm not sure why it never became official, but it exists. I don't know whether the Keyboard Component can run arbitrary machine code or would benefit from a cc65 port as well. Also, I understand it is far rarer than the Creativision, so perhaps there is no real point in trying to port the language and write STIC etc. libraries for it.


 

The times I've explored this, I decided to make CHAR_BITS == 16 and set sizeof(char) == sizeof(short) == sizeof(int) == 1. I planned to leave 8 bit memory to assembly libraries and purpose-built C code that knew the rules were different for that address range. The compiler would be oblivious, and most code would stick to 16-bit RAM.

 

And yes, to really be effective, that model assumes a fair bit of 16-bit RAM available.

 

 

 

 

I agree on passing arguments in registers as a default. I also played with an idea of using a common prolog to functions to allow some number of fixed-at-compile-time arguments, so you could do tricks like this:


; All 3 args in regs:
    MVI  a, R1
    MVI  b, R2
    MVI  c, R3
    CALL foo.0


; All 3 args fixed at compile time
    CALL foo.3
    DECLE a, b, c

; Last argument fixed at compile time
    MVI a, R1
    MVI b, R2
    CALL foo.1
    DECLE c


foo PROC
@@3 MVI@ R5, R1
@@2 MVI@ R5, R2
@@1 MVI@ R5, R3
@@0 ; function body
    ...
    ENDP


That ends up mostly being a code-size optimization, though, rather than a real cycle savings, and it's a layer of complexity we don't need if we assume "ROM is cheap."

 

I do something similar in P-Machinery 2.0 with the macro library. It doesn't offer a generalized function-calling framework, but it is a framework that facilitates creating function-looking macros which attempt to discern whether the arguments are registers, constants or variables and generate the appropriate calling sequence underneath. Also, if the underlying function expects a specific register as input but the programmer called the macro with a different one, the framework will "MOVR" the value to the appropriate one.

 

A compiler could do fancier stuff since it knows the state of the flow at any point.


Stuff like struct layout also gets weird, potentially, unless CHAR_BITS is 16.

Why? Compilers are allowed to pad structure members to convenient alignment boundaries. Just pad 8-bit chars out to 16-bit words. If the programmer overrides that behaviour with #pragma pack, generate the appropriate structure and the extra code needed to handle packing/de-packing.
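
A sketch of what that would mean in practice; the sizes assume padding each 8-bit char out to a 16-bit word as described (my reading of the scheme, not an existing compiler):

struct player {
    unsigned char lives;   /* 8-bit member, padded out to a full 16-bit word */
    unsigned int  score;   /* 16-bit member, exactly one word                */
    unsigned char level;   /* padded out to a word as well                   */
};
/* Under that scheme sizeof(struct player) comes to 3 words and every
 * member sits at its own word address.  With #pragma pack, lives and
 * level could share one word, at the cost of the compiler emitting
 * shift/mask code to pack and unpack them on every access. */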

 

And what happens to char variables on the stack?

They would also be aligned to 16-bit words on the stack. ARM compilers do this too (except they align to the machine word size, e.g. 32/64 bits).


The compiler would be oblivious, and most code would stick to 16-bit RAM.

The compiler could generate the correct handling automatically. After SCRATCH RAM has been exhausted, the compiler would generate more complex code to handle chars stored in word space (the programmer would probably make use of the attribute directive as a hint to the compiler to achieve that).
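
Assuming a GCC-style port, that hint could look like the usual section attribute; the section names below are hypothetical, purely to show the idea:

/* Hypothetical placement hints for a CP1610 C port: the linker script
 * would map ".scratch" onto 8-bit Scratch RAM and ".sysram" onto
 * 16-bit System RAM, and the compiler could then choose byte- or
 * word-oriented access sequences per object. */
unsigned char lives  __attribute__((section(".scratch")));  /* 8-bit RAM  */
unsigned int  score  __attribute__((section(".sysram")));   /* 16-bit RAM */

/* Objects without a hint land in the default data section, presumably
 * 16-bit RAM. */
unsigned int frame_count;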

