Which begs the question: what Coleco games DID you write??
Answering this would lead to implicit promises I'm not prepared to make.
None of the original games; I just did a couple of commissions. This is only really doable with C code; Z80 assembly would take a lot of work to rewrite for the 9900.
Great game! I didn't quite understand the GCC part. Is it written in C, and was the ColecoVision version also written in C?
Yes, the ColecoVision version was compiled using the SDCC compiler, and the TI port using GCC. The code changes needed were TI versions of the ColecoVision specific functions for graphics and sound, a couple of bug fixes that GCC picked out that SDCC did not, and a couple of workarounds for GCC bugs. It almost worked out of the box (so to speak).
This is great stuff! 2013 is a great year for new high-quality games for the TI-99/4A. If this is ever published as a cartridge with box and manual, then I will buy it.
Thank you. I don't intend ever to port this to cartridge (it would still need the 32k without a lot of work, and I don't know: would a 32k-requiring cartridge be a bit weak for a modern release?). Besides the need to add bank switching to the code (for 8k banks), it uses more RAM than is available in scratchpad, so I'd have to solve that as well. I don't have time for the work it would take, which is why I released it as a disk title instead. I believe CV plans to box up a few special editions with box and manual, though those will be floppy-based.
I had a thought last night about programming cartridges in C versus assembly. When you examine a cartridge, is it readily apparent whether it was coded in C or in assembly? My thought is that when you program in C, the compiler takes care of all of the code and data placement, whereas in assembly the programmer makes those choices in his or her own style. Does that hold up to scrutiny?
Pretty much, yes. But the GCC compiler does frequently surprise me with rather good code; it's why I've adopted it so fully. In general, C code is recognizable for making a lot of jumps, and in particular for its stack use. Once you know what the stack function looks like, you can pick out C pretty quickly, especially at the edges of functions where the stack tends to be set up and cleared. For instance, c99 has a function that pushes onto its stack, so you'll see a lot of "bl *r15" in the code (alternately, there's an 'inline push' option in c99 that will replace this with the two instructions that a push actually is, but I don't recall what they are; it makes the code much larger but much faster). Function calls are generally a bl *r12 followed by the address of the actual function. c99 was a fairly simple compiler with limited optimization, so you'll see a lot of jumping around and steps that could be combined.
DATA GRF1    * grf1()
BL   *R15    * push '1'
BL   *R15    * push '1'
BL   *R15    * push 'c$28'
DATA SAY     * say(1,1,c$28)
(Of course, that's source code; the actual assembly would have all numbers and no labels.)
GCC is a bit smarter, but you'd still start to recognize patterns. As a modern optimizing compiler, it's capable of re-ordering code and even removing any code blocks or function calls deemed unnecessary. In particular, it's surprisingly smart about the stack (to me, anyway), skipping it when possible for function calls and preferring registers. But all compilers are dumb somewhere; after looking at the code for a while you'd start to work it out. The stack-based storage is the main giveaway — not that a programmer would never do that in assembly, but it's less common.
dect r10 * make room on stack
mov r11, *r10 * save return address
li r1, >A * R1=10
mov r1, r2 * R2=R1 (also wanted 10, compiler knew this was preferable to another LI)
li r3, LC0 * R3=LC0
bl @writestring * writestring(10,10,LC0)