apersson850
Everything posted by apersson850
-
That's what the PME does. Take a look at the label FETCH in the first code window in my first post in this thread. But doesn't Forth need more than 256 opcodes? The whole idea being that you define new words all the time. Or is there more to it than the obvious? There are not as many as 256 p-codes, so that's not an issue for the p-system.
-
I was referring to Forth on the TI. I've not seen any byte-coded Forth there. Does it exist in some version?
-
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
Somewhat like the arithmetic IF statement in Fortran. -
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
You have to do DEF TRUE=-1 DEF FALSE=0 -
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
No, that's neither the case nor the problem. What happens when you enter IF X THEN is that the computer executes an implicit IF X=TRUE THEN. That's already taken care of. When you write IF X<5 THEN, the computer actually evaluates IF (X<5)=TRUE THEN for you. That's why you (probably) do not write IF (X<5)=TRUE THEN in your programs. The comparison against TRUE is implied. Thus this is as portable as it gets.

What is not portable is assuming that true and false have specific numeric values. That's mainly a problem in BASIC, where there are no specific boolean values. That's why IF X<5 THEN P=P+80 can be replaced by P=P-80*(X<5) in TI Extended BASIC, but it may fail on a different machine. That's also why both A and NOT A can be true at the same time, if A contains a value that's neither the one used for true nor the one used for false in this particular BASIC.

As stated before, testing if a value exists is pointless in many languages, as it does, by definition, exist as soon as you mention it. Or you get an error for mentioning it before it's declared. Pascal can create variables on the fly, but in doing so, you have to have a pointer referring to the variable. Writing code like if pointer then, where you want to know if the pointer is referring to something, never works. Neither does just writing if pointer<>nil then, since a variable in Pascal can have any value when the program starts. You have to explicitly set pointer := nil when your program starts. If a new(pointer) is then executed, the pointer variable will change to something that's not nil. Note that in this case there's never any question about whether pointer exists. What may change is whether pointer^ exists or not. But again, whether such a variable has been created or not, i.e. if pointer is nil or not, it's still a logical test in the end. What you are actually asking is if (pointer<>nil)=true then.
But you don't write it like that, since it's assumed that the if statement compares the result to true, whatever true may be represented with in a particular machine. As you can see, this behavior is the only consistent one. Allowing for IF X THEN as a test for whether X has been used before would be a deviation from logic and consistency. It should be handled by constructs like IF EXISTS(X) THEN or if x.exists then. Or, as you prefer, IF EXISTS(X)=TRUE THEN or if x.exists=true then. -
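The arithmetic replacement for IF X<5 THEN P=P+80 can be modeled in a few lines. This is a Python stand-in for TI Extended BASIC's semantics, assuming true evaluates to -1 and false to 0 as described above; the function names are mine:

```python
# Model of TI Extended BASIC comparison results (assumed TRUE = -1, FALSE = 0).
TRUE, FALSE = -1, 0

def lt(a, b):
    """What the BASIC expression (A<B) evaluates to."""
    return TRUE if a < b else FALSE

def step(p, x):
    """P=P-80*(X<5): adds 80 to P exactly when X<5, with no IF at all."""
    return p - 80 * lt(x, 5)
```

With p=100: step(100, 3) subtracts 80*(-1) and yields 180, while step(100, 7) subtracts 80*0 and leaves 100 — the same effect as the IF statement, which is exactly why the trick is not portable to a BASIC with different truth values.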
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
In a program that runs a couple of concurrent processes, and then lets them do the work, you could perfectly well have

start(processA);
start(processB);
while true do;

That's what I would write, not ending it with while true=true do; The last statement prevents the main program from stopping, and thereby from implicitly killing the spawned processes. In the kind of BASIC we have here, you can't write X="". You must use X$="". And string variables can't be used "stand alone" in an IF statement. In many BASIC versions, and especially for computers like the 99/4A, you don't declare variables at all. They are put into the symbol table as soon as they are encountered, or during a pre-scan. Which means that as soon as you write IF X you have mentioned X, and it's implicitly declared. In a language like Pascal, writing if x when x has not been declared will be trapped already by the compiler, so it will never get a chance to execute. The question of whether a variable is declared seems redundant, at least as far as the 99/4A is concerned. -
I can't edit my first post any longer. There's a point that should be a comma in the code that interprets DUPI. I also want to clarify that the purpose of the NOP instructions in the main interpreter is to make sure the parameter retrieval code has the same addresses for all interpreter versions.

When running the interpreter which handles code in VDP RAM, the code isn't fetched by

MOVB *IPC+,R1

The code used is instead

INC  IPC
MOVB *R13,R1

where R13 contains the read data address for the currently used memory, VDPRD in this case. VDPRD does autoincrement by itself, but to keep track of how far into the code the PME has advanced, it must increment the instruction pointer, R8, explicitly. They have tried to use these NOP instructions as fillers only, but in some cases they are executed, something which of course slows down the system further. Had the 99/4A had CPU RAM only for program execution, and let the VDP manage the video memory for just that, video, this complication would never have existed.

When the PME branches to code in a different type of memory, it replaces the whole inner interpreter with a new version. Well, that's not always true, since the same interpreter is used to execute p-code from VDP RAM and GROM. But it changes when changing to and from CPU RAM. Which in turn actually makes the p-code execute slower in the standard case, as p-code is by default loaded into the code pool in VDP RAM, and that interpreter has to execute more instructions.
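The difference between the two fetch paths can be sketched as a toy model. This is a Python sketch under my own naming, not TI's code: CPU RAM is read directly with the autoincrementing *IPC+ mode, while VDP RAM is read through a port like VDPRD that steps its own address, so the instruction pointer must be stepped separately:

```python
# Toy model of the PME's two fetch strategies (class and function names assumed).
class AutoIncPort:
    """A read port that autoincrements its own address, like VDPRD."""
    def __init__(self, mem, addr=0):
        self.mem, self.addr = mem, addr
    def read(self):
        b = self.mem[self.addr]
        self.addr += 1          # the hardware steps the address by itself
        return b

def fetch_cpu(mem, ipc):
    # MOVB *IPC+,R1: one instruction reads the byte and steps the pointer
    return mem[ipc], ipc + 1

def fetch_vdp(port, ipc):
    # INC IPC, then MOVB *R13,R1: the port advances itself, but the PME
    # still has to keep IPC in step explicitly
    return port.read(), ipc + 1
```

Both paths deliver the same byte stream; the VDP path just needs the extra bookkeeping instruction, which is part of why code in VDP RAM runs slower.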
-
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
It isn't a variable name, it is a reserved word. Which version would you use? -
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
That's up to you, of course. Would you also write IF TRUE=TRUE THEN instead of the more obvious IF TRUE THEN when you need to create something that runs forever? To me, the first one seems much more stupid than the second. In languages with strong typing, like Pascal, you can only do repeat xxx until done; assuming done is declared as a boolean, of course. But since BASIC usually can't tell the difference between a number and a boolean, that's what you get. If you type in

A=6
PRINT A=7

then the printed result is 0. PRINT A<10 will give you the result -1. So the fact that all logical statements evaluate to a number, and that number is then tested, implies that testing only a number is equally valid. This is not any odd characteristic, it's a logical extension of how the system works. But as you write, you can't assume that "null" is zero. In a Pascal system, there's a system constant for nil. It's usually zero, but to be sure things work you have to test if pointer=nil then.

The danger with using this syntax in TI BASIC is this:

A = 0       now A is false
B = -1      now B is true
C = 1       now C is true too
A = NOT A   this works, A is now true
B = NOT B   this works, B is now false
C = NOT C   this doesn't work, C is still true, because the bitwise complement of 1 isn't zero

Since Pascal completely separates boolean variables from bitwise logical manipulation of integers, this is not an issue there. -
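The A/B/C pitfall above can be checked directly. This is a Python model under the assumptions stated in the post: TRUE is -1, FALSE is 0, and NOT is the bitwise complement (the function name is mine):

```python
# The NOT pitfall, modeled in Python (TRUE/FALSE values and the bitwise
# interpretation of NOT are assumptions taken from the discussion above).
TRUE, FALSE = -1, 0

def basic_not(v):
    return ~v    # complement every bit: ~0 == -1, ~-1 == 0, ~1 == -2

a, b, c = 0, -1, 1
# basic_not(a) gives TRUE and basic_not(b) gives FALSE, as expected,
# but basic_not(c) gives -2, which is nonzero and therefore still "true".
```

NOT only round-trips on the two canonical values; any other nonzero number stays nonzero after complementing, so both C and NOT C test as true.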
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
That's normal behavior in every language I use. Including variants of BASIC. So no, it's rather the standard. The IF statement evaluates an expression. If it comes out to -1 (in many cases, as it's the bitwise complement of zero), it's true. Some systems use 1 and 0, not -1 and 0. You can do IF A+7 THEN and it will be true in all cases except when A is -7. You can make it more complex, like IF (A=3)+(B=12) THEN, which would be equivalent to IF A=3 OR B=12 THEN. The same goes for Pascal, for example, where if done=true then is the same thing as if done then. -
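The (A=3)+(B=12) equivalence can be verified with a small model. A Python sketch, again assuming comparisons yield -1 or 0 (function names are mine):

```python
# Why IF (A=3)+(B=12) THEN acts as an OR: each comparison yields -1 or 0,
# so the sum is nonzero exactly when at least one comparison is true.
def eq(a, b):
    return -1 if a == b else 0

def fires(a, b):
    """Whether IF (A=3)+(B=12) THEN would take the branch."""
    return (eq(a, 3) + eq(b, 12)) != 0
```

Note the sum can be -2 when both comparisons hold, which is still nonzero, so the OR reading survives even in the both-true case.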
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
That's not true. IF X THEN 40 is a perfectly legal statement. It evaluates to a jump to line 40 if X is true, which normally means X is -1, but will work for anything that's not zero. -
No, it's not incorrect. But you misunderstood what I was referring to. I was referring to the fact that you get this kind of support for overlapping different code in memory in your user's application. GPL and the p-system are completely separate, though. GPL only makes sure there is a branch to the startup code on the p-code card; after that GPL, or its interpreter, isn't executed any more, until you exit the p-system. The only software in the console that's used by the p-system is some ROM routines for floating point math.

The p-system IV.0 can use up to six disks. But since TI had a controller which could handle no more than three, they disabled the last three. The simplest thing you can do is add a controller capable of using four disks, like the CorComp I have. In such a case it's the same controller for the extra disk, so the p-system's table for the fourth disk only has to be populated with the same pointer as for the first three. It's a bit more complex when you add a disk that's handled by a different controller, as the pointer in the unit table in the p-system then can't be the same. You also have to create a PCB (Peripheral Control Block) somewhere in VDP RAM for this new controller. It's similar to a PAB in the normal operating system, just with some more information. Then it's doable.

I'm using two RAMdisks with my p-system. One is a Horizon RAMdisk. But the p-system uses different code to access peripherals than the console does. The current DSR for the Horizon card looks into data used by the console OS when accessing a disk. Since the p-system accesses a disk with code running in a different location, this assumption in the DSR for the Horizon card fails. This was done to save a few bytes in the DSR, since memory space is at a premium on the card. Fortunately, the p-system only needs the sector read/write capability, so it wasn't too difficult to write a DSR that does that in a way compatible with the p-system. You have to re-install the standard DSR when not using the p-system, though.

System IV.0 can't use subsidiary folders, though. Hence a hard drive is of somewhat limited use, as it can only be one drive to the p-system. Unless a DSR is written that simulates several disks on the hard drive. Later versions of the p-system solved this by allowing the creation of virtual disks on a disk. Instead of just being able to create files on a disk, you could create a volume, which in turn could contain files. Only one level deep, though.
-
To avoid misconceptions, I want to clarify that by Pascal I mean UCSD Pascal, since it's the only meaningful implementation for the TI 99/4A. Which in turn implies that Pascal isn't just another language for the TI; it's also a completely different operating system. Which means a different way to work with the computer as well.

But to return to the first question: yes, Pascal is slower than theoretically possible. The fastest possible Pascal program would run at the same speed as a carefully written assembly program doing the same thing. Pascal clearly does not. There are two reasons for this. First, a compiler isn't always as clever as a good programmer. Especially not a compiler with roots in the late 1970's. The UCSD compiler isn't famous for being among the better ones at generating code, even by contemporary standards. Second, the p-system compiler generates p-code, not pure machine code. This doesn't necessarily imply any speed penalty, as it's not too unusual that a compiler generates an intermediate code, which is then eventually translated to machine code. But in the p-system, the p-code is the final stage. Indeed, the p-system implements most of its functionality by executing p-code, since the p-system is mainly written in Pascal. One of the goals of the UCSD p-system was just that: it should be self-compiling, so that new versions of the system can be compiled on the system itself.

Machine code is executed by the TMS 9900 CPU in the computer. P-code, on the other hand, is executed by the PME, short for P-Machine Emulator. This is a virtual CPU, having p-code as its native language. The PME handles data in various locations. Processing like arithmetic functions is done using a stack. Dynamic data that's not relevant to push and pop is allocated on the heap. The system maintains a lot of data structures to keep track of code segments, constant declarations, global data and so on.
In the TI, the p-code is interpreted by the PME. For each instruction, several machine instructions are executed just to find out how to interpret the p-code. Then the actual interpretation is done, and the PME returns to processing the next instruction. This overhead does of course imply a cost in time. Knowing this, several questions usually come up. How much overhead cost is there? Why was it implemented like this? Can you do anything about it? Or is it actually a good idea?

PME structure

To understand the level of overhead, we have to look at how the PME works. At startup, the inner interpreter is loaded to CPU RAM PAD at 8300-8342H. It exists in different versions, depending on whether the p-code it's supposed to execute is in CPU RAM, VDP RAM or GROM. P-code is a byte code, so it lends itself well to being executed from VDP RAM or GROM, but addressing of the two is different, requiring two slightly different PME versions. CPU RAM, in turn, isn't autoincrementing and doesn't require an elaborate address setup, so it's again different. Here is the inner interpreter for code in CPU RAM. It starts at FETCH (an instruction). Further down is code to get various parameters stored after the instruction itself.

FETCH  MOVB *IPC+,R1            Fetch next opcode
       SRL  R1,7                Make word index
       MOV  @PCODTAB(R1),R2     Fetch interpreter code address
       MOV  *R2+,R0
       B    *R0
       NOP
LDUBB  CLR  R4
       MOVB *IPC+,@PASCALWS+9   Lsby R4
       NOP
LDUB   CLR  R3                  Read byte after code
       MOVB *IPC+,@PASCALWS+7   Lsby R3
       B    *R2
       NOP
       CLR  R5
       MOVB *IPC+,@PASCALWS+11  Lsby R5
       NOP
LDUBBG CLR  R4
       MOVB *IPC+,@PASCALWS+9   Lsby R4
       NOP
       CLR  R3
       MOVB *IPC+,R3
       JLT  BIG
       SWPB R3
       B    *R2
BIG    ANDI R3,7F00H
       MOVB *IPC+,@PASCALWS+7   Lsby R3
       B    *R2

The p-system's main workspace, PASCALWS, is located at 8380H. Some of the registers have pre-defined functions.

R8   p-code instruction pointer. IPC
R9   Frame pointer for activation record on stack. ARECP
R10  Stack pointer. SP
R11  Return link.
R12  PME FETCH address pointer. FETCHP
R13  Read data address for currently executing segment. PGRMRD, VDPRD or 0.
R14  Global data frame pointer for current code segment. GLOBDATA
R15  Memory type flag for currently executing p-code.

PCODTAB

This is a table in RAM, indexed by the p-code's opcodes. The first entry contains the address to the code interpreting the p-code with opcode 0, and so on, for all 256 p-codes.

PME execution

P-codes exist in many variants. The simplest is NOP. It does nothing. The interpreter's code looks like this:

NOP    DATA FETCH

As you can see, the PME will read the address to itself and re-execute itself. Five CPU instructions will be used to do this.

One of the simpler ones is DUPI. It duplicates an integer on top of the stack. This is the interpreter's code for DUPI:

DUPI   DATA DUPI+2
       DECT SP               The real work is done here
       MOV  @2(SP),*SP
       B    *FETCHP

The useful work takes two instructions. It takes the PME another six to execute them.

Some p-codes have parameters inline in the code. They are stored right after the instruction. LDCB <UB>, Load Constant Byte, places the value of <UB> on the stack. It looks like this:

LDCB   DATA LDUB
       DECT SP               The real work is done here
       MOV  R3,*SP
       B    *FETCHP

Here the PME executes nine instructions, plus the two doing the work. But fetching the parameter is of course also real work, so it's rather seven for the PME and four to do the job.

Several p-codes are specifically designed to do tasks that frequently occur when executing Pascal programs. Accessing a variable in a procedure that's one or more lexical levels above the current one is one such task. The p-code STR <DB>,<B> stores the value on top of the stack in the variable with offset <B> in the procedure <DB> levels above the current one.

STR    DATA LDUBBG
       MOV  ARECP,R2         Activation record pointer
TRAVAREC
       MOV  *R2,R2           Traverse activation records
       DEC  R4
       JGT  TRAVAREC
       SLA  R3,1
       A    R2,R3
       MOV  *SP+,@8(R3)
       B    *FETCHP

Here the total number of instructions depends on how many lexical levels we must traverse.
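The fetch-and-dispatch cycle described above can be modeled in miniature. This is a Python sketch, not TI's code: the opcode numbers are invented, and a dict stands in for the 256-entry PCODTAB, but the loop shape — fetch a byte, step the instruction pointer, branch through the table — is the same:

```python
# Minimal model of the PME inner loop: opcodes index a handler table, the
# way PCODTAB does. Opcode numbers here are invented for the sketch.
NOP, DUPI, LDCB = 0, 1, 2

def run(code):
    ipc = 0
    stack = []
    def h_nop():
        pass
    def h_dupi():                 # duplicate the integer on top of stack
        stack.append(stack[-1])
    def h_ldcb():                 # load constant byte: parameter is inline
        nonlocal ipc
        stack.append(code[ipc])
        ipc += 1
    table = {NOP: h_nop, DUPI: h_dupi, LDCB: h_ldcb}
    while ipc < len(code):
        op = code[ipc]            # fetch next opcode
        ipc += 1
        table[op]()               # dispatch through the table
    return stack
```

Running run([LDCB, 7, DUPI, NOP]) leaves [7, 7] on the stack. Every iteration pays the fetch-and-dispatch cost before any useful work happens, which is exactly the overhead being counted in the instruction tallies above.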
For a p-code like STR, an average may be 19 useful instructions and five to run them. If we go into more complex instructions, like real arithmetic, the interpreter overhead diminishes. The PME calls the console's floating point operations for these tasks, so there's no difference in math speed or precision compared to BASIC. Even more complex tasks, like calling procedures, require so many instructions that the interpreter overhead diminishes. Especially if the procedures are in other segments. We can see that for the simplest instructions, the execution time may increase by six times, compared to the single instruction doing the work. But for more complex instructions, the increase gets smaller. In the last example, it's a factor of 1.25 or less.

Code size

When the p-system, and the concept of p-code, were first designed, it was for different reasons. One reason was to make it possible to execute the same program on different computers. As long as the computer had a PME, the same p-code could be executed. Since the operating system also runs p-code, not only the application but the entire p-system, with all its utilities, could be transferred to another kind of computer. For us, using the TI 99/4A only, this is only of value if we want to run Pascal code written for other machines, as it's difficult to transfer object code from other systems today. The other reason was to make Pascal possible to run on machines with rather small memories. At that time, 32 Kbyte was a large internal memory in a computer. So compact code was a priority.

P-code is a byte code. Instructions without parameters occupy one byte in memory. TMS 9900 instructions require two, four or six bytes, depending on the addressing. The p-code DUPI requires one byte, the equivalent TMS 9900 code six. LDCB requires two bytes, TMS 9900 code eight. STR requires three bytes, TMS 9900 code 24 bytes. That's somewhere on the limit where it would be better to call a subroutine, but then the difference between the p-code and the assembly code is reduced. The inter-segment calls would most certainly be implemented as subroutine calls anyway, since they imply running hundreds of instructions. Anyway, this difference in memory requirements means that much more complex programs can be loaded into memory than would have been possible if they were compiled to native code. This is similar to Forth's approach, where programs consist of a long line of addresses. But those are words, so each instruction there requires twice as much memory as the p-code.

Improve execution speed

What can be done to improve execution speed? There are a couple of things. Knowing how the compiler generates code is one way. The first local variables are accessed faster than those further down in a procedure's local variable list. Code like x := x+1 executes slower than x := succ(x), because the compiler isn't smart enough to generate an increment instruction on its own. Copy array data using moveleft instead of one element at a time.

The most obvious other way is probably to support Pascal with assembly routines. The call interface is the same for Pascal and assembly routines. Thus you can debug the algorithm first in Pascal, then add external to the procedure declaration, write it in assembly and link that to your program.

A not so obvious way is to let a program convert a time-critical procedure to assembly automatically. It's possible to add a directive to a procedure when compiling, to support a conversion to assembly language. Then you run a native code converter program, which will convert the marked procedures from p-code to assembly automatically. Unfortunately there is no such program for the TI 99/4A, but the p-system does support the converted code. Since the support for executing such converted code is in place, I've played with the thought of creating a native code converter.
The converters that did exist worked in such a way that they translated the instructions where the overhead is large. The simple ones, similar to those I pointed out above. Complicated p-codes, where the interpreter runs hundreds of instructions to execute one p-code, are left alone. As an example, a p-code sequence which does add, subtract and negate with integers, then calls a procedure, contains these instructions:

ADI    Add integer
SBI    Subtract integer
NGI    Negate integer
CXG    Call global external procedure

After a native code generator has processed such a segment, it would look like this:

NAT    Native code follows
       A    *SP+,*SP
       S    *SP+,*SP
       NEG  *SP
       BL   *R11
CXG    Call global external procedure

When encountering such a code segment, the p-code interpreter will run the machine code right after the NAT p-code. BL *R11 returns to the PME, and since it's a BL, R11 will contain information to the PME about where to continue executing p-code. In this case with the complex CXG instruction, which isn't converted.

In this example, 18 machine instructions are reduced to 13. Only 3 do useful work; the others decode and execute the NAT instruction. Fortunately, in most real applications, there are more simple p-codes in a row than just three of them. When it's about executing a loop, the same instructions may also be executed many times, giving more value for the cost. The cost in this example is 3 or 4 bytes of extra memory, depending on whether the NAT code is at an even or odd address. Machine code can only start at even addresses, so in half of the cases, one byte must be jumped over.

Finally, the jump table for interpreting all p-codes is stored in RAM. Thus it's possible to modify all codes. The fact that ATTACH doesn't do anything can be changed, for example.

Summary

This was intended to give anyone interested a background to why the execution speed of Pascal programs is what it is.
A consequence is that as long as you compare irrelevant examples, like looping a thousand times, Pascal will always be slower than the most bare-bones languages, like assembly or Forth. The speed of Forth comes from the fact that the interpreter reads a long row of code addresses, not opcodes. These addresses point directly to what to run (if it's a "final" word), instead of indexing into a table of addresses.

So, what's the point of Pascal then? Apart from its generally good support for making programs that are reasonably well organized, the main advantage is the code size. Since all p-codes are bytes, the code is compact. But the operating system also provides good services for making programs that perform big tasks without running out of code memory. P-code is fully relocatable, even whilst being executed. In case of a memory problem, the p-system can temporarily interrupt execution of a program, move it in memory to free up more stack space, then continue the execution.

But that's not all. Without doing anything more than just using the unit concept, where you separately compile parts of your program, then use them from the main program (or from other units), you invoke the p-system's segment concept. A segment is a piece of code which doesn't have to be in memory unless it's actually executed. When exiting the procedures in the segment, it's marked as unused. If you call procedures in another segment, the first one you used can be erased from memory and the second brought in from disk. Such overlapping of code can be done in other systems too, even in Extended BASIC, but there is no other language for the TI where this happens automatically. If you want to, you can also make certain procedures in your main program removable from memory when they are not needed. Just declare a segment procedure, and you are done.
So when looking at the development of a substantial program, which has a lot of code and also processes a substantial amount of data, then Pascal is fast. What you need to do is already there. Just use it. Then if execution speed is critical for parts of the program, you have to look at writing that in assembly. Or develop and use a native code converter...
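The NAT mechanism from the native-code discussion above can be modeled as a toy interpreter. In this Python sketch the opcode numbers are invented and a callable stands in for the inline machine code; the point is only the control flow, where the interpreter hands off to native code at a NAT marker and resumes interpreting afterwards:

```python
# Toy model of NAT: the interpreter runs normally until it meets the NAT
# marker, then executes the native block stored inline and resumes with the
# p-code that follows (opcodes and layout invented for this sketch).
NAT, ADI, HALT = 0, 1, 2

def run(code, stack):
    ipc = 0
    while True:
        op = code[ipc]
        ipc += 1
        if op == HALT:
            return stack
        elif op == ADI:                # interpreted integer add
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == NAT:                # native code follows inline
            native = code[ipc]         # callable standing in for machine code
            ipc += 1
            native(stack)              # runs directly, then "BL *R11" back
```

An interpreted [ADI, HALT] and a native block doing the same add give the same result; the difference is only that the native path skips the per-instruction dispatch for everything inside the block.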
-
What could they have done better with the 99/4a?
apersson850 replied to Tornadoboy's topic in TI-99/4A Computers
The lead tone is also there to allow the automatic gain control in the cassette unit (not all of them could set that manually) to "home in" on the signal level to expect. When storing in program image format, only one tone is needed, since then all data is one large block. But for individual records, you need to set it each time. -
Strangest place you've ever seen a TI....
apersson850 replied to cbmeeks's topic in TI-99/4A Computers
I used my TI for useful things well into the 90's. At that time I didn't have my own PC, but when I needed one, I hauled a portable home from work. The TI 99/4A is a typical "home" or "toy" thing, with the fragile interconnections between console and expansion box. So it's not too frequently found doing industrial or other critical work. But the programmable calculators preceding it, like the TI 59, you could find literally anywhere. The most unexpected place I found one of these calculators was when I did my military service in Sweden in 1980. In the 26th armoured brigade staff, a TI 59 with the PC-100B printer was used to compute expected casualty rates for different kinds of combat. The output was used to scale the resources needed for medevac and field hospital service. My knowledge of that calculator came in handy, as some of the staff members managed to mess up the magnetic cards with the program, so they had to resort to keying it in from the listing. But they didn't know how to do that. Since I bought mine in 1979, I did know. -
z80 CPU on Corcomp Triple-Tech Card
apersson850 replied to retroclouds's topic in TI-99/4A Computers
The picture where the Z80 has been removed has two Hitachi static RAM chips instead of PROM on the board. Or in the sockets, actually. -
z80 CPU on Corcomp Triple-Tech Card
apersson850 replied to retroclouds's topic in TI-99/4A Computers
I don't have any Triple-Tech card, but I did use one for a little while. I borrowed it from a friend, in order to write a driver for the p-system, so the real time clock on the Triple-Tech card could be used to set the date in the p-system automatically. Unlike the standard operating system in the TI, the p-system tags all files with creation dates. Normally, you have to set the date manually, but if you have a real time clock in the system, you can write some code in the p-system's *SYSTEM.STARTUP file to set the date automatically. You're right about the purpose of the Z80 CPU. It isn't connected to the PEB address or data bus lines, but lives a life of its own, up in the corner of the card, as a print spooler. -
I have a black version 110, with the old style label. Most of my modules are virtual, but I've kept a few real ones. Alas, that one also doesn't allow for recursion. I have tried.
-
Optimize TMS9900 assembly code for speed
apersson850 replied to retroclouds's topic in TI-99/4A Development
The main reason the memory expansion sees limited use by the system is of course that it's just that - an expansion. Hence the system must be able to work without it. It's different with the p-system, which requires the expansion. It makes extensive use of the 8 K RAM, for example, for the system's own purposes. -
Could it be that it was the original version? I have version 110, and I've never seen version 100, but I've read that it does exist.
-
Optimize TMS9900 assembly code for speed
apersson850 replied to retroclouds's topic in TI-99/4A Development
The main advantage of the metric system is that there is only one unit for each quantity. Length has only one unit, the meter; not inches, feet, yards, furlongs, miles and I don't know them all. We (in Sweden) used to have the same mess as you still have in the US today, or maybe even worse, since every region had at least two different local definitions of a mile (that's like having two different definitions of a mile in each state, neither identical to the definitions in any other state), but we changed to the consistent metric system before I was born. -
Well, not all other languages. A difference between a language which supports proper recursion, like Pascal, and languages where you can get away with it, like BASIC or assembly, is that in Pascal, the work of creating a new activation record on the stack for each call is done automatically. So for each invocation of a recursively called procedure, Pascal doesn't only push the five-word activation record (which among other things contains the return address) on the stack, but also space for the function's result as well as all local variables. Which means that if your function is more complex, and requires some local variables inside it to do its math, space for these local variables will be allocated so that each invocation in the recursive call chain has its own set of local variables. That becomes quite messy in BASIC, where you need to prepare an array, and keep track of a (stack) pointer into the array, for each local variable, yourself. For each GOSUB to the recursive procedure you must increment your "stack" pointer into the array, and for each RETURN you have to decrement it.
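The array-plus-pointer bookkeeping described above can be sketched like this. A Python model of the manual approach, not the post's own code: a fixed array stands in for a DIM'd BASIC array, and the pointer is incremented on each "GOSUB" and decremented on each "RETURN":

```python
# BASIC-style manual recursion: an array holds each invocation's local
# variable, managed by a hand-maintained stack pointer (sketch).
def factorial_manual(n):
    locals_stack = [0] * 32   # fixed-size array, like a DIM'd BASIC array
    sp = -1
    # the GOSUB chain: each call saves its own copy of the local variable
    while n > 1:
        sp += 1
        locals_stack[sp] = n
        n -= 1
    # the RETURN chain: unwind, consuming the saved locals in reverse
    result = 1
    while sp >= 0:
        result *= locals_stack[sp]
        sp -= 1
    return result
```

All the pushing and popping that Pascal's activation records do for free has to be written out by hand here, and nothing protects you from forgetting a decrement or overrunning the array.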
-
Move evaluation in games is frequently best done using recursive techniques. The recursive factorial program is written in an ugly manner. This looks better and is easier to follow. The main program would be the same. I skipped printing the intermediate steps in the computation.

1000 REM Recursive factorial
1010 F=F*R
1020 R=R+1
1030 IF R<I THEN GOSUB 1000
1040 RETURN

In a language which supports proper recursion it gets even more elegant.

function factorial(n: integer): integer;
begin
  if n>1 then
    factorial := n*factorial(n-1)
  else
    factorial := 1;
end;

Calling the function would then look like this (using the same variable names as in BASIC):

f := factorial(i);

In most realistic cases, the function should be declared as a real. Considering the word size of our TI, that's most certainly the case.
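For anyone who wants to experiment with the recursive version without a Pascal compiler at hand, here is the same function transcribed to Python (a direct transliteration, nothing more):

```python
# The Pascal factorial above, transcribed one-to-one: the language's own
# call stack carries each invocation's n, no manual bookkeeping needed.
def factorial(n: int) -> int:
    return n * factorial(n - 1) if n > 1 else 1
```

As in the Pascal version, the base case n <= 1 stops the recursion, and each call's n lives in its own stack frame.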
-
Optimize TMS9900 assembly code for speed
apersson850 replied to retroclouds's topic in TI-99/4A Development
Metric is the only sensible way to go. Once your assembly program is in there, you make the rules. It's only if you want to use various system-supplied services that you may be out of luck, if you use memory these services need to work properly. -
That's an excellent example of an application where Mini Memory is useful. One of the more useful combinations you can use on the TI 99/4A is Extended BASIC, using 24 K expansion RAM, with assembly support in the 8 K RAM part. But you can't have both Extended BASIC and Mini Memory at the same time. I have used Mini Memory, or preferably modules with 8 K RAM (or even more, with some bank switching) together with Pascal. The p-system has the advantage that it resides on a card in the PEB. Thus it's not dependent on which module is inserted in the console, but can use anything that gives an advantage there. It's for example very easy to use the Mini Memory as extra variable space, accessible directly from Pascal. But in combination with TI BASIC, the best method would be to use Editor/Assembler to develop software (assembly language) which can then be loaded from TI BASIC with the Mini Memory module inserted. Let TI BASIC run where it's designed to, and spend your clever creativity on the assembly language instead. In this case it has 36 K RAM at its disposal.
