Everything posted by apersson850

  1. I've never really used any TI 99/4A that hasn't been the real thing. But when the 99 was the computer I had, I mainly used the p-system, and Pascal, for my software development. In that case I wrote everything that was doable in Pascal, and that's quite a lot. When the code worked, I used it as it was, if that was satisfactory. If there was a speed problem, I'd think about what part of the code was best to optimize, and then convert that to assembly. Normally I had a good idea already when starting out about which part would be a good candidate for conversion to assembly, so I made sure I wrote that part as a Pascal function/procedure in such a way that it would be easy to convert to assembly. Thus I didn't spend time designing things like data entry or disk access in assembly, since other things around these activities slow them down anyway, like running the drive or me typing on the keyboard. But a procedure to sort numbers or whatever would be very good to convert to assembly, to speed it up. Especially since such tasks often are time consuming.
  2. That makes sense. Here's a stub of an interpreter for a byte-coded (to save memory) stack machine, similar to the PME that runs on the p-code card. SP is the stack pointer, CP is the code pointer. EREC is the current environment record, inside which the currently executing procedure's local variables are stored. Each op-code is assumed to define the instruction completely. Even with this very simple approach, the instruction fetch and decode loop is five instructions. Then add the instruction itself, which for these simple examples adds 2-5 instructions to interpret one instruction code. So here we have ten instructions to accomplish what two could do, or seven to do what one could do, if normal assembly code was running instead of interpreted code on byte level.

     interp: MOVB *CP+,R1          ; fetch next op-code byte (into high byte of R1)
             SRL  R1,7             ; shift down to a word index into the op-code table
             MOV  @INTTAB(R1),R0   ; look up the handler address
             BL   *R0              ; call the handler
             JMP  interp

     ; Code for ADD values at stack, return result on stack
     addc:   A    *SP+,*SP
             B    *R11

     ; Code for INC top of stack
     incc:   INC  *SP
             B    *R11

     ; Code for push integer immediate to stack
     push:   MOVB *CP+,R1
             MOVB *CP+,@R1LBYT     ; R1LBYT = address of R1's low byte in the workspace
             DECT SP
             MOV  R1,*SP
             B    *R11

     ; Code for local store integer (offset into local variable area in byte after instruction)
     locst:  MOVB *CP+,R1
             SRL  R1,7
             MOV  *SP+,@EREC(R1)
             B    *R11
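The fetch/decode/execute loop above can be sketched in Python to show the structure without the register details. This is a minimal model, not the real PME: the opcode numbers (ADD=0, INC=1, PUSH=2, LOCST=3) and the function name `interpret` are my own illustration.

```python
def interpret(code, locals_area):
    """Fetch/decode/execute loop: each opcode is one byte; PUSH and LOCST
    take their operand bytes from the instruction stream, like *CP+ above."""
    stack = []
    cp = 0  # code pointer
    while cp < len(code):
        op = code[cp]; cp += 1
        if op == 0:                        # ADD: add top two stack values
            b = stack.pop(); stack.append(stack.pop() + b)
        elif op == 1:                      # INC: increment top of stack
            stack[-1] += 1
        elif op == 2:                      # PUSH: 16-bit immediate, high byte first
            stack.append((code[cp] << 8) | code[cp + 1]); cp += 2
        elif op == 3:                      # LOCST: pop into local variable at offset
            locals_area[code[cp]] = stack.pop(); cp += 1
    return stack, locals_area
```

For example, `interpret([2,0,5, 2,0,7, 0, 1, 3,0], [0])` pushes 5 and 7, adds, increments, and stores 13 into local variable 0.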
  3. Reading data from GROM or VDP RAM isn't that extremely slow, as long as you read consecutive addresses. But when you start modifying the address, the overhead to load a new address and then read data is substantial. At least the GPL interpreter runs from 16-bit ROM. My console, which has 64 K RAM built in, on a 16-bit wide bus, is about 110% faster than an original TI 99/4A when you run programs that have both code and workspace in expansion RAM, that is, in memory that normally is only 8 bits wide. This gives you an idea of how a "real" Commodore 64 competitor version could have performed, with little change to the rest of the architecture. Of course, sensibly written software doesn't speed up that much, since it has its workspace in RAM PAD, which is already 16 bits wide.
  4. I'm one of the few here who were a member of the Swedish user's group Programbiten. If you want an international touch, don't forget us.
  5. What I was thinking about is the language type in the 99/4A. I presume most p-systems don't have to differentiate between code that must be loaded in the CPU RAM memory expansion vs. code that can be loaded in the video memory. I don't know what happens with a code file generated on a system without this flag being handled.
  6. To have any chance of running, it must produce p-code for the UCSD PME (P-Machine Emulator) version IV. The p-system operating system version numbering uses the Roman numeral to distinguish between different PME versions (different p-codes) and the Arabic digit to distinguish between variants using the same PME. Then you must also make sure the compiled file contains the necessary information to load properly on a TI 99/4A.
  7. TI 99/4A consoles sold in Europe only differed in the video output, not in the character set. We had to define our characters ourselves. It's probably some character code that's represented by the € sign when looked at in a different environment.
  8. The different 8 K memory blocks have their address lines split up, so that they are sent to this or that circuit, depending on the state of their associated CRU bit. Thus console ROM is kept off when RAM is enabled at the same addresses. Due to the implementation of various memory-mapped I/O in the console, I've chosen not to implement any write-through, though.
  9. I should have some pencil drawings from "back then". I think they are still in the attic. Some photos of the install too. Internal memory expansion 1 Internal memory expansion 2 Internal memory expansion 3

     As can be seen, it's the piggy-back principle. As far as I can remember, I got the idea from someone who mounted 64 K RAM to get 32 K RAM (the double capacity was just to make it 16 bits wide). But I thought that when I had 64 K RAM in there, I could just as well make it useful.

     When running software which doesn't rely on ROM in a cartridge, you can use the cartridge memory space, >6000 - >7FFF, as an extra buffer, for example. Since you can switch the RAM in and out as you like, you can even use a cartridge with ROM (like Extended BASIC) and use the same area as an 8 K buffer at the same time. The only demand is that you access the buffer via assembly, from a program that runs somewhere else. Or you can copy the console ROM to RAM, then play with modifying the interrupt vectors and get your own interrupt service routine. Or you can switch the ordinary 32 K RAM in and out, to get an extra buffer there. Or you can let the computer use the ordinary 32 K RAM normally, and the internal one as a buffer, if it's important that the program executes at original speed.

     Being able to overlay all memory with 16-bit wide, non-wait-state RAM means that the speedup of assembly programs is about 110%, if you compare programs that run entirely within the normally 8-bit wide memory expansion (workspace, code, and operands). Most programs have at least the workspace in fast console RAM, and then the speedup is less dramatic.
  10. My own internal memory expansion, 64 Kbytes, can be disabled by setting selected CRU bits. By default, the 64 K RAM is enabled at the addresses where you normally find a 32 K memory expansion, and disabled elsewhere. Then, by setting CRU bits at base address >400, I can enable 8 K RAM blocks overlaying internal console ROM, DSR space etc., to eventually get contiguous 64 K RAM in the console. I can also disable the internal 32 K RAM, which then lets memory accesses go to the external memory expansion, if available. Thus my design allows two 32 K RAM expansions to exist at the same time, with access to either one or the other at any specific moment. I also have hardware, enabled by a CRU bit, which detects VDP accesses and inserts a wait cycle, so that it's not necessary to add NOP instructions to handle the VDP correctly. Via a switch on the outside, the mapping of the internal 32 K RAM expansion can be inverted, if you want the console to start up with the internal 32 K RAM disabled and then enable it with the CRU bit instead. Such a design is thus more flexible than a "fixed" internal 32 K RAM expansion, while still being completely transparent.
  11. Yes, I have. Not too useful, perhaps, but it was an interesting experiment once. I remember the graphics program TI Artist had the ability to add your own input device driver, so there I could use it to its full extent.
  12. The only real use for LOAD and IAQ I've seen was to implement hardware single step debugging capability. A debugger program sets up a shift register, which shifts through a suitable number of IAQ signals, then generates a LOAD (non-maskable) interrupt. The "suitable number" of instructions was chosen to give the debugger what's needed to call the user's program and execute exactly one instruction there, then interrupt back to the debugger to observe the result. The debugger delivered with the Editor/Assembler package actually supports this, but the hardware isn't there in a standard TI 99/4A. It probably is in a TI 990.
  13. I have listed programs to RS232, to be able to save the file in HyperTerminal and then print it on a printer connected to Windows. That's no problem, so handling a file that's a saved program, to load it back later, ought to be possible too.
  14. All this is of course because, as far as the TMS 9900 is concerned, A15 doesn't exist physically. There are only 15 address pins on the device, not 16. When you address a byte in memory at location >1234 or >1235, the CPU will actually do the same thing from a hardware point of view. It will read the word at >1234, then present the left or right byte to you. The fact that the hardware in the TI 99/4A is designed to do actual byte addressing doesn't change how the CPU looks at it. When it comes to CRU bits, as they are hardware devices, you can't connect them to a pin that doesn't exist. Hence these two code examples are equivalent.

      LI  R12,>1F00
      SBO 1

      LI  R12,>1F02
      SBO 0
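The equivalence follows from how the TMS 9900 forms a CRU bit address: the hardware base is R12 shifted right by one (the LSB doesn't reach the address bus), plus the SBO/SBZ displacement. A quick arithmetic check, with the function name invented for illustration:

```python
def cru_bit_address(r12, displacement):
    """Effective CRU bit address: R12 shifted right one bit (A15 does
    not exist physically) plus the signed SBO/SBZ displacement."""
    return ((r12 & 0xFFFF) >> 1) + displacement

# The two sequences from the post select the same CRU bit, >0F81:
# LI R12,>1F00 / SBO 1   and   LI R12,>1F02 / SBO 0
assert cru_bit_address(0x1F00, 1) == cru_bit_address(0x1F02, 0) == 0x0F81
```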
  15. As far as I could see, what I happened to get was already available on the documentation server.
  16. I made my own clock once. A card that plugs into the expansion box. This was before any of the commercially available real time clock cards had been launched. My card is based on the MM58167A from National Semiconductor.
  17. I have a 32 K RAM expansion inside the console, which doesn't interfere with the expansion box. Actually, I can turn off the internal 32 K RAM expansion using a CRU bit, and then the console will run in the 32 K RAM in the PEB, if there is one. My internal 32 K RAM is really a 64 K RAM memory, and with different CRU bits I can turn on one 8 K RAM bank at a time, eventually enabling 64 K contiguous RAM over the whole address range of the TMS 9900. But that hides access to all memory-mapped devices too, of course.
  18. The problem isn't the processing time of the TI, the problem is that the approach is flawed. Anyone who has spent some time on a real race track knows that you need your brake markers a bit before the curve, or it will not work. Not if you are going at the maximum possible speed between the curves, that is. If you aren't, then nothing really matters here. But I agree that accelerometer data could be used to optimize the tables. That's kind of what you do in real racing, when you keep trying the same curve, with adjustments of the brake marker position in between attempts. Brake later and later until you can't take the curve any longer.
  19. To beat most human operators you only need to run at the same "safe speed" all the time. Most human operators run too fast and derail frequently, which gives an average speed below the safe speed. The problem with the accelerometer approach is that to run at optimized speed, you must start slowing down before entering into the curve. You can't detect that with an accelerometer reporting centrifugal force only.
  20. It wouldn't work. When the accelerometer senses sideways acceleration too high, it's too late to do anything about it. The car's inertia will keep the speed above the acceptable level for too long. If you apply dynamic braking in any magnitude that's likely to make a difference in that case, then you'll lose tire grip. Thanks for the compliment. I'm an automation/advanced motion engineer by trade, so I'm quite used to developing stuff like this.
  21. The tuning is table based. There's a different set of tables for each car. Each table starts when the selected car passes a photocell. It then holds pairs of power levels and durations, until the car hits the next photocell, which triggers a new table.

      Just enter a power value low enough to make the car run through the whole track without derailing. Then estimate an optimized level for the beginning of a segment, estimate the time it will last and enter that. Already after doing that, you have good enough control to compete favorably with most people. Then you can fine tune with more segments with different power levels, if desired. I did that mostly with the more advanced cars.

      The good thing with this is that when the table times out, you always end with a power level that's low enough to run all around the track. Thus if the car derails, the table will have timed out before you have time to put it back on track. As the car can run anywhere at this low level, it doesn't matter where it is put back on the track. As soon as it hits the first photocell, it will run from there anyway.

      As I used a few resistors in combination to run the car, it's like having different gears. There were six combinations: level 1 meant no power feed, levels 2-4 connected various resistors, and level 5 was full power. Level 0 disconnected power and instead connected another resistor in parallel with the track, thus providing dynamic braking. The manual handle could do the same thing. Level 3 allowed the cars to go around the whole track without any regulation.

      I've attached a PDF we handed out at the event. Hope you enjoy the Swedish language challenge. Bilbana.pdf
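The fail-safe property described above (the table always times out into a level that can run the whole track) is easy to see in a sketch. Assumptions for illustration: a table is a list of `(power_level, duration)` pairs started at a photocell hit, and `SAFE_LEVEL` stands for the author's level 3; the names are invented.

```python
SAFE_LEVEL = 3  # the level that gets around the whole track unregulated

def power_at(table, t):
    """Commanded power level t time units after the photocell trigger.
    Walks the (level, duration) pairs; after the table runs out, the
    command falls back to the safe level, so a derailed car that is put
    back anywhere on the track simply cruises to the next photocell."""
    elapsed = 0
    for level, duration in table:
        if t < elapsed + duration:
            return level
        elapsed += duration
    return SAFE_LEVEL  # table timed out
```

For example, with `table = [(5, 10), (2, 4)]`: full throttle for 10 units, braking-level 2 for 4 units, then the safe level forever after.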
  22. The BASIC for the TI990 mini computers, on which the 99/4 and 99/4A BASIC versions are based, uses OLD.
  23. The computer-controlled car has a piece of self-adhesive reflective tape on it. If you look at the photos, you'll see a few black blocks in some places. They prevent the photocell from looking across the track and thus seeing cars on a track further away. Of course you could use a 99/4A instead. The purpose of my setup was to show what you could do with an industrial controller paired with a standard slot car race track set. I worked for an R&D department, and displaying what you really do there isn't feasible in most cases. So we decided to show something people could relate to, that kids could play with, and in a way that we didn't have to label "Don't try this at home", but rather could encourage people to try themselves, if interested. We had three pairs of cars. Visitors who wanted to compete got a reasonably easy car to drive, and competed against the computer's "standard" car. Those who I noticed knew slot car racing got the computer's "advanced" opponent. Finally, those who thought the computer's car was lame were matched with the "impossible" one. That one I had tuned to such an extent that it took a few manually run laps for it to reach the correct temperature. Only then did the programmed control for it work properly...
  24. For lack of a working printer with a serial or parallel port, I've used my 99/4A connected serially to a small laptop, which acts as a print server. It reads the incoming text on the serial port (in reality a USB - RS232 converter) and prints it on a networked laser printer. So for sure you can communicate serially between different computers.

      For a game application, I'd recommend setting one of the computers up as a server and the other as a client. At regular intervals, the client writes a pre-determined block of data to the server, which responds by returning a similar block to the client. That's the most efficient transfer mode. If you don't need block transfer, instead set up a short message to be sent at regular intervals from the client to the server. The server will respond with a similar message. The message is coded with a data identifier and some data. By sending data at regular intervals you make the transfer deterministic, which is good in real-time games. Then you queue up data you want to transfer.

      Say you have two players in a game, each played on one computer. The only action you can do is throwing a bomb at a certain location. When no bomb is thrown, you repeatedly send PXxxxx or PYyyyy, to tell the other computer where your player is. If you throw a bomb, the messages BXxxxx and BYyyyy are queued. They will be sent as the next two messages, and then the communication routine returns to sending player positions again. Also add a message ID, like a digit from 0 to 9, incremented at each transfer. The same digit is returned by the other computer each time. This way, both will know that new messages are coming from the other computer, and that they are received and returned. If you feel it's critical, add a checksum to make sure each message is valid. But it doesn't matter in a game application. If you hit the player you may get DXxxxx and DYyyyy back, to tell that he's dead at a certain position.
You can easily define messages like Status Restart, Status Lost, Status Stealth or whatever is applicable to your game. This message structure is similar to what is used in CAN networks in cars. By keeping the transfer routine ticking by itself, eating data from the queue, you can easily separate the tasks of communication and handling the data to send. You need to write some interrupt-driven communication, of course, but that's doable. Ten messages per second serially shouldn't be impossible, and that's pretty good for a game played by a person, if you keep the data simple. As you can perhaps understand, I've implemented exactly this, but in a different context than gaming on a 99/4A.
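The queueing scheme described in this post can be sketched compactly: event messages (bomb throws) go into a queue and are drained first, player position messages fill the idle ticks, and a rolling one-digit ID is prefixed to every transfer. The message bodies (PX/PY/BX/BY) are as described above; the `GameLink` class and its framing are my own illustration.

```python
from collections import deque

class GameLink:
    """One side of the client/server link: called once per communication tick."""
    def __init__(self):
        self.queue = deque()     # queued event messages, e.g. "BX0012"
        self.msg_id = 0          # rolling 0-9 ID, echoed back by the peer
        self.player_x = 0
        self.player_y = 0
        self.send_x_next = True  # alternate PX/PY when idle

    def throw_bomb(self, x, y):
        # Queue the event; the tick routine sends it as the next two messages.
        self.queue.append(f"BX{x:04d}")
        self.queue.append(f"BY{y:04d}")

    def next_message(self):
        if self.queue:
            body = self.queue.popleft()          # events take priority
        else:
            if self.send_x_next:
                body = f"PX{self.player_x:04d}"  # idle filler: position
            else:
                body = f"PY{self.player_y:04d}"
            self.send_x_next = not self.send_x_next
        msg = f"{self.msg_id}{body}"             # prefix the rolling ID
        self.msg_id = (self.msg_id + 1) % 10
        return msg
```

Driving `next_message()` from a fixed-rate (in practice interrupt-driven) timer is what makes the transfer deterministic: the sender's game logic only touches the queue, never the wire.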