Asmusr Posted March 25, 2018

Here's a bit of (untested) assembly code for writing attributes for a sprite to VDP that both supports EC and adjusts the y coordinate. This assumes that the VDP write address has been set up in advance.

       MOVB @COLOR,R0        ; Get sprite color
       MOV  @X_POS,R1        ; Get X coordinate
       JGT  NO_EC            ; Skip if x > 0
       ORI  R0,>8000         ; Set EC bit in color "TAG"
       AI   R1,32            ; Shift 32 pixels
NO_EC  SWPB R1               ; Swap to MSB
       MOV  @Y_POS,R2        ; Get Y coordinate
       DEC  R2               ; Adjust y for screen
       SWPB R2               ; Swap to MSB
* Write to VDP
       MOVB R2,@VDPWD        ; Write y
       MOVB R1,@VDPWD        ; Write x
       MOVB @PATTERN,@VDPWD  ; Write pattern
       MOVB R0,@VDPWD        ; Write color and EC
+mizapf Posted March 25, 2018

Q7: What is "TAG" in Fig 2-20 of the data book? The fourth byte in the sprite attribute table was early clock + color code. How is that a "TAG"?

For lack of a better name. How would you name a data element that contains a color and another function bit unrelated to color?
Airshack Posted March 25, 2018

Here's a bit of (untested) assembly code for writing attributes for a sprite to VDP that both supports EC and adjusts the y coordinate. This assumes that the VDP write address has been set up in advance.

Thanks for clarifying the pseudocode example with this assembly code example, Rasmus. The fog is starting to lift regarding the Early Clock bit. What I think (hope) I'm beginning to understand is:

A. A sprite's horizontal x-coordinate can only be described in the range 0-255, since that's the limit of the second byte of storage in the Sprite Attribute Table. The limit is simply a function of the structure of this table, which allows only one byte for horizontal position, that byte being byte #2 of the sub-block describing the sprite position.

B. Since the sprite origin is the top-left corner, the programmer may wish to turn the Early Clock bit ON to allow the hardware to subtract 32 from the horizontal descriptor (in the Sprite Attribute Table), so sprites can blend in from left to right as they emerge from the left side of the screen. **** The 32-pixel shift is performed by hardware.

C. Your code can manage horizontal ranges from -32 to 255, test for x<0, and invoke the early clock until the sprite is fully visible (x>=0), at which point you would turn the early clock off for that sprite.

So... if I wish for a 16-pixel-wide sprite to have only its right 8 pixels showing on the left side of the screen, I need to set up the Sprite Attribute Table like this:

Byte 1: vertical position (Y), any value 0-255 (except 208)
Byte 2: horizontal position X = 24 (since 24 - 32 = -8)
Byte 3: any name
Byte 4: MSB = 1000 (EC on); LSB = color code

That'll do it, right?
Asmusr Posted March 25, 2018

Byte 4: MSB = 1000 (EC on); LSB = color code

That'll do it, right?

This part is wrong: it's only one byte, so there isn't an MSB and LSB. The bits are like this: EXXXCCCC, where E is the early clock bit, X doesn't matter, and C is color. In hex the value of the early clock bit is >80 and the second hex digit is the color. [edit: that's probably also what you meant!]
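For anyone following along, the EXXXCCCC layout is easy to sanity-check with a couple of lines of Python (a hypothetical helper for illustration, not anything from the E/A package):

```python
def sprite_color_byte(color, early_clock=False):
    """Pack the fourth sprite attribute byte: EXXXCCCC.
    E is the early clock bit (>80); CCCC is the color code 0-15."""
    if not 0 <= color <= 15:
        raise ValueError("color code must be 0-15")
    return (0x80 if early_clock else 0x00) | color

# Dark blue (color 4) with early clock on -> >84
print(hex(sprite_color_byte(4, early_clock=True)))  # 0x84
```

Since the "X" bits don't matter to the VDP, leaving them zero keeps the value readable in a hex dump: the first hex digit is 8 or 0 (EC on or off), the second is the color.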
Airshack Posted March 25, 2018

Right. I was thinking B as in Bit. I understand the EC is set by the most significant bit in the fourth byte. Perhaps my bad for using the wrong phraseology. Yes. Besides this, I must have it correct?
Airshack Posted March 25, 2018 Share Posted March 25, 2018 (edited) I should have said most significant Nibble, least significant nibble. I’ll move along to a few sprite test programs now... Edited March 25, 2018 by Airshack Quote Link to comment Share on other sites More sharing options...
matthew180 Posted March 25, 2018 (Author)

Q2: On to the Sprite Attribute Table... Why is the top ROW of y-pixels located at -1 when the leftmost COLUMN of x-pixels is located at 0? The top-left pixel position is (-1y, 0x)? Why not (0y, 0x)?

This has to do with time and memory access. The 9918A is a scan-line based device, i.e. it only figures out what needs to be displayed on the screen one scan line at a time. Without going into the gory details, there is a lot going on during a scan line to get the picture drawn on the screen. Figuring out which sprites to draw on a scan line takes quite a bit of time, so the VDP does a lot of the sprite processing *during* the current scan line, which means the sprites appear on the following scan line.

For example, during scan line 0, the sprite list is processed and sprites with a Y-coordinate of 0 are remembered, up to a limit of four sprites, since that is the maximum number of sprites the VDP can put on a single line (there are only four sprite shift registers). During the horizontal blank period, the pattern data for any sprites found on that scan line (scan line 0 in this example) are loaded into the sprite shift registers. The shift registers are used during the next scan line (scan line 1 in this example) to provide the pixels that make up the sprite. So during scan line 1, the sprites for scan line 0 are shifted out, while sprites for scan line 1 are being determined.

Thus, sprites always show up one line after their specified Y-position. So to get a sprite on scan line 0, it needs to be processed on line -1, which is the same as a Y-position of 255. Higher level languages can hide this from you, but when working directly with the hardware you have to deal with it yourself. Also note that the collision flag will be off by one line relative to a sprite's Y-position as well.
That is because collisions are detected by the VDP during scan-out of the pixel data from the sprite shift registers.

Q9: Lastly, is the EC something I need to manage (endlessly setting and clearing and setting...) throughout my code? Is this normal, or is it either set or cleared upon initialization of the VDP registers at startup?

I think you are realizing this. You have to manage the early-clock bit in your own code, and update the sprite table if you want to have sprites that can move in from the left of the screen. It is a pain, but that is just how it goes on the old systems. Lots of limitations. The 80's-era coin-op arcade machines were similar, since most of them were 8-bit machines but the screen resolutions were consistently more than 256 pixels wide, so they had to deal with a 9-bit X-position on an 8-bit computer. In a sense, the early-clock bit is just like that: you are dealing with X-positions that are greater than 255, so you have to manage that yourself. The easiest way to deal with it is to keep your sprites confined to the visible display and not "slide in" from the left. You don't have this problem in the Y direction because there are only 192 visible lines, which easily fits into the range of a single byte, with enough range left over to slide in and out from the top and bottom of the active display.

As for a default setting for the bits, there is none. The sprite attribute table is in VRAM (not VDP registers), which is made up of DRAM chips, and the contents of DRAM at power-on or reset are going to be random. It is up to you, the programmer, to initialize all your VDP table data, as well as the VDP registers. Don't assume anything and you won't have any surprises.

**** The 32-pixel shift is performed by hardware.

Everything dealing with the 9918A is done in hardware. It is a task-specific IC and does not have a CPU or any kind of "processing" capability in the general sense.
It consists of a bunch of counters, shift registers, fixed state machines, and random logic. Also, technically, in the case of the 32-pixel shift of the early clock, no shift of the data actually happens. It literally allows the clock signal to the sprite shift registers to happen early in the scan line, hence the name "early clock". If the sprite shift registers start shifting early, then the sprite appears sooner during the scan line.

You can also just think of it in the logical sense without knowing the technical details of how it works. You have one byte for a sprite's X-position, so 0..255, and you also have 0..255 visible screen positions. With the early-clock bit set to 0, the X-position has a range of 0..255, i.e. an unsigned byte. With the early-clock bit set to 1, the X-position is partially signed and represents X-positions from -32..223. The visible screen is always 0..255, so just map accordingly.
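Pulling the whole discussion together, here is a small Python model (hypothetical names, not from any TI library) of how a logical, possibly negative X-position and a logical Y row map to the four sprite attribute bytes, including the one-line Y offset and the early clock:

```python
def sprite_attributes(x, y, pattern, color):
    """Build the four sprite attribute bytes for one sprite.
    x: logical screen column, -32..223 (negative needs early clock)
    y: logical top row, 0..255; the VDP shows sprites one line late,
       so the stored value is y-1 (mod 256)."""
    if not -32 <= x <= 223:
        raise ValueError("x must be -32..223")
    ec = 0x80 if x < 0 else 0x00       # early clock shifts display 32 px left
    x_byte = x + 32 if x < 0 else x
    y_byte = (y - 1) & 0xFF            # logical row 0 is stored as 255
    return [y_byte, x_byte, pattern, ec | (color & 0x0F)]

# 16-pixel-wide sprite with only its right half visible at the left edge:
print(sprite_attributes(-8, 0, 4, 1))  # -> [255, 24, 4, 129]
```

The list is in the same order the bytes are written to VRAM: y, x, pattern name, then the EC+color byte.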
Airshack Posted March 26, 2018

Thanks again guys for helping me stumble through this material. I'll have to write a few routines to test what I've learned from your valiant efforts at instructing me.
Airshack Posted April 2, 2018

How does one test on real metal once a program is assembled and running in Classic99 via the E/A cart? Specifically: how do I port a file from a Classic99 FIAD and/or DSK image to my TI-99/4A? My RS-232 isn't modded, so there's no physical connection between my TI and PC at the moment. What do you guys do in your workflow?

I have a nanoPEB as well as a full-up PEB with a Lotharek SD-based floppy emulator, which uses .hfe. Also, I'm FlashGROM99 capable. In the past I programmed with XB256 & Harry's compiler, created a .bin with Module Creator 2.0, and copied the .bin to the FlashGROM99 SD. Worked great! Things are different with assembly...

Any and ALL suggestions appreciated. My attempts at creating a .bin with Asm994a yield "ADDRESS ERROR: Relocatable address not allowed in cartridge object!" I'm guessing everything has to be fixed in a cartridge binary... which is something I'm also working on understanding. Also, I played around with both TIImageTool and TI99dir and couldn't figure anything out. Thinking the nanoPEB is currently my best option, but not sure of the steps necessary to get this done. I'm probably overthinking something easy.
LASooner Posted April 2, 2018

Isn't there software for the Lotharek drive that converts from DSK to HFE? I haven't done it in a while, but I did it when I was trying out some XB programs on real iron.
Airshack Posted April 2, 2018

Isn't there software for the Lotharek drive that converts from DSK to HFE? I haven't done it in a while, but I did it when I was trying out some XB programs on real iron.

Like you, I haven't Lotharek'd in a good while. I vaguely remember doing something like this via an HFE manager downloaded from Lotharek? Lotharek's site is low on TI documentation.
+mizapf Posted April 2, 2018

Also, I played around with both TIImageTool and TI99dir and couldn't figure anything out.

If you want to create an HFE image for Lotharek, all you do is create a new floppy image in HFE format, then copy your desired files onto that new image (File -> New -> Floppy image, Image type: HFE image). If you come from a DSK image, open that image and copy-paste the files into the HFE image. If you have TIFILES, drag and drop the TIFILES into the HFE image. [Edit: Using TIImageTool]
Airshack Posted April 2, 2018

Using..... TIImageTool! Thanks! I was having little success, with a nice constant mix of DISK ERROR 16s and a few DISK ERROR 6s, until I got the .hfe formatted properly: DSSD, 80 tracks. I believe my problem was with attempting to create double density disks. Apparently the TI disk controller didn't like that? Anyway, thank you for getting me across yet another learning bridge! Progress!
Omega-TI Posted April 6, 2018

Looks GREAT!
+Vorticon Posted May 12, 2018

Hi. So I'm experimenting with the load interrupt process using a mix of assembly and XB. Essentially what I'm doing is having XB call an assembly routine which sets up the vectors for a non-maskable interrupt when the LOAD pin on the side connector (#13) goes low. According to Nouspikel's site, this requires providing a new workspace address at >FFFC and an entry point address for the interrupt service routine at >FFFE.

The way I have this set up is that XB CALL LINKs to an assembly program which sets up the interrupt as above and either returns immediately to XB or services the interrupt routine when the LOAD line goes low, in which case an XB parameter is modified, then an RTWP is issued which exits the service routine back to the main assembly program and from there back to XB.

Well, as you might have guessed, there are issues... When an interrupt is detected, I get weird error messages from XB, the program gets totally corrupted, and frequently the console locks up. I vaguely recall that XB programs start at >FFFF when the 32K memory expansion is present and grow from there, as compared to assembly programs, which by default start at >A000 in high memory. If that is correct, then setting up the interrupt will likely overwrite some of the XB program already in memory and cause the crash. In other words, is XB incompatible with the LOAD interrupt process?
Asmusr Posted May 12, 2018

If you look at the memory area from >FF00 in a debugger while writing a small XB program, it does use that area, but it stops at >FFE7, and if I set >FFFC through >FFFF to >FF, that's not overwritten (not even by NEW or BYE). The debugger in Classic99 can trigger a load interrupt, by the way.
+mizapf Posted May 12, 2018

Don't forget to "debounce" the LOAD interrupt. The easiest way is to CLR the workspace pointer at >FFFC on entering your handler, then wait in a loop, then restore >FFFC. While I still had my console, I had a reset switch and a load switch at the back, and it worked quite reliably. I don't remember issues with XB.
+Vorticon Posted May 12, 2018

If you look at the memory area from >FF00 in a debugger while writing a small XB program, it does use that area, but it stops at >FFE7, and if I set >FFFC through >FFFF to >FF, that's not overwritten (not even by NEW or BYE). The debugger in Classic99 can trigger a load interrupt, by the way.

Thanks for the clarification. I was hoping that would be the case!

Don't forget to "debounce" the LOAD interrupt. The easiest way is to CLR the workspace pointer at >FFFC on entering your handler, then wait in a loop, then restore >FFFC. While I still had my console, I had a reset switch and a load switch at the back, and it worked quite reliably. I don't remember issues with XB.

That may very well be my issue. I'm attempting to interface the SmallyMouse USB mouse interface Matthew180 gave me a while back, and there is a pin on that interface that gets triggered every time the mouse is moved; I'm using that as an interrupt signal. The likely problem is that even a small nudge of the mouse results in multiple pulses being generated from that pin, so I suspect that this is triggering multiple interrupts before the service routine is done processing.

What happens if >FFFC is cleared and an interrupt is triggered? Would the interrupt be ignored? If that's the case, then it would be simple enough to clear >FFFC at the beginning of the service routine to preempt any interference from additional interrupts. I have a feeling it's going to be more complicated than that, though.
+mizapf Posted May 12, 2018

I used such a debounce in many programs that made use of the LOAD interrupt. I don't remember where I got that trick from, but it worked reliably almost every time. By the way, the LOAD interrupt is available on a key in MAME, too, so you may test it there.

       AORG >FFFC
       DATA WORKSP,HANDL

HANDL  CLR  @>FFFC           ; invalidate workspace part of the vector
       LWPI WORKSP
       LI   R0,DELAY
       DEC  R0               ; delay loop
       JNE  $-2
       LI   R0,WORKSP
       MOV  R0,@>FFFC        ; restore the vector
       ...

When you trigger the interrupt, the vector at >FFFC/>FFFE is used. In the first instance, it starts the code at HANDL with workspace WORKSP. The return address is in WORKSP+28, the return workspace is in WORKSP+26. Then you clear the workspace component of the vector and do a loop. While in that loop, if a new LOAD interrupt occurs, the vector is (0000, HANDL). Thus the return address is stored at >001C, and the return workspace is stored at >001A. This fails harmlessly, because there is ROM there, but the CPU does not know that. Since the workspace is 0, you must set it with LWPI so that the loop works. You have to adjust the loop length for the maximum time that a LOAD interrupt may still occur. When it is clear that no further interrupt will occur, you restore the vector and do the interesting stuff.
+Vorticon Posted May 12, 2018

Thanks! Great trick!
matthew180 Posted May 13, 2018 (Author)

I used such a debounce in many programs that made use of the LOAD interrupt. I don't remember where I got that trick from, but it worked reliably almost every time. By the way, the LOAD interrupt is available on a key in MAME, too, so you may test it.

A simple resistor and capacitor on the switch tied to the LOAD input would have provided some simple hardware debounce, and the software loop would probably not be necessary. The problem with this approach for Vorticon's purposes is that the LOAD is being driven by a microcontroller, so the input is not bouncing.

That may very well be my issue. I'm attempting to interface the SmallyMouse USB mouse interface Matthew180 gave me a while back, and there is a pin on that interface that gets triggered every time the mouse is moved and I'm using that as an interrupt signal. The likely problem is that even a small nudge of the mouse results in multiple pulses being generated from that pin, and so I suspect that this is triggering multiple interrupts before the service routine is done processing.

If that is the case, then a timeout is not going to solve this; interrupts will continue to come. Several things come to mind:

1. Disable interrupts immediately upon entering the routine, and re-enable them when you are done. Keep in mind, though, that if the interrupts are coming in fast and furious, they could easily overwhelm the 99/4A. As soon as you are done with the routine and re-enable interrupts, another could be right there waiting.

2. Modify the SmallyMouse code to limit the interrupt rate. The code is available; I had to compile it myself and load it on the microcontroller.

3. Put a triggered one-shot between the interrupt source and the 99/4A to limit the interrupt rate.
Once an interrupt triggers the one-shot, any further interrupts are blocked until the one-shot times out, which you could adjust to something sensible (once every 30 ms or so would be a mouse update every two video frames).
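Option 3 is essentially rate limiting. As a rough sketch of the one-shot behavior (hypothetical Python, with an injectable clock so the timing can be reasoned about deterministically):

```python
import time

class OneShotGate:
    """Software model of a triggered one-shot: after one event passes,
    further events are ignored until the hold-off time expires."""
    def __init__(self, holdoff_s, clock=time.monotonic):
        self.holdoff_s = holdoff_s
        self.clock = clock
        self._blocked_until = float("-inf")

    def trigger(self):
        """Return True if the event passes, False if it is blocked."""
        now = self.clock()
        if now < self._blocked_until:
            return False
        self._blocked_until = now + self.holdoff_s
        return True

# With a 30 ms hold-off, a rapid burst of mouse pulses yields one event.
gate = OneShotGate(0.030)
print([gate.trigger() for _ in range(5)])  # -> [True, False, False, False, False]
```

On the real hardware this would be a monostable multivibrator rather than code, but the blocking behavior is the same: one trigger passes, then the gate holds off further triggers for the chosen interval.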
+mizapf Posted May 13, 2018

A simple resistor and capacitor on the switch tied to the LOAD input would have provided some simple hardware debounce and the software loop would probably not be necessary.

Hehe... this is the difference between hardware and software people. My words would end with "... hardware mod would not be necessary".

1. Disable interrupts immediately upon entering the routine, and re-enable them when you are done.

Usually yes, but we were talking about LOAD, aka NMI (on the 9995), so you cannot disable it. Also, the interrupt mask is automatically set to 0000. The 9995 specs say that NMI is edge-triggered, which is not explicitly stated in the 9900 manual.
apersson850 Posted May 13, 2018

The TMS 9900 data manual states that the LOAD interrupt should be gated with the IAQ signal, to make sure the LOAD input to the CPU lasts for one instruction only. This would effectively make it an edge-triggered input.

The old Cortex computer, available as a kit in the 1980's, made use of the LREX external instruction to initiate a shift sequence through a number of flip-flops, clocked by the IAQ signal. After a couple of instructions, this flip-flop chain would cause a LOAD interrupt. This was used by a debugger to implement single-stepping of programs. It would set up registers R13-R15 to point to the next instruction to be executed, so that a faked return from context switch could be done to the instruction that should be executed. By programming the debugger to execute LREX RTWP, it would set up this delayed LOAD interrupt (LREX), branch to the instruction to execute (RTWP), actually execute the instruction at hand and then, after two IAQ pulses, trigger a LOAD interrupt which returns execution to the debugger. The TIBUG debugger, delivered with the Editor/Assembler package, can do the same thing, but there's normally no hardware in the 99/4A to support it.
+mizapf Posted May 13, 2018

The solution that I proposed above has the problem that the LOAD interrupt must not interrupt the handler before the CLR has been executed. If it were edge-triggered, as on the 9995, there would not be a problem, but if it is level-triggered, and it cannot be masked, it will very likely still be active when the handler is started. I am not fully sure, but it could be the case that the first instruction of the interrupt handler is guaranteed to be started, and LOAD must wait until completion of the current instruction. I know that interrupt handling is suspended while an XOP or BLWP is being executed.

I noticed that I may have done it wrong in MAME, clearing the LOAD interrupt on entry (which implies an edge trigger). Trying to remember harder, my applications of the LOAD interrupt never actually required returning. For instance, I added LOAD to my Speecoder to allow re-entering the program when you plug in a cartridge, but that is a single trip only.