Sam Bushman

Everything posted by Sam Bushman

  1. Thanks Chilly, that makes sense. I still have some confusion about where these programs reside in the executable's address space. I currently hold the following assumptions (some of which must be incorrect):

     1. Since no OS runs on the Jaguar, the address space of the executable is the same as the address space of the console's hardware memory map (there is no indirection between the address the program pokes and the actual hardware memory address).
     2. The .org directive tells the assembler that the following code should begin at the given address in the executable's address space (G_RAM in the initial example).
     3. When the executable is loaded onto the Jaguar, code assembled after the .org directive will be loaded at the system address equal to the definition of G_RAM (i.e., the beginning of GPU local memory).
     4. I would therefore expect the various GPU programs to clobber each other when the assembled object files are linked together, since they all use the .org directive with the same address, G_RAM.

     Clearly this cannot all be true: all the GPU programs used by the various renderers must exist in the executable in order to be copied into GPU local memory when GPUload is called in the first place. What I am not understanding is how the .org directive affects the memory layout of the final executable and the addresses the linker assigns to the various labels of the GPU programs. I'm a bit of an assembly noob, so thank you for your patience and time. Cheers.
  2. Hi guys, I am currently looking through a test program Atari wrote in 1995 for a 3D renderer for the Atari Jaguar. I am trying to learn a little about how one can generate 3D graphics on the Jag.

     The program is made up of C code that defines structures for storing the different renderers, plus input-handling code for switching the active renderer. Each of these structures stores the starting address and length of the stored renderer and a function pointer to the GPU drawing code. Each iteration through the main loop, the C code copies the active renderer program into GPU local memory, updates the 3D scene variables, and calls the renderer's draw function.

     My question deals with how this GPU code is assembled and stored in memory. Each renderer has the following lines in its source file:

         _gourcode:
                 .dc.l   startblit, endblit-startblit
                 .gpu
                 .include "globlreg.inc"
                 .include "polyregs.inc"
                 .org    G_RAM
         startblit:
                 ...
                 ...
         endblit:

     For each renderer structure, the *code label (gourcode in the above example) is referenced to store the starting address and length of the GPU code for copying purposes. A label between startblit and endblit is referenced to store the GPU drawing function pointer.

     My interpretation of this code is that the .gpu directive tells the assembler to treat the following code as GPU assembly (properly handling GPU-specific instructions), and the .org directive causes the address of startblit to be the first address of GPU internal memory (therefore causing the assembled code that follows to be loaded in that memory range).

     My question is: since all six renderers in this demo define their GPU programs in the same way (including the use of the .org directive), how does the assembler handle the address each of these programs is loaded at when the final program is loaded into Jaguar memory? It appears to me that each GPU program would overwrite whatever was previously written to G_RAM, and therefore prevent the C code from copying a different renderer's GPU code into GPU internal memory.

     If more code from this test program is needed to answer my question, please let me know. Thanks for any help, and have a happy new year, guys!