
Interrupts vs Polling and CDFJ


Andrew Davie


I've been thinking about how the CDFJ bankswitch scheme does its stuff.

It appears to allow fairly hefty chunks of ARM code to run, and as far as I can tell it can do so even while the 6507 is busy doing its own thing.

Regardless of whether I'm correct about that or not, the following occurred to me...

 

 

Currently all bankswitch schemes in PlusCart do tight-loop polling of the address bus to look for changes, and then feed the appropriate data to the data bus. That's the gist of it.
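For concreteness, the hot loop looks roughly like this (a minimal sketch only, not the actual PlusCart source; addr_in() and data_out() are made-up stand-ins for whatever GPIO access the real firmware uses):

    #include <stdint.h>

    /* Placeholders for the real bus access; the actual firmware reads and
       drives the 6507 bus through memory-mapped GPIO registers. */
    static inline uint16_t addr_in(void)       { /* read A0-A12 */ return 0; }
    static inline void     data_out(uint8_t v) { /* drive D0-D7 */ (void)v; }

    static const uint8_t *cart_rom;            /* currently selected 4K bank */

    void emulate_cartridge(void)
    {
        uint16_t last = 0xFFFF;

        for (;;) {                              /* spin as fast as possible */
            uint16_t addr = addr_in();
            if (addr == last)
                continue;                       /* bus hasn't changed yet */
            last = addr;

            if (addr & 0x1000)                  /* A12 high = cartridge space */
                data_out(cart_rom[addr & 0x0FFF]);
            /* hotspot detection / bank switching would go here */
        }
    }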

 

Now why can't the ARM be set up with a regular interrupt (say, twice the 6507 clock -- that is, about 2.4 MHz) to check the bus?  It could still service the data quickly enough, but the ARM itself would be free to spend all that extra time doing other stuff in the background.  I can't see any reason why this couldn't be done, and it would be a game-changer in terms of handling complex bankswitch schemes.
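The interrupt version of the idea would look something like this (a sketch only, assuming some hardware timer has already been configured to fire at about 2.4 MHz; the names are placeholders, not an actual CDFJ or PlusCart API):

    #include <stdint.h>

    extern void bus_service(void);           /* the same address-decode /
                                                data-drive work the tight
                                                loop does today */
    extern volatile uint32_t timer_status;   /* hypothetical timer status
                                                register to acknowledge */

    /* Hooked into the timer's IRQ vector, firing ~2.4 million times/sec. */
    void periodic_bus_isr(void)
    {
        timer_status = 0;                    /* clear the interrupt flag */
        bus_service();                       /* then return to whatever
                                                background work the ARM
                                                was doing */
    }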

 

And another thing -- instead of code-copying blocks of data and missing the servicing of the 6507, this could now be done comfortably "in the background" and the 6507 would never miss a beat. On that subject, anyway, we should be looking to DMA those blocks, not code-copy them.

 

 


The amount of time it takes to read the address bus and put the corresponding value on the data bus is fairly large. There's only about 400ns (rough estimate) between when the 6502 puts an address value on the bus and when it expects the data value to be returned. Tight polling helps ensure that as soon as the address value changes, the new value is worked on right away. If you only polled at a 2.4 MHz rate, the samples would eventually land at just the wrong time (a 2.4 MHz poll only looks at the bus roughly every 417 ns, which is already longer than the whole window), so you'd end up missing the deadline and leaving the previous value on the bus too long. There's also the overhead of interrupt processing. That's fairly minimal on ARM, but if you're doing it 2.4 million times per second it consumes quite a few cycles and leaves very little time for useful work.

 

What I've done with my projects is to anticipate what the next address will be, pre-calculate the corresponding data value, and then poll for it. As soon as the address changes to what I expect, I put the data value on the bus and then use the rest of that cycle to do some computation for the next value. This leaves time for reversing bits for playfield values, looking up colors in a table, masking sprites, etc.
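The pattern is something like this (a minimal sketch of the idea only, not code lifted from an actual project; addr_in() and data_out() are made-up stand-ins for the real GPIO bus access):

    #include <stdint.h>

    extern uint16_t addr_in(void);            /* read A0-A12 from the bus */
    extern void     data_out(uint8_t value);  /* drive D0-D7 onto the bus */

    /* Serve one anticipated fetch: next_addr/next_data were prepared
       during the slack time of the previous cycle. */
    void serve_anticipated_fetch(uint16_t next_addr, uint8_t next_data)
    {
        /* 1. Wait for the 6507 to put the address we expect on the bus. */
        while (addr_in() != next_addr)
            ;

        /* 2. Answer immediately with the value prepared last time around. */
        data_out(next_data);

        /* 3. The rest of this cycle is free: reverse playfield bits,
              look up colors, mask sprites, and precompute the data for
              the address we expect after this one. */
    }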

 

Regarding DMA: most banking schemes should be switching banks by updating pointers or offsets, because copying a block of data would take too long. If block transfers were being done, though, DMA would definitely be a good idea.
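That's why the pointer-update approach is so cheap; roughly (a sketch with made-up names, assuming rom_image holds all banks contiguously in memory):

    #include <stdint.h>

    #define BANK_SIZE 4096u

    extern const uint8_t rom_image[];         /* all banks, contiguous */
    static const uint8_t *current_bank;       /* what the poll loop indexes */

    /* Switching banks is a single pointer update, O(1), nothing to copy. */
    static inline void switch_bank(unsigned bank)
    {
        current_bank = rom_image + (uint32_t)bank * BANK_SIZE;
    }

Only a scheme that genuinely has to move a block of data (into cartridge RAM, say) would give a DMA channel anything to do.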

 

If you're asking this because you've encountered a banking scheme which has too much latency, it may be the flash cache mechanism. ST claims zero wait states with their flash accelerators, but I've noticed when debugging firmware with a logic analyzer that there are occasionally visible delays due to flash caching that can cause a missed deadline. It's a bigger problem on the 7800 because its 6502 is clocked faster. Eventually we should run all the critical banking code from SRAM to avoid these caching penalties. I've also noticed that it depends on where the code sits in flash, so making a change to a completely unrelated portion of code can shift everything over just enough to cause a problem. It's just like in 6502 ASM when you forget to org a lookup table and an unrelated change pushes it across a page boundary, suddenly costing you an extra cycle in your display kernel.
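The SRAM trick usually looks something like this (a sketch; the ".ramfunc" section name is an assumption and has to match whatever the project's linker script and startup code actually define):

    /* Ask the linker to place the hot routine in a section that the
       startup code copies into SRAM, so flash-cache timing and code
       placement in flash can no longer affect it. */
    __attribute__((section(".ramfunc"), noinline))
    void emulate_cartridge_from_sram(void)
    {
        /* same tight poll loop as before, now executing from
           zero-wait-state SRAM instead of flash */
    }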

