phaeron

Members
  • Posts: 4,112
  • Joined
  • Days Won: 24

phaeron last won the day on December 30 2023

phaeron had the most liked content!

8 Followers

Profile Information

  • Gender: Male
  • Location: Bay Area, CA, USA


phaeron's Achievements

River Patroller (8/9)

8.4k Reputation

  1. Definitely can, though it would make any use of extended memory from the main thread rather interesting. Even if there's only one bank and you're just independently toggling the CPU and ANTIC bits with separate ANTIC access, you'd have to create interrupt-safe ways of doing the read-modify-write of PORTB from the main thread.
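One common way to make that read-modify-write safe is to keep a shadow copy of PORTB and mask interrupts around the update; here is a rough Python sketch of that idea (the names `PORTB_SHADOW` and `poke` are illustrative, and a lock stands in for the 6502's SEI/CLI interrupt masking -- this is a sketch of the technique, not Altirra code):

```python
import threading

PORTB = 0xD301          # PIA port B address on the XL/XE
_portb_shadow = 0xFF    # software copy of the last value written
_irq_mask = threading.Lock()   # stands in for SEI/CLI interrupt masking

def update_portb(value, mask, poke):
    """Atomically change only the bits in `mask` (e.g. the CPU and ANTIC
    bank-enable bits) without racing an interrupt handler that also
    touches PORTB. `poke(addr, byte)` is a hypothetical hardware write."""
    global _portb_shadow
    with _irq_mask:                     # SEI on a real 6502
        _portb_shadow = (_portb_shadow & ~mask & 0xFF) | (value & mask)
        poke(PORTB, _portb_shadow)      # single write, consistent state
        # interrupts re-enabled (CLI) when the lock releases
```

The key point is that both the main thread and the interrupt handler go through the shadow, so neither ever writes a stale value back to the hardware register.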
  2. You are correct, player/missile graphics can only be positioned at 160-pixel resolution. They can't be positioned at hires pixel granularity. The reason for this is that only the playfield can display hires (ANTIC modes 2, 3, and F) and it does so through a special path. Everything else in GTIA, including player/missile graphics, collision detection, and priority runs on lores pixels. This is also what causes all of the strange behavior with hires modes, including interactions with P/M graphics and the color restrictions.
  3. If you are making a new cartridge type, then it is a good idea to fully decode all the address lines. But here we're talking about reimplementing an existing cartridge type, and for that you would want to match the original cartridge's behavior as closely as possible. Otherwise, any difference in behavior is an opportunity for software from the original cartridges to not work on the new implementation.

The thing to realize about old cartridges is that most of them had their banking logic implemented in a very small number of common 74LS chips -- generally 1-2 at most. There's not a lot of gates available for complex logic, so incomplete decoding was very common. For the Williams cartridge type, schematics or pictures of the original PCB aren't available, but it's likely that the implementation consisted of at most a latch/flip-flop chip and maybe one additional chip with misc gates. Two chip select lines on the ROM mean that the latched disable signal can be combined with /S5 for free, and then all that's needed is a bit of logic to clock the latch from accesses to /CCTL and drive RD5. ANDing together A4-A7 would have added complexity to the PCB for no real benefit, and possibly real cost if it required another chip. Generally that extra decoding complexity has only been seen in cartridges intended to stack with another cartridge, such as the ones designed to work in the pass-through slot of a SpartaDOS X cartridge, which need to co-exist with the /CCTL range decoded by SDX.

The generally available description for the Williams cartridge type is somewhat underspecified, so Altirra implements it as follows: A4-A7 are ignored for /CCTL accesses. R/W is ignored, and any /CCTL access triggers a bank switch change. This means that a read can trigger a bank switch, including from ANTIC. This is similar to behavior seen on real XEGS and AtariMax cartridges.
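A sketch of what that incomplete decoding implies, in Python. The exact bit assignments here (A0-A2 as bank select, A3 as the disable latch) are my assumption about a common Williams-style layout, not taken from a schematic; the point is only that ignoring A4-A7 makes every 16-byte slice of the /CCTL page a mirror:

```python
def williams_cctl_access(addr):
    """Model a /CCTL access on a Williams-style cart with incomplete
    decoding: A4-A7 are ignored, so $D5F2 behaves exactly like $D502.
    Returns (bank, rom_enabled). Bit layout is an assumption for
    illustration."""
    a = addr & 0x0F            # incomplete decoding: only A0-A3 matter
    if a & 0x08:               # A3 set: latch the ROM-disable state
        return (None, False)
    return (a & 0x07, True)    # A0-A2 select one of 8 banks
```

Note that nothing in the model looks at R/W, matching the described behavior where even an ANTIC read of the /CCTL range triggers a bank switch.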
  4. 4.30-test4 is old and predates the fix; the latest is 4.30-test7:
  5. If you are using Altirra 4.20, there is a regression in that version that causes this lockup; it's a bug in that version of the emulator that interacts with another bug in the game (Star Raiders 5200 accesses a nonexistent PIA chip). I'm going to put out a 4.21 release to address this and a few other bugs, but in the meantime the latest 4.30-test release also has the fix. You can find the test releases either further up in this thread, or by using the Help > Check For Updates menu option with the update channel changed from Release to Test (which will just link back to the latest relevant post in this thread).
  6. The asterisk (*) button is used to toggle between speed and non-speed controls. The default keyboard mapping in Altirra maps this to the - key, next to 0. You'll probably want to come up with a more ergonomic mapping for the game, probably using the keypad (if you have one!). You can detach the cartridge and use the controller test that appears to verify the mapping to the original 5200 controllers.

Altirra doesn't emulate the 7800; it's a completely different architecture. The 7800 is a 2600 duct-taped to a Jaguar predecessor.
  7. Ah, in the Altirra BASIC manual? Yeah, that's wrong, I'll fix it.
  8. I would recommend replacing the VSEROR/VSEROC handlers instead of trying to reuse the OS ones with XMTDON. The stock handlers are meant to be used for transferring SIO data frames and will try to send bytes on their own; the VSEROC handler also may not do anything if the checksum sent flag hasn't been set. Some of the garbage characters you're seeing may be the result of the VSEROR handler racing against your code for writing to SEROUT.
  9. All interrupt bits in IRQST, except bit 3 (serial output complete), are forced off (1) unless the corresponding interrupt is enabled in IRQEN -- any interrupt events for disabled interrupts are lost. The serial output complete interrupt is special in that it isn't queued: you can read it at any time regardless of IRQEN bit 3. To use the serial output ready interrupt, though, you need to enable it in IRQEN bit 4, and upon doing so POKEY will assert /IRQ on the CPU when it fires. This means that you are locked into one of two general approaches: if you have IRQs enabled on the CPU, you need to use an IRQ handler; if you have IRQs masked, you need to poll. The two approaches don't mix, as you can't use interrupt handlers for the keyboard while polling the serial port.

The instability when trying to use bit 3 alone is due to a subtle issue in the way POKEY's output shift machinery works. What bits 3 and 4 actually mean is as follows:

Bit 4 (serial output data needed / ready) means that the last byte queued in SEROUT has been loaded into the output shift register, and SEROUT can take another byte.

Bit 3 (serial output transmission complete) actually means "not shifting" -- it goes to 0 when the output shifter is active, and back to 1 when it's done.

The problem is that if the serial output shift register is idle, loading a new byte into SEROUT doesn't immediately load the shift register. That only happens on the next output bit clock edge from timer 2/4. So when you're writing the first byte of a transmission to SEROUT at 19200 baud, or the CPU has taken too long and the shifter's gone idle in the middle, it can take up to 94 cycles for the shifter to load and for bit 3 to go to 0. It then takes another 940 cycles for the shifter to complete and for bit 3 to go back to 1, assuming you haven't queued another byte into SEROUT in the meantime.
This results in a problem: if you just wait until bit 3 is 1, you can exit before the byte has even started shifting. Your code then either stomps the last byte in SEROUT with a new one, losing a byte, or changes the POKEY configuration before the last byte finishes, corrupting it. If instead you wait for bit 3 to go to 0 and then back to 1, interrupts can cause your code to miss the whole thing, and then your code gets stuck forever waiting for a 1 -> 0 transition it already missed.

The key to making this robust is bit 4 (serial output ready), as it tells you when the shifter has loaded:

  • Write the first byte to SEROUT.
  • For each subsequent byte: wait for serial output ready (bit 4), then write the next byte to SEROUT.
  • After the last byte, wait for serial output ready (bit 4) to tell when serial output complete (bit 3) is valid.
  • Wait for serial output complete (bit 3).

At this point, the transmission is complete, and you can change timers 3/4 and the serial port settings for something else.

You will also want to set CRITIC ($42) during this operation. This disables OS vertical blank stage 2 so it doesn't blow the timing of serial port routines. It's not as critical for output, since taking too long just results in a gap instead of lost data, but doing so will ensure that you can output continuous bytes.

Side note: the delay of the shifter loading until the next bit clock is also the reason for the unexplained footnote in the hardware manual about bit 3 being used for two stop bits. Bit 3 is asserted after the stop bit finishes sending. Therefore, if you wait for bit 4 and then bit 3 after each byte, then quickly load the next byte into SEROUT, the next byte is guaranteed to only start another bit period later. This produces two stop bits instead of one.
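The 94- and 940-cycle figures follow directly from the timer divisor (the divisor of 40 for 19200 baud NTSC is derived below in this post); a quick check in Python:

```python
# AUDF3/4 divisor for 19200 baud NTSC
divisor = 40
cycles_per_bit = 2 * (divisor + 7)    # timer fires twice per bit -> 94 cycles
load_latency  = cycles_per_bit        # worst case until the shifter loads
frame_cycles  = 10 * cycles_per_bit   # start + 8 data + stop bits -> 940 cycles
print(load_latency, frame_cycles)     # -> 94 940
```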
As for how to compute the AUDF3/4 values for a particular baud rate: the standard AUDCTL value of $28 for serial port operation links timers 3 and 4 into a single 16-bit counter running at machine clock rate (1.77/1.79MHz). This combined timer needs to fire twice per bit, as the serial clock produced alternates between input and output edges. The value to load into the counters is the number of cycles per period, minus 7. Therefore, the value to load is (rounded to the nearest integer):

[1789772.7 ÷ 2 ÷ baud] - 7 (NTSC)
[1773447.5 ÷ 2 ÷ baud] - 7 (PAL)

For 19200 baud, the value to load in NTSC is 40 ($28) -- with the LSB going in AUDF3 and the MSB in AUDF4. Reversing the calculation gives the actual baud rate transmitted by POKEY as 19040 baud.
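The arithmetic above, as a pair of small helpers:

```python
def serial_divisor(baud, ntsc=True):
    """16-bit AUDF3/4 value for a given baud rate with AUDCTL=$28."""
    clock = 1789772.7 if ntsc else 1773447.5
    return round(clock / 2 / baud) - 7

def actual_baud(divisor, ntsc=True):
    """Baud rate POKEY actually produces for a given divisor."""
    clock = 1789772.7 if ntsc else 1773447.5
    return clock / (2 * (divisor + 7))

d = serial_divisor(19200)          # -> 40: AUDF3=$28, AUDF4=$00
print(d, round(actual_baud(d)))    # -> 40 19040
```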
  10. I ran into the flash timing problems when doing video output experiments with a Pico: it worked great until the framebuffer was bigger than 16K, and then it blew up until I moved the buffers to RAM. Apparently people have managed to get just-in-time flash access to work for this kind of scenario, but they have to overclock the heck out of the RP2040 and flash (266MHz/133MHz) and disable the XIP cache to get byte read commands through as fast as possible. (For those who don't know, the Pico uses a serial flash with a 16K, two-way set associative execute-in-place (XIP) cache in front of it. Works great most of the time, but if you miss the cache it is sloooowwwww.)
  11. Yes, it's a known bug/limitation in the OS: GET/PUT wouldn't be any better, unless you buffer enough data to ensure that the first cassette record is written before DOS fetches another sector.
  12. NTSC broadcast standard timings and timecodes aren't really relevant here except to note that the Atari deviates from them. The rate at which RCLKLO/HI count is based on the rate at which the VBI occurs, which is solely determined by how the master clock is divided down by the clock generator and the horizontal/vertical counters within ANTIC. The NTSC machine master clock of 14.31818MHz divided by 8 gives a machine/bus clock of 1.7897725MHz. ANTIC counts 114 cycles per scan line for a horizontal rate of 15.700KHz, and 262 scan lines per frame for a vertical rate of 59.9227Hz, as Rybags noted above.

Using an approximation of 60 ticks/second instead of 59.9227 ticks/second gives an error of about 0.13%. After a full day, this results in the timing being off by 1 minute 51 seconds. Not ideal if you're running a clock for a 24/7 BBS that's on for weeks; perfectly fine for shorter time measuring uses, and cheaper to compute.

Trying to correct for this in hardware to achieve more standard display timing is a different issue. Changing the horizontal and vertical counts is impractical, as it would require modifying ANTIC internals, and even where that's feasible, it would break software relying on precise cycle timings. Speeding up the entire system clock is safer, at the cost of slightly speeding up the output and raising the pitch of POKEY's audio a tiny bit. This is what you'd expect to see from most emulation devices that output standard video timings, because otherwise trying to run at the true rate would result in periodic jank. Video mods for the original computers are the opposite and run non-standard video timings, because they want to stay locked to GTIA without the cost and downsides of trying to buffer entire scan lines or frames.
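The figures in that post can be reproduced directly:

```python
master  = 14.31818e6            # NTSC master clock, Hz
machine = master / 8            # 1.7897725 MHz machine/bus clock
hrate   = machine / 114         # ~15699.8 Hz horizontal rate
vrate   = hrate / 262           # ~59.9227 Hz vertical/VBI rate
error   = 60 / vrate - 1        # ~0.13% from assuming 60 ticks/sec
print(round(error * 86400))     # seconds of drift per day -> 111 (1 min 51 s)
```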
  13. The drive latch is just the lever that closes the disk drive mechanism, like on the 1050. It's connected to the "ready" input on the disk controller chip. To close it in Altirra, you just need to mount a disk image.

The 1450XLD disk controller could be a lot faster than the 1050, but since it has no track buffer, it's only as fast as the sector interleave order on the disk. This means that if you take a standard disk formatted by a 1050 and stick it in a 1450XLD, it can't run any faster -- the computer reads each sector super-quick, only to wait longer for the next sector to arrive under the head. Unfortunately, the 1450XLD controller also has no provision for formatting with a fast skew, so disks that it formats don't read any faster than normal. If anyone has seen a real 1450XLD reading a standard interleaved disk fast, I'd be interested, as it's hard for the controller to do so when it has neither prefetch nor any memory or code to buffer a track.

You can format a disk with a faster skew on another disk drive, though, and that will read a lot faster. The fastest that the 1450XLD can read in single density is 2:1 interleave, which is still half of max throughput but way faster than any SIO-based disk drive. The issue in emulation is that ATR images don't encode sector order, so whenever you mount an ATR, it'll always use standard interleave. You can either save disks in ATX format after formatting on a drive that can do fast sector skew, or use the built-in option to reinterleave the disk.

You have three problems in that screenshot:

  • The 1400XLOS.BIN you have is set to the wrong ROM image type. It needs to be set as an XL/XE OS image, not a 1400XL/XLD Handler ROM. You can change this by double-clicking the entry or choosing Settings... and changing the ROM image type.
  • The label on that 1400XLOS.BIN image indicates that Altirra has auto-detected it by CRC32 as the same as the bog-standard v2 ATARIXL.ROM image that everyone uses for XL/XE machines. You can use it for 1400XL/1450XLD emulation, but there is nothing 1400/1450-specific about it, and using it over any other regular XL/XE OS will not give any special 1400XL behavior.
  • You don't have the handler ROM for the 1400XL built-in voice and modem devices, which is part of what does make the 1400XL different from a plain 800XL. This is a separate 4K ROM from the OS ROM, and you can't use an OS ROM for it.

Note that Altirra does not emulate the voice device; the voice handler will run with simulated times, but it does not generate audio output.
  14. My guess is that something during the hibernation cycle is causing the default audio device to drop out and be recreated. Generally the Windows audio system covers for this, but sometimes it results in hiccups that cause applications to lose the audio device. Altirra plays a continuous audio stream from the start, so it doesn't benefit from the inherent reset that occurs when playing one-shot sounds like the message box does. The fact that you can record sound but not hear it implies that it's an audio API issue, since the emulator's audio mixing pipeline is still working.

One thing you can try is switching the audio API to WASAPI if it is not already set to that or Auto. This setting is under System > Configure System > Audio > Host audio options... > Audio API. In particular, it may be set to WaveOut, which is generally reliable but also pretty old. WASAPI is the newer API, and Altirra directly sees audio output change events when using it, so it's more likely to recover from the audio device dropping out. The default being WaveOut is historical, and I'll probably switch it over now that Windows XP is no longer supported.
  15. https://www.virtualdub.org/beta/Altirra-4.30-test7.zip https://www.virtualdub.org/beta/Altirra-4.30-test7-src.7z

  • Fixed a crosstalk issue between input ports 1/2 and 3/4.
  • Fixed registration of file types for the current user only, and added support for more direct entry to the default settings page for the program (Windows 11 only).
  • Tweaked the inactive selection colors in dark theme.
  • Super Archiver full emulation now supports slow disk speed.
  • Added initial BitWriter emulation.

The BitWriter emulation is currently a bit fragile; it relies on the emulator to render the track, and this is difficult with some of the ways that protections interleave sectors. There are also some problems with the way the BitWriter software works that make it a bit unreliable even for plain unprotected disks, and I haven't figured out how to improve that yet.

The BitWriter itself is fairly simple: it consists of a 6520/6821 PIA, 8K of static RAM, a shift register, an up/down address counter, and a 4us clock divider. Writing is relatively simple. The BitWriter, when enabled, bypasses the FDC and shifts raw bits out from RAM to the drive. It can only do FM (single density), as its clock divider is hardcoded to a 4us bit cell and it doesn't have enough RAM to buffer a raw MFM track. The hardware automatically drives the shift register from the RAM, so all the firmware needs to do is upload to the RAM and trigger the write. The BitWriter RAM is not directly accessible, however, and can only be accessed through the PIA and by clearing, incrementing, or decrementing the counter. The way the entire 8K RAM has to be accessed is goofy: it requires the firmware to manually toggle the counter clock 16 times, which is very slow. Software must also encode data into FM before writing, and this also requires a bit reversal, since for some reason the hardware shifts LSB first.
The BitWriter software does this on the computer, so about 6.5KB must be transferred over the SIO bus per track written, which is also not fast. But otherwise, the BitWriter can basically write any flux pattern with a fixed clock.

Reading is where it gets ugly. The BitWriter has no support at all for reading, so this is done through the FDC's Read Track command, with the BitWriter only being used as track buffer RAM. The Read Track command has a number of known problems due to the inability to identify address marks, which are simply returned in-band with other plain data bytes. This means that the software has to guess where the address marks are, and its heuristics for doing so are very weak: it commonly identifies bogus address marks and writes tracks with bogus IDAMs in the middle of sectors. This is mostly harmless, but it's a mystery why it doesn't do basic validation like checking whether the track and sector numbers are remotely valid. It also seems to have trouble identifying the wrap point for the track and will occasionally drop a sector -- from what I can tell, it simply checks for about 140 bytes to match, which isn't enough with a blank sector in the vicinity.

As part of this work, the Read Track command is now implemented in the FDC for all full drive emulators. The track rendering routine is an improved version of the one that was originally put in for the 815; it can deal with some quirks, but can fail when sectors overlap. I've found that some ATX images also have coarse and inaccurate sector timings, so the track renderer will attempt to vary gap III and shift sectors to compensate.

I'm not familiar with the specifics of either version of Pac-Man, but there are a couple of differences in the platform that could explain this. First, the 5200 OS is more lightweight than the 800 OS, which can change the program timing.
POKEY provides a hardware random number generator that is often used, so changes in timing can result in changes in generated random numbers. The second issue is that the 5200 uses analog joysticks instead of digital. This requires an additional frame to read, and on top of that, games often do averaging to reduce noise in the readings, which can in turn change move timing. A few 5200 games also have explicit support for analog movement instead of just converting the analog input to digital, Star Raiders being an example. For 5200 games that do support options, these can usually be activated by pressing controller keypad buttons at the title screen.
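The FM encoding and LSB-first bit reversal that the BitWriter offloads to the computer (item 15 above) can be sketched as follows. Standard FM interleaves a clock bit before each data bit; the clock bits are all 1s for normal data, with special clock patterns (such as $C7) reserved for address marks:

```python
def reverse_bits(byte):
    """Bit-reverse a byte, since the BitWriter hardware shifts LSB first."""
    out = 0
    for _ in range(8):
        out = (out << 1) | (byte & 1)
        byte >>= 1
    return out

def fm_encode(byte, clock=0xFF):
    """Encode one data byte into a 16-bit FM cell pattern: each data bit
    is preceded by a clock bit (all 1s for plain data; address marks use
    special clock patterns like $C7)."""
    cells = 0
    for i in range(7, -1, -1):
        cells = (cells << 2) | (((clock >> i) & 1) << 1) | ((byte >> i) & 1)
    return cells

print(hex(fm_encode(0x00)))   # -> 0xaaaa: clock bits only, data all zero
```

At 16 FM cells per data byte, this also makes clear why roughly 6.5KB has to cross the SIO bus for each single-density track written.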