phaeron

Members
  • Content Count

    2,915
  • Joined

  • Last visited

  • Days Won

    15

phaeron last won the day on March 10

phaeron had the most liked content!

Community Reputation

4,874 Excellent

1 Follower

About phaeron

  • Rank
    River Patroller

Profile Information

  • Gender
    Male
  • Location
    Bay Area, CA, USA

  1. For a color monitor, sure, a higher quality cable and a monitor with better color separation would reduce or eliminate the jailbars. For a monochrome monitor, however, it's the opposite and the better your monitor was the more you saw them. Pretty sure you could easily see the chroma subcarrier on an Apple Monitor ///, because not only did it have enough clarity and bandwidth, that was the point -- it was designed for 80-column text and didn't do any luma/chroma separation. The intensity of the artifacts would also have depended upon the computer model. The 800 has the weakest chroma signal, the 800XL in between, and the 130XE the strongest. Not sure about the 400, need to check that one.
  2. This is most likely bleed-through of the chroma signal. The chroma signal is pure AC and has no DC component, so it doesn't bias the luma, but I'm guessing it might have caused a visual brightness shift due to the gamma curve and the pattern, and also clipping at black. The Atari's signal also does not invert on lines or fields, so it would have shown up as a stable pattern of vertical jailbars. I can probably emulate the brightness shift directly in the palette; more than that and the artifacting engine would have to be involved (though I need to hit that in order to fix unaccelerated scanlines anyway).

     Who was it who caused me to open this particular can of worms again? Next you guys are going to ask for hum bars. I should start pretending that I don't know about some of these really old artifacts, that way I don't have to admit my own age.

     Bug, happens when the color is pitch black and it doesn't update properly. Monochrome mapping makes it a lot more likely to occur. Fix queued.
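A quick numeric sketch of why the bars are stable on the Atari rather than a shimmering pattern. The 228-color-clocks-per-line figure and the half-cycle broadcast NTSC offset are standard reference numbers, not values from the post; treat the framing as my assumption:

```python
# Illustrative sketch: the Atari's color clock runs at the NTSC subcarrier
# frequency, and a scanline is 228 color clocks, i.e. an integer number of
# subcarrier cycles. Broadcast NTSC uses 227.5 cycles per line instead.
ATARI_CYCLES_PER_LINE = 228.0   # integer -> subcarrier phase repeats every line
NTSC_CYCLES_PER_LINE = 227.5    # half-cycle offset -> phase inverts every line

atari_phase_step = ATARI_CYCLES_PER_LINE % 1.0  # per-line subcarrier phase change
ntsc_phase_step = NTSC_CYCLES_PER_LINE % 1.0

print(atari_phase_step)  # 0.0 -> same phase on every line -> fixed vertical bars
print(ntsc_phase_step)   # 0.5 -> alternating phase, which TVs can average away
```

The zero per-line phase step is exactly why any chroma leaking into the luma path on the Atari lines up vertically, frame after frame, as "jailbars".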
  3. http://www.virtualdub.org/beta/Altirra-3.90-test13.zip
     http://www.virtualdub.org/beta/Altirra-3.90-test13-src.zip

       • Fixed horizontal position being slightly off when Alt+Shift+clicking in the debugger.
       • Changed the default overscan setting to Normal.
       • Added an H.264 + MP3 recording setting to work around an apparently long-standing and unavoidable bug in the Microsoft AAC Encoder, which introduces random oink errors into the encoded stream. Strangely, this bug seems to exist in all versions of Windows that have this encoder (7-10), and has also been reported in SDK samples and OBS Studio. 😞
       • Updated some video recording error dialogs to the new style.
       • Added a bluish-white phosphor setting.
       • The 'yr' and 'yw' debugger commands now take ? as the path, too.
       • Added a /reset switch to allow for selective settings reset.

     I'll throw it on the list.

     Don't think it's the OS, as I've upgraded to 18362.418 without seeing this issue. But I'm not exactly sure what might be going on here. It sounds like a crash that is also bypassing the crash handler, which makes it difficult to get info without a debugger. But if it's in the program, then I'm not sure how it would have gone unnoticed for so long until now.

     Window layouts are one thing that comes to mind -- you can try /reset:windowlayouts on 3.90-test13 to nuke the window layouts for both default operation and the debugger and see if that helps. If that doesn't work, launch with /portabletemp to have the emulator start with clean in-memory settings to see if one of the existing settings might be the problem. If it still crashes, then we can try to fish the crash report out of the Reliability Monitor, if Windows Error Reporting is seeing it (search for "View Reliability History" in Start on the latest Windows 10 version).
  4. The OS SIO routines don't have built-in support for customizing transfer rates. SIO sets AUDFn directly for 19200 baud, so the only way to use them for high speed is to either copy them to RAM and patch them (which only works for specific OSes) or try to inject code into the SIO processes to change the baud rate at the right time. I have seen a hack where the SIO serial interrupt routines are hooked to implement XF551 high-speed mode by switching the baud rate right before the data frame. US Doubler is different as the command frame also has to be sent at high-speed. Typically the first byte of a send operation is kicked manually by writing directly to SEROUT, which makes finding an appropriate interception point more difficult. But then you still have all the other fun of US Doubler style high-speed to deal with, like drives that use speeds that are too high for interrupt-driven operation, drives that don't activate USD high speed mode until the speed is queried, drives that corrupt bytes in high speed mode if you don't enable other options, etc.
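The divisor-to-baud relationship behind "switching the baud rate" can be sketched numerically. The formula below is the commonly cited one for POKEY's 16-bit timer pair in asynchronous serial mode; the exact clock constant and the +7 reload overhead are standard reference figures, not something stated in the post:

```python
POKEY_CLOCK = 1_789_772.5  # NTSC machine clock driving the 16-bit timer pair

def pokey_baud(audf16: int) -> float:
    # 16-bit timers in async serial mode: one bit cell is 2 * (AUDF + 7) clocks.
    return POKEY_CLOCK / (2 * (audf16 + 7))

print(round(pokey_baud(0x28)))  # standard SIO divisor -> ~19040 ("19200 baud")
print(round(pokey_baud(10)))    # US Doubler-style divisor 10 -> ~52640 baud
```

This is why a high-speed hook only has to rewrite AUDF3/AUDF4 at the right moment: the rest of the SIO byte framing is unchanged, just clocked faster.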
  5. This is the right place; I've only posted it here. a8rawconv supports decoding images to two 5.25" Apple II formats: DOS 3.3 (.DO/.DSK) and raw nibble data (.NIB). This mode is auto-selected when you tell the tool to output to a file with one of those extensions. Just a warning: these format decoders haven't been extensively tested -- IIRC I implemented them while helping to archive a couple of Apple II disks and only tested them with those. The Apple II disk format also has a very flimsy checksum that can pass even if the sectors are corrupted, so you'll definitely want to check the contents of any decoded disks. The tool does not currently support the Apple II 3.5" formats, however.
  6. DSKINV wasn't added in the 1200XL OS; it's been part of the OS from the beginning. It just gained the ability to customize the disk sector size with DSCTLN starting with the 1200XL, instead of being hardcoded to 128 bytes. DSKINV also isn't limited to D1: only, and $0300 isn't $30+DriveID for SIOV. $0300 is DDEVIC, $0301 is DUNIT, and the SIO device ID used is DDEVIC + DUNIT - 1. DSKINV sets DDEVIC to $31, so you select the disk drive with DUNIT values of $01 for D1:, $02 for D2:, etc. If you are using SIOV then you need to set both DDEVIC and DUNIT. If you try just setting DDEVIC to $30+index, then the code will break if DUNIT has been left as anything other than $01 from a prior request.
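The addressing rule above is easy to check with a one-line sketch of how SIO combines the two variables (the values come straight from the post; the function itself is just an illustration):

```python
def sio_device_id(ddevic: int, dunit: int) -> int:
    # SIO addresses the bus with DDEVIC + DUNIT - 1.
    return ddevic + dunit - 1

# DSKINV sets DDEVIC=$31, so the drive is selected purely by DUNIT:
d1 = sio_device_id(0x31, 1)  # 0x31 -> D1:
d2 = sio_device_id(0x31, 2)  # 0x32 -> D2:

# The broken pattern from the post: poking DDEVIC=$30+index while DUNIT was
# left at 2 from an earlier request silently addresses the wrong drive.
wrong = sio_device_id(0x30 + 1, 2)  # meant D1:, actually hits 0x32 = D2:
print(hex(d1), hex(d2), hex(wrong))
```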
  7. High-speed patches normally apply to SIO and to DSKINV calls into SIO, so they'll work for either entry point. This applies both to high-speed OS patches and to PBI device intercepts. If you wanted to support the 400/800 and double density, then you'd need SIOV. It seems you're targeting the XEGS here, though, so DSKINV would work as long as you updated DSCTLN. Note that DSKINV will not adjust the sector length for boot sectors; that's your job. Using one over the other is mostly a wash -- you'd potentially save a little code with DSKINV, but using SIOV is also easy.
  8. Probably due to the 19 sectors making that track a bit tight and causing a slight track overrun. The typical diagnostic for writing issues is to have a8rawconv re-image the written disk to see if it finds problems in it. 0.92 occasionally has some problems with this -- I need to get around to finishing the changes I have in flight, as my dev branch has some improvements to the flux encoding code and is able to write this image so that it successfully boots on a 1050.
  9. The reason for the recursive crashing on warm reset is that the cartridge sets APPMHI to $9C1F, which is one byte higher than is necessary to open a GR.0 screen. The display handler reacts to a memory allocation failure on screen open by falling back to a GR.0 screen, which in this case causes an infinite loop. Arguably, there is an off-by-one error here in the Display Handler's memory checking, but for this reason it's pointless to set APPMHI for a GR.0 screen.

     As for DISKIV, there are two variables used by the Disk Handler. One is the format timeout, and the other is the sector size if on an XL/XE OS. DSKINV pushes the sector size into the transfer length in the DCB (DBYTLO/DBYTHI) and sets the timeout (DTIMLO) to either the format timeout or the normal disk access timeout, depending on the command. You should not normally need to call DISKIV to set these values, as it is invoked by cold start processing and skipped only for a diagnostic cartridge.

     If you bypass DSKINV and go directly to SIOV, you are responsible for setting the variables that DSKINV normally would for you: DDEVIC, DSTATS, DTIMLO, DBYTLO, DBYTHI. The default timeout for a regular disk access is $07, while the format timeout defaults to $A0 but is usually raised to the value received from D1: by the status command at the start of the disk boot. DSKINV also normally does this update automatically after a successful status command to any drive. (This is the reason for format commands failing on real disk drives when certain PC-based disk drive emulators that report unusably low format timeouts are in the SIO chain.) Or, you could just use $FE as the highest known value (XF551).

     By the way... you probably want to clear the keyboard character buffer before displaying the "are you sure?" format prompt. Not so good if you happen to have pressed Y before pressing Option.
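As a checklist, the DCB fields a direct SIOV caller must fill can be modeled in a small sketch. The field names are the real OS labels mentioned in the post; the structure, the DSTATS direction bits, and the helper itself are my paraphrase for illustration, not OS code:

```python
# Hypothetical model of the DCB values DSKINV would set up before calling SIO.
def build_dcb(command: str, dunit: int, sector_len: int,
              format_timeout: int = 0xA0) -> dict:
    is_format = command == "format"
    return {
        "DDEVIC": 0x31,                  # disk drive device class
        "DUNIT": dunit,                  # 1 = D1:, 2 = D2:, ...
        "DSTATS": 0x80 if command == "write" else 0x40,  # send vs. receive
        "DBYTLO/DBYTHI": sector_len,     # transfer length = sector size
        # Per the post: $07 for normal access, the (possibly updated)
        # format timeout for format commands.
        "DTIMLO": format_timeout if is_format else 0x07,
    }

read_dcb = build_dcb("read", dunit=1, sector_len=128)
fmt_dcb = build_dcb("format", dunit=1, sector_len=128)
print(read_dcb["DTIMLO"], hex(fmt_dcb["DTIMLO"]))
```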
  10. The easiest way to think of it is like a dumbwaiter or the little sliding tray at the teller in U.S. banks. The first person puts an item in the tray and sends it across to the second person, who does something with it and sends it back. Only one side has access to it at a time, and the other side waits in the meantime. The more difficult scenarios are when you want to send data in both directions at the same time (double buffering) and make actual use of both CPUs in parallel. That's where the single semaphore bit, lack of '816 interrupts, and the V1 timing race bug become troublesome.
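The dumbwaiter protocol can be sketched with two threads and a single ownership bit. All names here are mine, and threads with a condition variable stand in for the two CPUs and the semaphore bit; this illustrates the hand-off discipline, not the actual hardware interface:

```python
import threading

class Mailbox:
    """One shared buffer plus one ownership bit: only the owning side may touch it."""
    def __init__(self):
        self.data = None
        self.owner = 0                   # 0 = first CPU's turn, 1 = second CPU's
        self._cv = threading.Condition()

    def send(self, me: int, value):
        with self._cv:
            while self.owner != me:      # wait until the tray is on our side
                self._cv.wait()
            self.data = value
            self.owner = 1 - me          # slide the tray to the other side
            self._cv.notify_all()

    def receive(self, me: int):
        with self._cv:
            while self.owner != me:      # block until the tray arrives
                self._cv.wait()
            return self.data             # tray stays on our side for the reply

def worker(box: Mailbox):
    x = box.receive(1)
    box.send(1, x * 2)                   # do something with it, send it back

box = Mailbox()
t = threading.Thread(target=worker, args=(box,))
t.start()
box.send(0, 21)                          # first person loads the tray
result = box.receive(0)                  # ... and waits for it to come back
t.join()
print(result)  # 42
```

Note how half-duplex the scheme is: each side spends most of its time blocked in the `while` loop, which is exactly the "the other side waits in the meantime" behavior described above.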
  11. Unless you have a clean signal and intend to intentionally lower the tones via aliasing, I believe it'd be the other way around: you'd need at least 10.6 kHz to capture both tones. 11.025 kHz will work with a clean tape as long as you have good decoding filters. 4-bit is going to be marginal for real-world tapes, as volume levels vary and it is possible to recover FSK data from a near dropout. You could go all the way down to 1-bit with prefiltering, but at that point you might as well predecode the FSK as well.
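The 10.6 kHz figure falls out of the Nyquist criterion applied to the standard Atari cassette FSK tones (the 3995 Hz and 5327 Hz values are the usual reference numbers for the format, assumed here rather than stated in the post):

```python
# Atari tape FSK tones: ~3995 Hz for zero (space), ~5327 Hz for one (mark).
SPACE_HZ, MARK_HZ = 3995, 5327

# Nyquist: the sample rate must exceed twice the highest frequency of interest.
nyquist_min = 2 * max(SPACE_HZ, MARK_HZ)
print(nyquist_min)           # 10654 -> the "at least ~10.6 kHz" figure
print(11025 >= nyquist_min)  # 11.025 kHz clears the bar, but only barely
```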
  12. The only window you need is the normal display window, but the emulator needs to be stopped in the debugger (Debug > Run/Break). Once stopped, holding Alt and clicking/dragging looks like this:
  13. Altirra remounts R/W disk images as virtual-R/W if it encounters a file I/O error trying to update the file. This keeps the modified image in memory so you have a chance to resave it somewhere else. You should also see the disk drive indicator blink.

     I don't have Google Drive at home, but I have used it at work and have seen some rather interesting I/O errors from its mapped drive even with a standard tool like robocopy. A search for issues shows some comments by people that seem to indicate that Google Drive locks files during a sync without using a mechanism like oplocks or shadow copies to allow concurrent access to the file. This is bad ju-ju for applications trying to write to the file, as it will cause sharing violation errors during file access. For floppy disk images it might be possible to use a retry loop, since write access is only required during the update, but for hard drive images the file needs to be opened read-write from the beginning and the delay would be too long. I'm not terribly fond of doing workarounds like this; it makes the application I/O logic messy to work around interference from other programs.

     I should also point out that mounting images read/write in a synced folder is risky in another way: Altirra relies on the file not being updated without its knowledge while it is mounted, since it caches the file in memory for reads and does incremental updates to the file for writes. Any updates to the file without the emulator's knowledge will result in a desync between the emulator and the file on disk, and corruption of the emulated disk.
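The retry-loop idea mentioned for floppy images can be sketched in a few lines. This is my illustration of the concept, not Altirra code; the backoff parameters are arbitrary:

```python
import time

def write_with_retry(write_fn, attempts: int = 5, delay_s: float = 0.2):
    """Retry a write when a transient sharing-violation style error occurs,
    on the assumption that a sync client holds a short exclusive lock."""
    for attempt in range(attempts):
        try:
            return write_fn()
        except OSError:
            if attempt == attempts - 1:
                raise               # give up: surface the error to the caller
            time.sleep(delay_s)     # back off and hope the lock is released

# Toy demonstration with a write that fails twice, then succeeds.
calls = {"n": 0}
def flaky_write():
    calls["n"] += 1
    if calls["n"] < 3:
        raise OSError("sharing violation")
    return "ok"

result = write_with_retry(flaky_write, delay_s=0.01)
print(result, calls["n"])
```

As the post notes, this only helps when writes are brief and intermittent; it does nothing for a hard drive image that must stay open read-write the whole session.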
  14. Sorry, I should clarify -- for this one there are no concerns, I'm just lazy. Will get to it in a future test release. It should be reasonably accurate, since that option works by scaling down the color contribution before color correction takes effect. Black-and-white TVs effectively do the same by ignoring the color subcarrier (imperfect filtering or duty cycle aside). It's just not as convenient as a separate option, as you'd have to whack the setting and remember what it was before. Perhaps a more important concern is how white black-and-white displays actually were -- probably not the same white as #FFFFFF on a modern display. Most of the mono displays with video input that I used were amber or green, but I had a TRS-80 Model 4 that I think had a bluish-white display.
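A minimal sketch of the "scale the color contribution to zero" idea: with chroma gone, a mono display effectively sees only luma. The Rec. 601 weights below are a standard choice for illustration; the emulator's actual color pipeline may differ:

```python
def luma(r: float, g: float, b: float) -> float:
    # Rec. 601 luma weights: what remains once chroma is scaled away.
    return 0.299 * r + 0.587 * g + 0.114 * b

white = round(luma(1.0, 1.0, 1.0), 6)  # white stays full brightness
red = round(luma(1.0, 0.0, 0.0), 3)    # saturated red keeps only ~30% brightness
print(white, red)
```

A phosphor tint (amber, green, bluish-white) would then just multiply this single luma value by a fixed RGB color, which is why "how white was white" matters for the result.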
  15. Sorry, I have too many other things going on to work on RastaConverter right now -- I don't even remember which computer I had the build tree on -- but I do have some ideas, if anyone else jumps in.

     It's not necessarily as simple as just emulating what the actual hardware does, because of the existing emulations that don't faithfully reproduce all the bugs. This includes not only software but now also hardware emulators, where GTIA's P/M graphics logic is replicated in FPGA. It's also not trivial to mimic the hardware behavior; I had to completely rewrite Altirra's sprite engine at the time to fix it, and RastaConverter's is written differently.

     The idea that I had was, instead of replicating the bugs, detecting the problematic cases and forcing the error to $HUGE_VALUE to kill the evaluation. This would just cause RC to avoid the problematic cases. I don't think this will affect quality, as the corner cases in question aren't very useful and don't seem to be hit much, given how huge this thread has gotten without too many instances. This would also avoid carrying additional state from one scanline to the next, which would otherwise be necessary to emulate some of the cases in question and would complicate evaluation.

     Another approach would be to just add a tactical nuke function to the program, where you can tell it to rewrite the bad line to do nothing but set existing state for the next line, and thus have it begin re-iterating that line from scratch. Still a manual process to notice and correct the issue, but easier to fix and potentially useful in other cases.

     If you are using Altirra to track down the offending line position, Alt+clicking on the display when the debugger is enabled will display the scanline number under the cursor. In the most recent test versions, Alt+Shift+click will also drop you in the vicinity of that beam position in the history, to show what code was being run around that time.
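The "force the error to $HUGE_VALUE" idea reduces to a tiny penalty wrapper in the evaluator. The names and structure below are hypothetical, purely to illustrate how the optimizer would steer itself away from the divergent cases without anyone emulating them:

```python
# Hypothetical sketch: penalize candidates that hit a known-divergent GTIA
# sprite corner case instead of trying to emulate the hardware bug.
HUGE_ERROR = float("inf")

def evaluate_line(pixel_error: float, hits_divergent_case: bool) -> float:
    if hits_divergent_case:
        return HUGE_ERROR   # kill this candidate; the optimizer will avoid it
    return pixel_error

ok = evaluate_line(12.5, hits_divergent_case=False)
bad = evaluate_line(0.0, hits_divergent_case=True)
print(ok, bad)
```

Because a hill-climbing search only keeps candidates with lower error, an infinite penalty makes the problematic cases unreachable, with no extra cross-scanline state to carry.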