
Best practices for sensing disks and content, creating multi-disk software?


jmccorm


For pieces of software which span multiple (3 or more) disks:

Before I reinvent the wheel, are there any best practices for how assembler applications should handle this situation?

 

There are multiple user situations such as:

  • One ordinary-sized physical or virtual drive. The system needs to recognize this simple one-drive situation, sense which disk is in the drive, ask the user to swap (or flip) disks, sense the new disk, and verify it before loading.
  • Multiple physical or virtual drives, with each disk already sitting in its own drive.
  • An in-between case with multiple drives, but not enough drives to hold every disk (such as 3 drives but 5 content disks to swap between). There's the potential side-issue of optimizing which disk you suggest the user remove (to avoid disk thrashing, where you turn around and ask them to put back the disk they just removed), but this seems like an application-specific problem for the programmer to puzzle out.
  • A large physical drive (double-sided / double-density) or a virtual mega-drive that has multiple application disks copied onto it in such a way that the software can identify and locate all of the disks' contents on that single volume.

Is this a problem that has been discussed and solved as a general case? From the programmer's point of view, what's the best way to handle disk and drive identification, determining content location (what files on what disks), disk swapping, etc? Are there any good code snippets or flowcharts to work from?

 

Very rough estimate: I'm currently looking at somewhere between 512KB and 1024KB across multiple programs and multiple data files. It can easily skyrocket from there. I was developing my own idea of how to determine which file is going to be on which disk, how to see which disks are currently in the system, how to ask the user for a specific disk, that sort of thing. Is there a best practice, guide, or some good code snippets for this?

 

Also, I've written plenty of code which does disk I/O (both on the Atari 8-bit and on more modern machines) but I haven't done any disk I/O in assembly language on the Atari 8-bit. Can anyone recommend some good Assembler disk I/O guides? Is async I/O possible on the Atari 8-bit in a way that doesn't freeze up the system, or are there careful ways that I/O needs to happen to minimize disruption? Is it possible just to do some simple disk sensing without disruption?

 

Thanks again all,

the jmccorm

 


512KB+ is quite a lot of data to put on disk. I'd suggest using a solution that is friendly to both high-speed operation and emulated megadisk storage. If you really wanted to be authentic, you could of course subject everyone to swapping 90K disks. Unless you're planning on shipping on physical disks, though, a large proportion of people will be able to use megadisk images. Those who don't would have trouble getting your software onto physical disks anyway.

 

If you do decide to support multiple disks, put an ID on each disk, check it on swaps, and allow retries if there is a mismatch. "One attempt at pass or crash" gets old really quick.

 

Raw sector-level disk I/O is easy to do with the Atari OS since it's effectively abstracted with logical block addressing. It's basically just write drive number to DUNIT, buffer pointer to DBUFLO/HI, command to DCOMND, sector number to DAUX1/2, JSR DSKINV. Single density disks are just 128 byte sectors numbered 1-720, enhanced density 1-1040, and double density 1-720 with a little more work for 256 byte sectors. The Atari Operating System Manual has everything you need to know. On top of that, every kind of disk hook on the planet knows about the high level DSKINV and low level SIOV entry points, so this will also get you support for emulator acceleration and running off of IDE/CompactFlash hard drives, disk images on cartridges, Parallel Bus Interface, OS-based ramdisks, etc.
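For reference, a minimal sketch of that recipe in generic 6502 syntax (the DCB and DSKINV addresses are the standard OS ones; buffer, sector, and the error handling are placeholder choices to adapt, and directives may need adjusting for your assembler):

DUNIT   = $0301         ; drive number
DCOMND  = $0302         ; command byte
DBUFLO  = $0304         ; buffer pointer lo
DBUFHI  = $0305         ; buffer pointer hi
DAUX1   = $030A         ; sector number lo
DAUX2   = $030B         ; sector number hi
DSKINV  = $E453         ; resident disk handler entry
buffer  = $0400         ; any free 128-byte buffer (example choice)
sector  = $CB           ; zero-page word holding the sector number (example choice)

readsec lda #1
        sta DUNIT       ; drive 1
        lda #$52        ; 'R' = read sector
        sta DCOMND
        lda #<buffer
        sta DBUFLO
        lda #>buffer
        sta DBUFHI
        lda sector
        sta DAUX1
        lda sector+1
        sta DAUX2
        jsr DSKINV      ; OS performs the SIO transfer
        bmi readerr     ; N set on return = error code in Y
        rts             ; success: sector data is in buffer
readerr rts             ; inspect Y, retry, or prompt for the right disk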

 

The Atari hardware fully supports asynchronous disk I/O operations. The OS doesn't. This means that you would need custom disk routines. Although there are readily available ones, I'd discourage doing this for a few reasons. First, it takes about 20% of the CPU and also requires partially limiting audio and strictly limiting display interrupts. Second, doing so will exclude most forms of fast disk access -- high-speed SIO, PBI disk, and cartridge-based disk emulation. For the data sizes you're dealing with that's potentially a difference in minutes of load time. Third, many of the async I/O routines I have seen are buggy and fail more often than standard routines. Little things like transmission errors and protocol delay violations can screw you. I suppose it might be neat if you could do some form of streaming, but with an effective data rate of ~1KB/sec it's hard to do much interesting.


The thought I had when contemplating this sort of thing:

 

Use a filing system, though Atari DOS 2.x has the disadvantage of only 10 bits of sector-pointer info, which gives annoyingly small disk sizes, i.e. 1,024 sectors times the sector size (128, 256, or 512 bytes).

 

For disk IDing, have filenames like DISK01.ID, DISK02.ID. On the smallest-capacity media there would be one such file per floppy; on higher-capacity media that can hold the contents of two or more disks, two or more of the ID files would reside there.

The ID files are just dummy entries to flag the correct volume. Maintain an index of your resource files and which disk each is expected on; then, on a media change (user presses fire or a key), look for the relevant ID file. Its presence should indicate your required resource files are also present.
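As a sketch of that ID-file check through CIO (standard OS addresses; IOCB #1, the filename, and the labels are illustrative choices):

CIOV    = $E456         ; CIO entry point
ICCOM   = $0342         ; command byte (+X = IOCB offset)
ICBAL   = $0344         ; filename pointer lo
ICBAH   = $0345         ; filename pointer hi
ICAX1   = $034A         ; aux 1: 4 = open for input

; Returns with N clear if the expected ID file exists on the mounted disk
chkdisk ldx #$10        ; IOCB #1
        lda #3          ; OPEN
        sta ICCOM,x
        lda #<idname
        sta ICBAL,x
        lda #>idname
        sta ICBAH,x
        lda #4          ; input mode
        sta ICAX1,x
        jsr CIOV        ; N set (error code in Y) if the file isn't there
        php             ; remember the open result
        lda #12         ; CLOSE
        sta ICCOM,x
        jsr CIOV
        plp             ; restore the open result for the caller
        rts
idname  .byte "D1:DISK02.ID",$9B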

 

Sensing disks at the hardware level - the Atari drives support the Status command, which can indicate stuff like density, door open, etc., but it's rarely if ever used.
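For completeness, that Status command goes through the same DSKINV interface as sector I/O (command $53; the OS drops the four returned status bytes into DVSTAT at $02EA). A minimal sketch:

DUNIT   = $0301
DCOMND  = $0302
DSKINV  = $E453
DVSTAT  = $02EA         ; 4-byte status buffer filled by the OS

getstat lda #1
        sta DUNIT       ; drive 1
        lda #$53        ; 'S' = status request
        sta DCOMND
        jsr DSKINV      ; N set if the drive doesn't answer at all
        rts             ; DVSTAT+0/+1 now hold command/controller status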


I'd suggest using a solution that is friendly to both high-speed operation and emulated megadisk storage. [ ...] If you do decide to support multiple disks, put an ID on each disk, check it on swaps, and allow retries if there is a mismatch. "One attempt at pass or crash" gets old really quick.

 

Raw sector-level disk I/O is easy to do with the Atari OS since it's effectively abstracted with logical block addressing. [ ... ] On top of that, every kind of disk hook on the planet knows about the high level DSKINV and low level SIOV entry points, so this will also get you support for emulator acceleration and running off of IDE/CompactFlash hard drives, disk images on cartridges, Parallel Bus Interface, OS-based ramdisks, etc.

 

The Atari hardware fully supports asynchronous disk I/O operations. The OS doesn't. This means that you would need custom disk routines. Although there are readily available ones, I'd discourage doing this for a few reasons.

 

Normally, I'd take admonishments of what should or shouldn't be done with a grain of salt. From you? I think I'll respect that. Part of what I was after was to avoid a potential problem with blocking on disk I/O during an error condition. If there is real-time activity that I want to continue running during an I/O attempt in my main code (even during some soft disk-ID sensing), perhaps I should hook my live/interactive events to be triggered by a DLI?

 

Good advice on mismatch retries, will do. Now I'm agonizing over the choice between using DSKINV with a raw disk, or using DSKINV on a disk with a DOS filesystem (which would make manipulating multiple files easier during development). I wouldn't be against using the actual Atari DOS routines, but I know that DOS holds onto a lot of memory that I'm going to want. The downside of using a raw disk is that it may prevent people (now or in the future) from diagnosing/resolving some of their own issues, and from consolidating multiple low-density disks onto a higher-density medium, which I'd prefer they be able to do for themselves (but it's not a requirement, since I could rebuild for that).

 

On the Atari side, does anyone offer a "lite" DOS-compatible read-only library?

 

On the PC side, can you recommend a tool which takes command line arguments to add files to an ATR image with a DOS format? And the same for placing binaries at different offsets onto an ATR image with no filesystem?

 

Use a filing system. [...] For disk IDing, have filenames like DISK01.ID, DISK02.ID. [...] The ID files are just dummy entries to flag the correct volume. [...] Sensing disks at the hardware level - the Atari drives support the Status command, which can indicate stuff like density, door open, etc., but it's rarely if ever used.

 

If I go with a filesystem, I think that'll be the method that I use. That will also merge well with another concept I was working with, which was one (or more) supplemental disks that, months later, would update some targeted pieces of existing code and data. The main disk is going to maintain some live session data and I don't want that disk replaced on an update. Which reminds me...

 

I have to see if there is a way to get real disk writes from emulators so that the code doesn't think that every time is the first time it has been launched. Those virtual disk writes are a continuity killer. If I can't do it programmatically, I probably need to figure out some way to entice the user into intentionally saving their disk image every time they use it, like with some sort of customization. I've got no way to detect that they didn't save the last time they fired it up, so on the other side of the carrot and the stick, I can't create any direct punishment for not saving.

 

I haven't yet found a 'door open' status in a register, but I'll look some more. I do like one operation I saw, which was to spin up the drive and get ready for a physical I/O operation that is to follow. I wonder if that command fails if the door is open? I really want to know when the door is open because this is, in part, a parody. I'd like to represent the disk-swapping event with an unflattering/ironic/amusing on-screen representation. Like a pizza guy opening up the oven, taking out the baked pizza, and putting a fresh new one in. (Probably not that one specific analogy, but that's the general idea.)

 

On the other hand, disk swapping and door-status sensing are something that could be turned into a compelling mechanic (as part of a larger mini-game) if used sparingly. Hmmmm.....

 

and try to make it so the Atari only prompts for the disk change if the status says to or the file name isn't found... checking first is best

 

I'm starting to see that people have a lot of bad history with multi-disk software. Glad I'm asking. Will do.


And you might go a step further and have the computer prompt the user to make sure all drives are on and all disks loaded; the computer then polls the drives and uses that information for the disk-swapping logic. If there is a problem, it can fall back on allowing the user to enter how many drives are connected and use that information instead.


Disk-swap management from 1 to 4 standard 90K drives, on up to 15 double-sided, double-density drives, can be considered, since that's what was possible without much modification on the floppy side. If you want to go insane, you could consider hard/fixed disk management as well; that's 15 drives of 32 megabytes each, times who knows how many (96?) or more partitions. Ah, but what about Mathy's CD solution... I just can't stop myself :)


Hmmm, AFAIK the "door open" status works with a 1050 drive; I'm not sure if it works with an 810 drive, and it does not work with an XF551 drive. Not sure if other floppy drives support the door-open status. Some demos used this 1050 feature and therefore did not work on the XF551 (e.g. The Big Demo by HTT; there is an XF551 patch available for it), so I would not use it. Nowadays a lot of Atarians are using SIO2xxx devices, and I think it does not work there either. Not sure if any of the emulators support this 1050 feature...

 

Attached is an old 1050 drive demo by AMC-Verlag which shows this "door open" status and a few other things (write-protect switch on/off, Power LED on/off, Busy LED on/off). Simply test it on a 1050 drive!

 

WRSWITCH.zip


Normally, I'd take admonishments of what should or shouldn't be done with a grain of salt. From you? I think I'll respect that. Part of what I was after was to avoid a potential problem with blocking on disk I/O during an error condition. [...] perhaps I should hook my live/interactive events to be triggered by a DLI?

The problem with doing this is that DLIs are non-maskable interrupts and thus run at highest priority. Exceed 8 scanlines within a DLI and you are guaranteed to cause a serial input overrun and fail a transfer at 19200 baud. That's less than 900 cycles of run time.

 

If you've just got a little bit to run, like some color cycling or a counter, you can run it off of the VBI or a POKEY timer and safely interleave it with the standard SIO routines. Any sort of actual game logic and you're going to need a custom disk routine. The standard routines use the foreground task only for polling for completion of the interrupt-driven transfer, but you have no guarantee of that. With PBI or a high-speed SIO routine, it's more likely straight polling with IRQs off. What you don't want to do is try to wedge your own meaty foreground task into the standard SIO routines -- that way lies madness. Easier to use a custom disk routine where you actually know what code you're hooking into rather than some unknown and potentially patched OS.
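A minimal sketch of the VBI variant, using the OS SETVBV/XITVBV vectors (the color cycle is just a placeholder effect):

SETVBV  = $E45C         ; set VBI vector: A=6 deferred, X=hi, Y=lo
XITVBV  = $E462         ; exit point for a deferred VBI
COLOR4  = $02C8         ; background color shadow

install lda #6          ; deferred VBI (runs after the OS stage)
        ldx #>myvbi
        ldy #<myvbi
        jsr SETVBV      ; SETVBV updates the vector atomically
        rts

myvbi   inc COLOR4      ; cheap once-per-frame color cycling
        jmp XITVBV      ; hand control back to the OS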

 

Now I'm agonizing over the choice between using DSKINV with a raw disk, or using DSKINV on a disk with a DOS filesystem (which would make manipulating multiple files easier during development). [...] The downside of using a raw disk is that it may prevent people from diagnosing/resolving some of their own issues, and from consolidating multiple low-density disks onto a higher-density medium. [...]

Your choices here are either DSKINV/SIOV with a raw disk or going through CIO to a DOS filesystem. Direct DSKINV + DOS filesystem doesn't really make sense unless you are writing DOS itself. There's no need to do that when there are a variety of off-the-shelf DOS-based loaders and mini-DOSes to use. If you manage to use plain DOS calls through CIO, then you will have the most flexibility, as you will be able to run on any DOS. The good news is that most DOSes will be able to do burst I/O to read sectors directly into target buffers, so you get about the same throughput as direct raw sector access.

 

The main issues with using a DOS filesystem are file count/size and memory. Classic Atari DOS 2.x filesystems only support 64 files max and they are very slow to seek in large files -- as in, read the entire file up to the seek point. The MyDOS format lifts the file count limitation via subdirectories and supports large disks but is still very slow to seek. They'll also cost you around 6K of RAM unless you use a mini-DOS replacement. The SpartaDOS filesystem is much faster at seeking but requires an even heavier runtime. None will be able to beat raw sector access for simplicity and code size, as you practically just need a list of start sector and sector counts for the files and a very small read loop.
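To make that last point concrete, here is roughly what a raw-disk "filesystem" can shrink to (the table layout and names are hypothetical, and readsec is a DSKINV sector read like the sketch earlier in the thread):

tmp     = $CD           ; zero-page scratch (example choice)
count   = $CE           ; sectors remaining

; Hypothetical index: 3 bytes per file = start sector (word) + sector count
filetab .word 4
        .byte 120       ; file 0: sectors 4-123
        .word 124
        .byte 64        ; file 1: sectors 124-187

; Load file number A
loadfil sta tmp
        asl
        clc
        adc tmp         ; A = file number * 3
        tax
        lda filetab,x
        sta sector      ; sector/sector+1 as consumed by readsec
        lda filetab+1,x
        sta sector+1
        lda filetab+2,x
        sta count
rdloop  jsr readsec     ; read one sector into the buffer
        jsr consume     ; hypothetical: copy/advance the destination
        inc sector      ; 16-bit increment to the next sector
        bne samehi
        inc sector+1
samehi  dec count
        bne rdloop
        rts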

 

There is one more advantage to using DOS/CIO, which is that it makes it trivial to also use H: in emulators or PCLink on real hardware for direct host file access. This can potentially be very helpful in development, especially if you hot-reload files. With raw sector access you would need to rebuild the disk image.

 

Ultimately, unless you are making something like an open world streaming game -- which would be cool -- loading is probably not your critical problem and you should try to keep your program design and memory constraints loose enough that you don't need to commit to a particular option yet. This'll also make things easier if you change your mind and need to do something different like load off of a banked cartridge.

 

I have to see if there is a way to get real disk writes from emulators so that the code doesn't think that every time is the first time it has been launched. [...]

This problem is most likely limited to Altirra as I believe it is the only emulator that defaults to virtual write mode, and might even be the only one that even has it. There is absolutely zero way for you to bypass virtual write mode programmatically from within the emulation, by design. It would be a security vulnerability if you could. IMO, this is a problem for me to find a solution for on the emulator side, to make it clear through the UI when the disks are in ephemeral or persistent state and to try to avoid unintentional data loss.

 

I haven't yet found a 'door open' status in a register, but I'll look some more. [...] I really want to know when the door is open because this is, in part, a parody. [...]

 

1050s can sense an open door and will fail read calls immediately. The failure is reported via FDC status bit 7 (not ready). 810s cannot and actually try to bump the head and read sectors with an open drive, taking a couple of seconds to fail. Enhanced firmwares for the 810 detect disk changes by watching for the write protect status to toggle, since it will both be obscured and unobscured at some point during a disk change. However, the drive firmware can check this directly much faster than the computer can by issuing status commands, and there might be a problem doing so (it may require an actual write command).
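From the computer side, a hedged sketch of using this: issue a status request and inspect the controller-status byte the drive returns in DVSTAT+1. Which polarity "not ready" has in that byte varies by drive and documentation, so calibrate against real hardware before relying on it:

DUNIT   = $0301
DCOMND  = $0302
DSKINV  = $E453
DVSTAT  = $02EA

doorchk lda #1
        sta DUNIT
        lda #$53        ; 'S' = status request
        sta DCOMND
        jsr DSKINV
        bmi nodrive     ; no response at all: drive off or unplugged
        lda DVSTAT+1    ; controller (FDC) status image; test bit 7 here
nodrive rts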

 

 


3 weeks later...

The problem with doing this is that DLIs are non-maskable interrupts and thus run at highest priority. [...]

 

If you've just got a little bit to run, like some color cycling or a counter, you can run it off of the VBI or a POKEY timer and safely interleave it with the standard SIO routines. Any sort of actual game logic and you're going to need a custom disk routine. [...] Easier to use a custom disk routine where you actually know what code you're hooking into rather than some unknown and potentially patched OS.

 

You've given some great input, and I've used that to give this a bit more thought. Thank you.

 

GENERAL (HIDING I/O BECAUSE IT IS BORING, TAKING TWO MAJOR PATHS):

I really like the idea of making sure that I/O isn't the thing that shuts everything down while it does its thing. BORING. I think, depending on the situation, I might choose something cheap which tries to stay out of the way of any existing OS routine (like a small graphics window with page flipping, or character-set flipping, to keep a fresh animation on-screen during a long read). Otherwise, my preferred route is going to be a custom I/O routine with a slow, regular stream of I/O. More on that choice a bit later.

 

UNDERCLOCKING (SLOW AND STEADY WINS THE I/O READING RACE):

I've been poring over Chapter 10 of the Altirra Hardware Reference Manual. I see lots of discussion of going faster than the default 19.2 kbaud, but not so much about going slower. (Not as popular a topic, right?) It looks like, universally, the sector reads are going to be buffered, so that much is fine. I'm going to have to read through this chapter a few more times. I know that AUDF3 and 4 are combined into a single 16-bit channel, with a standard divisor (plugged into AUDF3, I believe) of $28 for 19040 baud on NTSC. I only saw it mentioned in passing, but it looks like the frequency generation isn't just the Atari's internal clock but also the clock signal which is sent over the SIO bus? So on both sides, I'm going to be able to dial in the exact transfer rate that I want? (And hope their firmware doesn't barf at something outside the expected range. But I wonder if I can have some really stretched cycles here and there without penalty.) There are going to be exceptions.

 

THE EXCEPTIONS:

This was your number one caution. The standard OS vectors may have custom routines hiding behind them. The underlying media might be completely different than what I expect (a flash cartridge instead of magnetic media), so if I attempt my own routines, they could fail horribly. So what I think I need to do is checksum the OS routines and whitelist the ones that I know I can replace. After that, depending on what I find, either whitelist or blacklist the remaining hardware devices which may or may not cooperate with underclocking. So this goes back to the first paragraph. Where I can whitelist, I can plug in my custom slow-and-steady I/O routine and hide the boring I/O behind the scenes. Where I can't, I might have to use a CPU-light intermission to wallpaper over the delay with eye candy.
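Something like this is what I have in mind for the checksum: since the $E453 entry is a JMP instruction, the handler address sits at DSKINV+1/+2, and a simple additive checksum over the first N bytes of the target can be compared against known-good OS values (the region length, zero-page pointer, and known-good value here are all placeholders):

DSKINV  = $E453
ptr     = $CB           ; free zero-page pair (example choice)

chkos   lda DSKINV+1    ; low byte of the JMP target
        sta ptr
        lda DSKINV+2
        sta ptr+1
        ldy #0
        tya             ; A = 0: running 8-bit sum
sumlp   clc
        adc (ptr),y
        iny
        cpy #64         ; hypothetical region length
        bne sumlp
        cmp goodsum     ; Z set = matches the known-good ROM
        rts
goodsum .byte $00       ; fill in from a verified OS image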

 

THE MISSING 8-BIT COMMON LIBRARIES:

One way or another, I want the program to figure out what will or won't work when it first launches. (Not in the middle.) Even better if it determines a path, stores it on disk, and reuses it on the next launch if it can determine that nothing important has changed. (So that's a small part of my need for avoiding data loss when an emulated virtual disk isn't written back when the emulator closes.)

 

Either the Atari 8-bit is missing a ton of community-created common libraries that I would have expected to be decades old, or I'm looking in the wrong place. Is there a capability sensing/enumeration library? I'm hoping that there is something already out there which will go through a system's configuration and sort out any of the unique configuration parameters that a programmer might want to interrogate, use, or avoid. So we're not just talking about the CIO handler list, but things like dual POKEYs, a 65C02 processor, memory expansion type, storage devices, etc. I know a number of different places where I'd want to use that, but figuring out the OS / driver / device situation is just one specific example where I'd want to whitelist or blacklist software paths based on capabilities.
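In the meantime, the handler side at least looks easy to enumerate: HATABS at $031A holds one 3-byte entry (device letter, handler table address) per installed CIO handler, terminated by a zero byte. A sketch, with record as a hypothetical placeholder:

HATABS  = $031A

scan    ldx #0
sloop   lda HATABS,x    ; device letter: 'D', 'E', 'H', 'R', ...
        beq sdone       ; zero byte ends the table
        jsr record      ; hypothetical: note/display the device in A
        inx
        inx
        inx             ; advance to the next 3-byte entry
        bne sloop       ; table is far shorter than 256 bytes
sdone   rts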

 

ALTIRRA DEBUGGING AND APPLICATION SOUNDS DURING DISK I/O:

I'm spending more time learning Altirra's debugger and I took a look at what was going on with the AUDFn, AUDCn, and AUDCTL registers during disk reads. (Since I can't directly read them from the POKEY, I'm doing a break on updates and sometimes I'm running a manual .pokey interrogation.) Channels 3&4 combined, okay, expected that. I won't touch them. My question was, can I carefully avoid the OS I/O routines and do my own sounds (related or unrelated) during disk I/O?

 

So it looks like if you zero out SOUNDR ($41), the standard OS routine disables noises during I/O by keeping the volume set to 0 on the audio channels. That's fine. Let's put that to the side now and look at the typical situation. I hope I'm not reading this next part wrong, but what I'm seeing is that the clicking sound that happens during disk I/O may not be bleed-through (crosstalk) on the cables, but may instead come from the OS routines regularly switching all of AUDC1 through AUDC4 between A8 and A0 (flipping back and forth between half volume and zero volume). On the surface, it seems like a good explanation for the popping noises on each command.
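(For reference, the quiet-I/O switch is a single byte; the BASIC equivalent is POKE 65,0.)

        lda #0
        sta $41         ; SOUNDR: 0 = silent I/O, nonzero = audible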

 

This is also going to be a problem for doing my own sound during disk I/O (if I'm trying to work alongside the OS routines). I don't see that audio channels 1 and 2 are being used, and I can live with the most significant nibble of AUDC1/2 being set to $A, but it looks like it is also flipping the volume off and on for those two channels, too. I can get around it (by poking AUDC1/2 back or by replacing the routine), but I had hoped it wasn't going to get in my way. Either that, or it is using AUDF1 and AUDF2 in some way that I'm not seeing and understanding.

 

CLOSING:

So that's where I'm at right now. I have a few specific questions here and there, but aside from those, I was just wanting to put this out there and bounce the ideas off of my more experienced peers. Normally, this kind of thing might be a minor consideration, but I plan on pushing a lot of content through the machine. I'm hoping that users will appreciate large amounts of content when I/O isn't the regular foreground activity that brings everything to a grinding halt. If a program is a restaurant, then loading from disk is neither the sizzle nor the steak. It is the guy at the register who stops you in your tracks until they get paid.


I've been poring over Chapter 10 of the Altirra Hardware Reference Manual. I see lots of discussion of going faster than the default 19.2 kbaud, but not so much about going slower. [...] So on both sides, I'm going to be able to dial in the exact transfer rate that I want? [...]

19200 baud is always available. Anything else is solely up to the disk drive and what high-speed protocol it supports. With the majority of drives, there will be only one specific high speed available at most. There may also be none. A plain 1050 or a disk emulator with high speed disabled will support only 19200 baud and nothing else.

 

You can't use a slower speed. It's a wire, not a flow-controlled pipe. The data's coming in at 19200 baud or whatever high speed the drive is using, and you either read it at the correct speed immediately or it gets lost. Flip it around and it's virtually the same for sending to the drive. High speed requires a protocol for the drive to indicate that high-speed mode is supported, and there are several such protocols.

 

19040 baud for NTSC is just a little inaccuracy on POKEY's side; a few percent off between the computer and the drive is fine. Note that when receiving, this causes POKEY to bring in the bits a little slow, but the bytes are still going to come in at the same rate regardless. The discrepancy is made up in the start/stop bits. So again, you can't use this to deliberately slow down the transfer.

 

The SIO bus has clock lines, yes. Almost nothing uses them; the standard disk drive protocol doesn't, and often the pins aren't even connected in the cables and drives. The computer and drive have to agree on the transfer speed.

 

So it looks like if you zero out SOUNDR ($41), the standard OS routine disables noises during I/O by keeping the volume set to 0 on the audio channels. [...] I don't see that audio channels 1 and 2 are being used, and I can live with the most significant nibble of AUDC1/2 being set to $A, but it looks like it is also flipping the volume off and on for those two channels, too. [...]

 

The counters for channels 3 and 4 are required during disk transfers. The beep-beep sound effect is a side effect of the audio circuits for those channels still being active. It's actually a bunch of clock pulses getting sent to the audio circuits during the transfer that causes the tones, not the OS flipping the volumes on and off. Changing SOUNDR tells the OS to keep the channels muted so you don't hear anything, but the ch3+ch4 audio circuits are still active. You can still use them for your own audio, but only in volume-only mode or using the clock pulses from the serial transfer. See Total Daze for an example.

 

Channels 1 and 2 are indeed free during disk transfers. They can be used by the serial port, but typically for full-duplex serial port communications or cassette tape I/O, not SIO commands. The one gotcha is that a regular serial port routine is likely to stomp all over POKEY registers like AUDC1/2, AUDF1/2, and AUDCTL during its transfer, because AUDCTL is needed and partly out of laziness to share common code with cassette routines. That means you can't necessarily just try to play on ch1+2 in the background; it requires some coordination.

 

Crosstalk noise is completely separate from the standard beep-beep-beep sound. It's low in level, though, so it won't be noticed if you're playing music. The popping noises you're hearing may be from when the computer is sending the command frame, since writes to the drive sound like pops instead of tones.

 

 


Those are both quite clever! I'd love to see the source on those, but particularly the first one.

 

I've done a lot more experimentation (mostly via the Altirra debugger) using things like this:

 

ba w D200 l16 "eb AUDC1 A8"
ba w D200 l16 "eb AUDC2 A8"
ba w D200 l16 "eb AUDC3 A0"
ba w D200 l16 "eb AUDC4 A0"
ba w D200 l16 "g"

 

That's the letter "ell" and 16 in the middle. The commands above will silence all the beeping during disk reads. Changing either AUDC3 or AUDC4 to A8 will restore the loading noise, and it can be made softer by using a lesser volume, like A2. I was also able to put notes into AUDF1 and AUDF2 and hear them play normally during disk I/O. But I'm cheating.

 

Experimentation is proving that it does just as Phaeron describes: with extreme regularity, the OS sets AUDC1/2 to A0, which takes away their volume. So if you're playing notes, you've got to catch it right after the I/O routine fires off and change it back (likely resulting in a clicking noise), or you'd have to rewrite part of the I/O routine to get it to leave AUDC1/2 alone and then use an interrupt-driven player with channels 1/2. I'm going to guess that's what the first demo did.

 

I'm still trying to wrap my brain around where the sound of a sector loading comes from. Here is my guesstimate:

 

The loading sound isn't a frequency that's directly entered into the POKEY. As best as I can figure, it is a side-effect of POKEY latching onto the SIO data-input line and reading the data. But how? The POKEY is set to synchronize with the data by producing a 19200Hz/baud clock. The clock is initially halted, but begins when it sees a start bit on the incoming data line. It then expects 8 bits of data and a stop bit. That's 10 pieces of data, each lasting one clock period. It then halts its 19200Hz clock and waits (not long) for the next start bit. It starts the clock, receives the data, shuts off the clock, and waits to synchronize once again.

 

So what I *think* we're hearing on the loading noises is a waveform which inverts every time the POKEY synchronizes to a start bit. In an assumed case, it would produce a "1" for 10 cycles and a "0" for 10 cycles, so you've got 20 cycles inside of 19200Hz time slices. That works out (19200/20) to a tone somewhere around a best case of 960Hz (via Phaeron). The sound waveform is a combination of the communications rate (19.2 kbaud or 19200Hz), which creates a full waveform at the speed at which two bytes of data are pushed through it, plus some extra delays.

 

I'm hoping that's somewhere in the neighborhood. I haven't seen it explained that deep and specific, so that's my best swing at it.


The Atari could do completely silent loading, but the SIO cables have an unshielded audio-in wire so you'll always hear the serial data unless you make a better SIO cable or disconnect audio-in.

I get that - but why even make the noise in the 1st place?


I get that - but why even make the noise in the 1st place?

 

SPECULATION:

They probably wanted to give you some feedback on how a load or save was progressing. Disk and tape I/O are processed through POKEY, so auditory feedback seems like a natural way of doing it. My guess is that they wanted serial bus transfers (disk loads) to sound at least somewhat similar to cassette transfers, and it was done entirely in hardware with no CPU being consumed. This would have been a decision made when they designed the POKEY chip.


 

The counters for channels 3 and 4 are required during disk transfers. The beep-beep sound effect is a side effect of the audio circuits for those channels still being active. [...]

 

 

I get that - but why even make the noise in the 1st place?

 

 

 

SPECULATION: They probably wanted to give you some feedback on how a load or save was progressing. [...]

 

Um, guys, read Avery's post. He told you exactly why the sounds are there, and he did it before either of you posted.


I get that - but why even make the noise in the 1st place?

Well, others have pointed out that the sounds are free (that is, no extra code involved since Pokey's using those channels).

 

But the question you're asking is the aesthetic one. I've debated many times whether it was a good feature or a bad one. In the end, I think it comes down to two things:

 

1. The purchaser of a 400 or 800 had probably never owned a computer before. The sounds wouldn't be unwelcome because you wouldn't know any different.

 

2. Atari was marketing these machines as really friendly, including the feature that there's one central expansion interface. Not only does everything hook up to the same place (well, unless you bought one of the non-SIO printers or modems) but you can hear the devices talking to the computer in a way that spotlights the SIO feature and makes you appreciate what's going on.


The loading sound isn't a frequency that's directly entered into the POKEY. As best as I can figure, it is a side-effect of POKEY latching onto the SIO data-input line and reading the data. [...]

So what I *think* we're hearing on the loading noises is a waveform which inverts every time the POKEY synchronizes to a start bit. [...]

 

Close... you've got a lot of the essential elements.

 

The audio is indeed related to serial bit shifting, specifically asynchronous receive mode. POKEY requires a 2x clock for serial operations because it needs timing for both the leading edge and center of bits -- the leading edge of the start bit resets timers 3+4, and then bit sampling starts one clock period or half a bit later. The divisor for 19200 baud is $0028, which means that the audio circuitry is receiving pulses at 38KHz -- twice the baud rate. However, the audio circuit is effectively a divide by two, so it produces a 19KHz tone. You can't hear this.
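Spelling out the arithmetic (assuming the ~1.78979 MHz NTSC machine clock and the +7 offset POKEY applies to a 16-bit timer):

serial (2x) clock = 1,789,790 / ($0028 + 7) = 1,789,790 / 47 ≈ 38,081 Hz
baud rate         = 38,081 / 2 ≈ 19,040 baud
channel 4 output  = 38,081 / 2 ≈ 19 kHz (the audio circuit's divide-by-two)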

 

The part that you do hear is from this tone getting interrupted. Whenever POKEY is waiting for a start bit, timers 3+4 are frozen, and thus the audio circuit stops toggling. Crucially, the audio circuit receives 19 clock pulses per byte and not 20, so the output effectively inverts once per byte as you have determined. The result is a waveform that can be decomposed as the sum of a square wave at byte rate, plus small bursts of inaudible ultra-high-pitched noise. The byte rate that determines the audible pitch is independent of the bit rate, which is why loading from a 1050 sounds noticeably lower in pitch than from an 810.

 

This only applies when receiving. When sending from the computer to the drive, synchronous mode is used instead and timers 3+4 never stop. All you get is the 19KHz tone and it sounds like a small thump instead of beeping.

 

Note that neither of these noises depend on the data being sent or received. The leakage noise heard when POKEY's audio is turned off, however, does. It manifests as spikes whenever the data line changes, so you can actually hear it changing in character as different data patterns are loaded.


The part that you do hear is from this tone getting interrupted. [...] The byte rate that determines the audible pitch is independent of the bit rate, which is why loading from a 1050 sounds noticeably lower in pitch than from an 810.

Wow. You know that many have waited over 35 years for such a great explanation of how that sound is generated? I feel guilty for reading such a well-crafted answer. Thank you!

