I have always been wary of the "direct-hardware-access" mantra, because the incompatibility that approach engenders has proven to be a pain-in-the-ass, both in the PAST and in the PRESENT, especially now with the emerging wave of next-gen HW upgrades and their corresponding low-level code design and support, which always run aground when they have to deal with direct-hardware relics.
But for tapes ??? Do you really expect a lot of new development in Atari tape hardware ???
And in this case you do want, and even need, to go to the hardware at least partially anyway. The OS is not designed for what we are doing. You can reuse portions of the OS, but that might be even riskier than going directly to the hardware, because then you depend on specific OS versions. And the OS is not reliable at higher bit rates; it is buggy even at the standard bit rate.
Now, with respect to ECC support in tape-loading code, I intuitively sense that there has to be a "coded density" sweet spot for the actual combination of encoding and media at hand... one at which error correction and overall throughput efficiency can both be sustained.
In other words: what is the MAXIMUM bits-per-hertz that can be encoded and recorded (and then read back and corrected) with the Atari computer as host and a stock Atari 1010? Has anyone explored or measured this?
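To make the "sweet spot" idea concrete, here is a rough back-of-the-envelope model in Python. Every number in it is an assumption of mine picked for illustration, not a measured Atari/1010 figure; the point is only that raw rate times code rate times the chance a block still decodes has a maximum somewhere below the media's raw limit.

```python
from math import comb

# Rough model of the "coded density sweet spot".  Every number below is an
# assumption picked for illustration, not a measured Atari/1010 figure.

def p_block_ok(ber, block_bits, correctable):
    """Probability that a block has at most `correctable` bit errors,
    assuming independent bit errors (a big simplification for tape)."""
    return sum(comb(block_bits, k) * ber**k * (1 - ber)**(block_bits - k)
               for k in range(correctable + 1))

def goodput(raw_bps, code_rate, ber, block_bits=1024):
    """Payload rate scaled by the chance the ECC can actually repair the block."""
    correctable = int((1 - code_rate) / 2 * block_bits)   # crude ECC capability
    return raw_bps * code_rate * p_block_ok(ber, block_bits, correctable)

# Hypothetical curve: pushing density raises the raw rate linearly, but the
# bit error rate climbs much faster, so goodput peaks somewhere in between.
for density in (1.0, 1.5, 2.0, 2.5, 3.0):
    raw = 600 * density            # 600 bps standard rate as the baseline
    ber = 0.004 * density ** 4     # invented error-growth law
    print(f"{density:.1f}x density -> {goodput(raw, 0.8, ber):7.1f} bps goodput")
```

With these made-up constants the goodput climbs up to about 2x the standard density and then collapses; the real curve would obviously depend on the deck, the tape and the modulation, which is exactly the question.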
There is no simple, absolute or unique answer to this. It depends a lot on how it is recorded, on the tape, and of course on the specific unit. Some are better than others.
I ask this because, in my humble opinion, this should be the actual foundation of this discussion, and it would help us evaluate the worthiness of ECC, or any other error-correction approach.
At this point, my intuition tells me retry-by-rewind is the overall best way to handle tape-loading crap, assuming the encoded density does not push the magnetic media to its bare limits...
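Just so we are talking about the same thing, this is roughly how I picture retry-by-rewind working. The helper names below are hypothetical placeholders for the real cassette I/O, not anyone's actual loader; the point is just the control flow: numbered blocks, a checksum per block, and re-scanning after a manual rewind instead of aborting the load.

```python
# Sketch of retry-by-rewind as I understand it.  read_block, checksum_ok and
# ask_user_to_rewind are placeholders standing in for the real cassette I/O.

def load_tape(total_blocks, read_block, checksum_ok, ask_user_to_rewind):
    data = []
    while len(data) < total_blocks:
        block = read_block()                 # (block_id, payload), or None on noise
        if block is None:
            continue                         # framing error: keep scanning the tape
        block_id, payload = block
        if block_id != len(data) or not checksum_ok(payload):
            ask_user_to_rewind(len(data))    # user rewinds a bit past the bad spot
            continue                         # then wait for the wanted block again
        data.append(payload)
    return b"".join(data)
```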
Well, I never said that retry-by-rewind is a bad approach. I was the one who invented this method, after all. Heh. I just said that the error correction idea is valid and it can certainly be used.
In my case, I am using compression, which in many cases is more useful than anything else. I am decompressing on the fly at zero cost. With error correction that wouldn't be possible anymore: you can't decompress on the fly if the data is not valid yet. That's a problem similar to the one vitoco mentioned when he used a XEX structure. The data would need to be decompressed at the end, or at least after the error was corrected. But then it won't be zero cost anymore: it might take some time to decompress several blocks, and you need space to store the temporary compressed data.
I would say that depends on the specific case. If you don't need a XEX structure, and if compression wouldn't reduce the number of blocks by much anyway, then error correction might be a reasonable alternative. But I think we agree that a single redundant block is not enough; you need one every so many blocks. That would increase overall loading time, of course.
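For what it's worth, the cheapest flavor of "one redundant block every so many blocks" would be a plain XOR parity block per group, which can rebuild exactly one bad block in its group. A minimal sketch, with an assumed group size and block size just for illustration:

```python
# Minimal sketch of "one redundant block every N blocks": an XOR parity block
# per group can rebuild a single bad or missing block in that group.
# Group size, block size and names are illustrative choices, nothing more.

GROUP = 8            # 8 data blocks + 1 parity block => ~12.5% extra tape time
BLOCK_SIZE = 128

def parity_block(blocks):
    """XOR of all data blocks in the group (each assumed BLOCK_SIZE bytes)."""
    out = bytearray(BLOCK_SIZE)
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def recover(blocks, parity, bad_index):
    """Rebuild the one block that failed its checksum, from the remaining
    blocks plus the parity block.  More than one bad block per group and
    this no longer works -- you are back to rewinding."""
    out = bytearray(parity)
    for idx, blk in enumerate(blocks):
        if idx == bad_index:
            continue
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# Tiny demo: "lose" block 3 of a group and rebuild it from the others.
group = [bytes([n] * BLOCK_SIZE) for n in range(GROUP)]
p = parity_block(group)
assert recover(group, p, 3) == group[3]
```

With 8 data blocks per parity block the overhead is roughly 12.5% extra loading time, and anything beyond one bad block per group still forces a rewind.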
To be honest, if I could go back in time I might explore alternative synchronous modulation. We talked about this long ago. It might be interesting to see if it is possible to do RLL on tapes; that would mean a huge gain in effective bit rate. But even "simple" MFM encoding would be an improvement.
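For reference, the MFM rule itself is tiny; on tape the hard part would presumably be the analog side (speed variation, the 1010's audio path), not the logic. A sketch of the encoding rule in Python, with names of my own choosing:

```python
# The MFM rule, for reference: each data bit becomes a clock bit plus a data
# bit.  The clock bit is 1 only when both the previous and the current data
# bits are 0, which guarantees a transition often enough to recover timing
# while never producing two adjacent 1s (flux reversals) in the channel bits.

def mfm_encode(bits, prev=0):
    """bits: iterable of 0/1 data bits -> list of channel bits (clock, data, ...)."""
    out = []
    for bit in bits:
        clock = 1 if (prev == 0 and bit == 0) else 0
        out.extend((clock, bit))
        prev = bit
    return out

# Example: data bits 1,0,1,1,0 -> channel bits with no two 1s back to back,
# so the flux reversal rate stays bounded while the data rate goes up.
print(mfm_encode([1, 0, 1, 1, 0]))   # [0, 1, 0, 0, 0, 1, 0, 1, 0, 0]
```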