
classic battle atari 8bit vs commodore 64


phuzaxeman


All homecomputers are built to offer a certain set of features at minimum cost. The mapping you suggested would have added external logic, which would have driven costs up.

 

Well, there would have been a variety of ways to do it. I simply thought it a little strange to use a new CPU, though I guess they sold enough C64s to make it worthwhile.

 

Shadowing $FFxx to some other page of ROM shouldn't have required much in the way of extra logic. There's already a PLA on there which needs to decode the top 8 address bits, so mapping things as:

 

$0000-$AFFF -- RAM

$B000-$EFFF -- ROM

$F000-$FEFF -- I/O

$FF00-$FFFF -- Shadow of something lower

 

Shouldn't have cost any more silicon. Though I'll admit that having the $C000-$CFFF block not be used by BASIC was somewhat handy, having a standardized format for relocatable code would have been handier still.

 

Possible format:

Bytes 0-1 -- Format select (for compatibility with the old style, a specific impossible address could indicate a relocatable file)

Byte 2 -- Length in pages (n)

Bytes 3 to n*256+2 -- Data

Bytes n*256+3 to end -- Pairs of bytes giving addresses to be patched

 

Load protocol:

Read header and n

Set memtop = (memtop & $FF00) - n*256

Load n*256 bytes starting at memtop

Until end of file:

Read two bytes (l and h)

Add memtop/256 to the byte at offset (h:l)

Jump to memtop
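The load protocol above can be sketched in a few lines of Python. This is illustrative only: the $FFFF format marker, the assumption that the code is assembled at base $0000 (so each fixup pair is an offset into the loaded image), and the function name are my choices, not part of the proposal.

```python
PAGE = 256

def load_relocatable(data: bytes, memtop: int, memory: bytearray) -> int:
    """Sketch of the proposed loader; returns the entry point (new memtop)."""
    # Bytes 0-1: format select.  $FFFF stands in here for the 'specific
    # impossible address' indicating a relocatable file (an assumption).
    if data[0:2] != b"\xff\xff":
        raise ValueError("not a relocatable file")
    n = data[2]                               # byte 2: length in pages
    memtop = (memtop & 0xFF00) - n * PAGE     # Set memtop = (memtop & $FF00) - n*256
    memory[memtop:memtop + n * PAGE] = data[3:3 + n * PAGE]
    # Trailing pairs (l, h): offsets of high bytes needing relocation.
    # Since the code is assumed assembled at base $0000, each patched
    # byte simply gets memtop/256 added to it.
    fixups = data[3 + n * PAGE:]
    for i in range(0, len(fixups), 2):
        off = fixups[i] | (fixups[i + 1] << 8)
        memory[memtop + off] = (memory[memtop + off] + (memtop >> 8)) & 0xFF
    return memtop                             # "Jump to memtop"
```

For example, a one-page file containing `JMP $0005` at offset 0 would carry a single fixup pair pointing at the high byte of the operand (offset 2), which the loader bumps by the final load page.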

 

Such a standard would have made it easy to have multiple system utilities loaded simultaneously, without conflict.

 

Incidentally, did any system use cassette formats with well-designed error-correction protocols? Commodore's system was not particularly fast, and even though it recorded everything twice, its error-recovery abilities were nowhere near what they should have been for that amount of overhead. For example, if the header of a file gets overwritten, the remainder of the file is completely unrecoverable.

 

Suggested protocol (suitable for VIC-20 or C64).

Each file is divided into chunks of up to 2 Kbytes; each chunk has up to 32 blocks of 64 bytes each. When possible, chunks smaller than 1K are to be eliminated (unless the total file size is below 1K, include two chunks in the 1K-2K range rather than one very short one). Each chunk should contain a header, followed by the data, followed by a footer, followed by three 64-byte blocks of error-correcting info. The header and footer for each chunk should contain a semi-random identifier for the file, a chunk number within the file, the load address, the number of bytes, an indicator of whether it's a header/footer, and the file name. The error-correcting info would simply be the XORs of the data in blocks 0, 3, 6, 9, etc., followed by the XORs of the data in blocks 1, 4, 7, etc., followed by those of 2, 5, 8, etc.
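That XOR scheme is easy to sketch. The Python below is illustrative (the function names are mine); it also shows why up to three *consecutive* lost blocks are always repairable: consecutive block indices fall into three different mod-3 parity groups, and each group can rebuild one missing block.

```python
BLOCK = 64  # 64-byte blocks, per the proposed chunk layout

def xor_block(blocks):
    """XOR a list of 64-byte blocks together."""
    out = bytearray(BLOCK)
    for blk in blocks:
        for i in range(BLOCK):
            out[i] ^= blk[i]
    return bytes(out)

def parity_blocks(data_blocks):
    """The three error-correcting blocks: XOR of blocks 0,3,6,...,
    then 1,4,7,..., then 2,5,8,..."""
    return [xor_block(data_blocks[g::3]) for g in range(3)]

def recover(data_blocks, parity, missing):
    """Rebuild missing blocks.  This only works while each parity group
    (indices mod 3) has at most one missing block."""
    recovered = {}
    for m in missing:
        group = [i for i in range(len(data_blocks)) if i % 3 == m % 3 and i != m]
        assert not any(i in missing for i in group), "two losses in one group"
        # XOR of the surviving group members plus the parity block
        # reproduces the lost block.
        recovered[m] = xor_block([data_blocks[i] for i in group] + [parity[m % 3]])
    return recovered
```

With six data blocks and blocks 2, 3, 4 lost (a three-block dropout), each loss lands in a different parity group and all three come back intact.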

 

The overhead for a protocol like that would be 5 blocks per 32 blocks or portion thereof, plus a small header on each block, but it would easily accommodate the loss or corruption of up to three consecutive blocks per chunk (a pretty big dropout) provided the block headers were intact. If a block header got damaged, the file could not be loaded readily, but a suitably-written utility would still be able to recover it (the utility would have to buffer blocks of data before knowing what should be done with them, and then process them after it reads the footer).

 

Anyone know if any system used anything remotely like that?


The problem with small computer tape systems is that

a - they're primitive to start with, and only record serially (as opposed to IBM tapes, which were 9-track: 8 bits + parity)

b - a single bit error can be enough to cause framing to drop out of sync, thus the entire remainder of the block might become lost

 

As such, shorter block sizes are more reliable. But the Atari was no better - a single bad block would often cause read errors on subsequent blocks as well.

 

The better error correction systems use at least cross-parity systems, and there are even more complex systems. Any decent error detection system these days also has provision for correction of multiple minor errors in a block of data.
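A minimal cross-parity sketch in Python (the names and the tiny 3-byte example are mine, not any particular tape format): a per-byte lateral parity bit locates the bad *row*, and a longitudinal XOR byte locates the bad *column*, so a single flipped bit can be found and repaired.

```python
def encode(data):
    """Append a longitudinal parity byte (the XOR of all data bytes)."""
    lrc = 0
    for b in data:
        lrc ^= b
    return bytes(data) + bytes([lrc])

def correct_single_bit(block, byte_parities):
    """Cross-parity sketch.  byte_parities[i] is the lateral parity bit
    recorded for data byte i; the last byte of `block` is the
    longitudinal XOR.  Exactly one failing byte parity identifies the
    row; the nonzero bit of the LRC syndrome identifies the column."""
    data, lrc = bytearray(block[:-1]), block[-1]
    bad_rows = [i for i, b in enumerate(data)
                if bin(b).count("1") % 2 != byte_parities[i]]
    syndrome = lrc
    for b in data:
        syndrome ^= b                      # nonzero bits = flipped columns
    if len(bad_rows) == 1 and bin(syndrome).count("1") == 1:
        data[bad_rows[0]] ^= syndrome      # flip the single bad bit back
    return bytes(data)
```

This is the simplest member of the family; real systems layered heavier codes (Hamming, Reed-Solomon) on the same row/column idea.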

 

Lots of info linked from http://en.wikipedia.org/wiki/Error_correcting_code

 

The added overhead (disregarding the parity track) was often on the order of 10%, though.

Edited by Rybags


Ahhhh, the good old days of tape.

 

Hey, let's play Zaxxon. Ok, put the tape in, type load, press play. You know it takes 20 minutes to load, so you occupy yourself for a bit and come back 19.5 minutes later. Wee, the counter is up to 297, we're almost there.........

 

"Load error" <blinking cursor>

 

F@$K!!!!!!!!!!!!! G$D D!&N PIECE OF G&D D#@NED SON OF A B&%$H F#!KING S&^T!! <falls to knees and cries>

 

That about sums it up.


b - a single bit error can be enough to cause framing to drop out of sync, thus the entire remainder of the block might become lost

 

I would say the right approach to implementing a cassette storage system would be to expect that anything that corrupts so much as a single bit in a block may just as well trash the entire thing, but arrange things so that the loss of a small number of blocks will be recoverable. It's not difficult, and it doesn't require a whole lot of overhead. But I'm unaware of any serious efforts to implement something like that.


I don't recall error correction utilities like what you describe. Recording multiple streams was the usual drill for everybody I knew.

 

The Atari tape was kind of crappy in my experience. I had both the 410 and 1010 recorders and got the same results. The system would take a file name, but not search on it. It was also picky as hell. Error correction would have mattered, even with the speed hit, IMHO. I saved a lot because it was just not that good.

 

Using C-15 & C-30 normal bias tapes seemed to be the most robust.

 

I would use a lot of tapes to hold programs in progress, two at a shot for major changes, one quick shot for minor ones I could live without.

 

Making final tapes took all day. A C-30 or 60 would hold a fair number of programs, indexed by the counter and 10 count gaps between.

 

I would have totally used the error correction, on an Atari, all things considered.

 

Hate to bring up the CoCo again, but its tape system was really great. It supported filename search, so you could just press play and go, letting it find the right program. This also cut down on indexing and errors. If in doubt, just rewind the thing and let it go. As mentioned already, it was fast. Was very robust too.

 

I liked tape systems where one could use whatever they wanted for recording. As audio tech improved, those systems saw more utility. (CD / HiFi VHS / Reel / Computer / Mp3)

 

The Atari dedicated program recorder was a mistake, IMHO.


I liked tape systems where one could use whatever they wanted for recording. As audio tech improved, those systems saw more utility. (CD / HiFi VHS / Reel / Computer / Mp3)

 

The Atari dedicated program recorder was a mistake, IMHO.

 

Why did dedicated cassette machines run at 1.875"/sec? Making them move the tape faster would have been trivial, and it would have allowed for higher data rates. Though even at 1.875"/sec I've seen custom tape formats on the C64 that could be read faster (using custom code) than could a floppy when using the C64's default loader. One of those tape loaders even showed a title screen and played music while it was loading.


The tape drive was expensive enough as it was, let alone having to use non-standard mechanisms (I remember my drive cost $160 here in 1983)

 

It would have been a trivial issue to have a dedicated cassette interface inside the computer (like so many others used) so that a standard deck could have been used instead.

 

But I guess the "game machine" origins dictated that a minimal system would be low-cost, of course remembering that the original Ataris were very modular and didn't even have BASIC built in.

 

The FSK encoding system is itself flawed in that it needs several waveform transitions per bit, whereas other machines used simpler schemes that allowed the same or better reliability at higher bitrates.


All homecomputers are built to offer a certain set of features at minimum cost. The mapping you suggested would have added external logic, which would have driven costs up.

Well, there would have been a variety of ways to do it. I simply thought it a little strange to use a new CPU, though I guess they sold enough C64s to make it worthwhile.

Why is it strange? If you own MOS and can make any modification to any IC, then it's a good choice to build the functions you want into parts you are about to add to your new computer anyway.

 

Shadowing $FFxx to some other page of ROM shouldn't have required much in the way of extra logic. There's already a PLA on there which needs to decode the top 8 address bits, so mapping things as:

The PLA is just a logic grid; there are no registers or anything else in it. Btw, in the C128 they had MMU registers mapped to $FF00, which replaced the CPU port functions. However, I didn't like this solution, since you couldn't put a bitmap at $E000 anymore.

 

Shouldn't have cost any more silicon. Though I'll admit that having the $C000-$CFFF block not be used by BASIC was somewhat handy, having a standardized format for relocatable code would have been handier still.

Relocatable code? Those 8-bit computers don't even have any kind of AllocMem in the kernel, let alone anything more...

 

Such a standard would have made it easy to have multiple system utilities loaded simultaneously, without conflict.

Yes, but such standards only got common for homecomputers in the 16 bit age. You are asking for too much.

 

Incidentally, did any system use cassette formats with well-designed error-correction protocols? Commodore's system was not particularly fast, and even though it recorded everything twice, its error-recovery abilities were nowhere near what they should have been for that amount of overhead. For example, if the header of a file gets overwritten, the remainder of the file is completely unrecoverable.

Actually every bit is stored 4 times. The file is stored 2 times, and when it gets stored, a positive and a negative version of every bit is recorded. Well, and then there is parity for every byte too. You CAN make a very sophisticated recovery program for that, but then again tape was crap anyway and everybody switched to diskettes as soon as possible.
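Per-byte recovery from that redundancy could look roughly like this. A sketch only: the function name is mine, and the exact parity convention on the tape is an assumption (here, simply the popcount of the byte modulo 2).

```python
def popcount_parity(b: int) -> int:
    """Parity bit convention assumed here: popcount of the byte, mod 2."""
    return bin(b).count("1") & 1

def recover_byte(first_pass, second_pass, parity_bit):
    """The file is on tape twice, and each byte carries a parity bit.
    When the two passes agree, take the byte; when they disagree, keep
    whichever copy's parity checks out."""
    if first_pass == second_pass:
        return first_pass
    if popcount_parity(first_pass) == parity_bit:
        return first_pass
    if popcount_parity(second_pass) == parity_bit:
        return second_pass
    return None  # both copies damaged: unrecoverable at this layer
```

This is exactly the kind of "very sophisticated recovery program" that the raw format would have supported but that, as noted, nobody bothered to write once diskettes arrived.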


Later Ataris have relocatable handler support for PBI devices, although I can't think of many things that used it.

 

But, the nature of the 6502 is that it is a poor processor so far as allowing for relocatable code. At least the 68000 brings in the base-register concept, and has instructions to do long branches, Branch to Subroutine, and Trap # functions.


The C64 definitely loses to the A8 when it comes to looks.

 

What would you rather have on your desk?

 

post-6369-1185724615_thumb.jpg

 

post-6369-1185724630_thumb.jpg

 

Don't forget that only the XL series looks that good. Comparing the C64 II with the XE, the C64 wins.

 

Nah... XE line has sleeker looks than the 64C (II). Plus, since the 64C was released later than the XE, it looks like there may have been some design copying (notice the similarities).

 

post-6369-1185726470_thumb.jpg

 

post-6369-1185727225_thumb.jpg


It would have been a trivial issue to have a dedicated cassette interface inside the computer (like so many others used) so that a standard deck could have been used instead.

 

I agree that using a custom dedicated tape drive was a big mistake. An interface for a standard deck would have been much better. There was no need to include that in the computer. You could make such an interface as a small optional box that would connect to the SIO port.

 

And yes, the FSK idea, at least as implemented is ugly. IMHO, the tape-subsystem is the worst component of the A8. Ok, the separate audio channel is really nice, but it was barely used.

 

Error correction would have been possible. It is also possible to implement error recovery (retry, not error correction) by software only (without the need of restarting from the beginning).

 

Why did dedicated cassette machines run at 1.875"/sec? Making them move the tape faster would have been trivial, and it would have allowed for higher data rates.

 

Hmm, I'm not sure it would be very helpful here. The slow data rate is related to the very primitive electronic logic, and it doesn't take state-of-the-art hardware to increase it. Look at those hardware tape turbo units developed in Europe (true, they were developed years later).


But, the nature of the 6502 is that it is a poor processor so far as allowing for relocatable code. At least the 68000 brings in the base-register concept, and has instructions to do long branches, Branch to Subroutine, and Trap # functions.

The 6809 was actually very good for relocatable code and OS-9 took advantage of that.

It also had two stack pointers so you could use one for an OS and the other for user programs.

Most people just used the user stack pointer like another index register.

The 68HC11/12 micro-controllers are very similar to the 6809 but they dropped the user stack pointer and made some instruction changes.


The tape drive was expensive enough as it was, let alone having to use non-standard mechanisms (I remember my drive cost $160 here in 1983)

 

Changing the tape speed on a typical computer tape deck would require nothing more than changing a resistor. Since the dedicated tape decks had other changes in the electronics (at least the better ones), I wouldn't think that should have been an issue at all.

 

It would have been a trivial issue to have a dedicated cassette interface inside the computer (like so many others used) so that a standard deck could have been used instead.

 

A dedicated tape deck, even at 1 7/8"/sec, can support significantly higher data rates than would have been practical with a standard one using 1980's electronics (using modulation techniques similar to those of 14,400-baud modems would have allowed good data rates, but the electronics would have been prohibitively expensive). As to why computer manufacturers completely failed to take advantage of this, I have no idea.


And yes, the FSK idea, at least as implemented is ugly. IMHO, the tape-subsystem is the worst component of the A8. Ok, the separate audio channel is really nice, but it was barely used.

 

When using an analog tape deck, it's not possible to do much better than an FSK system reliably. But when using a digital tape deck like that on the Commodore [don't know about the Atari] other encoding methods become available. I'm not sure if the Commodore's hardware allows reading both rising and falling edges, though. No good reason for it not to, but that doesn't mean anything.

 

Hmm, I'm not sure it would be very helpful here. The slow data rate is related to the very primitive electronic logic, and it doesn't take state-of-the-art hardware to increase it. Look at those hardware tape turbo units developed in Europe (true, they were developed years later).

 

Look at the software tape turbo routines invented for the Commodores. Tape loaders faster than the default floppy loader.


Hate to bring up the CoCo again, but its tape system was really great. It supported filename search, so you could just press play and go, letting it find the right

I think the CoCo used transitions between low and high rather than peaks or lows to determine what data was on the tape. I believe TANDY had a patent on it and it made the CoCo tape interface less sensitive to noise.

The Model I tape interface had been a little flaky even though it ran at a very slow baud rate. Level I BASIC was something like 250 baud and Level II was 500.

Tandy obviously learned their lesson.


Cassette tape systems were based on tone-decoding, much like how tone dialing works. The signal on the tape would alternate between two frequencies, and a simple circuit would recognize which one and toggle a data bit. It would take a certain number of cycles for the detector to change states, so your two frequencies had to be higher if you wanted faster rates. Higher frequencies demand better QC and more maintenance, and the higher bit rate means tape dropouts are more of a problem. Everyone picked a value that they thought would lead to the fewest service calls, I bet.

 

On the Atari, Pokey generates the tones for recording, but the player has the hardware to decode them, allowing the tape drive to talk on the same serial bus as everything else, although without any command control.


When using an analog tape deck, it's not possible to do much better than an FSK system reliably. But when using a digital tape deck like that on the Commodore [don't know about the Atari] other encoding methods become available.

 

What are you talking about? The C64 tape was as much analog as the Atari. The difference is that the C64 encoded at the interface, while the Atari encoding was done by the computer.

 

Look at the software tape turbo routines invented for the Commodores. Tape loaders faster than the default floppy loader.

 

I know, and there are turbo loaders for the Atari as well. But what's the relation between this and getting higher data rates by using faster tape speeds?


Cassette tape systems were based on tone-decoding, much like how tone dialing works. The signal on the tape would alternate between two frequencies, and a simple circuit would recognize which one and toggle a data bit.

 

The circuit would translate from analog to digital. But the digital output doesn't necessarily mean a "data bit". You can use digital encoding techniques combined with FSK. We talked about this some time ago, if you remember, and I understand that's precisely what some C64 turbo software does.

Edited by ijor

What are you talking about? The C64 tape was as much analog as the Atari. The difference is that the C64 encoded at the interface, while the Atari encoding was done by the computer.

 

My understanding was that the C64 directly controlled, and read, flux reversals on the tape. Audio tape decks use a bias oscillator.

 

To use a slightly crude analogy, if one were to attempt to print a photograph as straight grayscale using most types of ink-based press, the results would be terrible. There's no good way on most types of presses to render a 50% gray. On the other hand, if one were to convert gray shades into dot patterns, one could achieve decent results even with a fairly crude press. Provided one didn't overly push the resolution limits, a pattern of dots which comprised 50% of the area on the page would render as a 50% gray.

 

Audio tape decks always record using a bias generator which in effect turns the continuous audio signal into a pattern of strong highs and strong lows. This greatly improves audio fidelity, but is of no benefit when simply storing '1's and '0's. Further, to avoid aliasing, audio tape decks always have filtering circuitry which further changes the relationship between what's physically stored on the tape and the audio it represents.

 

If you take an audio tape deck and feed it a signal consisting of very sharp squared-off waves in the pattern, say, "110010", the signal one receives on playback will be considerably 'rounded'; though this is in some measure due to imperfections in the tape, it is to a much larger measure due to the processing circuitry included in a normal tape deck.

 

Perhaps Commodore simply included an audio front end inside the tape deck and continued to use the existing bias circuitry, etc. I really can't think why they would do that, though. Circuitry to process the signal to/from a tape head directly would be as simple and cheap as the circuitry to do a decent job of processing analog audio into digital.


My understanding was that the C64 directly controlled, and read, flux reversals on the tape.

 

It doesn't. And actually the concept of flux reversals/transitions is not used, that's a digital recording concept. And here it is analog linear (with mixed bias, of course) recording.

 

Perhaps Commodore simply included an audio front end inside the tape deck and continued to use the existing bias circuitry, etc. I really can't think why they would do that, though. Circuitry to process the signal to/from a tape head directly would be as simple and cheap as the circuitry to do a decent job of processing analog audio into digital.

 

Digital recording requires (to do it efficiently and reliably) a different type of head and a different magnetic coating. Furthermore, IIRC digital recording on tapes wasn't used until the mid 80's.


Cassette tape systems were based on tone-decoding, much like how tone dialing works. The signal on the tape would alternate between two frequencies, and a simple circuit would recognize which one and toggle a data bit. It would take a certain number of cycles for the detector to change states, so your two frequencies had to be higher if you wanted faster rates.

 

Systems that send data over a phone line cannot rely upon a particular phase relationship among different frequencies. Thus, a 300 baud modem is limited to a data rate significantly below the frequencies used, so that each bit contains a number of cycles of the proper frequency (about 6-7 in the case of 300 baud). Audio stored on audio cassette tends to be a little more predictable. The SuperCharger stores one bit for each full wave of signal. If a dedicated cassette player were used, each half-wave could represent a bit of data or even more.

 

For example, if each half-wave is allowed to be 60us, 80us, 100us, 120us, 160us, or 200us (maximum frequency would be about 8KHz--well within range for even a cheap tape) and duty cycle must be maintained at 50%, then sixteen bits could be stored in 920us--an average of 57.5us a bit. In practice, the decoding required for that information density might be too complicated to be practical. Nonetheless, even with a 50% duty cycle restriction and the ability to only read two lengths of pulse, it would be trivial to achieve a density of one bit for every two maximum-density half-waves--three times the density achieved on each copy of a program stored using Commodore methods.
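The simpler two-length variant mentioned at the end of that paragraph can be sketched as follows. The threshold and the duty-cycle tolerance are illustrative choices of mine, not from any real loader; the point is just that one full wave (two roughly equal half-waves) yields one bit.

```python
def decode_pulses(half_wave_us, threshold_us=130):
    """Decode the two-length scheme: each full wave (two equal half-waves,
    preserving the 50% duty cycle) carries one bit -- short = 0, long = 1.
    half_wave_us is a list of measured half-wave durations in microseconds."""
    bits = []
    for i in range(0, len(half_wave_us) - 1, 2):
        a, b = half_wave_us[i], half_wave_us[i + 1]
        # The 50% duty-cycle rule means both halves of a wave should match;
        # a large mismatch suggests a dropout (20us tolerance is arbitrary).
        if abs(a - b) > 20:
            raise ValueError("duty-cycle violation: possible dropout")
        bits.append(1 if (a + b) / 2 > threshold_us else 0)
    return bits
```

With the pulse lengths from the paragraph above, a 60us wave decodes as 0 and a 200us wave as 1, giving the threefold density improvement over each stored Commodore copy that the post describes.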

