
Why single density, and then why "enhanced" density?


wood_jl


It's so nice that this site is peppered with technical types who can break this stuff down for us laymen.

 

(1) This question just begs to be asked! Granted, the complexity (and obvious expense) of double-sided drives is obvious. But why was the Atari 810 single density to begin with? Apple, Commodore, and IBM had 140K, 170K, 180K, and Atari had 88K? Why? With the incredible cost of a disk drive to begin with, how much (less) was saved in making it single density, when the competition clearly did not?

 

(2) **YEARS** after that debacle, when the 1050 came out, why this more-sectors-per-track business? I mean, Percom and the rest of the aftermarket had already pointed the way. The US Doubler (from ICD) showed how easily a 1050 could be converted to true double density. Why? There must have been a reason?????

 

Thanks for replies, love reading about A8 tech and speculation on such!


Single... I guess Atari were paranoid in some ways, e.g. enormous RF shielding on early machines.

Atari 810/1050 use FM recording, the C= 1541 uses GCR, and most DD drives use MFM. MFM encoding is more efficient, in that a greater amount of data can be encoded in a given linear space.
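To make the difference concrete, here's a rough sketch of the two encodings (a simplification in Python; the real work happens inside the FDC, and GCR is a different scheme again). Each character below is one bit cell, with '1' marking a flux transition; MFM produces far fewer transitions for the same data, which is what lets the bit cells be packed about twice as densely on the same media:

```python
# Rough illustration of FM vs MFM encoding -- not how a WD177x/279x
# does it internally, just the bit-cell rules. '1' = flux transition.

def fm_encode(bits):
    # FM: a '1' clock cell precedes every data bit.
    return ''.join('1' + b for b in bits)

def mfm_encode(bits):
    # MFM: the clock cell is '1' only between two consecutive 0 data
    # bits; otherwise it is suppressed.
    out, prev = [], '0'
    for b in bits:
        clock = '1' if (prev == '0' and b == '0') else '0'
        out.append(clock + b)
        prev = b
    return ''.join(out)

data = format(0xA5, '08b')                            # example byte: 10100101
print(fm_encode(data), fm_encode(data).count('1'))    # 16 cells, 12 transitions
print(mfm_encode(data), mfm_encode(data).count('1'))  # 16 cells, 5 transitions
```

With the minimum gap between transitions widened like that, the drive can clock bit cells twice as fast in the same linear space, which is where the density gain comes from.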

 

1050 ED - a bit of a joke for sure. I guess one reason might be that it kept the cost down a little, in that you need less RAM for a sector buffer compared to 256-byte sectors. It also would have made developing DOS 2.5 somewhat easier than if it had to cater for both sector sizes, although I doubt this contributed much to the decision. Edit: according to the FAQ it does use MFM for the ED mode, but possibly the overhead of many small sectors meant they couldn't fit too many more per track.

 

Sector size also matters in that there's overhead for all the timing and sync marks. Not sure what the exact figure is, and it can vary, but fairly sure it's somewhere around 20-30 bytes' equivalent per sector. So smaller sectors are somewhat less efficient than large ones in that regard.

On the flip side though, given the limited RAM amounts most people had at the time, 128-byte sectors mean the overall DOS footprint can be made smaller, since the buffers are smaller and the DOS code itself can be smaller if only one sector size needs catering for.
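Putting rough numbers on the three formats makes both points visible; the ~30 bytes of per-sector overhead below is just the ballpark figure above, not an exact count from the format spec:

```python
# Back-of-envelope track/disk arithmetic for the three Atari formats.
# OVERHEAD is an assumed bytes-equivalent for ID field, sync and gaps.
OVERHEAD = 30

formats = {
    'SD (FM,  18 x 128)': (18, 128),
    'ED (MFM, 26 x 128)': (26, 128),
    'DD (MFM, 18 x 256)': (18, 256),
}

for name, (sectors, size) in formats.items():
    data = sectors * size                  # usable bytes per track
    raw = sectors * (size + OVERHEAD)      # data plus formatting cost
    print(f'{name}: {data:4d} bytes/track, '
          f'{data / raw:.0%} efficiency, '
          f'{data * 40 // 1024}K per 40-track side')
```

That lands on the familiar 90K / 130K / 180K figures, and shows true DD's 256-byte sectors wasting noticeably less of each track on formatting overhead.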

 

Back to 1050 ED - another thing is that since DOS 2.x uses only 10 bits' worth of sector link within the file system, only 1024 possibilities means the last 16 sectors are unused by DOS. Changing to 11- or 12-bit links would have meant breaking the filing system such that compatibility problems would have arisen, e.g. with file utilities. Plus supporting both types of file link would have further bloated the code.
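For reference, the link in question lives in the last three bytes of every 128-byte DOS 2.x data sector: a 6-bit file number, the 10-bit next-sector pointer, and a byte count. A minimal decoder (a sketch of the commonly documented layout):

```python
# Decode the trailing link bytes of a DOS 2.x data sector.
def decode_link(sector: bytes):
    assert len(sector) == 128
    file_number = sector[125] >> 2                           # top 6 bits
    next_sector = ((sector[125] & 0x03) << 8) | sector[126]  # 10-bit link
    bytes_used  = sector[127] & 0x7F                         # data bytes used
    return file_number, next_sector, bytes_used

# 10 bits means next_sector can never exceed 1023:
demo = bytes(125) + bytes([0x07, 0xFF, 125])
print(decode_link(demo))   # (1, 1023, 125)
```

Widening that pointer would have moved the file-number bits, which is exactly the kind of change that breaks every sector editor and file utility written against the old layout.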

Edited by Rybags


According to Wikipedia, the early Apple II drives were only 113.75 KiB.

 

I agree that it was a poor business decision not to make the 1050 true double density, and I also don't understand why they didn't make a double-sided version. The 1050 was designed to use either the WD2793 (single-sided only) or the WD2797 (double-sided capable) FDC.


Despite the capacity limitations, it felt to me as a kid that the A8 disk solution was smoother than either the Apple II or C64 setups. The Disk II was dog slow most of the time because Apple DOS 3.3 transferred everything a byte at a time internally, and of course the stock 1541 is infamously slow. Atari DOS was also better integrated into the OS and BASIC. Beyond that, though, Atari just didn't seem to do much to advance the platform as a whole, either in hardware or software.


Yes, I think C= was almost unique in their I/O philosophy with disks, in having the DOS onboard the device.

 

Really, the only advantage was that it used no RAM on the computer and meant quick startup times (of course lost due to the shitty stock speed).

 

C= disk operations, though, are somewhat less flexible, and generally more work is needed on the coder's side to do most things. Even something as simple as getting a directory is somewhat of a joke.

They could have had the thing running at similar speed to some of the turbos but the rush to market and corner-cutting meant they released what they did.

 

Atari don't get out of jail there either, though; the OS could have had a little more support. Device-independent binary load would have been good and cost little in terms of space. Also, much of the extra ROM in the XL OS was wasted; IMO an onboard DOS, or at least part thereof, would have been of much more use than the near-pointless Self-Test and extra charset.


Despite the capacity limitations, it felt to me as a kid that the A8 disk solution was smoother than either the Apple II or C64 setups.

 

Never used an Apple but it sure was smoother than the C64 with its endless loading times, no auto boot and tedious loading of programs. I always thought my Atari boot menus were way cooler than having to type LOAD "*",8,1 or such.

 

With all the third-party vendors selling DD drives by then, it would have made sense for Atari to match them with a DD 1050, or better them with a DS/DD one. As strange as it may seem today, floppy prices were a concern for cash-strapped teenagers hoarding bootleg games; I remember a case of 10 discs eating away a monthly allowance, with a single disc priced like a cone of ice cream. Being able to fit twice or even four times as much on a disc would have been an argument "pro Atari", or at least for buying a new drive rather than sticking with the old one and using "flippies".

 

The XF551 was too late (and apparently too flimsy) to make a difference, although I remember coveting one in a local computer store.

 

As for DOS issues, didn't Atari have a DD-DOS on their hands from the 815 anyway? And all those third-party vendors must have had some kind of DD-DOS, too.


Atari had the 810 on the market first. Through some strange decision, they never released the 815, though everything was ready to go. So from then on, everything was limited to 128-byte sectors. The 1050 retained the 128-byte sector limit for compatibility reasons. No idea why they didn't do the whole DSDD thing. Maybe the $$$ factor?

James


Possibly the potentially enormous price of the 815 was a deterrent back in the day. By the time mech prices came down there were 3rd-party drives which did a better job, and the XL/1050 was not far away.

 

Also, we have to remember Atari was essentially being run at the time by people with little clue; they never positioned the 8-bit computers very well as business or creative machines.


The only reason I've been able to make sense of is obfuscation. Atari flat out did not want a Commodore (put any name here) disk to work in an Atari drive, or the other way around, so they purposely used obscure disk layouts, including the backwards second-side track order on their last released drive, the XF551, coupled with data inversion on every disk platform. At every turn, it seems, they went the long way around to get there, and it's not a coincidence at all that we wind up where we are. Now, am I going to be accused of being a conspiracy kook? You think about it more, while I go find my aluminum foil hat before it's too late.


Actually, Atari were closer to what you'd call an industry standard than C= were.

Atari SD disks can be read on older PC 5.25" drives; I doubt the Commodore ones would have any chance, since they use GCR as well as a different number of sectors per track across different zones.


Yep,

 

it would have been great if the 810 had been DD/180K right from the start. Then the 1050 would also have been at least DD/180K, or even DSDD/360K.

 

If you are a proud owner of a 1450XLD you may know that it has a double-sided drive. But again, Atari made one of its best decisions (?) so it is just 2x 130K (advertised as a 254K drive, since Atari counted enhanced density as 127 KB). And err, the early DOS versions or the 1450XLD floppy drive itself did not use a disk double-sided; instead it referred to read/write head 1 (or side A of a disk) as drive 1 and read/write head 2 (or side B of a disk) as drive 2. One can read this in the manual of the greatest (?) DOS Atari ever made - DOS 3 (another one of those great decisions)...

 

If the 810 had been 180K from the start, then all commercial productions and most PD productions would have been released on 180K disks, and maybe later on 360K disks. Copying e.g. boot disks from 2x 180K to 360K or vice versa would be much easier than copying boot disks from 90K/130K to 180K/360K. And there would be no need to check if a sector has 128 bytes or 256 bytes; and err, the boot sectors would most likely have been 256 bytes per sector also...

 

-Andreas Koch.


I recall the history of Atari disk drives being tied much closer to Tandon (?). Specifically, Atari tried to develop their own drives in-house and failed on several levels, i.e. price and reliability. Tandon came up with the 1050, which is why we see 1050s with Tandon stickers on their ROMs.

 

I am less sure about Tandon's role, if any, in the 810.

 

So the answer would probably be pretty close to the question "Why did we get cheap ass parts and obsolete drives from Atari?" Atari tried and failed. Tandon came to their rescue by sticking them with a bunch of old parts and designs they wanted to get rid of.

 

There had to have been some cooperation; I would specifically cite the Atari 9VAC power supplies used, and the voltage doubler circuit in the 1050, as an example of a management decision forced upon the people that actually had to do the design.

 

Then too, more memory in a 1050 would have meant an easy way to defeat copy protection. Just adding the ability with a couple of SIO commands like 'load drive memory' and 'run from drive memory' would have made it trivial. Maybe a dozen bytes of code in the ROM would have got the job done.
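For illustration, here's what that could have looked like at the SIO level. The five-byte command frame and its checksum are the standard SIO bus format; the two command codes and the aux-byte address layout below are invented for this what-if (upgrades like the Happy and US Doubler did define their own extended commands, with different codes):

```python
# Hypothetical 'load drive memory' / 'run from drive memory' commands,
# wrapped in a standard SIO command frame.

def sio_checksum(data: bytes) -> int:
    # 8-bit sum with end-around carry, as used on the SIO bus.
    total = 0
    for b in data:
        total += b
        total = (total & 0xFF) + (total >> 8)
    return total

def command_frame(device: int, command: int, aux1: int, aux2: int) -> bytes:
    frame = bytes([device, command, aux1, aux2])
    return frame + bytes([sio_checksum(frame)])

D1 = 0x31                 # drive 1's device ID on the SIO bus
CMD_LOAD_MEM = 0x4C       # invented command code: 'load drive memory'
CMD_RUN_MEM  = 0x52       # invented command code: 'run from drive memory'
addr = 0x0800             # hypothetical address in drive RAM

print(command_frame(D1, CMD_LOAD_MEM, addr & 0xFF, addr >> 8).hex())
print(command_frame(D1, CMD_RUN_MEM,  addr & 0xFF, addr >> 8).hex())
```

A 'load' command would be followed by a data frame of code for the drive's RAM, then 'run' jumps into it, and any copy protection relying on the drive's stock behaviour is finished.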

Edited by ricortes

Yes, the Happy Board is essentially a better CPU with more RAM, plus firmware with an extended SIO command set allowing such flexibility.

 

But if we remember RAM pricing around 1980, you'd probably be talking in the order of 40 bucks extra, possibly more, to make the drive with a 6502 instead of a 6507 and give it 4K of RAM.

One main reason for more RAM is being able to buffer an entire track - 18*128 is a little over 2K, 26*128 a little over 3K, so you may as well give the thing 4K - the residue left over could hold user code to execute.

 

I'm not sure Atari got stiffed with obsolete mechs; given that it's so simple to upgrade an 810 or 1050 to do DD, it's more a case of Atari going the cheap route with the disk controller/firmware side of things.


Actually, Atari were closer to what you'd call an industry standard than C= were.

Atari SD disks can be read on older PC 5.25" drives; I doubt the Commodore ones would have any chance, since they use GCR as well as a different number of sectors per track across different zones.

What he said. I've run out of working XF551s, so I'm using a KryoFlux with a generic 360K PC floppy drive to "rip" Atari SD, DD, and DSDD disks to my laptop.


4K of static RAM would have been a world-beater. You are of course right about the prices; I remember buying a 4K static RAM expansion for my SBC, and I think I paid about $140 circa 1977-1980. Dynamic RAM would have added too much complexity, IMO.

It was no trivial exercise, but someone has compiled the data for memory prices. :)

http://www.jcmit.com/memoryprice.htm

 

There was an interesting interview with ICD where they said they were originally working on a copy device and ended up going with enhanced hardware and SD instead. I wonder if it was supposed to work with their 128-byte 6810 RAM expansion.

 

So much info about what was going on during the early years has been lost. Makes you wonder just how much of the design was based on chip prices and what was available. I don't see it as a coincidence that the base design of the 1050 is so close to a 2600; some bean counter probably said they had to use 6507 and 6532 chips because of excess inventory or something.


I was probably way underestimating; I was thinking DRAM prices, not SRAM.

 

4K of SRAM around 1980 would probably have been more like 2-3 times the price I quoted.

 

Re using the 6507 - I imagine that given Atari probably used 30 million of them in a 15-year period, they'd have negotiated some sweet prices once the 2600 was well established.

It makes sense to use it if you can put up with only 8K of linear addressing and don't need interrupts - and the reduced pin count means lower costs producing the board.

 

6532 - was there any other equivalent device incorporating GP I/O, timers and RAM? For the tasks required it was perfectly OK for the job, although of course more RAM in the thing wouldn't have gone astray.

But again, thanks to 2600 sales probably another part they were getting for next to nothing.

 

Even Joe Public can get nice discounts at retail level when buying in lots of 10, 100, 500. I imagine that given Atari's habit of making/buying stuff in ludicrous stockpile amounts, they were getting many components at some pretty good prices.


I'm certain it all came down to price. The 1050 is controlled by the same chips Atari was buying for the 2600 (with an added 128 bytes of RAM). By the time the 1050 came out, the DD format had already been established (by Atari, no less) and there was no need for the crippled ED format. Atari probably made an extra buck or two per drive by not using a larger RAM chip.

 

Think about how much was wasted inventing a new format and an incompatible DOS to go with it.

 

(As far as why single density for the 810: that was probably considered a pretty good spec for a drive being designed in the late '70s, and the drive's cost was already out of reach for most people.)


According to Wikipedia, the early Apple II drives were only 113.75 KiB.

 

I agree that it was a poor business decision not to make the 1050 true double density, and I also don't understand why they didn't make a double-sided version. The 1050 was designed to use either the WD2793 (single-sided only) or the WD2797 (double-sided capable) FDC.

I seem to remember early TRS-80s having single density as well, but I think double density was the norm by 1979, so probably before the Atari was released.

 


There are a lot of repressed memories surrounding the IBM compatibles. I can understand why: some of the first ones came with 160K SSDD drives and a cassette interface! I still have nightmares.

 

So anyway, IBM did eventually go with 360K DSDD drives, but only after a false start. There was really no standard back then; it was more consumer/market driven. Everyone wanted 360K drives, so that became the standard. My earliest references define density as roughly bits per inch, so a 128-byte-per-sector MFM drive has the same bits per inch as a 1,000-byte-per-sector MFM drive. Yeah, right. The term has gone the way of Xerox or Coke; it means 512-byte sectors now.

 

Remember DOS 3? It was criticized because the smallest allocation of disk space was something like 1K. That is, if you had a 10-byte BASIC program you wanted to save, you would use 1K of disk space. So maybe it wasn't too bad an idea to use 1050 enhanced density. When you start talking about 'true double density', aka IBM sector size, you are talking half as bad as DOS 3. It is only a problem with a lot of small files, of course.
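The allocation arithmetic is easy to check. A quick sketch for a 10-byte file under different allocation units (ignoring link bytes and directory overhead; 512 stands in for the 'IBM sector size' mentioned above):

```python
# Space consumed by a tiny file under different allocation units.
FILE_SIZE = 10
for unit in (1024, 512, 256, 128):
    used = -(-FILE_SIZE // unit) * unit    # round up to whole units
    print(f'{unit:4d}-byte unit: {used:4d} bytes on disk, '
          f'{used - FILE_SIZE:4d} wasted')
```

So yes: 512-byte allocation wastes half as much as the 1K DOS 3 blocks, and the small sectors that hurt track efficiency pay off when the disk is full of tiny files.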

 

I think there were a few equivalents to the 6532 on the Intel side. I pulled up one of my old data books and it had the 8155: 256 bytes of RAM, two 8-bit ports, one 6-bit port, and a 14-bit timer/counter. There was also the 6530, which is one odd beast: 1K of mask ROM, timers, 64 bytes of RAM, and two 8-bit ports. I can't remember them from any other manufacturer. I don't think I have ever seen any of the latter in the wild.


Hey, I still remember something about someone having a 64K PC motherboard in their collection. It was in an IBM case... I kinda wonder if the board was IBM, or maybe it had sockets for more RAM, but it was supposedly 64K.

The original IBM PC (5150) only had 16 KB of RAM; multiple 32K and 64K memory cards could be plugged into the option slots to increase memory to 256K. Even the original XT (5160) could only take 256 KB of memory on the main board (using 64-kbit DRAM); later models were expandable to 640 KB.


Maybe it's because those didn't get used once the newer models came out?

No. I originally purchased two 1050s, which remain as they were shipped. Later I purchased a pair of US Doublers and a pair of Happy 1050s, and I have acquired other disks since. When I pulled my Ataris out of storage (a New England attic experiencing extreme hot and cold temperatures annually), I set up all eight drives and tried every single floppy, and EVERYTHING worked. So maybe, just maybe, the sector size and organization contributed to this durability. I have no technical background to support this.
